WEKO3

Model Compression Using Multi-Trimmed Network Structure for Image Classification on Embedded Systems

https://doi.org/10.18997/00008631
85ffae7a-2ace-4079-bf42-f051fd6b8e91
Name / File | License | Action
kou_k_532.pdf (22.1 MB)
Item type: Thesis or Dissertation (1)
Publication date: 2021-12-06
Resource type identifier: http://purl.org/coar/resource_type/c_db06
Resource type: doctoral thesis
Title (en): Model Compression Using Multi-Trimmed Network Structure for Image Classification on Embedded Systems
Title (ja): 組み込みシステムにおける画像分類のためのマルチトリムネットワーク構造を用いたモデル圧縮
Language: eng
Author: Sarakon, Pornthep
Abstract
Description type: Abstract
Description: Much effort has gone into developing smart robots, for which perception and manipulation are among the most fundamental and challenging problems. Embedded systems (ESs) are critical components of a robot. However, as an embedded system, a robot's brain has a fixed resource budget and is unsuited to modern convolutional neural networks (CNNs). CNN compression therefore plays an important role in reducing computational cost so that a model fits an embedded system. CNN compression approaches can be categorized into two groups: hand-crafted approaches and model compression (MC) approaches. Hand-crafted approaches involve factorization and manual compression, but they are time consuming and usually require significant manual effort and domain knowledge. The MC approach instead takes advantage of pre-trained models, avoiding the drawbacks of hand-crafted compression: it squeezes an existing model into one that is smaller and requires less computation. Although most MC methods can achieve low latency or high accuracy, they yield a non-optimal accuracy–latency trade-off, are complex, and do not affect certain dimensions of the models (e.g., the width, resolution, and depth). To overcome this problem, the thesis presents a simple model-compression approach that optimizes the accuracy–latency trade-off of the model. The multi-trimmed network structure (MTNS) is a robust combination of model compression techniques that provides a lightweight model with trade-off optimization. The thesis describes a number of significant advances. First, a new simple and efficient MC technique is introduced that takes width, resolution, and depth compression into account. Second, a new multi-objective function is devised that uses the accuracy–latency trade-off of compressed models to optimize the performance of a target model.
Third, a new training accelerator is developed that integrates pruning of convolutional kernels into shrinking the model structure, reducing training time when compressing the width dimension. Finally, a new search strategy is developed that combines neural architecture search (NAS) with shrinking the model structure, exploring more complex shrinking conditions within a relatively short training period. In an experimental evaluation, the thesis compares the performance of the proposed MTNS approach with those of CNN filter pruning, model quantization, an adaptive mixture of low-rank factorizations, and knowledge distillation. MTNS resolved the accuracy–latency trade-off in image classification better than these modern MC methods. A compressed MTNS model, with its maximal trade-off, light weight, low computation, and rapid inference, is well suited to embedded systems. The outstanding contribution of the thesis is that model compression problems are solved using MTNS techniques that are simple and achieve an optimal accuracy–latency trade-off.
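The abstract describes two MTNS ingredients in general terms: trimming a network's width by pruning convolutional kernels, and scoring candidate models by an accuracy–latency trade-off. The sketch below illustrates both ideas with simple stand-ins; `prune_filters_l1` (L1-norm filter ranking) and the scalarized `tradeoff_score` are hypothetical illustrations of the general techniques, not the thesis's actual MTNS algorithms.

```python
import numpy as np

def prune_filters_l1(weights, keep_ratio=0.5):
    """Width compression sketch: rank conv filters by L1 norm, keep the strongest.

    weights: array of shape (out_channels, in_channels, kh, kw).
    Returns the trimmed weights and the indices of the kept filters.
    Illustrative only; not the thesis's MTNS implementation.
    """
    # L1 norm of each output filter approximates its contribution.
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(weights.shape[0] * keep_ratio)))
    # Keep the n_keep largest-norm filters, preserving their original order.
    keep = np.sort(np.argsort(norms)[-n_keep:])
    return weights[keep], keep

def tradeoff_score(accuracy, latency_ms, alpha=0.01):
    """Hypothetical scalarized accuracy-latency objective (higher is better).

    alpha weights the latency penalty; an assumption for illustration,
    not the thesis's multi-objective function.
    """
    return accuracy - alpha * latency_ms

# Usage: trim a toy conv layer of 8 filters down to 4.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
trimmed, kept = prune_filters_l1(w, keep_ratio=0.5)
print(trimmed.shape)  # (4, 3, 3, 3)
```

A real pipeline would also remove the matching input channels of the following layer and fine-tune the trimmed model to recover accuracy.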
Table of contents
Description type: TableOfContents
Description:
  1 Introduction
  2 Literature Reviews
  3 Preliminary Knowledge and Technique for Model Compression
  4 Shrinking Structure of Models
  5 Shrinking Structure of Models with Training Accelerator
  6 Trim Neural Architecture Search
  7 Conclusions
Note
Description type: Other
Description: Kyushu Institute of Technology doctoral dissertation. Degree number: 工博甲第532号. Date of degree conferral: September 24, 2021.
Keywords (subject scheme: Other)
  • Model compression
  • Computational efficiency
  • Image classification
  • Convolutional neural network
  • Embedded system
Advisor: Kawano, Hideaki (河野, 英昭)
Degree number: 甲第532号
Degree name: Doctor of Engineering (博士(工学))
Date of degree conferral: 2021-09-24
Degree-granting institution: Kyushu Institute of Technology (identifier scheme: kakenhi; identifier: 17104)
Academic year of degree conferral: 2021 (Reiwa 3)
Publication type: VoR (http://purl.org/coar/version/c_970fb48d4fbd8a85)
Access rights: open access (http://purl.org/coar/access_right/c_abf2)
ID registration: 10.18997/00008631 (type: JaLC)
Versions

Ver.1 2023-05-15 12:56:50.556547
Powered by WEKO3

