{"_buckets": {"deposit": "b6781984-301a-4ea9-a6d5-bfcda4b93b42"}, "_deposit": {"created_by": 18, "id": "7458", "owners": [18], "pid": {"revision_id": 0, "type": "depid", "value": "7458"}, "status": "published"}, "_oai": {"id": "oai:kyutech.repo.nii.ac.jp:00007458", "sets": ["7"]}, "author_link": [], "item_20_date_granted_61": {"attribute_name": "学位授与年月日", "attribute_value_mlt": [{"subitem_dategranted": "2021-09-24"}]}, "item_20_degree_grantor_59": {"attribute_name": "学位授与機関", "attribute_value_mlt": [{"subitem_degreegrantor": [{"subitem_degreegrantor_name": "九州工業大学"}], "subitem_degreegrantor_identifier": [{"subitem_degreegrantor_identifier_name": "17104", "subitem_degreegrantor_identifier_scheme": "kakenhi"}]}]}, "item_20_degree_name_58": {"attribute_name": "学位名", "attribute_value_mlt": [{"subitem_degreename": "博士(工学)"}]}, "item_20_description_30": {"attribute_name": "目次", "attribute_value_mlt": [{"subitem_description": "1 Introduction||2 Background||3 Mixed Precision Weight Network and FPGA Design||4 Related Works||5 Experimental Results and Discussion||6 Conclusion", "subitem_description_type": "TableOfContents"}]}, "item_20_description_4": {"attribute_name": "抄録", "attribute_value_mlt": [{"subitem_description": "In this study, I introduced a mixed-precision weights network (MPWN), which is a quantization neural network that jointly utilizes three different weight spaces: binary {-1, 1}, ternary {-1, 0, 1}, and 32-bit floating-point. I further developed the MPWN from both software and hardware aspects. From the software aspect, I evaluated the MPWN on the Fashion-MNIST, CIFAR10, and ILSVRC 2012 datasets. I systematized the accuracy sparsity bit score, which is a linear combination of accuracy, sparsity, and number of bits. This score allows Bayesian optimization to be used efficiently to search for MPWN weight space combinations. From the hardware aspect, I proposed XOR signed-bits to explore floating-point and binary weight spaces in the MPWN. 
XOR signed-bits is an efficient implementation equivalent to the multiplication of floating-point and binary weight spaces. Using the concept of XOR signed-bits, I also provide a ternary bitwise operation that is an efficient implementation equivalent to the multiplication of floating-point and ternary weight spaces. To demonstrate the compatibility of the MPWN with hardware implementation, I synthesized and implemented the MPWN in a field-programmable gate array using high-level synthesis. My proposed MPWN implementation used 1.68 to 4.89 times fewer hardware resources (depending on the resource type) than a conventional 32-bit floating-point model. In addition, my implementation reduced the latency by up to 31.55 times compared to the 32-bit floating-point model without optimizations.", "subitem_description_type": "Abstract"}]}, "item_20_description_5": {"attribute_name": "備考", "attribute_value_mlt": [{"subitem_description": "九州工業大学博士学位論文 学位記番号:生工博甲第419号 学位授与年月日:令和3年9月24日", "subitem_description_type": "Other"}]}, "item_20_description_60": {"attribute_name": "学位授与年度", "attribute_value_mlt": [{"subitem_description": "令和3年度", "subitem_description_type": "Other"}]}, "item_20_dissertation_number_62": {"attribute_name": "学位授与番号", "attribute_value_mlt": [{"subitem_dissertationnumber": "甲第419号"}]}, "item_20_identifier_registration": {"attribute_name": "ID登録", "attribute_value_mlt": [{"subitem_identifier_reg_text": "10.18997/00008662", "subitem_identifier_reg_type": "JaLC"}]}, "item_20_text_34": {"attribute_name": "アドバイザー", "attribute_value_mlt": [{"subitem_text_value": "田向, 権"}]}, "item_20_version_type_63": {"attribute_name": "出版タイプ", "attribute_value_mlt": [{"subitem_version_resource": "http://purl.org/coar/version/c_970fb48d4fbd8a85", "subitem_version_type": "VoR"}]}, "item_access_right": {"attribute_name": "アクセス権", "attribute_value_mlt": [{"subitem_access_right": "open access", "subitem_access_right_uri": "http://purl.org/coar/access_right/c_abf2"}]}, 
"item_creator": {"attribute_name": "著者", "attribute_type": "creator", "attribute_value_mlt": [{"creatorNames": [{"creatorName": "Ninnart, Fuengfusin", "creatorNameLang": "en"}]}]}, "item_files": {"attribute_name": "ファイル情報", "attribute_type": "file", "attribute_value_mlt": [{"accessrole": "open_date", "date": [{"dateType": "Available", "dateValue": "2021-12-20"}], "displaytype": "detail", "download_preview_message": "", "file_order": 0, "filename": "sei_k_419.pdf", "filesize": [{"value": "2.7 MB"}], "format": "application/pdf", "future_date_message": "", "is_thumbnail": false, "licensetype": "license_note", "mimetype": "application/pdf", "size": 2700000.0, "url": {"label": "sei_k_419.pdf", "objectType": "fulltext", "url": "https://kyutech.repo.nii.ac.jp/record/7458/files/sei_k_419.pdf"}, "version_id": "ce25768b-9bd9-4c3c-8035-7b93d77b3924"}]}, "item_keyword": {"attribute_name": "キーワード", "attribute_value_mlt": [{"subitem_subject": "Deep Learning", "subitem_subject_scheme": "Other"}, {"subitem_subject": "FPGA", "subitem_subject_scheme": "Other"}, {"subitem_subject": "Quantization Neural Networks", "subitem_subject_scheme": "Other"}, {"subitem_subject": "Neural network", "subitem_subject_scheme": "Other"}, {"subitem_subject": "Image recognition", "subitem_subject_scheme": "Other"}]}, "item_language": {"attribute_name": "言語", "attribute_value_mlt": [{"subitem_language": "eng"}]}, "item_resource_type": {"attribute_name": "資源タイプ", "attribute_value_mlt": [{"resourcetype": "doctoral thesis", "resourceuri": "http://purl.org/coar/resource_type/c_db06"}]}, "item_title": "Mixed-precision weights network for field-programmable gate array", "item_titles": {"attribute_name": "タイトル", "attribute_value_mlt": [{"subitem_title": "Mixed-precision weights network for field-programmable gate array", "subitem_title_language": "en"}, {"subitem_title": "Field Programmable Gate Arrayでの実装に適した混合精度重みモデルに基づくニューラルネットワーク", "subitem_title_language": "ja"}]}, "item_type_id": "20", "owner": "18", 
"path": ["7"], "permalink_uri": "https://doi.org/10.18997/00008662", "pubdate": {"attribute_name": "PubDate", "attribute_value": "2021-12-20"}, "publish_date": "2021-12-20", "publish_status": "0", "recid": "7458", "relation": {}, "relation_version_is_last": true, "title": ["Mixed-precision weights network for field-programmable gate array"], "weko_shared_id": -1}
Field Programmable Gate Arrayでの実装に適した混合精度重みモデルに基づくニューラルネットワーク
https://doi.org/10.18997/00008662
Name / File | License | Action
---|---|---
sei_k_419.pdf (2.7 MB) | |
Item type | 学位論文 (Thesis or Dissertation) (1)
---|---
Publication date | 2021-12-20
Resource type identifier | http://purl.org/coar/resource_type/c_db06
Resource type | doctoral thesis
Title (en) | Mixed-precision weights network for field-programmable gate array
Title (ja) | Field Programmable Gate Arrayでの実装に適した混合精度重みモデルに基づくニューラルネットワーク
Language | eng
Author | Ninnart, Fuengfusin
Abstract | In this study, I introduced a mixed-precision weights network (MPWN), which is a quantization neural network that jointly utilizes three different weight spaces: binary {-1, 1}, ternary {-1, 0, 1}, and 32-bit floating-point. I further developed the MPWN from both software and hardware aspects. From the software aspect, I evaluated the MPWN on the Fashion-MNIST, CIFAR10, and ILSVRC 2012 datasets. I systematized the accuracy sparsity bit score, which is a linear combination of accuracy, sparsity, and number of bits. This score allows Bayesian optimization to be used efficiently to search for MPWN weight space combinations. From the hardware aspect, I proposed XOR signed-bits to explore floating-point and binary weight spaces in the MPWN. XOR signed-bits is an efficient implementation equivalent to the multiplication of floating-point and binary weight spaces. Using the concept of XOR signed-bits, I also provide a ternary bitwise operation that is an efficient implementation equivalent to the multiplication of floating-point and ternary weight spaces. To demonstrate the compatibility of the MPWN with hardware implementation, I synthesized and implemented the MPWN in a field-programmable gate array using high-level synthesis. My proposed MPWN implementation used 1.68 to 4.89 times fewer hardware resources (depending on the resource type) than a conventional 32-bit floating-point model. In addition, my implementation reduced the latency by up to 31.55 times compared to the 32-bit floating-point model without optimizations.
Table of contents | 1 Introduction; 2 Background; 3 Mixed Precision Weight Network and FPGA Design; 4 Related Works; 5 Experimental Results and Discussion; 6 Conclusion
Remarks | Kyushu Institute of Technology doctoral dissertation. Diploma number: 生工博甲第419号; date of degree conferral: September 24, 2021 (Reiwa 3)
Keywords (subject scheme: Other) | Deep Learning; FPGA; Quantization Neural Networks; Neural network; Image recognition
Advisor | 田向, 権
Dissertation number | 甲第419号
Degree name | 博士(工学) (Doctor of Engineering)
Date of degree conferral | 2021-09-24
Degree grantor | 九州工業大学 (Kyushu Institute of Technology); identifier: 17104 (scheme: kakenhi)
Academic year of degree conferral | 令和3年度 (AY 2021)
Version type | VoR (http://purl.org/coar/version/c_970fb48d4fbd8a85)
Access rights | open access (URI: http://purl.org/coar/access_right/c_abf2)
ID registration | 10.18997/00008662 (type: JaLC)
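The XOR signed-bits idea described in the abstract — multiplying a 32-bit float by a binary weight in {-1, +1} reduces to XORing the float's IEEE-754 sign bit, and the ternary case additionally masks to zero — can be sketched in software. This is an illustrative reconstruction from the abstract's description, not the thesis's HLS implementation; the function names are placeholders:

```python
import struct

def float_to_bits(x: float) -> int:
    """Reinterpret a float as its 32-bit IEEE-754 bit pattern."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b: int) -> float:
    """Reinterpret a 32-bit pattern as an IEEE-754 float."""
    return struct.unpack("<f", struct.pack("<I", b))[0]

def xor_signed_bit_mul(x: float, w: int) -> float:
    """Multiply x by a binary weight w in {-1, +1} without a multiplier:
    XOR the sign bit of x with the sign bit encoding of w."""
    w_sign = 0x80000000 if w < 0 else 0x00000000
    return bits_to_float(float_to_bits(x) ^ w_sign)

def ternary_mul(x: float, w: int) -> float:
    """Multiply x by a ternary weight w in {-1, 0, +1}:
    mask the result to zero when w == 0, otherwise reuse the sign-bit XOR."""
    if w == 0:
        return 0.0
    return xor_signed_bit_mul(x, w)
```

In hardware the same operation costs one XOR gate per weight (plus a zero mask for the ternary case), which is why it replaces a full floating-point multiplier; the values used below are exactly representable in float32, so the round-trip through `struct` is lossless.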