WEKO3
Item

  1. Index: 学位論文 (Doctoral theses)

深層学習に基づく実情景での個人の再識別に関する研究

https://doi.org/10.18997/00008769
e579ed52-bc2b-423a-b849-fb88e4abc1b0
Name / File License Action
kou_k_533.pdf (29.2 MB)
Item type 学位論文 = Thesis or Dissertation (1)
Publication date 2022-03-17
Resource type
Resource type identifier http://purl.org/coar/resource_type/c_db06
Resource type doctoral thesis
Title
Title Study on the Individual Person Re-identification in the Real Scene based on the Deep Learning
Language en
Title
Title 深層学習に基づく実情景での個人の再識別に関する研究
Language ja
Language
Language eng
Author 朱, 苗苗

en Zhu, Miaomiao

ja 朱, 苗苗
Abstract
Description type Abstract
Description Person re-identification (ReID), as an instance-level recognition problem, aims to automatically retrieve a person of interest across multiple non-overlapping camera views, and can be considered a sub-problem of image retrieval. Driven by the growing demand for real-world applications in intelligent video surveillance and public safety, and serving as an effective complement to face recognition, ReID has become an important task in computer vision and has drawn much attention from both academia and industry in recent years. Thanks to advances in deep learning and the release of many large-scale datasets, numerous ReID models have been proposed and have achieved high performance in the past years. However, compared with face recognition, ReID across different cameras remains challenging because of significant variations in viewpoint, resolution, illumination, occlusion, pedestrian pose, and so on. Traditional ReID research mainly focuses on matching cropped pedestrian images between queries and candidates, verified and evaluated experimentally on commonly used datasets; it is independent of detection and focuses only on identification. In other words, the query process of ReID is divided into two separate steps, pedestrian detection and person re-identification, which leaves a large gap to practical applications. Like any advanced algorithm, however, the ultimate goal of ReID research is to contribute to practical application. In the real scene, the goal of ReID is to search for a target person in a gallery of images or videos captured by multiple non-overlapping cameras. Compared with traditional ReID research, its purpose is to search for a person in whole scene images or videos rather than matching against manually cropped pedestrians in an existing dataset.
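As the paragraph above describes, ReID is at heart an instance-retrieval problem: a query feature vector is compared against gallery feature vectors and the gallery is ranked by similarity. The following is a minimal sketch of that ranking step in plain Python; the cosine metric and the toy 3-D "features" are illustrative assumptions, not the actual descriptors used in the thesis.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_gallery(query_feat, gallery):
    """Rank gallery entries (id, feature) by similarity to the query, best first."""
    scored = [(pid, cosine_similarity(query_feat, feat)) for pid, feat in gallery]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Toy 3-D "features": the first gallery entry points the same way as the query.
gallery = [("person_a", [1.0, 0.0, 0.1]),
           ("person_b", [0.0, 1.0, 0.0]),
           ("person_c", [0.5, 0.5, 0.5])]
ranking = rank_gallery([1.0, 0.0, 0.0], gallery)
```

In a real system the single-query accuracy reported below depends on where the correct identity lands in this ranking (rank-1 matching).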
In this paper, from the perspective of computer vision and practical application, based on an analysis of the shortcomings of traditional research methods and of the deep-learning object-detection and ReID research published at computer vision conferences in recent years, instead of breaking ReID into two separate tasks (pedestrian detection and ReID), we select and optimize the pedestrian detection model YOLOv3 and the strong ReID baseline model respectively, and by combining the two models we design a novel and complete practical ReID system that achieves one-step search for specific pedestrians in images or video sequences in actual application scenarios. Unlike traditional ReID methods, the proposed approach combines pedestrian detection and ReID to perform one-step pedestrian detection and search. Compared with other similar work, its most significant advantage is that it directly and effectively reuses existing pedestrian detection and ReID models. To evaluate the effectiveness of our approach, we first evaluate the proposed method on commonly used benchmark datasets; extensive test results show that the average accuracy of a single query on these ReID datasets is over 90%. Secondly, to verify that the proposed method works in complex real-scene application scenarios and to evaluate which factors affect our one-step person search task, we collected four experimental datasets in complex scenes. Before the experimental verification, the data are preprocessed, e.g., cropping the videos as required, converting videos into images (at intervals of 5 frames), and adding the logo. In our setting, the query task of ReID is divided into three classes: image-based, video-based, and real-time one-step person search.
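The one-step pipeline described above (detect every pedestrian in a frame, extract a feature for each detection, compare with the query) can be sketched as follows. Here `detect_pedestrians` and `extract_feature` are hypothetical toy stand-ins for the optimized YOLOv3 detector and the strong ReID baseline used in the thesis; only the control flow of the one-step search is illustrated.

```python
import math

# Toy stand-ins: in the thesis the detector is an optimized YOLOv3 and the
# features come from the strong ReID baseline; here detections are pre-baked.
def detect_pedestrians(frame):
    """Return (box, crop) pairs for one frame; a real detector would run YOLOv3."""
    return frame["pedestrians"]

def extract_feature(crop):
    """A real system would run the ReID backbone on the crop; here the 'crop'
    already is a feature vector."""
    return crop

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def one_step_search(query_feat, frames, threshold=0.9):
    """Detect every pedestrian in every frame and keep detections matching the query."""
    matches = []
    for idx, frame in enumerate(frames):
        for box, crop in detect_pedestrians(frame):
            if cosine(query_feat, extract_feature(crop)) >= threshold:
                matches.append((idx, box))
    return matches

frames = [
    {"pedestrians": [((10, 20, 50, 120), [1.0, 0.0]),    # the target person
                     ((80, 25, 45, 110), [0.0, 1.0])]},  # someone else
    {"pedestrians": [((15, 22, 50, 118), [0.98, 0.05])]},
]
hits = one_step_search([1.0, 0.0], frames)  # (frame index, box) per match
```

The design point is that detection and re-identification share one pass over each frame, instead of cropping pedestrians in a separate offline step.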
That is, for images and video sequences obtained by cameras distributed in different locations, given a person of interest to be queried, the goal of our method is to search for that person in the whole scene images or videos directly, instead of matching against manually cropped pedestrians in existing datasets. The search results are output through two channels in real time: file output in a specified folder, and an information display on the terminal. The experimental results show that our proposed method performs well both in real-scene applications and on commonly used datasets. At the same time, the overall experimental results also show that ReID in the real scene is feasible in terms of both retrieval speed and accuracy. The proposed approach improves the usability of video surveillance applications such as finding criminals, cross-camera person tracking, and activity analysis. From the perspective of the practical application of person re-identification, the main contributions of this paper can be summarized as follows: (1) We summarize and analyze the history and shortcomings of traditional ReID research, including its relationship with image classification, instance retrieval, face recognition, and pedestrian detection. (2) Based on an analysis of existing deep-learning object-detection and person re-identification research published at computer vision conferences in recent years, we propose a complete process for performing fast pedestrian detection and query in a large gallery set collected by camera networks. (3) Using the object detection model YOLOv3 and the strong ReID baseline, combining pedestrian detection and person re-identification under the premise of model optimization, we design a novel and complete practical person re-identification system that achieves one-step search for specific pedestrians in images or video sequences in actual application scenarios.
(4) To evaluate the effectiveness of our approach, we first evaluate the proposed method on commonly used benchmark datasets: three image-based ReID datasets (Market-1501, DukeMTMC-reID, and MSMT17) and one video-based dataset (MARS). Extensive test results show that the average accuracy of a single query on these datasets is over 90%, from which we conclude that our proposed method can be further applied to finding a specific pedestrian in the real scene. (5) To further verify whether the proposed method can work effectively in complex application scenarios and to evaluate which factors affect our one-step person search task, we collected four experimental datasets in complex scenes. Before the experimental verification, the data are preprocessed, e.g., cropping the videos as required, converting videos into images (at intervals of 5 frames), and adding the logo. In our setting, the query task of ReID is divided into three classes: image-based, video-based, and real-time person search. (6) To improve the efficiency of pedestrian retrieval, the experimental results in real scenes are analyzed. After extensive experimental verification in the real scene, we conclude that our proposed method achieves good results. However, we also find that occlusion and resolution are the two most important factors affecting the retrieval results. At the same time, computational complexity and processing speed are important requirements of our method. To improve retrieval efficiency, images are retrieved one by one while videos are retrieved every few frames. In addition, we use other techniques to improve retrieval speed, such as using the GPU and modifying the inference method; this remains a main task to improve in the future. (7) The research will continue toward end-to-end ReID in the real scene in future work.
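One of the efficiency measures mentioned above, processing videos every few frames rather than frame by frame, amounts to simple interval sampling, consistent with the 5-frame interval used when converting videos to images. A minimal sketch, assuming the frames are already decoded into a list:

```python
def sample_frames(frames, interval=5):
    """Keep every `interval`-th frame (indices 0, 5, 10, ...); the detector and
    ReID model then run only on the kept frames, cutting retrieval cost."""
    return [frame for i, frame in enumerate(frames) if i % interval == 0]

frames = list(range(23))       # stand-in for 23 decoded video frames
kept = sample_frames(frames)   # only 5 of the 23 frames are processed
```

The interval trades retrieval speed against the risk of skipping the few frames in which a briefly visible person appears.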
The main future work includes the following aspects. The first is to improve retrieval efficiency, i.e., accuracy, computational complexity, and processing speed, mainly by solving the occlusion problem. Meanwhile, we find that current ReID models fail when a person changes clothes, both in the traditional ReID setting based on commonly used datasets and in the approach proposed in this paper; in other words, current ReID research mainly focuses on short-term scenarios. Therefore, to bring ReID research closer to real life, the more complex variability of long-term ReID urgently needs to be considered. The clothes-changing problem, together with performing real-time pedestrian detection and query on mobile devices, will be the main research direction and content of our future work.
Table of contents
Description type TableOfContents
Description
  1. Introduction
  2. One-step pedestrian detection based on the optimized YOLOv3
  3. Person re-identification method and optimization
  4. Combination of pedestrian detection and ReID in the real scene
  5. Conclusions and future work
Note
Description type Other
Description Doctoral dissertation, Kyushu Institute of Technology (九州工業大学). Diploma number: 工博甲第533号. Date of degree conferral: September 24, 2021 (Reiwa 3).
Keywords
Subject scheme Other
Subjects Deep Learning; Person Re-identification; Pedestrian Detection; Image Retrieval; Convolutional Neural Network; Video Surveillance
Advisor
張, 力峰
Degree number
Degree number 甲第533号
Degree name
Degree name 博士(工学) (Doctor of Engineering)
Date of degree conferral
Date of degree conferral 2021-09-24
Degree-granting institution
Degree-granting institution identifier scheme kakenhi
Degree-granting institution identifier 17104
Degree-granting institution name 九州工業大学 (Kyushu Institute of Technology)
Academic year of degree conferral
Description type Other
Description FY2021 (Reiwa 3)
Version type
Version type VoR
Version type resource http://purl.org/coar/version/c_970fb48d4fbd8a85
Access rights
Access rights open access
Access rights URI http://purl.org/coar/access_right/c_abf2
ID registration
ID registration 10.18997/00008769
ID registration type JaLC