
Publication Information


Title
Japanese:
English: Domain-Specific Adaptation for Enhanced Gait Recognition in Practical Scenarios
Authors
Japanese: Nitish Jaiswal, Vi Duc Huan, LIMANTA Felix, Koichi Shinoda, Masahiro Wakasa.
English: Nitish Jaiswal, Vi Duc Huan, Felix Limanta, Koichi Shinoda, Masahiro Wakasa.
Language: English
Journal/Proceedings
Japanese:
English: Proceedings of the 2024 6th International Conference on Image, Video and Signal Processing
Volume, Number, Pages: 8-15
Publication date: March 2024
Publisher
Japanese:
English: Association for Computing Machinery (ACM)
Conference name
Japanese:
English: International Conference on Image, Video and Signal Processing (IVSP) 2024
Venue
Japanese: Kawasaki, Kanagawa, Japan
English:
DOI: https://doi.org/10.1145/3655755.3655757
Abstract: Gait recognition is a burgeoning field within biometric recognition that uses computer vision to extract silhouette images or body skeletons and identify users by their unique walking patterns. Despite its great potential for user identification in diverse settings, especially security and surveillance applications, it faces challenges in transitioning from controlled datasets to real-world deployment. For silhouette-based models, the most challenging covariate is the varying viewing angle, which has often been a bottleneck to achieving the accuracy required in practical, real-world situations. Addressing this challenge, this paper introduces a novel domain adaptation technique tailored to practical gait recognition, which combines pretraining on an expansive dataset with precise fine-tuning on smaller, targeted datasets for specific camera views. Our analysis reveals that models trained with this adaptive approach, especially when fine-tuned on viewing angles mirroring the test domain, achieve a significant boost in cross-domain performance. Moreover, as a step toward practical gait recognition, we present Asilla-Office, a non-synthetic dataset captured in an indoor office that records the real walking patterns of people in an actual application environment. Rooted in real-world challenges, Asilla-Office is intended as an initial benchmark that promotes research reflecting genuine application needs. In-depth experiments show that our domain-adapted fine-tuning approach outperforms traditional single-stage training, yielding a notable gain of more than 11% in Rank-1 accuracy on the new Asilla-Office dataset. To foster community-driven progress, the Asilla-Office dataset will be made publicly available.
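The abstract describes a two-stage recipe: pretrain a silhouette-based gait model on a large dataset, then fine-tune it on a small dataset whose viewing angles match the deployment cameras. Below is a minimal PyTorch sketch of that recipe; the GaitModel backbone, the toy tensors standing in for the source and target datasets, and all hyperparameters are illustrative assumptions, not the authors' implementation.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    class GaitModel(nn.Module):
        """Tiny silhouette classifier standing in for a real gait backbone."""
        def __init__(self, num_ids):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.head = nn.Linear(16, num_ids)

        def forward(self, x):
            return self.head(self.features(x))

    def run_stage(model, loader, lr, epochs, device="cpu"):
        """One training stage; reused for pretraining and fine-tuning."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        model.to(device).train()
        for _ in range(epochs):
            for sils, ids in loader:
                opt.zero_grad()
                loss_fn(model(sils.to(device)), ids.to(device)).backward()
                opt.step()

    # Random 64x44 silhouettes stand in for (a) a large multi-view source
    # dataset and (b) a small target set restricted to deployment camera views.
    source = TensorDataset(torch.rand(256, 1, 64, 44), torch.randint(0, 10, (256,)))
    target = TensorDataset(torch.rand(32, 1, 64, 44), torch.randint(0, 10, (32,)))

    model = GaitModel(num_ids=10)
    # Stage 1: pretrain on the expansive source dataset.
    run_stage(model, DataLoader(source, batch_size=32, shuffle=True), lr=1e-3, epochs=5)
    # Stage 2: fine-tune on the small, view-matched target dataset.
    run_stage(model, DataLoader(target, batch_size=8, shuffle=True), lr=1e-4, epochs=3)

The only difference between the two stages is the data and a lower learning rate in the second stage, a common choice for preserving pretrained representations while adapting to the target camera views.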
