
Publication Information


Title
Japanese: 
English:Domain-Specific Adaptation for Enhanced Gait Recognition in Practical Scenarios 
Author
Japanese: Nitish Jaiswal, Vi Duc Huan, Felix Limanta, Koichi Shinoda (篠田 浩一), Masahiro Wakasa
English: Nitish Jaiswal, Vi Duc Huan, Felix Limanta, Koichi Shinoda, Masahiro Wakasa
Language English 
Journal/Book name
Japanese: 
English:Proceedings of the 2024 6th International Conference on Image, Video and Signal Processing 
Volume, Number, Page    Pages 8-15
Published date Mar. 2024 
Publisher
Japanese: 
English:Association for Computing Machinery, ACM 
Conference name
Japanese: 
English:International Conference on Image, Video and Signal Processing (IVSP) 2024 
Conference site
Japanese:神奈川県川崎市 
English:Kawasaki, Kanagawa, Japan 
DOI https://doi.org/10.1145/3655755.3655757
Abstract Gait recognition is a growing field within biometric recognition that uses computer vision to extract silhouette images or body skeletons and identify users by their unique walking patterns. Despite its great potential for user identification in diverse settings, especially in security and surveillance applications, it faces challenges in transitioning from controlled datasets to real-world applications. For silhouette-based models, the most challenging covariate is the varying viewing angle, which has often been a bottleneck to achieving the accuracy needed for practical, real-world deployment. To address this challenge, this paper introduces a novel domain adaptation technique tailored for practical gait recognition, combining pretraining on an expansive dataset with precise fine-tuning on smaller, targeted datasets for specific camera views. Our analysis reveals that models trained with this adaptive approach, especially when fine-tuned with viewing angles that mirror the test domain, achieve a significant boost in pure cross-domain performance. Moreover, as a step toward practical gait recognition, we present Asilla-Office, a non-synthetic dataset captured in an indoor office that reflects the real walking patterns of people in an actual application environment. Rooted in real-world challenges, Asilla-Office is intended as an initial benchmark promoting research that reflects genuine application needs. In-depth experiments show that our domain-adapted fine-tuning approach outperforms traditional single-stage training, yielding a gain of more than 11% in Rank-1 accuracy on the new Asilla-Office dataset. To foster community-driven progress, the Asilla-Office dataset will be made publicly available.
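
The abstract describes a two-stage recipe: pretrain on a large multi-view gait dataset, then fine-tune at a lower learning rate on a small target-domain set whose camera views match the deployment (test) views. The sketch below illustrates that recipe in generic PyTorch; the encoder architecture, tensor shapes, dataset sizes, and hyperparameters are all placeholders for illustration, not the authors' actual model or code.

```python
# Minimal sketch of the "pretrain on a large source set, then fine-tune on a
# small view-matched target set" recipe from the abstract. Everything here is
# illustrative: the backbone, data, and hyperparameters are stand-ins.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class GaitEncoder(nn.Module):
    """Stand-in silhouette encoder; the paper's actual backbone differs."""
    def __init__(self, embed_dim=128, num_ids=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, embed_dim),
        )
        self.classifier = nn.Linear(embed_dim, num_ids)

    def forward(self, x):
        return self.classifier(self.features(x))

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for silhouettes, ids in loader:
            opt.zero_grad()
            loss = loss_fn(model(silhouettes), ids)
            loss.backward()
            opt.step()

# Stage 1: pretrain on a large, multi-view source dataset (random stand-in data).
source = TensorDataset(torch.randn(256, 1, 64, 44), torch.randint(0, 100, (256,)))
model = GaitEncoder()
train(model, DataLoader(source, batch_size=32), epochs=5, lr=1e-3)

# Stage 2: fine-tune at a lower learning rate on a small target-domain set
# whose camera views match the deployment (test) views.
target = TensorDataset(torch.randn(64, 1, 64, 44), torch.randint(0, 100, (64,)))
train(model, DataLoader(target, batch_size=16), epochs=3, lr=1e-4)
```

The key design choice the abstract emphasizes is the second stage: reusing the pretrained weights while adapting only to data from camera views that mirror the test domain, rather than training a single model from scratch on mixed data.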
