
Publication Information


Title
Japanese: 
English:EvIs-Kitchen: Egocentric Human Activities Recognition with Video and Inertial Sensor data 
Author
Japanese: HAO Yuzhe, 宇都 有昭, 金崎 朝子, 佐藤 育郎, 川上 玲, 篠田 浩一.  
English: Yuzhe Hao, Kuniaki Uto, Asako Kanezaki, Ikuro Sato, Rei Kawakami, Koichi Shinoda.  
Language English 
Journal/Book name
Japanese: 
English:Proc. International Conference on MULTIMEDIA MODELING 
Volume, Number, Page pp. 373–384
Published date Mar. 29, 2023 
Publisher
Japanese: 
English:Springer Nature 
Conference name
Japanese: 
English:29TH INTERNATIONAL CONFERENCE ON MULTIMEDIA MODELING (MMM) 
Conference site
Japanese: 
English:Bergen 
DOI https://doi.org/10.1007/978-3-031-27077-2_29
Abstract Egocentric Human Activity Recognition (ego-HAR) has received attention in fields where human intentions must be estimated from video. The performance of existing methods, however, is limited by insufficient information about the subject's motion in egocentric videos. We consider that a dataset of egocentric videos, together with data from two inertial sensors attached to the subject's wrists to capture more information about the subject's motion, would be useful for studying the problem in depth. This paper therefore provides a publicly available dataset, EvIs-Kitchen, which contains well-synchronized egocentric videos and two-hand inertial sensor data, as well as interaction-highlighted annotations. We also present a baseline multimodal activity recognition method with a two-stream architecture and score fusion to validate that such multimodal learning on egocentric videos and inertial sensor data is more effective for this problem. Experiments show that our multimodal method outperforms single-modal methods on EvIs-Kitchen.
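The abstract's "score fusion" refers to late fusion, where each stream produces its own class scores and the scores are combined. The sketch below is a minimal illustration of that general idea, not the paper's actual architecture: the softmax averaging, the toy logits, and the fusion weight `w_video` are all hypothetical.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def score_fusion(video_logits, imu_logits, w_video=0.5):
    """Late fusion: weighted average of the two streams' class probabilities.

    `w_video` is a hypothetical mixing weight, not a value from the paper.
    """
    p_video = softmax(video_logits)
    p_imu = softmax(imu_logits)
    return w_video * p_video + (1.0 - w_video) * p_imu

# Toy example with 3 activity classes where the two streams disagree:
# the video stream favors class 0, the inertial stream favors class 1.
video_logits = np.array([2.0, 0.5, 0.1])
imu_logits = np.array([1.0, 1.8, 0.2])

fused = score_fusion(video_logits, imu_logits, w_video=0.5)
pred = int(np.argmax(fused))  # fused probabilities still sum to 1
```

In this toy case the fused distribution keeps class 0 on top because the video stream's evidence is stronger, showing how late fusion lets the more confident modality dominate per example.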

©2007 Tokyo Institute of Technology All rights reserved.