
Publication Information


Title
Japanese:
English: EvIs-Kitchen: Egocentric Human Activities Recognition with Video and Inertial Sensor Data
Authors
Japanese: HAO Yuzhe, 宇都 有昭, 金崎 朝子, 佐藤 育郎, 川上 玲, 篠田 浩一
English: Yuzhe Hao, Kuniaki Uto, Asako Kanezaki, Ikuro Sato, Rei Kawakami, Koichi Shinoda
Language: English
Journal/Book title
Japanese:
English: Proc. International Conference on Multimedia Modeling
Volume, Number, Pages: pp. 373–384
Publication date: March 29, 2023
Publisher
Japanese:
English: Springer Nature
Conference name
Japanese:
English: 29th International Conference on Multimedia Modeling (MMM)
Venue
Japanese:
English: Bergen
DOI: https://doi.org/10.1007/978-3-031-27077-2_29
Abstract: Egocentric Human Activity Recognition (ego-HAR) has received attention in fields where human intentions in a video must be estimated. The performance of existing methods, however, is limited by insufficient information about the subject's motion in egocentric videos. We consider that a dataset of egocentric videos, together with data from two inertial sensors attached to the subject's wrists to capture more information about the subject's motion, will be useful for studying the problem in depth. Therefore, this paper provides a publicly available dataset, EvIs-Kitchen, which contains well-synchronized egocentric videos and two-hand inertial sensor data, as well as interaction-highlighted annotations. We also present a baseline multimodal activity recognition method with a two-stream architecture and score fusion to validate that such multimodal learning on egocentric video and inertial sensor data is more effective for tackling the problem. Experiments show that our multimodal method outperforms single-modal methods on EvIs-Kitchen.
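
For context on the baseline described in the abstract: "score fusion" refers to late fusion of per-stream class scores. The following minimal Python sketch illustrates that general idea; the equal weighting, the toy logits, and all function names are illustrative assumptions, not the paper's actual implementation.

    import numpy as np

    def softmax(logits):
        # Numerically stable softmax over the last (class) axis.
        z = logits - logits.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def fuse_scores(video_logits, imu_logits, w_video=0.5):
        # Late (score-level) fusion: turn each stream's logits into class
        # probabilities, then take a weighted average of the two score vectors.
        return w_video * softmax(video_logits) + (1.0 - w_video) * softmax(imu_logits)

    # Toy example: 3 activity classes, logits from each (hypothetical) stream.
    video_logits = np.array([2.0, 0.5, -1.0])  # e.g. output of a video backbone
    imu_logits   = np.array([0.2, 1.8, -0.5])  # e.g. output of an IMU encoder
    fused = fuse_scores(video_logits, imu_logits)
    print("fused scores:", fused, "-> predicted class:", int(fused.argmax()))

Because fusion happens at the score level, each stream can be trained independently and the fusion weight tuned afterward on validation data.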
