
Publication Information


Title
Japanese:
English: Egocentric Human Activities Recognition With Multimodal Interaction Sensing
Authors
Japanese: HAO Yuzhe, 金崎 朝子, 佐藤 育郎, 川上 玲, 篠田 浩一
English: Yuzhe Hao, Asako Kanezaki, Ikuro Sato, Rei Kawakami, Koichi Shinoda
Language: English
Journal/Book title
Japanese:
English: IEEE Sensors Journal
Volume, Number, Pages: Vol. 24, No. 5, pp. 7085-7096
Publication date: March 1, 2024
Publisher
Japanese:
English: IEEE
Conference name
Japanese:
English:
Venue
Japanese:
English:
DOI: https://doi.org/10.1109/JSEN.2023.3349191
Abstract: Egocentric human activity recognition (ego-HAR) has received attention in fields where human intentions must be estimated from video. However, the performance of existing methods is limited by insufficient information about the subject's motion in egocentric videos. To overcome this problem, we propose using inertial sensor data from both hands to supplement egocentric videos for the ego-HAR task. For this purpose, we construct a publicly available dataset, egocentric video and inertial sensor data kitchen (EvIs-Kitchen), which contains well-synchronized egocentric videos and two-hand inertial sensor data, and includes interaction-focused actions as recognition targets. We also determine the optimal input combinations and component variants through experiments with a two-branch late-fusion architecture. The results show that our multimodal setup outperforms single-modal methods on EvIs-Kitchen.
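The two-branch late-fusion setup mentioned in the abstract can be illustrated with a minimal sketch: each branch (video, inertial sensors) produces per-class scores, and the fused prediction is a weighted average of their softmax probabilities. This is an assumption-laden illustration, not the authors' implementation; the logits, the four-class setup, and the fusion weight `alpha` are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def late_fusion(video_logits, imu_logits, alpha=0.5):
    """Fuse per-class scores from a video branch and an
    inertial-sensor branch by a weighted average of their
    softmax probabilities (late fusion)."""
    p_video = softmax(video_logits)
    p_imu = softmax(imu_logits)
    return alpha * p_video + (1.0 - alpha) * p_imu

# Hypothetical logits for four action classes from each branch.
video_logits = np.array([2.0, 0.5, 0.1, -1.0])
imu_logits = np.array([1.5, 1.8, 0.0, -0.5])
fused = late_fusion(video_logits, imu_logits)
predicted_class = int(fused.argmax())
```

Because the fused vector is a convex combination of two probability distributions, it remains a valid distribution, and `alpha` controls how much the video branch is trusted relative to the inertial branch.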
