
Publication Information


Title
Japanese: 
English: Object-wise 3D Gaze Mapping in Physical Workspace
Authors
Japanese: Irshad Abibouraguimane, 伊藤勇太, 樋口啓太, 大塚慈雨, 杉本麻樹, 佐藤洋一
English: Irshad Abibouraguimane, Yuta Itoh, Keita Higuchi, Jiu Otsuka, Maki Sugimoto, Yoichi Sato
Language: English
Journal/Book Title
Japanese: 
English: AH '18: Proceedings of the 9th Augmented Human International Conference
Volume, Number, Pages: Article No. 25, pp. 1-5
Publication date: February 7, 2018
Publisher
Japanese: 
English: ACM
Conference
Japanese: 
English: The 9th Augmented Human International Conference (AH2018)
Venue
Japanese: 
English: Seoul
DOI https://doi.org/10.1145/3174910.3174921
Abstract: Understanding the intentions of other people is a fundamental social skill in human communication. Eye behavior is an important, yet implicit, communication cue. In this work, we focus on enabling people to see a user's gaze associated with objects in 3D space; that is, we present the history of gaze linked to real 3D objects. Our 3D gaze visualization system automatically segments objects in the workspace and projects the user's gaze trajectory onto those objects in 3D to visualize the user's intention. By combining automated object segmentation with head tracking via first-person video from a wearable eye tracker, our system can visualize a user's gaze behavior more intuitively and efficiently than 2D-based methods or 3D methods that rely on manual annotation. We evaluated the system to measure the accuracy of object-wise gaze mapping; it achieved 94% accuracy when mapping gaze onto 40-, 30-, 20-, and 10-centimeter cubes. Through a case study in which a user looks at food products, we also showed that our system was able to predict the products the user is interested in.
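The core idea of object-wise gaze mapping can be sketched as assigning each gaze ray (origin from head tracking, direction from the eye tracker) to the first segmented object it intersects. The sketch below is illustrative only and is not the paper's implementation: it approximates segmented objects as axis-aligned bounding boxes, and all function and variable names (`ray_aabb_hit`, `map_gaze_to_objects`) are hypothetical.

```python
import numpy as np

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: distance along the ray to an axis-aligned box, or None if missed.

    Assumes the ray origin does not lie exactly on a box face when a
    direction component is zero (a degenerate case ignored in this sketch).
    """
    with np.errstate(divide="ignore"):
        inv = 1.0 / direction
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_near = np.minimum(t1, t2).max()  # latest entry across the three slabs
    t_far = np.maximum(t1, t2).min()   # earliest exit across the three slabs
    if t_near <= t_far and t_far >= 0:
        return max(t_near, 0.0)
    return None

def map_gaze_to_objects(gaze_rays, boxes):
    """Assign each gaze ray to the nearest intersected object, or None.

    gaze_rays: list of (origin, direction) pairs as 3-vectors.
    boxes: dict mapping object id -> (box_min, box_max) corner 3-vectors.
    """
    hits = []
    for origin, direction in gaze_rays:
        best, best_t = None, np.inf
        for obj_id, (bmin, bmax) in boxes.items():
            t = ray_aabb_hit(origin, direction, bmin, bmax)
            if t is not None and t < best_t:
                best, best_t = obj_id, t
        hits.append(best)
    return hits
```

Accumulating these per-object hits over time yields the gaze-history-per-object visualization described in the abstract; the real system would derive the object geometry from automated segmentation rather than hand-specified boxes.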

©2007 Institute of Science Tokyo All rights reserved.