
Publication Information


Title
Japanese: 
English: Cross-View Human Action Recognition from Depth Maps Using Spectral Graph Sequences 
Author
Japanese: Kerola Tommi Matias, 井上 中順, 篠田浩一.  
English: Tommi Kerola, Nakamasa Inoue, Koichi Shinoda.  
Language English 
Journal/Book name
Japanese: 
English: Elsevier Journal of Computer Vision and Image Understanding (CVIU) 
Volume, Number, Page vol. 154, pp. 108-126
Published date Jan. 1, 2017 
Publisher
Japanese: 
English: Elsevier Inc. 
Conference name
Japanese: 
English: 
Conference site
Japanese: 
English: 
File
Official URL http://www.sciencedirect.com/science/article/pii/S1077314216301588
 
DOI https://doi.org/10.1016/j.cviu.2016.10.004
Abstract We present a method for view-invariant action recognition from depth cameras based on graph signal processing techniques. Our framework leverages a novel graph representation of an action as a temporal sequence of graphs, onto which we apply a spectral graph wavelet transform for creating our feature descriptor. We evaluate two view-invariant graph types: skeleton-based and keypoint-based. The skeleton-based descriptor captures the spatial pose of the subject, whereas the keypoint-based descriptor captures complementary information about human-object interaction and the shape of the point cloud. We investigate the effectiveness of our method through experiments on five publicly available datasets. Through the graph structure, our method captures the temporal interaction between depth map interest points and achieves a 19.8% increase in performance compared to state-of-the-art results for cross-view action recognition, with competitive results for frontal-view action recognition and human-object interaction. Specifically, our method achieves 90.8% accuracy on the cross-view N-UCLA Multiview Action3D dataset and 91.4% accuracy on the challenging MSRAction3D dataset in the cross-subject setting. For human-object interaction, our method achieves 72.3% accuracy on the Online RGBD Action dataset. We also achieve 96.0% and 98.8% accuracy on the MSRActionPairs3D and UCF-Kinect datasets, respectively.
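
To make the abstract's core idea concrete, the following is a minimal, hypothetical Python (NumPy) sketch of a spectral graph wavelet transform applied to a graph signal on a skeleton graph, with per-frame coefficients concatenated over a temporal sequence. It is not the authors' implementation: the joint count, bone connectivity, wavelet kernel, and scales below are illustrative assumptions.

# Hypothetical sketch of the general technique described in the abstract:
# build a skeleton graph, apply a spectral graph wavelet transform to a graph
# signal (here, 3D joint coordinates), and stack coefficients over frames.
import numpy as np

def graph_laplacian(num_nodes, edges):
    # Unnormalized Laplacian L = D - A for an undirected skeleton graph.
    A = np.zeros((num_nodes, num_nodes))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    D = np.diag(A.sum(axis=1))
    return D - A

def sgwt_descriptor(L, signal, scales, kernel=lambda x: x * np.exp(-x)):
    # Spectral graph wavelet coefficients of `signal` at the given scales.
    #   L      : (N, N) graph Laplacian
    #   signal : (N, C) graph signal, e.g. 3D joint coordinates (C = 3)
    #   scales : iterable of wavelet scales s
    # Returns one flattened coefficient block per scale, concatenated.
    eigvals, eigvecs = np.linalg.eigh(L)           # L = U diag(lambda) U^T
    blocks = []
    for s in scales:
        g = kernel(s * eigvals)                    # kernel evaluated on the spectrum
        W = eigvecs @ np.diag(g) @ eigvecs.T       # wavelet operator at scale s
        blocks.append((W @ signal).ravel())        # filtered signal = coefficients
    return np.concatenate(blocks)

# Toy usage: a 5-joint "skeleton" over a short temporal sequence of frames.
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]           # hypothetical bone connectivity
L = graph_laplacian(5, edges)
frames = [np.random.randn(5, 3) for _ in range(4)] # stand-in for tracked joint positions
descriptor = np.concatenate([sgwt_descriptor(L, f, scales=[0.5, 1.0, 2.0]) for f in frames])
print(descriptor.shape)                            # (4 frames x 3 scales x 5 joints x 3 coords,) = (180,)

In the paper's setting, such per-frame wavelet coefficients form the temporal feature descriptor; the keypoint-based variant would build the graph from depth map interest points instead of skeleton joints.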
