
Paper / Book Information


Title: Implicit Neural Representations for Variable Length Human Motion Generation
Authors: Pablo Cervantes, Yusuke Sekikawa, Ikuro Sato, Koichi Shinoda
Language: English
Published in: Proc. European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science
Volume, Pages: Vol. 13677, pp. 356–372
Publication date: October 24, 2022
Publisher: Springer, Cham
Conference: European Conference on Computer Vision 2022
Venue: Tel Aviv
Official link: https://link.springer.com/chapter/10.1007/978-3-031-19790-1_22
DOI: https://doi.org/10.1007/978-3-031-19790-1_22
Abstract: We propose an action-conditional human motion generation method using variational implicit neural representations (INR). The variational formalism enables action-conditional distributions of INRs, from which one can easily sample representations to generate novel human motion sequences. Our method offers variable-length sequence generation by construction, because a part of the INR is optimized for a whole sequence of arbitrary length with temporal embeddings; in contrast, previous works reported difficulties with modeling variable-length sequences. We confirm that our method with a Transformer decoder outperforms all relevant methods on the HumanAct12, NTU-RGBD, and UESTC datasets in terms of realism and diversity of generated motions. Surprisingly, even our method with an MLP decoder consistently outperforms the state-of-the-art Transformer-based auto-encoder. In particular, we show that variable-length motions generated by our method are better than fixed-length motions generated by the state-of-the-art method in terms of realism and diversity. Code is available at https://github.com/PACerv/ImplicitMotion
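The core idea in the abstract, that an INR maps a temporal embedding plus a per-sequence latent code to a pose, so any sequence length can be decoded, can be sketched as follows. This is a minimal illustration, not the authors' implementation (see their repository for that): the network sizes, the sinusoidal embedding, and the randomly initialized MLP decoder are all assumptions made for the sketch.

```python
# Minimal sketch of an action-conditional variational INR for motion.
# All names and dimensions here are illustrative, not from the paper's code.
import numpy as np

rng = np.random.default_rng(0)
POSE_DIM, LATENT_DIM, EMB_DIM, HIDDEN = 24, 8, 16, 32

def temporal_embedding(t, dim=EMB_DIM):
    """Sinusoidal embedding of a normalized time index t in [0, 1)."""
    freqs = 2.0 ** np.arange(dim // 2)
    return np.concatenate([np.sin(freqs * t), np.cos(freqs * t)])

# Randomly initialized MLP decoder standing in for a trained one.
W1 = rng.normal(0.0, 0.1, (HIDDEN, EMB_DIM + LATENT_DIM))
W2 = rng.normal(0.0, 0.1, (POSE_DIM, HIDDEN))

def decode_pose(z, t):
    """INR decoder: (latent code z, time t) -> one pose vector."""
    h = np.tanh(W1 @ np.concatenate([temporal_embedding(t), z]))
    return W2 @ h

def generate_motion(length, mean, log_var):
    """Sample a latent from an (action-conditional) Gaussian, then decode
    a motion of any requested length by querying the INR at each frame."""
    z = mean + np.exp(0.5 * log_var) * rng.normal(size=LATENT_DIM)
    return np.stack([decode_pose(z, t / length) for t in range(length)])

# Variable-length generation by construction: the same latent distribution
# yields sequences of whatever length is requested.
mean, log_var = np.zeros(LATENT_DIM), np.zeros(LATENT_DIM)
short = generate_motion(30, mean, log_var)    # shape (30, POSE_DIM)
long = generate_motion(120, mean, log_var)    # shape (120, POSE_DIM)
```

Because the decoder is queried frame by frame at continuous time values, nothing in the architecture fixes the sequence length, which is the property the abstract highlights.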

©2007 Tokyo Institute of Technology All rights reserved.