Publication Information


Title
English: Text-Guided Object Detector for Multi-modal Video Question Answering
Author
Japanese: Ruoyue Shen, 井上 中順, 篠田 浩一
English: Ruoyue Shen, Nakamasa Inoue, Koichi Shinoda
Language: English
Journal/Book name
English: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023
Volume, Number, Page: pp. 1032-1042
Published date: Jan. 2023
Publisher
English: IEEE
Conference name
English: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023
Conference site
Japanese: ハワイ
English: Hawaii
Official URL: https://wacv2023.thecvf.com/home
DOI: https://doi.org/10.1109/WACV56688.2023.00109
Abstract: Video Question Answering (Video QA) is the task of answering a text-format question based on an understanding of linguistic semantics, visual information, and linguistic-visual alignment in a video. In Video QA, an object detector pre-trained on large-scale datasets, such as Faster R-CNN, has been widely used to extract visual representations from video frames. However, it is not always able to precisely detect the objects needed to answer the question because of the domain gap between the datasets used to train the object detector and those used for Video QA. In this paper, we propose a text-guided object detector (TGOD), which takes text question-answer pairs and video frames as inputs and detects the objects relevant to the given text, thereby providing intuitive visualization and interpretable results. Our experiments using the STAGE framework on the TVQA+ dataset show the effectiveness of the proposed detector: it achieves a 2.02-point improvement in QA accuracy, a 12.13-point improvement in object detection (mAP50), a 1.1-point improvement in temporal localization, and a 2.52-point improvement in ASA over the original STAGE detector.
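To give a feel for what "text-guided" detection means, the following is a minimal illustrative sketch, not the paper's actual architecture: it assumes each candidate region and the question text have already been embedded into a shared feature space, and simply re-ranks regions by cosine similarity to the text embedding, keeping the best matches. All feature values and helper names below are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def text_guided_filter(region_feats, text_feat, top_k=2):
    """Keep the indices of the top_k regions most similar to the text embedding.

    Hypothetical stand-in for text guidance: real systems typically learn this
    alignment end-to-end rather than using a fixed similarity as done here.
    """
    scored = sorted(
        ((cosine(f, text_feat), i) for i, f in enumerate(region_feats)),
        reverse=True,
    )
    return [i for _, i in scored[:top_k]]

# Toy example: three 2-D region features and a text feature that is
# closest to regions 2 and 0, so those two indices are kept.
regions = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
text = [1.0, 0.1]
print(text_guided_filter(regions, text))  # → [2, 0]
```

The point of the sketch is only the selection step: instead of returning every detection from a generic pre-trained detector, a text-guided detector keeps the regions that best match the question-answer text, which is what makes its outputs directly interpretable with respect to the question.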
