
Publication Information


Title
Japanese:Multimodal Fusion of BERT-CNN and Gated CNN Representations for Depression Detection 
English:Multimodal Fusion of BERT-CNN and Gated CNN Representations for Depression Detection 
Author
Japanese: R Makiuchi Mariana, Warnita Tifani, 宇都 有昭, 篠田 浩一.  
English: Mariana Rodrigues Makiuchi, Tifani Warnita, Kuniaki Uto, Koichi Shinoda.  
Language English 
Journal/Book name
Japanese:Proceedings of the 9th International on Audio/Visual Emotion Challenge and Workshop 
English:Proceedings of the 9th International on Audio/Visual Emotion Challenge and Workshop 
Volume, Number, Page    pp. 55-63
Published date Oct. 2019 
Publisher
Japanese: 
English:Association for Computing Machinery 
Conference name
Japanese: 
English:9th International Audio/Visual Emotion Challenge and Workshop (AVEC) 2019 
Conference site
Japanese:ニース 
English:Nice 
File
Official URL https://dl.acm.org/citation.cfm?id=3357694
 
DOI https://doi.org/10.1145/3347320.3357694
Abstract Depression is a common but serious mental disorder that affects people all over the world. Besides providing an easier way of diagnosing the disorder, a computer-aided automatic depression assessment system is in demand to reduce subjective bias in the diagnosis. We propose a multimodal fusion of speech and linguistic representations for depression detection. We train our model to infer the Patient Health Questionnaire (PHQ) score of subjects from the AVEC 2019 DDS Challenge database, the E-DAIC corpus. For the speech modality, we use deep spectrum features extracted from a pretrained VGG-16 network and employ a Gated Convolutional Neural Network (GCNN) followed by an LSTM layer. For the textual embeddings, we extract BERT textual features and employ a Convolutional Neural Network (CNN) followed by an LSTM layer. We achieve CCC scores of 0.497 and 0.608 on the E-DAIC corpus development set using the unimodal speech and linguistic models, respectively. We further combine the two modalities using a feature fusion approach in which we feed the last representation of each single-modality model to a fully-connected layer to estimate the PHQ score. With this multimodal approach, we achieve a CCC score of 0.696 on the development set and 0.403 on the test set of the E-DAIC corpus, an absolute improvement of 0.283 points over the challenge baseline.
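
The following is a minimal PyTorch sketch of the fusion architecture outlined in the abstract: a GCNN + LSTM branch over deep spectrum (VGG-16) speech features, a CNN + LSTM branch over BERT token embeddings, and a fully-connected regressor over the concatenated representations that predicts the PHQ score. Hidden sizes, kernel widths, the ReLU on the text CNN, and the use of the final LSTM hidden state are illustrative assumptions, not the authors' exact configuration.

# Sketch only; layer sizes and details are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class GatedConv1d(nn.Module):
    # Gated convolution block: conv output modulated by a sigmoid gate (GCNN).
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.gate = nn.Conv1d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                     # x: (batch, channels, time)
        return self.conv(x) * torch.sigmoid(self.gate(x))

class DepressionFusionModel(nn.Module):
    def __init__(self, speech_dim=4096, text_dim=768, hidden=128):
        super().__init__()
        # Speech branch: deep spectrum (VGG-16) features -> GCNN -> LSTM
        self.speech_gcnn = GatedConv1d(speech_dim, hidden)
        self.speech_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        # Text branch: BERT token embeddings -> CNN -> LSTM
        self.text_cnn = nn.Conv1d(text_dim, hidden, kernel_size=3, padding=1)
        self.text_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        # Feature fusion: concatenated branch representations -> FC PHQ regressor
        self.regressor = nn.Linear(2 * hidden, 1)

    def forward(self, speech_feats, text_feats):
        # speech_feats: (batch, T_s, speech_dim); text_feats: (batch, T_t, text_dim)
        s = self.speech_gcnn(speech_feats.transpose(1, 2)).transpose(1, 2)
        _, (s_h, _) = self.speech_lstm(s)       # last hidden state of speech LSTM
        t = torch.relu(self.text_cnn(text_feats.transpose(1, 2))).transpose(1, 2)
        _, (t_h, _) = self.text_lstm(t)         # last hidden state of text LSTM
        fused = torch.cat([s_h[-1], t_h[-1]], dim=-1)
        return self.regressor(fused).squeeze(-1)  # predicted PHQ score per subject

# Shape check: batch of 2 subjects, 100 speech feature frames, 50 BERT tokens.
model = DepressionFusionModel()
phq = model(torch.randn(2, 100, 4096), torch.randn(2, 50, 768))
print(phq.shape)  # torch.Size([2])
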
