
Paper / Book Information


Title
Japanese: Incorporating Acoustic and Textual Information for Language Modeling in Code-switching Speech Recognition
English: Incorporating Acoustic and Textual Information for Language Modeling in Code-switching Speech Recognition
Authors
Japanese: Hartanto Roland, 宇都 有昭, 篠田 浩一.
English: Roland HARTANTO, Kuniaki UTO, Koichi SHINODA.
Language: English
Journal / Book Title
Japanese:
English: IEICE Technical Report
Volume, Issue, Pages: vol. 121, no. 385, pp. 56-63
Publication Date: March 2022
Publisher
Japanese: 一般社団法人電子情報通信学会
English: The Institute of Electronics, Information and Communication Engineers
Conference Name
Japanese:
English: The Institute of Electronics, Information and Communication Engineers Technical Committee Conference (IEICE SP) 2022-03
Venue
Japanese: 沖縄
English: Okinawa
File
Official link: https://www.ieice.org/ken/paper/20220301sCJ2/
 
Abstract: People who speak two or more languages tend to alternate between them while speaking. This phenomenon is called code-switching, and it occurs frequently in multicultural societies. Automatic speech recognition (ASR) of code-switching speech is challenging both acoustically and linguistically because of the scarcity of code-switching data. This work aims to improve a code-switching ASR system by improving its language model. We explore code-switching data augmentation for language modeling that utilizes the ASR decoding lattice to address the pronunciation-variation and data-scarcity problems. We incorporate both acoustic and textual information by pretraining GPT-2, a transformer-based language model, on the code-switching ASR decoding lattice. Our approach achieves an absolute word error rate reduction of around 2 points over the baseline n-gram language model, and of 0.33 points over the lattice-rescored baseline.
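The rescoring step mentioned in the abstract can be illustrated with a minimal sketch: re-ranking ASR hypotheses by combining the decoder score with a GPT-2 language-model score. This is only an illustration under assumptions, not the paper's implementation: the stock "gpt2" checkpoint, the n-best list, and the interpolation weight LM_WEIGHT are all hypothetical, and the paper's lattice-based pretraining is not shown.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def lm_log_prob(text: str) -> float:
    """Sum of token log-probabilities of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood over the shifted tokens,
    # so multiply by the number of predicted tokens to get the total.
    return -out.loss.item() * (ids.size(1) - 1)

# Hypothetical n-best list from an ASR decoder: (decoder score, hypothesis text).
nbest = [
    (-12.3, "saya mau pergi ke office besok"),
    (-12.9, "saya mau pergi ke kantor besok"),
]

LM_WEIGHT = 0.5  # assumed interpolation weight between decoder and LM scores
best_score, best_hyp = max(nbest, key=lambda h: h[0] + LM_WEIGHT * lm_log_prob(h[1]))
print(best_hyp)

In practice the language model would first be adapted to code-switching text (in this work, text derived from decoding lattices) before being used for rescoring.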
