
Publication Information


Title
Japanese:
English: Wise Teachers Train Better DNN Acoustic Models
Authors
Japanese: Ryan Price, 磯 健一 (Kenichi Iso), Koichi Shinoda
English: Ryan Price, Kenichi Iso, Koichi Shinoda
Language: English
Journal/Book Title
Japanese:
English: EURASIP Journal on Audio, Speech, and Music Processing
Volume, Issue, Pages: 10, pp. 1-19
Publication Date: April 12, 2016
Publisher
Japanese:
English: Springer International Publishing Ltd
Conference Name
Japanese:
English:
Venue
Japanese:
English:
File:
Official Link: http://asmp.eurasipjournals.springeropen.com/articles/10.1186/s13636-016-0088-7
DOI: https://doi.org/10.1186/s13636-016-0088-7
Abstract: Automatic speech recognition is becoming more ubiquitous as recognition performance improves, capable devices increase in number, and areas of new application open up. Neural network acoustic models that, for example, use speaker-adaptive features, have deep and wide layers, or employ more computationally expensive architectures often obtain the best recognition accuracy, but may not fit the computational and storage budget or the latency requirements of the deployed system. We explore a straightforward training approach which takes advantage of highly accurate but expensive-to-evaluate neural network acoustic models by using their outputs to relabel training examples for easier-to-deploy models. Experiments on a large vocabulary continuous speech recognition task yield relative reductions in word error rate of up to 16.7% over training with the hard aligned labels, by effectively making use of large amounts of additional untranscribed data. Somewhat remarkably, the approach works well even when only two output classes are present. Experiments on a voice activity detection task give relative reductions in equal error rate of up to 11.5% when using a convolutional neural network to relabel training examples for a feedforward neural network. An investigation into the hidden layer weight matrices finds that soft target-trained networks tend to produce weight matrices having fuller rank and slower decay in singular values than their hard target-trained counterparts, suggesting that more of the network's capacity is utilized for learning additional information, giving better accuracy.
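The training approach the abstract describes replaces hard one-hot alignment labels with the teacher network's output posteriors as soft targets, and trains the student by cross-entropy against those posteriors. A minimal NumPy sketch of that loss (illustrative only; the logits, class count, and example values below are hypothetical and do not reproduce the paper's networks or training setup):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def soft_target_loss(student_logits, target_probs):
    # Cross-entropy of student predictions against a target distribution,
    # averaged over the batch. With one-hot targets this reduces to the
    # usual hard-label cross-entropy; with teacher posteriors it is the
    # soft-target objective.
    log_q = np.log(softmax(student_logits) + 1e-12)
    return float(-np.mean(np.sum(target_probs * log_q, axis=-1)))

# Hypothetical example: the teacher relabels one training frame.
teacher_logits = np.array([[2.0, 0.5, -1.0]])
soft_labels = softmax(teacher_logits)      # soft targets from the teacher
hard_labels = np.array([[1.0, 0.0, 0.0]])  # one-hot forced-alignment label

student_logits = np.array([[1.5, 0.8, -0.5]])
loss_soft = soft_target_loss(student_logits, soft_labels)
loss_hard = soft_target_loss(student_logits, hard_labels)
```

Because the teacher's posteriors carry probability mass on competing classes, the soft-target loss also conveys inter-class similarity information that a one-hot label discards, and it can be computed on untranscribed data where no forced alignment exists.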

©2007 Tokyo Institute of Technology All rights reserved.