Publication Information


Title
Japanese: 
English:Feasibility of decoding covert speech in ECoG with a Transformer trained on overt speech 
Author
Japanese: Shuji Komeiji, Takumi Mitsuhashi, Yasushi Iimura, Hiroharu Suzuki, Hidenori Sugano, 篠田 浩一, 田中 聡久.  
English: Shuji Komeiji, Takumi Mitsuhashi, Yasushi Iimura, Hiroharu Suzuki, Hidenori Sugano, Koichi Shinoda, Toshihisa Tanaka.  
Language English 
Journal/Book name
Japanese: 
English:Scientific Reports 
Volume, Number, Page     14   
Published date May 20, 2024 
Publisher
Japanese: 
English:Springer Nature 
Conference name
Japanese: 
English: 
Conference site
Japanese: 
English: 
File
Official URL https://www.nature.com/articles/s41598-024-62230-9#citeas
 
DOI https://doi.org/10.1038/s41598-024-62230-9
Abstract Several attempts at speech brain–computer interfacing (BCI) have been made to decode phonemes, sub-words, words, or sentences from invasive measurements, such as the electrocorticogram (ECoG), recorded during auditory speech perception, overt speech, or imagined (covert) speech. Decoding sentences from covert speech is a challenging task. Sixteen epilepsy patients with intracranially implanted electrodes participated in this study, and ECoGs were recorded during overt and covert speech of eight Japanese sentences, each consisting of three tokens. In particular, a Transformer neural network model, trained on ECoGs obtained during overt speech, was applied to decode text sentences from covert speech. We first examined the proposed Transformer model using the same task for training and testing, and then evaluated its performance when trained on the overt task and tested on covert speech. The Transformer model trained on covert speech achieved an average token error rate (TER) of 46.6% for decoding covert speech, whereas the model trained on overt speech achieved a TER of 46.3%. Therefore, the challenge of collecting training data for covert speech can be addressed by using overt speech. The performance of covert speech decoding can be further improved by employing several overt speech recordings.
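The evaluation metric reported in the abstract, token error rate (TER), is conventionally computed as the token-level edit distance between the decoded and reference sequences, normalized by the reference length. A minimal sketch of that computation follows; the example sentences are hypothetical and the exact scoring details used in the paper may differ:

```python
def token_error_rate(reference, hypothesis):
    """Levenshtein distance (substitutions + insertions + deletions)
    between token sequences, divided by the reference token count."""
    m, n = len(reference), len(hypothesis)
    # d[i][j] = edit distance between reference[:i] and hypothesis[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n] / m

# Hypothetical three-token sentence: one substituted token out of three
ref = ["taiyou", "ga", "noboru"]
hyp = ["taiyou", "wa", "noboru"]
print(token_error_rate(ref, hyp))
```

A TER of 46.6% thus means that, on average, roughly half of the reference tokens required an edit to match the decoded output.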

©2007 Institute of Science Tokyo All rights reserved.