
Publication Information


Title
Japanese: Fixed-Weight Difference Target Propagation
English: Fixed-Weight Difference Target Propagation
Authors
Japanese: 澁谷 辰吉, 井上 中順, 川上 玲, 佐藤 育郎
English: Tatsukichi Shibuya, Nakamasa Inoue, Rei Kawakami, Ikuro Sato
Language: English
Journal/Book Title
Japanese:
English: Proceedings of the AAAI Conference on Artificial Intelligence
Volume, Issue, Pages: vol. 37, no. 8, pp. 9811-9819
Publication Date: February 2023
Publisher
Japanese:
English:
Conference Name
Japanese:
English: The 37th AAAI Conference on Artificial Intelligence
Venue
Japanese:
English: Washington, DC
Abstract: Target Propagation (TP) is a biologically more plausible algorithm than error backpropagation (BP) for training deep networks, and improving the practicality of TP is an open issue. TP methods require the feedforward and feedback networks to form layer-wise autoencoders for propagating the target values generated at the output layer. However, this requirement causes certain drawbacks; e.g., careful hyperparameter tuning is required to synchronize the feedforward and feedback training, and the feedback path usually needs to be updated more frequently than the feedforward path. Learning both the feedforward and feedback networks is sufficient to make TP methods work, but is having these layer-wise autoencoders a necessary condition for TP to work? We answer this question by presenting Fixed-Weight Difference Target Propagation (FW-DTP), which keeps the feedback weights constant during training. We confirm that this simple method, which naturally resolves the abovementioned problems of TP, can still deliver informative target values to hidden layers for a given task; indeed, FW-DTP consistently achieves higher test performance than its baseline, Difference Target Propagation (DTP), on four classification datasets. We also present a novel propagation architecture that explains the exact form of the feedback function of DTP, which we use to analyze FW-DTP.
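The abstract describes FW-DTP only at a high level. The following is a minimal NumPy sketch of difference target propagation with a fixed (untrained) feedback path, based on our reading of the abstract; the toy layer sizes, tanh nonlinearities, learning rates, and squared-error local losses are illustrative assumptions, not the paper's actual configuration.

import numpy as np

rng = np.random.default_rng(0)

# Feedforward weights W are trained; feedback weights Q stay constant
# throughout training (the "fixed-weight" part). Architecture is a toy choice.
sizes = [4, 16, 16, 3]
W = [rng.normal(0.0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
Q = [rng.normal(0.0, 0.5, (n, m)) for n, m in zip(sizes[:-1], sizes[1:])]

def f(l, h):
    # Feedforward function of layer l.
    return np.tanh(W[l] @ h)

def g(l, h):
    # Feedback function of layer l; its weights Q[l] are never updated.
    return np.tanh(Q[l] @ h)

def train_step(x, y, lr_f=0.05, lr_t=0.1):
    # Forward pass, caching activations h_0 .. h_L.
    hs = [x]
    for l in range(len(W)):
        hs.append(f(l, hs[-1]))

    # Output target: nudge the output toward lower squared-error loss.
    t = hs[-1] - lr_t * (hs[-1] - y)

    # Propagate targets downward with the DTP difference correction,
    # training each feedforward layer toward its local target.
    for l in reversed(range(len(W))):
        pre = W[l] @ hs[l]
        # Gradient of the local loss ||f_l(h_{l-1}) - t_l||^2 w.r.t. W[l].
        e = (np.tanh(pre) - t) * (1.0 - np.tanh(pre) ** 2)
        grad = np.outer(e, hs[l])
        if l > 0:
            # Difference correction: t_{l-1} = g_l(t_l) + (h_{l-1} - g_l(h_l)).
            t = g(l, t) + (hs[l] - g(l, hs[l + 1]))
        W[l] -= lr_f * grad

# Toy usage: one training step on random data.
x = rng.normal(size=sizes[0])
y = rng.normal(size=sizes[-1])
train_step(x, y)

Because Q is never updated, there is no separate feedback-training phase to synchronize with the feedforward updates, which is the practical simplification the abstract highlights.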
