Publication Information


Title
Japanese:[ショートペーパー]変分自己符号化器との統合によるFew-Shot継続学習 
English:[Short Paper] Few-Shot Incremental Learning by Unifying with Variational Autoencoder 
Author
Japanese: 髙山 啓太, 宇都 有昭, 篠田 浩一.  
English: Keita Takayama, Kuniaki Uto, Koichi Shinoda.  
Language Japanese 
Journal/Book name
Japanese:信学技報 
English:IEICE Tech. Rep. 
Volume, Number, Page vol. 120, no. 300, pp. 58-62
Published date Dec. 2020 
Publisher
Japanese:電子情報通信学会 
English: The Institute of Electronics, Information and Communication Engineers (IEICE) 
Conference name
Japanese:パターン認識・メディア理解研究会 (PRMU) 
English: Technical Committee on Pattern Recognition and Media Understanding (PRMU) 
Conference site
Japanese: 
English: 
Official URL https://www.ieice.org/ken/paper/20201217HCbb/
 
Abstract We propose a few-shot incremental learning method for deep learning that uses a variational autoencoder. In incremental learning, new classes are given in sequence, and the data of previously given classes are not available when training the classifier. Recently, a method called Generative Replay has achieved high performance by training on samples generated by a variational autoencoder (VAE), but its performance degrades when the amount of data for a new class is small. Our proposed method, Few-Shot Generative Replay, solves this problem by learning a VAE and a classifier simultaneously, where the latent variables of the VAE are used as the input features of the classifier. By sharing the variance of the latent-variable distributions across classes, the resulting model is robust against data insufficiency. We limited the number of samples to 10 per class from the 2nd task of SplitMNIST and evaluated our method on it. Its accuracy was 64.8%, which was 7.9 points higher than the 56.9% of Generative Replay.
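The mechanism described in the abstract, per-class latent means combined with a single variance shared across classes, and replay by sampling from the stored latent Gaussians, can be sketched as follows. This is not the authors' implementation: the class name, the pooling rule, and the nearest-mean classifier are illustrative assumptions standing in for the full jointly trained VAE and classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged sketch of the key idea in Few-Shot Generative Replay:
# each class stores a latent mean, all classes SHARE one variance,
# so a new few-shot class borrows the variance learned from old data.
class FewShotGenerativeReplaySketch:
    def __init__(self, latent_dim):
        self.latent_dim = latent_dim
        self.means = {}                        # label -> latent mean
        self.shared_var = np.ones(latent_dim)  # variance shared by all classes

    def add_class(self, label, latents):
        """Register a (possibly few-shot) class from its latent codes."""
        self.means[label] = latents.mean(axis=0)
        if len(latents) >= 20:
            # Enough data: pool this class's variance into the shared
            # estimate. A few-shot class skips this and reuses it as-is.
            var = latents.var(axis=0, ddof=1)
            n = len(self.means)
            self.shared_var = ((n - 1) * self.shared_var + var) / n

    def replay(self, label, n_samples):
        """Generate pseudo-samples of an old class from its Gaussian."""
        return rng.normal(self.means[label],
                          np.sqrt(self.shared_var),
                          size=(n_samples, self.latent_dim))

    def classify(self, z):
        """Nearest mean under the shared diagonal variance (Mahalanobis)."""
        def dist(mu):
            return np.sum((z - mu) ** 2 / self.shared_var)
        return min(self.means, key=lambda lbl: dist(self.means[lbl]))

# Usage: two old classes with ample data, one new class with only
# 10 samples, mimicking the few-shot setting from the 2nd task on.
model = FewShotGenerativeReplaySketch(latent_dim=4)
model.add_class(0, rng.normal(0.0, 1.0, size=(200, 4)))
model.add_class(1, rng.normal(5.0, 1.0, size=(200, 4)))
model.add_class(2, rng.normal(-5.0, 1.0, size=(10, 4)))

replayed = model.replay(0, 50)           # pseudo-data for rehearsal
print(model.classify(np.full(4, -5.0)))  # -> 2 (few-shot class)
```

Because the variance is shared rather than estimated per class, the 10-sample class gets a well-conditioned distribution for both replay and classification, which is the robustness-to-data-insufficiency argument made in the abstract.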

©2007 Institute of Science Tokyo All rights reserved.