We propose a few-shot incremental learning method based on a variational autoencoder (VAE). In incremental learning, new classes arrive sequentially, and the data of previously seen classes are unavailable when training the classifier. Recently, a method called Generative Replay has achieved high performance by training on samples generated by a VAE, but its performance degrades when only a small amount of data is available for a new class. Our proposed method, Few-Shot Generative Replay, solves this problem by learning a VAE and a classifier simultaneously, using the latent variables of the VAE as input features for the classifier. By sharing the variance of the latent-variable distributions across classes, the resulting model is robust to data insufficiency. On SplitMNIST with the number of samples per class limited to 10 from the second task onward, our method achieved an accuracy of 64.8%, which is 7.9 points higher than the 56.9% of Generative Replay.
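The key idea of sharing one variance across class-conditional latent distributions can be illustrated with a toy sketch. The snippet below is not the paper's implementation; it assumes hypothetical pre-computed 2-D latent features (`z0`, `z1`) standing in for VAE encoder outputs, estimates a per-class mean with a single variance pooled over all classes, and classifies by the resulting shared-variance Gaussian log-density:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent features: 2-D latents for two classes, 10 samples each
# (stand-ins for VAE encoder outputs; not real SplitMNIST data)
z0 = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(10, 2))
z1 = rng.normal(loc=[2.0, 0.0], scale=1.0, size=(10, 2))

def fit_shared_variance(latents_per_class):
    """Per-class means, plus one diagonal variance pooled across all classes."""
    means = [z.mean(axis=0) for z in latents_per_class]
    centered = np.concatenate([z - m for z, m in zip(latents_per_class, means)])
    shared_var = centered.var(axis=0)  # pooled: stable even with few samples per class
    return np.stack(means), shared_var

def classify(z, means, shared_var):
    """Pick the class with the highest Gaussian log-density under the shared variance."""
    log_p = -0.5 * (((z - means) ** 2) / shared_var).sum(axis=1)
    return int(np.argmax(log_p))

means, var = fit_shared_variance([z0, z1])
print(classify(np.array([-1.5, 0.2]), means, var))  # a point near class 0's mean
```

Because every class contributes to the single pooled variance, a new class with only a handful of samples still gets a reliable spread estimate; only its mean must be estimated from the few new samples.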