Ensuring the quality and quantity of labeled training data has long been a challenge in training deep neural networks for discriminative tasks. One solution to this problem is to use a generative model to augment the training data and learn a discriminative model with it. For image classification, recent advances in diffusion models have made it possible to generate a wide variety of synthetic images, and there are high expectations for their use as training data. However, obtaining high-quality labeled synthetic images often requires manual tuning of hyperparameters and prompts, and the accuracy of the trained image classification model depends heavily on them. To address this issue, this paper proposes diffusion-based generative regularization, a supervised discriminative learning framework that uses a diffusion-based image generation model as a regularizer to robustly learn discriminative representations without the need to synthesize images. Our experiments with vision transformers and stable diffusion models on ImageNet-1k demonstrate that the proposed framework improves classification accuracy on both in-distribution and distribution-shifted data.