Cascaded Diffusion Models for High Fidelity Image Generation

Jonathan Ho*, Chitwan Saharia*, William Chan,
David J. Fleet, Mohammad Norouzi, Tim Salimans

Google Research
(* denotes equal contribution)
Samples from denoising diffusion probabilistic models trained on the CelebA-HQ, LSUN Bedrooms, LSUN Church, and LSUN Cat datasets at 256x256 resolution.
Selected generated images from our 256x256 class-conditional ImageNet model.


  • Cascaded Diffusion Models (CDM) are pipelines of diffusion models that generate images of increasing resolution.
  • CDMs yield high fidelity samples superior to BigGAN-deep and VQ-VAE-2 in terms of both FID score and classification accuracy score on class-conditional ImageNet generation.
  • These results are achieved with pure generative models without any classifier.
  • We introduce conditioning augmentation, a data augmentation technique applied to the low-resolution conditioning inputs of the super-resolution models, which we find critical to achieving high sample fidelity.

A cascaded diffusion model comprising a base model and two super-resolution models.


We show that cascaded diffusion models are capable of generating high fidelity images on the class-conditional ImageNet generation challenge, without any assistance from auxiliary image classifiers to boost sample quality. A cascaded diffusion model comprises a pipeline of multiple diffusion models that generate images of increasing resolution, beginning with a standard diffusion model at the lowest resolution, followed by one or more super-resolution diffusion models that successively upsample the image and add higher resolution details. We find that the sample quality of a cascading pipeline relies crucially on conditioning augmentation, our proposed method of data augmentation of the lower resolution conditioning inputs to the super-resolution models. Our experiments show that conditioning augmentation prevents compounding error during sampling in a cascaded model, helping us to train cascading pipelines achieving FID scores of 1.48 at 64x64, 3.52 at 128x128 and 4.88 at 256x256 resolutions, outperforming BigGAN-deep, and classification accuracy scores of 63.02% (top-1) and 84.06% (top-5) at 256x256, outperforming VQ-VAE-2.
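The sampling pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `base_sample` and `sr_sample` functions below are stand-ins for full diffusion samplers, `upsample` uses simple nearest-neighbor interpolation, and the Gaussian form of `condition_augment` and all function names are assumptions made for the sketch.

```python
import numpy as np

def base_sample(class_label, shape=(64, 64, 3)):
    # Stand-in for the base class-conditional diffusion model's sampler.
    # A real model would run the full reverse diffusion chain.
    rng = np.random.default_rng(class_label)
    return rng.standard_normal(shape)

def upsample(x, factor):
    # Nearest-neighbor upsampling of the low-resolution conditioning image.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def condition_augment(z_lowres, noise_level=0.1, seed=0):
    # Conditioning augmentation (Gaussian form, assumed for this sketch):
    # corrupt the low-res conditioning input before the super-resolution
    # stage sees it, so the stage is robust to imperfect lower-stage samples.
    rng = np.random.default_rng(seed)
    return z_lowres + noise_level * rng.standard_normal(z_lowres.shape)

def sr_sample(z_cond, out_hw, seed=0):
    # Stand-in for a super-resolution diffusion model conditioned on z_cond.
    up = upsample(z_cond, out_hw // z_cond.shape[0])
    rng = np.random.default_rng(seed)
    return up + 0.05 * rng.standard_normal(up.shape)

def cascade_sample(class_label):
    # Base model at the lowest resolution, then two super-resolution stages,
    # each conditioned on an augmented version of the previous stage's output.
    x64 = base_sample(class_label)
    x128 = sr_sample(condition_augment(x64), 128)
    x256 = sr_sample(condition_augment(x128), 256)
    return x64, x128, x256
```

The key design point is that `condition_augment` is applied to the conditioning input at both training and sampling time; without it, small errors from the base model compound as they pass through each super-resolution stage.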


Below are example generated images at the 256x256 resolution.


On FID score, our models outperform BigGAN-deep and ADM without classifier guidance. On classification accuracy score, we outperform VQ-VAE-2 by a large margin.

Related Work

Concurrently, Dhariwal and Nichol showed that their diffusion models, named ADM, also outperform GANs on ImageNet generation. ADM achieves this result using classifier guidance, which boosts sample quality by modifying the diffusion sampling procedure to simultaneously maximize the score of an extra image classifier. As measured by FID score, ADM with classifier guidance outperforms our reported results, but our reported results outperform ADM without classifier guidance.
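For context, classifier guidance (ADM's technique, not used in our models) can be sketched as a modification of one reverse diffusion step: the model's predicted mean is shifted along the gradient of a classifier's log-probability for the target class. The toy quadratic "classifier" below, and all names in this snippet, are illustrative assumptions.

```python
import numpy as np

def classifier_logprob_grad(x, y_center):
    # Toy classifier: log p(y|x) is proportional to -||x - y_center||^2 / 2,
    # so its gradient with respect to x is simply (y_center - x).
    # A real guided sampler would backpropagate through a trained classifier.
    return y_center - x

def guided_mean(mu, sigma, x, y_center, guidance_scale=1.0):
    # One classifier-guided reverse step: shift the diffusion model's
    # predicted mean mu by the (scaled) classifier gradient, pushing the
    # sample toward regions the classifier assigns to the target class.
    return mu + guidance_scale * sigma**2 * classifier_logprob_grad(x, y_center)
```

Larger `guidance_scale` values push samples harder toward the classifier's notion of the class, trading diversity for fidelity.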

Our work is a demonstration of the effectiveness of pure generative models, namely cascaded diffusion models without the assistance of extra image classifiers. Nonetheless, classifier guidance and cascading are complementary techniques for improving sample quality, and a detailed investigation of how they interact is warranted.

Paper and Citation

Details can be found in our full paper here.

@article{ho2021cascaded,
  title={Cascaded Diffusion Models for High Fidelity Image Generation},
  author={Ho, Jonathan and Saharia, Chitwan and Chan, William and Fleet, David J and Norouzi, Mohammad and Salimans, Tim},
  journal={arXiv preprint arXiv:2106.15282},
  year={2021}
}