Abstract

Denoising diffusion models are a popular class of generative models that provide state-of-the-art results in a variety of domains such as image and speech synthesis. Noise is gradually added to the data using a diffusion process so as to transform the data distribution into a Gaussian distribution. Samples from the generative model are then obtained by simulating an approximation of the time-reversal of this diffusion, initialized with Gaussian samples. In practice, the intractable score terms appearing in the time-reversed process are approximated using score matching techniques. Here, we explore a similar idea for sampling approximately from unnormalized probability density functions and estimating their normalizing constants. We consider a process in which the target density diffuses towards a Gaussian. Denoising Diffusion Samplers (DDS) are obtained by approximating the corresponding time-reversal. Score matching is not applicable in this context, so an alternative variational inference approach is used instead. Nevertheless, many of the ideas introduced in generative modeling can be leveraged for this Monte Carlo sampling task, and existing theoretical results for denoising diffusion models can similarly be adapted to provide theoretical guarantees for DDS. We discuss the connections between DDS, optimal control and Schrödinger bridges, and finally demonstrate DDS experimentally on a variety of sampling tasks.
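
For concreteness, the following is a minimal sketch of the standard diffusion construction the abstract alludes to; the notation is illustrative and not taken from the paper itself. A forward noising diffusion (here an Ornstein–Uhlenbeck process run for time $T$, started from the target density $\pi$) is

\[
\mathrm{d}X_t = -\tfrac{1}{2}\beta(t)\,X_t\,\mathrm{d}t + \sqrt{\beta(t)}\,\mathrm{d}W_t, \qquad X_0 \sim \pi,
\]

so that the law of $X_T$ is approximately a standard Gaussian for large $T$. Its time-reversal satisfies

\[
\mathrm{d}Y_t = \Big[\tfrac{1}{2}\beta(T-t)\,Y_t + \beta(T-t)\,\nabla \log p_{T-t}(Y_t)\Big]\,\mathrm{d}t + \sqrt{\beta(T-t)}\,\mathrm{d}W_t, \qquad Y_0 \sim \mathcal{N}(0, I),
\]

where $p_s$ denotes the marginal density of $X_s$. In generative modeling, the intractable score $\nabla \log p_s$ is learned by score matching; in the sampling setting considered here it is instead approximated via variational inference, as described in the abstract.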