Convergence of denoising diffusion models

Series
Applied and Computational Mathematics Seminar
Time
Monday, August 29, 2022 - 2:00pm for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Valentin DE BORTOLI – CNRS and ENS Ulm – valentin.debortoli@gmail.com
Organizer
Molei Tao
Generative modeling is the task of drawing new samples from an underlying distribution known only via an empirical measure. A myriad of models exist to tackle this problem, with applications in image and speech processing, medical imaging, forecasting, and protein modeling, to name a few. Among these methods, score-based generative models (or diffusion models) are a powerful new class of generative models that exhibit remarkable empirical performance. They consist of a ``noising'' stage, whereby a diffusion is used to gradually add Gaussian noise to the data, and a generative model, which entails a ``denoising'' process defined by approximating the time-reversal of the diffusion.
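
For concreteness, a minimal sketch of the continuous-time formulation commonly used for such models (the specific Ornstein-Uhlenbeck noising process below is an illustrative assumption, not quoted from the talk): the noising stage runs the forward diffusion
\[ \mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t, \qquad X_0 \sim p_{\mathrm{data}}, \]
whose marginal densities \(p_t\) converge to a standard Gaussian as \(t\) grows. Its time-reversal satisfies
\[ \mathrm{d}Y_t = \bigl(Y_t + 2\,\nabla \log p_{T-t}(Y_t)\bigr)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t, \qquad Y_0 \sim p_T, \]
and the generative (denoising) process replaces the unknown score \(\nabla \log p_t\) with a learned approximation \(s_\theta(t,\cdot)\) and initializes from a Gaussian in place of \(p_T\).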

In this talk I will present some of their theoretical guarantees, with an emphasis on their behavior under the so-called manifold hypothesis. These guarantees are non-vacuous and provide insight into the empirical behavior of these models. I will show how these results imply generalization bounds on denoising diffusion models. This presentation is based on https://arxiv.org/abs/2208.05314