Deep generative models parametrize very flexible families of distributions able to fit complicated datasets of images or text. These models provide independent samples from complex high-dimensional distributions at negligible cost. On the other hand, sampling exactly from a target distribution, such as a Bayesian posterior, is typically challenging: because of dimensionality, multi-modality, ill-conditioning, or a combination of these. In this talk, I will review recent works that aim to enhance traditional inference and sampling algorithms with learning. In particular, I will present flowMC, an adaptive MCMC sampler using normalizing flows, along with first applications and remaining challenges.
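The core idea behind flow-assisted MCMC can be illustrated with a toy sketch: alternate local random-walk Metropolis steps with global independence-Metropolis steps drawn from a proposal that is periodically refit to the chain. This is a minimal illustration only, not flowMC's actual API; the target, the step sizes, and the use of a fitted Gaussian as a stand-in for a trained normalizing flow are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal target: log-density of a mixture of two 1-D Gaussians,
# a stand-in for a multi-modal Bayesian posterior.
def log_target(x):
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

# Log-density of the adaptive global proposal (Gaussian here;
# flowMC trains a normalizing flow in this role).
def log_proposal(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

x = 0.0
chain = []
mu, sigma = 0.0, 5.0  # initial parameters of the global proposal

for i in range(20000):
    if i % 2 == 0:
        # Local step: symmetric random-walk Metropolis.
        prop = x + 0.5 * rng.normal()
        log_alpha = log_target(prop) - log_target(x)
    else:
        # Global step: independence Metropolis with the fitted proposal,
        # which lets the chain jump between distant modes.
        prop = mu + sigma * rng.normal()
        log_alpha = (log_target(prop) - log_target(x)
                     + log_proposal(x, mu, sigma)
                     - log_proposal(prop, mu, sigma))
    if np.log(rng.random()) < log_alpha:
        x = prop
    chain.append(x)
    # Periodically refit the global proposal to the samples so far:
    # this is the "learning" step where a flow would be trained.
    if i % 1000 == 999:
        samples = np.array(chain)
        mu, sigma = samples.mean(), samples.std() + 1e-3

chain = np.array(chain)
print(chain.mean(), (chain > 0).mean())
```

A plain random walk started at one mode rarely crosses to the other; the adaptive global proposal is what restores mixing between modes, which is the mechanism the talk's combination of MCMC and learned generative models exploits.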
Speaker's institution (or team for internal seminars)
Ecole Polytechnique, CMAP
Opportunities and Challenges in Enhancing Sampling with Learning
Meeting room 142, building 210
Date of the talk