Imaging in Paris Seminar


Parisian Seminar on the Mathematics of Imaging

Welcome to the website of the Parisian Seminar on the Mathematics of Imaging!

The goal of this seminar is to cover the mathematics of imaging in a wide sense, including for instance signal processing, image processing, computer graphics, computer vision, various applications, and connections with statistics and machine learning. It is open to everyone. It takes place at the Institut Henri Poincaré on the first Tuesday of each month, from 2pm to 4pm. Each seminar is composed of two presentations.

You can subscribe to or unsubscribe from the mailing list of the seminar and the agenda of the seminar.

Upcoming seminars

Click on the title to read the abstract.

Pascal Monasse (IMAGINE, École Nationale des Ponts et Chaussées)
February 4th, 2pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Flavien Léger (INRIA, Cérémade, Université Paris Dauphine)
February 4th, 3pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title: Gradient descent with a general cost
Abstract: In this talk I will present an approach to iteratively minimize a given objective function using minimizing movement schemes built on general cost functions. I will introduce an explicit method, gradient descent with a general cost (GDGC), as well as an implicit, proximal-like scheme and an explicit-implicit (forward-backward) method.
GDGC unifies several standard gradient descent-type methods: gradient descent, mirror descent, Newton’s method, and Riemannian gradient descent. I will explain how the so-called nonnegative cross-curvature condition provides tractable conditions to prove convergence rates for GDGC.
Byproducts of this framework include: (1) a new nonsmooth mirror descent, (2) global convergence rates for Newton’s method, and (3) a clear picture of the type of convexity needed for converging schemes in the Riemannian setting.
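As a toy illustration of the minimizing-movement viewpoint (a sketch, not taken from the talk): with the quadratic cost c(x, y) = (x − y)²/(2η), each step minimizes the linearized objective plus the cost, and the closed-form minimizer is exactly a plain gradient descent step. All numbers below are illustrative.

```python
# Minimizing movement with a quadratic cost recovers plain gradient descent:
#   x_{k+1} = argmin_x  f(x_k) + f'(x_k)*(x - x_k) + (x - x_k)^2 / (2*eta)
# whose closed form is x_{k+1} = x_k - eta * f'(x_k).

def f(x):            # hypothetical 1D objective
    return (x - 3.0) ** 2

def grad_f(x):
    return 2.0 * (x - 3.0)

def minimizing_movement_step(x, eta):
    # closed-form argmin of the quadratic surrogate above
    return x - eta * grad_f(x)

x = 0.0
for _ in range(100):
    x = minimizing_movement_step(x, eta=0.1)

print(x)  # converges to the minimizer 3.0
```

Replacing the quadratic cost by a Bregman divergence would give mirror descent, and by a squared geodesic distance a Riemannian step, which is the unification the abstract refers to.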

Matthieu Serfaty (Centre Borelli, ENS Paris-Saclay)
March 4th, 2pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Yanhao Li (Centre Borelli, ENS Paris-Saclay)
March 4th, 3pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Clément Rambour (ISIR, Sorbonne Université)
April 1st, 2pm, room Amphi Hermite (Bat Borel).
Title:
Abstract:

TBA (TBA)
May 6th, 2pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Gabriel Peyré (DMA, École Normale Supérieure)
June 3rd, 2pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Previous seminars of 2024-2025

The list of seminars prior to summer 2024 is available here.

Thomas Moreau (INRIA Saclay)
January 7th, 2pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title: Unrolling algorithms for inverse problems: the critical role of warm starts in bilevel optimization
Abstract: Algorithm unrolling, a method that parameterizes classical optimization algorithms as differentiable procedures, has emerged as a powerful tool for solving inverse problems. These unrolled methods allow for the learning of problem-specific parameters, often leading to improved performance in the early iterations of optimization. In this talk, I explore the links between algorithm unrolling and bilevel optimization. First, I will discuss results that highlight the asymptotic limitations of unrolled algorithms. These findings emphasize the advantages of using unrolling with a limited number of iterations. I will then discuss some of my recent work on combining unrolled algorithms with dictionary learning to capture data-driven structures in inverse problem solutions. These results highlight the non-robustness of the gradient estimation obtained with unrolling. A possible way to limit this drawback is to rely on warm starting, which is known to be critical for deriving convergent bilevel optimization algorithms. This offers new insights into designing efficient and robust plug-and-play algorithms based on unrolled denoisers for solving challenging inverse problems.
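To fix ideas, here is a minimal sketch of unrolling (not the speaker's code): a fixed number K of ISTA iterations applied to a 1D lasso problem whose exact solution is known. In a learned unrolled network, the step size and threshold would become trainable parameters; here they are fixed, illustrative values.

```python
# Unrolling: a fixed number K of ISTA iterations for the 1D lasso
#   min_x 0.5 * (x - b)^2 + lam * |x|
# whose exact solution is the soft-thresholding soft(b, lam).

def soft(v, t):
    # soft-thresholding operator
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def unrolled_ista(b, lam, step=0.5, K=30):
    x = 0.0
    for _ in range(K):   # K unrolled iterations, each one differentiable
        x = soft(x - step * (x - b), step * lam)
    return x

x_hat = unrolled_ista(b=2.0, lam=0.5)
print(x_hat)  # close to the exact solution soft(2.0, 0.5) = 1.5
```

Truncating K corresponds to the "limited number of iterations" regime discussed in the abstract; differentiating the loop with respect to `step` or `lam` gives the (possibly non-robust) unrolled hypergradient.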

Émile Pierret (IDP, Université d'Orléans)
January 7th, 3pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title: Diffusion models for Gaussian distributions: Exact solutions and Wasserstein errors
Abstract: Diffusion or score-based models have recently shown high performance in image generation. They rely on a forward and a backward stochastic differential equation (SDE). The sampling of a data distribution is achieved by numerically solving the backward SDE or its associated flow ODE. Studying the convergence of these models requires controlling four different types of error: the initialization error, the truncation error, the discretization error, and the score approximation error. In this paper, we study theoretically the behavior of diffusion models and their numerical implementation when the data distribution is Gaussian. In this restricted framework, where the score function is a linear operator, we can derive the analytical solutions of the forward and backward SDEs as well as the associated flow ODE. This provides exact expressions for various Wasserstein errors, which enable us to compare the influence of each error type for any sampling scheme, and thus to monitor convergence directly in the data space instead of relying on Inception features. Our experiments show that the recommended numerical schemes from the diffusion-models literature are also the best sampling schemes for Gaussian distributions.
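A small 1D sketch of why the Gaussian case is exactly solvable (illustrative, not taken from the paper): under the variance-preserving forward SDE with constant β, a centered Gaussian stays Gaussian with an explicit variance, and the Wasserstein-2 distance between centered 1D Gaussians is just the gap between standard deviations, so the initialization error is computable in closed form.

```python
import math

# Forward VP-SDE  dX = -0.5*beta*X dt + sqrt(beta) dW  applied to X_0 ~ N(0, s0^2)
# keeps the marginal Gaussian:  Var(X_T) = s0^2 * exp(-beta*T) + 1 - exp(-beta*T).
# For centered 1D Gaussians, W2(N(0, a^2), N(0, b^2)) = |a - b|.

def marginal_std(s0, beta, T):
    e = math.exp(-beta * T)
    return math.sqrt(s0**2 * e + 1.0 - e)

def w2_to_prior(s0, beta, T):
    # exact initialization error: distance to the N(0, 1) sampling prior
    return abs(marginal_std(s0, beta, T) - 1.0)

errs = [w2_to_prior(s0=2.0, beta=1.0, T=T) for T in (1, 2, 5, 10)]
print(errs)  # decreasing in T: longer forward time, smaller initialization error
```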

Andrés Almansa (MAP5, Université Paris Cité)
December 3rd 2024, 2pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title: Posterior sampling in imaging with learnt priors: from Langevin to diffusion models
Abstract: In this talk we explore some recent techniques to perform posterior sampling for ill-posed inverse problems in imaging when the likelihood is known explicitly, and the prior is only known implicitly via a denoising neural network that has been pretrained on a large collection of images. We show how to extend the Unadjusted Langevin Algorithm (ULA) to this particular setting leading to Plug & Play ULA. We explore the convergence properties of PnP-ULA, the crucial role of the stepsize and its relationship with the smoothness of the prior and the likelihood. In order to relax stringent constraints on the stepsize, annealed Langevin algorithms have been proposed, which are tightly related to generative denoising diffusion probabilistic models (DDPM). The image prior that is implicit in these generative models can be adapted to perform posterior sampling, by a clever use of Gaussian approximations, with varying degrees of accuracy, like in Diffusion Posterior Sampling (DPS) and Pseudo-Inverse Guided Diffusion Models (PiGDM). We conclude with an application to blind deblurring, where DPS and PiGDM are used in combination with an Expectation Maximization algorithm to jointly estimate the unknown blur kernel, and sample sharp images from the posterior.
Collaborators (in alphabetical order) Guillermo Carbajal, Eva Coupeté, Valentin De Bortoli, Julie Delon, Alain Durmus, Ulugbek Kamilov, Charles Laroche, Rémy Laumont, Jiaming Liu, Pablo Musé, Marcelo Pereyra, Marien Renaud, Matias Tassano.
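A minimal sketch of the Unadjusted Langevin Algorithm the abstract starts from (toy 1D version, not the speaker's code): each step adds a drift along the score of the target and a Gaussian perturbation scaled by the stepsize. Here the exact score of a N(μ, 1) target stands in for the denoiser-based score a PnP method would plug in; all numbers are illustrative.

```python
import math
import random

# ULA for a target pi = N(mu, 1):
#   x_{k+1} = x_k + delta * score(x_k) + sqrt(2*delta) * xi_k,  xi_k ~ N(0, 1).
# PnP-ULA would replace `score` with a denoiser-based estimate
# (Tweedie's formula: score(x) ~ (D(x) - x) / sigma^2).

random.seed(0)
mu, delta = 2.0, 0.1

def score(x):
    return -(x - mu)   # exact score of N(mu, 1), used as a stand-in

x, samples = 0.0, []
for k in range(60000):
    x = x + delta * score(x) + math.sqrt(2.0 * delta) * random.gauss(0.0, 1.0)
    if k >= 10000:     # discard burn-in iterations
        samples.append(x)

mean = sum(samples) / len(samples)
print(mean)  # approximately mu = 2, up to discretization bias and Monte Carlo error
```

The discretization bias visible here (the chain targets a slightly inflated variance) is exactly the stepsize/smoothness trade-off the abstract discusses.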

Stanislas Strasman (LPSM, Sorbonne Université)
December 3rd 2024, 3pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title: An analysis of the noise schedule for score-based generative models.
Abstract: Score-based generative models (SGMs) aim at estimating a target data distribution by learning score functions using only noise-perturbed samples from the target. Recent literature has focused extensively on assessing the error between the target and estimated distributions, gauging the generative quality through the Kullback-Leibler (KL) divergence and Wasserstein distances. Under mild assumptions on the data distribution, we establish an upper bound for the KL divergence between the target and the estimated distributions, explicitly depending on any time-dependent noise schedule. Under additional regularity assumptions, taking advantage of favorable underlying contraction mechanisms, we provide a tighter error bound in Wasserstein distance compared to state-of-the-art results. In addition to being tractable, this upper bound jointly incorporates properties of the target distribution and SGM hyperparameters that need to be tuned during training.
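As a toy numeric illustration of how the noise schedule enters such bounds (illustrative numbers, not from the paper): for a 1D Gaussian target, the KL divergence between the terminal forward marginal and the N(0, 1) prior is closed form, and a more aggressive constant schedule β leaves a smaller residual gap.

```python
import math

# Closed-form KL(N(m1, s1^2) || N(m2, s2^2)) and the terminal variance of the
# VP forward process with constant beta: Var(X_T) = s0^2*exp(-beta*T) + 1 - exp(-beta*T).

def kl_gauss(m1, s1, m2, s2):
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2.0 * s2**2) - 0.5

def terminal_std(s0, beta, T):
    e = math.exp(-beta * T)
    return math.sqrt(s0**2 * e + 1.0 - e)

kl_slow = kl_gauss(0.0, terminal_std(2.0, beta=0.5, T=1.0), 0.0, 1.0)
kl_fast = kl_gauss(0.0, terminal_std(2.0, beta=2.0, T=1.0), 0.0, 1.0)
print(kl_slow, kl_fast)  # the faster schedule leaves a smaller residual KL at time T
```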

Samuel Vaiter (CNRS, LJAD Université Côte d'Azur)
November 5th 2024, 2pm, room Maryam Mirzakhani (Bat Borel, 2nd floor).
Title: Successes and pitfalls of bilevel optimization in machine learning
Abstract: In this talk, I will introduce bilevel optimization (BO) as a powerful framework to address several machine learning-related problems, including hyperparameter tuning, meta-learning, and data cleaning. Based on this formulation, I will describe some successes of BO, particularly in a strongly convex setting, where strong guarantees can be provided along with efficient stochastic algorithms. I will also discuss the outstanding issues of this framework, presenting geometrical and computational complexity results that show the potential difficulties in going beyond convexity, at least from a theoretical perspective.
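A minimal 1D sketch of the bilevel framework (hypothetical toy problem, not from the talk): a strongly convex inner problem admits a closed-form solution, so the hypergradient of the outer (validation) loss follows from the chain rule, i.e. implicit differentiation, and gradient descent on the hyperparameter converges.

```python
# Toy bilevel problem (hyperparameter-tuning flavor):
#   inner:  theta*(lam) = argmin_theta 0.5*(theta - a)^2 + 0.5*lam*theta^2 = a / (1 + lam)
#   outer:  F(lam) = 0.5*(theta*(lam) - b)^2   (a "validation" loss)

a, b = 2.0, 1.0

def theta_star(lam):
    # closed-form solution of the strongly convex inner problem
    return a / (1.0 + lam)

def hypergrad(lam):
    dtheta = -a / (1.0 + lam) ** 2           # d theta* / d lam (implicit differentiation)
    return (theta_star(lam) - b) * dtheta    # chain rule for dF/dlam

lam = 0.0
for _ in range(2000):
    lam = max(lam - 0.1 * hypergrad(lam), 0.0)   # projected gradient step, lam >= 0

print(lam)  # approaches lam* = a/b - 1 = 1, where theta*(lam*) = b
```

When the inner problem is only approximately solved, this hypergradient becomes biased, which is one source of the difficulties beyond the strongly convex setting mentioned in the abstract.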

Anna Starynska (Rochester Institute of Technology, invited by the AISSAI Center)
November 5th 2024, 3pm, room Maryam Mirzakhani (Bat Borel, 2nd floor).
Title: Supervised erased ink detection in damaged palimpsested manuscripts
Abstract: Transcribing a historical manuscript is a tedious task, especially in the case of palimpsests, where the sought text was erased and overwritten with another text. Recently, advances in deep learning text recognition models, especially multimodal large language models, have raised hopes of automating this process. However, two issues have prevented this progress so far. First is the absence of sufficient ground-truth data: Transkribus, a platform for transcribing historical texts, estimates that approximately 20-30 transcribed pages are required to train a model, which is already a very difficult task for historians. We assume this estimate was meant for an undamaged manuscript, since remarks are made about enlarging the dataset when there is more variation. Second is the extreme damage to the text, which pushes us to image the text in more complex modalities than a simple scan. Thus, instead of capturing a plain image of the text, efforts were made to capture the chemical composition of the materials. One of the most popular systems for this purpose is multispectral imaging (MSI): while it does not capture the chemical composition itself, it reveals differences in the spectra of the materials. Until recently, however, MSI palimpsest imaging systems lacked data standardization procedures, which introduced perturbations unrelated to the material composition and prevented the use of text transcription models on raw data. More and more attempts are now being made to standardize multispectral imaging, which will allow us not only to build substantial data collections but also to unleash the potential of multispectral imaging. Our goal in this work is to test the capacity of neural networks to detect the traces of undertext.

Marien Renaud (Institut de Mathématiques de Bordeaux)
October 1st 2024, 2pm, room Maryam Mirzakhani (Bat Borel, 2nd floor).
Title: Plug-and-Play image restoration with Stochastic deNOising REgularization
Abstract: Plug-and-Play (PnP) algorithms are a class of iterative algorithms that address image inverse problems by combining a physical model and a deep neural network for regularization. Although they produce impressive image restoration results, these algorithms rely on a non-standard use of a denoiser on images that become less and less noisy along the iterations, which contrasts with recent algorithms based on diffusion models, where the denoiser is only applied to re-noised images. We will introduce a new PnP framework, called Stochastic deNOising REgularization (SNORE), which applies the denoiser only to images with noise of the adequate level. It is based on an explicit stochastic regularization, which leads to a stochastic gradient descent algorithm for solving ill-posed inverse problems. A convergence analysis of this algorithm and of its annealing extension will be presented. Experimental results, competitive with state-of-the-art methods, will be shown on deblurring and inpainting tasks.
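A 1D toy sketch of the SNORE-style step (illustrative, not the paper's code): the denoiser is applied to a re-noised point x + σξ, and (x_noisy − D(x_noisy))/σ² serves as a stochastic gradient of the smoothed regularizer. Here D is the exact MMSE denoiser for a N(0, 1) prior, D(y) = y/(1 + σ²), and all numbers are hypothetical.

```python
import random

# One SNORE-flavored stochastic gradient iteration for
#   min_x 0.5*(x - y)^2 + lam * R_sigma(x),
# where the gradient of the smoothed regularizer R_sigma is estimated by
# re-noising x and applying a denoiser D.

random.seed(1)
y, sigma, lam, eta = 2.0, 0.1, 1.0, 0.005

def denoise(v):
    # exact MMSE denoiser for x ~ N(0, 1) observed with N(0, sigma^2) noise
    return v / (1.0 + sigma**2)

x = y
for _ in range(20000):
    x_noisy = x + sigma * random.gauss(0.0, 1.0)            # re-noise the iterate
    reg_grad = (x_noisy - denoise(x_noisy)) / sigma**2      # stochastic prior gradient
    data_grad = x - y                                       # gradient of the data term
    x = x - eta * (data_grad + lam * reg_grad)

print(x)  # near the smoothed solution y / (1 + lam/(1 + sigma^2)), about 1.005 here
```

Decreasing σ along the iterations would give the annealed variant mentioned in the abstract.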

Organizers

Thanks

The seminar is hosted by IHP, and supported by RT-MAIAGES, Télécom Paris and CMM Mines Paris - PSL.