Imaging in Paris Seminar


Parisian Seminar on the Mathematics of Imaging

Welcome to the website of the Parisian Seminar on the Mathematics of Imaging!

The goal of this seminar is to cover the mathematics of imaging in a very broad sense (including, for instance, signal processing, image processing, computer graphics, computer vision, various applications, and connections with statistics and machine learning). It is open to everyone. It takes place at the Institut Henri Poincaré on the first Tuesday of each month, from 2pm to 4pm. Each seminar consists of two presentations.

You can subscribe to or unsubscribe from the seminar's mailing list and agenda.

Upcoming seminars

Click on the title to read the abstract.

Andrés Almansa (MAP5, Université Paris Cité)
December 3rd, 2pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title: Posterior sampling in imaging with learnt priors: from Langevin to diffusion models
Abstract: In this talk we explore some recent techniques to perform posterior sampling for ill-posed inverse problems in imaging when the likelihood is known explicitly, and the prior is only known implicitly via a denoising neural network that has been pretrained on a large collection of images. We show how to extend the Unadjusted Langevin Algorithm (ULA) to this particular setting, leading to Plug & Play ULA (PnP-ULA). We explore the convergence properties of PnP-ULA and the crucial role of the stepsize, including its relationship with the smoothness of the prior and the likelihood. To relax stringent constraints on the stepsize, annealed Langevin algorithms have been proposed, which are tightly related to generative denoising diffusion probabilistic models (DDPM). The image prior that is implicit in these generative models can be adapted to perform posterior sampling through a clever use of Gaussian approximations, with varying degrees of accuracy, as in Diffusion Posterior Sampling (DPS) and Pseudo-Inverse Guided Diffusion Models (PiGDM). We conclude with an application to blind deblurring, where DPS and PiGDM are used in combination with an Expectation Maximization algorithm to jointly estimate the unknown blur kernel and sample sharp images from the posterior.
Collaborators (in alphabetical order): Guillermo Carbajal, Eva Coupeté, Valentin De Bortoli, Julie Delon, Alain Durmus, Ulugbek Kamilov, Charles Laroche, Rémy Laumont, Jiaming Liu, Pablo Musé, Marcelo Pereyra, Marien Renaud, Matias Tassano.
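The PnP-ULA scheme described in the abstract can be sketched in a few lines. This is a minimal illustration, not the speaker's implementation: the `denoiser(x, eps)` argument is a hypothetical stand-in for the pretrained denoising network, and the score of the prior is recovered from it via Tweedie's identity.

```python
import numpy as np

def pnp_ula(y, A, sigma, denoiser, eps, delta, n_iter=1000, rng=None):
    """Plug & Play Unadjusted Langevin sketch.

    Samples from the posterior of a Gaussian model y = A x + noise(sigma),
    with the prior known only through a denoiser at noise level eps.
    Tweedie's identity gives: score of prior ~ (denoiser(x, eps) - x) / eps**2.
    """
    rng = np.random.default_rng(rng)
    x = np.zeros(A.shape[1])
    samples = []
    for _ in range(n_iter):
        grad_lik = A.T @ (A @ x - y) / sigma**2          # -grad log p(y|x)
        score_prior = (denoiser(x, eps) - x) / eps**2    # ~ grad log p(x)
        x = x - delta * (grad_lik - score_prior) \
            + np.sqrt(2 * delta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)
```

The abstract's point about the stepsize is visible here: `delta` must be small relative to the Lipschitz constants of both gradient terms, which is precisely the constraint that annealed variants aim to relax.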

Stanislas Strasman (LPSM, Sorbonne Université)
December 3rd, 3pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title: An analysis of the noise schedule for score-based generative models
Abstract: Score-based generative models (SGMs) aim at estimating a target data distribution by learning score functions using only noise-perturbed samples from the target. Recent literature has focused extensively on assessing the error between the target and estimated distributions, gauging the generative quality through the Kullback-Leibler (KL) divergence and Wasserstein distances. Under mild assumptions on the data distribution, we establish an upper bound for the KL divergence between the target and the estimated distributions, explicitly depending on any time-dependent noise schedule. Under additional regularity assumptions, taking advantage of favorable underlying contraction mechanisms, we provide a tighter error bound in Wasserstein distance compared to state-of-the-art results. In addition to being tractable, this upper bound jointly incorporates properties of the target distribution and SGM hyperparameters that need to be tuned during training.
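As a rough illustration of where a time-dependent noise schedule enters an SGM, here is a minimal sketch (not from the talk) of the variance-preserving forward perturbation for an arbitrary schedule `beta(t)`; the linear schedule in the comment is a common DDPM-style choice, used purely as an example.

```python
import numpy as np

def vp_perturb(x0, t, beta, rng=None):
    """Variance-preserving forward noising at time t in [0, 1].

    alpha_bar(t) = exp(-integral_0^t beta(s) ds) is computed numerically,
    so any time-dependent schedule `beta` can be plugged in.
    """
    rng = np.random.default_rng(rng)
    s = np.linspace(0.0, t, 1001)
    mid = 0.5 * (s[1:] + s[:-1])
    integral = np.sum(beta(mid)) * (s[1] - s[0])   # midpoint rule
    alpha_bar = np.exp(-integral)
    noise = rng.standard_normal(np.shape(x0))
    xt = np.sqrt(alpha_bar) * np.asarray(x0) + np.sqrt(1 - alpha_bar) * noise
    return xt, alpha_bar

# e.g. a linear schedule beta(t) = 0.1 + 19.9 * t, a common DDPM-style choice
```

The bounds discussed in the abstract quantify how the choice of `beta` (hence of `alpha_bar`) propagates into the KL and Wasserstein error of the generated distribution.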

Thomas Moreau (INRIA Saclay)
January 7th, 2pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Émile Pierret (IDP, Université d'Orléans)
January 7th, 3pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Pascal Monasse (IMAGINE, École Nationale des Ponts et Chaussées)
February 4th, 2pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Flavien Léger (INRIA, CEREMADE, Université Paris Dauphine)
February 4th, 3pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Matthieu Serfaty (Centre Borelli, ENS Paris-Saclay)
March 4th, 2pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Yanhao Li (Centre Borelli, ENS Paris-Saclay)
March 4th, 3pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Clément Rambour (ISIR, Sorbonne Université)
April 1st, 2pm, room Amphi Hermite (Bat Borel).
Title:
Abstract:

TBA (TBA)
May 6th, 2pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Gabriel Peyré (DMA, École Normale Supérieure)
June 3rd, 2pm, room Amphi Yvonne Choquet-Bruhat (Bat Perrin).
Title:
Abstract:

Previous seminars of 2024-2025

The list of seminars prior to summer 2024 is available here.

Samuel Vaiter (CNRS, LJAD Université Côte d'Azur)
November 5th 2024, 2pm, room Maryam Mirzakhani (Bat Borel, 2nd floor).
Title: Successes and pitfalls of bilevel optimization in machine learning
Abstract: In this talk, I will introduce bilevel optimization (BO) as a powerful framework to address several machine learning-related problems, including hyperparameter tuning, meta-learning, and data cleaning. Based on this formulation, I will describe some successes of BO, particularly in a strongly convex setting, where strong guarantees can be provided along with efficient stochastic algorithms. I will also discuss the outstanding issues of this framework, presenting geometrical and computational complexity results that show the potential difficulties in going beyond convexity, at least from a theoretical perspective.
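In the strongly convex setting mentioned in the abstract, the hypergradient of the outer objective has a closed form via the implicit function theorem. The following is a minimal, hypothetical sketch (not from the talk) for ridge-regression hyperparameter tuning, where the inner problem is solved exactly.

```python
import numpy as np

def hypergradient_ridge(lam, Xtr, ytr, Xval, yval):
    """Hypergradient of the validation loss w.r.t. the ridge penalty lam.

    Inner (strongly convex): w*(lam) = argmin_w ||Xtr w - ytr||^2/2 + lam ||w||^2/2
    Outer: F(lam) = ||Xval w*(lam) - yval||^2 / 2
    Implicit function theorem: dw*/dlam = -H^{-1} w*, with H = Xtr^T Xtr + lam I.
    """
    d = Xtr.shape[1]
    H = Xtr.T @ Xtr + lam * np.eye(d)
    w = np.linalg.solve(H, Xtr.T @ ytr)            # inner solution, closed form
    outer_grad_w = Xval.T @ (Xval @ w - yval)      # grad of F w.r.t. w
    dw_dlam = -np.linalg.solve(H, w)               # implicit differentiation
    return outer_grad_w @ dw_dlam
```

Beyond this toy case, the inner solve and the linear system are only approximated, which is where the stochastic algorithms and guarantees discussed in the talk come in.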

Anna Starynska (Rochester Institute of Technology, invited by the AISSAI Center)
November 5th 2024, 3pm, room Maryam Mirzakhani (Bat Borel, 2nd floor).
Title: Supervised erased ink detection in damaged palimpsested manuscripts
Abstract: Transcribing a historical manuscript is a tedious task, especially for palimpsests, where the sought text was erased and overwritten with another text. Recent advances in deep learning text recognition models, especially in multimodal large language models, have raised hopes that this process can eventually be automated. Two issues, however, have prevented this progress so far. The first is the absence of sufficient ground-truth data: the historical text transcription platform Transkribus estimates that approximately 20-30 transcribed pages are required to train a model, which is already a very difficult task for historians. We assume this estimate was meant for undamaged manuscripts, since it comes with remarks about enlarging the dataset when there is more variation. The second is the extreme damage to the text, which forces us to image it in more complex modalities than a simple scan. Instead of capturing a plain image of the text, efforts have shifted toward capturing the chemical composition of the materials, and multispectral imaging (MSI) has become one of the most popular systems for this purpose. While MSI does not capture the chemical composition directly, it reveals differences in the spectral response of materials. Until recently, however, MSI palimpsest imaging systems lacked data standardization procedures, which introduced perturbations unrelated to the material composition and prevented the use of text transcription models on raw data. More and more attempts are now being made to standardize multispectral imaging. This will allow us not only to build substantial data collections but also to unleash the potential of multispectral imaging. Our goal in this work is to test the capacity of neural networks to detect traces of the undertext.

Marien Renaud (Institut de Mathématiques de Bordeaux)
October 1st 2024, 2pm, room Maryam Mirzakhani (Bat Borel, 2nd floor).
Title: Plug-and-Play image restoration with Stochastic deNOising REgularization
Abstract: Plug-and-Play (PnP) algorithms are a class of iterative algorithms that address image inverse problems by combining a physical model and a deep neural network for regularization. Even though they produce impressive image restoration results, these algorithms rely on a non-standard use of a denoiser, applied to images that are less and less noisy along the iterations; this contrasts with recent algorithms based on diffusion models, where the denoiser is applied only to re-noised images. We will introduce a new PnP framework, called Stochastic deNOising REgularization (SNORE), which applies the denoiser only to images with noise of the adequate level. It is based on an explicit stochastic regularization, which leads to a stochastic gradient descent algorithm for solving ill-posed inverse problems. A convergence analysis of this algorithm and of its annealing extension will be presented. Experimental results, competitive with state-of-the-art methods, will be shown on deblurring and inpainting tasks.
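The key idea, applying the denoiser only to re-noised iterates, can be sketched as follows. This is an illustrative approximation of one SNORE-style iteration, not the authors' code: the `denoiser` argument and the exact form of the stochastic regularization gradient are assumptions for the sake of the sketch.

```python
import numpy as np

def snore_step(x, grad_data, denoiser, sigma, lam, delta, rng):
    """One SNORE-style stochastic gradient step (sketch).

    The denoiser is applied to a re-noised copy of the iterate, never to
    the clean iterate itself; (x_noisy - D(x_noisy)) / sigma**2 serves as a
    stochastic gradient of the regularizer.
    """
    z = rng.standard_normal(x.shape)
    x_noisy = x + sigma * z                                  # re-noise at level sigma
    reg_grad = (x_noisy - denoiser(x_noisy, sigma)) / sigma**2
    return x - delta * (grad_data(x) + lam * reg_grad)
```

Annealing, as discussed in the abstract, would correspond to decreasing `sigma` across iterations.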

Organizers

Thanks

The seminar is hosted by IHP, and supported by the RT-MAIAGES and Télécom Paris.