The Young Researchers in Imaging Seminars offer PhD students and postdocs working on topics related to the thematic program the opportunity to present their work.
Starting from Wednesday 20 February, we organise a two-hour session every Wednesday (except March 13th) from 14:00 to 16:00 in the Darboux amphitheatre, dedicated to young researchers and meant to give them a chance to stimulate scientific discussion. Each seminar is followed by coffee (with snacks) on the 2nd floor.
Image reconstruction consists in recovering an image from an indirect observation (for instance, its Radon transform). In general, this observation does not determine a unique image, and some prior (e.g. image regularity) needs to be incorporated into the reconstruction framework. I will present how intuitive priors on the geometric variation of the image from a reference one can be incorporated using the framework of deformation modules. Deformation modules make it possible to build deformations satisfying a given prior; the idea is then to reconstruct the image, from indirect observations, as a deformation of the reference one, constrained to satisfy the prior. I will present the notion of deformation modules and show how it can be used to perform image reconstruction.
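To make the idea concrete, here is a minimal numerical sketch, not the speaker's deformation-module code: the unknown image is modeled as a constrained deformation of a reference, and the deformation parameters are fitted to indirect data. The forward operator (a 4x downsampling) and the deformation family (a global scaling plus translation) are illustrative assumptions standing in for a genuine deformation module and a Radon transform.

```python
# Minimal sketch (illustrative assumptions, not the speaker's method):
# reconstruct an image as a constrained deformation of a reference,
# matched to indirect observations.
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

ref = np.zeros((64, 64))
ref[20:44, 20:44] = 1.0                        # reference image: a square

def A(img):                                    # stand-in indirect observation: 4x downsampling
    return img.reshape(16, 4, 16, 4).mean(axis=(1, 3))

def deform(theta):                             # constrained deformation: isotropic scale + shift
    s, tx, ty = theta
    c = np.array([32.0, 32.0])                 # center of the image grid
    mat = np.eye(2) / s                        # affine_transform pulls back: input[mat @ o + offset]
    offset = c - mat @ (c + np.array([tx, ty]))
    return affine_transform(ref, mat, offset=offset, order=1)

truth = deform([1.2, 3.0, -2.0])               # "unknown" image: a deformed reference
y = A(truth) + 0.01 * np.random.randn(16, 16)  # noisy indirect observation

loss = lambda th: np.sum((A(deform(th)) - y) ** 2)
theta_hat = minimize(loss, x0=[1.0, 0.0, 0.0], method="Nelder-Mead").x
print("estimated (scale, tx, ty):", theta_hat)
```

In the talk's framework, the three-parameter family above would be replaced by a deformation module encoding the geometric prior.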
Many problems in machine learning and imaging can be framed as an infinite-dimensional Lasso problem to estimate a sparse measure. This includes, for instance, regression using a continuously parameterized dictionary, mixture model estimation, and super-resolution of images. To make the problem tractable, one typically sketches the observations using randomized projections (often called compressive sensing in imaging). In this work, we provide a comprehensive treatment of the recovery performance of this class of approaches. We show that for a large class of operators, the Fisher-Rao distance induced by the measurement process is the natural way to enforce and generalize the classical minimal separation condition appearing in the literature. We then prove that (up to log factors) a number of sketches proportional to the sparsity is enough to identify the sought-after measure with robustness to noise. Finally, we show that, under additional hypotheses, exact support stability holds (the number of recovered atoms matches that of the measure of interest) when the noise level is smaller than a specified value. This is joint work with Clarice Poon (University of Bath) and Gabriel Peyré (ENS).
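As a point of reference, here is a schematic statement of the sketched measure-space Lasso (BLASSO); the notation below is mine, not necessarily the paper's:

```latex
% Schematic sketched BLASSO: recover a sparse measure \mu on a parameter
% space \Theta from m randomized sketches (notation illustrative).
\[
  \min_{\mu \in \mathcal{M}(\Theta)}\;
  \frac{1}{2}\,\bigl\|\Phi\mu - y\bigr\|_2^2 \;+\; \lambda\,|\mu|(\Theta),
  \qquad
  (\Phi\mu)_j = \int_{\Theta} \varphi_j(\theta)\,\mathrm{d}\mu(\theta),
  \quad j = 1,\dots,m.
\]
```

In this language, the talk's main result reads: with the minimal separation measured in the Fisher-Rao distance induced by the measurement process, a number of sketches m proportional to the sparsity (up to log factors) suffices for robust recovery.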
The geometry-texture decomposition of images produced by X-Ray Computed Tomography (CT) is a challenging inverse problem, usually handled in two steps: reconstruction, then decomposition. The decomposition can be used, for instance, to produce an approximate segmentation of the image, but it can be compromised by artifacts and noise arising from the acquisition and reconstruction processes. Hence, reconstruction and decomposition benefit from being performed jointly. We propose a geometry-texture decomposition based on a TV-Laplacian model, well suited for segmentation and edge detection. The problem of joint reconstruction and decomposition of CT data is then formulated as a convex constrained minimization problem, which is solved using a recently introduced proximal interior point method. Numerical experiments on realistic images of material samples illustrate the practical efficiency of the proposed approach.
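One plausible reading of the joint model, in schematic form (my notation; the exact functional is the authors'): the image is split into a geometry part u and a texture part v, and both are fitted to the CT data in a single convex program,

```latex
% Schematic joint reconstruction-decomposition (illustrative notation):
% u = geometry (cartoon) part, v = texture part, R = CT (Radon) forward operator.
\[
  \min_{u,\,v}\; \mathrm{TV}(u) \;+\; \gamma\,\|\Delta v\|_1
  \quad\text{s.t.}\quad
  \|R(u + v) - y\|_2 \le \varepsilon.
\]
```

A constrained, non-smooth program of this form is precisely the setting that proximal interior point methods are designed to handle.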
Some recent denoising methods are based on statistical modeling of image patches. In the literature, Gaussian models and Gaussian mixture models are the most widely used priors. In this presentation, after introducing the statistical framework of patch-based image denoising, I will propose some answers to the following questions: Why are these Gaussian priors so widely used? What information do they encode? In the second part, I will present a mixture model for noisy patches adapted to the high dimension of the patch space. This yields a denoising algorithm based solely on statistical tools that achieves state-of-the-art performance. Finally, I will discuss the limitations of the proposed method and some further developments.
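For context, the baseline computation underlying Gaussian patch priors is the closed-form posterior mean (a standard Wiener-type formula; the speaker's high-dimensional mixture model goes beyond this). A minimal sketch:

```python
# Standard MMSE patch denoising under a Gaussian prior x ~ N(mu, C),
# observed as y = x + n with n ~ N(0, sigma^2 I):
#   E[x | y] = mu + C (C + sigma^2 I)^{-1} (y - mu).
import numpy as np

def denoise_patch(y, mu, C, sigma):
    """Posterior mean of the clean patch given its noisy observation."""
    d = y.size
    gain = C @ np.linalg.inv(C + sigma**2 * np.eye(d))
    return mu + gain @ (y - mu)

# Toy usage: 8x8 patches flattened to dimension 64.
rng = np.random.default_rng(0)
d, sigma = 64, 0.1
mu = np.full(d, 0.5)
B = rng.standard_normal((d, d)) / np.sqrt(d)
C = 0.01 * B @ B.T                          # an arbitrary valid covariance
x = rng.multivariate_normal(mu, C)          # "clean" patch drawn from the prior
y = x + sigma * rng.standard_normal(d)      # noisy observation
print("noisy error:   ", np.linalg.norm(y - x))
print("denoised error:", np.linalg.norm(denoise_patch(y, mu, C, sigma) - x))
```

Under a Gaussian mixture prior, one typically combines such per-component estimates, weighted by the posterior probability of each component given the noisy patch.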
No seminar this week: it is the “Statistical Modeling for Shapes and Imaging” workshop.
I will briefly introduce the notions of generalized averages and power means, their particular cases, their analysis, and their level set representation. We apply these generalized averages and power means to construct a general image data term. The properties of this general data term for multi-region image segmentation and for handling outliers will also be discussed, and a few test results will be shown. Moreover, the performance of a joint segmentation and de-hazing model will also be displayed. This is joint work with Noor Badshah, Ke Chen, Gulzar Ali Khan and Nosheen, Lavdi Rada, Awal Sher, Afzal, Haroon and Amna Shujah.
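For reference, the standard definition behind the talk (a textbook fact, not specific to this work):

```latex
% Power mean of positive values x_1, ..., x_n:
\[
  M_p(x_1,\dots,x_n) \;=\; \Bigl(\frac{1}{n}\sum_{i=1}^{n} x_i^{\,p}\Bigr)^{1/p},
  \qquad p \neq 0,
\]
% with the limiting cases M_0 (geometric mean), M_{-\infty} = \min_i x_i,
% and M_{+\infty} = \max_i x_i.
```

Varying p interpolates between averaging and extremal behaviour, which is what gives a data term built from M_p its tunable robustness to outliers.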
We will bind together and extend some recent developments on data-driven non-smooth regularization techniques in image processing by means of bilevel minimization schemes. These schemes, considered in function space, take advantage of dualization frameworks and are designed to produce spatially varying regularization parameters adapted to the data for well-known regularizers such as Total Variation and Total Generalized Variation, leading to automated (monolithic) image reconstruction workflows.
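A generic template for the kind of bilevel scheme described, in schematic notation (the spatially varying weight α(x) ≥ 0 and the quality measure F are my placeholders):

```latex
% Schematic bilevel problem: the upper level selects a spatially varying
% regularization weight, the lower level is a weighted-TV reconstruction.
\[
  \min_{\alpha \ge 0}\; F(u_\alpha)
  \quad\text{s.t.}\quad
  u_\alpha = \arg\min_{u}\;
  \frac{1}{2}\,\|u - f\|_{L^2(\Omega)}^2
  \;+\; \int_{\Omega} \alpha(x)\,\mathrm{d}|Du|.
\]
```

Total Generalized Variation fits the same template with its own spatially varying weights; the dualization mentioned in the abstract is what makes the lower-level problem tractable in function space.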