This winter school is intended as a pre-school for the thematic trimester "The Mathematics of Imaging", which will be held at the IHP (Institut Henri Poincaré) in Paris from January 7 to April 5, 2019.
This pre-school will take place at the CIRM (Centre International de Rencontres Mathématiques) in Marseille during the week of January 7-11, 2019.
You can pre-register on the school's page on the CIRM website.
The pre-school will include courses, practical sessions, flash presentations, and poster sessions.
The four main courses will be:
Daniel Cremers (Technische Universität München), Variational methods and convex relaxation for computer vision: Variational methods are among the most classical and established methods for solving a multitude of problems arising in computer vision, image processing, and beyond. My presentation covers four parts: First, I will introduce the basic concepts of variational methods and present a number of examples. Second, I will show how the respective energies can be derived from the principle of Bayesian inference. Third, I will discuss techniques of convex relaxation and functional lifting, which allow us to compute globally optimal or near-optimal solutions to certain non-convex energy minimization problems. Finally, I will present variational methods for 3D reconstruction and visual SLAM (simultaneous localization and mapping). If time permits, I will talk about convex relaxations for elastic shape matching.
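As a toy illustration of the variational approach described in this abstract (this sketch is not course material), consider denoising an image f by minimizing a smoothed ROF-type energy E(u) = 0.5·||u − f||² + λ·Σ √(|∇u|² + ε²) with plain gradient descent. The function name, parameter values, and the smoothing constant ε are all illustrative assumptions.

```python
import numpy as np

def denoise_rof(f, lam=0.1, step=0.1, eps=0.1, n_iter=200):
    """Gradient descent on a smoothed ROF-type energy (illustrative sketch):
    E(u) = 0.5*||u - f||^2 + lam * sum_ij sqrt(|grad u|_ij^2 + eps^2)."""
    u = f.copy()
    for _ in range(n_iter):
        # forward differences with Neumann boundary (last row/column repeated)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        norm = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / norm, uy / norm
        # backward-difference divergence (negative adjoint of the forward gradient)
        div = (np.diff(px, axis=1, prepend=0.0)
               + np.diff(py, axis=0, prepend=0.0))
        # descent step on the data term plus the smoothed-TV term
        u = u - step * ((u - f) - lam * div)
    return u
```

With eps = 0.1 the energy gradient is Lipschitz, so the fixed step size above keeps the iteration stable; the convex-relaxation machinery of the course replaces this naive descent with methods that handle the non-smooth TV term exactly.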
Clarice Poon (University of Cambridge), Sparsity in imaging: In the last few decades, sparsity has become ubiquitous and is often one of the key assumptions behind imaging methods. In this course, we will discuss how sparsity arises in imaging (in particular, through wavelets) and some of the ways in which it has been exploited (in particular, compressed sensing and the super-resolution of measures).
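A standard way sparsity is exploited, as in the compressed sensing setting this abstract mentions, is l1-regularized recovery. The sketch below (my illustration, not course material) solves min_x 0.5·||Ax − b||² + λ·||x||₁ with the classical iterative shrinkage-thresholding algorithm (ISTA); the function names and parameters are assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1; shrinks coefficients toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the data term, then soft-thresholding (prox of the l1 term)
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x
```

With far fewer measurements than unknowns (e.g. a 40x100 random matrix A), this recovers a sparse signal from b = Ax₀, which is the compressed-sensing phenomenon in miniature.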
Marcelo Pereyra (Heriot-Watt University, Edinburgh), Bayesian methods in imaging: This course presents an overview of modern Bayesian strategies for solving imaging inverse problems. We will start by introducing the Bayesian statistical decision theory framework underpinning Bayesian analysis, and then explore efficient numerical methods for performing Bayesian computation in large-scale settings. We will pay special attention to high-dimensional imaging models that are log-concave with respect to the unknown image, related to so-called "convex imaging problems". This will provide an opportunity to establish connections with the convex optimisation and machine learning approaches to imaging, and to discuss some of their relative strengths and drawbacks. Examples of topics covered in the course include: efficient stochastic simulation and optimisation numerical methods that tightly combine proximal convex optimisation with Markov chain Monte Carlo techniques; strategies for estimating unknown model parameters and performing model selection; methods for calculating Bayesian confidence intervals for images and performing uncertainty quantification analyses; and new theory regarding the role of convexity in maximum-a-posteriori and minimum-mean-square-error estimation. The theory, methods, and algorithms are illustrated with a range of mathematical imaging experiments.
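A minimal example of the kind of stochastic simulation method this course builds on (my sketch, not course material) is the unadjusted Langevin algorithm (ULA), which draws approximate samples from a smooth log-concave posterior; proximal MCMC methods extend this idea to non-smooth convex models. The toy denoising posterior, step size, and chain length below are illustrative assumptions.

```python
import numpy as np

def ula(grad_log_pi, x0, step, n_iter, rng=None):
    """Unadjusted Langevin algorithm: approximate samples from a smooth
    log-concave density pi via
        x_{k+1} = x_k + step * grad log pi(x_k) + sqrt(2*step) * noise."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float).copy()
    chain = np.empty((n_iter,) + x.shape)
    for k in range(n_iter):
        x = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        chain[k] = x
    return chain

# Hypothetical toy posterior: denoising y = x + noise with a Gaussian prior,
# pi(x) ∝ exp(-0.5*||x - y||^2 / s2 - 0.5*||x||^2 / t2),
# whose exact posterior mean is y * t2 / (s2 + t2) (here 0.5 * y).
y, s2, t2 = np.ones(4), 1.0, 1.0
grad = lambda x: -(x - y) / s2 - x / t2
samples = ula(grad, np.zeros(4), step=0.05, n_iter=20000, rng=0)
posterior_mean = samples[5000:].mean(axis=0)   # discard burn-in
```

Beyond point estimates, the retained chain gives empirical quantiles per pixel, which is how the Bayesian confidence intervals and uncertainty quantification analyses mentioned above are computed in practice.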
Alexandre Gramfort (INRIA, Parietal Team, Université Paris-Saclay), Practical machine learning: Throughout this course, I will present examples of applications in the fields of neuroimaging and neuroscience.