Past seminars

List of past seminars:

3 May 2018, 14h-15h, Guillaume Charpiat (Équipe TAO - INRIA Saclay)
Title: Introduction to Neural Networks
Abstract: Neural networks have become extremely popular these last years, notably due to their recent impressive successes in computer vision, under the name of deep learning. This tutorial will describe the main principles and properties of neural networks, with a focus on convolutional neural networks (CNN), particularly suited for image-based machine learning tasks. Depending on the audience, and if time permits, we may also cover topics such as auto-encoders, generative adversarial networks, or style transfer.
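As a taste of the building blocks such a tutorial covers, here is a minimal convolutional network sketch in PyTorch; the architecture, layer sizes and input shape are illustrative assumptions of ours, not taken from the talk.
```python
# A tiny CNN sketch: convolutions extract local features, pooling downsamples,
# a linear layer classifies. Sizes are illustrative (e.g. 28x28 grayscale input).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learnable filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # spatial downsampling
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)  # 28x28 -> 7x7 maps

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
out = model(torch.randn(8, 1, 28, 28))  # a batch of 8 grayscale images
print(out.shape)                        # torch.Size([8, 10])
```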

3 May 2018, 15h-16h, Martin Holler (CMAP, Ecole Polytechnique)
Title: Analysis and applications of coupled regularization with multiple data discrepancies
Abstract: In many applications of inverse problems in imaging, the measured data does not correspond to a single measurement but rather to multiple simultaneous or sequential measurements, featuring different forward models and/or noise characteristics. Examples of such a setting are the joint acquisition of magnetic resonance (MR) and positron emission tomography (PET) images or the sequential acquisition of multiple time frames in dynamic imaging. Assuming that the images one aims to reconstruct from such measurements have different but related content, coupled regularization techniques aim at exploiting such correlations for improved reconstruction. A corresponding variational formulation comprises multiple, potentially different data discrepancies and raises the question of how standard stability and convergence results in inverse problems transfer to such a situation. In this talk, we address this question. Motivated by concrete applications with different noise characteristics, we first consider a rather general setting and in particular show how the adaptation of parameter choice strategies to different discrepancy terms yields improved convergence results. We then further elaborate on practically relevant special cases and show numerical results for joint MR-PET reconstruction and multi-spectral electron microscopy.

5 April 2018, 14h-15h, Nicolas Keriven (ENS)
Title: Sketched Learning from Random Features Moments [Slides]
Abstract: Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. Furthermore, modern architectures often ask for learning methods to be amenable to streaming or distributed computing. In this context, a popular approach is to first compress the database into a representation called a linear sketch, then learn the desired information using only this sketch. In this talk, we introduce a methodology to fit a mixture of probability distributions on the data, using only a sketch of the database. The sketch is defined by combining two notions from the reproducing kernel literature, kernel mean embedding and random features. It is seen to correspond to linear measurements of the probability distribution of the data, and the problem is thus analyzed under the lens of Compressive Sensing (CS), in which a signal is randomly measured and recovered. We analyze the problem using two classical approaches in CS: first a Restricted Isometry Property in the Banach space of finite signed measures, from which we obtain strong recovery guarantees, albeit with an intractable non-convex minimization problem, and second a dual certificate analysis, from which we show that total-variation regularization yields a convex minimization problem that in some cases recovers exactly the number of components of a Gaussian mixture model. We also briefly describe a flexible heuristic greedy algorithm to estimate mixture models from a sketch, and apply it to synthetic and real data.
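To make the sketching idea concrete, here is a minimal sketch computation using random Fourier features (a standard choice; the exact feature design, frequency distribution and sizes below are our illustrative assumptions, not the talk's).
```python
# The sketch is the empirical average of random features over the database,
# i.e. a set of linear measurements of the underlying data distribution.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10_000, 10, 512            # n samples in R^d, sketch size m
X = rng.normal(size=(n, d))          # toy database

Omega = rng.normal(size=(d, m))      # random frequencies (Gaussian here)
# Complex exponential features: sketch = (1/n) sum_i exp(i x_i^T Omega)
sketch = np.exp(1j * X @ Omega).mean(axis=0)
print(sketch.shape)                  # (m,) -- fixed size, independent of n
```
Note that the sketch has fixed size \(m\) regardless of \(n\) and can be updated one sample at a time, which is what makes the approach amenable to streaming and distributed computing.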

5 April 2018, 15h-16h, Luca Calatroni (CMAP, Ecole Polytechnique)
Title: A variational model for mixed noise removal: analysis, optimisation and structure of solutions
Abstract: In several real-world imaging applications such as microscopy, astronomy and medical imaging, a combination of transmission and acquisition faults results in multiple noise statistics in the observed image, such as impulsive/Gaussian or Gaussian/Poisson mixtures. By means of a joint MAP estimation, we derive a statistically consistent variational model where single data fidelities are combined in a handy infimal convolution fashion to model the noise mixture and separated from each other via a Total Variation smoothing. By means of a fine analysis in suitable function spaces, we then study the structure of the solutions of the corresponding variational model and propose a bilevel optimisation strategy for the estimation of the optimal regularisation weights. This is joint work with C.B. Schönlieb (University of Cambridge, UK), J.C. De Los Reyes (ModeMat, Quito, Ecuador) and K. Papafitsoros (WIAS Institute, Berlin, Germany).
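For readers unfamiliar with infimal-convolution coupling, a generic form of such a model reads (our schematic notation, not necessarily the exact formulation of the talk):
\[ \min_{u}\ \mathrm{TV}(u) + (\lambda_1 \Phi_1 \,\square\, \lambda_2 \Phi_2)(u - f), \qquad (\Phi_1 \,\square\, \Phi_2)(v) = \inf_{v_1 + v_2 = v} \Phi_1(v_1) + \Phi_2(v_2), \]
where \(f\) is the noisy image and \(\Phi_1, \Phi_2\) are the single-noise data fidelities (e.g. an \(\ell^1\) term for the impulsive component and an \(\ell^2\) term for the Gaussian component), so that the residual is optimally split between the two noise models.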

8 March 2018, 14h-15h, Nelly Pustelnik (CNRS, Laboratoire de Physique - CNRS UMR 5672 -- ENS Lyon)
Title: Multiresolution analysis and nonsmooth optimization for texture segmentation
Abstract: Segmenting textured images remains a major challenge in image processing when the textures involved are stochastic. In this talk, we approach this question by coupling multiresolution analysis with nonsmooth optimization tools. We first present the usual two-step estimation/segmentation approaches, and then describe models we developed that perform both steps jointly. We also focus on refining these texture segmentation procedures from an algorithmic standpoint, in order to obtain methods with low computational cost, so that the proposed tools can be evaluated on large data volumes such as those encountered in the study of multiphase flow dynamics in porous media.

8 March 2018, 15h-16h, Barbara Gris (KTH Royal Institute of Technology in Stockholm)
Title: Image reconstruction with a deformation prior
Abstract: Tomography is a medical imaging technique in which the volume of an object is reconstructed from its projections. When data acquisition takes a long time, the subject may move (for instance through breathing), which creates artifacts in the reconstructed image. The model I propose aims to take possible motions into account in order to help reconstruct the image. A first step is to reconstruct this image as a deformation of a template image assumed to be known, while incorporating a prior on the admissible deformations. I will present the notion of deformation module and show how it constrains the deformations to respect a given structure (provided for instance by physical constraints) while leaving some parameters free so that the deformations can adapt to the data.

8 February 2018, 14h-15h, Lénaïc Chizat (SIERRA team, INRIA)
Title: A tutorial on optimal transport, part I: theory, model, properties [Slides]
Abstract: The optimal transport (OT) problem is often described as that of finding the most efficient way of moving a pile of dirt from one configuration to another. Once stated formally, OT provides extremely useful tools for comparing, interpolating and processing objects such as distributions of mass, probability measures, histograms or densities. This talk is an up-to-date tutorial on a selection of topics in OT. In the first part, we will present an intuitive description of OT, its behaviour and main properties. In the second part, we will introduce state-of-the-art numerical methods for solving OT (based on entropic regularization) and present how this tool can be used for both imaging and machine learning problems.

8 February 2018, 15h-16h, Aude Genevay (Ecole Normale Supérieure and Université Paris Dauphine)
Title: A tutorial on optimal transport, part II: Optimal transport for machine learning [Slides]
Abstract: The optimal transport (OT) problem is often described as that of finding the most efficient way of moving a pile of dirt from one configuration to another. Once stated formally, OT provides extremely useful tools for comparing, interpolating and processing objects such as distributions of mass, probability measures, histograms or densities. This talk is an up-to-date tutorial on a selection of topics in OT. In the first part, we will present an intuitive description of OT, its behaviour and main properties. In the second part, we will introduce state-of-the-art numerical methods for solving OT (based on entropic regularization) and present how this tool can be used for both imaging and machine learning problems.
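A minimal sketch of the entropic-regularization approach mentioned above is the Sinkhorn algorithm, which alternately rescales a Gibbs kernel so that the transport plan matches the two prescribed marginals (standard textbook version; the grid, cost and parameter values below are illustrative).
```python
# Sinkhorn iterations for entropy-regularized OT between discrete histograms.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """a, b: histograms; C: cost matrix; eps: entropic regularization."""
    K = np.exp(-C / eps)                # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):             # alternate the two marginal constraints
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan

n = 50
x = np.linspace(0, 1, n)
C = (x[:, None] - x[None, :]) ** 2      # squared-distance cost on a 1D grid
a = np.full(n, 1 / n)
b = np.full(n, 1 / n)
P = sinkhorn(a, b, C)
print(P.sum(), np.abs(P.sum(axis=1) - a).max())  # ~1, ~0: marginals match
```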

11 January 2018, 14h-15h, Edouard Oyallon (CentraleSupelec)
Title: Deep CNNs: the end of prior features?
Abstract: Since 2012, deep convolutional neural networks have provided generic and robust methods that have replaced predefined representations like SIFTs or HoGs in many state-of-the-art applications. In this talk, we show that supervised CNNs can be improved by incorporating geometric priors like a Scattering Transform: for instance, they learn with fewer samples and are more interpretable.

11 January 2018, 15h-16h, Arthur Leclaire (CMLA, ENS Paris-Saclay)
Title: Semi-discrete optimal transport in patch space for structured texture synthesis
Abstract: In this talk we address exemplar-based texture synthesis using a model obtained as a local transform of a Gaussian random field. The local transformation is designed to solve a semi-discrete optimal transport problem in the patch space in order to reimpose the patch distribution of the exemplar texture. Since the patch space is high-dimensional, the optimal transport problem is solved with a stochastic optimization procedure. The resulting model inherits several benefits of the Gaussian model (stationarity, mid-range correlations) with an additional statistical guarantee on the patch distribution. We will also propose a multiscale extension of this model, which makes it possible to synthesize structured textures with low requirements in terms of time and memory storage.
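To fix ideas, here is a minimal sketch of the stochastic optimization of the semi-discrete OT dual (plain SGD on toy Gaussian data; the target points stand in for exemplar patches, and the actual work uses a more refined averaged scheme).
```python
# Semi-discrete OT: source is sampled, target is discrete with weights p.
# Dual ascent: maximize E_x[min_j(|x-y_j|^2 - v_j)] + sum_j p_j v_j over v.
import numpy as np

rng = np.random.default_rng(0)
m, d = 20, 2
Y = rng.normal(size=(m, d))        # discrete target points (toy "patches")
p = np.full(m, 1 / m)              # target weights
v = np.zeros(m)                    # dual variables, one per target point

for t in range(1, 50_001):
    x = rng.normal(size=d)                         # sample from the source
    j = np.argmin(((x - Y) ** 2).sum(axis=1) - v)  # best "shifted" target
    g = p.copy()
    g[j] -= 1.0                                    # stochastic dual gradient
    v += (1.0 / np.sqrt(t)) * g                    # ascent step

# The learned transport map assigns x to its optimal target point:
T = lambda x: Y[np.argmin(((x - Y) ** 2).sum(axis=1) - v)]
```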

7 December 2017, 14h-15h, Irene Kaltenmark (Institut de Neurosciences de La Timone - Université d'Aix-Marseille)
Title: Geometric growth models in computational anatomy
Abstract: The use of diffeomorphism groups acting on sets of shapes, equipping the latter with a Riemannian structure, has proved extremely effective for modeling and analyzing the variability of shape populations arising from medical imaging data. However, with the integration of longitudinal data analysis, biological growth and degeneration phenomena have emerged that manifest themselves through deformations of a non-diffeomorphic nature. The growth of an organism by progressive, localized addition of new molecules, as in a crystallization process, is not a mere stretching of the initial tissue. In view of this observation, we propose to keep the geometric spirit that gives diffeomorphic approaches their power on shape spaces, while introducing a fairly general concept of unfolding, in which growth phenomena are modeled as the progressive optimal unfolding of a shape initially folded within a region of space. To the delicate question of characterizing the partial matchings that model the unfolding of the shape, we answer with an evolving system of biological coordinates, and we finally arrive at a new optimal control problem for the assimilation of time-evolving surface data represented by currents or varifolds.

7 December 2017, 15h-16h, Chloé-Agathe Azencott (CBIO -- Institut Mines-ParisTech, Institut Curie & INSERM)
Title: Structured feature selection in high dimension for precision medicine
Abstract: Differences in disease predisposition or response to treatment can be explained in great part by genomic differences between individuals. This realization has given birth to precision medicine, where treatment is tailored to the genome of patients. This field depends on collecting considerable amounts of molecular data for large numbers of individuals, which is being enabled by thriving developments in genome sequencing and other high-throughput experimental technologies. Unfortunately, we still lack effective methods to reliably detect, from this data, which of the genomic features determine a phenotype such as disease predisposition or response to treatment. One of the major issues is that the number of features that can be measured is large (easily reaching tens of millions) with respect to the number of samples for which they can be collected (more usually on the order of hundreds or thousands), posing both computational and statistical difficulties. In my talk I will discuss several ways to use constraints on the feature selection procedure to address this problem.
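As a point of reference, the simplest (unstructured) way to constrain feature selection in the \(p \gg n\) regime is an \(\ell_1\) penalty; the structured approaches discussed in the talk go beyond this baseline. A minimal sketch with scikit-learn, on synthetic data of our choosing:
```python
# Lasso baseline for sparse feature selection when features vastly outnumber
# samples; only features with nonzero coefficients are "selected".
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k = 200, 10_000, 10           # few samples, many features, k causal ones
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = 2.0                      # ground-truth support: features 0..9
y = X @ beta + rng.normal(scale=0.5, size=n)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(len(selected), selected[:15])  # a small support, ideally features 0..9
```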

9 November 2017, 14h-15h, Pauline Tan (ONERA & CMLA - ENS Paris Saclay)
Title: Alternating proximal gradient descent for nonconvex regularised problems with biconvex and multiconvex coupling terms
Abstract: There has been an increasing interest in constrained nonconvex regularized block biconvex/multiconvex optimization problems. We introduce an approach that effectively exploits the biconvex/multiconvex structure of the coupling term and enables complex application-dependent regularization terms to be used. The proposed ASAP algorithm enjoys simple, well-defined updates. Global convergence of the algorithm to a critical point is proved using the so-called Kurdyka-Lojasiewicz property for subanalytic functions. Moreover, we prove that a large class of useful objective functions obeying our assumptions are subanalytic and thus satisfy the Kurdyka-Lojasiewicz property. I will also present two particular applications of the algorithm to large-scale airborne image sequences, which are already used by our industrial partner ONERA. This is a joint work with Mila Nikolova (CMLA, CNRS, ENS Paris-Saclay).

9 November 2017, 15h-16h, Emilie Kaufmann (CNRS, INRIA Lille, Université de Lille)
Title: A tutorial on Multi-Armed Bandit problems, Theory and Practice
Abstract: A Multi-Armed Bandit (MAB) model is a simple framework in which an agent sequentially samples arms, which are unknown probability distributions, in order to learn something about these underlying distributions, possibly under the constraint of maximizing some notion of reward. Stochastic MABs were introduced in the 1930s as a simple model for clinical trials, and are widely studied nowadays for applications ranging from sequential content optimization to cognitive radios and the design of AI for games. In this introduction to MABs, we will review existing (efficient) algorithms that either achieve an exploration/exploitation trade-off or optimally explore a simple, i.i.d., stochastic environment. We will then see how these algorithms can be extended to deal with more realistic applications.
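As a concrete instance of the exploration/exploitation trade-off, here is a minimal sketch of the classical UCB1 algorithm on a toy Bernoulli bandit (one of the standard algorithms such a tutorial covers; the arm means below are illustrative).
```python
# UCB1: pull the arm with the highest optimistic estimate of its mean.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.2, 0.5, 0.45])      # unknown arm means (toy example)
K, T = len(means), 10_000
counts = np.zeros(K)                    # number of pulls per arm
sums = np.zeros(K)                      # cumulative reward per arm

for t in range(T):
    if t < K:
        a = t                           # pull each arm once to initialize
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmax(ucb))         # optimism in the face of uncertainty
    r = float(rng.random() < means[a])  # Bernoulli reward
    counts[a] += 1
    sums[a] += r

print(counts)  # most pulls should concentrate on the best arm (index 1)
```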

5 October 2017, 14h-15h, François Malgouyres (MIP - Université Paul Sabatier, Toulouse)
Title: Stable recovery of the factors from a deep matrix product and application to convolutional networks
Abstract: We study a deep matrix factorization problem. It takes as input a matrix \(X\) obtained by multiplying \(K\) matrices (called factors). Each factor is obtained by applying a fixed linear operator to a short vector of parameters satisfying a model (for instance sparsity, grouped sparsity, non-negativity, constraints defining a convolutional network...). We call the problem deep or multi-layer because the number of factors is not limited. In the practical situations we have in mind, we can typically have \(K=10\) or \(100\). This work aims at identifying conditions on the structure of the model that guarantee the stable recovery of the factors from the knowledge of \(X\) and the model for the factors. We provide necessary and sufficient conditions for the identifiability of the factors (up to a scale rearrangement). We also provide a necessary and sufficient condition, called the Deep Null Space Property (because of the analogy with the usual Null Space Property in the compressed sensing framework), which guarantees that even an inaccurate optimization algorithm for the factorization stably recovers the factors. We illustrate the theory with a practical example where the deep factorization is a linear convolutional network.

5 October 2017, 15h-16h, Fabien Pierre (LORIA - Université de Lorraine)
Title: Video colorization, from the state of the art to industrial applications
Abstract: Image colorization is a severely ill-posed problem, but one of real interest to the entertainment industry. This dual point of view makes it a very attractive topic. In this talk, we will review the state of the art and the methods developed by the speaker during his PhD thesis. These rely on non-local and variational approaches. The functionals involved are nonsmooth and nonconvex and have required original minimization techniques. This made it possible to implement an experimental software tool that involves the user in an exemplar-based approach, yielding an efficient, flexible and fast method. An extension to video is proposed, whose GPU implementation makes the variational approach interactive for the user. Nevertheless, it is not yet operational in the eyes of experts from the colorization industry. With a view to meeting their needs, some directions will be proposed.

1 June 2017, 14h-15h, Charles Bouveyron (MAP5 - Université Paris Descartes)
Title: High-Dimensional Mixture Models for Unsupervised Image Denoising
Abstract: This work addresses the problem of patch-based single-image denoising through the unsupervised learning of a probabilistic high-dimensional mixture model on the noisy patches. The model, named hereafter HDMI, proposes a full modeling of the process that is supposed to have generated the noisy patches. To overcome the potential estimation problems due to the high dimension of the patches, the HDMI model adopts a parsimonious modeling which assumes that the data live in group-specific subspaces of low dimensionality. This parsimonious modeling in turn yields a numerically stable computation of the conditional expectation of the image, which is used for denoising. The use of such a model also makes it possible to rely on model selection tools, such as BIC, to automatically determine the intrinsic dimensions of the subspaces and the variance of the noise. This yields a blind denoising algorithm that demonstrates state-of-the-art performance, both when the noise level is known and when it is unknown. Joint work with A. Houdard (Télécom ParisTech) and J. Delon (MAP5 - Paris Descartes).
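To illustrate the mechanism, here is a simplified sketch of patch denoising by conditional expectation under a Gaussian mixture (plain full-covariance GMM via scikit-learn, with the noise level assumed known; HDMI's parsimonious subspace modeling and BIC-based selection of the intrinsic dimensions and noise variance are not reproduced here).
```python
# Fit a GMM on noisy patches, then denoise each patch by its posterior-weighted
# Wiener-like conditional mean. If y = x + n with x ~ N(mu_k, S_k) in component
# k and n ~ N(0, s^2 I), the GMM fitted on y estimates Sy_k = S_k + s^2 I, and
# E[x | y, k] = mu_k + (Sy_k - s^2 I) Sy_k^{-1} (y - mu_k).
import numpy as np
from sklearn.mixture import GaussianMixture

def denoise_patches(Y, sigma, n_components=10):
    """Y: (n, d) array of noisy patches; sigma: known noise std."""
    d = Y.shape[1]
    gmm = GaussianMixture(n_components, covariance_type='full').fit(Y)
    resp = gmm.predict_proba(Y)          # posterior over components, (n, K)
    Xhat = np.zeros_like(Y)
    for k in range(gmm.n_components):
        Sy = gmm.covariances_[k]         # covariance of the *noisy* patches
        # In practice one would clip Sy - sigma^2 I to stay positive definite.
        W = (Sy - sigma**2 * np.eye(d)) @ np.linalg.inv(Sy)
        Xhat += resp[:, [k]] * (gmm.means_[k] + (Y - gmm.means_[k]) @ W.T)
    return Xhat
```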

1 June 2017, 15h-16h, Julien Tierny (CNRS and LIP6)
Title: Topological Data Analysis for Scientific Visualization.
Abstract: Scientific visualization aims at helping users (i) represent, (ii) explore, and (iii) analyze acquired or simulated geometrical data, for interpretation, validation or communication purposes. Among the existing techniques, algorithms inspired by Morse theory have demonstrated their utility in this context for the efficient and robust extraction of geometrical features, at multiple scales of importance. In this talk, I will give a brief tutorial on the topological methods used in scientific visualization for the analysis of scalar data. I will present algorithms with practical efficiency for the computation of topological abstractions (Reeb graphs, Morse-Smale complexes, persistence diagrams, etc.) in low dimensions (typically 2 or 3). I will also illustrate these notions with concrete use cases in astrophysics, fluid dynamics, molecular chemistry and combustion. I will also present the "Topology ToolKit" (topology-tool-kit.github.io), a recently released open-source library for topological data analysis, which implements most of the algorithms described above. I will give a brief usage tutorial, both for end-users and developers, and describe how easily it can be extended to disseminate research code. Finally, I will discuss perspectives, from both a research and an implementation point of view.

4 May 2017, 14h-15h, Stéphane Jaffard (Paris Est)
Title: Multifractal analysis for image classification.
Abstract: Multifractal analysis was introduced at the end of the 1980s by physicists whose goal was to relate the global regularity indices of a signal (the velocity of a turbulent fluid) to the distribution of pointwise singularities present in the data. Several variants of the method exist, based on the local suprema of a continuous wavelet transform, or on DFA (Detrended Fluctuation Analysis). We will consider other versions, built from the coefficients on an orthonormal wavelet basis. We will see how the tools supplied by multifractal analysis can be adapted to different types of data: the use of ``p-leaders'' (local \(\ell^p\) norms of wavelet coefficients) in place of ``leaders'' (local suprema of wavelet coefficients) for data with low regularity, or anisotropic wavelets for the analysis of anisotropic textures. We will also see how to adapt the analysis when the data exhibit no self-similarity. The examples illustrating these methods will be drawn (in 1D) from turbulence, internet traffic, heart rate and literary texts, and (in 2D) from natural images, paintings and vintage photographic papers. Regarding literary texts and paintings, we will see how these methods provide new tools for textometry and stylometry.

4 May 2017, 15h-16h, Johannes Ballé (Google & New York University)
Title: The importance of local gain control. [Slides]
Abstract: Local gain control is ubiquitous in biological sensory systems and leads, for example, to masking effects in the visual system. When modeled as an operation known as divisive normalization, it represents an invertible nonlinear transformation, and has several interesting properties useful for image processing. We introduce a generalized version of the transform (GDN), and use it to construct a novel visual quality metric which outperforms MS-SSIM in predicting human distortion assessments. We also show it can be used to Gaussianize image densities, yielding factorized representations, and providing probabilistic image models superior to sparse representations. Finally, we use it to design a simple image compression method, yielding compression quality which is visually close to the state of the art.
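Here is a minimal sketch of a divisive-normalization-style transform in the spirit of GDN (the parameterization and values below are our illustrative assumptions; see the paper for the exact learned form).
```python
# Each response is divided by a pooled measure of its neighbors' activity,
# which suppresses redundant joint activity (the "gain control" effect).
import numpy as np

def gdn(x, beta, gamma, alpha=2.0, eps=0.5):
    """x: (n,) responses; beta: (n,) offsets; gamma: (n, n) coupling weights.
    y_i = x_i / (beta_i + sum_j gamma_ij |x_j|^alpha)^eps"""
    return x / (beta + gamma @ np.abs(x) ** alpha) ** eps

x = np.array([1.0, -2.0, 0.5])
beta = np.ones(3)
gamma = np.full((3, 3), 0.1)
print(gdn(x, beta, gamma))  # each response is normalized by its neighborhood
```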

30 March 2017, 14h-15h, Claire Boyer (UPMC)
Title: Adapting to unknown noise level in super-resolution.
Abstract: We study sparse spikes deconvolution over the space of complex-valued measures when the input measure is a finite sum of Dirac masses. We introduce a new procedure to handle spike deconvolution when the noise level is unknown. Prediction and localization results will be presented for this approach, and some insight into the probabilistic tools used in the proofs may be given as well.

30 March 2017, 15h-16h, Nicolas Papadakis (CNRS and Bordeaux 1)
Title: Covariant LEAst-Square Re-fitting for image restoration.
Abstract: We propose a new framework to remove parts of the systematic errors affecting popular restoration algorithms, with a special focus on image processing tasks. Generalizing ideas that emerged for \(\ell_1\) regularization, we develop an approach re-fitting the results of standard methods towards the input data. Total variation regularizations and non-local means are special cases of interest. We identify important covariant information that should be preserved by the re-fitting method, and emphasize the importance of preserving the Jacobian (w.r.t. the observed signal) of the original estimator. Then, we provide an approach that has a ``twicing'' flavor and allows re-fitting the restored signal by adding back a local affine transformation of the residual term. We illustrate the benefits of our method on numerical simulations for image restoration tasks. Joint work with C.-A. Deledalle (IMB, Bordeaux), J. Salmon (TELECOM ParisTech) and S. Vaiter (IMB, Bourgogne).
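As background, the simplest instance of the ``twicing'' idea adds back a filtered version of the residual; the covariant re-fitting of the talk is a refinement that in particular preserves the Jacobian of the original estimator. A toy sketch with a Gaussian filter standing in for the restoration method:
```python
# Tukey's twicing: re-inject the smooth part of what the first pass removed,
# which reduces the systematic bias of the estimator.
import numpy as np
from scipy.ndimage import gaussian_filter

def twice(y, sigma=2.0):
    den = gaussian_filter(y, sigma)                # first-pass (biased) estimate
    residual = y - den                             # what the filter removed
    return den + gaussian_filter(residual, sigma)  # add back its smooth part

y = np.random.default_rng(0).normal(size=(64, 64))
out = twice(y)  # less biased than a single pass of the same filter
```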

2 March 2017, 14h-15h, Patrick Perez (Technicolor)
Title: Signals on graphs, from processing to learning. [Slides]
Abstract: Motivated by the profusion of interesting signals that are attached to a graph (a transport network, a social network, a 3D mesh) or whose internal structure is well captured by a graph between their parts (an image, a sound), recent studies have sought to extend the classical tools of signal theory and signal processing to graphs. We will recall the foundations of such extensions, in particular through graph spectral analysis, and then focus on several problems and applications: (1) random sampling of graph signals and reconstruction from the obtained samples, with application to image superpixels; (2) extraction and regression of harmonic corrections of parametric meshes, with application to face modeling; (3) unification of local and non-local processing of graph signals by means of random or learned convolutional networks, with application to image denoising and editing.

2 March 2017, 15h-16h, Valérie Perrier (LJK)
Title: Divergence-free wavelets applied to optimal transport
Abstract: In many applications, the solution of the problem is a vector field that must satisfy a divergence-free condition: this is the case for the incompressible velocity fields solving the Navier-Stokes equations, or for the magnetic field in solutions of the Maxwell equations. More recently, divergence-free fields have found other applications, such as vector field compression in computer graphics, or the solution of optimal transport in its dynamic formulation. In this talk, we are interested in the decomposition of divergence-free fields satisfying "physical" boundary conditions: to this end, we introduce a new divergence-free wavelet basis on the square or the cube, which diagonalizes the differentiation operators. In particular, on this basis, the complexity of solving a Dirichlet Laplacian under a divergence-free condition is optimal (linear). In a second part, we consider the Benamou-Brenier dynamic formulation of optimal transport, which we reformulate over a space of divergence-free constraints. The minimization of the functional is then performed by gradient descent on the space of divergence-free wavelet coefficients, using only wavelet decompositions and recompositions. This is joint work with Morgane Henri, Souleymane Kadri-Harouna (Université de La Rochelle) and Emmanuel Maître.

2 February 2017, 14h-15h, Caroline Chaux (CNRS and I2M)
Title: Nonnegative Tensor Factorization using a proximal algorithm, application to 3D fluorescence spectroscopy. [Slides]
Abstract: This is joint work with Xuan Vu, Nadège Thirion-Moreau and Sylvain Maire (LSIS, Toulon). We address the problem of third-order nonnegative tensor factorization with penalization. More precisely, the Canonical Polyadic Decomposition (CPD) is considered. It constitutes a compact and informative model consisting of decomposing a tensor into a minimal sum of rank-one terms. This multi-linear decomposition has been widely studied in the literature. Coupled with 3D fluorescence spectroscopy analysis, it has found numerous interesting applications in chemistry, chemometrics, environmental data analysis, monitoring and so on. The resulting inverse problem is often hard to solve, especially when the tensor rank is unknown and when the data are corrupted by noise and of large dimensions. We adopt a variational approach, and the factorization problem is thus formulated as a penalized minimization problem. A new penalized nonnegative third-order CPD algorithm has been derived, based on a block coordinate variable metric forward-backward method. The proposed iterative algorithm has been successfully applied not only to synthetic data (showing its efficiency, robustness and flexibility) but also to real 3D fluorescence spectroscopy data.
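For orientation, here is a minimal sketch of nonnegative CPD by projected alternating least squares (a plain baseline of our own, not the penalized block coordinate variable metric forward-backward algorithm of the talk).
```python
# CPD model: T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r], with A, B, C >= 0.
# Each factor is updated by least squares on an unfolding, then clipped.
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: shape (I*J, R)."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def ntf_als(T, R, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.random((n, R)) for n in (I, J, K))
    T1 = T.reshape(I, -1)                      # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1)   # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)   # mode-3 unfolding
    for _ in range(n_iter):
        A = np.clip(T1 @ np.linalg.pinv(khatri_rao(B, C)).T, 0, None)
        B = np.clip(T2 @ np.linalg.pinv(khatri_rao(A, C)).T, 0, None)
        C = np.clip(T3 @ np.linalg.pinv(khatri_rao(A, B)).T, 0, None)
    return A, B, C

T = np.abs(np.random.default_rng(1).normal(size=(6, 7, 8)))  # toy tensor
A, B, C = ntf_als(T, R=3)
```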

2 February 2017, 15h-16h, Simon Masnou (Institut Camille Jordan)
Title: Volume reconstruction from slices. [Slides]
Abstract: The problem of reconstructing a 3D volume from 2D slices arises frequently in many applications in medical imaging and computer graphics. The main difficulty is incorporating the constraints since, depending on the context, one may sometimes wish to impose strict constraints and at other times keep some freedom when the data are noisy or imprecise. I will present recent results obtained for this problem with Elie Bretin and François Dayrens. Our approach relies on a variational model combining a geometric regularization term (such as the perimeter or a curvature-dependent energy) with density constraints on the slices. We have shown that this model can be well approximated by smooth energies through a phase-field method, and we have proposed an efficient and accurate scheme for its numerical approximation. I will present the results we obtained for various constraints: planar or non-planar slices, parallel or non-parallel, surface or point-wise, etc. The method extends to multiple volumes, which is of particular interest for the reconstruction of segmented data.

5 January 2017, 14h-15h, Sandrine Anthoine (CNRS and I2M)
Title: Generalized greedy algorithms [Slides]
Abstract: Matching Pursuit and CoSaMP are classical algorithms in signal processing that seek the best \(k\)-term approximation of a signal on a specified dictionary. Matching Pursuit is greedy in the sense that it chooses the atoms that enter the decomposition one at a time. Its descendants, such as CoSaMP or Subspace Pursuit, do not exactly choose one atom at a time but still aim at pinpointing exactly the support of length \(k\) of the solution. In contrast to convex relaxation alternatives, such as \(\ell_1\) penalized solutions, which do not seek an exactly \(k\)-sparse solution, we generally call Matching Pursuit and its descendants "greedy". In approximation theory, the notion of "best" approximation is naturally in the sense of the \(\ell_2\) norm. Hence greedy algorithms are designed to find the \(k\)-sparse element that minimizes the \(\ell_2\) discrepancy. By contrast with convex relaxation, it is not easy to extend their scope to other discrepancies and obtain convergence guarantees. In this work, we propose to extend the scope of four greedy algorithms, Subspace Pursuit, CoSaMP, Orthogonal Matching Pursuit with Replacement and Iterative Hard Thresholding, to the problem of finding zeros of operators in a Hilbert space. To do so we design the "Restricted Diagonal Property", which, like the "Restricted Isometry Property" in the classical case, ensures the good behavior of the algorithms. We are thus able, for example, to use these algorithms to find sparse critical points of functions that are neither convex nor concave. We finally give examples that illustrate the method. This is joint work with F.-X. Dupé (LIF).
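As a reference point, here is a minimal sketch of one of the four algorithms, Iterative Hard Thresholding, in its classical linear setting (the talk's contribution is precisely to extend such schemes to zeros of operators in Hilbert spaces; the sizes and step size below are illustrative choices).
```python
# IHT: alternate a gradient step on ||y - Ax||^2 with hard thresholding H_k,
# which keeps only the k largest entries of the iterate.
import numpy as np

def iht(A, y, k, n_iter=300):
    """Seek a k-sparse x with A x ~ y (A assumed to behave like a RIP matrix)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe gradient step size
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)       # gradient step
        keep = np.argsort(np.abs(x))[-k:]      # indices of k largest entries
        mask = np.zeros_like(x)
        mask[keep] = 1
        x *= mask                              # hard thresholding: H_k(x)
    return x

rng = np.random.default_rng(0)
m, n, k = 80, 200, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)       # random sensing matrix
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
xhat = iht(A, A @ x0, k)
print(np.linalg.norm(xhat - x0))               # typically small recovery error
```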

5 January 2017, 15h-16h, Jean-Marie Mirebeau (CNRS, Laboratoire de mathématiques d'Orsay)
Title: Computing curvature-penalized minimal paths with the Fast Marching algorithm, with applications to image segmentation. [Slides]
Abstract: We consider shortest-path models with curvature penalization, such as the Euler/Mumford elasticas, or the Reeds-Shepp car with or without reverse gear. To compute the minimal-energy path joining two given points, we approximate these singular models by strongly anisotropic Riemannian or Finslerian metrics on the product space \(\mathbb{R}^d \times S^{d-1}\). The associated eikonal equations are then solved via specialized variants of the Fast Marching algorithm. We present applications to the segmentation of tubular structures in medical images.

24 November 2016, 14h-15h, Jean-Michel Morel (ENS Cachan)
Title: The ego-motion scale space. [Slides]
Abstract: This is a joint work with Javier Sánchez Pérez (Universidad de Las Palmas de Gran Canaria). We address the homographic stabilization of video, the process by which the jitter of a moving camera is compensated automatically from the video itself, in the absence of external calibration information like the one that would be provided by accelerometers or gyroscopes. I will discuss the various ways to define video stabilization. Then I will show several examples illustrating the visual benefits and inconveniences of stabilization. It turns out that the filtering process of the signal produced by the stabilization brings valuable intrinsic information about ego-motion. This yields what we naturally called the ego-motion scale space. Indeed, the stabilization signal can be the object of a time-frequency analysis and yield an intrinsic description of the camera motion.

24 November 2016, 15h-16h, Maureen Clerc (INRIA)
Title: Imaging brain activity [Slides]
Abstract: The living human brain is a tremendously complex organ that modern science is striving to better understand. Electroencephalography (EEG) makes it possible to study it non-invasively, at a macroscopic scale. Typically, EEG datasets consist of multi-trial and multi-sensor signals, buried in very strong noise, making information extraction extremely challenging. In this talk I will address brain activity reconstruction and its application to real-time brain activity interpretation for brain-computer interfaces.

3 November 2016, 14h-15h, Stéphane Mallat (Ecole Normale Supérieure)
Title: Unsupervised Learning and Inverse Problems with Deep Neural Networks [Slides]
Abstract: Deep neural networks have obtained remarkable results in learning generative image models. We show that this opens a new probabilistic framework to define non-Gaussian and non-ergodic random processes, which can be estimated with a reduced number of samples. The mathematics are introduced through multiscale wavelet scattering networks and applied to image and audio textures, but also to standard statistical physics processes such as Ising models or stochastic geometry. We explain how such models are applied to inverse problems and super-resolution.

3 November 2016, 15h-16h, Emilie Chouzenoux (Université Paris-Est Marne-La-Vallée)
Title: A Block Parallel Majorize-Minimize Memory Gradient Algorithm [Slides]
Abstract: In the field of 3D image recovery, huge amounts of data need to be processed. Parallel optimization methods are then of particular interest since they make it possible to overcome memory limitation issues, while benefiting from the intrinsic acceleration provided by recent multicore computing architectures. In this context, we propose a Block Parallel Majorize-Minimize Memory Gradient (BP3MG) algorithm for solving large-scale optimization problems. This algorithm combines a block coordinate strategy with an efficient parallel update. The proposed method is applied to a 3D microscopy image restoration problem involving a depth-variant blur, where it is shown to lead to significant computational time savings with respect to a sequential approach.

6 October 2016, 14h-15h, Frédéric Champagnat (ONERA)
Title: Physics-based spatio-temporal regularization for fluid velocity field measurement [Slides]
Abstract: Particle image velocimetry (PIV) is an essential tool for investigating turbulence, opening the way to Lagrangian analysis and providing a means to access pressure measurements. The development of high-speed PIV (called TR-PIV, for "time resolved") has enabled the emergence of new classes of methods relying on the spatio-temporal coherence of velocity fields. The most common TR-PIV approaches rely on a spatio-temporal Taylor expansion of the motion field. Exploiting these regularities with "generic" regularization tools already compensates effectively for the shortcomings of TR imaging (limited spatial resolution, biases due to spatial aliasing). The purpose of this presentation is to address the physical regularization of such data, based here on the incompressible Navier-Stokes equations (or physical approximations thereof). We first give the general principles of assimilation methods, which estimate velocity fields strictly satisfying Navier-Stokes from TR-PIV images. We then present an original alternative based on an approximation of Navier-Stokes which, under certain hypotheses, yields a time-resolved field from a measurement of the mean field and a time-resolved point measurement. We illustrate the SNR-improvement and super-resolution capabilities of these methods, and outline their limits and current research directions. Collaborators: R. Yegavian, B. Leclaire, O. Marquet, S. Beneddine, D. Sipp

6 October 2016, 15h-16h, Stephanie Allassonniere (Paris 5)
Title: Mixed-effect model for the spatiotemporal analysis of longitudinal manifold-valued data [Slides]
Abstract: In this work, we propose a generic hierarchical spatiotemporal model for longitudinal manifold-valued data, which consist of repeated measurements over time for a group of individuals. This model allows us to estimate a group-average trajectory of progression, considered as a geodesic of a given Riemannian manifold. Individual trajectories of progression are obtained as random variations, consisting of parallel shifting and time reparametrization, of the average trajectory. These spatiotemporal transformations allow us to characterize changes in the direction and in the pace at which trajectories are followed. We propose to estimate the parameters of the model using a stochastic version of the expectation-maximization (EM) algorithm, the Monte Carlo Markov Chain Stochastic Approximation EM (MCMC SAEM) algorithm. This generic spatiotemporal model is used to analyze the temporal progression of a family of biomarkers. The progression model estimates a normative scenario of the progressive impairments of several cognitive functions, considered here as biomarkers, during the course of Alzheimer's disease. The estimated average trajectory provides a normative scenario of disease progression, while the random effects provide unique insights into the variations in the ordering and timing of the succession of cognitive impairments across individuals.