Variational methods and optimization in imaging


Mathematics of Imaging Workshop #1

Tentative program

Monday 4 February : In memory of our dear friend and colleague Mila Nikolova

  • 10h30 : Welcome/Coffee
  • 11h-11h45 : Carola Schoenlieb (A geometric integration approach to non-smooth and non-convex optimisation) (Slides)
  • 11h45-12h30 : Antonin Chambolle (Finite element discretizations of the total variation) (Slides)
  • 12h30-14h : Lunch break
  • 14h-14h45 : Fabien Pierre (Coupling variational method with CNN for image colorization) (Slides)
  • 14h45-15h30 : Joachim Weickert (Stable Models and Algorithms for Backward Diffusion Evolutions) (Slides)
  • 15h30-16h : Coffee break
  • 16h-16h45 : Gabriele Steidl (Vector-valued optimal Lipschitz extensions on finite graphs)

Tuesday 5 February

  • 9h30-10h15 : Christoph Schnörr (The Assignment Flow) (Slides)
  • 10h15-10h45 : Coffee break
  • 10h45-11h30 : Guy Gilboa (Characterizing functionals and flows by nonlinear eigenvalue analysis)
  • 11h30-12h15 : Nicolas Papadakis (Covariant LEAst-square Re-fitting for Image Restoration) (Slides)
  • 12h15-14h : Lunch break
  • 14h-14h45 : Xavier Bresson (Convolutional Neural Networks on Graphs)
  • 14h45-15h30 : Emilie Chouzenoux (Deep Unfolding of a Proximal Interior Point Method for Image Restoration) (Slides)
  • 15h30-16h : Coffee break
  • 16h-16h45 : Camille Couprie (Image generative modeling for future prediction or inspirational purposes) (Slides)
  • 16h45-17h30 : Rebecca Willett (Learning to Solve Inverse Problems in Imaging) (Slides)
  • 18h30 : Welcome cocktail at Tour Zamansky, Jussieu

Wednesday 6 February

  • 9h30-10h15 : Anders Hansen (On computational barriers in mathematics of information and instabilities in deep learning for inverse problems)
  • 10h15-10h45 : Coffee break
  • 10h45-11h30 : Vincent Duval (An atomic norm perspective on total variation regularization in image processing) (Slides)
  • 11h30-12h15 : Clarice Poon (On support localisation, the Fisher metric and optimal sampling in off-the-grid sparse regularisation) (Slides)
  • 12h15-14h30 : Lunch break
  • 14h30-15h15 : Charles Dossal (Exact rate of Nesterov Scheme) (Slides)
  • 15h15-16h : Rémy Abergel (The Shannon Total Variation) (Slides)
  • 16h-16h45 : Coffee break

Thursday 7 February

  • 9h30-10h15 : Blanche Buet (A varifold approach to surface approximation and curvature estimation on point clouds) (Slides)
  • 10h15-10h45 : Coffee break
  • 10h45-11h30 : Kristian Bredies (Infimal-convolution-type regularization for inverse problems in imaging) (Slides)
  • 11h30-12h15 : Caroline Chaux (From the modelization of direct problems in image processing to the resolution of inverse problems) (Slides)
  • 12h15-14h : Lunch break
  • 14h-14h45 : Yves van Gennip (Variational methods on graphs with applications in imaging and data classification)
  • 14h45-15h30 : Nicolas Bonneel (Sliced Partial Optimal Transport) (Slides)
  • 15h30-16h : Coffee break
  • 16h-16h45 : Martin Rumpf (Metamorphosis on generalized image manifolds)
  • 16h45-17h30 : Dirk Lorenz (Quadratically regularized optimal transport) (Slides)

Friday 8 February

  • 9h30-10h15 : Albert Fannjiang (Blind Ptychography: Theory and Algorithm) (Slides)
  • 10h15-11h : Luca Zanni (Spectral properties of steplength selections in gradient methods: from unconstrained to constrained optimization) (Slides)
  • 11h-11h30 : Coffee break
  • 11h30-12h15 : Hugues Talbot (Discrete multigrid convergent estimators of curvature)
  • Lunch break - end of the workshop

Abstracts

Rémy Abergel
Title: The Shannon Total Variation
Abstract: Joint work with Lionel Moisan. In image processing problems, the minimization of total variation (TV) based energies requires discretization schemes, such as the commonly used finite differences approach. Unfortunately, such schemes generally lead to images which are difficult to interpolate at sub-pixel scales, which can be extremely problematic for subsequent processing. In this talk, we study a Fourier-based estimate called Shannon total variation (STV), which behaves much better in terms of sub-pixel accuracy and isotropy. We will first explain how the STV regularization can be efficiently handled with modern dual algorithms, and show that replacing the classical discrete TV model by this STV variant does not raise any theoretical or numerical difficulties. We will then consider many classical TV-based restoration models, such as image denoising (Rudin, Osher and Fatemi), image deblurring and spectrum extrapolation (Guichard-Malgouyres), where the improved behavior of the Shannon total variation yields images that are easy to interpolate. Lastly, we will propose a new STV-regularized optimization problem (involving a data-fidelity term formulated in the frequency domain), which can be used to remove aliasing from an image, or given an image which is difficult to interpolate, can produce a visually similar image which can be easily interpolated. The experimental results that we provide show the interesting perspectives opened by this model in applications where the correct sampling of the data (according to the Shannon sampling theory) is carefully taken into account.
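The sub-pixel behaviour of STV rests on band-limited (Shannon) interpolation, which amounts to zero-padding in the Fourier domain. Below is a minimal numpy sketch of this zoom for a square image and an integer zoom factor (function name ours; the special handling of Nyquist coefficients for even sizes is ignored); the STV of an image can then be estimated with finite differences on such an oversampled grid.

```python
import numpy as np

def shannon_zoom(u, z):
    """Band-limited (Shannon) zoom of an N x N image by an integer factor z,
    via zero-padding of the centered spectrum. Sketch only: the Nyquist
    coefficients of even-sized images are not split as they should be."""
    N = u.shape[0]
    M = z * N
    U = np.fft.fftshift(np.fft.fft2(u))
    P = np.zeros((M, M), dtype=complex)
    lo = (M - N) // 2
    P[lo:lo + N, lo:lo + N] = U
    # rescale to compensate for the 1/M^2 normalization of ifft2
    return np.real(np.fft.ifft2(np.fft.ifftshift(P))) * z**2
```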

Jérôme Bobin
Title: Sparse matrix factorization, and its applications in astrophysics
Abstract: Unsupervised matrix factorization is a classical mathematical problem that plays a key role in statistics, for instance in blind source separation (BSS). In this presentation we show how the problem of separating sparse sources is of utmost importance in astrophysics. More specifically, we will see how sparse matrix factorization (SMF) has been applied in cosmology to yield an accurate estimation of the so-called CMB (cosmic microwave background, the most ancient light observable in our Universe) from the Planck space mission. It has also recently been adapted to understand the content of multispectral images from high-energy astrophysics, especially X-ray images. We will see how SMF has been tailored to account for the statistics of such data, and illustrate this with recent results obtained from Chandra data.
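For orientation (notation ours), the underlying model is \( X = A S + N \), where the rows of \( S \) are the sources, \( A \) is the unknown mixing matrix and \( N \) is noise; a typical sparse matrix factorization estimates both factors through a problem of the form \[ \min_{A,\, S} \; \frac{1}{2} \| X - A S \|_F^2 + \lambda \| S \|_1 , \] possibly with the sparsity measured in a transformed (e.g. wavelet) domain, as in the GMCA family of algorithms.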

Nicolas Bonneel
Title: Sliced Partial Optimal Transport
Abstract: Sliced optimal transport is a blazingly fast way to compute a notion of optimal transport between uniform measures supported on point clouds via 1-d projections. However, it requires these point clouds to have the same cardinality. This talk will present a fast numerical scheme to compute partial optimal transport in 1-d: this corresponds to an alignment problem often solved with dynamic programming, though our solution is much faster. We integrate this 1-d alignment algorithm within a sliced transport framework, for applications such as color transfer. We also make use of sliced partial optimal transport to solve point cloud registration tasks such as those traditionally solved with ICP. I will show results involving hundreds of thousands of points computed within seconds or minutes, as well as preliminary results on sliced partial Wasserstein barycenters.
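For context, in the equal-cardinality case that the talk generalizes, sliced transport reduces to sorting along random 1-d projections. A minimal numpy sketch (names ours, not from the authors' code), moving a source cloud toward a target cloud by stochastic sliced steps:

```python
import numpy as np

def sliced_transfer(source, target, n_iter=200, step=1.0, seed=0):
    """Sliced OT between same-size point clouds: project on a random
    direction, sort both projections, and displace each source point
    toward its rank-matched target along that direction."""
    rng = np.random.default_rng(seed)
    x = source.astype(float).copy()
    d = x.shape[1]
    for _ in range(n_iter):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        px, pt = x @ theta, target @ theta
        ix, it = np.argsort(px), np.argsort(pt)
        disp = np.empty(len(x))
        disp[ix] = pt[it] - px[ix]      # 1-d optimal matching by sorting
        x += step * disp[:, None] * theta
    return x
```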

Kristian Bredies
Title: Infimal-convolution-type regularization for inverse problems in imaging
Abstract: Over the last decades, infimal-convolution-type techniques have developed into a viable set of tools in variational imaging. Nowadays, the infimal convolution has successfully been used to construct various regularization functionals for inverse problems in imaging. It is commonly interpreted as a way to realize two or more competing variational image models within a single functional. Many convex models are based on this approach, such as, for instance, regularization on multiple orders of smoothness via total variation (TV) or total generalized variation (TGV), energies for cartoon/texture decomposition, as well as the modelling of different time scales for image sequences. In the talk, we consider a general framework for the infimal convolution of regularization functionals, with a focus on the one-homogeneous case. We show that the latter class of functionals can be combined arbitrarily by infimal convolution in order to regularize a general class of ill-posed inverse problems, provided that their kernels are finite-dimensional and that each functional is coercive up to its kernel, leading to a well-posed decomposition of the image to recover. We then discuss recently introduced instances where these conditions are met and infimal convolution has been successfully applied: an oscillatory version of the second-order total generalized variation (TGV\( ^{osci} \)) as well as anisotropic total (generalized) variation. In particular, imaging applications are shown where infimal-convolution-type regularization is beneficial, such as magnetic resonance imaging (MRI), electron tomography, and photoacoustic imaging. Finally, we discuss how the constructed regularization functionals can be used to derive dedicated efficient preconditioners for numerical optimization algorithms.

Xavier Bresson
Title: Convolutional Neural Networks on Graphs
Abstract: In recent years, deep learning methods have achieved unprecedented performance on a broad range of problems in various fields, from computer vision to speech recognition. So far, research has mainly focused on developing deep learning methods for grid-structured data, while many important applications have to deal with graph-structured data. Such geometric data are becoming increasingly important in computer graphics and 3D vision, sensor networks, drug design, biomedicine, recommendation systems, and web applications. The purpose of this talk is to introduce the emerging field of deep learning on graphs and to overview existing solutions as well as applications for this class of problems.
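One concrete construction behind this line of work is spectral graph convolution with Chebyshev polynomials of the graph Laplacian, which avoids any eigendecomposition. A minimal sketch (assuming the Laplacian has been rescaled so that its spectrum lies in \( [-1,1] \), and at least two filter coefficients are given):

```python
import numpy as np

def cheb_conv(L, x, theta):
    """y = sum_k theta[k] T_k(L) x, with T_k the Chebyshev polynomials
    computed by the recurrence T_k = 2 L T_{k-1} - T_{k-2}.
    L: rescaled Laplacian (spectrum in [-1, 1]), x: node signal,
    theta: filter coefficients (len(theta) >= 2 assumed)."""
    t0, t1 = x, L @ x
    y = theta[0] * t0 + theta[1] * t1
    for k in range(2, len(theta)):
        t0, t1 = t1, 2 * (L @ t1) - t0
        y = y + theta[k] * t1
    return y
```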

Blanche Buet
Title: A varifold approach to surface approximation and curvature estimation on point clouds
Abstract: We propose a natural framework for the study of surfaces and their different discretizations, based on varifolds. Varifolds were introduced by Almgren to carry out the study of minimal surfaces. Though mainly used in the context of rectifiable sets, they turn out to be well suited to the study of discrete objects as well. Let us briefly explain what a \( d \)-varifold is: it is a Radon measure on \( \Omega \times G_{d,n} \), where \( G_{d,n} = \{ d\text{-vector planes of } \mathbb{R}^n \} \) is the \( d \)-Grassmannian. It can equivalently be understood as the data of a Radon measure \( \mu \) on \( \mathbb{R}^n \) and a probability measure \( \nu_x \) on \( G_{d,n} \) at each point \( x \) in the support of \( \mu \). Using this point of view, we can easily associate a \( d \)-varifold with a \( d \)-submanifold \( M \) of \( \mathbb{R}^n \): we take the surface measure for \( \mu \) (the \( d \)-Hausdorff measure restricted to \( M \), which can be weighted) and, for \( \nu_x \), the Dirac mass at the tangent plane \( T_x M \) in \( G_{d,n} \). Loosely speaking, mass and tangent planes are enough to define a varifold. Hence, given a finite set of points \( \{ x_i \}_{i=1 \ldots N} \subset \mathbb{R}^n \), weighted by masses \( \{ m_i \}_{i=1 \ldots N} \subset \mathbb{R}_+ \) and provided with directions \( \{ P_i \}_{i=1 \ldots N} \subset G_{d,n} \), we associate the \( d \)-varifold \[ V_N = \sum_{i=1}^N m_i \, \delta_{(x_i, P_i)} . \] While the varifold structure is flexible enough to adapt to both regular and discrete objects, it also allows us to define variational notions of mean curvature and second fundamental form based on the divergence theorem. Thanks to a regularization of these weak formulations, we propose a notion of discrete curvature (actually a family of discrete curvatures associated with a regularization scale) relying only on the varifold structure. We prove convergence properties under a natural growth assumption: the scale of regularization must be large with respect to the accuracy of the discretization. We performed numerical computations of mean curvature and Gaussian curvature on point clouds in \( \mathbb{R}^3 \) to illustrate this approach. Joint work with Gian Paolo Leonardi (Modena) and Simon Masnou (Lyon).
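The divergence-theorem-based notion of curvature mentioned above is the first variation of a varifold (notation schematic): for a vector field \( X \), \[ \delta V(X) = \int_{\Omega \times G_{d,n}} \mathrm{div}_P\, X(x) \; dV(x,P), \] and when \( \delta V \) is representable by integration against \( \mu \), the representing vector field defines the generalized mean curvature \( H \), so that \( \delta V(X) = - \int H \cdot X \, d\mu \).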

Antonin Chambolle
Title: Finite element discretizations of the total variation
Abstract: In this talk we will discuss the merits of the P1 and non-conforming P1 finite elements for approximating the total variation, in particular for discontinuous functions. We propose, in 2D, an automatic mesh adaptation process which adapts to the direction of the discontinuities.
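For orientation (notation ours): for a conforming P1 (continuous, piecewise affine) function \( u_h \) on a triangulation \( \mathcal{T}_h \), the total variation is exactly \[ \mathrm{TV}(u_h) = \sum_{T \in \mathcal{T}_h} |T| \; \big| \nabla u_h|_T \big| , \] while for non-conforming P1 elements, which are discontinuous across edges, the distributional gradient additionally carries jump terms along the edges; this is one reason such discretizations can represent discontinuous functions more faithfully.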

Caroline Chaux
Title: From the modelization of direct problems in image processing to the resolution of inverse problems
Abstract: In this work, we are interested in the resolution of inverse problems raised by many image processing applications. We consider inverse problems starting from models (understanding the acquisition process), then address their resolution (formulated as an optimization problem) while considering the parameters or hyperparameters involved all along the process (e.g. noise nature/intensity, regularization parameters). Different models will be considered, corresponding to different application cases such as tensor factorization, source separation and time-frequency inpainting. All these issues are addressed by adopting a variational approach, leading to various optimization problems that we propose to solve by developing proximal approaches.

Emilie Chouzenoux
Title: Deep Unfolding of a Proximal Interior Point Method for Image Restoration
Abstract: Variational methods are widely applied to ill-posed inverse problems because they can embed prior knowledge about the solution. However, the level of performance of these methods significantly depends on a set of parameters, which can be estimated through computationally expensive and time-consuming processes. In contrast, deep learning offers very generic and efficient architectures, at the expense of explainability, since it is often used as a black box without any fine control over its output. Deep unfolding provides a convenient approach to combine variational and deep learning approaches. Starting from a variational formulation for image restoration, we develop iRestNet, a neural network architecture obtained by unfolding a proximal interior point algorithm. Hard constraints, encoding desirable properties for the restored image, are incorporated into the network thanks to a logarithmic barrier, while the barrier parameter, the stepsize, and the penalization weight are learned by the network. We derive explicit expressions for the gradient of the proximity operator for various choices of constraints, which allows training iRestNet with gradient descent and backpropagation. In addition, we provide theoretical results regarding the stability of the network. Numerical experiments on image deblurring problems show that the proposed approach outperforms both state-of-the-art variational and machine learning methods in terms of image quality. Joint work with C. Bertocchi, M.C. Corbineau, J.C. Pesquet and M. Prato.

Camille Couprie
Title: Image generative modeling for future prediction or inspirational purposes
Abstract: Generative models, and in particular adversarial ones, are becoming prevalent in computer vision: they enhance artistic creation, inspire designers, and prove useful in semi-supervised learning and robotics applications. An important prerequisite for intelligent behavior is the ability to anticipate future events. Predicting the appearance of future video frames is a proxy task towards this ability. We will present how generative adversarial networks (GANs) can help, along with novel approaches that predict in the higher-level feature space of semantic segmentations. In a second part, we will see how to develop the ability of GANs to deviate from training examples and generate novel images. Finally, since a limitation of GANs is that they produce raw images at low resolution, we present solutions to produce vectorized results.

Charles Dossal
Title: Exact rate of Nesterov Scheme
Abstract: In 1983, Nesterov proposed an inertial gradient scheme to minimize convex functions which ensures a \( 1/n^2 \) decay rate. In this talk, we give the exact decay rate of this scheme depending on the geometrical properties of the function to minimize.
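For reference, one common form of the scheme for an \( L \)-smooth convex \( f \), with step \( s \le 1/L \), reads \[ x_n = y_{n-1} - s \nabla f(y_{n-1}), \qquad y_n = x_n + \frac{n-1}{n+2} \left( x_n - x_{n-1} \right), \] which guarantees \( f(x_n) - \min f = O(1/n^2) \); the talk sharpens this worst-case bound into an exact rate driven by the geometry of \( f \).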

Vincent Duval
Title: An atomic norm perspective on total variation regularization in image processing
Abstract: It is folklore knowledge that total (gradient) variation regularization tends to promote piecewise constant, "cartoon-like" images. In this talk I will relate that property to the description of the extreme points of the total variation unit ball. These extreme points have been characterized by Ambrosio, Caselles, Masnou and Morel as the indicator functions of "simple sets". I will explain how it is possible to describe the solutions of variational problems as a sum of such functions, by using a general representation principle. This is joint work with C. Boyer, A. Chambolle, Y. De Castro, F. de Gournay and P. Weiss.

Albert Fannjiang
Title: Blind Ptychography: Theory and Algorithm
Abstract: Blind ptychography is a phase retrieval method using multiple coded diffraction patterns from different, overlapping parts of an unknown extended object illuminated with an unknown window function. As such, blind ptychography is the inverse problem of simultaneously recovering the object and the window function given the intensities of the windowed Fourier transform. We derive a general set of conditions under which the object and the window function can be uniquely determined up to a scaling factor and an affine phase factor. We also characterize all the other ambiguities inherent to the raster scan, which consists of the shift positions of the standard windowed Fourier transform, and propose an explicit remedy. Finally, we present a reconstruction algorithm based on Douglas-Rachford splitting, with initialization informed by the uniqueness theory.
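In symbols (notation ours): with unknown object \( f \) and unknown window \( w \), the measurements are the windowed Fourier intensities \[ b_t(\omega) = \Big| \sum_{x} f(x)\, w(x - t)\, e^{-2\pi i\, x \cdot \omega} \Big|^2 \] over a set of shifts \( t \) (the scan positions), and blind ptychography asks for \( f \) and \( w \) jointly, whence the scaling and affine phase ambiguities discussed in the talk.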

Guy Gilboa
Title: Characterizing functionals and flows by nonlinear eigenvalue analysis
Abstract: Nonquadratic regularizers and nonlinear flows are often difficult to characterize analytically. Nonlinear eigenvalue analysis can provide a convenient framework for such investigations. Two examples are given. First, we examine nonlinear eigenfunctions (calibrable sets) of adaptive-anisotropic total-variation. Theoretical and experimental results show the type of geometrical structures that can be perfectly preserved under the regularization or descent flow, generalizing the TV theory. A second part examines explicit methods for p-Laplacian flows. Analytic solutions of the flow for p-Laplacian eigenfunctions suggest a new type of stability criterion, generalizing the CFL time-step bound.
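The central object is the nonlinear eigenvalue problem attached to a convex, typically one-homogeneous, functional \( J \): \[ \lambda u \in \partial J(u). \] For such \( J \), an eigenfunction evolves under the gradient flow \( u_t \in -\partial J(u) \) by pure rescaling, \( u(t) = (1 - \lambda t)_+\, u_0 \), so its shape is preserved until extinction; for TV, indicators of calibrable sets (e.g. discs) are the model examples.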

Anders Hansen
Title: On computational barriers in mathematics of information and instabilities in deep learning for inverse problems
Abstract: Modern mathematics of information relies heavily on computing minimisers of optimisation problems such as linear programming, constrained and unconstrained Lasso, Basis Pursuit etc. We will discuss the following, potentially surprising, issue. When given irrational inputs or using floating point arithmetic, we have the following phenomenon. For any of the mentioned optimisation problems and any integer \( K > 2 \), there exists a class of well-conditioned inputs such that: (1) no algorithm can produce a minimiser with \( K \) correct digits; (2) there does exist an algorithm that can produce \( K-1 \) correct digits, but any algorithm doing so will take an arbitrarily long time; (3) moreover, the problem of computing \( K-2 \) digits is in P, that is, there exists an algorithm providing a solution with \( K-2 \) correct digits with runtime polynomial in the number of variables. A seemingly unrelated problem is the phenomenon of instabilities in deep learning. It is a well-documented issue that deep learning for classification problems becomes unstable. We will discuss how deep learning for inverse problems in general also becomes unstable. Paradoxically, despite the mentioned instabilities, it is possible to produce neural networks that are completely stable for Magnetic Resonance Imaging (MRI). We will show how such neural networks require no training (no deep learning), have recovery guarantees, and that there exist efficient algorithms to compute them. The existence of such networks is intimately related to the \( (K,K-1,K-2) \)-phenomenon described above regarding the existence of algorithms for optimisation problems in modern mathematics of information.

Dirk Lorenz
Title: Quadratically regularized optimal transport
Abstract: Among regularization techniques for optimal transport, entropic regularization has played a pivotal role. The main reason may be its computational simplicity: the Sinkhorn-Knopp iteration can be implemented in two or even one line and enjoys a linear convergence rate. However, some care is needed to compute optimizers for small regularization parameters, and convergence can be quite slow for badly behaved data. Faster algorithms, e.g. Newton methods, are hard to analyze and tend to be unstable in practice. Moreover, the continuous theory is intricate in this case and takes place in Orlicz-Luxemburg spaces (as we will illustrate in this talk). After sketching parts of the continuous theory for entropic regularization, we will analyze a different regularizer, namely a simple quadratic penalty. First our focus lies on the continuous case, where it is still quite challenging to show existence of suitable solutions for the dual problem. Then we will derive different numerical methods for the discrete problem, including a globally convergent Newton method which converges very fast to high accuracy even for fairly small regularization parameters. The talk is based on joint work with Christoph Brauer, Christian Clason, Paul Manns, Christian Meyer, and Benedikt Wirth.
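For concreteness, the Sinkhorn-Knopp iteration referred to above, in a minimal numpy sketch (numerically naive: for small eps one would iterate in the log domain):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iter=1000):
    """Entropic OT between histograms mu (n,) and nu (m,) with cost C (n, m):
    alternate scalings of the Gibbs kernel until the marginals match.
    Returns the transport plan."""
    K = np.exp(-C / eps)
    v = np.ones_like(nu, dtype=float)
    for _ in range(n_iter):
        u = mu / (K @ v)
        v = nu / (K.T @ u)
    return u[:, None] * K * v[None, :]
```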

Nicolas Papadakis
Title: Covariant LEAst-square Re-fitting for Image Restoration
Abstract: In this talk, a framework to remove parts of the systematic errors affecting popular restoration algorithms is presented, with a special focus on image processing tasks. Generalizing ideas that emerged for \( \ell_1 \) regularization, an approach re-fitting the results of standard methods towards the input data is developed. Total variation regularization and non-local means are special cases of interest. Important covariant information that should be preserved by the re-fitting method is identified, and the importance of preserving the Jacobian (w.r.t. the observed signal) of the original estimator is emphasized. Then, a numerical approach is proposed. It has a twicing flavor and allows re-fitting the restored signal by adding back a local affine transformation of the residual term. The benefits of the method are illustrated on numerical simulations for image restoration tasks. This is joint work with Charles-Alban Deledalle (CNRS), Joseph Salmon (Univ. Montpellier) and Samuel Vaiter (CNRS).

Fabien Pierre
Title: Coupling variational method with CNN for image colorization
Abstract: Our work aims to combine the powerful prediction of convolutional neural networks (CNNs) with the pixel-level accuracy of variational methods. The limitations of CNN-based image colorization approaches will be described. We then focus on a CNN that is able to compute a statistical color distribution for each pixel of the image from a learning process on a large color image database. After describing its limitations, the variational method of Pierre et al. (2015) is briefly recalled. This method selects a color from a given set while regularizing the result. By combining this approach with a CNN, we have designed a fully automatic image colorization framework that improves accuracy in comparison to the CNN alone. Numerical experiments show the accuracy provided by our method.

Clarice Poon
Title: On support localisation, the Fisher metric and optimal sampling in off-the-grid sparse regularisation
Abstract: Sparse regularization is a central technique for both machine learning and imaging sciences. Existing performance guarantees assume a separation of the spikes based on an ad-hoc (usually Euclidean) minimum distance condition, which ignores the geometry of the problem. In this talk, we study the BLASSO (i.e. the off-the-grid version of \( \ell_1 \) LASSO regularization) and show that the Fisher-Rao distance is the natural way to ensure and quantify support recovery. Under a separation imposed by this distance, I will present results which show that stable recovery of a sparse measure can be achieved when the sampling complexity is (up to log factors) linear in the sparsity. On deconvolution problems, which are translation invariant, this generalizes existing results in the literature to the multi-dimensional setting. For more complex translation-varying problems, such as Laplace transform inversion, this gives the first geometry-aware guarantees for sparse recovery. This is joint work with Nicolas Keriven and Gabriel Peyré.
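To fix notation (ours): the BLASSO replaces the \( \ell_1 \) norm of the LASSO by the total-variation norm of a measure, \[ \min_{m \in \mathcal{M}(X)} \; \frac{1}{2} \| y - \Phi m \|^2 + \lambda\, |m|(X), \] where the minimization runs over Radon measures on the parameter space \( X \) and \( \Phi \) is the continuous measurement operator; the separation conditions discussed in the talk are imposed between the spikes of the sought measure.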

Martin Rumpf
Title: Metamorphosis on generalized image manifolds
Abstract: In the metamorphosis model, the space of images is equipped with a Riemannian metric measuring both the cost of transporting image intensities and their variation along motion lines. In this talk, a recently introduced variational time discretization to compute discrete geodesics and a discrete exponential map will be reviewed. The classical metamorphosis model considers images as square-integrable functions and is thus insensitive to image features such as sharp interfaces or fine texture patterns. To resolve this drawback, we no longer treat images as plain intensity maps; instead, we consider two different approaches based on convolutional neural network methodology. In an image analysis approach, we use deep CNN features to capture local structures and semantic information, and morph images in feature space. Alternatively, in an image synthesis approach, we take into account learned rotationally invariant kernels for sparse image representation and morph images in the space of this representation. This is joint work with Alexander Effland, Thomas Pock, and Erich Kobler.

Christoph Schnörr
Title: The Assignment Flow
Abstract: The assignment flow is a dynamical system that evolves on an elementary statistical manifold and performs image labeling, i.e. context-sensitive image classification. It provides a smooth and computationally efficient alternative to non-smooth discrete graphical models, and makes it possible to study basic problems related to the design of larger systems for image analysis: supervised labeling, unsupervised label learning, and learning using optimal control. The talk reports the mathematical ingredients (information theory, discrete optimal transport, geometric integration), recent results and perspectives.

Carola Schoenlieb
Title: A geometric integration approach to non-smooth and non-convex optimisation
Abstract: The optimisation of nonsmooth, nonconvex functions without access to gradients is a particularly challenging problem that is frequently encountered, for example in model parameter optimisation. Bilevel optimisation of parameters is a standard setting in areas such as variational regularisation problems and supervised machine learning. We present efficient and robust derivative-free methods called randomised Itoh--Abe methods. These are generalisations of the Itoh--Abe discrete gradient method, a well-known scheme from geometric integration, which had previously only been considered in the smooth setting. We demonstrate that the method and its favourable energy dissipation properties are well-defined in the nonsmooth setting. Furthermore, we prove that whenever the objective function is locally Lipschitz continuous, the iterates almost surely converge to a connected set of Clarke stationary points. We present an implementation of the methods and apply it to various test problems. The numerical results indicate that the randomised Itoh--Abe methods are superior to state-of-the-art derivative-free optimisation methods in solving nonsmooth problems, while remaining competitive in terms of efficiency. If time allows, we will also give some results in the smooth setting, where we can derive convergence rates. This is joint work with Erlend Riis, Matthias Ehrhardt, Torbjørn Ringholm and Reinout Quispel.
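Schematically, the coordinate-wise Itoh--Abe discrete gradient update determines the \( i \)-th coordinate of the next iterate from the scalar equation \[ \frac{x_i^{k+1} - x_i^k}{\tau} = - \frac{V(\ldots, x_i^{k+1}, \ldots) - V(\ldots, x_i^k, \ldots)}{x_i^{k+1} - x_i^k}, \] which uses only function evaluations (hence is derivative-free) and dissipates \( V \) by construction, since multiplying through by \( x_i^{k+1} - x_i^k \) shows the energy decreases at each update; the randomised variants draw the update directions at random.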

Gabriele Steidl
Title: Vector-valued optimal Lipschitz extensions on finite graphs
Abstract: Let \( G := (V,E,w) \) be an undirected connected weighted graph with weight function \( w : E \to [0,1] \) and \( \emptyset \neq U \subseteq V \), where \( (u,v) \not \in E \) if \( u,v \in U \). We deal with minimizers of the functionals \[ E_{p} f:= \sum_{u \in V} \Big( \sum_{v \sim u} w(u,v)^p |f(u) - f(v)|^p \Big), \] \[E_{\infty} f:=\max_{u \in V} \left(\max_{v \sim u} w(u,v) |f(u) - f(v)| \right) \] subject to \( f|_U = g \) for vector-valued functions \( f \) and address their relation to midrange filters and \( p \)-Laplacians. Joint work with M. Bacak, J. Hertrich and S. Neumayer.

Hugues Talbot
Title: Discrete multigrid convergent estimators of curvature
Abstract: Recent works have indicated the potential of using curvature as a regularizer in image segmentation, in particular for the class of thin and elongated objects. These are ubiquitous in biomedical imaging (e.g. vascular networks), in which length regularization can sometimes perform badly, as well as in texture identification. However, curvature is a second-order differential measure, so its estimators are sensitive to noise. The straightforward extensions of total variation are not convex, making them a challenge to optimize. State-of-the-art techniques make use of a coarse approximation of curvature that limits practical applications. We argue that curvature must instead be computed using a multigrid convergent estimator, and we propose a new digital curvature flow which mimics continuous curvature flow. We illustrate its potential as a post-processing step to a variational segmentation framework.

Yves van Gennip
Title: Variational methods on graphs with applications in imaging and data classification
Abstract: Applications that can be described by variational models profit from all the advantages those models bring along. Both on the functional level and on the level of the associated differential equations, powerful techniques have been developed over the years to study these models. Until fairly recently, such models were typically formulated in a continuum setting, i.e. as the minimization of a functional over an admissible class of functions whose domains are subsets of Euclidean space or Riemannian manifolds. The field of variational methods and partial differential equations (PDEs) on graphs aims to harness the power of variational methods and PDEs to tackle problems that inherently have a graph (network) structure. In this talk we will encounter the graph Ginzburg--Landau model, a paradigmatic example of a variational model on graphs. Just as its continuum forebear is used to model phase separation on a continuum domain (it assigns to each point of the domain a value from an (approximately) discrete set of values), the graph Ginzburg--Landau model describes phase separation on the nodes of a graph. This makes it extremely well suited for applications such as data clustering, data classification, community detection in networks, and image segmentation. Theoretically there are also interesting questions to ask, often driven by properties that have already been established for the continuum Ginzburg--Landau model, such as Gamma-convergence properties of the functional and relationships between its associated differential equations. This presentation will give an overview of some recent developments.
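For reference, a standard form of the graph Ginzburg--Landau functional on a weighted graph (constants vary across the literature) is \[ GL_\varepsilon(u) = \frac{1}{2} \sum_{i,j} \omega_{ij} \, (u_i - u_j)^2 + \frac{1}{\varepsilon} \sum_{i} W(u_i), \] with \( \omega_{ij} \) the edge weights and \( W \) a double-well potential, e.g. \( W(s) = s^2 (s-1)^2 \): the first term penalizes disagreement across strongly connected nodes, while the second drives node values towards the two wells (the phases).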

Joachim Weickert
Title: Stable Models and Algorithms for Backward Diffusion Evolutions
Abstract: Backward diffusion equations are potentially useful for image enhancement and deblurring. However, these processes are regarded as typical representatives of ill-posed problems that suffer from intrinsic instabilities. These difficulties have prevented many researchers from using such evolutions. The goal of this talk is to show that this fear is unsubstantiated, provided that one supplements the models with suitable stabilisation techniques and takes care that the numerical algorithms reproduce all qualitative properties of the continuous models in an adequate way. Prototypical models include forward-backward diffusion processes and repulsive particle systems with range constraints. Joint work with Martin Welk (UMIT), Leif Bergerhoff (Saarland University), Marcelo Càrdenas (Saarland University), and Guy Gilboa (Technion).

Rebecca Willett
Title: Learning to Solve Inverse Problems in Imaging
Abstract: Many challenging image processing tasks can be described by an ill-posed linear inverse problem: deblurring, deconvolution, inpainting, compressed sensing, and superresolution all lie in this framework. Traditional inverse problem solvers minimize a cost function consisting of a data-fit term, which measures how well an image matches the observations, and a regularizer, which reflects prior knowledge and promotes images with desirable properties like smoothness. Recent advances in machine learning and image processing have illustrated that it is often possible to learn a regularizer from training data that can outperform more traditional regularizers. I will describe an end-to-end, data-driven method of solving inverse problems inspired by the Neumann series, called a Neumann network. Rather than unroll an iterative optimization algorithm, we truncate a Neumann series which directly solves the linear inverse problem with a data-driven nonlinear regularizer. The Neumann network architecture outperforms traditional inverse problem solution methods, model-free deep learning approaches, and state-of-the-art unrolled iterative methods on standard datasets. Finally, when the images belong to a union of subspaces and under appropriate assumptions on the forward model, we prove there exists a Neumann network configuration that well-approximates the optimal oracle estimator for the inverse problem and demonstrate empirically that the trained Neumann network has the form predicted by theory. This is joint work with Davis Gilton and Greg Ongie.
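The starting point is the Neumann series identity: if \( \| I - \eta X^\top X \| < 1 \), where \( X \) is the forward operator, then \[ (X^\top X)^{-1} X^\top y = \eta \sum_{j=0}^{\infty} \left( I - \eta X^\top X \right)^j X^\top y . \] Truncating the series after \( B \) terms and inserting a learned mapping \( R \) into each term yields the network, schematically \( x_0 = \eta X^\top y \), \( x_{j+1} = (I - \eta X^\top X)\, x_j - R(x_j) \), with output \( \sum_{j=0}^{B} x_j \) (our paraphrase of the construction).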

Luca Zanni
Title: Spectral properties of steplength selections in gradient methods: from unconstrained to constrained optimization
Abstract: Steplength selection strategies have a remarkable effect on the efficiency of gradient-based methods for both unconstrained and constrained optimization. In recent years, many challenging optimization problems arising from different domains of the applied sciences, such as imaging and machine learning, have been successfully tackled with first-order approaches thanks to the design of new adaptive steplength rules. A crucial aspect at the basis of these new rules is the ability to capture low-cost second-order information by exploiting the special relationship between the steplengths and the spectrum of the Hessian of the objective function. In this talk, starting from a spectral analysis of popular steplength rules in unconstrained optimization, we introduce some ideas for exploiting the spectral properties of steplengths in the case of gradient projection approaches for box-constrained problems. This study suggests how state-of-the-art steplength rules need to be modified to take into account the presence of box constraints. Furthermore, the combination of the new rules with variable metric strategies is also discussed. Numerical results on both randomly generated test problems and imaging applications are reported, evaluating the behaviour of the considered steplengths within gradient projection methods. This is joint work with S. Crisci, F. Porta and V. Ruggiero.
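A canonical example of such spectral steplength rules are the Barzilai--Borwein choices: with \( s_k = x_k - x_{k-1} \) and \( z_k = \nabla f(x_k) - \nabla f(x_{k-1}) \), \[ \alpha_k^{BB1} = \frac{s_k^\top s_k}{s_k^\top z_k}, \qquad \alpha_k^{BB2} = \frac{s_k^\top z_k}{z_k^\top z_k}, \] whose reciprocals are Rayleigh-quotient approximations of Hessian eigenvalues; the question addressed in the talk is how such rules behave, and how they should be modified, once projections onto the constraint set are interleaved.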

Sponsors