- Analysis Seminar
- Tuesday, November 24, 2020 - 14:00, one-hour slot (talk runs 50 minutes)
- Carlos Cabrelli – University of Buenos Aires – email@example.com
For any nonnegative density f and radially decreasing interaction potential W, the celebrated Riesz rearrangement inequality states that the interaction energy E[f] = \iint f(x) f(y) W(x-y) dx dy satisfies E[f] <= E[f^*], where f^* is the radially decreasing rearrangement of f. A natural question is to look for a quantitative version of this inequality: if its two sides almost agree, how close must f be to a translation of f^*? Previously, stability estimates were known only for characteristic functions. I will discuss recent work with Xukai Yan, in which we found a simple proof of stability estimates for general densities.
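A quick numerical sanity check of the inequality in one dimension (an illustration, not from the talk; the grid, the density, and the potential below are arbitrary choices):

```python
import numpy as np

# Discretized check of E[f] <= E[f^*] on a 1D grid.
n = 201
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]

f = np.abs(np.sin(4 * x)) * (np.abs(x) < 0.8)   # nonnegative, not symmetric decreasing
W = np.exp(-np.abs(x[:, None] - x[None, :]))    # radially decreasing potential

# Symmetric decreasing rearrangement on the grid: sort the values of f in
# decreasing order and place them by increasing distance from the center.
idx = np.argsort(np.abs(np.arange(n) - n // 2), kind="stable")
f_star = np.empty_like(f)
f_star[idx] = np.sort(f)[::-1]

def energy(g):
    # discretized double integral \iint g(x) g(y) W(x-y) dx dy
    return dx * dx * (g @ W @ g)

print(energy(f), energy(f_star))   # energy(f) <= energy(f_star)
```

Since the rearrangement only permutes the values of f, it preserves the mass; the gain in energy comes from concentrating that mass where the potential is largest.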
I will also discuss another work, with Matias Delgadino and Xukai Yan, in which we constructed an interpolation curve between any two radially decreasing densities with the same mass and showed that the interaction energy is convex along this interpolation. As an application, this yields uniqueness of steady states in aggregation-diffusion equations with any attractive interaction potential for diffusion power m >= 2, where the threshold is sharp.
The Yamabe problem asks whether, given a closed Riemannian manifold, one can find a conformal metric of constant scalar curvature (CSC). An affirmative answer was given by Schoen in 1984, following contributions from Yamabe, Trudinger, and Aubin, by establishing the existence of a function that minimizes the so-called Yamabe energy functional; the minimizing function corresponds to the conformal factor of the CSC metric.
We address the quantitative stability of minimizing Yamabe metrics. On any closed Riemannian manifold we show, in a quantitative sense, that if a function nearly minimizes the Yamabe energy, then the corresponding conformal metric is close to a CSC metric. Generically, this closeness is controlled quadratically by the Yamabe energy deficit. However, we construct an example demonstrating that this quadratic estimate is false in general. This is joint work with Max Engelstein and Luca Spolaor.
We improve on some recent results of Sagiv and Steinerberger that quantify the following uncertainty principle: for a function f with mean zero, either the size of its zero set or the cost of transporting the mass of its positive part to its negative part must be large. We also provide a sharp upper estimate of the transport cost of the positive part of an eigenfunction of the Laplacian.
This proves a conjecture of Steinerberger and provides a lower bound on the size of the nodal set of the eigenfunction. Finally, we use a similar technique to measure how well the points of a design on a manifold are equidistributed. This is joint work with Tom Carroll and Xavier Massaneda.
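As a one-dimensional illustration of the transport cost in question (a sketch with an arbitrary test function, not an example from the talk): for a mean-zero f on an interval, the Wasserstein-1 cost of moving the mass f_+ dx onto f_- dx equals the integral of |F|, where F is the antiderivative of f (the 1D Kantorovich formula).

```python
import numpy as np

# f = sin(2*pi*x) on [0,1]: mean zero, positive part on [0,1/2], negative on [1/2,1].
n = 100_000
x = np.linspace(0.0, 1.0, n + 1)
dx = x[1] - x[0]
f = np.sin(2 * np.pi * x)

# Antiderivative F(x) = \int_0^x f(t) dt via the trapezoid rule.
F = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dx)))

# W1 transport cost of f_+ onto f_-; analytically 1/(2*pi) for this f.
cost = np.sum(np.abs(F)) * dx
print(cost, 1.0 / (2.0 * np.pi))
```

Here F(x) = (1 - cos(2*pi*x))/(2*pi) is nonnegative, and its integral over [0,1] is exactly 1/(2*pi), which the numerics reproduce.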
The weak-type (1,1) estimate for Calderón-Zygmund operators is fundamental in harmonic analysis. We investigate weak-type inequalities for Calderón-Zygmund singular integral operators using the Calderón-Zygmund decomposition and ideas inspired by Nazarov, Treil, and Volberg. We discuss applications of these techniques in the Euclidean setting, in weighted settings, for multilinear operators, for operators with weakened smoothness assumptions, and in studying the dimensional dependence of the Riesz transforms.
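The Calderón-Zygmund decomposition underlying these weak-type estimates can be sketched as a dyadic stopping-time algorithm. The following minimal one-dimensional version (an illustration with an arbitrary spike function; all names are ours) selects the maximal dyadic blocks on which the average of a nonnegative f exceeds a level lam; each selected average then lies in (lam, 2*lam], and the selected blocks have total length at most ||f||_1 / lam.

```python
import numpy as np

def cz_intervals(f, lam):
    """Maximal dyadic blocks of samples where the average of f exceeds lam.
    Assumes f >= 0, len(f) a power of two, and global average <= lam."""
    out = []
    def visit(lo, hi):
        avg = f[lo:hi].mean()
        if avg > lam:                 # stopping time: select this block and stop
            out.append((lo, hi, avg))
        elif hi - lo > 1:
            mid = (lo + hi) // 2      # otherwise bisect and recurse
            visit(lo, mid)
            visit(mid, hi)
    visit(0, len(f))
    return out

# Toy example: two spikes on a grid of 1024 samples, level lam = 1.
f = np.zeros(1024)
f[100:110] = 10.0
f[600] = 50.0
cubes = cz_intervals(f, lam=1.0)
```

The key point is that a selected block's parent was not selected, so its average was at most lam; since f >= 0, halving the block at most doubles the average, forcing each selected average into (lam, 2*lam]. Chebyshev's inequality on the selected blocks gives the total-length bound.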
Artificial neural networks have gained widespread adoption in recent years as a powerful tool for a variety of machine learning tasks. Training a neural network to approximate a target function involves solving an inherently non-convex problem; in practice, this is done by stochastic gradient descent with random initialization. For the approximation problem with neural networks, error-rate guarantees have been established for various classes of functions; however, these rates are not always achieved in practice because the resulting optimization problem has many local minima.
The challenge we address in this work is the following: we want to find small shallow neural networks that can be trained algorithmically and that achieve guaranteed approximation speed and precision. To keep the networks small, we apply penalties to the weights of the network. We show that, under minimal requirements, all local minima of the resulting problem are well behaved and have the desired small size without sacrificing precision. We adopt the integral neural network framework and use techniques from optimization theory and harmonic analysis to prove our results. In this talk, we will discuss our existing work and promising directions in which this approach could be adopted.
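As a toy numerical illustration of how weight penalties keep a shallow network small (a sketch only, not the authors' method: here the inner layer is frozen at random ReLU features and only the outer weights are trained, with an l1 penalty handled by proximal gradient descent, i.e. soft-thresholding):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = np.abs(x)                                   # target function (arbitrary choice)

m = 50                                          # network width
w = rng.normal(size=m)                          # inner weights, frozen
b = rng.uniform(-1.0, 1.0, size=m)              # inner biases, frozen
H = np.maximum(0.0, np.outer(x, w) + b)         # hidden ReLU activations, (200, m)

lam = 0.01                                      # l1 penalty strength on outer weights
L = np.linalg.eigvalsh(H.T @ H / len(x)).max()  # Lipschitz constant of the gradient
lr = 1.0 / L

a = np.zeros(m)                                 # outer weights
for _ in range(2000):
    grad = H.T @ (H @ a - y) / len(x)           # gradient of the mean-squared loss
    a = a - lr * grad
    a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)  # prox of lam*||a||_1

mse = np.mean((H @ a - y) ** 2)
sparsity = np.mean(a == 0.0)   # fraction of exactly-zero outer weights
```

The soft-thresholding step drives the outer weights of unhelpful hidden units to exactly zero, so the trained network uses far fewer than m units while still fitting the target well.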