TBA by Stephane Dartois
- Series
- Stochastics Seminar
- Time
- Thursday, April 3, 2025 - 15:30 for 1 hour (actually 50 minutes)
- Location
- Skiles 006
- Speaker
- Stephane Dartois – Université Paris-Saclay, CEA, List – stephane.dartois@cea.fr
Please Note: This talk is hosted jointly with the Analysis Seminar.
In the 1990s, Arkadi Nemirovski asked the question:
How hard is it to estimate a solution to an unknown homogeneous linear difference equation with constant coefficients of order S, observed in Gaussian noise on [0,N]?
The class of all such solutions, or "signals," is parametric (described by 2S complex parameters) but extremely rich: it contains all exponential polynomials over C with total degree S, including harmonic oscillations with S arbitrary frequencies. Geometrically, this class corresponds to the projection onto C^n of the union of all shift-invariant subspaces of C^Z of dimension S. We show that the statistical complexity of this class, quantified by the squared minimax radius of the (1-P)-confidence Euclidean norm ball, is nearly the same as for the class of S-sparse signals, namely (S log(N) + log(1/P)) log^2(S) log(N/S) up to a constant factor. Moreover, the corresponding near-minimax estimator is tractable, and it can be used to build a test statistic with a near-minimax detection threshold in the associated detection problem. These statistical results rest upon an approximation-theoretic one: we show that finite-dimensional shift-invariant subspaces admit compactly supported reproducing kernels whose Fourier spectra have nearly the smallest possible p-norms, for all p ≥ 1 at once.
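In symbols, the observation model and the rate stated in the abstract read as follows (a schematic restatement; sigma denotes the noise level, and the precise normalization is as in the preprint):

```latex
% Signals satisfying an unknown homogeneous linear difference equation
% of order S with constant coefficients, observed in Gaussian noise on [0, N]:
\[
\sum_{s=0}^{S} \theta_s\, x_{t-s} = 0, \qquad
y_t = x_t + \sigma \zeta_t, \quad 0 \le t \le N,
\]
% Squared minimax radius of the (1-P)-confidence Euclidean norm ball,
% up to a constant factor, as stated in the abstract:
\[
\mathrm{Risk}^2 \;\asymp\; \sigma^2 \bigl(S \log N + \log(1/P)\bigr)\, \log^2(S)\, \log(N/S).
\]
```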
The talk is based on the recent preprint https://arxiv.org/pdf/2411.03383.
Houdré and Tetali defined a class of isoperimetric constants phi_p of graphs for 1/2 <= p <= 1. When p=1, the classical Cheeger inequality relates phi_1 to the second smallest eigenvalue of the normalized Laplacian matrix. Houdré and Tetali conjectured that a similar Cheeger-type inequality holds for p=1/2, which, if true, would be a strengthening of Cheeger's inequality. Morris and Peres proved the Houdré-Tetali conjecture up to an additional logarithmic factor, using techniques from evolving sets. In this talk, we discuss the following results about this conjecture:
- There is a family of counterexamples to the conjecture of Houdré and Tetali, showing that the logarithmic factor is needed.
- Morris and Peres' result can be recovered using standard spectral arguments.
- The Houdré-Tetali conjecture is true for any constant p strictly bigger than 1/2, which is also a strengthening of Cheeger's inequality.
If time permits, we also discuss other strengthenings of Cheeger's inequality. No background is assumed from the audience.
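Schematically, with lambda_2 the second smallest eigenvalue of the normalized Laplacian (the precise definition of phi_p is as in Houdré and Tetali; constants and normalizations may differ):

```latex
% Classical Cheeger inequality (the case p = 1):
\[
\frac{\lambda_2}{2} \;\le\; \phi_1 \;\le\; \sqrt{2\lambda_2}.
\]
% Conjectured Cheeger-type inequality at p = 1/2 (Houdre--Tetali);
% since \phi_{1/2} \ge \phi_1, it would strengthen the upper bound above.
% Morris and Peres established it up to an extra logarithmic factor:
\[
\phi_{1/2} \;\le\; C\sqrt{\lambda_2}.
\]
```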
The normal distribution appears in a wide and disparate set of circumstances, and this ubiquity is explained by the central limit phenomenon. This talk will explore several forms of the central limit theorem, as well as different methods of proof. Highlights include a new method of moments proof for the entries of a random point on a hypersphere and results for traces of large random matrices utilizing the Malliavin-Stein method.
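As a quick illustration of the hypersphere setting (a numerical sketch, not the speaker's proof): the entries of a uniform random point on the sphere S^{n-1}, rescaled by sqrt(n), are approximately standard Gaussian for large n, which a moment check makes visible.

```python
import numpy as np

# Entries of a uniform random point on S^{n-1}, rescaled by sqrt(n), are
# approximately N(0, 1) for large n.  Compare the first few empirical
# moments against the Gaussian moments 0, 1, 0, 3.
rng = np.random.default_rng(0)
n, samples = 1000, 20000

g = rng.standard_normal((samples, n))
points = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform on the sphere
entries = np.sqrt(n) * points[:, 0]                    # first coordinate, rescaled

moments = [entries.mean(), (entries**2).mean(),
           (entries**3).mean(), (entries**4).mean()]
print(np.round(moments, 2))  # close to the Gaussian moments [0, 1, 0, 3]
```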
In recent years, significant progress has been made in our understanding of the quantitative behavior of random matrices. Such results include delocalization properties of eigenvectors and tail estimates for the smallest singular value. A key ingredient in their proofs is a 'distance theorem', which is a small ball estimate for the distance between a random vector and a subspace. Building on work of Livshyts, and of Livshyts, Tikhomirov, and Vershynin, we introduce a new distance theorem for inhomogeneous vectors and subspaces spanned by the columns of an inhomogeneous matrix. Such a result has a number of applications for generalizing results about the quantitative behavior of i.i.d. matrices to matrices without any identical distribution assumptions. To highlight this, we show that the smallest singular value estimate of Rudelson and Vershynin, proven for i.i.d. subgaussian rectangular matrices, holds true for inhomogeneous and heavy-tailed matrices.
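For context, the rectangular smallest singular value estimate of Rudelson and Vershynin mentioned above has, schematically, the following shape (constants C, c and the precise subgaussian assumptions are as in their paper); here A is an N x n random matrix with N >= n:

```latex
\[
\mathbb{P}\Bigl( s_{\min}(A) \;\le\; \varepsilon\bigl(\sqrt{N} - \sqrt{n-1}\bigr) \Bigr)
\;\le\; (C\varepsilon)^{\,N-n+1} + e^{-cN},
\qquad \varepsilon \ge 0.
\]
```

So for rectangular matrices the smallest singular value is of order sqrt(N) - sqrt(n-1) with high probability; the result in the talk extends this shape beyond identically distributed entries.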
This talk is partially based on joint work with Max Dabagia.
Iterative algorithms are the workhorses of modern statistical signal processing and machine learning. While the choice of an algorithm and its hyperparameters determines both the speed and fidelity of the learning pipeline, it is common for this choice to be made heuristically, either by expensive trial-and-error or by comparing upper bounds on convergence rates of various candidate algorithms. Motivated by these issues, I will present a toolbox for deriving “state evolutions” for a wide variety of algorithms with random data. These are non-asymptotic, near-exact predictions of the statistical behavior of the algorithm, which apply even when the underlying optimization problem is nonconvex or the algorithm is randomly initialized. We will showcase these predictions on deterministic and stochastic variants of complex algorithms employed in some canonical statistical models.
In this talk, we explore random trigonometric polynomials with dependent coefficients, moving beyond the typical assumption of independent or Gaussian-distributed coefficients. We show that, under mild conditions on the dependencies between the coefficients, the asymptotic behavior of the expected number of real zeros is still universal. This universality result, to our knowledge, is the first of its kind for non-Gaussian dependent settings. Additionally, we present an elegant proof, highlighting the robustness of real zeros even in the presence of dependencies. Our findings bring the study of random polynomials closer to models encountered in practice, where dependencies between coefficients are common.
Joint work with Jurgen Angst and Guillaume Poly.
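For intuition (an illustrative simulation, not part of the paper): in the classical independent Gaussian case, the expected number of real zeros of f(x) = sum_{k<=n} (a_k cos kx + b_k sin kx) on [0, 2*pi) is known to be asymptotic to 2n/sqrt(3), about 1.155 n; the universality result says this asymptotic persists under mild dependence. A Monte Carlo check of the Gaussian baseline:

```python
import numpy as np

def mean_zero_count_ratio(n=50, trials=200, grid=4000, seed=0):
    """Monte Carlo estimate of E[#zeros of f on [0, 2*pi)] / n for
    f(x) = sum_{k=1}^n a_k cos(kx) + b_k sin(kx), a_k, b_k i.i.d. N(0,1).
    Zeros are counted as sign changes of f on a fine grid."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 2 * np.pi, grid, endpoint=False)
    k = np.arange(1, n + 1)
    C, S = np.cos(np.outer(k, x)), np.sin(np.outer(k, x))
    total = 0
    for _ in range(trials):
        f = rng.standard_normal(n) @ C + rng.standard_normal(n) @ S
        total += np.count_nonzero(np.sign(f[:-1]) != np.sign(f[1:]))
    return total / (trials * n)

print(mean_zero_count_ratio())  # approximately 2 / sqrt(3), about 1.155
```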
Mean-field particle systems are well understood by now. Typical results involve obtaining a McKean-Vlasov equation for the fluid limit that provides a good approximation for the particle system over compact time intervals. However, when the driving vector field lacks a gradient structure or in the absence of convexity or functional inequalities, the long-time behavior of such systems is far from clear. In this talk, I will discuss two such systems, one arising in the context of flocking and the other in the context of sampling (Stein Variational Gradient Descent), where there is no uniform-in-time control on the discrepancy between the limit and prelimit dynamics. We will explore methods involving Lyapunov functions and weak convergence which shed light on their long-time behavior in the absence of such uniform control.
Based on joint works with Amarjit Budhiraja, Dilshad Imon (UNC, Chapel Hill), Krishnakumar Balasubramanian (UC Davis) and Promit Ghosal (UChicago).
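As background on the sampling example, a minimal sketch of the standard Stein Variational Gradient Descent update (the kernel, step size, and particle count are illustrative choices, not the analysis in the talk):

```python
import numpy as np

def svgd_step(x, grad_logp, h=1.0, eps=0.1):
    """One SVGD update for a 1-D particle cloud x, with the RBF kernel
    k(y, z) = exp(-(y - z)^2 / (2 h)).  Each particle moves along an
    attractive (kernel-weighted score) plus a repulsive (kernel-gradient)
    direction, averaged over the cloud."""
    diff = x[:, None] - x[None, :]          # diff[i, j] = x_i - x_j
    K = np.exp(-diff**2 / (2 * h))          # kernel matrix k(x_j, x_i)
    drive = K @ grad_logp(x)                # sum_j k(x_j, x_i) grad log p(x_j)
    repel = (diff * K / h).sum(axis=1)      # sum_j d/dx_j k(x_j, x_i)
    return x + eps * (drive + repel) / len(x)

# Target: standard normal, so grad log p(x) = -x.  Particles started far
# from the target drift toward it and spread out to approximate N(0, 1).
rng = np.random.default_rng(0)
x = rng.normal(5.0, 0.5, size=100)
for _ in range(500):
    x = svgd_step(x, lambda z: -z)
print(round(x.mean(), 2), round(x.std(), 2))  # near 0 and 1, respectively
```

This finite-particle system is exactly the kind whose discrepancy from its mean-field limit need not be controlled uniformly in time.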