Seminars and Colloquia by Series

On the Houdré-Tetali conjecture about an isoperimetric constant of graphs and Cheeger's inequality

Series
Stochastics Seminar
Time
Thursday, November 21, 2024 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Lap Chi Lau, University of Waterloo

Houdré and Tetali defined a class of isoperimetric constants phi_p of graphs for 1/2 <= p <= 1.  When p=1, the classical Cheeger's inequality relates phi_1 to the second smallest eigenvalue of the normalized Laplacian matrix.  Houdré and Tetali conjectured that a similar Cheeger-type inequality holds for p=1/2, which if true would be a strengthening of Cheeger's inequality.  Morris and Peres proved the Houdré-Tetali conjecture up to an additional log factor, using techniques from evolving sets.  In this talk, we discuss the following results about this conjecture:

  - There is a family of counterexamples to the conjecture of Houdré and Tetali, showing that the logarithmic factor is needed.

  - Morris and Peres' result can be recovered using standard spectral arguments. 

  - The Houdré-Tetali conjecture is true for any constant p strictly bigger than 1/2, which is also a strengthening of Cheeger's inequality.

If time permits, we also discuss other strengthenings of Cheeger's inequality.  No background is assumed from the audience.
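
For readers new to the objects involved, the classical Cheeger inequality referenced above can be stated as follows (a standard formulation in terms of graph conductance; the exact normalization used in the talk may differ):

```latex
% Cheeger's inequality for a graph G with normalized Laplacian eigenvalues
% 0 = \lambda_1 \le \lambda_2 \le \dots and conductance
%   \phi(G) = \min_{\mathrm{vol}(S) \le \mathrm{vol}(V)/2} |E(S, V \setminus S)| / \mathrm{vol}(S):
\[
  \frac{\lambda_2}{2} \;\le\; \phi(G) \;\le\; \sqrt{2\,\lambda_2}.
\]
```

The conjectured inequality for $p = 1/2$ would sharpen the right-hand bound; the counterexamples mentioned above show it fails without a logarithmic correction.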

Limiting Expectations: The Central Limit Phenomenon

Series
Stochastics Seminar
Time
Thursday, November 14, 2024 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
David Grzybowski

The normal distribution appears in a wide and disparate set of circumstances, and this ubiquity is explained by the central limit phenomenon. This talk will explore several forms of the central limit theorem, as well as different methods of proof. Highlights include a new method-of-moments proof for the entries of a random point on a hypersphere and results for traces of large random matrices utilizing the Malliavin-Stein method.

Distance theorems and the smallest singular value of inhomogeneous random rectangular matrices

Series
Stochastics Seminar
Time
Thursday, November 7, 2024 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Manuel Fernandez, Georgia Tech

In recent years, significant progress has been made in our understanding of the quantitative behavior of random matrices. Such results include delocalization properties of eigenvectors and tail estimates for the smallest singular value. A key ingredient in their proofs is a 'distance theorem', which is a small ball estimate for the distance between a random vector and a subspace. Building on work of Livshyts and of Livshyts, Tikhomirov, and Vershynin, we introduce a new distance theorem for inhomogeneous vectors and subspaces spanned by the columns of an inhomogeneous matrix. Such a result has a number of applications for generalizing results about the quantitative behavior of i.i.d. matrices to matrices without any identical distribution assumptions. To highlight this, we show that the smallest singular value estimate of Rudelson and Vershynin, proven for i.i.d. subgaussian rectangular matrices, holds true for inhomogeneous and heavy-tailed matrices.
This talk is partially based on joint work with Max Dabagia.
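
As a numerical illustration of the kind of estimate discussed (the i.i.d. Gaussian baseline, not the talk's inhomogeneous setting or proof technique): for an N x n Gaussian matrix, the Rudelson-Vershynin bound says the smallest singular value is of order sqrt(N) - sqrt(n-1). A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 400, 100  # tall rectangular matrix, N >> n

# i.i.d. standard Gaussian entries; the inhomogeneous setting of the talk
# would instead allow the entry distributions to vary across the matrix.
A = rng.standard_normal((N, n))
s_min = np.linalg.svd(A, compute_uv=False)[-1]

prediction = np.sqrt(N) - np.sqrt(n - 1)  # Rudelson-Vershynin scale
print(f"s_min = {s_min:.2f}, predicted order = {prediction:.2f}")
```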

Statistical trajectory predictions for complex algorithms with random data

Series
Stochastics Seminar
Time
Thursday, October 31, 2024 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Ashwin Pananjady, Georgia Tech

Iterative algorithms are the workhorses of modern statistical signal processing and machine learning. While the choice of an algorithm and its hyperparameters determines both the speed and fidelity of the learning pipeline, it is common for this choice to be made heuristically, either by expensive trial-and-error or by comparing upper bounds on convergence rates of various candidate algorithms. Motivated by these issues, I will present a toolbox for deriving “state evolutions” for a wide variety of algorithms with random data. These are non-asymptotic, near-exact predictions of the statistical behavior of the algorithm, which apply even when the underlying optimization problem is nonconvex or the algorithm is randomly initialized. We will showcase these predictions on deterministic and stochastic variants of complex algorithms employed in some canonical statistical models.

Random Polynomials: Universality with Dependency

Series
Stochastics Seminar
Time
Thursday, October 24, 2024 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Oanh Nguyen, Brown University

In this talk, we explore random trigonometric polynomials with dependent coefficients, moving beyond the typical assumption of independent or Gaussian-distributed coefficients. We show that, under mild conditions on the dependencies between the coefficients, the asymptotic behavior of the expected number of real zeros is still universal. This universality result, to our knowledge, is the first of its kind for non-Gaussian dependent settings. Additionally, we present an elegant proof, highlighting the robustness of real zeros even in the presence of dependencies. Our findings bring the study of random polynomials closer to models encountered in practice, where dependencies between coefficients are common.

Joint work with Jürgen Angst and Guillaume Poly.
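
A quick Monte Carlo sanity check of the independent Gaussian baseline that the talk generalizes: for sum_{k<=n} (a_k cos kt + b_k sin kt) with i.i.d. standard Gaussian coefficients, the expected number of real zeros on [0, 2*pi] is asymptotically 2n/sqrt(3). A rough sketch that counts sign changes on a grid:

```python
import numpy as np

rng = np.random.default_rng(1)
n, grid, trials = 50, 4000, 200

t = np.linspace(0.0, 2 * np.pi, grid, endpoint=False)
k = np.arange(1, n + 1)
cos_kt = np.cos(np.outer(k, t))  # shape (n, grid)
sin_kt = np.sin(np.outer(k, t))

counts = []
for _ in range(trials):
    a = rng.standard_normal(n)  # i.i.d. Gaussian coefficients (baseline case)
    b = rng.standard_normal(n)
    x = a @ cos_kt + b @ sin_kt
    counts.append(int(np.count_nonzero(np.diff(np.sign(x)) != 0)))

avg = float(np.mean(counts))
print(f"average zero count = {avg:.1f}, 2n/sqrt(3) = {2 * n / np.sqrt(3):.1f}")
```

The universality result above says the same asymptotic count survives under mild dependence between the coefficients.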

Non-linear mean-field systems in flocking and sampling

Series
Stochastics Seminar
Time
Thursday, October 17, 2024 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Sayan Banerjee, UNC

Mean-field particle systems are well-understood by now. Typical results involve obtaining a McKean-Vlasov equation for the fluid limit that provides a good approximation for the particle system over compact time intervals. However, when the driving vector field lacks a gradient structure or in the absence of convexity or functional inequalities, the long-time behavior of such systems is far from clear. In this talk, I will discuss two such systems, one arising in the context of flocking and the other in the context of sampling (Stein Variational Gradient Descent), where there is no uniform-in-time control on the discrepancy between the limit and prelimit dynamics. We will explore methods involving Lyapunov functions and weak convergence which shed light on their long-time behavior in the absence of such uniform control.


Based on joint works with Amarjit Budhiraja, Dilshad Imon (UNC, Chapel Hill), Krishnakumar Balasubramanian (UC Davis) and Promit Ghosal (UChicago).

Improved performance guarantees for Tukey’s median

Series
Stochastics Seminar
Time
Thursday, October 10, 2024 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Stanislav Minsker, University of Southern California

Is there a natural way to order data in dimension greater than one? The approach based on the notion of data depth, often associated with the name of John Tukey, is among the most popular. Tukey’s depth has found applications in robust statistics, graph theory, and the study of elections and social choice. 

We will give an introduction to the topic, describe the properties of Tukey’s depth, and introduce some remaining open questions as well as our recent progress towards the solutions.

The talk is based on a joint work with Yinan Shen.
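
As a concrete companion to the definition (a brute-force sketch, not the estimators analyzed in the talk): the Tukey (halfspace) depth of a point x with respect to a sample is the smallest fraction of sample points contained in any closed halfspace whose boundary passes through x. In two dimensions it can be approximated by scanning directions:

```python
import numpy as np

def tukey_depth(x, pts, n_dirs=720):
    """Approximate halfspace (Tukey) depth of a 2-D point x w.r.t. sample pts:
    the minimum, over directions u, of the fraction of points p with
    <u, p - x> >= 0."""
    angles = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    proj = (pts - x) @ dirs.T            # shape (n_points, n_dirs)
    return float(np.mean(proj >= 0, axis=0).min())

rng = np.random.default_rng(2)
pts = rng.standard_normal((500, 2))      # symmetric cloud around the origin

deep = tukey_depth(np.zeros(2), pts)               # central point: depth near 1/2
shallow = tukey_depth(np.array([4.0, 0.0]), pts)   # outlying point: depth near 0
print(deep, shallow)
```

The point maximizing this depth is Tukey's median; deep points are "central" in every direction at once, which is the ordering alluded to above.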

Minimum Norm Interpolation Meets The Local Theory of Banach Spaces

Series
Stochastics Seminar
Time
Thursday, October 3, 2024 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Gil Kur, ETH

Minimum-norm interpolators have recently gained attention primarily as an analyzable model to shed light on the double descent phenomenon observed for neural networks. The majority of the work has focused on analyzing interpolators in Hilbert spaces, where typically an effectively low-rank structure of the feature covariance prevents a large bias. More recently, tight vanishing bounds have also been shown for isotropic high-dimensional data for $\ell_p$-spaces with $p\in[1,2)$, leveraging the sparse structure of the ground truth.
This work takes a first step towards establishing a general framework that connects generalization properties of the interpolators to well-known concepts from high-dimensional geometry, specifically, from the local theory of Banach spaces.
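
To make the object concrete in the Hilbert ($\ell_2$) case mentioned above (the $\ell_p$ case studied in the talk is more delicate): in an overparametrized linear model, the minimum $\ell_2$-norm interpolator is given by the Moore-Penrose pseudoinverse, and any other interpolating solution is strictly longer:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 20, 100  # overparametrized: many more features than samples

X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Minimum l2-norm interpolator: theta = pinv(X) @ y lies in the row space of X.
theta = np.linalg.pinv(X) @ y
assert np.allclose(X @ theta, y)  # interpolates the data exactly

# Any other solution theta + v, with v in the null space of X, also
# interpolates but has strictly larger norm, since theta and v are orthogonal.
null_basis = np.linalg.svd(X)[2][n:].T        # basis for the null space of X
v = null_basis @ rng.standard_normal(d - n)
assert np.allclose(X @ (theta + v), y)
assert np.linalg.norm(theta) < np.linalg.norm(theta + v)
```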

Near-optimal estimation on the union of shift-invariant subspaces

Series
Stochastics Seminar
Time
Thursday, September 26, 2024 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Dmitrii Ostrovskii, Georgia Tech

In the 1990s, Arkadi Nemirovski asked the following question:

How hard is it to estimate a solution to an unknown homogeneous linear difference equation with constant coefficients of order S, observed in Gaussian noise on [0, N]?

The class of all such solutions, or "signals," is parametric---described by 2S complex parameters---but extremely rich: it includes the weighted sums of S exponentials, polynomials of degree S, harmonic oscillations with S arbitrary frequencies, and their algebraic combinations. Geometrically, this class is the union of all S-dimensional shift-invariant subspaces of the space of two-sided sequences, and of interest is the minimax risk on it with respect to the mean-squared error on [0,N]. I will present a recent result that shows this minimax risk to be O( S log(N) log(S)^2 ), improving over the state of the art by a polynomial in S factor, and coming within an O( log(S)^2 ) factor from the lower bound. It relies upon an approximation-theoretic construction related to minimal-norm interpolation over shift-invariant subspaces, in the spirit of the Landau-Kolmogorov problem, that I shall present in some detail. Namely, we will see that any shift-invariant subspace admits a bounded-support reproducing kernel whose spectrum has nearly the smallest possible Lp-energies for all p ≥ 1 at once.
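
Two quick examples of the signal class (standard facts, not the talk's construction): a polynomial of degree < S is annihilated by the S-th finite difference, so it solves an order-S homogeneous constant-coefficient difference equation, and a harmonic oscillation satisfies an order-2 recurrence:

```python
import numpy as np

m = np.arange(0, 50)

# A degree-3 polynomial solves a homogeneous constant-coefficient
# difference equation of order S = 4: its 4th finite difference vanishes.
x = 2.0 - m + 0.5 * m**2 + 0.1 * m**3
assert np.allclose(np.diff(x, n=4), 0.0)

# A harmonic oscillation cos(w*m) solves an order-2 equation:
#   y[m+2] = 2*cos(w) * y[m+1] - y[m].
w = 0.3
y = np.cos(w * m)
assert np.allclose(y[2:], 2 * np.cos(w) * y[1:-1] - y[:-2])
```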

Pseudo-Maximum Likelihood Theory for High-Dimensional Rank-One Inference

Series
Stochastics Seminar
Time
Thursday, September 19, 2024 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Justin Ko, University of Waterloo

We consider the task of estimating a rank-one matrix from noisy observations. Models that fall in this framework include community detection and spiked Wigner models. In this talk, I will discuss pseudo-maximum likelihood theory for such inference problems. We provide a variational formula for the asymptotic maximum pseudo-likelihood and characterize the asymptotic performance of pseudo-maximum likelihood estimators. We will also discuss the implications of these findings for least squares estimators. Our approach uses recent connections between statistical inference and statistical physics, in particular the connection between the maximum likelihood and the ground state of a modified spin glass.

Based on joint work with Curtis Grant and Aukosh Jagannath.
