
Series: High Dimensional Seminar

We discuss the asymptotic value of the maximal perimeter of a convex set in $n$-dimensional space with respect to certain classes of measures. First, we derive a lower bound for this quantity for a large class of probability distributions; the bound depends only on the moments of the measure. This lower bound is sharp for the Gaussian measure (as shown by Nazarov in 2001) and, more generally, for rotation-invariant log-concave measures (as shown in my 2014 work). We discuss another class of measures for which this bound is sharp. For isotropic log-concave measures, the value of the lower bound is at least $n^{1/8}$.

In addition, we show a uniform upper bound of $Cn\|f\|_{\infty}^{1/n}$ for all log-concave measures in a special position, which is attained for the uniform distribution on the cube. We further bound the maximal perimeter of isotropic log-concave measures by $n^2$.

Series: High Dimensional Seminar

We discuss a general approach to the problem of estimating a smooth function $f(\theta)$ of a high-dimensional parameter $\theta$ of statistical models. In particular, in the case of $n$ i.i.d. Gaussian observations $X_1,\dots, X_n$ with mean $\mu$ and covariance matrix $\Sigma$, the unknown parameter is $\theta = (\mu, \Sigma)$, and our approach yields an estimator of $f(\theta)$ for a function $f$ of smoothness $s>0$ with mean squared error of the order $(\frac{1}{n} \vee (\frac{d}{n})^s) \wedge 1$ (provided that the Euclidean norm of $\mu$ and the operator norms of $\Sigma$ and $\Sigma^{-1}$ are uniformly bounded), with the error rate being minimax optimal up to a log factor (joint result with Mayya Zhilova). The construction of optimal estimators crucially relies on a new bias reduction method in high-dimensional problems, and the bounds on the mean squared error are based on controlling finite differences of smooth functions along certain Markov chains in high-dimensional parameter spaces, as well as on concentration inequalities.

Series: High Dimensional Seminar

Computing the eigenvalues and eigenvectors of a large matrix is a basic task in high dimensional data analysis with many applications in computer science and statistics. In practice, however, data is often perturbed by noise. A natural question is the following: How much does a small perturbation to the matrix change the eigenvalues and eigenvectors? In this talk, I will consider the case where the perturbation is random. I will discuss perturbation results for the eigenvalues and eigenvectors as well as for the singular values and singular vectors. This talk is based on joint work with Van Vu, Ke Wang, and Philip Matchett Wood.
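The deterministic baseline for this question is Weyl's inequality: each eigenvalue of a symmetric matrix moves by at most the operator norm of the perturbation. A minimal numerical check (illustrative only; the matrix size and noise level are my own choices, and this does not reproduce the speaker's random-perturbation results):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# Symmetric base matrix and a small symmetric perturbation.
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
E = 0.01 * rng.standard_normal((n, n)); E = (E + E.T) / 2

eig_A = np.linalg.eigvalsh(A)       # eigenvalues in ascending order
eig_AE = np.linalg.eigvalsh(A + E)

# Weyl: |lambda_i(A + E) - lambda_i(A)| <= ||E||_op for every i.
shift = np.max(np.abs(eig_AE - eig_A))
op_norm_E = np.linalg.norm(E, 2)
print(shift, op_norm_E)
```

For random $E$, the talk's theme is that typical eigenvalue and eigenvector movement is often much smaller than this worst-case bound.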

Series: High Dimensional Seminar

One of the most famous methods for solving large-scale over-determined linear systems is the Kaczmarz algorithm, which iteratively projects the current approximation $x_k$ onto the solution space of the next equation in the system. An elegant proof of the exponential convergence of this method, using a suitable randomization of the process, is due to Strohmer and Vershynin (2009). Many extensions and generalizations of the method have been proposed since then, including the works of Needell, Tropp, Ward, Srebro, Tan, and many others. An interesting unifying view on a number of iterative solvers (including several versions of the Kaczmarz algorithm) was proposed by Gower and Richtarik in 2016. The main idea of their sketch-and-project framework is the following: the random selection of a row (or a block of rows) can be represented as a sketch, that is, left multiplication by a random vector (or matrix), which pre-processes every iteration of the method; the iteration itself is then a projection onto the image of the sketch.
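The Strohmer-Vershynin randomized variant fits in a few lines of numpy. A minimal sketch for a consistent system (matrix sizes, iteration count, and the function name are my own illustrative choices, not from the talk):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Randomized Kaczmarz: pick row i with probability ~ ||a_i||^2,
    then project the iterate onto the hyperplane {x : <a_i, x> = b_i}."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.sum(A**2, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        x = x + (b[i] - a @ x) / row_norms_sq[i] * a
    return x

# Consistent over-determined Gaussian system: the iterates should converge
# exponentially fast to the true solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
print(np.linalg.norm(x_hat - x_true))
```

The probability-proportional-to-row-norm sampling is exactly the randomization that makes the Strohmer-Vershynin exponential convergence proof go through.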

I will give an overview of some of these methods and talk about the role that random matrix theory plays in showing their convergence. I will also discuss our new results with Deanna Needell on the block Gaussian sketch-and-project method.
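In the sketch-and-project picture, one step projects the iterate onto the sketched system $S^\top A x = S^\top b$. With a Gaussian sketch $S$ that looks roughly as follows (an illustrative sketch under my own parameter choices; the block size and iteration budget are arbitrary, and this is not claimed to be the exact method of the new results):

```python
import numpy as np

def gaussian_sketch_project(A, b, block=5, iters=1000, seed=0):
    """Each iteration draws a fresh Gaussian sketch S (m x block) and
    orthogonally projects the iterate onto {x : (S^T A) x = S^T b}."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        S = rng.standard_normal((m, block))
        M = S.T @ A                       # sketched system, block x n
        r = M @ x - S.T @ b               # sketched residual
        # Projection: x <- x - M^T (M M^T)^+ r
        x = x - M.T @ np.linalg.lstsq(M @ M.T, r, rcond=None)[0]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_hat = gaussian_sketch_project(A, b)
print(np.linalg.norm(x_hat - x_true))
```

Taking $S$ to be a randomly chosen coordinate vector recovers plain randomized Kaczmarz as a special case of the same update.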

Series: High Dimensional Seminar

We will try to address a few universality questions for the behavior of large random matrices over finite fields, and then present some modest progress on one of these questions.

Series: High Dimensional Seminar

Linear Schur multipliers, which act on matrices by entrywise multiplication, as well as their generalizations, have been studied for over a century and successfully applied in perturbation theory (as demonstrated in the previous talk). In this talk, we will discuss estimates for finite-dimensional multilinear Schur multipliers underlying these applications.
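For concreteness, the entrywise action is just the Hadamard product, and the standard perturbation-theoretic example is the divided-difference multiplier built from eigenvalues. A toy illustration (my own example values; not a result of the talk):

```python
import numpy as np

# A Schur multiplier M acts on a matrix X entrywise: (M o X)_{ij} = M_{ij} X_{ij}.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))
Y = M * X  # numpy's * on arrays is the entrywise (Hadamard/Schur) product
assert Y[1, 2] == M[1, 2] * X[1, 2]

# Classical example from perturbation theory: the divided-difference matrix
# (f(l_i) - f(l_j)) / (l_i - l_j), which arises when differentiating X -> f(X)
# at a self-adjoint matrix with eigenvalues l_i.  Here f = exp, so f' = f on
# the diagonal.
l = np.array([1.0, 2.0, 4.0])
f = np.exp
D = np.array([[(f(a) - f(b)) / (a - b) if a != b else f(a)
               for b in l] for a in l])
print(D)
```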

Series: High Dimensional Seminar

A single soap bubble has a spherical shape, since it minimizes its surface area subject to a fixed enclosed volume of air. When two soap bubbles collide, they form a “double bubble” composed of three spherical caps. The double bubble minimizes total surface area among all sets enclosing two fixed volumes. This was proven mathematically in landmark results of Hutchings-Morgan-Ritore-Ros and of Reichardt, using the calculus of variations, in the early 2000s. The analogous problem for three or more Euclidean sets is considered difficult, if not impossible. However, if we replace Lebesgue measure in these problems with the Gaussian measure, then my recent work (for 3 sets) and that of Milman-Neeman (for any number of sets) can actually solve these problems, again using the calculus of variations. Time permitting, we will discuss an improvement to the Milman-Neeman result and applications to optimal clustering of data and to designing elections that are resilient to hacking. http://arxiv.org/abs/1901.03934

Series: High Dimensional Seminar

The moment problem is a classical question in real analysis: it asks whether a given set of moments can be realized as the integrals of the corresponding monomials with respect to a Borel measure. The truncated moment problem asks the same question given a finite set of moments. I will explain how some of the fundamental results in the truncated moment problem can be proved (in a very general setting) using elementary convex geometry. No familiarity with moment problems will be assumed. This is joint work with Larry Fialkow.
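A quick illustration of the flavor of the problem: on the real line, a necessary condition for a truncated sequence to come from a Borel measure is that its Hankel moment matrix be positive semidefinite. Here is a check using the moments of the standard Gaussian (an illustrative sanity check of that classical condition, not the construction from the talk):

```python
import numpy as np

# Moments m_k = E[X^k] of the standard Gaussian, k = 0..6.
moments = [1, 0, 1, 0, 3, 0, 15]

def hankel(m, n):
    """Moment (Hankel) matrix H[i, j] = m_{i+j}; it must be positive
    semidefinite whenever the sequence comes from a measure on R."""
    return np.array([[m[i + j] for j in range(n)] for i in range(n)], float)

H = hankel(moments, 4)
print(np.linalg.eigvalsh(H))  # all nonnegative: genuine moments

# Tampering with m_4 violates E[X^4] >= (E[X^2])^2 (Cauchy-Schwarz), and the
# Hankel matrix detects this by losing positive semidefiniteness.
bad = moments.copy()
bad[4] = 0.5
print(np.linalg.eigvalsh(hankel(bad, 3)).min())  # negative
```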

Series: High Dimensional Seminar

We study delocalization properties of null vectors and eigenvectors of matrices with i.i.d. subgaussian entries. Such properties describe quantitatively how "flat" a vector is, and confirm one of the universality conjectures, which states that the distributions of eigenvectors of many classes of random matrices are close to the uniform distribution on the unit sphere. In particular, we obtain lower bounds on the smallest coordinates of eigenvectors, which are optimal, as the case of Gaussian matrices shows. The talk is based on joint work with Konstantin Tikhomirov.
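As a purely numerical illustration of what delocalization means (parameters are my own choices; this is not the talk's theorem, which concerns non-symmetric i.i.d. models): for a large symmetric matrix with Gaussian entries, the coordinates of a bulk unit eigenvector all live near the $n^{-1/2}$ scale of a "perfectly flat" vector.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
G = rng.standard_normal((n, n))
A = (G + G.T) / np.sqrt(2)      # symmetric matrix with Gaussian entries
_, vecs = np.linalg.eigh(A)
v = vecs[:, n // 2]             # a unit eigenvector from the bulk

# Delocalization heuristic: no coordinate dominates, so the sup-norm stays
# close to the n^{-1/2} scale of the uniform vector, far below 1.
print(np.abs(v).max(), 1 / np.sqrt(n))
```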