Seminars and Colloquia by Series

Stochastic analysis and geometric functional inequalities

Series
High Dimensional Seminar
Time
Wednesday, October 9, 2019 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Masha Gordina, University of Connecticut

We will survey different methods of proving functional inequalities for hypoelliptic diffusions and the corresponding heat kernels. Some of these methods are geometric, relying on curvature-dimension inequalities (due to Baudoin-Garofalo); some are probabilistic, such as coupling; and some use structure theory and the Fourier transform on Lie groups. This is based on joint work with M. Asaad, F. Baudoin, B. Driver, T. Melcher, Ph. Mariano et al.
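
For context, the classical Bakry–Émery curvature-dimension condition $CD(\rho, n)$, which the Baudoin–Garofalo approach generalizes to the hypoelliptic setting, reads as follows (a standard formulation, included here only for orientation, not a statement from the talk):

$$\Gamma_2(f) \;\ge\; \frac{1}{n}\,(Lf)^2 + \rho\,\Gamma(f),$$

where $L$ is the generator, $\Gamma(f) = \frac{1}{2}\left(L(f^2) - 2fLf\right)$ is the carré du champ operator, and $\Gamma_2(f) = \frac{1}{2}\left(L\Gamma(f) - 2\Gamma(f, Lf)\right)$ is its iteration.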

Invertibility of inhomogeneous random matrices

Series
High Dimensional Seminar
Time
Wednesday, October 2, 2019 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Galyna Livshyts, Georgia Tech

We will show a sharp estimate on the behavior of the smallest singular value of random matrices under very general assumptions. One of the steps in the proof is a result about the efficient discretization of the unit sphere in n-dimensional Euclidean space. Another step involves studying the regularity of the behavior of lattice sets. Some elements of the proof will be discussed. Based on joint work with Tikhomirov and Vershynin.
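
As a purely illustrative numerical experiment (not part of the proof; the parameter choices and the helper name median_smin below are arbitrary), one can sample matrices with i.i.d. Bernoulli-type entries and observe the typical scale of the smallest singular value:

    import numpy as np

    rng = np.random.default_rng(0)

    def median_smin(n, p=0.5, trials=200):
        """Monte Carlo median of the smallest singular value of an
        n x n matrix with i.i.d. entries: +1 w.p. p, -1 otherwise."""
        vals = []
        for _ in range(trials):
            A = np.where(rng.random((n, n)) < p, 1.0, -1.0)
            vals.append(np.linalg.svd(A, compute_uv=False)[-1])
        return float(np.median(vals))

    for n in (50, 100, 200):
        # Heuristic comparison scale: s_min is expected to be of order n^{-1/2}.
        print(n, median_smin(n), n ** -0.5)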

Size of nodal domains of Erdős–Rényi graphs

Series
High Dimensional Seminar
Time
Wednesday, September 25, 2019 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Han Huang, Georgia Tech

In the realm of Laplacians on Riemannian manifolds, nodal domains have been the subject of intensive research for well over a hundred years.

Given a Riemannian manifold M, let f be an eigenfunction of the Laplacian with respect to some boundary conditions. A nodal domain associated with f is a maximal connected subset of the domain M on which f does not change sign.

Here we examine the discrete case, namely nodal domains for graphs. Dekel, Lee, and Linial showed that for an Erdős–Rényi graph G(n, p), with high probability there are exactly two nodal domains for each eigenvector corresponding to a non-leading eigenvalue. We prove that with high probability, the sizes of these two nodal domains are approximately equal.
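
A minimal sketch of this discrete setup, assuming numpy and networkx are available (the parameters and the choice of adjacency-matrix eigenvectors are illustrative, not the conventions of the paper):

    import numpy as np
    import networkx as nx

    n, p = 200, 0.5
    G = nx.gnp_random_graph(n, p, seed=1)
    A = nx.to_numpy_array(G)

    # Eigenvectors of the adjacency matrix; eigh sorts eigenvalues ascending,
    # so the last column is the leading (Perron) eigenvector.
    _, eigvecs = np.linalg.eigh(A)
    v = eigvecs[:, -2]  # an eigenvector of a non-leading eigenvalue

    # Nodal domains: connected components of the subgraphs induced by the
    # vertices where v is positive and where v is negative.
    pos = [i for i in range(n) if v[i] > 0]
    neg = [i for i in range(n) if v[i] < 0]
    domains = (list(nx.connected_components(G.subgraph(pos))) +
               list(nx.connected_components(G.subgraph(neg))))
    print(len(domains), sorted(len(d) for d in domains))

With high probability one should see exactly two domains of roughly equal size, matching the statement above.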


John’s ellipsoid is not good for approximation

Series
High Dimensional Seminar
Time
Wednesday, September 18, 2019 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Han Huang, Georgia Tech

We study the approximation of convex bodies by polytopes in high dimension.

For a convex set K in R^n, we say that K can be approximated within distance R>1 by a polytope with m facets if there exists a polytope P with m facets such that K contains P and RP contains K.

When K is symmetric, the maximal volume ellipsoid of K plays a central role in the construction of such approximating polytopes with poly(n) facets. In this talk, we will discuss why the situation is entirely different for non-symmetric convex bodies.
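
For orientation, the classical theorem of John (recalled here as background, not as part of the talk) quantifies the ellipsoid approximation behind the symmetric construction: if $E$ is the maximal volume ellipsoid contained in a convex body $K \subset \mathbb{R}^n$, then

$$E \subseteq K \subseteq n\,E,$$

where the dilation is about the center of $E$, and the factor $n$ improves to $\sqrt{n}$ when $K$ is symmetric.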

Geometric inequalities via information theory

Series
High Dimensional Seminar
Time
Wednesday, September 11, 2019 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Jing Hao, Georgia Tech

Using ideas from information theory, we establish lower bounds on the volume and the surface area of a geometric body using the sizes of its slices along different directions. In the first part of the talk, we derive volume bounds for convex bodies using generalized subadditivity properties of entropy combined with entropy bounds for log-concave random variables. In the second part, we investigate a new notion of Fisher information, which we call the L1-Fisher information, and show that certain superadditivity properties of the L1-Fisher information lead to lower bounds for the surface areas of polyconvex sets in terms of their slices.
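
A classical inequality in this spirit (due to Meyer; recalled here only as background) bounds the volume of a convex body $K \subset \mathbb{R}^n$ from below by its coordinate hyperplane slices:

$$\mathrm{vol}_n(K)^{\,n-1} \;\ge\; \frac{n!}{n^n}\,\prod_{i=1}^{n}\mathrm{vol}_{n-1}\left(K \cap e_i^{\perp}\right).$$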

On the QQR codes in coding theory

Series
High Dimensional Seminar
Time
Wednesday, September 4, 2019 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Jing Hao, Georgia Tech

In this talk I will briefly introduce coding theory and a specific family of codes called quasi-quadratic residue (QQR) codes. These codes have large minimum distances, which means they have good error-correcting capabilities. The weights of their codewords are directly related to the numbers of points on corresponding hyperelliptic curves. I will present a heuristic coin-toss model for counting points on hyperelliptic curves, which in turn sheds light on the relation between efficiency and the error-correcting capabilities of QQR codes. I will also show an interesting phenomenon we found concerning the weight enumerator of QQR codes. Lastly, using the bridge between QQR codes and hyperelliptic curves again, we derive the asymptotic behavior of the point distribution of a family of hyperelliptic curves using results from coding theory.
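
A toy version of the coin-toss heuristic, using only the Python standard library: each x contributes two affine points to y^2 = f(x) over F_p when f(x) is a nonzero square and none when it is a non-square, which a fair coin mimics (the curve, the prime, and the helper name affine_points are illustrative choices, not taken from the talk):

    import random

    def affine_points(coeffs, p):
        """Count affine points on y^2 = f(x) over F_p, where coeffs lists
        the coefficients of f from the highest degree down."""
        squares = {(y * y) % p for y in range(1, p)}
        count = 0
        for x in range(p):
            fx = 0
            for c in coeffs:
                fx = (fx * x + c) % p
            if fx == 0:
                count += 1      # the single point (x, 0)
            elif fx in squares:
                count += 2      # the two points (x, y) and (x, -y)
        return count

    random.seed(0)
    p = 10007                                          # a prime modulus
    f = [1] + [random.randrange(p) for _ in range(5)]  # a random degree-5 f
    coin_model = sum(random.choice((0, 2)) for _ in range(p))
    print(affine_points(f, p), coin_model, p)          # all three are near p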

Anti-concentration of random sums with dependent terms, and singularity of sparse Bernoulli matrices

Series
High Dimensional Seminar
Time
Wednesday, August 28, 2019 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Konstantin Tikhomirov, Georgia Tech

We will consider the problem of estimating the singularity probability of sparse Bernoulli matrices, and a related question of anti-concentration of weighted sums of dependent Bernoulli(p) variables.

Based on joint work with Alexander Litvak.
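
As a hedged illustration of the anti-concentration question (a Monte Carlo sketch only; for simplicity it uses independent Bernoulli(p) variables, whereas the talk concerns dependent ones, and the helper name is hypothetical):

    import numpy as np

    rng = np.random.default_rng(2)

    def levy_concentration(weights, p, radius, trials=100_000):
        """Monte Carlo estimate of sup_t P(|sum_i a_i X_i - t| <= radius)
        for independent Bernoulli(p) variables X_i, scanned over a grid of t."""
        X = rng.random((trials, len(weights))) < p
        S = X @ np.asarray(weights, dtype=float)
        grid = np.linspace(S.min(), S.max(), 400)
        return max(float(np.mean(np.abs(S - t) <= radius)) for t in grid)

    a = np.ones(100)   # equal weights: a highly concentrated case
    print(levy_concentration(a, p=0.1, radius=0.5))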

On maximal perimeters of convex sets with respect to measures

Series
High Dimensional Seminar
Time
Wednesday, April 17, 2019 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Galyna Livshyts, Georgia Tech

We discuss the asymptotic value of the maximal perimeter of a convex set in an n-dimensional space with respect to certain classes of measures. First, we derive a lower bound for this quantity for a large class of probability distributions; the lower bound depends only on the moments. This lower bound is sharp in the case of the Gaussian measure (as was shown by Nazarov in 2001) and, more generally, in the case of rotation invariant log-concave measures (as I showed in 2014). We discuss another class of measures for which this bound is sharp. For isotropic log-concave measures, the value of the lower bound is at least n^{1/8}.

In addition, we show a uniform upper bound of Cn||f||^{1/n}_{\infty} for all log-concave measures in a special position, which is attained for the uniform distribution on the cube. We further bound the maximal perimeter of isotropic log-concave measures from above by n^2.
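
For calibration (background facts, not new results of the talk): in the Gaussian case the sharp order of the maximal perimeter is

$$\sup_{K \subset \mathbb{R}^n \ \mathrm{convex}} \gamma_n^{+}(\partial K) \;\asymp\; n^{1/4},$$

with the upper bound due to Ball and the matching lower bound due to Nazarov.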

Optimal estimation of smooth functionals of high-dimensional parameters

Series
High Dimensional Seminar
Time
Wednesday, April 10, 2019 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Vladimir Koltchinskii, Georgia Tech

We discuss a general approach to the problem of estimating a smooth function $f(\theta)$ of a high-dimensional parameter $\theta$ of statistical models. In particular, in the case of $n$ i.i.d. Gaussian observations $X_1,\dots, X_n$ with mean $\mu$ and covariance matrix $\Sigma$, the unknown parameter is $\theta = (\mu, \Sigma)$ and our approach yields an estimator of $f(\theta)$ for a function $f$ of smoothness $s>0$ with mean squared error of the order $(\frac{1}{n} \vee (\frac{d}{n})^s) \wedge 1$ (provided that the Euclidean norm of $\mu$ and the operator norms of $\Sigma,\Sigma^{-1}$ are uniformly bounded), with the error rate being minimax optimal up to a log factor (joint result with Mayya Zhilova). The construction of optimal estimators crucially relies on a new bias reduction method in high-dimensional problems, and the bounds on the mean squared error are based on controlling finite differences of smooth functions along certain Markov chains in high-dimensional parameter spaces as well as on concentration inequalities.

Random matrix perturbations

Series
High Dimensional Seminar
Time
Wednesday, April 3, 2019 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Sean O'Rourke, University of Colorado Boulder

Computing the eigenvalues and eigenvectors of a large matrix is a basic task in high dimensional data analysis with many applications in computer science and statistics. In practice, however, data is often perturbed by noise. A natural question is the following: How much does a small perturbation to the matrix change the eigenvalues and eigenvectors? In this talk, I will consider the case where the perturbation is random. I will discuss perturbation results for the eigenvalues and eigenvectors as well as for the singular values and singular vectors.  This talk is based on joint work with Van Vu, Ke Wang, and Philip Matchett Wood.
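
A minimal numerical sketch of the phenomenon, assuming numpy (the talk's probabilistic results are much finer than the deterministic Weyl bound checked here):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 300
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2                      # a symmetric "data" matrix
    E = 0.01 * rng.standard_normal((n, n))
    E = (E + E.T) / 2                      # small symmetric random noise

    # Weyl's inequality: |lambda_i(A + E) - lambda_i(A)| <= ||E||_op for every i
    # (eigenvalues sorted in the same order on both sides; eigvalsh sorts them).
    shift = np.abs(np.linalg.eigvalsh(A + E) - np.linalg.eigvalsh(A)).max()
    op_norm = np.linalg.norm(E, 2)
    print(shift, op_norm, shift <= op_norm + 1e-12)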
