Seminars and Colloquia by Series

A Generalization to DAGs for Hierarchical Exchangeability

Series
Stochastics Seminar
Time
Thursday, August 22, 2019 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Paul Jung, KAIST

A random array indexed by the paths of an infinitely-branching rooted tree of finite depth is hierarchically exchangeable if its joint distribution is invariant under rearrangements that preserve the tree structure underlying the index set. Austin and Panchenko (2014) prove that such arrays have de Finetti-type representations, and moreover, that an array indexed by a finite collection of such trees has an Aldous-Hoover-type representation.

Motivated by problems in Bayesian nonparametrics and probabilistic programming discussed in Staton et al. (2018), we generalize hierarchical exchangeability to a new kind of partial exchangeability for random arrays which we call DAG-exchangeability. In our setting a random array is indexed by N^{|V|} for some DAG G=(V,E), and its exchangeability structure is governed by the edge set E. We prove a representation theorem for such arrays which generalizes the Aldous-Hoover representation theorem and contains the Austin-Panchenko representation as a special case.
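For orientation, the classical Aldous-Hoover representation that the talk generalizes can be sketched as follows (a standard form for a jointly exchangeable two-dimensional array; conventions vary across references):

```latex
X_{ij} \;\overset{d}{=}\; f\bigl(\alpha,\ \xi_i,\ \xi_j,\ \lambda_{\{i,j\}}\bigr),
\qquad i \neq j,
```

where $f$ is measurable and $\alpha$, $(\xi_i)_{i\ge 1}$, $(\lambda_{\{i,j\}})_{i<j}$ are i.i.d. Uniform$[0,1]$ random variables, with the equality holding in distribution jointly over the whole array.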

Constructive regularization of the random matrix norm

Series
Stochastics Seminar
Time
Sunday, April 28, 2019 - 15:05 for 1 hour (actually 50 minutes)
Location
006
Speaker
Liza Rebrova, UCLA

I will talk about the structure of large square random matrices with centered i.i.d. heavy-tailed entries (only two finite moments are assumed). In our previous work with R. Vershynin we have shown that the operator norm of such a matrix A can be reduced to the optimal sqrt(n)-order with high probability by zeroing out a small submatrix of A, but we did not describe the structure of this "bad" submatrix, nor provide a constructive way to find it. Now we can give a very simple description of this small "bad" subset: it is enough to zero out a small fraction of the rows and columns of A with the largest L2 norms to bring its operator norm to the almost optimal sqrt(loglog(n)*n)-order, under the additional assumption that the entries of A are symmetrically distributed. As a corollary, one can also obtain a constructive procedure to find a small submatrix of A that one can zero out to achieve the same regularization.

I am planning to discuss some details of the proof, the main component of which is the development of techniques that extend constructive regularization approaches known for the Bernoulli matrices (from the works of Feige and Ofek, and Le, Levina and Vershynin) to the considerably broader class of heavy-tailed random matrices.
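A minimal numerical sketch of the zeroing-out procedure described above (the fraction removed and the heavy-tailed distribution are illustrative placeholders, not the constants from the paper):

```python
import numpy as np

def zero_out_heavy_rows_cols(A, frac=0.02):
    """Zero out the frac-fraction of rows and columns of A with the
    largest L2 norms. The fraction is an illustrative choice."""
    n = A.shape[0]
    k = max(1, int(frac * n))
    heavy_rows = np.argsort(np.linalg.norm(A, axis=1))[-k:]
    heavy_cols = np.argsort(np.linalg.norm(A, axis=0))[-k:]
    B = A.copy()
    B[heavy_rows, :] = 0.0
    B[:, heavy_cols] = 0.0
    return B

# Symmetrically distributed heavy-tailed entries with two finite moments
# (symmetrized Pareto/Lomax with tail index 2.5)
rng = np.random.default_rng(0)
n = 400
signs = rng.choice([-1.0, 1.0], size=(n, n))
A = signs * rng.pareto(2.5, size=(n, n))
B = zero_out_heavy_rows_cols(A)
print(np.linalg.norm(A, 2), np.linalg.norm(B, 2))
```

Since zeroing rows and columns amounts to multiplying by coordinate projections on both sides, the operator norm can only decrease; the interesting content of the result is that this small modification already brings it down to near-optimal order.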

TBA by N Demni

Series
Stochastics Seminar
Time
Thursday, April 18, 2019 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Nizar Demni, University of Marseille

Random Neural Networks with applications to Image Recovery

Series
Stochastics Seminar
Time
Thursday, April 11, 2019 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Paul Hand, Northeastern University

Neural networks have led to new, state-of-the-art approaches to image recovery. They provide a contrast to standard image processing methods based on the ideas of sparsity and wavelets. In this talk, we will study two different random neural networks. One acts as a model for a learned neural network that is trained to sample from the distribution of natural images. The other acts as an unlearned model which can be used to process natural images without any training data. In both cases we will use high-dimensional concentration estimates to establish theory for the performance of random neural networks in imaging problems.
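As a rough illustration of the kind of object analyzed here, one can build an expansive fully-connected ReLU network with i.i.d. Gaussian weights mapping a low-dimensional latent code to an "image"; the widths, depth, and scaling below are illustrative assumptions, not the architectures from the talk:

```python
import numpy as np

def random_relu_network(z, widths, seed=0):
    """Expansive ReLU network with i.i.d. Gaussian weights, scaled by
    1/sqrt(fan_in) so activations stay at a comparable magnitude."""
    rng = np.random.default_rng(seed)
    x = z
    for w_out in widths:
        W = rng.standard_normal((w_out, x.size)) / np.sqrt(x.size)
        x = np.maximum(W @ x, 0.0)  # ReLU nonlinearity
    return x

z = np.random.default_rng(1).standard_normal(10)     # low-dimensional latent code
img = random_relu_network(z, widths=[50, 250, 784])  # 784 = 28x28 "pixels"
print(img.shape)
```

The expansive widths (each layer wider than the last) mirror the regime in which high-dimensional concentration arguments for such random networks are typically carried out.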

Constructive regularization of the random matrix norm

Series
Stochastics Seminar
Time
Thursday, March 28, 2019 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Liza Rebrova, Mathematics, UCLA

I will talk about the structure of large square random matrices with centered i.i.d. heavy-tailed entries (only two finite moments are assumed). In our previous work with R. Vershynin we have shown that the operator norm of such a matrix A can be reduced to the optimal sqrt(n)-order with high probability by zeroing out a small submatrix of A, but we did not describe the structure of this "bad" submatrix, nor provide a constructive way to find it. Now we can give a very simple description of this small "bad" subset: it is enough to zero out a small fraction of the rows and columns of A with the largest L2 norms to bring its operator norm to the almost optimal sqrt(loglog(n)*n)-order, under the additional assumption that the entries of A are symmetrically distributed. As a corollary, one can also obtain a constructive procedure to find a small submatrix of A that one can zero out to achieve the same regularization.

I am planning to discuss some details of the proof, the main component of which is the development of techniques that extend constructive regularization approaches known for the Bernoulli matrices (from the works of Feige and Ofek, and Le, Levina and Vershynin) to the considerably broader class of heavy-tailed random matrices.

Iterative linear solvers and random matrices: new bounds for the block Gaussian sketch and project method

Series
Stochastics Seminar
Time
Wednesday, March 27, 2019 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Liza Rebrova, UCLA

One of the most famous methods for solving large-scale over-determined linear systems is the Kaczmarz algorithm, which iteratively projects the previous approximation x_k onto the solution space of the next equation in the system. An elegant proof of the exponential convergence of this method, using a correct randomization of the process, is due to Strohmer and Vershynin (2009). Many extensions and generalizations of the method have been proposed since then, including the works of Needell, Tropp, Ward, Srebro, Tan and many others. An interesting unifying view on a number of iterative solvers (including several versions of the Kaczmarz algorithm) was proposed by Gower and Richtarik in 2016. The main idea of their sketch-and-project framework is the following: the random selection of a row (or a block of rows) can be represented as a sketch, that is, left multiplication by a random vector (or matrix), which pre-processes every iteration of the method; the iteration itself is then a projection onto the image of the sketch.

I will give an overview of some of these methods, and talk about the role that random matrix theory plays in showing their convergence. I will also discuss our new results with Deanna Needell on the block Gaussian sketch and project method.
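A minimal sketch of the randomized Kaczmarz iteration with the Strohmer-Vershynin row-sampling scheme (a single-row instance of the sketch-and-project framework; problem sizes below are illustrative):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=3000, seed=0):
    """Randomized Kaczmarz: at each step, orthogonally project the current
    iterate onto the solution space {x : a_i . x = b_i} of one equation,
    choosing row i with probability ||a_i||^2 / ||A||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    probs = np.linalg.norm(A, axis=1) ** 2
    probs /= probs.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        x = x + ((b[i] - a @ x) / (a @ a)) * a  # projection onto i-th hyperplane
    return x

# Consistent over-determined Gaussian system
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
x_hat = randomized_kaczmarz(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))
```

In the sketch-and-project view, sampling row i corresponds to sketching with a random coordinate vector; replacing it with a Gaussian vector (or a Gaussian matrix, for blocks) yields the Gaussian sketch-and-project variants discussed in the talk.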

TBA by

Series
Stochastics Seminar
Time
Thursday, March 14, 2019 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
TBA, SOM, GaTech

1-d parabolic Anderson model with rough spatial noise

Series
Stochastics Seminar
Time
Thursday, March 7, 2019 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Samy Tindel, Purdue University

In this talk I will first recall some general facts about the parabolic Anderson model (PAM), which can be briefly described as a simple heat equation in a random environment. The key phenomenon observed in this context is called localization. I will review some ways to express this phenomenon, and then single out the so-called eigenvector localization for the Anderson operator. This particular instance of localization motivates our study of large-time asymptotics for the stochastic heat equation. In the second part of the talk I will describe the Gaussian environment we consider, which is rougher than white noise; then I will give an account of the asymptotic exponents we obtain as time goes to infinity. If time allows, I will also give some elements of the proof.
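For orientation, the parabolic Anderson model with spatial noise is commonly written (informally) as a heat equation with a multiplicative random potential; in the one-dimensional setting of the talk it reads:

```latex
\partial_t u(t,x) \;=\; \tfrac{1}{2}\,\partial_{xx} u(t,x) \;+\; u(t,x)\,\dot W(x),
\qquad t > 0,\ x \in \mathbb{R},
```

where $\dot W$ is a (rough) spatial Gaussian noise; the constants and the rigorous interpretation of the product $u\,\dot W$ depend on the regularity of the noise.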

On the reconstruction error of PCA

Series
Stochastics Seminar
Time
Tuesday, March 5, 2019 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 168
Speaker
Martin Wahl, Humboldt University, Berlin

We identify principal component analysis (PCA) as an empirical risk minimization problem with respect to the reconstruction error and prove non-asymptotic upper bounds for the corresponding excess risk. These bounds unify and improve existing upper bounds from the literature. In particular, they give oracle inequalities under mild eigenvalue conditions. We also discuss how our results can be transferred to the subspace distance and, for instance, how our approach leads to a sharp $\sin \Theta$ theorem for empirical covariance operators. The proof is based on a novel contraction property, contrasting previous spectral perturbation approaches. This talk is based on joint works with Markus Reiß and Moritz Jirak.
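A minimal numerical illustration of the reconstruction-error objective (an illustrative computation, not the estimator analysis from the talk):

```python
import numpy as np

def pca_reconstruction_error(X, d):
    """Empirical reconstruction error of rank-d PCA: mean squared distance
    between each centered sample and its projection onto the span of the
    top-d empirical eigenvectors. Minimizing this over all rank-d
    projections is the ERM formulation of PCA."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    U = eigvecs[:, -d:]                     # top-d eigenvectors
    residual = Xc - Xc @ U @ U.T
    return np.mean(np.sum(residual ** 2, axis=1))

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
print([pca_reconstruction_error(X, d) for d in range(1, 6)])
```

The minimal error over rank-d projections equals the sum of the discarded empirical eigenvalues, which is why the excess risk of PCA is naturally controlled through the spectrum of the covariance operator.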

Joint distribution of Busemann functions for the corner growth model

Series
Stochastics Seminar
Time
Thursday, February 28, 2019 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Wai Tong (Louis) Fan, Indiana University, Bloomington

We present the joint distribution of the Busemann functions, in all directions of growth, of the exactly solvable corner growth model (CGM). This gives a natural coupling of all stationary CGMs and leads to new results about geodesics. Properties of this joint distribution are accessed by identifying it as the unique invariant distribution of a multiclass last passage percolation model. This is joint work with Timo Seppäläinen.
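For orientation, in the corner growth model the last-passage time to a site $(m,n)$ satisfies the standard recursion, and Busemann functions arise as limits of passage-time differences (sign and normalization conventions vary across references):

```latex
G(m,n) \;=\; \max\bigl\{\, G(m-1,n),\; G(m,n-1) \,\bigr\} \;+\; \omega_{m,n},
\qquad
B^{\xi}(x,y) \;=\; \lim_{n\to\infty} \bigl( G(x, z_n) - G(y, z_n) \bigr),
\quad \tfrac{z_n}{n} \to \xi,
```

with i.i.d. weights $\omega_{m,n}$ (exponentially distributed in the exactly solvable case) and $\xi$ a direction of growth.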
