Seminars and Colloquia by Series

Thursday, October 24, 2019 - 15:00 , Location: Skiles 006 , Petros Valettas , University of Missouri, Columbia , valettasp@missouri.edu , Organizer:
Thursday, October 10, 2019 - 15:00 , Location: Skiles 006 , Maria Gordina , University of Connecticut , maria.gordina@uconn.edu , Organizer:
Thursday, August 22, 2019 - 15:05 , Location: Skiles 005 , Paul Jung , KAIST , pauljung@kaist.ac.kr , Organizer: Michael Damron

A random array indexed by the paths of an infinitely branching rooted tree of finite depth is hierarchically exchangeable if its joint distribution is invariant under rearrangements that preserve the tree structure underlying the index set. Austin and Panchenko (2014) proved that such arrays have de Finetti-type representations and, moreover, that an array indexed by a finite collection of such trees has an Aldous-Hoover-type representation.

Motivated by problems in Bayesian nonparametrics and probabilistic programming discussed in Staton et al. (2018), we generalize hierarchical exchangeability to a new kind of partial exchangeability for random arrays, which we call DAG-exchangeability. In our setting, a random array is indexed by N^{|V|} for some DAG G = (V, E), and its exchangeability structure is governed by the edge set E. We prove a representation theorem for such arrays which generalizes the Aldous-Hoover representation theorem, and of which the Austin-Panchenko representation is a special case.
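
For orientation, here is a minimal LaTeX statement of the classical Aldous-Hoover representation for a jointly exchangeable two-dimensional array, the base case that the representations above generalize (one standard form among several equivalent ones):

```latex
% Aldous-Hoover (two-dimensional, jointly exchangeable case): there exist
% a measurable function f and i.i.d. Uniform[0,1] random variables
% U, (U_i)_{i >= 1}, (U_{{i,j}})_{i < j} such that, in distribution,
\[
  (X_{ij})_{i \neq j} \;\overset{d}{=}\;
  \bigl( f(U,\, U_i,\, U_j,\, U_{\{i,j\}}) \bigr)_{i \neq j}.
\]
```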

Thursday, April 18, 2019 - 15:05 , Location: Skiles 006 , Nizar Demni , University of Marseille , Organizer: Christian Houdre
Thursday, April 11, 2019 - 15:05 , Location: Skiles 006 , Paul Hand , Northeastern University , p.hand@northeastern.edu , Organizer: Michael Damron

Neural networks have led to new, state-of-the-art approaches for image recovery, in contrast to standard image processing methods based on sparsity and wavelets. In this talk, we will study two different random neural networks. One acts as a model for a learned network that is trained to sample from the distribution of natural images. The other acts as an unlearned model that can be used to process natural images without any training data. In both cases we will use high-dimensional concentration estimates to establish theory for the performance of random neural networks in imaging problems.
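
As a rough illustration of the "unlearned" setting, here is a minimal PyTorch sketch in the spirit of untrained-network priors (e.g. the deep image prior): a randomly initialized decoder is fit to a single observed image, with a fixed random latent code and early stopping acting as the regularizer. The architecture, step count, and learning rate are illustrative assumptions, not the specific networks analyzed in the talk.

```python
# Minimal sketch of an untrained (random) network used as an image prior.
# All architecture choices and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

def make_random_decoder(channels=64, out_channels=1, depth=4):
    """A small upsampling decoder with random (untrained) initialization."""
    layers = []
    for _ in range(depth):
        layers += [
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        ]
    layers += [nn.Conv2d(channels, out_channels, kernel_size=1), nn.Sigmoid()]
    return nn.Sequential(*layers)

def recover(y, steps=200, lr=1e-2):
    """Fit the random network's output to the observed image y.
    Early stopping (small `steps`) acts as the regularizer."""
    torch.manual_seed(0)
    net = make_random_decoder(out_channels=y.shape[1])
    # Fixed random latent code; only the network weights are optimized.
    z = torch.randn(1, 64, y.shape[-2] // 16, y.shape[-1] // 16)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(z) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return net(z).detach()

# Example: fit to a random 64x64 test image (illustrative only).
y = torch.rand(1, 1, 64, 64)
x_hat = recover(y)
```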

Thursday, March 28, 2019 - 15:05 , Location: Skiles 006 , Liza Rebrova , Mathematics, UCLA , Organizer: Christian Houdre

I will talk about the structure of large square random matrices with centered i.i.d. heavy-tailed entries (only two finite moments are assumed). In our previous work with R. Vershynin we showed that the operator norm of such a matrix A can be reduced to the optimal sqrt(n) order with high probability by zeroing out a small submatrix of A, but we did not describe the structure of this "bad" submatrix, nor provide a constructive way to find it. Now we can give a very simple description of this small "bad" subset: it is enough to zero out a small fraction of the rows and columns of A with the largest L2 norms to bring its operator norm to the almost optimal sqrt(n*loglog(n)) order, under the additional assumption that the entries of A are symmetrically distributed. As a corollary, one also obtains a constructive procedure for finding a small submatrix of A that can be zeroed out to achieve the same regularization.
I plan to discuss some details of the proof, whose main component is the development of techniques that extend the constructive regularization approaches known for Bernoulli matrices (from the works of Feige and Ofek, and of Le, Levina and Vershynin) to the considerably broader class of heavy-tailed random matrices.
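
A toy numerical sketch of the regularization step described above; the cut fraction eps and the Student-t entry distribution (symmetric, with two finite moments) are illustrative assumptions:

```python
# Zero out a small fraction of the rows and columns of A with the largest
# L2 norms and compare operator (spectral) norms before and after.
import numpy as np

def regularize(A, eps=0.01):
    """Zero out the eps-fraction of rows and columns of A with largest L2 norms."""
    n = A.shape[0]
    k = max(1, int(eps * n))
    bad_rows = np.argsort(np.linalg.norm(A, axis=1))[-k:]
    bad_cols = np.argsort(np.linalg.norm(A, axis=0))[-k:]
    B = A.copy()
    B[bad_rows, :] = 0.0
    B[:, bad_cols] = 0.0
    return B

rng = np.random.default_rng(0)
n = 1000
# Centered, symmetrically distributed heavy-tailed entries with two finite
# moments, e.g. Student-t with 3 degrees of freedom.
A = rng.standard_t(df=3, size=(n, n))
print("||A||     =", np.linalg.norm(A, 2))      # may be >> sqrt(n)
print("||A_reg|| =", np.linalg.norm(regularize(A), 2))
print("sqrt(n)   =", np.sqrt(n))
```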

Wednesday, March 27, 2019 - 15:00 , Location: Skiles 006 , Liza Rebrova , UCLA , rebrova@math.ucla.edu , Organizer: Galyna Livshyts

One of the most famous methods for solving large-scale over-determined linear systems is the Kaczmarz algorithm, which iteratively projects the current approximation x_k onto the solution space of the next equation in the system. An elegant proof of the exponential convergence of this method under a suitable randomization of the process is due to Strohmer and Vershynin (2009). Many extensions and generalizations of the method have been proposed since then, including works of Needell, Tropp, Ward, Srebro, Tan, and many others. An interesting unifying view of a number of iterative solvers (including several versions of the Kaczmarz algorithm) was proposed by Gower and Richtarik in 2016. The main idea of their sketch-and-project framework is the following: the random selection of a row (or a block of rows) can be represented as a sketch, that is, a left multiplication by a random vector (or matrix) that pre-processes each iteration of the method, and the iteration itself is a projection onto the image of the sketch.
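
A minimal sketch of the Strohmer-Vershynin randomized Kaczmarz iteration described above, with rows sampled proportionally to their squared norms; the problem sizes and iteration count are illustrative:

```python
# Randomized Kaczmarz: at each step, sample a row with probability
# proportional to its squared norm and project the iterate onto that
# row's solution hyperplane.
import numpy as np

def randomized_kaczmarz(A, b, iters=10_000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    probs = np.linalg.norm(A, axis=1) ** 2
    probs /= probs.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a_i = A[i]
        # Project x onto the hyperplane {z : <a_i, z> = b_i}.
        x += (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 50))   # over-determined consistent system
x_true = rng.standard_normal(50)
b = A @ x_true
print(np.linalg.norm(randomized_kaczmarz(A, b) - x_true))
```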

I will give an overview of some of these methods and talk about the role that random matrix theory plays in showing their convergence. I will also discuss our new results with Deanna Needell on the block Gaussian sketch-and-project method.
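
For concreteness, one step of a Gaussian sketch-and-project iteration in the Gower-Richtarik framework might look as follows; the block size s is an illustrative assumption, and this is a sketch of the general framework rather than of the specific results with Needell:

```python
# One sketch-and-project step for A x = b with a Gaussian sketch:
# replace the system by the sketched system S^T A x = S^T b and
# project the current iterate x onto its solution space.
import numpy as np

def gaussian_sketch_project_step(A, b, x, s, rng):
    S = rng.standard_normal((A.shape[0], s))  # random Gaussian sketch matrix
    SA = S.T @ A                              # s-by-n sketched system matrix
    r = S.T @ (A @ x - b)                     # sketched residual
    # x_new = x - (SA)^T (SA (SA)^T)^{-1} r, the Euclidean projection of x
    # onto {z : SA z = S^T b}; SA (SA)^T is s-by-s and, for a Gaussian sketch
    # with s <= rank(A), almost surely invertible.
    return x - SA.T @ np.linalg.solve(SA @ SA.T, r)
```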

Thursday, March 14, 2019 - 15:05 , Location: Skiles 006 , TBA , SOM, GaTech , Organizer: Christian Houdre
