Seminars and Colloquia by Series

Coupling at infinity

Series
Stochastics Seminar
Time
Thursday, March 10, 2011 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Jonathan Mattingly, Duke University, Mathematics Department
I will discuss how the idea of coupling at time infinity is equivalent to unique ergodicity of a Markov process. In general, the coupling will be a kind of "asymptotic Wasserstein" coupling. I will draw examples from SDEs with memory and SPDEs. The fact that both are infinite-dimensional Markov processes is no coincidence.
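For orientation, here is the easy direction of that equivalence, written in the Wasserstein metric (a sketch; the "asymptotic Wasserstein" couplings in the talk weaken this requirement). If $P_t$ denotes the Markov semigroup acting on probability measures and $d_W$ a Wasserstein distance, coupling at time infinity gives
\[
d_W(\mu P_t,\,\nu P_t)\;\longrightarrow\;0 \quad\text{as } t\to\infty \quad\text{for all initial laws } \mu,\nu.
\]
Applied to two invariant measures $\mu$ and $\nu$, for which $\mu P_t=\mu$ and $\nu P_t=\nu$, this forces $d_W(\mu,\nu)=0$, so there is at most one invariant measure.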

Plug-in Approach to Active Learning

Series
Stochastics Seminar
Time
Thursday, March 3, 2011 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Stas Minsker, Georgia Tech
Let (X,Y) be a random couple with unknown distribution P, where X is an observation and Y is a binary label to be predicted. In practice, P remains unknown, but the learning algorithm has access to training data, a sample from P. It often happens that the cost of obtaining training data comes from labeling the observations, while the pool of observations itself is almost unlimited. This suggests measuring the performance of a learning algorithm in terms of its label complexity: the number of labels required to obtain a classifier with the desired accuracy. Active learning theory explores the possible advantages of this modified framework. We will present a new active learning algorithm based on nonparametric estimators of the regression function and explain the main improvements over previous work. Our investigation provides upper and lower bounds for the performance of the proposed method over a broad class of underlying distributions.
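As a rough illustration of the plug-in idea, namely to estimate the regression function $\eta(x)=P(Y=1\,|\,X=x)$ nonparametrically and to query labels where the estimate is close to 1/2, here is a minimal sketch in Python. The kernel estimator, the query rule and the label budget are illustrative choices, not the algorithm analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def eta_hat(x_query, X_lab, Y_lab, h=0.2):
    """Nadaraya-Watson (kernel) estimate of eta(x) = P(Y = 1 | X = x)."""
    w = np.exp(-((x_query[:, None] - X_lab[None, :]) / h) ** 2)
    return (w * Y_lab).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-12)

# Unlabeled pool (labels exist but are hidden until queried).
pool_X = rng.uniform(0, 1, size=2000)
true_eta = 1 / (1 + np.exp(-10 * (pool_X - 0.5)))   # illustrative model
pool_Y = rng.binomial(1, true_eta)

budget, batch = 200, 20
labeled = rng.choice(len(pool_X), size=batch, replace=False).tolist()

while len(labeled) < budget:
    X_lab, Y_lab = pool_X[labeled], pool_Y[labeled]
    est = eta_hat(pool_X, X_lab, Y_lab)
    # Query the points where the plug-in classifier is least certain,
    # i.e. where the estimated regression function is closest to 1/2.
    order = np.argsort(np.abs(est - 0.5))
    new = [i for i in order if i not in labeled][:batch]
    labeled.extend(new)

X_lab, Y_lab = pool_X[labeled], pool_Y[labeled]
pred = (eta_hat(pool_X, X_lab, Y_lab) > 0.5).astype(int)
print("disagreement with the Bayes rule on the pool:",
      np.mean(pred != (true_eta > 0.5)))
```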

Exact results for percolation thresholds, enclosed-area distribution functions and correlation functions in percolation

Series
Stochastics Seminar
Time
Tuesday, March 1, 2011 - 16:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Robert Ziff, Michigan Center for Theoretical Physics, Department of Chemical Engineering, University of Michigan
Various exact results in two-dimensional percolation are presented. A method for finding exact thresholds for a wide variety of systems, which greatly expands previously known exactly solvable systems to such new lattices as "martini" and generalized "bowtie" lattices, is given. The size distribution is written in a Zipf's-law form in terms of the enclosed-area distribution, and the coefficient can be written in terms of the number of hulls crossing a cylinder. Additional properties of hull walks (equivalent to some kinds of trajectories) are given. Finally, some ratios of correlation functions are shown to be universal, with a functional form that can be found exactly from conformal field theory.
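To fix notation, the Zipf's-law form referred to above says that the number of cluster hulls per unit area enclosing an area larger than $A$ behaves, for large $A$, as
\[
n_{>}(A)\;\sim\;\frac{C}{A},
\]
with a universal coefficient $C$ that, as stated in the abstract, can be expressed through the number of hulls crossing a cylinder. (This display is included for orientation only; the precise statements and coefficients are part of the talk.)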

The Convex Geometry of Inverse Problems

Series
Stochastics Seminar
Time
Thursday, February 24, 2011 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Ben Recht, Computer Sciences Department, University of Wisconsin
Deducing the state or structure of a system from partial, noisy measurements is a fundamental task throughout the sciences and engineering. The resulting inverse problems are often ill-posed because there are fewer measurements available than the ambient dimension of the model to be estimated. In practice, however, many interesting signals or models contain few degrees of freedom relative to their ambient dimension: a small number of genes may constitute the signature of a disease, very few parameters may specify the correlation structure of a time series, or a sparse collection of geometric constraints may determine a molecular configuration. Discovering, leveraging, or recognizing such low-dimensional structure plays an important role in making inverse problems well-posed. In this talk, I will propose a unified approach to transform notions of simplicity and latent low-dimensionality into convex penalty functions. This approach builds on the success of generalizing compressed sensing to matrix completion, and greatly extends the catalog of objects and structures that can be recovered from partial information. I will focus on a suite of data analysis algorithms designed to decompose general signals into sums of atoms from a simple---but not necessarily discrete---set. These algorithms are derived in a convex optimization framework that encompasses previous methods based on l1-norm minimization and nuclear norm minimization for recovering sparse vectors and low-rank matrices. I will provide sharp estimates of the number of generic measurements required for exact and robust recovery of a variety of structured models. These estimates are based on computing certain Gaussian statistics related to the latent model geometry. I will detail several example applications and describe how to scale the corresponding inference algorithms to very large data sets. (Joint work with Venkat Chandrasekaran, Pablo Parrilo, and Alan Willsky)
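The two best-known instances of this framework are easy to write down explicitly. Below is a minimal sketch, assuming the cvxpy package and arbitrary synthetic data: sparse vector recovery by l1-norm minimization and low-rank matrix recovery by nuclear-norm minimization from generic Gaussian measurements.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)

# --- Sparse vector recovery via l1-norm minimization ---
n, m, s = 100, 40, 5                      # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n))
b = A @ x_true

x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(x, 1)), [A @ x == b]).solve()
print("l1 recovery error:", np.linalg.norm(x.value - x_true))

# --- Low-rank matrix recovery via nuclear-norm minimization ---
d, r, k = 20, 2, 250                      # matrix size, rank, measurements
X_true = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
Ms = rng.standard_normal((k, d, d))       # generic Gaussian measurement maps
y = np.array([np.sum(M * X_true) for M in Ms])

X = cp.Variable((d, d))
constraints = [cp.sum(cp.multiply(M, X)) == yi for M, yi in zip(Ms, y)]
cp.Problem(cp.Minimize(cp.norm(X, "nuc")), constraints).solve()
print("nuclear-norm recovery error:", np.linalg.norm(X.value - X_true))
```

Both programs are convex, and the number of generic measurements needed for exact recovery is exactly the kind of quantity the Gaussian-width estimates in the talk make precise.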

Generalized Fiducial Inference and Its Application to Wavelet Regression

Series
Stochastics Seminar
Time
Thursday, January 27, 2011 - 16:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Thomas Lee, University of California, Davis
In this talk we revisit Fisher's controversial fiducial technique for conducting statistical inference. In particular, a generalization of Fisher's technique, termed generalized fiducial inference, is introduced. We illustrate its use with wavelet regression. Current and future work for generalized fiducial inference will also be discussed. Joint work with Jan Hannig and Hari Iyer.
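Informally, generalized fiducial inference starts from a structural equation for the data and inverts it with respect to the parameter. A rough sketch of the recipe, for orientation only (the precise definition used in the talk may differ in details such as how the inverse is selected): the data are modeled as
\[
Y = G(\theta, U), \qquad U \sim \text{a known distribution},
\]
and, given observed data $y$ and an independent copy $U^{*}$ of $U$, the generalized fiducial distribution of $\theta$ is the distribution of a solution of $y = G(\theta, U^{*})$.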

Global Testing under Sparse Alternatives: ANOVA, Multiple Comparisons and the Higher Criticism

Series
Stochastics Seminar
Time
Thursday, January 27, 2011 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Ery Arias-Castro, University of California, San Diego
We study the problem of testing for the significance of a subset of regression coefficients in a linear model under the assumption that the coefficient vector is sparse, a common situation in modern high-dimensional settings.  Assume there are p variables and let S be the number of nonzero coefficients.  Under moderate sparsity levels, when we may have S > p^(1/2), we show that the analysis of variance F-test is essentially optimal.  This is no longer the case under the sparsity constraint S < p^(1/2).  In such settings, a multiple comparison procedure is often preferred and we establish its optimality under the stronger assumption S < p^(1/4).  However, these two very popular methods are suboptimal, and sometimes powerless, when p^(1/4) < S < p^(1/2).  We suggest a method based on the Higher Criticism that is essentially optimal in the whole range S < p^(1/2).  We establish these results under a variety of designs, including the classical (balanced) multi-way designs and more modern 'p > n' designs arising in genetics and signal processing. (Joint work with Emmanuel Candès and Yaniv Plan.)
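For orientation, the Higher Criticism statistic in the title is computed from the ordered p-values of the individual tests. A minimal sketch in Python of the generic Donoho-Jin form (not necessarily the exact variant analyzed in the talk):

```python
import numpy as np
from scipy.stats import norm

def higher_criticism(pvals, alpha0=0.5):
    """Higher Criticism statistic computed from a vector of p-values.

    Compares the ordered p-values with what the uniform distribution
    would give under the global null, in standardized form.
    """
    n = len(pvals)
    p = np.sort(pvals)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    # maximize over the smallest alpha0-fraction of the p-values
    return hc[: max(1, int(alpha0 * n))].max()

# Example: n tests, a sparse subset of which carry a weak signal.
rng = np.random.default_rng(0)
n, s, mu = 10_000, 30, 3.0
z = rng.standard_normal(n)
z[:s] += mu                                    # sparse alternatives
print("HC, sparse alternative:", higher_criticism(norm.sf(z)))
print("HC, global null:       ", higher_criticism(norm.sf(rng.standard_normal(n))))
```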

Planted Cliques and Random Tensors

Series
Stochastics Seminar
Time
Thursday, December 2, 2010 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 002
Speaker
Santosh Vempala, College of Computing, Georgia Tech
For general graphs, the maximum clique is notoriously hard even to approximate to within a factor of nearly n, the number of vertices. Does the situation get better with random graphs? A random graph on n vertices where each edge is chosen with probability 1/2 has a clique of size nearly 2\log n with high probability. However, it is not known how to find one of size 1.01\log n in polynomial time. Does the problem become easier if a larger clique is planted in a random graph? The current best algorithm can find a planted clique of size roughly n^{1/2}. Given that any planted clique of size greater than 2\log n is unique with high probability, there is a large gap here. In an intriguing paper, Frieze and Kannan introduced a tensor-based method that could reduce the size of the planted clique to as small as roughly n^{1/3}. Their method relies on finding the spectral norm of a 3-dimensional tensor, a problem whose complexity is open. Moreover, their combinatorial proof does not seem to extend beyond this threshold. We show how to recover the Frieze-Kannan result using a purely probabilistic argument that generalizes naturally to r-dimensional tensors and allows us to recover cliques of size as small as poly(r)·n^{1/r}, provided we can find the spectral norm of r-dimensional tensors. We highlight the algorithmic question that remains open. This is joint work with Charlie Brubaker.
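For comparison, the standard spectral (r = 2) approach, which recovers planted cliques of size on the order of n^{1/2}, can be sketched in a few lines. This is only a baseline illustration, not the tensor method discussed in the talk; the constants and the cleanup step are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 150                     # clique size k of order sqrt(n)

# G(n, 1/2) adjacency matrix with a clique planted on the first k vertices.
A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
A = A + A.T
clique = np.arange(k)
A[np.ix_(clique, clique)] = 1
np.fill_diagonal(A, 0)

# Spectral step: top eigenvector of the centered adjacency matrix.
M = A - 0.5 * (np.ones((n, n)) - np.eye(n))
vals, vecs = np.linalg.eigh(M)
v = vecs[:, -1]

# Take the k coordinates with largest |v_i| as clique candidates, then keep
# the vertices adjacent to the most candidates (a standard cleanup step).
cand = np.argsort(-np.abs(v))[:k]
scores = A[:, cand].sum(axis=1)
recovered = np.argsort(-scores)[:k]

print("fraction of planted clique recovered:",
      np.intersect1d(recovered, clique).size / k)
```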

Maximum likelihood estimation of a multidimensional log-concave density

Series
Stochastics Seminar
Time
Thursday, November 18, 2010 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 002
Speaker
Richard Samworth, Statistical Laboratory, Cambridge, UK
If $X_1,...,X_n$ are a random sample from a density $f$ in $\mathbb{R}^d$, then with probability one there exists a unique log-concave maximum likelihood estimator $\hat{f}_n$ of $f$. The use of this estimator is attractive because, unlike kernel density estimation, the estimator is fully automatic, with no smoothing parameters to choose. We exhibit an iterative algorithm for computing the estimator and show how the method can be combined with the EM algorithm to fit finite mixtures of log-concave densities. Applications to classification, clustering and functional estimation problems will be discussed, as well as recent theoretical results on the performance of the estimator. The talk will be illustrated with pictures from the R package LogConcDEAD. Co-authors: Yining Chen, Madeleine Cule, Lutz Duembgen (Bern), Robert Gramacy (Cambridge), Dominic Schuhmacher (Bern) and Michael Stewart.
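For reference, $\hat{f}_n$ is the solution of the optimization problem
\[
\hat{f}_n \;=\; \operatorname*{argmax}_{f\ \text{log-concave density on}\ \mathbb{R}^d}\ \frac{1}{n}\sum_{i=1}^{n}\log f(X_i),
\]
that is, the likelihood is maximized over all densities of the form $f=e^{\varphi}$ with $\varphi\colon\mathbb{R}^d\to[-\infty,\infty)$ concave.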

Random matrices with independent log-concave columns

Series
Stochastics Seminar
Time
Thursday, November 11, 2010 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 002
Speaker
Radoslaw Adamczak, University of Warsaw and Fields Institute
I will discuss certain geometric properties of random matrices with independent logarithmically concave columns, obtained in the last several years jointly with O. Guedon, A. Litvak, A. Pajor and N. Tomczak-Jaegermann. In particular I will discuss estimates on the largest and smallest singular values of such matrices and rates of convergence of empirical approximations to covariance matrices of log-concave measures (the Kannan-Lovasz-Simonovits problem).
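A small numerical illustration of the kind of singular-value behaviour in question (an informal sketch with an arbitrary isotropic log-concave choice for the column distribution; the results discussed in the talk are non-asymptotic theorems, not simulations):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 4000, 400                # N rows, n independent columns, N >> n

# Columns drawn i.i.d. from an isotropic log-concave distribution; here
# independent symmetric exponential (Laplace) coordinates rescaled to unit
# variance.  Any isotropic log-concave law could be substituted.
A = rng.laplace(size=(N, n)) / np.sqrt(2.0)

s = np.linalg.svd(A, compute_uv=False)
print("largest / smallest singular value:", s[0], s[-1])
print("Gaussian-style benchmark sqrt(N) +/- sqrt(n):",
      np.sqrt(N) + np.sqrt(n), np.sqrt(N) - np.sqrt(n))
```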

Asymptotic properties of random matrices of long-range percolation model

Series
Stochastics Seminar
Time
Thursday, October 21, 2010 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 002
Speaker
Slim Ayadi, School of Math, Georgia Tech
We study the spectral properties of matrices of the long-range percolation model. These are $N\times N$ random real symmetric matrices $H$ whose elements are independent random variables taking the value zero with probability $1-\psi((i-j)/b)$, $b\in\mathbb{R}^{+}$, where $\psi$ is an even positive function with $\psi(t)<1$ that vanishes at infinity. We show that under rather general conditions on the probability distribution of $H(i,j)$ the semicircle law is valid for this ensemble in the limit $N,b\to\infty$. In the second part, we study the leading term of the correlation function of the resolvent $G(z)=(H-z)^{-1}$ with large enough $|\mathrm{Im}\,z|$ in the limit $N,b\to\infty$, $b=O(N^{\alpha})$, $1/3<\alpha<1$. We show that this leading term, when considered on the local spectral scale, leads to an expression found earlier by other authors for band random matrix ensembles. This shows that the ensemble we study and that of band random matrices belong to the same class of spectral universality.
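A small numerical sketch of the semicircle statement, using an illustrative choice of $\psi$ (even, positive, bounded by 1, vanishing at infinity) and a heuristic normalization chosen only to reproduce the limiting shape:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N, b = 2000, 100

def psi(t):
    return 0.9 * np.exp(-t ** 2)             # illustrative choice of psi

idx = np.arange(N)
P = psi((idx[:, None] - idx[None, :]) / b)   # P[i, j] = psi((i - j)/b)

# H(i, j) is zero with probability 1 - psi((i - j)/b); when present, give it
# a standard Gaussian value.  Symmetrize and drop the diagonal.
present = np.triu(rng.random((N, N)) < P, 1)
H = np.where(present, rng.standard_normal((N, N)), 0.0)
H = H + H.T

# Heuristic normalization by the typical row variance sum_j psi((i - j)/b),
# so that the limiting spectrum is supported on [-2, 2].
scale = np.sqrt(P.sum(axis=1).mean())
eig = np.linalg.eigvalsh(H / scale)

x = np.linspace(-2, 2, 400)
plt.hist(eig, bins=80, density=True, alpha=0.6, label="eigenvalues")
plt.plot(x, np.sqrt(4 - x ** 2) / (2 * np.pi), label="semicircle law")
plt.legend()
plt.show()
```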
