Seminars and Colloquia by Series

Calibrated Elastic Regularization in Matrix Completion

Series
Stochastics Seminar
Time
Thursday, November 15, 2012 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Cun-Hui Zhang, Rutgers University
This paper concerns the problem of matrix completion, which is to estimate a matrix from observations in a small subset of indices. We propose a calibrated spectrum elastic net method with a sum of the nuclear and Frobenius penalties and develop an iterative algorithm to solve the convex minimization problem. The iterative algorithm alternates between imputing the missing entries in the incomplete matrix by the current guess and estimating the matrix by a scaled soft-thresholding singular value decomposition of the imputed matrix, until the resulting matrix converges. A calibration step follows to correct the bias caused by the Frobenius penalty. Under proper coherence conditions and for suitable penalty levels, we prove that the proposed estimator achieves an error bound of nearly optimal order, proportional to the noise level. This provides a unified analysis of the noisy and noiseless matrix completion problems. Joint work with Tingni Sun, Rutgers University.
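
A minimal sketch of the impute-then-shrink iteration described above is given below, in the spirit of a soft-impute style update in which nuclear-norm soft-thresholding is followed by a Frobenius shrinkage; the penalty levels, the stopping rule, and the omission of the final calibration step are simplifying assumptions, not the authors' exact procedure.

```python
# Minimal sketch: alternate imputation of the missing entries with a scaled
# soft-thresholded SVD.  lam_nuclear / lam_frobenius, the stopping rule, and
# the omission of the final calibration step are simplifying assumptions.
import numpy as np

def soft_threshold_svd(M, lam):
    """Soft-threshold the singular values of M at level lam."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

def complete_matrix(Y, mask, lam_nuclear=1.0, lam_frobenius=0.1,
                    n_iter=500, tol=1e-6):
    """Y: observed matrix (values off the mask are ignored); mask: 0/1 array."""
    M = np.where(mask, Y, 0.0)
    for _ in range(n_iter):
        imputed = np.where(mask, Y, M)   # fill missing entries with current guess
        # nuclear-norm soft-thresholding, then Frobenius shrinkage by 1/(1 + lam_F)
        M_new = soft_threshold_svd(imputed, lam_nuclear) / (1.0 + lam_frobenius)
        if np.linalg.norm(M_new - M) <= tol * max(1.0, np.linalg.norm(M)):
            return M_new
        M = M_new
    return M
```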

A Wong-Zakai Approximation Scheme for Reflected Stochastic Differential Equations

Series
Stochastics Seminar
Time
Thursday, November 8, 2012 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Chris Evans, University of Missouri
In a series of famous papers E. Wong and M. Zakai showed that the solution to a Stratonovich SDE is the limit of the solutions to a corresponding ODE driven by the piecewise-linear interpolation of the driving Brownian motion. In particular, this implies that solutions to Stratonovich SDE "behave as we would expect from ODE theory". Working with my PhD adviser, Daniel Stroock, we have shown that a similar approximation result holds, in the sense of weak convergence of distributions, for reflected Stratonovich SDE.
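
To illustrate the piecewise-linear interpolation scheme, here is a small sketch for the classical (unreflected) scalar Stratonovich SDE dX = X ∘ dW, whose solution is X_t = X_0 exp(W_t); the grid sizes are arbitrary, and the reflected case treated in the talk, which requires a Skorokhod-type reflection, is not attempted.

```python
# Sketch of the classical Wong-Zakai scheme for dX = X ∘ dW with X_0 = 1,
# whose Stratonovich solution is X_t = exp(W_t).  The ODE driven by the
# piecewise-linear interpolation of W is solved by forward Euler on a finer
# grid; grid sizes are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
T, n_coarse, n_fine = 1.0, 50, 5000

# one Brownian path sampled on the fine grid
t_fine = np.linspace(0.0, T, n_fine + 1)
W = np.concatenate([[0.0],
                    np.cumsum(rng.standard_normal(n_fine) * np.sqrt(T / n_fine))])

# piecewise-linear interpolation of the same path through n_coarse points
t_coarse = np.linspace(0.0, T, n_coarse + 1)
W_lin = np.interp(t_fine, t_coarse, np.interp(t_coarse, t_fine, W))

# forward Euler for the ODE dx/dt = x * d(W_lin)/dt
x = np.ones(n_fine + 1)
for k in range(n_fine):
    x[k + 1] = x[k] * (1.0 + (W_lin[k + 1] - W_lin[k]))

print("ODE driven by piecewise-linear W:", x[-1])
print("Stratonovich solution exp(W_T)  :", np.exp(W[-1]))
```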

Explorations in Burgers turbulence: integrability and exact solutions

Series
Stochastics Seminar
Time
Thursday, November 1, 2012 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Ravi Srinivasan, University of Texas at Austin
Burgers turbulence is the study of Burgers equation with random initial data or forcing. While having its origins in hydrodynamics, this model has remarkable connections to a variety of seemingly unrelated problems in statistics, kinetic theory, random matrices, and integrable systems. In this talk I will survey these connections and discuss the crucial role that exact solutions have played in the development of the theory.

Beyond the Gaussian Universality Class

Series
Stochastics Seminar
Time
Thursday, October 18, 2012 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Ivan Corwin, Clay Mathematics Institute and MIT
The Gaussian central limit theorem says that for a wide class of stochastic systems, the bell curve (Gaussian distribution) describes the statistics for random fluctuations of important observables. In this talk I will look beyond this class of systems to a collection of probabilistic models which include random growth models, polymers, particle systems, matrices and stochastic PDEs, as well as certain asymptotic problems in combinatorics and representation theory. I will explain in what ways these different examples all fall into a single new universality class with a much richer mathematical structure than that of the Gaussian.

Selectable Reduced Rank Regression and Principal Component Analysis

Series
Stochastics Seminar
Time
Tuesday, October 9, 2012 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Yiyuan She, Florida State University
Rank reduction is an effective technique for dimension reduction that is widely used in statistical modeling and machine learning. Modern statistical applications entail high-dimensional data analysis, where a large number of nuisance variables may be present. Plain rank reduction, however, cannot discern relevant or important variables. The talk discusses joint variable and rank selection for predictive learning. We propose to apply sparsity and reduced rank techniques to attain simultaneous feature selection and feature extraction in a vector regression setup. A class of estimators is introduced based on novel penalties that impose both row and rank restrictions on the coefficient matrix. Selectable principal component analysis is proposed and studied from a self-regression standpoint, which gives an extension of sparse principal component analysis. We show that these estimators adapt to the unknown matrix sparsity and have fast rates of convergence in comparison with the LASSO and reduced rank regression. Efficient computational algorithms are developed and applied to real-world applications.
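
As a rough illustration of combining row selection with rank reduction (and not the penalized estimator analyzed in the talk), one can alternate a row-wise soft-threshold with a hard rank truncation inside a proximal gradient loop; all parameter names below are illustrative.

```python
# Heuristic sketch of joint row selection and rank reduction for a
# multivariate regression Y ~ X B: a proximal gradient loop alternating a
# row-wise (group) soft-threshold with a hard rank truncation.  This only
# illustrates the idea of combined row and rank restrictions.
import numpy as np

def row_soft_threshold(B, lam):
    """Shrink each row of B toward zero by lam in Euclidean norm."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    return B * np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)

def truncate_rank(B, r):
    """Best rank-r approximation of B via the SVD."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def sparse_reduced_rank(X, Y, lam=0.5, rank=2, n_iter=500):
    p, q = X.shape[1], Y.shape[1]
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    B = np.zeros((p, q))
    for _ in range(n_iter):
        grad = X.T @ (X @ B - Y)
        B = row_soft_threshold(B - step * grad, step * lam)
        B = truncate_rank(B, rank)
    return B
```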

Cramér type theorem for Wiener and Wigner stochastic integrals

Series
Stochastics Seminar
Time
Thursday, October 4, 2012 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
J.-C. Breton, Institut de Recherche Mathématique de Rennes
Cramér's theorem from 1936 states that the sum of two independent random variables is Gaussian if and only if these random variables are Gaussian. Since then, this property has been explored in different directions, such as for other distributions or non-commutative random variables. In this talk, we will investigate recent results in Gaussian chaoses and free chaoses. In particular, we will give a first positive Cramér type result in a free probability context.
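
For reference, the nontrivial direction of Cramér's theorem can be stated as follows (a standard formulation, with the usual convention that constants count as degenerate Gaussians):

```latex
% Cramér's theorem (1936): degenerate (constant) summands count as Gaussian
% with zero variance.
X \text{ and } Y \text{ independent}, \quad X + Y \sim \mathcal{N}(\mu, \sigma^2)
\;\Longrightarrow\;
X \sim \mathcal{N}(\mu_1, \sigma_1^2), \quad Y \sim \mathcal{N}(\mu_2, \sigma_2^2),
\quad \mu_1 + \mu_2 = \mu, \; \sigma_1^2 + \sigma_2^2 = \sigma^2 .
```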

Stochastic Target Approach to Ricci Flow on surfaces

Series
Stochastics Seminar
Time
Thursday, September 27, 2012 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Ionel Popescu, School of Mathematics, Georgia Tech
Ricci flow is a sort of (nonlinear) heat problem under which the metric on a given manifold evolves. There is a deep connection between probability and the heat equation. We set up a probabilistic approach in the framework of a stochastic target problem. A major result in Ricci flow is that the normalized flow (the one in which the area is preserved) exists for all positive times and converges to a metric of constant curvature. We reprove this convergence result in the case of surfaces of non-positive Euler characteristic using coupling ideas from probability. At a certain point we need to estimate the second derivative of the Ricci flow, and for that we introduce a coupling of three particles. This is joint work with Rob Neel.
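
For orientation, the normalized flow referred to above can be written in the standard form below, where R is the scalar curvature of the evolving metric and r its average over the surface, which keeps the total area constant.

```latex
% Normalized Ricci flow on a surface M: R is the scalar curvature of g(t),
% r its area average, so the total area of (M, g(t)) is constant in t.
\frac{\partial}{\partial t}\, g(t) = \big(r - R\big)\, g(t),
\qquad
r = \frac{\int_M R \, dA}{\int_M dA}.
```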

Estimation and Support Recovery with Exponential Weights

Series
Stochastics Seminar
Time
Thursday, September 20, 2012 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Karim Lounici, Georgia Institute of Technology
In the context of a linear model with a sparse coefficient vector, sharp oracle inequalities have been established for exponential weights methods in the prediction problem. We show that such methods also succeed at variable selection and estimation under a near-minimal condition on the design matrix, instead of the much stronger assumptions required by other methods such as the Lasso or the Dantzig Selector. The same analysis yields consistency results for Bayesian methods and BIC-type variable selection under similar conditions. Joint work with Ery Arias-Castro.
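
A toy sketch of exponential-weights aggregation in a sparse linear model is given below; the temperature, the prior favoring small supports, and the brute-force enumeration of supports are illustrative assumptions rather than the tuning analyzed in the talk.

```python
# Toy sketch of exponential-weights aggregation in a sparse linear model:
# least-squares fits over all small supports are averaged with weights
# proportional to exp(-RSS / beta) times a prior that penalizes support size.
# The temperature beta and prior_decay are illustrative choices.
import itertools
import numpy as np

def exponential_weights(X, y, max_support=2, beta=4.0, prior_decay=0.1):
    p = X.shape[1]
    coefs, log_wts = [], []
    for k in range(max_support + 1):
        for S in itertools.combinations(range(p), k):
            theta = np.zeros(p)
            if S:
                sol, *_ = np.linalg.lstsq(X[:, list(S)], y, rcond=None)
                theta[list(S)] = sol
            rss = np.sum((y - X @ theta) ** 2)
            coefs.append(theta)
            log_wts.append(-rss / beta + k * np.log(prior_decay))
    log_wts = np.array(log_wts)
    w = np.exp(log_wts - log_wts.max())
    w /= w.sum()
    return np.array(coefs).T @ w            # aggregated coefficient vector

# toy usage: two relevant variables out of eight
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
theta_true = np.zeros(8)
theta_true[:2] = [3.0, -2.0]
y = X @ theta_true + 0.5 * rng.standard_normal(50)
print(np.round(exponential_weights(X, y), 2))
```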

Space-time stationary solutions for the Burgers equation with random forcing

Series
Stochastics Seminar
Time
Thursday, September 6, 2012 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Yuri Bakhtin, Georgia Tech
The Burgers equation is a basic hydrodynamic model describing the evolution of the velocity field of sticky dust particles. When supplied with random forcing, it turns into an infinite-dimensional random dynamical system that has been studied since the late 1990s. The variational approach to the Burgers equation allows one to study the system by analyzing optimal paths in the random landscape generated by the random force potential. Therefore, this is essentially a random media problem. For a long time only the compact cases of Burgers dynamics on the circle or a torus were well understood. In this talk I discuss the Burgers dynamics on the entire real line with no compactness or periodicity assumption. The main result is the description of the ergodic components and a One Force One Solution principle on each component. Joint work with Eric Cator and Kostya Khanin.
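
For reference, the model in question is the randomly forced Burgers equation on the real line, written here in a standard form with the forcing derived from a random potential F (the notation is an illustrative choice, not the speaker's):

```latex
% Randomly forced Burgers equation on the real line; the force f is the
% (negative) spatial gradient of a random potential F.
\partial_t u + u\, \partial_x u = f(t, x),
\qquad
f(t, x) = -\partial_x F(t, x).
```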

Sparse Singular Value Decomposition in High Dimensions

Series
Stochastics Seminar
Time
Tuesday, April 24, 2012 - 16:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Zongming Ma, The Wharton School, Department of Statistics, University of Pennsylvania
Singular value decomposition is a widely used tool for dimension reduction in multivariate analysis. However, when used for statistical estimation in high-dimensional low-rank matrix models, the singular vectors of the noise-corrupted matrix are inconsistent estimators of their counterparts for the true mean matrix. In this talk, we suppose the true singular vectors have sparse representations in a certain basis. We propose an iterative thresholding algorithm that can estimate the subspaces spanned by the leading left and right singular vectors, as well as the true mean matrix, optimally under a Gaussian assumption. We further turn the algorithm into a practical methodology that is fast, data-driven and robust to heavy-tailed noise. Simulations and a real data example further demonstrate its competitive performance. This is joint work with Andreas Buja and Dan Yang.
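
A rough sketch of the idea of alternating power iterations with entrywise thresholding is shown below; the quantile-based threshold and the plain-SVD initialization are assumptions for illustration, and the data-driven refinements described in the talk are not reproduced.

```python
# Rough sketch of sparse leading singular vector estimation by alternating
# power iterations with entrywise thresholding.  The quantile-based threshold
# (keep_frac) and the plain-SVD initialization are illustrative choices.
import numpy as np

def sparse_rank_one_svd(Y, keep_frac=0.2, n_iter=100):
    u = np.linalg.svd(Y, full_matrices=False)[0][:, 0]   # plain-SVD initialization
    for _ in range(n_iter):
        v = Y.T @ u
        v = np.where(np.abs(v) >= np.quantile(np.abs(v), 1.0 - keep_frac), v, 0.0)
        v /= np.linalg.norm(v) + 1e-12
        u = Y @ v
        u = np.where(np.abs(u) >= np.quantile(np.abs(u), 1.0 - keep_frac), u, 0.0)
        u /= np.linalg.norm(u) + 1e-12
    return u, float(u @ Y @ v), v            # sparse left/right vectors and scale
```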
