Seminars and Colloquia by Series

Thursday, January 31, 2013 - 15:05 , Location: Skiles 006 , Geordie Richards , IMA , Organizer:
The periodic generalized Korteweg-de Vries equation (gKdV) is a canonical dispersive partial differential equation with numerous applications in physics and engineering.  In this talk we present invariance of the Gibbs measure under the flow of the gauge transformed periodic quartic gKdV.  The proof relies on probabilistic arguments which exhibit nonlinear smoothing when the initial data are randomized.  As a corollary we obtain almost sure global well-posedness for the (ungauged) quartic gKdV at regularities where this PDE is deterministically ill-posed.
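For reference, one common formulation of the periodic gKdV family (sign and scaling conventions vary by author; the quartic case corresponds to k = 4) is

    \[ \partial_t u + \partial_x^3 u \pm \partial_x\bigl(u^{k}\bigr) = 0, \qquad u(t,\cdot)\colon \mathbb{T} \to \mathbb{R}, \]

with the Gibbs measure formally built from the Hamiltonian \( H(u) = \int_{\mathbb{T}} \tfrac{1}{2} u_x^2 \mp \tfrac{1}{k+1} u^{k+1}\, dx \), schematically \( d\mu \propto e^{-H(u)} \prod_x du(x) \).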
Thursday, January 24, 2013 - 15:05 , Location: Skiles 006 , Sebastien Bubeck , Princeton University , Organizer: Karim Lounici
In low dimensions a random geometric graph behaves very differently from a standard Erdős–Rényi random graph. On the other hand, when the dimension tends to infinity (with the number of vertices fixed) the two models coincide. In this talk we study the behavior of the clique number of random geometric graphs when the dimension grows with the number of vertices.
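As a rough illustration, here is a minimal simulation sketch assuming one common model (n uniform points on the sphere S^{d-1}, connected when their inner product is nonnegative, so each edge has marginal probability about 1/2; the talk's precise model may differ), comparing the clique number with that of an Erdős–Rényi graph of the same edge density as d grows:

    import numpy as np
    import networkx as nx

    def sphere_rgg(n, d, rng=None):
        # n unit vectors in R^d; connect i ~ j when <x_i, x_j> >= 0,
        # which gives each edge marginal probability 1/2.
        rng = np.random.default_rng(rng)
        X = rng.standard_normal((n, d))
        X /= np.linalg.norm(X, axis=1, keepdims=True)
        A = (X @ X.T >= 0) & ~np.eye(n, dtype=bool)
        return nx.from_numpy_array(A.astype(int))

    def clique_number(G):
        return max(len(c) for c in nx.find_cliques(G))

    n = 50
    for d in (2, 10, 100, 1000):
        print(d, clique_number(sphere_rgg(n, d)))
    print("ER:", clique_number(nx.gnp_random_graph(n, 0.5)))  # same edge density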
Thursday, January 17, 2013 - 15:05 , Location: Skiles 006 , Louis-Pierre Arguin , Université de Montréal , Organizer:
Gaussian fields with logarithmically decaying correlations, such as branching Brownian motion and the 2D Gaussian free field, are conjectured to form a new universality class of extreme value statistics (notably in the work of Carpentier & Le Doussal and Fyodorov & Bouchaud). This class is the borderline case between the class of IID random variables and models where correlations start to affect the statistics. In this talk, I will report on the recent rigorous progress in describing the new features of this class. In particular, I will describe the emergence of Poisson-Dirichlet statistics. This is joint work with Olivier Zindy.
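Concretely, "logarithmically decaying correlations" refers to covariances of the schematic form

    \[ \mathbb{E}\bigl[X(x)\,X(y)\bigr] \approx -\log |x - y|, \]

which holds, up to regularization and bounded corrections, for the 2D Gaussian free field and, in an analogous sense, for branching Brownian motion.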
Thursday, December 6, 2012 - 15:05 , Location: Skiles 006 , Wenbo Li , University of Delaware , Organizer:
There is a long history of studying zeros of random polynomials whose coefficients are independent, identically distributed, non-degenerate random variables. We will first provide an overview of zeros of random functions and then show exact and/or asymptotic bounds on the probability that all zeros of a random polynomial are real, under various coefficient distributions. The talk is accessible to undergraduate and graduate students in any area of mathematics.
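As a toy illustration of the kind of probability in question, here is a Monte Carlo sketch for one particular model (iid standard normal coefficients; the talk treats several distributions and gives exact or asymptotic bounds rather than simulations):

    import numpy as np

    def prob_all_zeros_real(degree, trials=20000, tol=1e-8, seed=0):
        # Estimate P(all zeros are real) for a random polynomial whose
        # degree + 1 coefficients are iid standard normal.
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(trials):
            roots = np.roots(rng.standard_normal(degree + 1))
            if np.all(np.abs(roots.imag) <= tol * (1.0 + np.abs(roots.real))):
                hits += 1
        return hits / trials

    for n in (2, 3, 4, 5, 6):
        print(n, prob_all_zeros_real(n))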
Thursday, November 29, 2012 - 15:05 , Location: Skiles 006 , Tai Melcher , University of Virginia , Organizer:
 Smoothness is a fundamental principle in the study of measures on infinite-dimensional spaces, where an obvious obstruction to overcome is the lack of an infinite-dimensional Lebesgue or volume measure. Canonical examples of smooth measures include those induced by a Brownian motion, both its end point distribution and as a real-valued path. More generally, any Gaussian measure on a Banach space is smooth. Heat kernel measure is the law of a Brownian motion on a curved space, and as such is the natural analogue of Gaussian measure there. We will discuss some recent smoothness results for these measures on certain classes of infinite-dimensional groups, including in some degenerate settings. This is joint work with Fabrice Baudoin, Daniel Dobbs, and Masha Gordina.
Thursday, November 15, 2012 - 15:05 , Location: Skiles 006 , Cun-Hui Zhang , Rutgers University , Organizer: Karim Lounici
This paper concerns the problem of matrix completion, which is to estimate a matrix from observations in a small subset of indices. We propose a calibrated spectrum elastic net method with a sum of the nuclear and Frobenius penalties and develop an iterative algorithm to solve the convex minimization problem. The iterative algorithm alternates between imputing the missing entries in the incomplete matrix by the current guess and estimating the matrix by a scaled soft-thresholding singular value decomposition of the imputed matrix, until the resulting matrix converges. A calibration step follows to correct the bias caused by the Frobenius penalty. Under proper coherence conditions and for suitable penalty levels, we prove that the proposed estimator achieves an error bound of nearly optimal order, in proportion to the noise level. This provides a unified analysis of the noisy and noiseless matrix completion problems. Tingni Sun and Cun-Hui Zhang, Rutgers University
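A minimal sketch of the alternating step described above: imputation of the unobserved entries followed by the scaled soft-thresholded SVD, which is the proximal map of the combined nuclear-plus-Frobenius penalty. The final calibration step that corrects the Frobenius bias is omitted, and lam_nuc, lam_frob are placeholder penalty levels.

    import numpy as np

    def scaled_svt(Z, lam_nuc, lam_frob):
        # Proximal map of lam_nuc*||.||_* + (lam_frob/2)*||.||_F^2:
        # soft-threshold the singular values, then rescale.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - lam_nuc, 0.0) / (1.0 + lam_frob)
        return (U * s) @ Vt

    def complete_matrix(Y, observed, lam_nuc, lam_frob, n_iter=300, tol=1e-6):
        # Alternate between imputing the unobserved entries with the current
        # estimate and applying the scaled soft-thresholded SVD.
        M = np.zeros_like(Y)
        for _ in range(n_iter):
            Z = np.where(observed, Y, M)
            M_new = scaled_svt(Z, lam_nuc, lam_frob)
            if np.linalg.norm(M_new - M) <= tol * max(1.0, np.linalg.norm(M)):
                return M_new
            M = M_new
        return M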
Thursday, November 8, 2012 - 15:05 , Location: Skiles 006 , Chris Evans , University of Missouri , Organizer:
In a series of famous papers, E. Wong and M. Zakai showed that the solution to a Stratonovich SDE is the limit of the solutions to a corresponding ODE driven by the piecewise-linear interpolation of the driving Brownian motion. In particular, this implies that solutions to Stratonovich SDEs "behave as we would expect from ODE theory". In joint work with my PhD adviser, Daniel Stroock, we have shown that a similar approximation result holds, in the sense of weak convergence of distributions, for reflected Stratonovich SDEs.
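A small numerical illustration of the classical (non-reflected) Wong-Zakai statement, for the toy scalar Stratonovich SDE dX = sin(X) o dB; the coefficient, grids, and schemes here are all assumptions for the sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    T, n_fine, n_coarse = 1.0, 2**14, 2**7
    dt = T / n_fine
    dB = rng.standard_normal(n_fine) * np.sqrt(dt)
    B = np.concatenate(([0.0], np.cumsum(dB)))

    sigma = np.sin  # diffusion coefficient of dX = sigma(X) o dB

    # Stratonovich solution approximated by the Heun (midpoint) scheme on the fine grid.
    x = 1.0
    for k in range(n_fine):
        pred = x + sigma(x) * dB[k]
        x += 0.5 * (sigma(x) + sigma(pred)) * dB[k]

    # ODE y' = sigma(y) * B_lin'(t), with B_lin the piecewise-linear
    # interpolation of B on the coarse grid, integrated by Euler on the fine grid.
    y = 1.0
    stride = n_fine // n_coarse
    for k in range(n_coarse):
        slope = (B[(k + 1) * stride] - B[k * stride]) / (T / n_coarse)
        for _ in range(stride):
            y += sigma(y) * slope * dt

    print(x, y)  # the two endpoints should be close once n_coarse is large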
Thursday, November 1, 2012 - 15:05 , Location: Skiles 006 , Ravi Srinivasan , University of Texas at Austin , Organizer:
Burgers turbulence is the study of Burgers equation with random initial data or forcing.  While having its origins in hydrodynamics, this model has remarkable connections to a variety of seemingly unrelated problems in statistics, kinetic theory, random matrices, and integrable systems. In this talk I will survey these connections and discuss the crucial role that exact solutions have played in the development of the theory.
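For reference, the one-dimensional Burgers equation (viscous case; the inviscid model takes \( \nu = 0 \)) is

    \[ \partial_t u + u\,\partial_x u = \nu\,\partial_x^2 u, \]

posed with random initial data u(0, .) or with a random forcing term added to the right-hand side.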
Thursday, October 18, 2012 - 15:05 , Location: Skiles 006 , Ivan Corwin , Clay Mathematics Institute and MIT , Organizer:
The Gaussian central limit theorem says that for a wide class of stochastic systems, the bell curve (Gaussian distribution) describes the statistics for random fluctuations of important observables. In this talk I will look beyond this class of systems to a collection of probabilistic models which include random growth models, polymers, particle systems, matrices and stochastic PDEs, as well as certain asymptotic problems in combinatorics and representation theory. I will explain in what ways these different examples all fall into a single new universality class with a much richer mathematical structure than that of the Gaussian.
Tuesday, October 9, 2012 - 15:05 , Location: Skiles 005 , Yiyuan She , Florida State University , Organizer: Karim Lounici
Rank reduction as an effective technique for dimension reduction is widely used in statistical modeling and machine learning. Modern statistical applications entail high dimensional data analysis where there may exist a large number of nuisance variables. But plain rank reduction cannot discern relevant or important variables. The talk discusses joint variable and rank selection for predictive learning. We propose to apply sparsity and reduced rank techniques to attain simultaneous feature selection and feature extraction in a vector regression setup. A class of estimators is introduced based on novel penalties that impose both row and rank restrictions on the coefficient matrix. Selectable principal component analysis is proposed and studied from a self-regression standpoint, which extends sparse principal component analysis. We show that these estimators adapt to the unknown matrix sparsity and have fast rates of convergence in comparison with LASSO and reduced rank regression. Efficient computational algorithms are developed and applied to real world applications.
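A toy sketch of one way to combine row sparsity with rank reduction in multivariate regression: a proximal gradient step with a group-lasso penalty on the rows of the coefficient matrix, followed by a hard rank truncation. This is an illustration only, not the penalized estimator or algorithm analyzed in the talk, and lam and rank are placeholder tuning parameters.

    import numpy as np

    def row_soft_threshold(B, lam):
        # Group-lasso prox: shrink each row of B toward zero in l2 norm.
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        return B * np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)

    def sparse_reduced_rank(X, Y, lam, rank, n_iter=500):
        # Least-squares fit of Y ~ X @ B, alternating a gradient step,
        # row-wise soft-thresholding, and truncation to the target rank.
        B = np.zeros((X.shape[1], Y.shape[1]))
        step = 1.0 / np.linalg.norm(X, 2) ** 2
        for _ in range(n_iter):
            G = X.T @ (X @ B - Y)
            B = row_soft_threshold(B - step * G, step * lam)
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            B = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        return B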
