Seminars and Colloquia by Series

Estimation of smooth functionals in high-dimensional and infinite-dimensional models

Series
Stochastics Seminar
Time
Thursday, February 16, 2023 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Vladimir Koltchinskii, Georgia Tech

The problem of estimation of smooth functionals of unknown parameters of statistical models will be discussed in the cases of high-dimensional log-concave location models (joint work with Martin Wahl) and infinite-dimensional Gaussian models with unknown covariance operator. In both cases, minimax optimal error rates have been obtained over classes of Hölder-smooth functionals, with precise dependence on the sample size, the complexity of the parameter (its dimension in the case of log-concave location models, or the effective rank of the covariance in the case of Gaussian models), and the degree of smoothness of the functionals. These rates are attained by different types of estimators based on two different methods of bias reduction in functional estimation.
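As a toy illustration of bias reduction in smooth functional estimation (a minimal sketch, not the estimators from the talk), the following compares a naive plug-in estimator of the functional $f(\theta) = \theta^2$ of a Gaussian mean with its jackknife bias correction; for this functional the plug-in bias is exactly $\sigma^2/n$ and the jackknife removes it entirely.

```python
import numpy as np

# Toy illustration of bias reduction (not the talk's estimators):
# plug-in vs. jackknife estimation of f(theta) = theta^2, theta = mean.
rng = np.random.default_rng(0)
theta, n, reps = 1.0, 50, 20000

plug, jack = [], []
for _ in range(reps):
    x = rng.normal(theta, 1.0, n)
    m = x.mean()
    plug.append(m * m)                     # plug-in: bias = sigma^2 / n
    loo = (n * m - x) / (n - 1)            # leave-one-out means
    jack.append(n * m * m - (n - 1) * np.mean(loo ** 2))

bias_plug = np.mean(plug) - theta ** 2     # approximately 1/n = 0.02
bias_jack = np.mean(jack) - theta ** 2     # approximately 0
```

Higher-order smooth functionals require more refined corrections (iterated bootstrap or analytic bias expansions), which is where the dependence on the degree of smoothness in the minimax rates enters.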

Large Dimensional Independent Component Analysis: Statistical Optimality and Computational Tractability

Series
Stochastics Seminar
Time
Thursday, November 17, 2022 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Ming Yuan, Columbia University

Independent component analysis (ICA) is a useful and general data analysis tool that has found great success in many applications. But in recent years, it has been observed that many popular approaches to ICA do not scale well with the number of components. This debacle has inspired a growing number of new proposals. Yet it remains unclear what exact role the number of components plays in the information-theoretic limits and computational complexity of ICA. Here I will describe our recent work addressing these questions and introduce a refined method of moments that is both computationally tractable and statistically optimal.
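For context, a classical moment-based ICA in dimension two (a baseline sketch, not the refined method of moments from the talk) whitens the data and then searches the one-parameter family of rotations for maximal non-Gaussianity, measured by excess kurtosis:

```python
import numpy as np

# Classical kurtosis-based ICA in d = 2 (a baseline, not the talk's
# refined method of moments): whiten, then scan rotations.
rng = np.random.default_rng(0)
n = 20000
S = rng.uniform(-np.sqrt(3), np.sqrt(3), (2, n))  # unit-variance sources
A = np.array([[2.0, 1.0], [1.0, 1.5]])            # unknown mixing matrix
X = A @ S

# Whitening: Z = Cov(X)^{-1/2} X has (empirical) identity covariance.
w, V = np.linalg.eigh(np.cov(X))
Z = V @ np.diag(w ** -0.5) @ V.T @ X

def excess_kurt(y):
    return np.mean(y ** 4) - 3.0   # components are unit-variance here

def rotate(t):
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return R @ Z

# After whitening only a rotation ambiguity remains; scan [0, pi/2].
thetas = np.linspace(0.0, np.pi / 2, 1000)
best = max(thetas, key=lambda t: sum(excess_kurt(y) ** 2 for y in rotate(t)))
Shat = rotate(best)
# |correlation| of recovered vs. true sources (up to permutation/sign)
corr = np.abs(np.corrcoef(np.vstack([Shat, S]))[:2, 2:])
```

The grid search over rotations is what fails to scale: in dimension $d$ the rotation group has dimension $d(d-1)/2$, which is one way to see why naive approaches degrade as the number of components grows.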

Breaking the curse of dimensionality for boundary value PDE in high dimensions

Series
Stochastics Seminar
Time
Thursday, November 10, 2022 - 15:30 for 1 hour (actually 50 minutes)
Location
ONLINE
Speaker
Ionel Popescu, University of Bucharest and Simion Stoilow Institute of Mathematics

Zoom link to the seminar: https://gatech.zoom.us/j/91330848866

I will show how to construct a numerical scheme for solutions of linear Dirichlet-Poisson boundary problems which does not suffer from the curse of dimensionality. In fact, we show that as the dimension increases, the complexity of this scheme grows only polynomially (with low degree) in the dimension. The key is a subtle use of the walk-on-spheres method combined with a concentration inequality. As a byproduct, we show that this result has a simple consequence in terms of neural networks for the approximation of the solution. This is joint work with Iulian Cimpean, Arghir Zarnescu, Lucian Beznea and Oana Lupascu.
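A bare-bones walk on spheres (the classical method the scheme builds on, not the refined dimension-robust variant of the talk) for the Laplace equation on the unit disk: from the current point, jump uniformly to a sphere of radius equal to the distance to the boundary, and stop once within a tolerance of the boundary. With boundary data $g(x,y) = x$, the harmonic extension is $u(x,y) = x$, so the estimate can be checked exactly.

```python
import numpy as np

# Plain walk on spheres for  Δu = 0  on the unit disk with boundary
# data g(x, y) = x, whose harmonic extension is u(x, y) = x.
rng = np.random.default_rng(0)

def walk_on_spheres(p, eps=1e-3):
    p = np.array(p, dtype=float)
    while True:
        r = 1.0 - np.linalg.norm(p)          # distance to the boundary
        if r < eps:
            return p[0] / np.linalg.norm(p)  # g at nearest boundary point
        phi = rng.uniform(0.0, 2 * np.pi)    # uniform point on the sphere
        p = p + r * np.array([np.cos(phi), np.sin(phi)])

x0 = (0.3, 0.2)
est = np.mean([walk_on_spheres(x0) for _ in range(5000)])  # ≈ u(x0) = 0.3
```

Each walk takes $O(\log(1/\varepsilon))$ steps in expectation, and the cost per step grows only linearly in the dimension, which is the starting point for beating the curse of dimensionality.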

Fluctuation results for the size of the vacant set for random walks on the discrete torus

Series
Stochastics Seminar
Time
Thursday, November 3, 2022 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Daesung Kim, Georgia Tech

We consider a random walk on the $d\ge 3$ dimensional discrete torus started from vertices chosen independently and uniformly at random. In this talk, we discuss the fluctuation behavior of the size of the range of the random walk trajectories at a time proportional to the size of the torus. The proof relies on a refined analysis of tail estimates for hitting times. We also discuss related results and open problems. This is based on joint work with Partha Dey.
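The object of study is easy to simulate (a sketch of the setting only, not of the proof): run a simple random walk on $(\mathbb{Z}/n\mathbb{Z})^3$ from a uniform starting vertex and record the number of unvisited (vacant) vertices at times proportional to the volume $n^3$.

```python
import numpy as np

# Vacant set of a simple random walk on the discrete torus (Z/nZ)^3,
# observed at times u * n^3 for u = 1, 2.
rng = np.random.default_rng(0)
n, d = 12, 3
N = n ** d
times = [N, 2 * N]

pos = tuple(rng.integers(0, n, d))     # uniform starting vertex
visited = {pos}
vacant = []
for t in range(1, max(times) + 1):
    k = rng.integers(0, d)             # pick a coordinate ...
    s = rng.choice([-1, 1])            # ... and a direction
    pos = tuple((pos[i] + (s if i == k else 0)) % n for i in range(d))
    visited.add(pos)
    if t in times:
        vacant.append(N - len(visited))
```

At time $u\,n^d$ a positive fraction of the torus remains vacant for each fixed $u$; the talk concerns the finer fluctuations of this quantity around its mean.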

Ballistic Annihilation

Series
Stochastics Seminar
Time
Thursday, October 27, 2022 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Matthew Junge, Baruch College, CUNY

In the late 20th century, statistical physicists introduced a chemical reaction model called ballistic annihilation. In it, particles are placed randomly throughout the real line and then proceed to move at independently sampled velocities. Collisions result in mutual annihilation. Many results were inferred by physicists, but it wasn’t until recently that mathematicians joined in. I will describe my trajectory through this model. Expect tantalizing open questions.
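The model is simple to simulate with an event-driven scheme (a generic sketch; the talk's results concern particular velocity distributions): since particles on a line cannot pass one another without colliding, the next collision is always between currently adjacent particles, so it suffices to scan adjacent pairs for the earliest meeting time.

```python
import numpy as np

# Event-driven ballistic annihilation on the line: particles move at
# iid velocities; colliding adjacent pairs mutually annihilate.
rng = np.random.default_rng(0)
n = 100
pos = sorted(rng.uniform(0, 1, n).tolist())
vel = rng.normal(0, 1, n).tolist()

while True:
    best = None                                       # earliest adjacent collision
    for i in range(len(pos) - 1):
        if vel[i] > vel[i + 1]:                       # pair is approaching
            dt = (pos[i + 1] - pos[i]) / (vel[i] - vel[i + 1])
            if best is None or dt < best[0]:
                best = (dt, i)
    if best is None:                                  # no approaching pair: done
        break
    dt, i = best
    pos = [p + v * dt for p, v in zip(pos, vel)]      # advance to collision time
    del pos[i:i + 2]                                  # mutual annihilation
    del vel[i:i + 2]

survivors = len(vel)
```

At termination the surviving velocities are non-decreasing from left to right (no pair is approaching), and the number of annihilated particles is even by construction.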

Statistical Tensor Learning in 2020s: Methodology, Theory, and Applications

Series
Stochastics Seminar
Time
Thursday, October 20, 2022 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Anru Zhang, Duke University

The analysis of tensor data, i.e., arrays with multiple directions, has become an active research topic in the era of big data. Datasets in the form of tensors arise from a wide range of scientific applications. Tensor methods also provide unique perspectives to many high-dimensional problems, where the observations are not necessarily tensors. Problems in high-dimensional tensors generally possess distinct characteristics that pose great challenges to the data science community. 

In this talk, we discuss several recent advances in statistical tensor learning and their applications in computational imaging, social networks, and generative models. We also illustrate how we develop statistically optimal methods and computationally efficient algorithms that interact with modern theories of computation, high-dimensional statistics, and non-convex optimization.
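A workhorse primitive in this area is low-multilinear-rank (Tucker) decomposition; the sketch below implements plain HOSVD (a standard baseline, not the statistically optimal procedures of the talk): take the top singular vectors of each mode unfolding, then project to obtain the core.

```python
import numpy as np

# Higher-order SVD (HOSVD) for a third-order tensor, verified on an
# exactly rank-(2,2,2) tensor, where the reconstruction is exact.
rng = np.random.default_rng(0)

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    # multiply tensor T by matrix M along the given mode
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    U = []
    for k, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(T, k), full_matrices=False)
        U.append(u[:, :r])                  # top-r left singular vectors
    G = T
    for k in range(3):
        G = mode_product(G, U[k].T, k)      # project to get the core
    return G, U

# Build an exactly low-rank tensor: random core x orthonormal factors.
G0 = rng.normal(size=(2, 2, 2))
Us = [np.linalg.qr(rng.normal(size=(10, 2)))[0] for _ in range(3)]
T = G0
for k in range(3):
    T = mode_product(T, Us[k], k)

G, U = hosvd(T, (2, 2, 2))
That = G
for k in range(3):
    That = mode_product(That, U[k], k)
err = np.linalg.norm(That - T) / np.linalg.norm(T)
```

In the noisy, high-dimensional regimes of the talk, one-shot HOSVD is typically only a warm start; iterative refinement and careful analysis are needed to reach the statistical optimum.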

Efficient and Near-Optimal Online Portfolio Selection

Series
Stochastics Seminar
Time
Friday, October 14, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Dmitrii M. Ostrovskii, University of Southern California

In the problem of online portfolio selection as formulated by Cover (1991), the trader repeatedly distributes her capital over $ d $ assets in each of $ T > 1 $ rounds, with the goal of maximizing the total return. Cover proposed an algorithm called Universal Portfolios that performs nearly as well as the best (in hindsight) static assignment of a portfolio, with an $ O(d\log(T)) $ regret in terms of the logarithmic return. Without imposing any restrictions on the market, this guarantee is known to be worst-case optimal, and no other algorithm attaining it has been discovered so far. Unfortunately, Cover's algorithm crucially relies on computing an expectation over a certain log-concave density in $\mathbb{R}^d$, so in a practical implementation this expectation has to be approximated via sampling, which is computationally challenging. In particular, the fastest known implementation, proposed by Kalai and Vempala in 2002, runs in $ O( d^4 (T+d)^{14} ) $ time per round, which rules out any practical application scenario. Proposing a practical algorithm with a near-optimal regret is a long-standing open problem. We propose an algorithm for online portfolio selection with a near-optimal regret guarantee of $ O( d \log(T+d) ) $ and a runtime of only $ O( d^2 (T+d) ) $ per round. In a nutshell, our algorithm is a variant of the follow-the-regularized-leader scheme, with a time-dependent regularizer given by the volumetric barrier for the sum of observed losses. Thus, our result gives a fresh perspective on the concept of the volumetric barrier, initially proposed in the context of cutting-plane methods and interior-point methods by Vaidya (1989) and Nesterov and Nemirovski (1994), respectively. Our side contribution, of independent interest, is deriving the volumetrically regularized portfolio as a variational approximation of the universal portfolio: namely, we show that it minimizes Gibbs's free energy functional, with accuracy of order $ O( d \log(T+d) ) $. This is joint work with Remi Jezequel and Pierre Gaillard.
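Cover's Universal Portfolios is easy to state for two assets, where the expectation over portfolios can be replaced by a fine grid (a sketch that is tractable only because $d = 2$, which is precisely the computational issue the talk addresses): the portfolio played each round is the wealth-weighted average of all constant-rebalanced portfolios.

```python
import numpy as np

# Cover's Universal Portfolios for d = 2 assets via a grid over the
# simplex.  For d = 2 the regret bound is log(T + 1).
rng = np.random.default_rng(1)
T, G = 100, 201
x = rng.lognormal(0.0, 0.1, size=(T, 2))  # price relatives per round
bs = np.linspace(0.0, 1.0, G)             # portfolio (b, 1 - b)
logW = np.zeros(G)                        # log-wealth of each constant-rebalanced portfolio
log_univ = 0.0                            # log-wealth of the universal portfolio
for t in range(T):
    w = np.exp(logW - logW.max())
    b = (w * bs).sum() / w.sum()          # wealth-weighted average portfolio
    log_univ += np.log(b * x[t, 0] + (1 - b) * x[t, 1])
    logW += np.log(bs * x[t, 0] + (1 - bs) * x[t, 1])

regret = logW.max() - log_univ            # vs. best portfolio in hindsight
```

Because the played portfolio is the wealth-weighted average, the universal wealth telescopes to the average wealth over the grid, so the regret is non-negative and bounded by Cover's $\log(T+1)$ guarantee. In general dimension this average becomes an integral over a log-concave density on the simplex, which is where the sampling cost explodes.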

A stochastic approach for noise stability on the hypercube

Series
Stochastics Seminar
Time
Thursday, October 6, 2022 - 15:30 for 1 hour (actually 50 minutes)
Location
https://us02web.zoom.us/j/86578123009
Speaker
Dan Mikulincer, MIT

Please Note: Recording: https://us02web.zoom.us/rec/share/cIdTfvS0tjar04MWv9ltWrVxAcmsUSFvDznprSBT285wc0VzURfB3X8jR0CpWIWQ.Sz557oNX3k5L1cpN

We revisit the notion of noise stability in the hypercube and show how one can replace the usual heat semigroup with more general stochastic processes. We will then introduce a re-normalized Brownian motion, embedding the discrete hypercube into the Wiener space, and analyze the noise stability along its paths. Our approach leads to a new quantitative form of the 'Majority is Stablest' theorem from Boolean analysis and to progress on the 'most informative bit' conjecture of Kumar and Courtade.
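The classical notion being generalized can be checked numerically (a sketch of the standard heat-semigroup setting, not the talk's stochastic-process construction): sample $x$ uniform on $\{-1,1\}^n$, flip each coordinate independently with probability $(1-\rho)/2$ to get a $\rho$-correlated $y$, and estimate the probability that Majority agrees on $x$ and $y$, which tends to $1 - \arccos(\rho)/\pi$ as $n \to \infty$.

```python
import numpy as np

# Monte Carlo noise stability of Majority on the hypercube at
# correlation rho, compared with the n -> infinity limit.
rng = np.random.default_rng(0)
n, rho, N = 101, 0.5, 20000

x = rng.choice([-1, 1], size=(N, n))
flip = rng.random((N, n)) < (1 - rho) / 2       # flip w.p. (1 - rho)/2
y = np.where(flip, -x, x)                       # E[x_i y_i] = rho
maj = lambda z: np.sign(z.sum(axis=1))          # n odd, so no ties
agree = np.mean(maj(x) == maj(y))
limit = 1 - np.arccos(rho) / np.pi              # = 2/3 for rho = 0.5
```

'Majority is Stablest' asserts that among low-influence Boolean functions this limiting agreement probability is essentially the best possible; the talk's Brownian embedding gives a new quantitative route to such statements.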

Perturbation theory for systems with a first integral

Series
Stochastics Seminar
Time
Thursday, September 29, 2022 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Mark Freidlin, University of Maryland

I will consider the long-time influence of deterministic and stochastic perturbations of dynamical systems and diffusion processes with a first integral. A diffusion process on the Reeb graph of the first integral determines the long-time behavior of the perturbed system. In particular, I will consider the stochasticity of the long-time behavior of deterministic systems close to a system with a conservation law. I will also discuss which of the invariant measures of the unperturbed system is the limiting one for a given class of perturbations.
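The basic phenomenon can be seen in the simplest example (my own toy illustration, not an example from the talk): a fast rotation with Hamiltonian $H(x) = |x|^2/2$ plus small noise. The phase mixes on the fast time scale while the first integral $H$ evolves slowly like a diffusion; here Itô's formula gives $\mathbb{E}[H_t] = H_0 + t$ exactly, which the splitting scheme below preserves.

```python
import numpy as np

# dX = (1/eps) grad^perp H dt + dW with H = |x|^2 / 2: the rotation
# conserves H, the noise makes H drift; splitting keeps both exact.
rng = np.random.default_rng(0)
eps, dt, T, M = 0.05, 1e-3, 1.0, 4000
steps = int(T / dt)

X = np.tile([1.0, 0.0], (M, 1))                   # H_0 = 1/2 on every path
c, s = np.cos(dt / eps), np.sin(dt / eps)
R = np.array([[c, -s], [s, c]])                   # exact fast-rotation step
for _ in range(steps):
    X = X @ R.T                                   # conservative flow: H unchanged
    X += np.sqrt(dt) * rng.normal(size=X.shape)   # noise: E[H] grows by dt
H = 0.5 * (X ** 2).sum(axis=1)
mean_H = H.mean()                                 # ≈ H_0 + T = 1.5
```

For Hamiltonians with saddle points the level sets change topology, and the slow motion of $H$ must be tracked on the Reeb graph, with gluing conditions at the vertices; that is the structure the talk describes.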

BEAUTY Powered BEAST

Series
Stochastics Seminar
Time
Thursday, September 22, 2022 - 15:30 for 1 hour (actually 50 minutes)
Location
ONLINE
Speaker
Kai Zhang, UNC Chapel Hill

Link to the online seminar: https://gatech.zoom.us/j/94538442915

We study nonparametric dependence detection with the proposed binary expansion approximation of uniformity (BEAUTY) approach, which generalizes the celebrated Euler's formula and approximates the characteristic function of any copula with a linear combination of expectations of binary interactions from marginal binary expansions. This novel theory enables a unification of many important tests through approximations from some quadratic forms of symmetry statistics, where the deterministic weight matrix characterizes the power properties of each test. To achieve robust power, we study test statistics with data-adaptive weights, referred to as the binary expansion adaptive symmetry test (BEAST). By utilizing the properties of the binary expansion filtration, we show that the Neyman-Pearson test of uniformity can be approximated by an oracle weighted sum of symmetry statistics. The BEAST with this oracle provides a benchmark of feasible power against any alternative, leading all existing tests by a substantial margin. To approach this oracle power, we develop the BEAST through a regularized resampling approximation of the oracle test. The BEAST improves the empirical power of many existing tests against a wide spectrum of common alternatives and provides a clear interpretation of the form of dependency when significant. This is joint work with Zhigen Zhao and Wen Zhou.
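A depth-2 sketch in the spirit of binary expansion testing (a toy version with a fixed max-type statistic, not the regularized resampling procedure of the talk): rank-transform each variable, expand to two binary bits, and take the largest standardized cross-interaction between the two expansions.

```python
import numpy as np

# Toy binary expansion test of independence at depth 2: symmetry
# statistics are standardized sums of products of marginal bits.
rng = np.random.default_rng(0)

def bits(v, depth=2):
    u = (np.argsort(np.argsort(v)) + 1) / (len(v) + 1)  # ranks -> (0, 1)
    d = [np.where(np.floor(u * 2 ** k) % 2 == 1, 1, -1)
         for k in range(1, depth + 1)]                  # binary digits as +/-1
    # all nonempty interactions of the depth bits: d1, d2, d1*d2
    return np.stack([d[0], d[1], d[0] * d[1]])

def max_symmetry_stat(x, y):
    A, B = bits(x), bits(y)
    return np.abs(A @ B.T).max() / np.sqrt(len(x))      # max standardized interaction

n = 1000
x = rng.normal(size=n)
stat_dep = max_symmetry_stat(x, x + 0.1 * rng.normal(size=n))  # dependent pair
stat_ind = max_symmetry_stat(x, rng.normal(size=n))            # independent pair
```

Under independence each standardized interaction is approximately standard normal, so the max over the nine interactions stays small; under dependence the bit pattern that captures the relationship produces a large statistic, and which interaction fires is what makes the detected dependency interpretable.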
