### TBA by Anderson Y. Zhang

- Series
- Stochastics Seminar
- Time
- Thursday, April 1, 2021 - 15:30 for 1 hour (actually 50 minutes)
- Location
- Online TBA
- Speaker
- Anderson Y. Zhang – University of Pennsylvania

- Series
- Stochastics Seminar
- Time
- Thursday, February 4, 2021 - 15:30 for 1 hour (actually 50 minutes)
- Location
- ONLINE
- Speaker
- Martin Wahl – Humboldt University in Berlin – martin.wahl@math.hu-berlin.de

- Series
- Stochastics Seminar
- Time
- Thursday, January 21, 2021 - 15:30 for 1 hour (actually 50 minutes)
- Location
- https://bluejeans.com/751242993/PASSWORD (To receive the password, please email Lutz Warnke)
- Speaker
- Sayan Mukherjee – Duke University

Frieze showed that the expected weight of the minimum spanning tree (MST) of the uniformly weighted complete graph converges to ζ(3). Recently, this result was extended to a uniformly weighted simplicial complex, where the role of the MST is played by its higher-dimensional analogue, the minimum spanning acycle (MSA). In this work, we go further and examine the histogram of the weights in this random MSA, both in the bulk and in the extremes. In particular, we focus on the 'incomplete' setting, where one has access to only a fraction of the potential face weights. Our first result is that the empirical distribution of the MSA weights asymptotically converges to a measure based on the shadow, the higher-dimensional analogue of the complement of graph components. As far as we know, this is the first result to explore the connection between the MSA weights and the shadow. Our second result is that the extremal weights converge to an inhomogeneous Poisson point process. An interesting consequence of our two results is that we can also state the distribution of the death times in the persistence diagram corresponding to the above weighted complex, a result of interest in applied topology.

Based on joint work with Nicolas Fraiman and Gugan Thoppe, see https://arxiv.org/abs/2012.14122
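Frieze's ζ(3) theorem, the starting point of the abstract above, is easy to check by simulation. The sketch below (graph size and trial count are illustrative choices, not from the talk) builds the complete graph K_n with i.i.d. Uniform(0,1) edge weights and computes the MST weight with Kruskal's algorithm:

```python
# Monte Carlo check of Frieze's theorem: the expected weight of the minimum
# spanning tree of K_n with i.i.d. Uniform(0,1) edge weights converges to
# zeta(3) ~= 1.202 as n grows.
import random

def mst_weight(n, rng):
    """Kruskal's algorithm with union-find on K_n with uniform edge weights."""
    edges = [(rng.random(), i, j) for i in range(n) for j in range(i + 1, n)]
    edges.sort()
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, used = 0.0, 0
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            total += w
            used += 1
            if used == n - 1:
                break
    return total

rng = random.Random(0)
avg = sum(mst_weight(150, rng) for _ in range(20)) / 20
print(round(avg, 3))  # close to zeta(3) ~= 1.202 for moderately large n
```

The higher-dimensional MSA replaces edges by faces and spanning trees by spanning acycles, but the same greedy (Kruskal-type) construction underlies it.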

- Series
- Stochastics Seminar
- Time
- Thursday, December 3, 2020 - 15:30 for 1 hour (actually 50 minutes)
- Location
- https://bluejeans.com/504188361
- Speaker
- B. Cooper Boniece – Washington University in St. Louis

Over the past several decades, scale-invariant stochastic processes have been used in a wide range of applications, including internet traffic modeling and hydrology. However, compared to univariate scale invariance, far less attention has been paid to genuinely multivariate models that display the kind of scaling behavior the limit theory arguably suggests is most natural.

In this talk, I will introduce a new scale invariance model called operator fractional Lévy motion and discuss some of its interesting features, as well as some aspects of wavelet-based estimation of its scaling exponents. This is related to joint work with Gustavo Didier (Tulane University), Herwig Wendt (CNRS, IRIT Univ. of Toulouse) and Patrice Abry (CNRS, ENS-Lyon).
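The scaling-exponent estimation idea behind the talk can be illustrated in miniature: for a self-similar process, the variance of increments at scale s grows like s^(2H), so regressing log-variance on log-scale recovers H. In the sketch below, ordinary Brownian motion (H = 1/2) stands in for the operator fractional Lévy motion of the talk, and a simple increment-variance statistic stands in for the wavelet coefficients; both substitutions are mine, for brevity.

```python
# Estimate the scaling exponent H of Brownian motion (true value 1/2) by
# regressing log2(increment variance) on log2(scale).
import random, math

rng = random.Random(1)
n = 1 << 14
# Brownian motion as a cumulative sum of Gaussian steps.
bm = [0.0]
for _ in range(n):
    bm.append(bm[-1] + rng.gauss(0.0, 1.0))

scales = [1, 2, 4, 8, 16, 32]
logvar = []
for s in scales:
    inc = [bm[i + s] - bm[i] for i in range(0, n - s, s)]  # non-overlapping
    mean = sum(inc) / len(inc)
    var = sum((x - mean) ** 2 for x in inc) / len(inc)
    logvar.append(math.log2(var))

# Least-squares slope of log2(var) against log2(scale); H = slope / 2.
xs = [math.log2(s) for s in scales]
xbar, ybar = sum(xs) / len(xs), sum(logvar) / len(logvar)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, logvar)) / \
        sum((x - xbar) ** 2 for x in xs)
H = slope / 2
print(round(H, 2))  # should land near 0.5
```

Wavelet-based estimators refine this log-regression idea by replacing raw increments with wavelet coefficients across dyadic scales.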

- Series
- Stochastics Seminar
- Time
- Thursday, November 19, 2020 - 15:30 for 1 hour (actually 50 minutes)
- Location
- https://gatech.webex.com/gatech/j.php?MTID=mee147c52d7a4c0a5172f60998fee267a
- Speaker
- Tatiyana Apanasovich – George Washington University

The class referred to as the Cauchy family allows for the simultaneous modeling of long-memory dependence and of correlation at short and intermediate lags. We introduce a valid parametric family of cross-covariance functions for multivariate spatial random fields in which each component has a covariance function from a Cauchy family. We present the conditions on the parameter space that result in valid models with varying degrees of complexity. Practical implementations, including reparameterizations that reflect the conditions on the parameter space, will be discussed. We show the results of various Monte Carlo simulation experiments that explore the performance of our approach in terms of estimation and cokriging. The application of the proposed multivariate Cauchy model is illustrated on a dataset from the field of satellite oceanography.

Link to Cisco Webex meeting: https://gatech.webex.com/gatech/j.php?MTID=mee147c52d7a4c0a5172f60998fee267a
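For reference, a minimal sketch of the univariate Cauchy covariance family, under the common parametrization C(h) = σ² (1 + (|h|/φ)^α)^(−β/α) with 0 < α ≤ 2 and β > 0 (this parametrization is an assumption; the talk's multivariate cross-covariance construction imposes further validity conditions):

```python
# Univariate Cauchy-family covariance: alpha controls smoothness at the
# origin, beta the rate of power-law (long-memory) decay at large lags.

def cauchy_cov(h, sigma2=1.0, phi=1.0, alpha=1.0, beta=0.5):
    if not (0 < alpha <= 2 and beta > 0 and phi > 0 and sigma2 > 0):
        raise ValueError("invalid Cauchy-family parameters")
    return sigma2 * (1.0 + (abs(h) / phi) ** alpha) ** (-beta / alpha)

# Small beta gives the slow, long-memory decay the family is known for.
vals = [cauchy_cov(h, beta=0.2) for h in (0.0, 1.0, 10.0, 100.0)]
print([round(v, 3) for v in vals])  # slowly decaying from 1.0
```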

- Series
- Stochastics Seminar
- Time
- Thursday, November 12, 2020 - 15:30 for 1 hour (actually 50 minutes)
- Location
- https://bluejeans.com/445382510
- Speaker
- Song Mei – UC Berkeley

For a certain scaling of the initialization of stochastic gradient descent (SGD), wide neural networks (NNs) have been shown to be well approximated by reproducing kernel Hilbert space (RKHS) methods. Recent empirical work showed that, for some classification tasks, RKHS methods can replace NNs without a large loss in performance. On the other hand, two-layer NNs are known to encode richer smoothness classes than RKHS, and we know of special examples for which SGD-trained NNs provably outperform RKHS. This is true also in the wide-network limit, for a different scaling of the initialization.

How can we reconcile the above claims? For which tasks do NNs outperform RKHS? If feature vectors are nearly isotropic, RKHS methods suffer from the curse of dimensionality, while NNs can overcome it by learning the best low-dimensional representation. Here we show that this curse of dimensionality becomes milder if the feature vectors display the same low-dimensional structure as the target function, and we precisely characterize this tradeoff. Building on these results, we present a model that can capture in a unified framework both behaviors observed in earlier work. We hypothesize that such a latent low-dimensional structure is present in image classification. We test this hypothesis numerically by showing that specific perturbations of the training distribution degrade the performance of RKHS methods much more significantly than that of NNs.

- Series
- Stochastics Seminar
- Time
- Thursday, November 5, 2020 - 15:30 for 1 hour (actually 50 minutes)
- Location
- https://bluejeans.com/974631214
- Speaker
- Daniel Sussman – Boston University

We consider the ramifications of utilizing biased latent position estimates in subsequent statistical analysis in exchange for sizable variance reductions in finite networks. We establish an explicit bias-variance tradeoff for latent position estimates produced by the omnibus embedding in the presence of heterogeneous network data. We reveal an analytic bias expression, derive a uniform concentration bound on the residual term, and prove a central limit theorem characterizing the distributional properties of these estimates.

Link to the BlueJeans meeting https://bluejeans.com/974631214
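The omnibus embedding referred to above rests on a concrete matrix construction (due to Levin et al.; the sketch below assumes their standard form): given adjacency matrices A_1, ..., A_m of networks on a shared vertex set, form the mn × mn block matrix whose (i, j) block is (A_i + A_j)/2, then spectrally embed it so that all networks live in one coordinate system. Only the matrix construction is shown; the embedding step (scaled top eigenvectors) is omitted.

```python
# Build the omnibus matrix for a list of n x n adjacency matrices.

def omnibus(mats):
    m, n = len(mats), len(mats[0])
    M = [[0.0] * (m * n) for _ in range(m * n)]
    for bi in range(m):
        for bj in range(m):
            for r in range(n):
                for c in range(n):
                    # (bi, bj) block averages networks bi and bj.
                    M[bi * n + r][bj * n + c] = (mats[bi][r][c] + mats[bj][r][c]) / 2
    return M

# Two toy 3-node networks sharing a vertex set.
A1 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
A2 = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
M = omnibus([A1, A2])
# Diagonal blocks reproduce the individual networks;
# off-diagonal blocks average them.
print(M[0][2], M[0][5])  # 0.0 within network 1; (0 + 1)/2 = 0.5 across networks
```

The averaging in the off-diagonal blocks is exactly the source of the bias-variance tradeoff discussed in the abstract: heterogeneous networks are pulled toward a common estimate, reducing variance at the cost of bias.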

- Series
- Stochastics Seminar
- Time
- Thursday, October 22, 2020 - 17:00 for 1 hour (actually 50 minutes)
- Location
- https://bluejeans.com/751242993/PASSWORD (To receive the password, please email Lutz Warnke)
- Speaker
- Adrian Roellin – National University of Singapore

Dense graph limit theory is essentially a first-order limit theory analogous to the classical Law of Large Numbers. Is there a corresponding central limit theorem? We believe so. Using the language of Gaussian Hilbert Spaces and the comprehensive theory of generalised U-statistics developed by Svante Janson in the 90s, we identify a collection of Gaussian measures (aka white noise processes) that describes the fluctuations of all orders of magnitude for a broad family of random graphs. We complement the theory with error bounds using a new variant of Stein’s method for multivariate normal approximation, which allows us to also generalise Janson’s theory in some important aspects. This is joint work with Gursharn Kaur.

Please note the unusual time: 5pm
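The two levels of the theory can be seen in the simplest case, the triangle count of G(n, p), which stands in here as the most basic generalised U-statistic (my choice of example, not the talk's): the law-of-large-numbers (graph-limit) level fixes the count at roughly C(n,3)·p³, while the CLT level governs the fluctuations around it.

```python
# Triangle count of an Erdos-Renyi graph G(n, p) versus its first-order
# (graph-limit) prediction C(n,3) * p^3.
import random
from itertools import combinations
from math import comb

def triangle_count(n, p, rng):
    adj = [[False] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        if rng.random() < p:
            adj[i][j] = adj[j][i] = True
    return sum(1 for i, j, k in combinations(range(n), 3)
               if adj[i][j] and adj[j][k] and adj[i][k])

rng = random.Random(2)
n, p = 60, 0.3
t = triangle_count(n, p, rng)
expected = comb(n, 3) * p ** 3
print(t, round(expected))  # the count fluctuates around the expectation
```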

- Series
- Stochastics Seminar
- Time
- Thursday, October 15, 2020 - 15:30 for 1 hour (actually 50 minutes)
- Location
- Bluejeans (link to be sent)
- Speaker
- Xiao Shen – University of Wisconsin – xshen66@wisc.edu

(Joint work with Timo Seppäläinen) We establish estimates for the coalescence time of semi-infinite directed geodesics in the planar corner growth model with i.i.d. exponential weights. There are four estimates: upper and lower bounds on the probabilities of both fast and slow coalescence on the correct spatial scale with exponent 3/2. Our proofs utilize a geodesic duality introduced by Pimentel and properties of the increment-stationary last-passage percolation process. For fast coalescence our bounds are new and they have matching optimal exponential order of magnitude. For slow coalescence, we reproduce bounds proved earlier with integrable probability inputs, except that our upper bound misses the optimal order by a logarithmic factor.
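The corner growth model of the abstract is simple to simulate: last-passage times G(i, j) over up-right paths with i.i.d. Exponential(1) weights satisfy the recursion G(i, j) = max(G(i−1, j), G(i, j−1)) + w(i, j). The sketch below only probes the classical law of large numbers G(n, n)/n → 4; the 3/2 exponent of the talk concerns geodesic coalescence, which this sketch does not touch.

```python
# Last-passage percolation times in the planar corner growth model
# with i.i.d. Exponential(1) weights, via dynamic programming.
import random

def last_passage(n, rng):
    G = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            G[i][j] = max(G[i - 1][j], G[i][j - 1]) + rng.expovariate(1.0)
    return G[n][n]

rng = random.Random(3)
n = 200
val = last_passage(n, rng) / n
print(round(val, 2))  # approaches 4 as n grows, with n^(1/3) fluctuations
```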

- Series
- Stochastics Seminar
- Time
- Thursday, October 8, 2020 - 15:30 for 1 hour (actually 50 minutes)
- Location
- https://gatech.webex.com/gatech/j.php?MTID=mdd4512d3d11623149a0bd46d9fc086c8
- Speaker
- Bharath Sriperumbudur – Pennsylvania State University

Kernel principal component analysis (KPCA) is a popular non-linear dimensionality reduction technique which generalizes classical linear PCA by finding functions in a reproducing kernel Hilbert space (RKHS) such that the function evaluation at a random variable $X$ has maximum variance. Despite its popularity, KPCA suffers from poor scalability in big data scenarios, as it involves solving an $n \times n$ eigensystem, leading to a computational complexity of $O(n^3)$, with $n$ being the number of samples. To address this issue, in this work we consider a random feature approximation to KPCA which requires solving an $m \times m$ eigenvalue problem and therefore has a computational complexity of $O(m^3 + nm^2)$, implying that the approximate method is computationally efficient if $m < n$, with $m$ being the number of random features. The goal of this work is to investigate the trade-off between the computational and statistical behaviors of approximate KPCA, i.e., whether the computational gain is achieved at the cost of statistical efficiency. We show that approximate KPCA is both computationally and statistically efficient compared to KPCA, in terms of the error associated with reconstructing a kernel function based on its projection onto the corresponding eigenspaces.

Link to Cisco Webex meeting: https://gatech.webex.com/gatech/j.php?MTID=mdd4512d3d11623149a0bd46d9fc086c8
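The random-feature approximation can be sketched in miniature. Below, the Gaussian kernel k(x, y) = exp(−|x − y|²/2) is replaced by the inner product of m random Fourier features z(x), so PCA only needs the m × m feature covariance instead of the n × n kernel matrix, i.e. O(m³ + nm²) rather than O(n³). One-dimensional inputs, a single top component found by power iteration, and the particular values of m and n are my simplifying choices, not the talk's.

```python
# Random-feature KPCA sketch: diagonalize the m x m covariance of random
# Fourier features instead of the n x n kernel matrix.
import random, math

rng = random.Random(4)
n, m = 500, 40
data = [rng.gauss(0.0, 1.0) for _ in range(n)]

# Random Fourier features for the unit-bandwidth Gaussian kernel:
# z_l(x) = sqrt(2/m) * cos(w_l * x + b_l), w_l ~ N(0,1), b_l ~ U[0, 2*pi].
ws = [rng.gauss(0.0, 1.0) for _ in range(m)]
bs = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(m)]

def features(x):
    return [math.sqrt(2.0 / m) * math.cos(w * x + b) for w, b in zip(ws, bs)]

Z = [features(x) for x in data]
means = [sum(z[l] for z in Z) / n for l in range(m)]
Zc = [[z[l] - means[l] for l in range(m)] for z in Z]

# m x m covariance of the centered features: the object approximate KPCA
# diagonalizes in place of the n x n kernel matrix.
C = [[sum(z[a] * z[b] for z in Zc) / n for b in range(m)] for a in range(m)]

# Power iteration for the top eigenpair.
v = [1.0] * m
for _ in range(200):
    u = [sum(C[a][b] * v[b] for b in range(m)) for a in range(m)]
    norm = math.sqrt(sum(x * x for x in u))
    v = [x / norm for x in u]
top_eig = sum(v[a] * sum(C[a][b] * v[b] for b in range(m)) for a in range(m))
print(round(top_eig, 3))  # leading variance captured in feature space
```

The statistical question of the talk is precisely how much the eigenspaces of this m × m surrogate can differ from those of exact KPCA.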
