## Seminars and Colloquia Schedule

### Regression of functions on a low-dimensional set by neural networks

**Series:**
**Time:** Monday, October 5, 2020 - 15:30 for 1 hour (actually 50 minutes)
**Location:** Bluejeans meeting https://bluejeans.com/759112674
**Speaker:** Dr. Wenjing Liao, Georgia Tech

Many data sets in image analysis and signal processing lie in a high-dimensional space but exhibit low-dimensional structures. For example, data can be modeled as point clouds in a high-dimensional space that are concentrated on a low-dimensional set (in particular, a manifold). Our goal is to estimate functions on the low-dimensional manifold from finite samples of data, for statistical inference and prediction. This talk introduces approximation theories of neural networks for functions supported on a low-dimensional manifold. When the function is estimated from finite samples, we give an estimate of the mean squared error for the approximation of these functions. The convergence rate depends on the intrinsic dimension of the manifold instead of the ambient dimension of the data. These results demonstrate that neural networks are adaptive to low-dimensional geometric structures of data.
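As a toy illustration of the setting (an editorial sketch, not material from the talk): a point cloud on a one-dimensional manifold, a circle, embedded in $\mathbb{R}^{20}$, with a one-hidden-layer ReLU network fitted by plain full-batch gradient descent in NumPy. All sizes and hyperparameters below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Point cloud on a 1-dimensional manifold (a circle) embedded in R^D:
# the intrinsic dimension is 1 even though the ambient dimension is D.
n, D = 200, 20
theta = rng.uniform(0, 2 * np.pi, n)
E = rng.standard_normal((2, D)) / np.sqrt(2)       # random linear embedding
X = np.column_stack([np.cos(theta), np.sin(theta)]) @ E
y = np.sin(2 * theta)                              # function on the manifold

# One-hidden-layer ReLU network, trained by full-batch gradient descent.
h = 64
W1 = rng.standard_normal((D, h)) * 0.5
b1 = np.zeros(h)
w2 = rng.standard_normal(h) * 0.1
b2 = 0.0
lr = 0.05

def forward(X):
    H = np.maximum(X @ W1 + b1, 0.0)
    return H, H @ w2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)                  # error at initialization

for _ in range(500):
    H, pred = forward(X)
    g = 2.0 * (pred - y) / n                       # d(loss)/d(pred)
    gw2, gb2 = H.T @ g, g.sum()
    gH = np.outer(g, w2) * (H > 0)                 # back through the ReLU
    gW1, gb1 = X.T @ gH, gH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    w2 -= lr * gw2; b2 -= lr * gb2

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)
print(f"mse before {loss0:.3f} after {loss:.3f}")
```

The point of the example is only the data model: the samples live in 20 ambient dimensions, but the regression problem is intrinsically one-dimensional.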

### An enhanced uncertainty principle

**Series:** Analysis Seminar
**Time:** Tuesday, October 6, 2020 - 14:00 for 1 hour (actually 50 minutes)
**Location:** https://us02web.zoom.us/j/71579248210?pwd=d2VPck1CbjltZStURWRWUUgwTFVLZz09
**Speaker:** Joaquim Ortega-Cerdà, University of Barcelona

We improve on some recent results of Sagiv and Steinerberger that quantify the following uncertainty principle: for a function f with mean zero, either the size of the zero set of the function or the cost of transporting the mass of the positive part of f to its negative part must be big. We also provide a sharp upper estimate of the transport cost of the positive part of an eigenfunction of the Laplacian.

This proves a conjecture of Steinerberger and provides a lower bound of the size of a nodal set of the eigenfunction. Finally, we use a similar technique to provide a measure of how well the points in a design in a manifold are equidistributed. This is a joint work with Tom Carroll and Xavier Massaneda.
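A numerical sketch of the one-dimensional case (an editorial illustration, not the talk's argument): on an interval, a standard fact about one-dimensional transport is that the cost of moving the mass of the positive part of a mean-zero f onto its negative part equals the integral of |F|, where F is the antiderivative of f. For f = sin on [0, 2π], F(x) = 1 - cos(x) and the cost is 2π.

```python
import numpy as np

# Transport cost of f_+ onto f_- for a mean-zero f on [a, b]:
# in one dimension this equals the integral of |F|, F(x) = int_a^x f.
x = np.linspace(0, 2 * np.pi, 100001)
f = np.sin(x)
dx = np.diff(x)

# Cumulative trapezoid rule for F, then trapezoid rule for int |F|.
F = np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) / 2 * dx)])
cost = np.sum((np.abs(F[1:]) + np.abs(F[:-1])) / 2 * dx)
print(cost)  # ~ 2*pi, since here F(x) = 1 - cos(x) >= 0
```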

### Inducibility of graphs and tournaments

**Series:** Graph Theory Seminar
**Time:** Tuesday, October 6, 2020 - 15:45 for 1 hour (actually 50 minutes)
**Location:**
**Speaker:**
A classical question in extremal graph theory asks us to maximize the number of induced copies of a given graph or tournament in a large host graph, often expressed as a density. A simple averaging argument shows that the limit of this density exists as the host graph is allowed to grow. Razborov's flag algebra method is well suited to generating bounds on these quantities with the help of semidefinite programming. We will explore this method for a few small examples, and see how to modify it to fit our questions. The extremal graphs show some beautiful structures, sometimes fractal-like, sometimes quasi-random, and sometimes even a combination of both.
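To make "induced density" concrete, here is a brute-force computation of it for small graphs (illustrative only; the flag algebra method is about *bounding* these quantities without enumeration):

```python
import itertools

def induced_density(host_n, host_edges, pat_n, pat_edges):
    """Fraction of pat_n-vertex subsets of the host graph whose induced
    subgraph is isomorphic to the pattern.  Brute force, so only
    suitable for small examples."""
    host = {frozenset(e) for e in host_edges}
    pat = [tuple(e) for e in pat_edges]
    hits = total = 0
    for S in itertools.combinations(range(host_n), pat_n):
        total += 1
        induced = {frozenset(e) for e in itertools.combinations(S, 2)
                   if frozenset(e) in host}
        # Try every bijection pattern -> S and compare edge sets.
        if any({frozenset((S[p[a]], S[p[b]])) for a, b in pat} == induced
               for p in itertools.permutations(range(pat_n))):
            hits += 1
    return hits / total

# Induced density of the path P3 in the 5-cycle C5: exactly the 5
# consecutive triples (out of 10 triples) induce a path.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
p3 = [(0, 1), (1, 2)]
print(induced_density(5, c5, 3, p3))  # 0.5
```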

### Topology of cable knots

**Series:** Geometry Topology Student Seminar
**Time:** Wednesday, October 7, 2020 - 14:00 for 1 hour (actually 50 minutes)
**Location:**
**Speaker:** Hyunki Min, Georgia Tech

Cabling is an important knot operation. We study various properties of cable knots and how to characterize a cable knot by its complement.

### Approximate Kernel Principal Component Analysis: Computational vs. Statistical Trade-off

**Series:** Stochastics Seminar
**Time:** Thursday, October 8, 2020 - 15:30 for 1 hour (actually 50 minutes)
**Location:** https://gatech.webex.com/gatech/j.php?MTID=mdd4512d3d11623149a0bd46d9fc086c8
**Speaker:** Bharath Sriperumbudur, Pennsylvania State University

Kernel principal component analysis (KPCA) is a popular non-linear dimensionality reduction technique, which generalizes classical linear PCA by finding functions in a reproducing kernel Hilbert space (RKHS) such that the function evaluation at a random variable $X$ has maximum variance. Despite its popularity, kernel PCA suffers from poor scalability in big data scenarios as it involves solving an $n \times n$ eigensystem, leading to a computational complexity of $O(n^3)$, with $n$ being the number of samples. To address this issue, in this work we consider a random feature approximation to kernel PCA, which requires solving an $m \times m$ eigenvalue problem and therefore has a computational complexity of $O(m^3 + nm^2)$, implying that the approximate method is computationally efficient if $m < n$, with $m$ being the number of random features. The goal of this work is to investigate the trade-off between computational and statistical behaviors of approximate KPCA, i.e., whether the computational gain is achieved at the cost of statistical efficiency. We show that the approximate KPCA is both computationally and statistically efficient compared to KPCA in terms of the error associated with reconstructing a kernel function based on its projection onto the corresponding eigenspaces.
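A minimal sketch of the random feature idea described above, assuming random Fourier features for the Gaussian kernel (the feature map, sizes, and centering below are illustrative choices by the editor, not necessarily those of the paper): instead of eigendecomposing the $n \times n$ kernel matrix, one eigendecomposes the $m \times m$ covariance of an $m$-dimensional random feature map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples in d dimensions, m random features with m < n.
n, d, m = 500, 10, 100
X = rng.standard_normal((n, d))

# Random Fourier features approximating the Gaussian (RBF) kernel
# k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
sigma = 1.0
W = rng.standard_normal((d, m)) / sigma
b = rng.uniform(0, 2 * np.pi, size=m)
Z = np.sqrt(2.0 / m) * np.cos(X @ W + b)   # n x m feature matrix

# Approximate KPCA: eigendecompose the m x m covariance of the
# centered features instead of the n x n kernel matrix.
Zc = Z - Z.mean(axis=0)
C = Zc.T @ Zc / n                          # m x m; O(n m^2) to form
evals, evecs = np.linalg.eigh(C)           # O(m^3) instead of O(n^3)

# Top-k principal directions in feature space; projected samples.
k = 5
top = evecs[:, ::-1][:, :k]
scores = Zc @ top                          # n x k
print(scores.shape)
```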


### Introduction to Kajiwara-Payne Tropicalization II

**Series:** Student Algebraic Geometry Seminar
**Time:** Friday, October 9, 2020 - 09:00 for 1 hour (actually 50 minutes)
**Location:**
**Speaker:** Trevor Gunn, Georgia Tech

The goal of this talk is to present a summary of Sam Payne's 2009 paper "Analytification is the limit of all tropicalizations" (Math. Res. Lett. 16, no. 3, 543–556). We will introduce Berkovich analytic spaces, tropicalization of projective varieties, and tropicalization of closed subvarieties of toric varieties, as well as the connections between these concepts. We will try to present many examples.

Note: Part I will focus on tropicalization of affine varieties and Berkovich analytic spaces, Part II will focus on tropicalization of toric varieties and discuss Sam Payne's theorem.
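For orientation, here is the basic map in question, with one standard example (background for Payne's paper in the min convention, written out by the editor; not a summary of the talk):

```latex
% Kajiwara--Payne tropicalization of a subvariety X of the torus
% (K^*)^n over a valued field (K, val): the closure of the image of
% the coordinatewise valuation map.
\[
  \operatorname{trop}\colon X(K)\longrightarrow \mathbb{R}^n,
  \qquad
  (x_1,\dots,x_n)\longmapsto
  (\operatorname{val} x_1,\dots,\operatorname{val} x_n).
\]
% Example: for the line X = V(x + y + 1) in (K^*)^2, trop(X) is the
% locus where min(X_1, X_2, 0) is attained at least twice: the three
% rays from the origin in the directions (1,0), (0,1), and (-1,-1).
```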

### Hyperbolic Relaxations of Locally Positive Semidefinite Matrices

**Series:** ACO Student Seminar
**Time:** Friday, October 9, 2020 - 13:00 for 1 hour (actually 50 minutes)
**Location:** https://bluejeans.com/264244877/0166
**Speaker:** Kevin Shu, Math, Georgia Tech

Semidefinite programming is a powerful optimization tool, which involves optimizing linear functions on a slice of the positive semidefinite (PSD) matrices. Locally PSD matrices are a natural relaxation of the PSD matrices which can be useful in reducing the space required for semidefinite optimization. We use the theory of hyperbolic polynomials to give precise quantitative bounds on the quality of the approximation resulting from optimizing over the locally PSD cone instead of the PSD cone.
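Here "locally PSD" is taken to mean that every small principal submatrix is PSD (an assumption for this sketch; the talk's precise definition may differ). A brute-force check, together with a matrix that is 2-locally PSD but not PSD, showing the relaxation is strict:

```python
import numpy as np
from itertools import combinations

def is_psd(M, tol=1e-9):
    """PSD check for a symmetric matrix via its smallest eigenvalue."""
    return bool(np.linalg.eigvalsh(M).min() >= -tol)

def is_locally_psd(A, k, tol=1e-9):
    """True if every k x k principal submatrix of A is PSD."""
    n = A.shape[0]
    return all(is_psd(A[np.ix_(S, S)], tol)
               for S in combinations(range(n), k))

# Every 2 x 2 principal submatrix of 2I - J (ones on the diagonal,
# -1 off the diagonal) is PSD, but the full 3 x 3 matrix has
# eigenvalue -1 on the all-ones vector, so it is not PSD.
A = 2 * np.eye(3) - np.ones((3, 3))
print(is_locally_psd(A, 2), is_psd(A))  # True False
```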

### Discrepancy Minimization via a Self-Balancing Walk

**Series:** Combinatorics Seminar
**Time:** Friday, October 9, 2020 - 15:00 for 1 hour (actually 50 minutes)
**Location:**