TBA by Keller VandeBogert
- Series: Job Candidate Talk
- Time: Tuesday, January 16, 2024, 11:00 (50 minutes)
- Location: Skiles 005 or 006
- Speaker: Keller VandeBogert
This lecture concerns the metric Riemannian geometry of Einstein manifolds, a central theme in modern differential geometry that is deeply connected to a large variety of fundamental problems in algebraic geometry, geometric topology, the analysis of nonlinear PDEs, and mathematical physics. We will exhibit the rich geometric and topological structures of Einstein manifolds, focusing specifically on the structure theory of moduli spaces of Einstein metrics. My recent work centers on intriguing problems regarding the compactification of the moduli space of Einstein metrics, which tells us how Einstein manifolds can degenerate. Such problems constitute the most challenging part of the metric geometry of Einstein manifolds. We will introduce recent major progress in the field. If time permits, I will propose several important open questions.
Please Note: Link to join via Zoom: https://gatech.zoom.us/j/93394018195?pwd=MGJZaWIwQUhVYW9ZZDFoWWFOc29EZz09 Meeting ID: 933 9401 8195 Passcode: SoM
We define and motivate the Poisson point process, which is, informally, a “maximally random” scattering of points in some locally compact, second countable space. We introduce the ideal Poisson--Voronoi tessellation (IPVT), a new random object with intriguing geometric properties when considered on a semisimple symmetric space (the hyperbolic plane, for example). In joint work with Mikolaj Fraczyk and Sam Mellick, we use the IPVT to prove that the minimal number of generators of a torsion-free lattice in a higher-rank semisimple Lie group is sublinear in the co-volume of the lattice. We give some intuition for the proof. No prior knowledge of Poisson point processes or symmetric spaces will be assumed.
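For concreteness, a homogeneous Poisson point process on a bounded window can be sampled in two steps: draw a Poisson-distributed number of points, then place them i.i.d. uniformly. The intensity and window sizes below are arbitrary illustrative choices; this is a minimal sketch, not material from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_point_process(intensity, width, height, rng):
    """Sample a homogeneous Poisson point process on [0, width] x [0, height].

    The number of points is Poisson(intensity * area); conditioned on that
    count, the points are i.i.d. uniform -- the "maximal randomness" of the
    abstract: disjoint regions receive independent counts.
    """
    n = rng.poisson(intensity * width * height)
    xs = rng.uniform(0.0, width, n)
    ys = rng.uniform(0.0, height, n)
    return np.column_stack([xs, ys])

pts = poisson_point_process(intensity=50, width=2.0, height=1.0, rng=rng)
# Expected number of points: 50 * 2.0 * 1.0 = 100.
```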
How many critical points does a random function from R^N to R have for large N? Such functions appear naturally in probability, data science, and mathematical physics. Questions like this one, which have attracted longstanding interest from both physicists and mathematicians, can help explain both physical phase transitions and algorithmic thresholds. I will give an overview of this "landscape complexity" program, its motivations, and recent progress coming from random matrices.
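One way random matrices enter this program: via the Kac--Rice formula, expected counts of critical points reduce to spectral statistics of the random Hessian, which in many Gaussian landscape models behaves like a GOE matrix. A minimal numerical sketch of the relevant random-matrix fact, Wigner's semicircle law (the matrix size is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
G = rng.standard_normal((n, n))
H = (G + G.T) / np.sqrt(2.0)            # GOE: real symmetric Gaussian matrix
eigs = np.linalg.eigvalsh(H) / np.sqrt(n)
# Wigner's semicircle law: the normalized spectrum concentrates on [-2, 2],
# with empirical density approaching sqrt(4 - x^2) / (2*pi) as n grows.
```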
Please Note: https://gatech.zoom.us/j/94328087718
How complicated can successive manifolds get in a tower of covering spaces? Specifically, how large can the dimension of the first cohomology get? We will begin with a tour of possible behaviors for low-dimensional spaces, and then focus on arithmetic manifolds. In particular, for towers of complex-hyperbolic manifolds, I will describe how to bound the rates of growth using known instances of Langlands functoriality.
Zoom link: https://gatech.zoom.us/j/94868589860
The Teichmüller space parametrizes Riemann surfaces of fixed topological type and is fundamental in various contexts of mathematics and physics. It can be defined as a component of the moduli space of flat G = PSL(2,R) connections on the surface. Higher Teichmüller spaces extend this notion to appropriate higher-rank classical Lie groups G. Other generalizations are given by the super-Teichmüller spaces, describing Riemann surfaces enhanced by odd or anti-commuting coordinates (known as super Riemann surfaces). The super-Teichmüller spaces arise naturally as higher Teichmüller spaces corresponding to supergroups, which play an important role in geometric topology, algebraic geometry, and mathematical physics, where the anti-commuting variables correspond to fermions.
After introducing these spaces, I will explain the solution to the long-standing problem of describing the counterpart of Penner coordinates on the super-Teichmüller space and its higher analogues. The importance of these coordinates is justified by two remarkable properties: the action of the mapping class group is rational, and the Weil--Petersson form is given by a simple explicit formula. From the algebraic and combinatorial perspectives, their transformations lead to an important generalization of cluster algebras.
Finally, I will discuss some recent applications of this construction.
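For orientation, in the classical (bosonic) case Penner's λ-lengths transform under a flip of the diagonal of an ideal quadrilateral with sides $a, b, c, d$ and diagonals $e, e'$ by the Ptolemy relation:

```latex
\lambda_e \, \lambda_{e'} \;=\; \lambda_a \lambda_c + \lambda_b \lambda_d .
```

This subtraction-free exchange rule is the prototype of a cluster-algebra mutation; the super-Teichmüller coordinates described in the talk generalize it.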
In his seminal 1991 paper, Thomas M. Cover introduced a simple and elegant mathematical model for trading on the stock market. This model, which later came to be known as online portfolio selection (OPS), is specified with only two integer parameters: the number of assets $d$ and the time horizon $T$. In each round $t \in \{1, ..., T\}$, the trader selects a portfolio, i.e., a distribution $p_t \in R^d_+$ of the current capital over the set of $d$ assets; after this, the adversary generates a nonnegative vector $r_t \in R^d_+$ of returns (relative prices of assets), and the trader's capital is multiplied by the "aggregated return" $\langle p_{t}, r_{t} \rangle$. Despite its apparent simplicity, this model captures the two key properties of the stock market: (i) it "plays against" the trader; (ii) money accumulates multiplicatively. In the 30 years that followed, the OPS model has received a great deal of attention from the learning theory, information theory, and quantitative finance communities.
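The round structure of the model can be sketched in a few lines. The uniform portfolio and the uniformly random returns below are illustrative stand-ins only; in the model itself the adversary chooses $r_t$ and the trader is free to choose $p_t$ adaptively.

```python
import numpy as np

rng = np.random.default_rng(2)
d, T = 3, 1000
wealth = 1.0
p = np.full(d, 1.0 / d)              # toy strategy: fixed uniform portfolio
for t in range(T):
    r = rng.uniform(0.5, 1.5, d)     # stand-in returns; the model is adversarial
    wealth *= p @ r                  # capital compounds multiplicatively
```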
In the same paper, Cover also proposed an algorithm, termed Universal Portfolios, that admits a strong performance guarantee: regret of $O(d \log T)$ against the best portfolio in hindsight, without any restrictions on returns or portfolios. This guarantee was later shown to be worst-case optimal, and no other algorithm attaining it has been found to date. Unfortunately, exact computation of a universal portfolio amounts to averaging over a log-concave distribution, which is a challenging task. Addressing this, Kalai and Vempala (2002) achieved a running time of $O(d^4 T^{14})$ per round via log-concave sampling techniques. However, with such a running time essentially prohibiting all but "toy" problems, yet remaining the state of the art, the problem of finding an optimal and practical OPS algorithm was left open.
In this talk, after discussing some of the arising challenges, I shall present a fast and optimal OPS algorithm proposed in a recent work with R. Jezequel and P. Gaillard (arXiv:2209.13932). Our algorithm combines regret optimality with a runtime of $O(d^2 T)$, thus dramatically improving on the state of the art. As we shall see, the motivation and analysis of the proposed algorithm are closely related to establishing a sharp bound on the accuracy of the Laplace approximation for a log-concave distribution with polyhedral support, a result of independent interest.
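For reference, the classical Laplace approximation on $\mathbb{R}^d$ (stated here in its textbook form; the result mentioned above is a sharp quantitative version for distributions with polyhedral support) reads

```latex
\int_{\mathbb{R}^d} e^{-T f(x)} \, dx
\;\approx\;
e^{-T f(x^\star)} \left( \frac{2\pi}{T} \right)^{d/2}
\det\!\big( \nabla^2 f(x^\star) \big)^{-1/2},
```

where $x^\star$ is the minimizer of the convex potential $f$.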
Zoom link to the talk: https://gatech.zoom.us/j/98280978183
Please Note: The speaker will present in person; the talk will also be livestreamed (but not recorded) at https://gatech.zoom.us/j/98355006347
Modern neural networks are usually over-parameterized: the number of parameters exceeds the number of training data points. In this case the loss function tends to have many (or even infinitely many) global minima, which poses, beyond convergence, the challenge of minima selection for optimization algorithms. Specifically, when training a neural network, the algorithm not only has to find a global minimum, but also needs to select, among many others, minima that generalize well. We study the mechanisms that facilitate global minima selection by optimization algorithms, as well as their connection with good generalization performance. First, using a linear stability theory, we show that stochastic gradient descent (SGD) favors global minima with flat and uniform landscapes. Then, we build a theoretical connection between flatness and generalization performance based on a special multiplicative structure of neural networks. Combining the two results, we develop generalization bounds for neural networks trained by SGD; our bounds take the optimization process into consideration. Furthermore, we study the behavior of optimization algorithms around the manifold of minima and reveal how algorithms explore from one minimum to another.
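The flavor of the linear stability mechanism can be seen already for deterministic gradient descent on a one-dimensional quadratic (a toy sketch, not the talk's SGD analysis): a minimum with curvature $a$ survives step size $\eta$ only if $|1 - \eta a| \le 1$, i.e. $a \le 2/\eta$, so sharper minima are dynamically rejected at larger step sizes.

```python
# Gradient descent on L(x) = a * x^2 / 2 has the update x <- (1 - eta * a) * x,
# so the minimum at 0 is linearly stable iff |1 - eta * a| <= 1.
def gd_converges(curvature, eta, steps=100, x0=1.0):
    x = x0
    for _ in range(steps):
        x *= (1.0 - eta * curvature)
    return abs(x) < abs(x0)

flat_ok = gd_converges(curvature=1.0, eta=0.5)    # 0.5 * 1 <= 2: stable
sharp_ok = gd_converges(curvature=5.0, eta=0.5)   # 0.5 * 5 > 2: diverges
```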
The theory of graph quasirandomness studies graphs that "look like" samples of the Erdős--Rényi random graph $G_{n,p}$. The upshot of the theory is that several ways of comparing a sequence with the random graph turn out to be equivalent. For example, quasirandom graph sequences can be characterized equivalently as those that are uniquely colorable or uniquely orderable, that is, all colorings (respectively, orderings) of the graphs "look approximately the same". Since its inception, the theory of quasirandomness has been generalized in an ad hoc way to several different combinatorial objects, such as digraphs, tournaments, hypergraphs, and permutations.
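One classical equivalent condition in the Chung--Graham--Wilson framework is a counting one: a graph sequence is quasirandom at density $p$ iff its edge density tends to $p$ and its $C_4$ homomorphism density tends to $p^4$. A quick numerical check that $G(n,p)$ itself satisfies this (the sizes are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 0.5
# Sample a symmetric 0/1 adjacency matrix of G(n, p) with no loops.
A = np.triu(rng.random((n, n)) < p, 1).astype(float)
A = A + A.T

edge_density = A.sum() / (n * (n - 1))
# trace(A^4) counts closed 4-walks, i.e. homomorphisms from C_4.
c4_hom_density = np.trace(np.linalg.matrix_power(A, 4)) / n**4
# Quasirandomness at p = 0.5 predicts c4_hom_density close to p^4 = 0.0625.
```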
The theory of graph quasirandomness was also one of the main motivations for the development of the theory of limits of graph sequences, graphons. As with quasirandomness, generalizations of graphons were obtained in an ad hoc way for several combinatorial objects. Unlike quasirandomness, however, for the theory of limits of combinatorial objects (continuous combinatorics), the theories of flag algebras and theons provide limits of arbitrary combinatorial objects in a uniform and general framework.
In this talk, I will present the theory of natural quasirandomness, which provides a uniform and general treatment of quasirandomness in the same setting as continuous combinatorics. The talk will focus on the first main result of natural quasirandomness: the equivalence of unique colorability and unique orderability for arbitrary combinatorial objects. Although the theory heavily uses the language and techniques of continuous combinatorics from both flag algebras and theons, no familiarity with the topic is required, as I will briefly cover all necessary definitions and theorems.
This talk is based on joint work with Alexander A. Razborov.