Seminars and Colloquia by Series

The most likely evolution of diffusing and vanishing particles: Schrodinger Bridges with unbalanced marginals

Series
PDE Seminar
Time
Tuesday, November 21, 2023 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Yongxin Chen, Georgia Tech

Stochastic flows of an advective-diffusive nature are ubiquitous in biology and the physical sciences. Of particular interest is the problem of reconciling observed marginal distributions with a given prior, posed by E. Schrödinger in 1931/32 and known as the Schrödinger Bridge Problem (SBP). It turns out that Schrödinger’s problem can be viewed as a problem in large deviations, as a modeling problem, and as a control problem. Due to the fundamental significance of this problem, interest in SBP and in its deterministic (zero-noise limit) counterpart of Optimal Transport (OT) has in recent years drawn in a broad spectrum of disciplines, including physics, stochastic control, computer science, and geometry. Yet, while the mathematics and applications of SBP/OT have been developing at a considerable pace, accounting for marginals of unequal mass has received scant attention; the problem of interpolating between “unbalanced” marginals has been approached by introducing source/sink terms into the transport equations in an ad hoc manner, chiefly driven by applications in image registration. Nevertheless, losses are inherent in many physical processes and, thereby, models that account for lossy transport may also need to be reconciled with observed marginals following Schrödinger’s dictum; that is, to adjust the probability of trajectories of particles, including those that do not make it to the terminal observation point, so that the updated law represents the most likely way that particles may have been transported, or vanished, at some intermediate point. Thus, the purpose of this talk is to present recent results on stochastic evolutions with losses, whereupon particles are “killed” (jump into a coffin/extinction state) according to a probabilistic law, and thereby mass is gradually lost along their stochastically driven flow. Through a suitable embedding we turn the problem into an SBP for stochastic processes that combine diffusive and jump characteristics.
Then, following a large-deviations formalism in the style of Schrödinger, given a prior law that allows for losses, we explore the most probable evolution of particles along with the most likely killing rate as the particles transition between the specified marginals. Our approach differs sharply from previous work involving a Feynman-Kac multiplicative reweighting of the reference measure which, as we argue, is far from Schrödinger’s quest. We develop a suitable Schrödinger system of coupled PDEs for this problem, an iterative Fortet-IPF-Sinkhorn algorithm for computations, and finally formulate and solve a related fluid-dynamic control problem for the flow of one-time marginals where both the drift and the new killing rate play the role of control variables. Joint work with Tryphon Georgiou and Michele Pavon.
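For context on the computational side, the classical balanced version of the Fortet-IPF-Sinkhorn iteration mentioned in the abstract can be sketched on discrete marginals as alternating rescalings of a reference kernel. The function name and toy data below are illustrative only; the talk's unbalanced setting with killing requires the embedding described above.

```python
import numpy as np

def sinkhorn_bridge(a, b, C, eps=0.5, n_iter=500):
    """Alternate projections onto the two marginal constraints (Fortet/IPF/Sinkhorn)."""
    K = np.exp(-C / eps)          # reference kernel (discrete analogue of the prior law)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)           # enforce the initial marginal
        v = b / (K.T @ u)         # enforce the terminal marginal
    return u[:, None] * K * v[None, :]   # coupling matching both marginals

a = np.array([0.2, 0.3, 0.5])     # initial marginal
b = np.array([0.4, 0.4, 0.2])     # terminal marginal (balanced: equal total mass)
C = np.array([[0., 1., 2.], [1., 0., 1.], [2., 1., 0.]])  # transport cost
P = sinkhorn_bridge(a, b, C)
```

Each sweep is a projection onto one marginal constraint; the iteration converges to the coupling that is closest (in relative entropy) to the reference kernel, which is Schrödinger's static problem in the discrete balanced case.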

Machine learning, optimization, & sampling through a geometric lens

Series
School of Mathematics Colloquium
Time
Monday, November 20, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Suvrit Sra, MIT & TU Munich

Please Note: Joint {School of Math Colloquium} and {Applied & Computational Math Seminar}. Note: *special time*. Speaker will present in person.

Geometry arises in myriad ways within machine learning and related areas. In this talk I will focus on settings where geometry helps us understand problems in machine learning, optimization, and sampling. For instance, when sampling from densities supported on a manifold, understanding geometry and the impact of curvature is crucial; surprisingly, progress on geometric sampling theory helps us understand certain generalization properties of SGD for deep learning! Another fascinating viewpoint afforded by geometry is in non-convex optimization: geometry can help us make training algorithms more practical (e.g., in deep learning), it can reveal tractability despite non-convexity (e.g., via geodesically convex optimization), or it can simply help us understand existing methods better (e.g., SGD, eigenvector computation, etc.).

Ultimately, I hope to offer the audience some insights into geometric thinking and share with them some new tools that help us design, understand, and analyze models and algorithms. To make the discussion concrete I will recall a few foundational results arising from our research, provide several examples, and note some open problems.

––
Bio: Suvrit Sra is an Alexander von Humboldt Professor of Artificial Intelligence at the Technical University of Munich (Germany) and an Associate Professor of EECS at MIT (USA), where he is also a member of the Laboratory for Information and Decision Systems (LIDS) and of the Institute for Data, Systems, and Society (IDSS). He obtained his PhD in Computer Science from the University of Texas at Austin. Before TUM & MIT, he was a Senior Research Scientist at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. He held visiting positions at UC Berkeley (EECS) and Carnegie Mellon University (Machine Learning Department) during 2013-2014. His research bridges mathematical topics such as differential geometry, matrix analysis, convex analysis, probability theory, and optimization with machine learning. He founded the OPT (Optimization for Machine Learning) series of workshops, held from OPT2008 through OPT2017 at the NeurIPS conference, and has co-edited a book of the same name (MIT Press, 2011). He is also a co-founder and chief scientist of Pendulum, a global AI+logistics startup.

 

Machine learning, optimization, & sampling through a geometric lens

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 20, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Suvrit Sra, MIT & TU Munich

Please Note: Joint {Applied & Computational Math Seminar} and {School of Math Colloquium}. Speaker will present in person.

Geometry arises in myriad ways within machine learning and related areas. In this talk I will focus on settings where geometry helps us understand problems in machine learning, optimization, and sampling. For instance, when sampling from densities supported on a manifold, understanding geometry and the impact of curvature is crucial; surprisingly, progress on geometric sampling theory helps us understand certain generalization properties of SGD for deep learning! Another fascinating viewpoint afforded by geometry is in non-convex optimization: geometry can help us make training algorithms more practical (e.g., in deep learning), it can reveal tractability despite non-convexity (e.g., via geodesically convex optimization), or it can simply help us understand existing methods better (e.g., SGD, eigenvector computation, etc.).

Ultimately, I hope to offer the audience some insights into geometric thinking and share with them some new tools that help us design, understand, and analyze models and algorithms. To make the discussion concrete I will recall a few foundational results arising from our research, provide several examples, and note some open problems.

––
Bio: Suvrit Sra is an Alexander von Humboldt Professor of Artificial Intelligence at the Technical University of Munich (Germany) and an Associate Professor of EECS at MIT (USA), where he is also a member of the Laboratory for Information and Decision Systems (LIDS) and of the Institute for Data, Systems, and Society (IDSS). He obtained his PhD in Computer Science from the University of Texas at Austin. Before TUM & MIT, he was a Senior Research Scientist at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. He held visiting positions at UC Berkeley (EECS) and Carnegie Mellon University (Machine Learning Department) during 2013-2014. His research bridges mathematical topics such as differential geometry, matrix analysis, convex analysis, probability theory, and optimization with machine learning. He founded the OPT (Optimization for Machine Learning) series of workshops, held from OPT2008 through OPT2017 at the NeurIPS conference, and has co-edited a book of the same name (MIT Press, 2011). He is also a co-founder and chief scientist of Pendulum, a global AI+logistics startup.

 

Geometry and the complexity of matrix multiplication

Series
Algebra Seminar
Time
Monday, November 20, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Austin Conner, Harvard University

Please Note: There will be a pre-seminar (aimed toward grad students and postdocs) from 11 am to 11:30 am in Skiles 006.

Determining the computational complexity of matrix multiplication has been one of the central open problems in theoretical computer science ever since Strassen presented, in 1969, an algorithm for multiplying n by n matrices requiring only O(n^2.81) arithmetic operations. The data describing this method is equivalently an expression of the structure tensor of the 2 by 2 matrix algebra as a sum of 7 decomposable tensors. Any such decomposition of an n by n matrix algebra yields a Strassen-type algorithm, and Strassen showed that such algorithms are general enough to determine the exponent of matrix multiplication. Bini later showed that all of the above remains true when we allow the decomposition to depend on a parameter and take limits.
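As a concrete illustration, Strassen's decomposition of the 2 by 2 product into 7 multiplications can be written out directly (a standard rendering of the 1969 identity, not code from the talk):

```python
import numpy as np

def strassen_2x2(A, B):
    # Strassen's 7 products (m1..m7) replace the naive 8 multiplications.
    a11, a12, a21, a22 = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    b11, b12, b21, b22 = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine the 7 products into the 4 entries of A @ B.
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])
```

Since the identities use no commutativity, they apply to 2 by 2 blocks; applied recursively, they give the O(n^{log2 7}) = O(n^2.81) bound.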

I present a recent technique for proving lower bounds for this decomposition problem: border apolarity. Two key ideas of this technique are (i) to consider not just the sequence of decompositions, but the sequence of ideals of the point sets determining the decompositions, and (ii) to exploit the symmetry of the matrix multiplication tensor to insist that the limiting ideal has an extremely restrictive structure. I discuss applications to the matrix multiplication tensor and to other tensors potentially useful for obtaining upper bounds via Strassen's laser method. This talk discusses joint work with JM Landsberg, Alicia Harper, and Amy Huang.

A Polynomial Method for Counting Colorings of $S$-labeled Graphs

Series
Combinatorics Seminar
Time
Friday, November 17, 2023 - 15:15 for 1 hour (actually 50 minutes)
Location
Skiles 308
Speaker
Hemanshu Kaul, Illinois Institute of Technology

The notion of $S$-labeling, where $S$ is a subset of the symmetric group, is a common generalization of signed $k$-coloring, signed $\mathbb{Z}_k$-coloring, DP (or correspondence) coloring, group coloring, and coloring of gain graphs; it was introduced in 2019 by Jin, Wong, and Zhu. In this talk we use a well-known theorem of Alon and F\"{u}redi to present an algebraic technique for bounding the number of colorings of an $S$-labeled graph from below. While applicable in the broad context of counting colorings of $S$-labeled graphs, we will focus on the case where $S$ is a symmetric group, which corresponds to DP-coloring (or correspondence coloring) of graphs, and the case where $S$ is a set of linear permutations, which is applicable to the coloring of signed graphs, among others.

 

This technique allows us to prove exponential lower bounds on the number of colorings of any $S$-labeling of graphs that satisfy certain sparsity conditions. We apply these to give exponential lower bounds on the number of DP-colorings (and consequently, the number of list colorings, or usual colorings) of families of planar graphs, and on the number of colorings of families of signed (planar) graphs. These lower bounds either improve previously known results or are the first known such results.
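As a toy illustration of the kind of exponential count in question, one can brute-force the special case of ordinary proper colorings (the $S$-labeling where every permutation is the identity); this example is mine, not from the talk:

```python
from itertools import product

def count_proper_colorings(n, edges, k):
    # Brute-force count of proper k-colorings of a graph on vertices 0..n-1:
    # assignments in which no edge has both endpoints the same color.
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

# For the cycle C_n the count is (k-1)^n + (-1)^n (k-1),
# which grows exponentially in n for every fixed k >= 3.
cycle5 = [(i, (i + 1) % 5) for i in range(5)]
```

The lower bounds in the talk guarantee growth of this exponential type for DP-colorings of much richer (e.g., planar) families, where brute force is hopeless.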

This is joint work with Samantha Dahlberg and Jeffrey Mudrock.

Controlled SPDEs: Peng’s Maximum Principle and Numerical Methods

Series
SIAM Student Seminar
Time
Friday, November 17, 2023 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Lukas Wessels, Georgia Tech

In this talk, we consider a finite-horizon optimal control problem for stochastic reaction-diffusion equations. First, we apply the spike variation method, which relies on introducing the first- and second-order adjoint states. We give a novel characterization of the second-order adjoint state as the solution to a backward SPDE. Using this representation, we prove the maximum principle for controlled SPDEs.

In the second part, we present a numerical algorithm that allows the efficient approximation of optimal controls in the case of stochastic reaction-diffusion equations with additive noise by first reducing the problem to controls of feedback form and then approximating the feedback function using finitely based approximations. Numerical experiments using artificial neural networks as well as radial basis function networks illustrate the performance of our algorithm.
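A minimal one-dimensional sketch of the last step, fitting a function from samples with a radial basis function expansion via least squares; the names and data are illustrative only, not the authors' implementation (in their setting the sampled values would come from a feedback control law):

```python
import numpy as np

def fit_rbf(x, y, centers, width):
    # Least-squares fit of y ~ sum_j w_j * exp(-(x - c_j)^2 / (2 * width^2)),
    # a "finitely based" approximation of an unknown function from samples.
    def features(xq):
        return np.exp(-((xq[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(features(x), y, rcond=None)
    return lambda xq: features(xq) @ w   # the fitted approximation

# Toy usage: recover a smooth function from 50 samples with 12 basis functions.
x = np.linspace(0.0, np.pi, 50)
approx = fit_rbf(x, np.sin(x), centers=np.linspace(0.0, np.pi, 12), width=0.6)
```

Replacing the linear solve with gradient-based training of a neural network gives the other family of approximations mentioned in the abstract.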

This talk is based on joint work with Wilhelm Stannat and Alexander Vogler. Talk will also be streamed: https://gatech.zoom.us/j/93808617657?pwd=ME44NWUxbk1NRkhUMzRsK3c0ZGtvQT09

Algebra from Projective Geometry

Series
Algebra Student Seminar
Time
Friday, November 17, 2023 - 10:00 for
Location
Speaker
Griffin Edwards, Georgia Tech

Join us as we define a whole new algebraic structure, starting from the axioms of the projective plane. This seminar will be aimed at students who have never seen this material and will focus on hands-on constructions of classic (and new!) algebraic structures that can arise from a projective plane. The goal of this seminar is to expose you to Desargues's theorem and hopefully even examine non-Desarguesian planes.

On the curved trilinear Hilbert transform

Series
Analysis Seminar
Time
Wednesday, November 15, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Bingyang Hu, Auburn University

The goal of this talk is to discuss the $L^p$ boundedness of the trilinear Hilbert transform along the moment curve. More precisely, we show that the operator

$$
H_C(f_1, f_2, f_3)(x) := \mathrm{p.v.} \int_{\mathbb{R}} f_1(x-t) f_2(x+t^2) f_3(x+t^3) \, \frac{dt}{t}, \quad x \in \mathbb{R},
$$

is bounded from $L^{p_1}(\mathbb{R}) \times L^{p_2}(\mathbb{R}) \times L^{p_3}(\mathbb{R})$ into $L^r(\mathbb{R})$ within the Banach H\"older range $\frac{1}{p_1} + \frac{1}{p_2} + \frac{1}{p_3} = \frac{1}{r}$ with $1 < p_1, p_2, p_3 < \infty$ and $1 \le r < \infty$.

 

The main difficulty in approaching this problem (compared to the classical approach to the bilinear Hilbert transform) is the lack of absolute summability after we apply the time-frequency discretization (known as the LGC methodology, introduced by V. Lie in 2019). To overcome this difficulty, we develop a new, versatile approach -- referred to as Rank II LGC (also motivated by the study of the non-resonant bilinear Hilbert-Carleson operator by C. Benea, F. Bernicot, V. Lie, and V. Vitturi in 2022) -- whose control is achieved via the following interdependent elements:

1) a sparse-uniform decomposition of the input functions adapted to an appropriate time-frequency foliation of the phase-space;

2) a structural analysis of suitable maximal "joint Fourier coefficients";

3) a level set analysis with respect to the time-frequency correlation set.

 

This is joint work with my postdoc advisor Victor Lie from Purdue.

 

"No (Con)way!"

Series
Geometry Topology Student Seminar
Time
Wednesday, November 15, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Daniel Hwang, Georgia Tech

This talk is a summary of a summary. We will go over Jen Hom's 2024 Levi L. Conant Prize-winning article "Getting a handle on the Conway knot," which discusses Lisa Piccirillo's renowned 2020 paper proving that the Conway knot is not slice. In this presentation, we will go over what it means for a knot to be slice, past attempts to classify the Conway knot with knot invariants, and Piccirillo's approach of constructing a knot with the same knot trace as the Conway knot. This talk is designed for all audiences, and NO prior knowledge of topology or knot theory is required. Trust me, I'm (k)not a topologist.

Onsager's conjecture in 2D

Series
PDE Seminar
Time
Tuesday, November 14, 2023 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Razvan-Octavian Radu, Princeton University

I will begin by describing the ideas involved in the Nash iterative constructions of solutions to the Euler equations. These were introduced by De Lellis and Szekelyhidi (and developed by many authors) in order to tackle the flexible side of the Onsager conjecture. Then, I will describe Isett’s proof of the conjecture in the 3D case, and highlight the simple reason for which the strategy will not work in 2D. Finally, I will describe a construction of non-conservative solutions that works also in 2D (this is joint work with Vikram Giri).
