This Week's Seminars and Colloquia

Central Curve in Semidefinite Programming

Series
Algebra Seminar
Time
Monday, February 6, 2023 - 10:20 for 1.5 hours (actually 80 minutes)
Location
Skiles 005
Speaker
Isabelle Shankar, Portland State University

The Zariski closure of the central path (which interior point algorithms track in convex optimization problems such as linear and semidefinite programs) is an algebraic curve, called the central curve. Its degree has been studied in relation to the complexity of these interior point algorithms.  We show that the degree of the central curve for generic semidefinite programs is equal to the maximum likelihood degree of linear concentration models.  This is joint work with Serkan Hoşten and Angélica Torres.

 

The profinite topology on a group

Series
Geometry Topology Seminar Pre-talk
Time
Monday, February 6, 2023 - 12:45 for 1 hour (actually 50 minutes)
Location
Speaker
Tam Cheetham-West, Rice University

The finite-index subgroups of a finitely presented group generate a topology on the group. We will discuss, using examples, how this relates to the organization of a group's finite quotients, and introduce the ideas of profinite rigidity and flexibility.

Implicit bias of optimization algorithms and generalization of over-parameterized neural networks

Series
Job Candidate Talk
Time
Monday, February 6, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005, and https://gatech.zoom.us/j/98355006347
Speaker
Chao Ma, Stanford University

The speaker will be in person; the talk will also be livestreamed (but not recorded) at https://gatech.zoom.us/j/98355006347

Modern neural networks are usually over-parameterized: the number of parameters exceeds the number of training data. In this case the loss function tends to have many (or even infinitely many) global minima, which poses, beyond convergence, the challenge of minima selection for optimization algorithms. Specifically, when training a neural network, the algorithm must not only find a global minimum, but also select, among many others, minima with good generalization. We study the mechanisms by which optimization algorithms select global minima, as well as the connection of this selection with good generalization performance. First, with a linear stability theory, we show that stochastic gradient descent (SGD) favors global minima with a flat and uniform landscape. Then, we build a theoretical connection between flatness and generalization performance based on a special multiplicative structure of neural networks. Combining the two results, we develop generalization bounds for neural networks trained by SGD; our bounds take the optimization process into account. Furthermore, we study the behavior of optimization algorithms around the manifold of minima and reveal how algorithms explore from one minimum to another.
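As a toy illustration of implicit bias (not from the talk, and far simpler than the neural-network setting it studies): for under-determined linear least squares, gradient descent initialized at zero picks out, among the infinitely many global minima, the minimum-norm interpolant. A numpy sketch, with all parameter choices ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100          # fewer samples than parameters: over-parameterized
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Gradient descent on the least-squares loss, initialized at zero.
w = np.zeros(d)
lr = 1.0 / np.linalg.norm(X, 2) ** 2   # step size 1/L, L = spectral norm squared
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y)

# Among all interpolants X w = y, the minimum-norm one is pinv(X) @ y;
# GD from zero stays in the row space of X and converges to exactly this point.
w_min_norm = np.linalg.pinv(X) @ y
```

The selection happens with no explicit regularization: it is a property of the algorithm and its initialization, which is the sense of "implicit bias" in the title.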

Distinguishing hyperbolic knots using finite quotients

Series
Geometry Topology Seminar
Time
Monday, February 6, 2023 - 14:00 for 1 hour (actually 50 minutes)
Location
Speaker
Tam Cheetham-West, Rice University

The fundamental groups of knot complements have many finite quotients. We give a criterion for a hyperbolic knot in the three-sphere to be distinguished (up to isotopy and mirroring) from every other knot in the three-sphere by the set of finite quotients of its fundamental group, and we use this criterion, as well as recent work of Baldwin-Sivek, to show that there are infinitely many hyperbolic knots distinguished (up to isotopy and mirroring) by finite quotients.

Global Existence and Long Time Behavior in the 1+1 dimensional Principal Chiral Model with Applications to Solitons

Series
PDE Seminar
Time
Tuesday, February 7, 2023 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Jessica Trespalacios Julio, Universidad de Chile

We consider the 1+1 dimensional vector-valued Principal Chiral Field model (PCF), obtained as a simplification of the vacuum Einstein field equations under the Belinski-Zakharov symmetry. PCF is an integrable model, but a rigorous description of its evolution is far from complete. Here we provide the existence of local solutions in a suitably chosen energy space, as well as of small global smooth solutions under a certain nondegeneracy condition. We also construct virial functionals which provide a clear description of the decay of smooth global solutions inside the light cone. Finally, some applications are presented in the case of PCF solitons, a first step towards the study of their nonlinear stability.

Synchronization and averaging in a simple dynamical system with fast/slow variables

Series
Math Physics Seminar
Time
Thursday, February 9, 2023 - 12:00 for 1 hour (actually 50 minutes)
Location
Skiles Room 005
Speaker
Federico Bonetto, School of Mathematics, Georgia Tech

We study a family of dynamical systems obtained by coupling a chaotic (Anosov) map on the two-dimensional torus -- the chaotic variable -- with the identity map on the one-dimensional torus -- the neutral variable -- through a dissipative interaction. We show that the two systems synchronize, in the sense that the trajectories evolve toward an attracting invariant manifold, and that the full dynamics is conjugate to its linearization around the invariant manifold. When the interaction is small, the evolution of the neutral variable is very close to the identity, and hence the neutral variable appears as a slow variable with respect to the fast chaotic variable: we show that, seen on a suitably long time scale, the slow variable effectively follows the solution of a deterministic differential equation obtained by averaging over the fast variable.
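A minimal numerical sketch of the averaging statement. For reliability of illustration we replace the Anosov map by an irrational rotation, which is not chaotic but has the one property the sketch needs: time averages of the fast variable converge to its space average (here 1/2). All constants are ours:

```python
import math

alpha = (math.sqrt(5) - 1) / 2   # irrational rotation number (golden mean)

eps = 1e-3       # coupling strength, setting the time-scale separation
N = 10_000       # fast steps; total slow time is T = eps * N
x, y = 0.1, 0.0  # fast variable x on the circle, slow (neutral) variable y

for _ in range(N):
    y += eps * x                 # slow variable driven by the fast one
    x = (x + alpha) % 1.0        # fast, equidistributing dynamics

# Averaged equation: dy/dt = (space average of x) = 1/2, so y(T) ~ y(0) + T/2.
T = eps * N
y_averaged = 0.0 + T * 0.5
```

The point of the comparison is that the slow variable tracks the solution of the averaged ODE up to an error that vanishes as the time-scale separation grows.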

Effective deep neural network architectures for learning high-dimensional Banach-valued functions from limited data

Series
Applied and Computational Mathematics Seminar
Time
Friday, February 10, 2023 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 006 and https://gatech.zoom.us/j/98355006347
Speaker
Nick Dexter, Florida State University

In the past few decades the problem of reconstructing high-dimensional functions taking values in abstract spaces from limited samples has received increasing attention, largely due to its relevance to uncertainty quantification (UQ) for computational science and engineering. These UQ problems are often posed in terms of parameterized partial differential equations whose solutions take values in Hilbert or Banach spaces. Impressive results have been achieved on such problems with deep learning (DL), i.e. machine learning with deep neural networks (DNN). This work focuses on approximating high-dimensional smooth functions taking values in reflexive and typically infinite-dimensional Banach spaces. Our novel approach to this problem is fully algorithmic, combining DL, compressed sensing, orthogonal polynomials, and finite element discretization. We present a full theoretical analysis for DNN approximation with explicit guarantees on the error and sample complexity, and a clear accounting of all sources of error. We also provide numerical experiments demonstrating the efficiency of DL at approximating such high-dimensional functions from limited data in UQ applications.
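As a much-simplified, scalar-valued stand-in for the problem class (this is plain least-squares polynomial fitting, not the authors' DL/compressed-sensing method): a smooth function can be recovered accurately from a limited number of random samples when approximated in a well-chosen polynomial basis. All choices below are ours:

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda t: np.exp(-t) * np.sin(3 * t)   # smooth target function on [-1, 1]

m, degree = 50, 10                  # limited random samples, small basis
t_train = rng.uniform(-1, 1, m)

# Least-squares fit in the Chebyshev basis from the m random samples.
coeffs = np.polynomial.chebyshev.chebfit(t_train, f(t_train), degree)

# Uniform error on a fine test grid.
t_test = np.linspace(-1, 1, 500)
err = np.max(np.abs(np.polynomial.chebyshev.chebval(t_test, coeffs) - f(t_test)))
```

Smoothness is what makes 50 samples enough here; the talk's setting replaces the scalar output with an element of an infinite-dimensional Banach space and the input with a high-dimensional parameter.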
 

The controllability function method and the feedback synthesis problem for a robust linear system

Series
CDSNS Colloquium
Time
Friday, February 10, 2023 - 11:00 for 1 hour (actually 50 minutes)
Location
Online
Speaker
Tetiana Revina, V. N. Karazin Kharkiv National University

https://gatech.zoom.us/j/91390791493?pwd=QnpaWHNEOHZTVXlZSXFkYTJ0b0Q0UT09

The talk is about controllability for uncertain linear systems. Our approach is based on the Controllability Function (CF) method proposed by V. I. Korobov in 1979. The CF method is a development of the Lyapunov function method and the dynamic programming method, and it includes both approaches at certain values of its parameters. The main advantage of the CF method is the finiteness of the time of motion (the settling-time function).

In the talk, the feedback synthesis problem for a chain-of-integrators system with continuous, bounded, unknown perturbations is considered. This problem consists of constructing a control in explicit form that depends on the phase coordinates and steers an arbitrary initial point from a neighborhood of the origin to the origin in finite time (the settling-time function). In addition, the control satisfies some preassigned constraints. We find the range of unknown perturbations for which a control solving the synthesis problem for the unperturbed system also solves the synthesis problem for the perturbed system. This study shows the relation between the range of perturbations and the bounds on the settling-time function. In particular, the feedback synthesis problem for the motion of a material point with allowance for friction is solved.

Keywords: chain of integrators, finite-time stability, robust control, settling-time estimation, uncertain systems, unknown bounded perturbation
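For reference, a chain-of-integrators system with a bounded matched perturbation can be written as follows (the notation is ours, chosen for illustration; the speaker's exact formulation may differ):

```latex
\dot{x}_1 = x_2, \qquad
\dot{x}_2 = x_3, \qquad \ldots, \qquad
\dot{x}_{n-1} = x_n, \qquad
\dot{x}_n = u + f(t, x), \qquad |f(t, x)| \le \Delta ,
```

where $u$ is the feedback control to be designed and $f$ is the continuous, bounded, unknown perturbation; the synthesis problem asks for an explicit $u(x)$ steering any initial point from a neighborhood of the origin to the origin in finite time.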

On Extremal Polynomials: 5. Upper Estimates and Irregularity of Widom Factors

Series
Mathematical Physics and Analysis Working Seminar
Time
Friday, February 10, 2023 - 12:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Burak Hatinoglu, Georgia Institute of Technology

We will continue to focus on the Cantor-type sets introduced last week. Using them, we will consider the maximal growth rate and irregular behavior of Widom factors (the nth Chebyshev number divided by the nth power of the logarithmic capacity). We will also discuss a recent result of Jacob Christiansen, Barry Simon, and Maxim Zinchenko, which shows that the Widom factors of Parreau-Widom sets are uniformly bounded.
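In symbols, writing $t_n(E)$ for the $n$th Chebyshev number of a compact set $E$ (the minimal sup-norm of a monic degree-$n$ polynomial on $E$) and $\operatorname{cap}(E)$ for its logarithmic capacity, the Widom factors are:

```latex
W_n(E) = \frac{t_n(E)}{\operatorname{cap}(E)^n},
\qquad
t_n(E) = \min_{\substack{P \text{ monic} \\ \deg P = n}} \; \sup_{z \in E} |P(z)| .
```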

Computation with sequences of neural assemblies

Series
ACO Student Seminar
Time
Friday, February 10, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Max Dabagia, Georgia Tech CS

Assemblies are subsets of neurons whose coordinated excitation is hypothesized to represent the subject's thinking of an object, idea, episode, or word. Consequently, they provide a promising basis for a theory of how neurons and synapses give rise to higher-level cognitive phenomena. The existence (and pivotal role) of assemblies was first proposed by Hebb, and has since been experimentally confirmed, as well as rigorously proven to emerge in the model of computation in the brain recently developed by Papadimitriou & Vempala. In light of contemporary studies which have documented the creation and activation of sequences of assemblies of neurons following training on tasks with sequential decisions, we study here the brain's mechanisms for working with sequences in the assemblies model of Papadimitriou & Vempala.  We show that (1) repeated presentation of a sequence of stimuli leads to the creation of a sequence of corresponding assemblies -- upon future presentation of any contiguous sub-sequence of stimuli, the corresponding assemblies are activated and continue until the end of the sequence; (2) when the stimulus sequence is projected to two brain areas in a "scaffold", both memorization and recall are more efficient, giving rigorous backing to the cognitive phenomenon that memorization and recall are easier with scaffolded memories; and (3) existing assemblies can be quite easily linked to simulate an arbitrary finite state machine (FSM), thereby capturing the brain's ability to memorize algorithms. This also makes the assemblies model capable of arbitrary computation simply in response to presentation of a suitable stimulus sequence, without explicit control commands. These findings provide a rigorous, theoretical explanation at the neuronal level of complex phenomena such as sequence memorization in rats and algorithm learning in humans, as well as a concrete hypothesis as to how the brain's remarkable computing and learning abilities could be realized.

 

Joint work with Christos Papadimitriou and Santosh Vempala.
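For readers unfamiliar with the target object in result (3): a deterministic finite state machine is just a transition table over states and input symbols. A minimal Python illustration (this models only the machine being simulated, not the assembly dynamics):

```python
# A deterministic FSM as a transition table. Example: a parity machine
# tracking whether an even or odd number of 1s has been seen so far.
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def run_fsm(start, inputs):
    """Feed the input symbols through the transition table."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

final = run_fsm("even", "1101")   # three 1s in the input
```

The result in the abstract says that linked assemblies can emulate any such table, with state transitions triggered purely by the stimulus sequence.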