Seminars and Colloquia by Series

Dehn–Seidel twists on configurations of Lagrangian spheres in K3 surfaces

Series
Geometry Topology Seminar
Time
Monday, October 27, 2025 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Juan Munoz-Echaniz, Stony Brook University

On a closed, simply-connected, symplectic 4-manifold, the Dehn–Seidel twists on Lagrangian spheres and their products provide all known examples of non-trivial elements in the symplectic mapping class group. However, little is known in general about the relations that may hold among Dehn–Seidel twists. 

I will discuss the following result: on a symplectic K3 surface, the squared Dehn–Seidel twists on Lagrangian spheres with distinct fundamental classes are algebraically independent in the abelianization of the (smoothly-trivial) symplectic mapping class group. In a particular case, this establishes an abelianized form of a conjecture of Seidel and Thomas on the faithfulness of certain braid group representations in the symplectic mapping class group of K3 surfaces. The proof makes use of Seiberg–Witten gauge theory for families of symplectic 4-manifolds.
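Spelled out, the independence statement (a paraphrase of the claim above, with $\tau_L$ denoting the Dehn–Seidel twist along $L$) reads: if $L_1, \dots, L_k$ are Lagrangian spheres with pairwise distinct classes $[L_i] \in H_2$, then any relation

$$\sum_{i=1}^{k} n_i\,\big[\tau_{L_i}^{2}\big] \;=\; 0, \qquad n_i \in \mathbb{Z},$$

in the abelianization of the smoothly-trivial symplectic mapping class group forces $n_1 = \dots = n_k = 0$.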

Lagrangian Dual Sections: A Topological Perspective on Hidden Convexity

Series
Algebra Seminar
Time
Monday, October 27, 2025 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Kevin Shu, California Institute of Technology

Convex relaxations are of central interest in optimization, and it is typically challenging to determine whether a given convex relaxation will be tight for a given problem. We introduce a topological framework for analyzing situations in which a constrained optimization problem over a nonconvex set (such as a manifold) has a tight convex relaxation. In particular, we give a criterion for the existence of such a tight convex relaxation in terms of the existence of a continuous function of Lagrange multipliers for the constrained problem maximizing the corresponding Lagrangian. We call such a function a Lagrangian dual section, in reference to the topological notion of a section of a bundle.
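To fix ideas, here is a schematic version of the notion (my notation, not necessarily the speaker's precise setup): for a problem $\max_{x \in M} f(x)$ with $M = \{x \in \mathbb{R}^n : g_1(x) = \dots = g_m(x) = 0\}$, write the Lagrangian as

$$\mathcal{L}(x, \lambda) \;=\; f(x) - \sum_{i=1}^{m} \lambda_i\, g_i(x).$$

A Lagrangian dual section is then a continuous map $\lambda : M \to \mathbb{R}^m$ such that each $x \in M$ globally maximizes $\mathcal{L}(\,\cdot\,, \lambda(x))$; the criterion ties the existence of such a section to the tightness of the convex relaxation.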

As a corollary of this result, we will give new criteria for the exactness of SDP relaxations for Stiefel manifold optimization and inverse eigenvalue problems in terms of linear subspaces of matrices satisfying spectral properties such as being nonsingular. We will also illustrate a homotopy-continuation-style algorithm with global optimality guarantees, with applications to the unbalanced Procrustes problem.

Problems and Results for Geometric Graphs and Hypergraphs

Series
Combinatorics Seminar
Time
Friday, October 24, 2025 - 15:15 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Jacques Verstraëte, University of California San Diego

A geometric graph consists of a set of points in the plane together with line segments between some pairs of points. A convex geometric graph is a geometric graph whose points are in convex position. We present some old and new extremal results and applications, and their extension to geometric hypergraphs, together with a variety of open problems.

Partly joint work with Zoltán Füredi, Tao Jiang, Sasha Kostochka, and Dhruv Mubayi.

Margulis-like measures on expanding foliations: construction and rigidity

Series
CDSNS Colloquium
Time
Friday, October 24, 2025 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 311
Speaker
Fan Yang, Wake Forest University

In recent joint work with J. Buzzi, Y. Shi, and J. Yang, we consider a diffeomorphism $f$ preserving a one-dimensional expanding foliation $\mathcal F$ with homogeneous exponential growth, and construct a family of reference measures on the leaves of the foliation with controlled Jacobians and a Gibbs property.

We then prove that for any measure of maximal $\mathcal F$-entropy, its conditional measures on each leaf must be equivalent to the reference measures.

When the reference measures are equivalent to the leafwise Lebesgue measure, we prove that the log-determinant of $f$ must be cohomologous to a constant.
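Here "cohomologous to a constant" is meant in the standard sense: a function $\psi$ is cohomologous to a constant $c$ if there is a transfer function $\varphi$ (with the regularity appropriate to the setting) such that

$$\psi \;=\; c + \varphi \circ f - \varphi.$$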

We will consider several applications, including the strong and center foliations of Anosov diffeomorphisms, factors over Anosov diffeomorphisms, and perturbations of the time-one map of geodesic flows on negatively curved surfaces. We will also discuss several conjectures on the unique ergodicity and (exponential) equidistribution for the strong unstable foliations of Anosov systems.

Zoom link: https://gatech.zoom.us/j/92005780980?pwd=ptlx7KdBAbHI3DTvv6V7fjFn27LDaE.1

Meeting ID: 920 0578 0980
Passcode: 604975

Lectures on Kahler Geometry V

Series
Geometry Topology Working Seminar
Time
Friday, October 24, 2025 - 14:00 for 1.5 hours (actually 80 minutes)
Location
Skiles 006
Speaker
Randy Van Why, Georgia Tech

This series will tie algebraic, complex analytic, symplectic, and contact geometries together in one coherent story. This will be done via the study of a series of couplets from different fields of geometry:

Algebraic manifolds:
Affine and quasi-projective varieties (non-compact models)
Projective varieties (compact models)

Complex manifolds:
Stein manifolds
Stein compactifications

Symplectic manifolds:
Liouville/Weinstein geometry
Compact Kahler manifolds 

Depending on how long it takes to discuss these items, I will also attempt to include discussions on:

• Biran-Giroux decompositions of symplectic manifolds
• Boothby-Wang bundles and contact plumbings of these
• Milnor's fibration theorem for isolated singularities and connections to open book decompositions and Lefschetz fibrations
• Open questions and interesting avenues of research

Most of our discussion will, as a side effect, outline the topological structure behind Type IIA string theory (the "topological A-model"), which requires a 6-dimensional Calabi-Yau (Kahler) background.


Efficient Low-Rank Training and Fine-Tuning of Neural Networks

Series
Applied and Computational Mathematics Seminar
Time
Friday, October 24, 2025 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Steffen Schotthoefer, Oak Ridge National Laboratory

Abstract:

Low-rank adaptation (LoRA) has become the de facto state-of-the-art method for parameter-efficient fine-tuning of large-scale, pre-trained neural networks.  Similarly, low-rank compression of pre-trained networks has become a widely adopted technique to reduce the parameter count of networks for fast inference on resource-constrained devices.  The idea of low-rank methods is based on the assumption that the weight matrices of overparametrized neural networks have low rank.  Thus, a factorization of the weight layers based on truncated singular value decompositions can be employed to reduce the memory footprint of the network.  However, LoRA and its extensions face several challenges in practice, including the need for rank adaptivity, robustness, and computational efficiency during the fine-tuning process.  In this talk, Dr. Schotthoefer investigates mathematical concepts of low-rank training and uses the gained insights to design efficient and robust low-rank training algorithms.
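To make the low-rank factorization concrete, the following is a minimal PyTorch sketch of the LoRA mechanism described above; it illustrates the general idea rather than the speaker's methods, and the module name, rank, and scaling are illustrative choices.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen pretrained linear layer with a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep the pretrained weights fixed
        d_out, d_in = base.weight.shape
        # Standard LoRA initialization: A small random, B zero, so training
        # starts exactly at the pretrained model.
        self.A = nn.Parameter(0.01 * torch.randn(rank, d_in))
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight is W + (alpha/rank) * B @ A, applied without ever
        # forming the full d_out x d_in update matrix.
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)

# Only A and B (rank * (d_in + d_out) parameters) are trained during fine-tuning.
layer = LoRALinear(nn.Linear(512, 512), rank=8)
y = layer(torch.randn(4, 512))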

                                                                                        

Speaker’s Bio:

Dr. Steffen Schotthoefer is the current Householder Fellow in the Mathematics in Computation Section at the Oak Ridge National Laboratory (ORNL), affiliated with the Multiscale Methods and Dynamics Group.  Steffen's work centers on creating efficient numerical methods for training and fine-tuning artificial intelligence models in environments with limited resources and at large scales.  He investigates low-rank methods for model compression to minimize the computational cost of neural network training and inference.  In addition, Steffen develops neural network-based surrogate models for scientific domains such as radiation transport and plasma dynamics.  His research aims to tackle the challenges posed by memory and communication bottlenecks in large-scale simulations.  Prior to joining ORNL, Steffen completed his Ph.D. in Applied Mathematics at Karlsruhe Institute of Technology, Germany, focusing on neural network-based surrogate modeling for radiation transport.  During his doctoral studies, he devised numerical methods for the simulation of kinetic partial differential equations and neural network training, establishing the foundation for his current research.


Power law covariance and a solvable model of the Kaplan scaling laws

Series
Stochastics Seminar
Time
Thursday, October 23, 2025 - 15:30 for 1 hour (actually 50 minutes)
Location
Speaker
Elliot Paquette, McGill University

One of the foundational ideas in modern machine learning is the scaling hypothesis: that machine learning models will improve in a predictable manner, with each doubling of resources leading to a commensurate improvement in abilities.  These were formalized for large language models in the Kaplan et al. scaling laws.
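For reference, the Kaplan et al. laws take a power-law form in each resource when that resource is the binding constraint; schematically (with constants and exponents fitted empirically),

$$L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C},$$

where $N$, $D$, and $C$ denote parameter count, dataset size, and compute.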

These scaling laws are almost entirely empirically observed, which motivates the development of probabilistic models that can explain them and ultimately inform fundamental questions, such as: what can improve these laws, and what causes them to break?

In this talk I’ll focus on a simple random matrix model of these scaling laws, the power law random features model, which motivates new iterations of stochastic algorithms that have the potential to change these scaling laws.  This random matrix model is not fully solved, and many open questions, both in pure probability and in machine learning, arise in this study.

Computer Algebra club/seminar

Series
Additional Talks and Lectures
Time
Thursday, October 23, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Noah Solomon, Georgia Tech

We will start with a 15-minute presentation by Noah Solomon and continue with a free discussion.

A modular framework for generalized Hurwitz class numbers

Series
Number Theory
Time
Wednesday, October 22, 2025 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Olivia Beckwith, Tulane University

We explore the modular properties of generating functions for Hurwitz class numbers endowed with level structure. Our work is based on an inspection of the weight $\frac{1}{2}$ Maass–Eisenstein series of level $4N$ at its spectral point $s=\frac{3}{4}$, extending the work of Duke, Imamoğlu, and Tóth in the level $4$ setting. We construct a higher-level analogue of Zagier's Eisenstein series and a preimage under the $\xi_{\frac{1}{2}}$-operator. We deduce a linear relation between the mock modular generating functions of the level $1$ and level $N$ Hurwitz class numbers, giving rise to a holomorphic modular form of weight $\frac{3}{2}$ and level $4N$ for $N > 1$ odd and square-free. Furthermore, we connect these results to a regularized Siegel theta lift as well as a regularized Kudla–Millson theta lift for odd prime levels, building on earlier work of Bruinier, Funke, and Imamoğlu. I will be discussing joint work with Andreas Mono and Ngoc Trinh Le.
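For orientation, in the classical level $1$ setting the Hurwitz class numbers $H(n)$ have generating function

$$\sum_{n \ge 0} H(n)\, q^{n} \;=\; -\tfrac{1}{12} + \tfrac{1}{3} q^{3} + \tfrac{1}{2} q^{4} + q^{7} + q^{8} + q^{11} + \cdots, \qquad q = e^{2\pi i \tau},$$

which is mock modular of weight $\frac{3}{2}$; completing it with a nonholomorphic theta integral yields Zagier's weight $\frac{3}{2}$ Eisenstein series on $\Gamma_0(4)$, the object whose higher-level analogue is constructed in the talk.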

Dynamical Frames and Hyperinvariant Subspaces

Series
Analysis Seminar
Time
Wednesday, October 22, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Victor Bailey, University of Oklahoma

The theory of dynamical frames arose from practical problems in dynamical sampling where the initial state of a vector needs to be recovered from the space-time samples of future states of the vector. This leads to the investigation of structured frames obtained from the orbits of evolution operators. One of the basic problems in dynamical frame theory is to determine the semigroup representations, which we will call central frame representations, whose frame generators are unique (up to equivalence). In this talk, we will address the general uniqueness problem by presenting a characterization of central frame representations for any semigroup in terms of the co-hyperinvariant subspaces of the left regular representation of the semigroup. This result is not only consistent with the known result of Han and Larson in 2000 for group representation frames, but also proves that the frame vectors for any system of the form $\{A_1^{n_1}\cdots A_k^{n_k}: n_j\geq 0\}$, where $A_1,...,A_k \in B(H)$ commute, are equivalent. This is joint work with Deguang Han, Keri Kornelson, David Larson, and Rui Liu.
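For context, the simplest instance of the framework (a single operator rather than a semigroup): given $A \in B(H)$ and $g \in H$, the orbit $\{A^{n} g\}_{n \ge 0}$ is a dynamical frame for $H$ if there are constants $0 < c \le C$ with

$$c\, \|f\|^{2} \;\le\; \sum_{n \ge 0} \big|\langle f, A^{n} g\rangle\big|^{2} \;\le\; C\, \|f\|^{2} \qquad \text{for all } f \in H;$$

the uniqueness question above asks when the generator $g$ of such a frame is unique up to equivalence.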
