Seminars and Colloquia by Series

The Uzawa Method: Historical Perspectives, Current Advances, and Future Directions

Series
Applied and Computational Mathematics Seminar
Time
Friday, January 23, 2026 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Professor Xiaoming Yuan, The University of Hong Kong

Abstract:
This talk explores the Uzawa method, tracing its development from early applications in partial differential equations (PDEs) to modern advancements in optimization, image processing, and scientific computing. We will examine recent refinements that yield GPU-adaptive solvers for huge-scale linear programming, and their extension to semidefinite programming problems arising in quantum information science. The discussion will also highlight the method's integration with deep learning and unrolling techniques for optimal control of PDEs, as well as its industrial applications.
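For readers unfamiliar with the basic iteration, the following is a minimal numpy sketch of the classical Uzawa method for an equality-constrained quadratic program; the problem data, step size, and iteration count are illustrative choices, not taken from the talk.

    import numpy as np

    # Illustrative equality-constrained QP: min 0.5 x'Qx - c'x  s.t.  Ax = b.
    # The Uzawa method alternates an exact primal minimization with a
    # gradient-ascent step on the dual variable.
    rng = np.random.default_rng(0)
    n, m = 10, 3
    M = rng.standard_normal((n, n))
    Q = M @ M.T + n * np.eye(n)        # symmetric positive definite
    c = rng.standard_normal(n)
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    lam = np.zeros(m)                  # Lagrange multiplier estimate
    alpha = 0.1                        # dual step size (assumed small enough)
    for _ in range(2000):
        x = np.linalg.solve(Q, c - A.T @ lam)   # primal step: argmin_x L(x, lam)
        lam = lam + alpha * (A @ x - b)         # dual step: ascent on the dual

    print("constraint residual:", np.linalg.norm(A @ x - b))

Convergence of the dual iteration requires the step size to be small relative to the spectrum of A Q^{-1} A'; the refinements discussed in the talk go well beyond this textbook form.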


Bio:

Xiaoming Yuan is a Professor in the Department of Mathematics at The University of Hong Kong. His research spans optimization, optimal control, scientific machine learning, and artificial intelligence. He is well recognized for his fundamental contributions to first-order optimization algorithms, including the Alternating Direction Method of Multipliers (ADMM), primal-dual methods, and proximal point algorithms. He also collaborates extensively with the AI and cloud computing industries, and led the development of the first automatic bandwidth allocation system for the cloud computing sector. His team was honored as a Franz Edelman Award Finalist in 2023.

Learning geometry from incomplete pairwise distances: Theory, algorithms and applications

Series
Applied and Computational Mathematics Seminar
Time
Monday, January 12, 2026 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 254 and https://gatech.zoom.us/j/94954654170
Speaker
Abiy Tasissa, Tufts

The advancement of technology has significantly enhanced our capacity to collect data. However, in many real-world applications, certain inherent limitations, such as the precision of measurement devices, environmental conditions, or operating costs, can result in missing data. In this talk, we focus on the setting where the available data consists of pairwise distances between a set of points, with the goal of estimating the configuration of the underlying geometry from incomplete distance measurements. This is known as the Euclidean distance geometry (EDG) problem and is central to many applications.

We start by describing the solution when all distances are given, using the classical multidimensional scaling (MDS) technique, and then discuss a constructive approach to interpreting the key mathematical objects in MDS. Next, we introduce a mathematical framework for the EDG problem under two sampling models of the distance matrix: global sampling (uniform sampling of the entries of the distance matrix) and structured local sampling, where the measurements are limited to a subset of rows and columns. We discuss the conditions required for exact recovery of the point configuration and the associated algorithms. The last part of the talk illustrates the algorithms on synthetic and real data and discusses ongoing work.
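As a concrete reference for the classical starting point, here is a minimal numpy sketch of classical MDS applied to a complete squared-distance matrix; the random point cloud is an illustrative stand-in for real data, and the talk's incomplete-sampling algorithms are not reproduced here.

    import numpy as np

    # Illustrative ground-truth configuration: n points in R^3.
    rng = np.random.default_rng(0)
    n, d = 50, 3
    X = rng.standard_normal((n, d))

    # Complete squared pairwise distance matrix D2[i, j] = ||x_i - x_j||^2.
    G = X @ X.T
    sq = np.diag(G)
    D2 = sq[:, None] + sq[None, :] - 2 * G

    # Classical MDS: double-center D2 to recover a Gram matrix, then factor it.
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ D2 @ J                        # Gram matrix of centered points
    evals, evecs = np.linalg.eigh(B)
    top = np.argsort(evals)[::-1][:d]            # top-d eigenpairs
    Y = evecs[:, top] * np.sqrt(np.maximum(evals[top], 0))

    # Y recovers X up to rotation, reflection, and translation.
    G2 = Y @ Y.T
    D2_rec = np.diag(G2)[:, None] + np.diag(G2)[None, :] - 2 * G2
    print("max squared-distance error:", np.abs(D2 - D2_rec).max())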

Fantastic Path RNDs and where to find them in diffusion control

Series
Applied and Computational Mathematics Seminar
Time
Tuesday, December 9, 2025 - 13:30 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/96503797550
Speaker
Jiajun He, University of Cambridge

Please Note: This talk is at a special time and date. The speaker will present in person.

I will begin by introducing the concept of the path Radon–Nikodym derivative (path RND) and explaining how it connects to, and accelerates, classical sampling and estimation algorithms such as parallel tempering and free-energy perturbation. I will then show how the path RND offers a unifying perspective on controlling diffusion models using sequential Monte Carlo. Finally, I will present a new paradigm for inference-time control based on parallel tempering, which enables more robust manipulation of diffusion trajectories.
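As a toy illustration of the reweighting idea (a one-step analogue, not the path-space construction from the talk), the sketch below uses the Radon–Nikodym derivative between two Gaussian densities as an importance weight and then performs the multinomial resampling step used in sequential Monte Carlo; the two densities are illustrative assumptions.

    import numpy as np

    # Estimate an expectation under a target q using samples from a
    # proposal p, weighting by w = dq/dp (the RND evaluated on samples).
    rng = np.random.default_rng(0)

    def log_p(x):   # proposal: N(0, 1)
        return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

    def log_q(x):   # target: N(1, 1), an illustrative choice
        return -0.5 * (x - 1.0)**2 - 0.5 * np.log(2 * np.pi)

    x = rng.standard_normal(10_000)             # samples from p
    logw = log_q(x) - log_p(x)                  # log dq/dp on the samples
    w = np.exp(logw - logw.max())
    w /= w.sum()                                # self-normalized weights

    print("E_q[x] estimate:", np.sum(w * x))    # should be close to 1.0

    # In sequential Monte Carlo, the same weights drive a resampling step:
    idx = rng.choice(len(x), size=len(x), p=w)
    print("post-resampling mean:", x[idx].mean())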

Opportunities and Challenges of Neural Networks in Partial Differential Equations

Series
Applied and Computational Mathematics Seminar
Time
Monday, December 1, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Yahong Yang, Georgia Tech

The use of neural networks for solving partial differential equations (PDEs) has attracted considerable attention in recent years. In this talk, I will first highlight their advantages over traditional numerical methods, including improved approximation rates and the potential to overcome the curse of dimensionality. I will then discuss the challenges that arise when applying neural networks to PDEs, particularly in training. Because training is inherently a highly nonconvex optimization problem, it can lead to poor local minima with large training errors, especially in complex PDE settings. To address these issues, I will demonstrate how incorporating mathematical insight into the design of training algorithms and network architectures can lead to significant improvements in both accuracy and robustness.
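To make the training problem concrete, here is a minimal PyTorch sketch of a physics-informed residual loss for the 1D Poisson equation -u'' = f with zero boundary conditions; the architecture, penalty weighting, and optimizer settings are illustrative assumptions, not the methods developed in the talk.

    import torch

    # Minimal physics-informed loss for -u''(x) = f(x) on (0, 1), u(0)=u(1)=0.
    # Manufactured solution u(x) = sin(pi x), so f(x) = pi^2 sin(pi x).
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        x = torch.rand(128, 1, requires_grad=True)      # interior collocation points
        u = net(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
        f = torch.pi**2 * torch.sin(torch.pi * x)
        residual = (-d2u - f).pow(2).mean()             # PDE residual term
        xb = torch.tensor([[0.0], [1.0]])
        boundary = net(xb).pow(2).mean()                # Dirichlet penalty term
        loss = residual + boundary
        opt.zero_grad()
        loss.backward()
        opt.step()

This loss is nonconvex in the network weights, which is precisely the training difficulty the talk addresses.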

Transformers for Learning Single-Task and Multi-Task Regression on Manifolds: Approximation and Generalization Insights

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 24, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Zhaiming Shen, Georgia Institute of Technology

Transformers serve as the foundational architecture for large language and video generation models, such as GPT, BERT, Sora, and their successors. While empirical studies have shown that real-world data and learning tasks exhibit low-dimensional geometric structures, the theoretical understanding of how transformers leverage these structures remains largely unexplored. In this talk, we present a theoretical foundation for transformers in two key scenarios: (1) regression tasks with noisy input data lying near a low-dimensional manifold, and (2) in-context learning (ICL) for regression of Hölder functions on manifolds. For the first setting, we prove approximation and generalization bounds that depend crucially on the intrinsic dimension of the manifold, demonstrating that transformers can effectively learn from data perturbed by high-dimensional noise. For the second setting, we derive generalization error bounds for ICL in terms of prompt length and the number of training tasks, revealing that transformers achieve the minimax optimal rate for Hölder regression, scaling exponentially with the intrinsic rather than the ambient dimension. Together, these results provide foundational insights into how transformers exploit low-dimensional geometric structures in learning tasks, advancing our theoretical understanding of their remarkable empirical success.
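To fix intuition for the two settings, the sketch below generates regression data near a one-dimensional manifold (a circle) embedded in a high-dimensional ambient space and assembles an in-context prompt of labelled pairs plus a query; the manifold, noise level, and target function are illustrative assumptions, not the talk's constructions.

    import numpy as np

    # Inputs near a 1-dimensional manifold embedded in R^D, perturbed by
    # noise; the rates in the talk depend on the intrinsic dimension
    # (here 1), not on the ambient dimension D.
    rng = np.random.default_rng(0)
    D, n, sigma = 100, 512, 0.05

    U, _ = np.linalg.qr(rng.standard_normal((D, 2)))   # embed R^2 into R^D
    theta = rng.uniform(0, 2 * np.pi, size=n)          # intrinsic coordinate
    clean = np.stack([np.cos(theta), np.sin(theta)], axis=1) @ U.T
    X = clean + sigma * rng.standard_normal((n, D))    # noisy ambient inputs
    y = np.sin(3 * theta)                              # a Hölder target on the manifold

    # An in-context prompt: k labelled pairs followed by a query point
    # whose label the model must predict (label slot zeroed out).
    k = 32
    prompt = np.concatenate([np.hstack([X[:k], y[:k, None]]),
                             np.hstack([X[k:k + 1], np.zeros((1, 1))])])
    print(prompt.shape)   # (k + 1, D + 1)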

Reduced-order data assimilation models for computing probability distributions of complex multiscale systems

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 17, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Di Qi, Purdue University

A new strategy is presented for the statistical forecast of multiscale nonlinear systems involving non-Gaussian probability distributions, and the capability of reduced-order models to capture key statistical features is investigated. A closed stochastic-statistical modeling framework is proposed using a high-order statistical closure, enabling accurate prediction of leading-order statistical moments and probability density functions in multiscale complex turbulent systems. A new, efficient ensemble forecast algorithm is developed to handle the nonlinear multiscale coupling that characterizes high-dimensional turbulent systems. To address the challenges posed by closely coupled spatio-temporal scales in turbulent states and by the expense of large-ensemble simulations of high-dimensional complex systems, we introduce efficient computational strategies based on the random batch method. Effective nonlinear ensemble filters are developed from the nonlinear coupling structure of the explicit stochastic and statistical equations, which satisfy an infinite-dimensional Kalman-Bucy filter with conditional Gaussian dynamics. It is demonstrated that the crucial principal statistical quantities at the most important large scales can be captured efficiently and accurately by the new reduced-order model across dynamical regimes of the flow field with distinct statistical structures.
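As a small illustration of one ingredient, the random batch method, here is a numpy sketch for a generic interacting particle system; the kernel, batch size, and noise level are illustrative assumptions, and the statistical closures and nonlinear filters from the talk are not reproduced.

    import numpy as np

    # Random batch idea for dx_i = (1/N) sum_j K(x_i - x_j) dt + noise:
    # at each step the particles are shuffled into small batches and
    # interactions are evaluated only within a batch, reducing the
    # per-step cost from O(N^2) to O(N p).
    rng = np.random.default_rng(0)
    N, p, dt, steps = 1000, 2, 1e-2, 100
    x = rng.standard_normal(N)

    def K(r):                                   # illustrative attraction kernel
        return -np.tanh(r)

    for _ in range(steps):
        perm = rng.permutation(N)
        for batch in perm.reshape(-1, p):       # batches of size p
            xi = x[batch]
            # interaction averaged over the p - 1 other batch members
            # (K(0) = 0, so the diagonal contributes nothing)
            force = K(xi[:, None] - xi[None, :]).sum(axis=1) / (p - 1)
            x[batch] = xi + force * dt
        x += np.sqrt(dt) * 0.1 * rng.standard_normal(N)   # diffusion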

Bridging Scientific Computing and Machine Learning through Stochastic and Data-Driven Solvers

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 10, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Tianshi Xu, Emory University

Classical solvers for large-scale scientific and data-driven problems often face limitations when uncertainty, multiscale effects, or ill-conditioning become dominant. In this talk, I will present hybrid algorithmic frameworks that unify ideas from numerical analysis, stochastic computation, and machine learning to address these challenges. In the first part, I will introduce Preconditioned Truncated Single-Sample (PTSS) estimators, a new class of stochastic Krylov methods that integrate preconditioning with truncated Lanczos iterations. PTSS provides unbiased, low-variance estimators for linear system solutions, log-determinants, and their derivatives, enabling scalable algorithms for inference and optimization. In the second part, I will discuss a data-driven approach to constructing approximate inverse preconditioners for partial differential equations (PDEs). By learning the Green’s function of the underlying operator through neural representations, this framework captures multiscale behavior and preserves essential spectral structure. The resulting solvers achieve near-linear complexity in both setup and application. Together, these developments illustrate how stochastic and learning-based mechanisms can be embedded into classical numerical frameworks to create adaptive and efficient computational methods for complex systems.
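For context on the first part, here is a minimal numpy sketch of the classical stochastic Lanczos quadrature baseline for estimating log det(A) = tr(log A) of a symmetric positive definite matrix; PTSS adds preconditioning and truncation on top of this kind of estimator, which this sketch does not attempt to reproduce.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, num_probes = 200, 30, 20
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)                # symmetric positive definite

    def lanczos(A, v, m):
        """m steps of Lanczos; returns the tridiagonal coefficients."""
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        q_prev, q = np.zeros_like(v), v / np.linalg.norm(v)
        b = 0.0
        for j in range(m):
            w = A @ q - b * q_prev
            alpha[j] = q @ w
            w -= alpha[j] * q
            if j < m - 1:
                b = np.linalg.norm(w)
                beta[j] = b
                q_prev, q = q, w / b
        return alpha, beta

    est = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe vector
        a, b = lanczos(A, z, m)
        T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
        evals, evecs = np.linalg.eigh(T)
        # Gauss quadrature: z' log(A) z ~ ||z||^2 * sum_k evecs[0,k]^2 log(evals_k)
        est += n * (evecs[0] ** 2 @ np.log(evals))
    est /= num_probes
    print("SLQ estimate:", est, " exact:", np.linalg.slogdet(A)[1])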
