Seminars and Colloquia by Series

Fantastic Path RNDs and Where to Find Them in Diffusion Control

Series
Applied and Computational Mathematics Seminar
Time
Tuesday, December 9, 2025 - 13:30 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Jiajun He, University of Cambridge

Please Note: Special time and date. The speaker will be in person.

I will begin by introducing the concept of path Radon–Nikodym derivative (path RND) and explaining how it connects to, and accelerates, classical sampling and estimation algorithms such as parallel tempering and free-energy perturbation. I will then show how path RND offers a unifying perspective on controlling diffusion models using Sequential Monte Carlo. Finally, I will present a new paradigm for inference-time control based on parallel tempering, which enables more robust manipulation of diffusion trajectories.
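
As a rough, hypothetical illustration of the mechanism (not the speaker's algorithm), the Python sketch below weights Euler-Maruyama paths of a reference SDE by the incremental path RND of a target SDE and resamples in the style of Sequential Monte Carlo; both drifts and all numerical settings are illustrative assumptions.

    import numpy as np

    def log_gauss(x, mean, var):
        # log density of N(mean, var), evaluated elementwise
        return -0.5 * ((x - mean) ** 2 / var + np.log(2 * np.pi * var))

    def smc_path_rnd(drift_p, drift_q, x0, n=1000, steps=100, dt=0.01, seed=0):
        # Propagate particles under the reference drift drift_p, and weight each
        # Euler-Maruyama transition by the incremental path RND of the target
        # (drift_q) against the reference: a ratio of Gaussian step densities.
        rng = np.random.default_rng(seed)
        x = np.full(n, x0, dtype=float)
        logw = np.zeros(n)
        for _ in range(steps):
            x_new = x + drift_p(x) * dt + rng.normal(size=n) * np.sqrt(dt)
            logw += log_gauss(x_new, x + drift_q(x) * dt, dt) \
                  - log_gauss(x_new, x + drift_p(x) * dt, dt)
            x = x_new
            w = np.exp(logw - logw.max()); w /= w.sum()
            if 1.0 / np.sum(w ** 2) < n / 2:   # resample on low effective sample size
                x = x[rng.choice(n, size=n, p=w)]
                logw[:] = 0.0
        return x, logw

    # reference Ornstein-Uhlenbeck drift vs. a target drift shifted to 2.0
    samples, logw = smc_path_rnd(lambda x: -x, lambda x: -(x - 2.0), x0=0.0)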

Opportunities and Challenges of Neural Networks in Partial Differential Equations

Series
Applied and Computational Mathematics Seminar
Time
Monday, December 1, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Yahong Yang, Georgia Tech

The use of neural networks for solving partial differential equations (PDEs) has attracted considerable attention in recent years. In this talk, I will first highlight their advantages over traditional numerical methods, including improved approximation rates and the potential to overcome the curse of dimensionality. I will then discuss the challenges that arise when applying neural networks to PDEs, particularly in training. Because training is inherently a highly nonconvex optimization problem, it can lead to poor local minima with large training errors, especially in complex PDE settings. To address these issues, I will demonstrate how incorporating mathematical insight into the design of training algorithms and network architectures can lead to significant improvements in both accuracy and robustness.
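
As a minimal sketch of the training problem the abstract refers to, the following PyTorch snippet minimizes the residual of a 1-D Poisson equation plus a boundary penalty, the prototypical physics-informed formulation; the architecture and test problem are illustrative assumptions, not the speaker's setup.

    import torch

    # Network and optimizer for the 1-D problem u''(x) = -pi^2 sin(pi x)
    # on [0, 1] with u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x).
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        x = torch.rand(128, 1, requires_grad=True)     # interior collocation points
        u = net(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
        pde_residual = d2u + torch.pi ** 2 * torch.sin(torch.pi * x)
        bc = net(torch.tensor([[0.0], [1.0]]))         # boundary penalty
        loss = pde_residual.pow(2).mean() + bc.pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()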

Transformers for Learning Single-Task and Multi-Task Regression on Manifolds: Approximation and Generalization Insights

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 24, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Zhaiming Shen, Georgia Institute of Technology

Transformers serve as the foundational architecture for large language and video generation models, such as GPT, BERT, SORA, and their successors. While empirical studies have shown that real-world data and learning tasks exhibit low-dimensional geometric structures, the theoretical understanding of how transformers leverage these structures remains largely unexplored. In this talk, we present a theoretical foundation for transformers in two key scenarios: (1) regression tasks with noisy input data lying near a low-dimensional manifold, and (2) in-context learning (ICL) for regression of Hölder functions on manifolds. For the first setting, we prove approximation and generalization bounds that depend crucially on the intrinsic dimension of the manifold, demonstrating that transformers can effectively learn from data perturbed by high-dimensional noise. For the second setting, we derive generalization error bounds for ICL in terms of prompt length and the number of training tasks, revealing that transformers achieve the minimax optimal rate for Hölder regression, scaling exponentially with the intrinsic rather than the ambient dimension. Together, these results provide foundational insights into how transformers exploit low-dimensional geometric structures in learning tasks, advancing our theoretical understanding of their remarkable empirical success.
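
For readers new to the ICL setting, the toy model below illustrates the prompt format: a sequence of (x, y) example tokens followed by a query token, with the prediction read off the final position. All dimensions and layer choices are illustrative assumptions, not those of the results above.

    import torch

    class ICLRegressor(torch.nn.Module):
        # Toy in-context regressor: the prompt is a sequence of (x, y) tokens
        # followed by a query token (x, 0); the prediction is read off the
        # last position.
        def __init__(self, x_dim=8, d_model=64):
            super().__init__()
            self.embed = torch.nn.Linear(x_dim + 1, d_model)   # token = concat(x, y)
            layer = torch.nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = torch.nn.TransformerEncoder(layer, num_layers=3)
            self.head = torch.nn.Linear(d_model, 1)

        def forward(self, xs, ys, x_query):
            # xs: (B, n, x_dim), ys: (B, n), x_query: (B, x_dim)
            prompt = torch.cat([xs, ys.unsqueeze(-1)], dim=-1)
            query = torch.cat([x_query, torch.zeros_like(ys[:, :1])], dim=-1).unsqueeze(1)
            tokens = self.embed(torch.cat([prompt, query], dim=1))
            return self.head(self.encoder(tokens))[:, -1, 0]   # prediction at query slot

    model = ICLRegressor()
    xs, ys, xq = torch.randn(2, 16, 8), torch.randn(2, 16), torch.randn(2, 8)
    pred = model(xs, ys, xq)   # shape (2,): one prediction per prompt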

Reduced-order data assimilation models for computing probability distributions of complex multiscale systems

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 17, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Di Qi, Purdue University

A new strategy is presented for the statistical forecast of multiscale nonlinear systems involving non-Gaussian probability distributions. The capability of reduced-order models to capture key statistical features is investigated. A closed stochastic-statistical modeling framework is proposed, using a high-order statistical closure that enables accurate prediction of leading-order statistical moments and probability density functions in multiscale complex turbulent systems. A new efficient ensemble forecast algorithm is developed to deal with the nonlinear multiscale coupling mechanism, a characteristic feature of high-dimensional turbulent systems. To address the challenges associated with closely coupled spatio-temporal scales in turbulent states and the expense of large ensemble simulations for high-dimensional complex systems, we introduce efficient computational strategies using the random batch method. Effective nonlinear ensemble filters are developed based on the nonlinear coupling structures of the explicit stochastic and statistical equations, which satisfy an infinite-dimensional Kalman-Bucy filter with conditional Gaussian dynamics. It is demonstrated that crucial principal statistical quantities at the most important large scales can be captured efficiently and accurately using the new reduced-order model in various dynamical regimes of the flow field with distinct statistical structures.
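
The random batch method mentioned above has a simple core idea, sketched here for a toy interacting particle system with an assumed linear attraction kernel (not the turbulent systems of the talk): reshuffle particles into small batches at every step and evaluate interactions only within each batch.

    import numpy as np

    def random_batch_step(x, dt, batch_size=2, rng=None):
        # One step of the random batch method for dx_i = attraction + noise:
        # shuffle particles into small batches and let them interact only
        # within a batch, cutting the O(N^2) interaction cost to O(N) per step.
        rng = np.random.default_rng() if rng is None else rng
        perm = rng.permutation(len(x))
        for start in range(0, len(x), batch_size):
            idx = perm[start:start + batch_size]
            xb = x[idx]
            p = len(xb)
            # within-batch version of the pairwise attraction K(x_j - x_i) = x_j - x_i
            x[idx] += (xb.mean() - xb) * (p / max(p - 1, 1)) * dt
        x += 0.1 * rng.normal(scale=np.sqrt(dt), size=len(x))
        return x

    x = np.random.default_rng(0).normal(size=1000)
    for _ in range(100):
        x = random_batch_step(x, dt=0.01)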

Bridging Scientific Computing and Machine Learning through Stochastic and Data-Driven Solvers

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 10, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Tianshi Xu, Emory University

Classical solvers for large-scale scientific and data-driven problems often face limitations when uncertainty, multiscale effects, or ill-conditioning become dominant. In this talk, I will present hybrid algorithmic frameworks that unify ideas from numerical analysis, stochastic computation, and machine learning to address these challenges. In the first part, I will introduce Preconditioned Truncated Single-Sample (PTSS) estimators, a new class of stochastic Krylov methods that integrate preconditioning with truncated Lanczos iterations. PTSS provides unbiased, low-variance estimators for linear system solutions, log-determinants, and their derivatives, enabling scalable algorithms for inference and optimization. In the second part, I will discuss a data-driven approach to constructing approximate inverse preconditioners for partial differential equations (PDEs). By learning the Green’s function of the underlying operator through neural representations, this framework captures multiscale behavior and preserves essential spectral structure. The resulting solvers achieve near-linear complexity in both setup and application. Together, these developments illustrate how stochastic and learning-based mechanisms can be embedded into classical numerical frameworks to create adaptive and efficient computational methods for complex systems.
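
PTSS itself couples preconditioning with truncated Lanczos iterations; the sketch below shows only the generic randomized-truncation ("Russian roulette") device that makes truncated single-sample estimators unbiased, applied here to a preconditioned Richardson iteration as a stand-in. It is an assumption-laden toy, not the PTSS estimator.

    import numpy as np

    def truncated_single_sample_solve(A, b, M_inv, p=0.9, rng=None):
        # Russian-roulette truncation of a preconditioned Richardson iteration:
        # sum the iteration increments d_k up to a geometric stopping time K and
        # reweight term k by 1 / P(K >= k) = 1 / p**k, so the estimate of
        # x = A^{-1} b is unbiased even though the iteration is truncated.
        rng = np.random.default_rng() if rng is None else rng
        x = np.zeros_like(b)
        est = np.zeros_like(b)
        survive = 1.0
        while True:
            d = M_inv @ (b - A @ x)       # preconditioned increment
            est += d / survive
            x = x + d
            if rng.random() > p:          # stop with probability 1 - p
                return est
            survive *= p

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    M_inv = np.diag(1.0 / np.diag(A))     # Jacobi preconditioner
    b = np.array([1.0, 2.0])
    x_hat = np.mean([truncated_single_sample_solve(A, b, M_inv) for _ in range(2000)], axis=0)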

Multiscale Representation and Learning of Molecules

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 3, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Bao Wang, University of Utah

Artificial intelligence (AI) has become a transformative force in scientific discovery---known as AI for Science---with profound impact on computational molecular design, as highlighted by the 2024 Nobel Prize in Chemistry. Owing to their remarkable capability to analyze complex structures, message-passing neural networks and diffusion- and flow-based generative models stand out as effective tools for molecular property prediction and structure generation. However, (1) message-passing neural networks struggle to efficiently integrate multiscale molecular features and complex 3D geometry for accurate property prediction, and (2) the generative processes of diffusion- and flow-based models are often computationally intensive and error-prone.

In this talk, I will present our recent advances toward overcoming these limitations: (1) multiscale graph representations and message-passing architectures for efficient and accurate molecular learning, and (2) one-step flow-based generative models that enable high-fidelity molecule generation with dramatically reduced computational cost.
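
For context, a generic message-passing layer of the kind such models build on can be written in a few lines; this schematic (with assumed dimensions and update networks) is not the multiscale architecture of the talk.

    import torch

    class MessagePassingLayer(torch.nn.Module):
        # Generic message passing: m_ij = phi(h_i, h_j) along each edge, then
        # h_i' = psi(h_i, sum_j m_ij) with messages summed per receiving atom.
        def __init__(self, dim):
            super().__init__()
            self.msg = torch.nn.Sequential(torch.nn.Linear(2 * dim, dim), torch.nn.SiLU())
            self.upd = torch.nn.Sequential(torch.nn.Linear(2 * dim, dim), torch.nn.SiLU())

        def forward(self, h, edges):
            # h: (n_atoms, dim); edges: (n_edges, 2) index pairs (src, dst)
            src, dst = edges[:, 0], edges[:, 1]
            m = self.msg(torch.cat([h[src], h[dst]], dim=-1))
            agg = torch.zeros_like(h).index_add_(0, dst, m)   # sum messages per atom
            return self.upd(torch.cat([h, agg], dim=-1))

    layer = MessagePassingLayer(16)
    h = torch.randn(5, 16)                                    # 5 atoms
    edges = torch.tensor([[0, 1], [1, 0], [1, 2], [2, 1]])    # bonded pairs
    h = layer(h, edges)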

Efficient Low-Rank Training and Fine-Tuning of Neural Networks

Series
Applied and Computational Mathematics Seminar
Time
Friday, October 24, 2025 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Steffen Schotthoefer, Oak Ridge National Laboratory

Abstract:

Low-rank adaptation (LoRA) has become the de facto state-of-the-art method for parameter-efficient fine-tuning of large-scale, pre-trained neural networks.  Similarly, low-rank compression of pre-trained networks has become a widely adopted technique for reducing the parameter count of networks for fast inference on resource-constrained devices.  Low-rank methods rest on the assumption that the weight matrices of overparametrized neural networks have low rank, so a factorization of the weight layers based on truncated singular value decompositions can be employed to reduce the memory footprint of the network.  However, LoRA and its extensions face several challenges in practice, including the need for rank adaptivity, robustness, and computational efficiency during the fine-tuning process.  In this talk, Dr. Schotthoefer investigates mathematical concepts of low-rank training and uses the gained insights to design efficient and robust low-rank training algorithms.
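
The LoRA reparameterization itself is compact: the frozen pre-trained weight W is adapted as W + (alpha / r) * B A, and only the rank-r factors A and B are trained. A minimal PyTorch sketch, with illustrative hyperparameters:

    import torch

    class LoRALinear(torch.nn.Module):
        # Standard LoRA update: freeze the pre-trained linear layer and learn
        # only the low-rank correction (alpha / r) * B @ A on top of it.
        def __init__(self, base: torch.nn.Linear, r=8, alpha=16):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False                    # freeze pre-trained weights
            self.A = torch.nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = torch.nn.Parameter(torch.zeros(base.out_features, r))
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    layer = LoRALinear(torch.nn.Linear(512, 512))
    y = layer(torch.randn(4, 512))   # only A and B receive gradients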


Speaker’s Bio:

Dr. Steffen Schotthoefer is the current Householder Fellow in the Mathematics in Computation Section at the Oak Ridge National Laboratory (ORNL), affiliated with the Multiscale Methods and Dynamics Group.  Steffen's work centers on creating efficient numerical methods for training and fine-tuning artificial intelligence models in environments with limited resources and at large scales.  He investigates low-rank methods for model compression to minimize the computational cost of neural network training and inference.  In addition, Steffen develops neural network-based surrogate models for scientific domains such as radiation transport and plasma dynamics.  His research aims to tackle the challenges posed by memory and communication bottlenecks in large-scale simulations.  Prior to joining ORNL, Steffen completed his Ph.D. in Applied Mathematics at Karlsruhe Institute of Technology, Germany, focusing on neural network-based surrogate modeling for radiation transport.  During his doctoral studies, he devised numerical methods for the simulation of kinetic partial differential equations and neural network training, establishing the foundation for his current research.


Neural Network with Local Converging Input as Efficient Solver for Unstructured Computational Fluid Dynamics

Series
Applied and Computational Mathematics Seminar
Time
Monday, October 20, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Weiming Ding, Georgia Institute of Technology, School of Mathematics

This talk presents two recent advances in the Neural Network with Local Converging Inputs (NNLCI), a novel surrogate model for efficiently resolving nonlinear flow dynamics at modest computational cost.

First, a powerful and efficient technique is introduced to extend NNLCI to unstructured computational fluid dynamics. The framework is validated on two-dimensional inviscid supersonic flow in channels with varying bump geometries and positions. The NNLCI model accurately captures key flowfield structures and dynamics, including regions with highly nonlinear shock interactions, while achieving a speedup of more than two orders of magnitude.

Second, we conduct a comprehensive benchmark study comparing our method with current state-of-the-art AI-based PDE solvers. Across representative hyperbolic conservation law problems, NNLCI consistently delivers superior accuracy, efficiency, and robustness in resolving challenging sharp discontinuities and wave interactions. The work provides practical guidance for model selection in scientific machine learning applications.
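
The defining ingredient of NNLCI is its input construction: local patches of two inexpensive numerical solutions computed on successively refined grids, which the network maps to the converged solution at the patch center. The 1-D sketch below uses stand-in arrays for the two solver outputs and is only a schematic of that idea, not the unstructured-grid method of the talk.

    import numpy as np

    def nnlci_features(u_coarse, u_fine, i, width=2):
        # Concatenate local patches, centered at grid index i, of two solutions
        # from successively refined grids; this "local converging input" is what
        # the network maps to the converged solution at the patch center.
        patch = slice(i - width, i + width + 1)
        return np.concatenate([u_coarse[patch], u_fine[patch]])

    # stand-ins for two solver outputs on nested 1-D grids, both sampled on the
    # finer grid's 201 points (the coarse solution is interpolated up)
    x = np.linspace(0.0, 1.0, 201)
    xc = np.linspace(0.0, 1.0, 51)
    u_coarse = np.interp(x, xc, np.sin(2 * np.pi * xc))
    u_fine = np.sin(2 * np.pi * x)
    features = nnlci_features(u_coarse, u_fine, i=100)   # network input for one point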

Measure-theoretic approaches for uncertainty propagation

Series
Applied and Computational Mathematics Seminar
Time
Monday, October 13, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Li Wang, University of Minnesota

Uncertainty is ubiquitous: both data and physical models inherently contain uncertainty. Therefore, it is crucial to identify the sources of uncertainty and control its propagation over time. In this talk, I will introduce two approaches to address this uncertainty propagation problem—one for the inverse problem and one for the forward problem. The main idea is to work directly with probability measures, treating the underlying PDE as a pushforward map. In the inverse setting, we will explore various variational formulations, focusing on the characterization of minimizers and their stability. In the forward setting, we aim to propose a new approach to tackle high-dimensional uncertainties.
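
In the forward setting, the measure-theoretic viewpoint treats the model as a pushforward map. In its simplest Monte Carlo form (a toy stand-in for the approach above, with an assumed scalar "solver"), samples of the input measure are mapped through the solver to obtain samples of the output measure.

    import numpy as np

    def pushforward_samples(solver, input_samples):
        # Push an empirical input measure through the forward map: each sample
        # of the uncertain parameter is mapped by the solver, yielding samples
        # of the output (pushforward) distribution.
        return np.array([solver(k) for k in input_samples])

    solver = lambda k: np.exp(-k)   # toy forward map: u(1) for u' = -k u, u(0) = 1
    k_samples = np.random.default_rng(0).lognormal(mean=0.0, sigma=0.3, size=10_000)
    u_samples = pushforward_samples(solver, k_samples)
    print(u_samples.mean(), u_samples.std())   # statistics of the output measure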

High-Order Spectral Difference Method for Ducted Wind Turbine Aerodynamics and Solar Magnetohydrodynamics

Series
Applied and Computational Mathematics Seminar
Time
Monday, September 29, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Chunlei Liang, Clarkson University

This talk highlights two recent advances in applying the high-order spectral difference (SD) method for computational fluid dynamics on unstructured meshes. The first is a novel curved sliding-mesh technique for the SD method, enabling accurate simulations of rotary-wing aerodynamics. Recent applications include large eddy simulations of marine propellers and ducted wind turbines. The second is the development of a massively parallel code, CHORUS++, designed for Nvidia GPUs to study magnetohydrodynamics in the solar interior. From a computational mathematics standpoint, Dr. Liang also introduces the spectral difference with divergence cleaning (SDDC) algorithm, which addresses the solenoidal constraint on magnetic fields, particularly in the presence of physical boundaries on 3D unstructured grids.
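
To make "divergence cleaning" concrete, the sketch below applies the classical projection variant on a periodic 2-D grid, spectrally removing the gradient part of B so that div B vanishes; the SDDC scheme of the talk is a different, high-order construction for unstructured grids with boundaries, so this is only an illustrative stand-in.

    import numpy as np

    def clean_divergence(Bx, By):
        # Projection-based cleaning on a periodic grid: solve laplacian(phi) = div B
        # spectrally and subtract grad(phi), so the corrected field satisfies
        # div B = 0 up to round-off.
        n = Bx.shape[0]
        k = 2 * np.pi * np.fft.fftfreq(n)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        div_hat = 1j * kx * np.fft.fft2(Bx) + 1j * ky * np.fft.fft2(By)
        k2 = kx ** 2 + ky ** 2
        k2[0, 0] = 1.0                    # the mean mode has zero divergence anyway
        phi_hat = -div_hat / k2
        Bx = Bx - np.real(np.fft.ifft2(1j * kx * phi_hat))
        By = By - np.real(np.fft.ifft2(1j * ky * phi_hat))
        return Bx, By

    rng = np.random.default_rng(1)
    Bx, By = clean_divergence(rng.normal(size=(64, 64)), rng.normal(size=(64, 64)))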
