Seminars and Colloquia by Series

Mathematical theory of structured deep neural networks

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 28, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Ding-Xuan Zhou, School of Mathematics and Statistics, University of Sydney, Australia

Deep learning has been widely applied and has brought breakthroughs in speech recognition, computer vision, natural language processing, and many other domains. The deep neural network architectures involved, and the associated computational issues, have been well studied in machine learning. But there is much less theoretical understanding of the modelling, approximation, and generalization abilities of deep learning models built on these architectures. An important family of structured deep neural networks is the family of deep convolutional neural networks (CNNs), induced by convolutions. The convolutional architecture creates essential differences between deep CNNs and fully-connected neural networks, so the classical approximation theory for fully-connected networks, developed around 30 years ago, does not apply. This talk describes approximation and generalization analysis of deep CNNs and related structured deep neural networks.
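The structural difference can be made concrete: a 1-D convolutional layer acts by a sparse Toeplitz matrix generated by a short filter, whereas a fully-connected layer applies a dense, unconstrained matrix. A minimal sketch (the sizes and filter values are illustrative, not from the talk):

```python
import numpy as np

# Hypothetical sizes for illustration: input width d, filter length s.
d, s = 8, 3
w = np.array([1.0, -2.0, 0.5])        # convolution filter (s parameters)
T = np.zeros((d + s - 1, d))          # Toeplitz matrix induced by the filter
for i in range(d + s - 1):
    for j in range(d):
        if 0 <= i - j < s:
            T[i, j] = w[i - j]

x = np.arange(d, dtype=float)
# The structured layer T @ x equals a full convolution of x with w,
# using only s trainable weights instead of d * (d + s - 1) for a dense layer.
assert np.allclose(T @ x, np.convolve(x, w))
print(T.shape, w.size)
```

This is the sense in which a deep CNN is "induced by convolutions": its weight matrices are constrained to this sparse Toeplitz form, so fully-connected approximation results do not transfer directly.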

An energy-stable machine-learning model of non-Newtonian hydrodynamics with molecular fidelity

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 7, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Huan Lei, Michigan State University

One essential challenge in the computational modeling of multiscale systems is the availability of reliable and interpretable closures that faithfully encode the micro-dynamics. For systems without clear scale separation, there generally exists no simple set of macro-scale field variables that allows us to project and predict the dynamics in a self-determined way. We introduce a machine-learning (ML) based approach that reduces high-dimensional multiscale systems to reliable macro-scale models with low-dimensional variational structures that preserve canonical degeneracies and symmetry constraints. The non-Newtonian hydrodynamics of polymeric fluids is used as an example to illustrate the essential idea. Unlike conventional ML modeling, which focuses on learning the PDE form, the present approach directly learns the energy variational structure from the micro-model through an end-to-end process via the joint learning of a set of micro-macro encoder functions. The final model, named the deep non-Newtonian model (DeePN2), retains a multi-scale nature with clear physical interpretation and strictly preserves the frame-indifference constraints. We show that DeePN2 can capture the broadly overlooked viscoelastic differences arising from the specific molecular structural mechanics without human intervention.

Latent neural dynamics for fast data assimilation with sparse observations

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 31, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Peng Chen, Georgia Tech CSE

Data assimilation techniques are crucial for correcting trajectories when modeling complex dynamical systems. The Latent Ensemble Score Filter (Latent-EnSF), our recently developed data assimilation method, has shown great promise in high-dimensional and nonlinear data assimilation problems with sparse observations. However, this method faces the challenge of high computational cost due to the expensive forward simulation. In this talk, we present Latent Dynamics EnSF (LD-EnSF), a novel methodology that evolves the neural dynamics in a low-dimensional latent space and significantly accelerates the data assimilation process.


To achieve this, we introduce a novel variant of Latent Dynamics Networks (LDNets) to effectively capture the system's dynamics within a low-dimensional latent space. Additionally, we propose a new method for encoding sparse observations into the latent space using recurrent neural networks. We demonstrate the robustness, accuracy, and efficiency of the proposed methods, and discuss their limitations, on complex dynamical systems with highly sparse (in both space and time) and noisy observations, including shallow water wave propagation for tsunami modeling, FourCastNet for numerical weather prediction, and Kolmogorov flow, which exhibits chaotic and turbulent phenomena.
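The general latent-assimilation pattern can be illustrated with a toy sketch. Everything here is a stand-in (a linear PCA encoder instead of an LDNet, a stochastic ensemble Kalman update instead of a score filter, an advection toy model instead of an expensive simulator), so it shows only the workflow, not the authors' LD-EnSF:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128                                         # full-state grid size
xg = np.linspace(0, 1, n, endpoint=False)
def step(u): return np.roll(u, 2)               # "expensive" full model: advection

# Snapshots of the full model; linear PCA plays the role of the encoder/decoder
u = np.exp(-100 * (xg - 0.2) ** 2)
snaps = []
for _ in range(60):
    snaps.append(u)
    u = step(u)
S = np.array(snaps)
_, _, Vt = np.linalg.svd(S, full_matrices=False)
k = 20
D = Vt[:k].T                                    # linear decoder: u ≈ z @ D.T
Z = S @ D                                       # latent trajectories

# Cheap latent dynamics fitted by least squares: z_{t+1} ≈ A z_t
A = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)[0].T

obs_idx = np.array([10, 35, 60, 90, 115])       # 5 sparse, noisy sensors
C = D[obs_idx]                                  # observation operator in latent coords
R = 0.01 ** 2 * np.eye(obs_idx.size)

# Assimilate entirely in the latent space (stochastic ensemble Kalman filter)
Ne = 50
ens = Z[0] + 0.1 * rng.standard_normal((Ne, k))
truth = S[0]
for _ in range(30):
    truth = step(truth)                         # reality advances
    ens = ens @ A.T                             # cheap latent forecast
    y = truth[obs_idx] + 0.01 * rng.standard_normal(obs_idx.size)
    P = np.cov(ens.T)
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    perturbed = y + 0.01 * rng.standard_normal((Ne, obs_idx.size))
    ens = ens + (perturbed - ens @ C.T) @ K.T
err = np.max(np.abs(ens.mean(0) @ D.T - truth)) # small relative to the bump amplitude 1
print(err)
```

The point of the design is that the forward model is never called inside the filter loop for the ensemble: all forecasting and updating happens in the k-dimensional latent space.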

From Theory to Practice: Mathematical Approaches to Scientific Machine Learning

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 10, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Yeonjong Shin, NCSU

Machine learning (ML) has achieved unprecedented empirical success in diverse applications. It has now been applied to solve scientific and engineering problems, giving rise to an emerging field, Scientific Machine Learning (SciML). However, many ML techniques are highly complex and sophisticated, often requiring extensive trial-and-error experimentation and specialized problem-dependent tricks to implement effectively. This complexity frequently leads to significant challenges for scientific research, such as reproducibility and rigor. This talk explores mathematical approaches offering more principled and reliable methodologies in SciML. The first part will present recent efforts advancing the predictive power of physics-informed machine learning through robust training/optimization methods, including an effective training method for multivariate neural networks, namely Active Neuron Least Squares (ANLS), and a two-step training method for deep operator networks. The second part is about how to embed the first principles of physics into neural networks. I will present a general framework for designing NNs that obey the first and second laws of thermodynamics. The framework not only provides flexible ways of leveraging available physics information but also results in expressive NN architectures. I will also present an intriguing phenomenon of this framework when it is applied in the context of latent space dynamics identification, where a correlation appears between the entropy production rate in the latent space and the behavior of the full-state solution.

Unsupervised Solution Operator Learning for Mean-Field Games

Series
Applied and Computational Mathematics Seminar
Time
Friday, March 7, 2025 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 006 and https://gatech.zoom.us/j/98355006347
Speaker
Rongjie Lai, Purdue University

Recent advances in deep learning have introduced numerous innovative frameworks for solving high-dimensional mean-field games (MFGs). However, these methods are often limited to solving single-instance MFGs and require extensive computational time for each instance, presenting challenges for practical applications.

In this talk, I will present our recent work on a novel framework for learning the MFG solution operator. Our model takes MFG instances as input and directly outputs their solutions in a single forward pass, significantly improving computational efficiency. Our method offers two key advantages: (1) it is discretization-free, making it particularly effective for high-dimensional MFGs, and (2) it can be trained without requiring supervised labels, thereby reducing the computational burden of preparing training datasets common in existing operator learning methods. If time permits, I will also explore connections between this framework and in-context learning, highlighting its broader implications and potential for further advancements.


Modeling, analysis, and control of droplet dynamics

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 3, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Hangjie Ji, North Carolina State University

Thin liquid films flowing down vertical fibers spontaneously exhibit complex interfacial dynamics, leading to irregular wavy patterns and traveling liquid droplets. Such droplet dynamics are fundamental components in many engineering applications, including mass and heat exchangers for thermal desalination, as well as water vapor and particle capture. Recent experiments demonstrate that critical flow regime transitions can be triggered by varying inlet geometries and external fields. Similar interacting droplet dynamics have also been observed on hydrophobic substrates, arising from interfacial instabilities in volatile liquid films. In this talk, I will describe lubrication and weighted residual models for falling droplets. The coarsening dynamics of condensing droplets will be discussed using a lubrication model. I will also present our recent results on developing optimal boundary control and mean-field control for droplet dynamics. 


The weak form is stronger than you think

Series
Applied and Computational Mathematics Seminar
Time
Monday, February 24, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Daniel Messenger, Los Alamos National Laboratory (LANL)

Equation learning has been a holy grail of scientific research for decades. Only recently has learning equations directly from data become computationally feasible, due to the availability of high-resolution data and fast algorithms capable of overcoming the inherent combinatorial complexity of most model classes. Weak form equation learning has arisen as an advantageous framework for efficiently selecting models from data with noise and nonsmoothness, qualities inherent to observed data. By viewing the dynamics through the lens of test functions, the weak form affords a flexible representation of the governing equations that naturally accommodates these data maladies. More generally, the weak form has been shown to reveal alternative dynamical descriptions, such as coarse-grained and reduced-order models, opening the door to hierarchical model discovery. In this talk I will give a broad overview of historical advances in weak form equation learning and parameter inference, from the 1950s to WSINDy and more recent algorithms. I will then give an outlook for future research directions in this field, in light of now-known computational limitations and recently demonstrated successes, both theoretical and applied, with applications to molecular dynamics, plasma physics, cell biology, and weather forecasting.
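The mechanism can be made concrete with a minimal weak-form parameter-inference sketch, far simpler than WSINDy but in the same spirit: for x'(t) = -λ x(t), multiplying by a compactly supported test function φ and integrating by parts gives ∫φ'x dt = λ∫φx dt, so λ is recovered by least squares without ever differentiating the noisy data. All choices below (test functions, window sizes, noise level) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 501)
dt = t[1] - t[0]
x = np.exp(-2.0 * t) + 0.01 * rng.standard_normal(t.size)   # noisy data, true lambda = 2

# Test functions: bump (1 - s^2)^2 on K sliding windows of w samples each.
# Integration by parts: ∫ phi' x dt = lambda ∫ phi x dt (boundary terms vanish).
K, w = 40, 100
s = np.linspace(-1, 1, w)                            # local window coordinate
phi = (1 - s ** 2) ** 2                              # vanishes at window endpoints
dphi = -4 * s * (1 - s ** 2) * 2 / ((w - 1) * dt)    # d(phi)/dt via the chain rule
b = np.empty(K); a = np.empty(K)
for i, start in enumerate(np.linspace(0, t.size - w, K).astype(int)):
    seg = x[start:start + w]
    b[i] = dphi @ seg * dt                           # ≈ ∫ phi' x dt
    a[i] = phi @ seg * dt                            # ≈ ∫ phi  x dt
lam = np.linalg.lstsq(a[:, None], b, rcond=None)[0][0]
print(lam)                                           # close to 2 despite the noise
```

Because the derivative is moved onto the smooth test function, the noise is only ever integrated, which is the source of the weak form's robustness mentioned above.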


Introduction to reservoir computing

Series
Applied and Computational Mathematics Seminar
Time
Monday, February 10, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Yunho Kim, UNIST, Korea

Reservoir computing is a branch of neuromorphic computing, usually realized in the form of echo state networks (ESNs). In this talk, I will present some fundamentals of reservoir computing from both the mathematical and the computational points of view. While reservoir computing was designed for sequential/time-series data, we recently observed its strong performance on static image data once the reservoir is set up to process certain image features rather than the images themselves. Building on this observation, I will discuss possible applications and open questions in reservoir computing.
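A minimal ESN sketch illustrates the setup (the reservoir size, spectral radius, and one-step sine-prediction task are illustrative choices, not from the talk): the recurrent reservoir is random and fixed, and only a linear readout is trained, here by ridge regression:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200                                               # reservoir size
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # spectral radius 0.9 (echo state property)
W_in = rng.uniform(-0.5, 0.5, N)                      # fixed random input weights

t = np.arange(3000) * 0.05
u = np.sin(t)                                         # input signal
target = np.sin(t + 0.05)                             # task: one-step-ahead prediction

# Drive the fixed reservoir and record its states
r = np.zeros(N)
states = np.empty((t.size, N))
for i in range(t.size):
    r = np.tanh(W @ r + W_in * u[i])
    states[i] = r

# Train only the linear readout by ridge regression (first 100 steps are washout)
washout, split = 100, 2000
Xtr, ytr = states[washout:split], target[washout:split]
W_out = np.linalg.solve(Xtr.T @ Xtr + 1e-6 * np.eye(N), Xtr.T @ ytr)

pred = states[split:] @ W_out                         # evaluate on held-out steps
rmse = np.sqrt(np.mean((pred - target[split:]) ** 2))
print(rmse)                                           # small one-step prediction error
```

Note that the current input value alone does not determine the next sine value; the reservoir's fading memory of past inputs is what disambiguates the phase, which is the core idea of the approach.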

Georgia Scientific Computing Symposium

Series
Applied and Computational Mathematics Seminar
Time
Saturday, February 8, 2025 - 08:45 for 8 hours (full day)
Location
Clough 144
Speaker

The Georgia Scientific Computing Symposium (GSCS) is a forum for professors, postdocs, graduate students and other researchers in Georgia to meet in an informal setting, to exchange ideas, and to highlight local scientific computing research. Established in 2009, this annual symposium welcomes participants from the broader research community. The event features a day-long program of invited talks, lightning presentations and ample opportunities for networking and collaboration.  Please check this year's information at https://wliao60.math.gatech.edu/2025GSCS.html

Advances in Probabilistic Generative Modeling for Scientific Machine Learning

Series
Applied and Computational Mathematics Seminar
Time
Monday, February 3, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005, and https://gatech.zoom.us/j/94954654170
Speaker
Dr. Fei Sha, Google Research

Please Note: Speaker will present in person

Leveraging large-scale data and systems of computing accelerators, statistical learning has led to significant paradigm shifts in many scientific disciplines. Grand challenges in science have been tackled through an exciting synergy between disciplinary science, physics-based simulations via high-performance computing, and powerful learning methods.

In this talk, I will describe several vignettes of our research on modeling complex dynamical systems characterized by partial differential equations with turbulent solutions. I will demonstrate how machine learning technologies, especially advances in generative AI, are effectively applied to address the computational and modeling challenges in such systems, exemplified by successful applications to weather forecasting and climate projection. I will also discuss the new challenges and opportunities this brings to future machine learning research.

The research presented in this talk is based on joint, interdisciplinary work by several teams at Google Research, ETH, and Caltech.


Bio: Dr. Fei Sha is currently a research scientist at Google Research, where he leads a team of scientists and engineers working on scientific machine learning, with a specific application focus on AI for weather and climate. He was a full professor and the Zohrab A. Kaprielian Fellow in Engineering at the Department of Computer Science, University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing, computer vision, robotics and, recently, scientific computing, dynamical systems, weather forecasting and climate modeling. Dr. Sha was selected as an Alfred P. Sloan Research Fellow in 2013 and won an Army Research Office Young Investigator Award in 2012. He holds a Ph.D. in Computer and Information Science from the University of Pennsylvania and a B.Sc. and M.Sc. from Southeast University (Nanjing, China). More information about Dr. Sha's scholastic activities can be found at his microsite at http://feisha.org.
