Seminars and Colloquia by Series

Introduction to reservoir computing

Series
Applied and Computational Mathematics Seminar
Time
Monday, February 10, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Yunho Kim, UNIST, Korea

Reservoir computing is a branch of neuromorphic computing, usually realized in the form of echo state networks (ESNs). In this talk, I will present some fundamentals of reservoir computing from both the mathematical and the computational points of view. While reservoir computing was designed for sequential/time-series data, we recently observed its strong performance on static image data once the reservoir is set to process certain image features rather than the images themselves. I will then discuss possible applications and open questions in reservoir computing.
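For orientation, here is a minimal echo state network sketch in Python (NumPy). The dimensions, leak rate, spectral radius, and the toy one-step-ahead prediction task are illustrative choices, not parameters from the talk; the key point is that the input and reservoir weights stay fixed and only the linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (not from the talk).
n_in, n_res = 1, 200
leak, rho, ridge = 0.3, 0.9, 1e-6

# Fixed random input and reservoir weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to rho

def run_reservoir(u_seq):
    """Collect reservoir states under the leaky-tanh update."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train the readout by ridge regression on a toy task:
# one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)
X = run_reservoir(u[:-1])
y = u[1:]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```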

Advances in Probabilistic Generative Modeling for Scientific Machine Learning

Series
Applied and Computational Mathematics Seminar
Time
Monday, February 3, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Dr. Fei Sha, Google Research

Please Note: Speaker will present in person

Leveraging large-scale data and systems of computing accelerators, statistical learning has led to significant paradigm shifts in many scientific disciplines. Grand challenges in science have been tackled through an exciting synergy between disciplinary science, physics-based simulations via high-performance computing, and powerful learning methods.

In this talk, I will describe several vignettes of our research on modeling complex dynamical systems characterized by partial differential equations with turbulent solutions. I will demonstrate how machine learning technologies, especially advances in generative AI, can be applied effectively to the computational and modeling challenges in such systems, as exemplified by successful applications to weather forecasting and climate projection. I will also discuss the new challenges and opportunities this brings to future machine learning research.

The research presented in this talk is based on joint, interdisciplinary work by several teams at Google Research, ETH, and Caltech.


Bio: Dr. Fei Sha is currently a research scientist at Google Research, where he leads a team of scientists and engineers working on scientific machine learning, with a specific application focus on AI for weather and climate. He was a full professor and the Zohrab A. Kaprielian Fellow in Engineering at the Department of Computer Science, University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing, computer vision, robotics, and, more recently, scientific computing, dynamical systems, weather forecasting, and climate modeling. Dr. Sha was selected as an Alfred P. Sloan Research Fellow in 2013 and won an Army Research Office Young Investigator Award in 2012. He has a Ph.D. in Computer and Information Science from the University of Pennsylvania and a B.Sc. and M.Sc. from Southeast University (Nanjing, China). More information about Dr. Sha's scholastic activities can be found at his microsite at http://feisha.org.

From centralized to federated learning of neural operators: Accuracy, efficiency, and reliability

Series
Applied and Computational Mathematics Seminar
Time
Monday, January 27, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Lu Lu, Yale University

As an emerging paradigm in scientific machine learning, deep neural operators, pioneered by us, can learn nonlinear operators of complex dynamical systems via neural networks. In this talk, I will present the deep operator network (DeepONet), which learns operators representing deterministic and stochastic differential equations. I will also present several extensions of DeepONet, such as DeepM&Mnet for multiphysics problems, DeepONet with proper orthogonal decomposition or Fourier decoder layers, MIONet for multiple-input operators, and multifidelity DeepONet. I will demonstrate the effectiveness of DeepONet and its extensions on diverse multiphysics and multiscale problems, such as bubble growth dynamics, high-speed boundary layers, electroconvection, hypersonics, geological carbon sequestration, full waveform inversion, and astrophysics. Deep learning models are usually limited to interpolation scenarios; I will quantify the complexity of extrapolation and develop a complete workflow to address this challenge for deep neural operators. Moreover, I will present the first operator learning method that requires only one PDE solution (one-shot learning), by introducing a new concept of the local solution operator based on the principle of locality of PDEs. Finally, I will present the first systematic study of federated scientific machine learning (FedSciML) for approximating functions and solving PDEs with data heterogeneity.
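As a rough sketch of the basic architecture (an unstacked DeepONet; the layer sizes and sensor count are illustrative, not those used in the talk), a branch net encodes the input function sampled at fixed sensor locations, a trunk net encodes the query point, and the operator output is their inner product:

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal unstacked DeepONet: G(u)(y) ~ <branch(u), trunk(y)> + bias."""
    def __init__(self, n_sensors=100, p=64):
        super().__init__()
        # Branch net encodes the input function u sampled at fixed sensors.
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, 128), nn.Tanh(), nn.Linear(128, p))
        # Trunk net encodes the query location y of the output function.
        self.trunk = nn.Sequential(
            nn.Linear(1, 128), nn.Tanh(), nn.Linear(128, p))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, y):
        b = self.branch(u_sensors)          # (batch, p)
        t = self.trunk(y)                   # (batch, p)
        return (b * t).sum(-1, keepdim=True) + self.bias

model = DeepONet()
u = torch.randn(32, 100)   # 32 input functions sampled at 100 sensors
y = torch.rand(32, 1)      # one query point per function
print(model(u, y).shape)   # torch.Size([32, 1])
```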

Leveraging low-dimensional structures in structure-preserving machine learning for dynamical systems

Series
Applied and Computational Mathematics Seminar
Time
Monday, December 9, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Qi Tang, Georgia Tech CSE

In this talk I will discuss our recent effort to develop structure-preserving machine learning (ML) for time-series data, focusing on both dissipative PDEs and singularly perturbed ODEs. The first part presents a data-driven modeling method that accurately captures shocks and chaotic dynamics through a stabilized neural ODE framework. We learn the right-hand side of an ODE as the sum of the outputs of two networks, one learning a linear term and the other a nonlinear term. The architecture is inspired by the inertial manifold theorem. We apply this method to chaotic trajectories of the Kuramoto-Sivashinsky equation, where our model keeps long-term trajectories on the attractor and remains robust to noisy initial conditions. The second part explores structure-preserving ML for singularly perturbed dynamical systems. A powerful tool for such systems is the Fenichel normal form, which significantly simplifies the fast dynamics near slow manifolds. I will discuss a novel realization of this concept using ML: a fast-slow neural network (FSNN) that enforces the existence of a trainable, attracting invariant slow manifold as a hard constraint. To illustrate the power of the FSNN, I will show a fusion-motivated example where traditional numerical integrators all fail.
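A minimal sketch of the two-network right-hand side, assuming a PyTorch setup and an explicit RK4 rollout for training on trajectory data; the specific stabilization constraints on the linear term discussed in the talk are not reproduced here.

```python
import torch
import torch.nn as nn

class LinearPlusNonlinearRHS(nn.Module):
    """Right-hand side f(x) = A x + N(x): a trainable linear part plus a
    nonlinear network, echoing the inertial-manifold-inspired split."""
    def __init__(self, dim, width=64):
        super().__init__()
        self.A = nn.Linear(dim, dim, bias=False)        # linear term
        self.N = nn.Sequential(                         # nonlinear term
            nn.Linear(dim, width), nn.GELU(), nn.Linear(width, dim))

    def forward(self, x):
        return self.A(x) + self.N(x)

def rk4_step(f, x, h):
    """One explicit RK4 step used to roll the neural ODE forward."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

rhs = LinearPlusNonlinearRHS(dim=16)
x = torch.randn(8, 16)
x_next = rk4_step(rhs, x, h=0.01)  # training would match rollouts to data
```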

Stability of explicit integrators on Riemannian manifolds

Series
Applied and Computational Mathematics Seminar
Time
Monday, December 2, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Klaus 2443 and https://gatech.zoom.us/j/94954654170
Speaker
Brynjulf Owren, Norwegian University of Science and Technology

Please Note: Special Location

In this talk, I will discuss some very recent results on non-expansive numerical integrators on Riemannian manifolds.
 
We shall focus on the mathematical results, but the work is motivated by neural network architectures applied to manifold-valued data and by some recent activity in the simulation of slender structures in mechanical engineering. In Arnold et al. (2024), we proved that when applied to non-expansive continuous models, the Geodesic Implicit Euler method is non-expansive for all stepsizes when the manifold has non-positive sectional curvature. Disappointing counterexamples showed that this cannot hold in general for positively curved spaces. In the last few weeks, we have considered the Geodesic Explicit Euler method applied to non-expansive systems on manifolds of constant sectional curvature. In this case, we have proved upper bounds on the stepsize for which the Euler scheme is non-expansive.
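For concreteness, here is a sketch of one Geodesic Explicit Euler step on the unit sphere, a constant positive-curvature case where the exponential map has the closed form exp_p(w) = cos(|w|) p + sin(|w|) w/|w|. The vector field below is an illustrative non-expansive example (a rigid rotation), not one from the paper.

```python
import numpy as np

def geodesic_euler_step(y, f, h):
    """One Geodesic Explicit Euler step on S^2: y_next = exp_y(h f(y))."""
    v = f(y)
    v = v - np.dot(v, y) * y          # project onto the tangent space at y
    nv = np.linalg.norm(h * v)
    if nv < 1e-14:
        return y
    return np.cos(nv) * y + np.sin(nv) * (h * v) / nv

# Toy non-expansive vector field: rotation about the z-axis.
omega = np.array([0.0, 0.0, 1.0])
f = lambda y: np.cross(omega, y)

y = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    y = geodesic_euler_step(y, f, h=0.05)
print(y, np.linalg.norm(y))  # stays on the sphere up to rounding
```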
 
Reference
Martin Arnold, Elena Celledoni, Ergys Çokaj, Brynjulf Owren, and Denise Tumiotto. B-stability of numerical integrators on Riemannian manifolds. J. Comput. Dyn., 11(1):92-107, 2024. doi: 10.3934/jcd.2024002

Efficient, Robust, and Agnostic Generative Modeling with Group Symmetry and Regularized Divergences

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 25, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Ziyu Chen, University of Massachusetts Amherst

In this talk, I will discuss our recent theoretical advances in generative modeling. The first part of the presentation will focus on learning distributions with symmetry. I will present results on the sample complexity of empirical estimates of probability divergences for group-invariant distributions, together with performance guarantees for GANs and score-based generative models that incorporate symmetry. Notably, I will offer the first quantitative comparison between data augmentation and directly embedding symmetry into models, highlighting the latter as a more fundamental approach to efficient learning. These findings underscore how incorporating symmetry into generative models can significantly enhance learning efficiency, particularly in data-limited scenarios. The second part will cover $\alpha$-divergences with Wasserstein-1 regularization, which can be interpreted as $\alpha$-divergences whose variational form is constrained to Lipschitz test functions. I will demonstrate how, using these divergences as objective functionals, generative learning can be made agnostic to assumptions about target distributions, including those with heavy tails or low-dimensional and fractal supports. I will outline conditions for the finiteness of these divergences under minimal assumptions on the target distribution, along with their variational derivatives and the associated gradient-flow formulation. This framework provides guarantees for various machine learning algorithms that optimize over this class of divergences.
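To illustrate the contrast between data augmentation and embedding symmetry into the model, here is a toy construction (not the speaker's) that makes a network exactly invariant under the C4 rotation group by averaging over the group orbit; data augmentation, by contrast, only encourages such invariance statistically during training.

```python
import torch
import torch.nn as nn

class C4InvariantNet(nn.Module):
    """Average a base network over C4 rotations: exact group invariance."""
    def __init__(self, base: nn.Module):
        super().__init__()
        self.base = base

    def forward(self, x):  # x: (batch, C, H, W)
        outs = [self.base(torch.rot90(x, k, dims=(-2, -1))) for k in range(4)]
        return torch.stack(outs).mean(0)

base = nn.Sequential(nn.Flatten(), nn.Linear(1 * 8 * 8, 32), nn.ReLU(),
                     nn.Linear(32, 1))
net = C4InvariantNet(base)
x = torch.randn(5, 1, 8, 8)
# The output is unchanged under a 90-degree rotation of the input.
print(torch.allclose(net(x), net(torch.rot90(x, 1, dims=(-2, -1))), atol=1e-5))
```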

Mathematical and Numerical Understanding of Neural Networks: From Representation to Learning Dynamics

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 18, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Hongkai Zhao, Duke University

In this talk I will present mathematical and numerical analysis, as well as experiments, studying a few basic computational issues in using neural networks to approximate functions: (1) stability and accuracy, (2) learning dynamics and computational cost, and (3) structured and balanced approximation. These issues are investigated for both approximation and optimization, in asymptotic and non-asymptotic regimes.
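As a small numerical probe of issue (1), one can examine the conditioning of least-squares approximation over a fixed random ReLU feature basis; the sizes below are arbitrary, and the snippet is only meant to show how ill-conditioned such fits can be, not to reproduce the talk's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 100                        # samples, neurons (illustrative sizes)
x = np.linspace(-1, 1, n)[:, None]
W, b = rng.standard_normal((1, m)), rng.standard_normal(m)
Phi = np.maximum(x @ W + b, 0.0)       # ReLU features phi_j(x)
print("cond(Phi):", np.linalg.cond(Phi))  # typically large -> unstable fit

y = np.sin(np.pi * x).ravel()
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("fit residual:", np.linalg.norm(Phi @ c - y))
```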

Damped Proximal Augmented Lagrangian Method for Weakly Convex Problems with Convex Constraints

Series
Applied and Computational Mathematics Seminar
Time
Wednesday, November 13, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Klaus 2443 and https://gatech.zoom.us/j/94954654170
Speaker
Yangyang Xu, Rensselaer Polytechnic Institute

In this talk, I will present a damped proximal augmented Lagrangian method (DPALM) for solving problems with a weakly convex objective and convex linear/nonlinear constraints. Instead of taking a full dual stepsize, DPALM adopts a damped dual stepsize. DPALM can produce a (near) $\epsilon$-KKT point within $O(\epsilon^{-2})$ outer iterations if each DPALM subproblem is solved to a proper accuracy. In addition, I will show the overall iteration complexity of DPALM when the objective is either a regularized smooth function or in a regularized compositional form. For the former case, DPALM achieves a complexity of $O(\epsilon^{-2.5})$ to produce an $\epsilon$-KKT point by applying an accelerated proximal gradient (APG) method to each DPALM subproblem. For the latter case, the complexity of DPALM is $O(\epsilon^{-3})$ to produce a near $\epsilon$-KKT point by using an APG method to solve a Moreau-envelope-smoothed version of each subproblem. Our outer iteration complexity and overall complexity either generalize existing best results from unconstrained or linearly constrained problems to convex-constrained ones, or improve over the best-known results for problems with the same structure. Furthermore, numerical experiments on linearly/quadratically constrained nonconvex quadratic programs and linearly constrained robust nonlinear least squares demonstrate the empirical efficiency of the proposed DPALM over several state-of-the-art methods.
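A minimal sketch of the damped dual update on a toy equality-constrained quadratic program, with plain gradient steps standing in for the paper's APG subproblem solver and with the proximal term omitted; all names and parameter values are illustrative, not the paper's.

```python
import numpy as np

# Toy problem: min_x 0.5 x^T Q x  s.t.  A x = b, with Q positive definite.
rng = np.random.default_rng(0)
n, m = 20, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
G = rng.standard_normal((n, n))
Q = G.T @ G + np.eye(n)

f_grad = lambda x: Q @ x
x, lam = np.zeros(n), np.zeros(m)
rho, theta = 10.0, 0.5                   # penalty and dual damping in (0, 1)

for _ in range(200):                     # outer (augmented Lagrangian) loop
    for _ in range(50):                  # inner subproblem: gradient steps
        g = f_grad(x) + A.T @ (lam + rho * (A @ x - b))
        x -= 1e-3 * g
    lam += theta * rho * (A @ x - b)     # damped dual update (theta < 1)

print("constraint violation:", np.linalg.norm(A @ x - b))
```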

Regularized Stein Variational Gradient Flow

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 11, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Ye He, Georgia Tech

The Stein variational gradient descent (SVGD) algorithm is a deterministic particle method for sampling. However, a mean-field analysis reveals that the gradient flow corresponding to the SVGD algorithm (i.e., the Stein Variational Gradient Flow) only provides a constant-order approximation to the Wasserstein gradient flow corresponding to KL-divergence minimization. In this work, we propose the Regularized Stein Variational Gradient Flow, which interpolates between the Stein Variational Gradient Flow and the Wasserstein gradient flow. We establish various theoretical properties of the Regularized Stein Variational Gradient Flow (and its time discretization), including convergence to equilibrium, existence and uniqueness of weak solutions, and stability of the solutions. We provide preliminary numerical evidence of the improved performance offered by the regularization.
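For reference, one vanilla SVGD update with an RBF kernel and the median-bandwidth heuristic is sketched below; the regularized gradient flow proposed in the talk modifies this drift, and that modification is not reproduced here.

```python
import numpy as np

def svgd_step(x, grad_log_p, eps=0.1):
    """One SVGD update: kernel-weighted scores plus a repulsion term."""
    n, d = x.shape
    diffs = x[:, None, :] - x[None, :, :]          # diffs[j, i] = x_j - x_i
    sq = (diffs ** 2).sum(-1)
    h = np.median(sq) / max(np.log(n), 1e-8)       # median heuristic
    K = np.exp(-sq / (h + 1e-12))                  # kernel matrix
    grad_K = -2.0 / (h + 1e-12) * diffs * K[..., None]
    phi = (K @ grad_log_p(x) + grad_K.sum(0)) / n  # kernelized drift
    return x + eps * phi

# Toy target: standard Gaussian, so grad log p(x) = -x.
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 2)) * 3 + 5
for _ in range(500):
    x = svgd_step(x, lambda z: -z)
print(x.mean(0), x.std(0))   # particles move toward N(0, I)
```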

Interpretable machine learning with governing law discovery

Series
Applied and Computational Mathematics Seminar
Time
Monday, October 28, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Mars Gao, University of Washington

Spatio-temporal modeling of real-world data presents significant challenges due to high dimensionality, noisy measurements, and limited data. In this talk, we introduce two frameworks that jointly solve the problems of sparse identification of governing equations and latent-space reconstruction: the Bayesian SINDy autoencoder and SINDy-SHRED. The Bayesian SINDy autoencoder leverages a spike-and-slab prior to enable robust discovery of governing equations and latent coordinate systems, providing uncertainty estimates in low-data, high-noise settings. In our experiments, we applied the Bayesian SINDy autoencoder to real video data, marking the first example of learning governing equations directly from such data. The framework successfully identified underlying physical laws, such as accurately estimating physical constants like the gravitational acceleration from pendulum videos, even in the presence of noise and limited samples.
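For background, the sparse regression at the core of SINDy is sequentially thresholded least squares (STLSQ) over a library of candidate terms; the Bayesian autoencoder replaces this with spike-and-slab inference, which is not shown here. Below is a toy recovery of a damped-pendulum system from simulated data; the library, threshold, and system are illustrative.

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.05, n_iters=10):
    """Sequentially thresholded least squares (the classic SINDy solver)."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):      # refit surviving terms per state
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                             rcond=None)[0]
    return Xi

# Simulate theta' = omega, omega' = -sin(theta) - 0.1 omega (explicit Euler).
t = np.linspace(0, 10, 2000)
dt = t[1] - t[0]
X = np.zeros((len(t), 2))
X[0] = [1.0, 0.0]
for i in range(len(t) - 1):
    th, om = X[i]
    X[i + 1] = [th + dt * om, om + dt * (-np.sin(th) - 0.1 * om)]
dXdt = np.gradient(X, dt, axis=0)

# Candidate library: [1, theta, omega, sin(theta)].
Theta = np.column_stack([np.ones(len(t)), X[:, 0], X[:, 1], np.sin(X[:, 0])])
print(stlsq(Theta, dXdt))  # nonzero entries recover the governing equations
```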

 

In parallel, SINDy-SHRED integrates gated recurrent units (GRUs) with a shallow decoder network to model temporal sequences and reconstruct full spatio-temporal fields using only a few sensors. Our algorithm introduces a SINDy-based regularization: beginning with an arbitrary latent state space, the latent dynamics progressively converge to a SINDy-class functional. We conduct a systematic experimental study including synthetic PDE data, real-world sensor measurements of sea surface temperature, and direct video data. With no explicit encoder, SINDy-SHRED trains efficiently on laptop-level computing and generalizes robustly across a variety of applications with minimal to no hyperparameter tuning. Additionally, the interpretable SINDy model of the latent-state dynamics enables accurate long-term video prediction, achieving state-of-the-art performance and outperforming all baseline methods considered, including Convolutional LSTM, PredRNN, ResNet, and SimVP. A sketch of the SHRED-style backbone follows.
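The sketch below, assuming a PyTorch setup with illustrative sizes, shows the backbone: a GRU consumes a few sensor time series and a shallow decoder reconstructs the full field. The SINDy regularization on the latent trajectory, which is the key SINDy-SHRED addition, is omitted and would enter as a penalty on z_seq.

```python
import torch
import torch.nn as nn

class GRUShallowDecoder(nn.Module):
    """SHRED-style model: GRU over sparse sensors + shallow field decoder."""
    def __init__(self, n_sensors=3, latent=32, field_dim=64 * 64):
        super().__init__()
        self.gru = nn.GRU(n_sensors, latent, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, field_dim))

    def forward(self, sensor_seq):          # (batch, time, n_sensors)
        z_seq, _ = self.gru(sensor_seq)     # latent trajectory
        field = self.decoder(z_seq[:, -1])  # reconstruct field at final time
        return field, z_seq                 # z_seq exposed for a SINDy penalty

model = GRUShallowDecoder()
s = torch.randn(4, 50, 3)                   # 4 sequences, 50 steps, 3 sensors
field, z = model(s)
print(field.shape, z.shape)                 # (4, 4096), (4, 50, 32)
```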
