Seminars and Colloquia by Series

Spectral Representation for Control and Reinforcement Learning

Series
SIAM Student Seminar
Time
Friday, September 13, 2024 - 11:15 for 1 hour (actually 50 minutes)
Location
Skiles 249
Speaker
Bo Dai, Georgia Tech

Achieving optimal control for general stochastic nonlinear systems is notoriously difficult, and it becomes even harder when learning and exploration for unknown dynamics enter the picture, as in the reinforcement learning (RL) setting. In this talk, I will present our recent work on exploiting the power of representation in RL to bypass these difficulties. Specifically, we designed practical algorithms for extracting useful representations, with the goal of improving statistical and computational efficiency in the exploration vs. exploitation tradeoff as well as empirical performance in RL. We provide rigorous theoretical analysis of our algorithm and demonstrate its superior practical performance over existing state-of-the-art empirical algorithms on several benchmarks.
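
For readers unfamiliar with the representation-based RL pipeline, here is a minimal sketch of the generic recipe such work builds on: map state-action pairs through a feature map phi(s, a) and run least-squares value iteration on top of it. The MDP, feature map, and all names below are illustrative toys, not the speaker's algorithm, and exploration is simplified to uniform sampling.

```python
import numpy as np

# Toy pipeline: given a feature map phi(s, a), run least-squares value
# iteration on top of it. Purely illustrative -- the generic
# "representation + linear RL" recipe, not the speaker's algorithm.
rng = np.random.default_rng(0)
n_states, n_actions, d, horizon = 20, 4, 8, 10

phi = rng.normal(size=(n_states, n_actions, d))                    # features
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # dynamics
r = rng.uniform(size=(n_states, n_actions))                        # rewards

# A batch of uniformly sampled transitions stands in for exploration data.
N = 2000
S = rng.integers(n_states, size=N)
A = rng.integers(n_actions, size=N)
S2 = np.array([rng.choice(n_states, p=P[s, a]) for s, a in zip(S, A)])
R = r[S, A]

# Backward least-squares value iteration in the feature space.
V = np.zeros(n_states)
for _ in range(horizon):
    X = phi[S, A]                              # N x d design matrix
    y = R + V[S2]                              # one-step regression targets
    w = np.linalg.lstsq(X, y, rcond=None)[0]   # linear Q-function weights
    Q = phi @ w                                # Q-values for all (s, a)
    V = Q.max(axis=1)

print("estimated values of first 5 states:", np.round(V[:5], 3))
```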

Paper Reading: Bridging discrete and continuous state spaces: Exploring the Ehrenfest process in time-continuous diffusion models

Series
SIAM Student Seminar
Time
Thursday, August 29, 2024 - 10:00 for 1 hour (actually 50 minutes)
Location
Skiles 254
Speaker
Kevin Rojas, Georgia Tech

Paper link: https://arxiv.org/abs/2405.03549

Abstract: Generative modeling via stochastic processes has led to remarkable empirical results as well as to recent advances in their theoretical understanding. In principle, both space and time of the processes can be discrete or continuous. In this work, we study time-continuous Markov jump processes on discrete state spaces and investigate their correspondence to state-continuous diffusion processes given by SDEs. In particular, we revisit the Ehrenfest process, which converges to an Ornstein-Uhlenbeck process in the infinite state space limit. Likewise, we can show that the time-reversal of the Ehrenfest process converges to the time-reversed Ornstein-Uhlenbeck process. This observation bridges discrete and continuous state spaces and allows one to carry over methods from one to the respective other setting. Additionally, we suggest an algorithm for training the time-reversal of Markov jump processes which relies on conditional expectations and can thus be directly related to denoising score matching. We demonstrate our methods in multiple convincing numerical experiments.
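
The convergence statement is easy to probe numerically. The following sanity check (ours, not from the paper) simulates the Ehrenfest chain with a Gillespie loop, rescales it, and compares its empirical stationary variance with that of the limiting Ornstein-Uhlenbeck process.

```python
import numpy as np

# Sanity check: the Ehrenfest chain on {0, ..., N} (each of N balls switches
# urns at rate 1), rescaled as x = (2 i - N) / sqrt(N), should resemble an
# OU process dX = -2 X dt + 2 dW for large N; its stationary variance is 1.
rng = np.random.default_rng(1)
N, T = 500, 20.0

i, t = N // 2, 0.0
xs = []
while t < T:
    up, down = N - i, i                   # rates for i -> i+1 and i -> i-1
    t += rng.exponential(1.0 / (up + down))
    i += 1 if rng.uniform() < up / (up + down) else -1
    xs.append((2 * i - N) / np.sqrt(N))

xs = np.array(xs)
print("empirical variance:", xs[len(xs) // 2:].var())  # should be near 1
```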


Efficient hybrid spatial-temporal operator learning

Series
SIAM Student Seminar
Time
Friday, March 29, 2024 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Francesco Brarda, Emory University

Recent advancements in operator-type neural networks, such as the Fourier Neural Operator (FNO) and the Deep Operator Network (DeepONet), have shown promising results in approximating the solutions of spatial-temporal Partial Differential Equations (PDEs). However, these neural networks often entail considerable training expenses and may not always achieve the accuracy required in many scientific and engineering disciplines. In this work, we propose a new operator learning framework to address these issues. The proposed paradigm leverages traditional wisdom from numerical PDE theory and techniques to refine the pipeline of existing operator neural networks. Specifically, the proposed architecture trains the operator network under consideration for a single epoch or a few epochs and then freezes its parameters. The frozen model is then fed into an error-correction scheme: a single parametrized linear spectral layer trained with a convex loss function defined through a reliable functional-type a posteriori error estimator. This design allows the operator neural network to effectively tackle low-frequency errors, while the added linear layer addresses high-frequency errors. Numerical experiments on a commonly used benchmark for the 2D Navier-Stokes equations demonstrate improvements in both computational time and accuracy compared to existing FNO variants and traditional numerical approaches.
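
To make the error-correction idea concrete, here is a stripped-down 1-D sketch: a frozen "backbone" operator that is accurate only at low frequencies, corrected by a single linear spectral layer (one multiplier per Fourier mode). The convex a posteriori-estimator loss from the talk is replaced by a plain least-squares fit, and both operators are synthetic.

```python
import numpy as np

# Illustrative 1-D version of "frozen operator network + trainable linear
# spectral layer". The backbone, target operator, and least-squares loss are
# stand-ins; the talk uses a convex loss built from an a posteriori estimator.
rng = np.random.default_rng(2)
n, n_samples = 128, 64
k = np.fft.fftfreq(n, d=1.0 / n)
k2 = k ** 2
k2_lo = np.where(np.abs(k) < 8, k2, 0.0)      # backbone is wrong at high freq

def truth(u):                                  # target operator: heat smoothing
    return np.fft.ifft(np.fft.fft(u) * np.exp(-0.1 * k2)).real

def backbone(u):                               # frozen, imperfect approximation
    return np.fft.ifft(np.fft.fft(u) * np.exp(-0.1 * k2_lo)).real

U = rng.normal(size=(n_samples, n))            # toy input functions
Y = np.stack([truth(u) for u in U])
B = np.stack([backbone(u) for u in U])

# One complex multiplier per Fourier mode, fit mode-by-mode by least squares.
Bh, Yh = np.fft.fft(B, axis=1), np.fft.fft(Y, axis=1)
m = (np.conj(Bh) * Yh).sum(axis=0) / (np.abs(Bh) ** 2).sum(axis=0)
corrected = np.fft.ifft(m * Bh, axis=1).real

print("backbone max error :", np.abs(B - Y).max())
print("corrected max error:", np.abs(corrected - Y).max())
```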

Optimization in Data Science: Enhancing Autoencoders and Accelerating Federated Learning

Series
SIAM Student Seminar
Time
Monday, January 22, 2024 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Xue Feng, UC Davis

In this presentation, I will discuss my research in the field of data science, specifically in two areas: improving autoencoder interpolations and accelerating federated learning algorithms. My work combines advanced mathematical concepts with practical machine learning applications, contributing to both the theoretical and applied aspects of data science. The first part of my talk focuses on image sequence interpolation using autoencoders, which are essential tools in generative modeling, with a focus on the setting where only limited training data is available. By introducing a novel regularization term based on dynamic optimal transport into the autoencoder's loss function, my method generates more robust and semantically coherent interpolation results. Additionally, the trained autoencoder can be used to generate barycenters. However, computational efficiency is a bottleneck of our method, and we are working on improving it. The second part of my presentation focuses on accelerating federated learning (FL) through the application of Anderson Acceleration. Our method achieves the same level of convergence performance as state-of-the-art second-order methods such as GIANT by reweighting the local points and their gradients. However, it requires only first-order information, making it a more practical and efficient choice for large-scale and complex training problems. Furthermore, our method is theoretically guaranteed to converge to the global minimizer at a linear rate.
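
For context on the second part, below is the textbook version of Anderson Acceleration for a generic fixed-point map, applied to gradient descent on a quadratic. The federated reweighting of local points and gradients described in the talk is not shown; this is only the base technique.

```python
import numpy as np

# Textbook Anderson Acceleration for a fixed-point map g (generic sketch,
# not the federated-learning variant from the talk).
def anderson(g, x0, m=5, iters=50, tol=1e-10):
    X, G = [x0], [g(x0)]                       # iterates and their images
    for _ in range(iters):
        k = len(X)
        mk = min(m, k)
        F = np.column_stack([G[i] - X[i] for i in range(k - mk, k)])
        if mk == 1:
            x = G[-1]                          # plain Picard step
        else:
            # Difference form of: min ||F a||_2 subject to sum(a) = 1.
            dF = F[:, 1:] - F[:, :-1]
            gamma = np.linalg.lstsq(dF, F[:, -1], rcond=None)[0]
            Gm = np.column_stack(G[k - mk:k])
            dG = Gm[:, 1:] - Gm[:, :-1]
            x = G[-1] - dG @ gamma
        if np.linalg.norm(x - X[-1]) < tol:
            return x
        X.append(x)
        G.append(g(x))
    return X[-1]

# Example: accelerate gradient descent on f(x) = 0.5 x^T A x - b^T x,
# i.e. the fixed-point map g(x) = x - eta * (A x - b).
A = np.diag(np.linspace(1.0, 100.0, 20))
b = np.ones(20)
g = lambda x, eta=1e-2: x - eta * (A @ x - b)
x = anderson(g, np.zeros(20))
print("residual:", np.linalg.norm(A @ x - b))
```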

Controlled SPDEs: Peng’s Maximum Principle and Numerical Methods

Series
SIAM Student Seminar
Time
Friday, November 17, 2023 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Lukas Wessels, Georgia Tech

In this talk, we consider a finite-horizon optimal control problem for stochastic reaction-diffusion equations. First, we apply the spike variation method, which relies on introducing the first- and second-order adjoint states. We give a novel characterization of the second-order adjoint state as the solution of a backward SPDE. Using this representation, we prove the maximum principle for controlled SPDEs.

In the second part, we present a numerical algorithm that allows the efficient approximation of optimal controls in the case of stochastic reaction-diffusion equations with additive noise by first reducing the problem to controls of feedback form and then approximating the feedback function using finitely based approximations. Numerical experiments using artificial neural networks as well as radial basis function networks illustrate the performance of our algorithm.

This talk is based on joint work with Wilhelm Stannat and Alexander Vogler. The talk will also be streamed: https://gatech.zoom.us/j/93808617657?pwd=ME44NWUxbk1NRkhUMzRsK3c0ZGtvQT09
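
As a cartoon of the second part's strategy (restrict to feedback controls, then approximate the feedback function with a finitely based family such as an RBF network), here is a toy with a 1-D controlled SDE instead of an SPDE; the dynamics, cost, and optimizer are all placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Cartoon of "reduce to feedback form, then approximate the feedback
# function": a 1-D controlled SDE dx = u dt + dW with running cost
# x^2 + u^2, feedback parametrized by a small RBF network. The SPDE
# setting and the algorithm in the talk are more sophisticated.
centers = np.linspace(-2.0, 2.0, 9)          # RBF centers for the feedback

def u_w(w, x):
    """RBF feedback u_w(x) = sum_i w_i exp(-(x - c_i)^2), vectorized in x."""
    return w @ np.exp(-(x - centers[:, None]) ** 2)

def cost(w, n_paths=200, T=1.0, dt=0.01):
    """Monte Carlo estimate of E[ int_0^T (x_t^2 + u_t^2) dt ]."""
    rng = np.random.default_rng(0)           # common random numbers
    x = rng.normal(size=n_paths)
    J = np.zeros(n_paths)
    for _ in range(int(T / dt)):
        u = u_w(w, x)
        J += (x ** 2 + u ** 2) * dt
        x += u * dt + np.sqrt(dt) * rng.normal(size=n_paths)
    return J.mean()

res = minimize(cost, np.zeros(centers.size), method="Nelder-Mead",
               options={"maxiter": 500})
print("cost with zero control :", cost(np.zeros(centers.size)))
print("cost after optimization:", res.fun)
```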

Neural-ODE for PDE Solution Operators

Series
SIAM Student Seminar
Time
Friday, September 29, 2023 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Nathan Gaby, Georgia State University

We consider a numerical method to approximate the solution operator for evolutionary partial differential equations (PDEs). By employing a general reduced-order model, such as a deep neural network, we connect the evolution of a model's parameters with trajectories in a corresponding function space. Using the Neural Ordinary Differential Equations (NODE) technique we learn a vector field over the parameter space such that from any initial starting point, the resulting trajectory solves the evolutionary PDE. Numerical results are presented for a number of high-dimensional problems where traditional methods fail due to the curse of dimensionality.
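
The underlying mechanism can be sketched in a few lines: for a reduced-order model u(x; theta), choose d(theta)/dt so that the model's time derivative matches the PDE right-hand side in least squares at collocation points, then integrate in parameter space. The talk goes further by learning this vector field once with NODE so it can be reused from any initial condition; the toy below just computes the projection on the fly for a Gaussian ansatz and the 1-D heat equation.

```python
import numpy as np

# Parameter-trajectory sketch for u_t = u_xx with a Gaussian reduced-order
# model u(x; theta). We project the PDE right-hand side onto the model's
# tangent space to get d(theta)/dt. (The talk *learns* this vector field
# with Neural ODEs; here we only compute it step by step.)
xs = np.linspace(-6.0, 6.0, 200)              # collocation points

def u(theta, x):
    a, b, c = theta                            # amplitude, inverse width, center
    return a * np.exp(-b * (x - c) ** 2)

def jac(theta, x, eps=1e-6):
    """du/dtheta at the collocation points, by finite differences."""
    base = u(theta, x)
    cols = []
    for i in range(theta.size):
        tp = theta.copy()
        tp[i] += eps
        cols.append((u(tp, x) - base) / eps)
    return np.column_stack(cols)

def u_xx(theta, x, h=1e-3):
    return (u(theta, x + h) - 2.0 * u(theta, x) + u(theta, x - h)) / h ** 2

theta, dt = np.array([1.0, 1.0, 0.0]), 1e-3
for _ in range(1000):                          # forward Euler up to t = 1
    theta_dot = np.linalg.lstsq(jac(theta, xs), u_xx(theta, xs), rcond=None)[0]
    theta += dt * theta_dot

# Exact heat flow keeps the Gaussian form with b(t) = 1/(1/b0 + 4t), so at
# t = 1 we expect b ~ 0.2 and a = 1/sqrt(5) ~ 0.447.
print("theta at t = 1:", np.round(theta, 3))
```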

Geometric Equations for Matroid Varieties

Series
SIAM Student Seminar
Time
Tuesday, November 15, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Ashley K. Wheeler, School of Mathematics

Each point x in Gr(r, n) corresponds to an r × n matrix A_x which gives rise to a matroid M_x on its columns. Gel’fand, Goresky, MacPherson, and Serganova showed that the sets {y ∈ Gr(r, n) | M_y = M_x} form a stratification of Gr(r, n) with many beautiful properties. However, results of Mnëv and Sturmfels show that these strata can be quite complicated, and in particular may have arbitrary singularities. We study the ideals I_x of matroid varieties, the Zariski closures of these strata. We construct several classes of examples based on theorems from projective geometry and describe how the Grassmann-Cayley algebra may be used to derive non-trivial elements of I_x geometrically when the combinatorics of the matroid is sufficiently rich.
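
For readers new to the setup, the basic object is easy to compute: the matroid M_x records which r-subsets of the columns of A_x are bases. The snippet below (ours, purely illustrative) computes the bases for a generic point of Gr(2, 4) and for a special point where two columns become parallel, so the matroid, and hence the stratum, changes.

```python
import numpy as np
from itertools import combinations

# The matroid M_x on the columns of an r x n matrix A_x, recorded by which
# r-subsets of columns are bases (i.e., have full rank).
def matroid_bases(A, tol=1e-9):
    r, n = A.shape
    return {S for S in combinations(range(n), r)
            if np.linalg.matrix_rank(A[:, list(S)], tol=tol) == r}

# Two points of Gr(2, 4): a generic one, and one where columns 2 and 3 are
# parallel, so {2, 3} fails to be a basis and the matroid is different.
A_generic = np.array([[1.0, 0.0, 1.0, 1.0],
                      [0.0, 1.0, 1.0, 2.0]])
A_special = np.array([[1.0, 0.0, 1.0, 2.0],
                      [0.0, 1.0, 1.0, 2.0]])
print(sorted(matroid_bases(A_generic)))   # all 6 pairs are bases
print(sorted(matroid_bases(A_special)))   # {2, 3} is missing
```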

Sparse Quadratic Optimization via Polynomial Roots

Series
SIAM Student Seminar
Time
Tuesday, October 25, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Kevin Shu, School of Mathematics

We'll talk about problems of optimizing a quadratic function subject to quadratic constraints, in addition to a sparsity constraint that requires that solutions have only a few nonzero entries. Such problems include sparse versions of linear regression and principal component analysis. We'll see that this problem can be formulated as a convex conical optimization problem over a sparse version of the positive semidefinite cone, and then see how we can approximate such problems using ideas arising from the study of hyperbolic polynomials. We'll also describe a fast algorithm for such problems, which performs well in practical situations.
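
As a reference point for the problem class, here is the brute-force baseline for sparse PCA: maximize x^T Q x over unit vectors with at most k nonzero entries by enumerating supports. The talk's contribution is precisely to avoid this exponential enumeration via conic relaxations and hyperbolic polynomials; none of that machinery appears below.

```python
import numpy as np
from itertools import combinations

# Brute-force sparse PCA: max x^T Q x over unit vectors with <= k nonzeros.
# Exponential in n; shown only to define the problem the talk approximates.
def sparse_pca_bruteforce(Q, k):
    n = Q.shape[0]
    best_val, best_supp = -np.inf, None
    for S in combinations(range(n), k):
        sub = Q[np.ix_(S, S)]
        val = np.linalg.eigvalsh(sub)[-1]    # top eigenvalue on support S
        if val > best_val:
            best_val, best_supp = val, S
    return best_val, best_supp

rng = np.random.default_rng(4)
B = rng.normal(size=(10, 10))
Q = B @ B.T                                  # a random PSD matrix
print(sparse_pca_bruteforce(Q, k=3))
```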

Ergodic theory: a statistical description of chaotic dynamical systems

Series
SIAM Student Seminar
Time
Friday, December 3, 2021 - 14:30 for 1 hour (actually 50 minutes)
Location
Skiles 169
Speaker
Alex Blumenthal, Georgia Tech

Dynamical systems model the way that real-world systems evolve in time. While the time-asymptotic behavior of many systems can be characterized by “simple” dynamical features such as equilibria and periodic orbits, some systems evolve in a chaotic, seemingly random way. For such systems it is no longer meaningful to track individual trajectories one at a time; instead, a natural approach is to treat the initial condition as random and to observe how its probabilistic law evolves in time. This is the core idea of ergodic theory, the topic of this talk. I will not assume much beyond some basics of probability theory, e.g., random variables.
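
A one-example illustration of this viewpoint: for the chaotic logistic map x ↦ 4x(1 − x), the time average of an observable along a single typical orbit matches its average against the invariant density 1/(π√(x(1 − x))), as Birkhoff's ergodic theorem predicts.

```python
import numpy as np

# Birkhoff in action for the chaotic logistic map x -> 4 x (1 - x): the time
# average of f(x) = x along one typical orbit converges to the space average
# against the invariant density 1/(pi * sqrt(x (1 - x))), which is exactly
# 1/2 by symmetry.
rng = np.random.default_rng(5)
x = rng.uniform()                  # random initial condition
total, n_steps = 0.0, 10 ** 6
for _ in range(n_steps):
    total += x
    x = 4.0 * x * (1.0 - x)

print("time average :", total / n_steps)
print("space average:", 0.5)
```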

About Coalescence of Eigenvalues for Matrices Depending on Several Parameters

Series
SIAM Student Seminar
Time
Friday, November 12, 2021 - 14:30 for 1 hour (actually 50 minutes)
Location
Skiles 169
Speaker
Luca Dieci, Georgia Institute of Technology

We review some theoretical and computational results on locating coalescence of eigenvalues for matrices that depend smoothly on parameters. The focus is on the symmetric two-parameter case and the Hermitian three-parameter case. Both full and banded matrices are of interest.
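
As a naive point of comparison with the methods in the talk, one can always locate a coalescence by brute force: scan the parameter plane of a symmetric family A(p, q) and look for where the eigenvalue gap collapses. The 2 × 2 family below has a conical intersection at the origin by construction.

```python
import numpy as np

# Brute-force grid scan (not the talk's methods) for a coalescence of the two
# eigenvalues of a symmetric two-parameter family A(p, q): find where the gap
# lambda_2 - lambda_1 is smallest.
def A(p, q):
    return np.array([[p, q],
                     [q, -p]])   # eigenvalues +-sqrt(p^2 + q^2): conical point at 0

ps = qs = np.linspace(-1.0, 1.0, 201)
gap = np.empty((ps.size, qs.size))
for i, p in enumerate(ps):
    for j, q in enumerate(qs):
        lam = np.linalg.eigvalsh(A(p, q))
        gap[i, j] = lam[1] - lam[0]

i, j = np.unravel_index(gap.argmin(), gap.shape)
print(f"minimal gap {gap[i, j]:.3e} at (p, q) = ({ps[i]:.2f}, {qs[j]:.2f})")
```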
