Seminars and Colloquia by Series

Effective deep neural network architectures for learning high-dimensional Banach-valued functions from limited data

Series
Applied and Computational Mathematics Seminar
Time
Friday, February 10, 2023 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 006 and https://gatech.zoom.us/j/98355006347
Speaker
Nick Dexter, Florida State University

In the past few decades the problem of reconstructing high-dimensional functions taking values in abstract spaces from limited samples has received increasing attention, largely due to its relevance to uncertainty quantification (UQ) for computational science and engineering. These UQ problems are often posed in terms of parameterized partial differential equations whose solutions take values in Hilbert or Banach spaces. Impressive results have been achieved on such problems with deep learning (DL), i.e. machine learning with deep neural networks (DNN). This work focuses on approximating high-dimensional smooth functions taking values in reflexive and typically infinite-dimensional Banach spaces. Our novel approach to this problem is fully algorithmic, combining DL, compressed sensing, orthogonal polynomials, and finite element discretization. We present a full theoretical analysis for DNN approximation with explicit guarantees on the error and sample complexity, and a clear accounting of all sources of error. We also provide numerical experiments demonstrating the efficiency of DL at approximating such high-dimensional functions from limited data in UQ applications.
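The pipeline sketched above (orthogonal polynomials plus recovery from random samples) can be illustrated in a toy setting. The sketch below uses plain least squares instead of the compressed-sensing recovery in the talk, and R^K merely stands in for a discretized Banach-space value; the test function, dimensions, and degree are invented for illustration:

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(0)
d, K, m, deg = 2, 3, 200, 5          # input dim, output size, samples, max total degree

def f(y):                             # smooth test function with values in R^K
    return np.array([np.exp(-y @ y), np.sin(y.sum()), 1.0 / (2.0 + y @ y)])

# multi-indices of total degree <= deg
idx = [nu for nu in product(range(deg + 1), repeat=d) if sum(nu) <= deg]

def basis_row(y):                     # tensorized Legendre basis evaluated at y
    return np.array([np.prod([legval(y[j], np.eye(n + 1)[n])
                              for j, n in enumerate(nu)]) for nu in idx])

Y = rng.uniform(-1, 1, size=(m, d))   # random sample points in [-1,1]^d
A = np.vstack([basis_row(y) for y in Y])
B = np.vstack([f(y) for y in Y])      # one R^K value per sample
C, *_ = np.linalg.lstsq(A, B, rcond=None)   # one coefficient column per output

def f_hat(y):
    return basis_row(y) @ C

y_test = np.array([0.3, -0.2])
err = np.linalg.norm(f_hat(y_test) - f(y_test))
```

For smooth functions of this kind, a low total-degree polynomial expansion already gives small pointwise error from a modest number of random samples.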
 

New advances on the decomposition and analysis of nonstationary signals: a Mathematical perspective on Signal Processing.

Series
Applied and Computational Mathematics Seminar
Time
Monday, December 5, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Antonio Cicone, University of L'Aquila

In many applied fields of research, such as Geophysics, Medicine, Engineering, Economics, and Finance, classical problems include the extraction of hidden information and features, like quasi-periodicities and frequency patterns, and the separation of the different components contained in a given signal, such as its trend.

Standard methods based on the Fourier and Wavelet transforms, historically used in Signal Processing, proved to be limited when nonlinear and non-stationary phenomena are present. For this reason, in the last two decades several new nonlinear methods have been developed by many research groups around the world, and they have been used extensively in many applied fields of research.

In this talk, we will briefly review the Hilbert-Huang Transform (a.k.a. Empirical Mode Decomposition method) and discuss its known limitations. Then, we will review the Iterative Filtering technique and we will introduce newly developed generalizations to handle multidimensional, multivariate, or highly non-stationary signals, as well as their time-frequency representation, via the so-called IMFogram. We will discuss the theoretical and numerical properties of these methods and show their applications to real-life data.
We will conclude the talk by reviewing the main problems which are still open in this research field.
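The core mechanism of the Iterative Filtering technique mentioned above can be sketched in a few lines: peel off the fastest oscillatory component (the first IMF) of a signal by repeatedly subtracting a local moving average. The filter length, iteration count, and test signal below are ad-hoc choices for illustration, not those of the speaker's actual implementation:

```python
import numpy as np

def moving_average(s, half_len):
    w = np.ones(2 * half_len + 1) / (2 * half_len + 1)
    return np.convolve(s, w, mode="same")

def first_imf(s, half_len, n_iter=10):
    imf = s.copy()
    for _ in range(n_iter):
        imf = imf - moving_average(imf, half_len)   # subtract the local mean
    return imf

t = np.linspace(0, 1, 1000, endpoint=False)
fast = np.sin(2 * np.pi * 40 * t)                   # fast oscillation
slow = np.sin(2 * np.pi * 3 * t)                    # slow, trend-like mode
imf1 = first_imf(fast + slow, half_len=12)          # window spans one fast period
# away from the boundary, the first IMF recovers the fast component
corr = np.corrcoef(imf1[200:800], fast[200:800])[0, 1]
```

Because the averaging window spans exactly one period of the fast mode, the moving average suppresses it while passing the slow mode, so iterating the subtraction isolates the fast component; subtracting the IMF from the signal then yields the trend.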

A Nonlocal Gradient for High-Dimensional Black-Box Optimization in Scientific Applications

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 28, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Guannan Zhang, Oak Ridge National Laboratory (ORNL)

In this talk, we consider the problem of minimizing multi-modal loss functions with many local optima. Since the local gradient points in the direction of the steepest slope in an infinitesimal neighborhood, an optimizer guided by the local gradient is often trapped in a local minimum. To address this issue, we develop a novel nonlocal gradient that skips small local minima by capturing major structures of the loss landscape in black-box optimization. The nonlocal gradient is defined by a directional Gaussian smoothing (DGS) approach. The key idea is to conduct 1D long-range exploration with a large smoothing radius along orthogonal directions, each of which defines a nonlocal directional derivative as a 1D integral. Such long-range exploration enables the nonlocal gradient to skip small local minima. We use the Gauss-Hermite quadrature rule to approximate the d 1D integrals to obtain an accurate estimator. We also provide theoretical analysis of the convergence of the method on nonconvex landscapes. In this work, we investigate the scenario where the objective function is composed of a convex function perturbed by a highly oscillating, deterministic noise. We provide a convergence theory under which the iterates converge to a tightened neighborhood of the solution, whose size is characterized by the noise frequency. We complement our theoretical analysis with numerical experiments to illustrate the performance of this approach.
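The DGS construction described above can be sketched directly: along each coordinate direction the smoothed directional derivative is a 1D Gaussian integral, approximated here with Gauss-Hermite quadrature. The smoothing radius and quadrature order below are illustrative choices:

```python
import numpy as np

def dgs_gradient(f, x, sigma=1.0, n_quad=7):
    d = len(x)
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = 1.0
        # nonlocal derivative E[f(x + sigma*v*e_i) * v] / sigma for v ~ N(0,1),
        # evaluated by Gauss-Hermite quadrature (change of variables v = sqrt(2)*t)
        vals = np.array([f(x + sigma * np.sqrt(2.0) * t * e) for t in nodes])
        g[i] = weights @ (vals * np.sqrt(2.0) * nodes) / (np.sqrt(np.pi) * sigma)
    return g

# sanity check: on a quadratic, the DGS gradient equals the exact gradient
quad = lambda x: 0.5 * (x @ x)
x0 = np.array([1.0, -2.0, 3.0])
g0 = dgs_gradient(quad, x0, sigma=0.5)
```

With a large sigma, small high-frequency wiggles in f average out of each 1D integral, which is what lets the resulting gradient step over shallow local minima.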

Optimal variance-reduced stochastic approximation in Banach spaces

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 21, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Wenlong Mou, UC Berkeley

Please Note: Speaker will give the talk in person

Estimating the fixed-point of a contractive operator from empirical data is a fundamental computational and statistical task. In many practical applications including dynamic programming, the relevant norm is not induced by an inner product structure, which hinders existing techniques for analysis. In this talk, I will present recent advances in stochastic approximation methods for fixed-point equations in Banach spaces. Among other results, we discuss a novel variance-reduced stochastic approximation scheme, and establish its non-asymptotic error bounds. In contrast to worst-case guarantees, our bounds are instance-dependent, and achieve the optimal covariance structure in central limit theorems non-asymptotically.
Joint work with Koulik Khamaru, Martin Wainwright, Peter Bartlett, and Michael Jordan.
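The basic stochastic-approximation setup for a fixed-point equation x = T(x) can be illustrated with a linear contraction and Polyak-Ruppert averaging. This is only the vanilla scheme; the operator, noise model, and step sizes are invented, and the variance-reduction method of the talk is more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.5, 0.1], [0.0, 0.4]])     # contraction: spectral radius < 1
b = np.array([1.0, -1.0])
T = lambda x: A @ x + b                     # fixed point: (I - A)^{-1} b

x = np.zeros(2)
x_bar = np.zeros(2)
for k in range(1, 20001):
    noisy_T = T(x) + 0.1 * rng.standard_normal(2)   # noisy oracle for T(x)
    eta = 1.0 / (k ** 0.7)                          # decaying step size
    x = (1 - eta) * x + eta * noisy_T               # stochastic approximation step
    x_bar += (x - x_bar) / k                        # Polyak-Ruppert running average

x_star = np.linalg.solve(np.eye(2) - A, b)          # exact fixed point
```

Averaging the iterates smooths out the oracle noise, which is one route to the kind of optimal covariance behavior discussed in the talk.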

Inference for Gaussian processes on compact Riemannian manifolds

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 14, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Didong Li, UNC Chapel Hill

Gaussian processes (GPs) are widely employed as versatile modeling and predictive tools in spatial statistics, functional data analysis, computer modeling and diverse applications of machine learning. They have been widely studied over Euclidean spaces, where they are specified using covariance functions or covariograms for modelling complex dependencies. There is a growing literature on GPs over Riemannian manifolds in order to develop richer and more flexible inferential frameworks. While GPs have been extensively studied for asymptotic inference on Euclidean spaces using positive definite covariograms, such results are relatively sparse on Riemannian manifolds. We undertake analogous developments for GPs constructed over compact Riemannian manifolds. Building upon the recently introduced Matérn covariograms on a compact Riemannian manifold, we employ formal notions and conditions for the equivalence of two Matérn Gaussian random measures on compact manifolds to derive the microergodic parameters and formally establish the consistency of their maximum likelihood estimates as well as asymptotic optimality of the best linear unbiased predictor.
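The spectral construction behind Matérn covariograms on compact manifolds can be made concrete on the circle S^1, where the Laplacian eigenpairs are (n^2, cos(n·), sin(n·)) and the kernel is a weighted sum of eigenfunctions with Matérn-type weights (kappa^2 + lambda_n)^(-nu - d/2). The truncation level and parameter values below are illustrative:

```python
import numpy as np

def matern_kernel_circle(x, y, kappa=1.0, nu=1.5, n_terms=200):
    d = 1                                    # dimension of S^1
    k = (kappa ** 2) ** (-(nu + d / 2))      # n = 0 term (constant eigenfunction)
    for n in range(1, n_terms):
        w = (kappa ** 2 + n ** 2) ** (-(nu + d / 2))
        k += 2 * w * np.cos(n * (x - y))     # cos*cos + sin*sin = cos(n(x-y))
    return k

# the resulting covariogram is symmetric, stationary in x - y, and its
# Gram matrix at distinct points is positive definite
K = np.array([[matern_kernel_circle(s, t) for t in (0.0, 1.0, 2.0)]
              for s in (0.0, 1.0, 2.0)])
```

The smoothness parameter nu controls the polynomial decay of the spectral weights, exactly as in the Euclidean Matérn family.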

Combinatorial Topological Dynamics

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 7, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Thomas Wanner, George Mason University

Morse theory establishes a celebrated link between classical gradient dynamics and the topology of the underlying phase space. It provided the motivation for two independent developments. On the one hand, Conley's theory of isolated invariant sets and Morse decompositions, a generalization of Morse theory, encodes the global dynamics of general dynamical systems using topological information. On the other hand, Forman's discrete Morse theory on simplicial complexes is a combinatorial version of the classical theory and has found numerous applications in mathematics, computer science, and the applied sciences. In this talk, we introduce recent work on combinatorial topological dynamics, which combines both of the above theories and leads, as a special case, to a dynamical Conley theory for Forman vector fields and, more generally, for multivectors. This theory has been developed using the general framework of finite topological spaces, which contain simplicial complexes as a special case.
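The combinatorial objects involved are elementary: a Forman vector field on a simplicial complex is a partial matching that pairs a simplex with one of its cofaces of one higher dimension, with each simplex used in at most one pair. A minimal sketch, on the invented example of the boundary of a triangle:

```python
# simplicial complex: boundary of a triangle (a combinatorial circle)
simplices = {frozenset(s) for s in
             [{0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2}]}

def is_facet(s, t):                  # s is a codimension-1 face of t
    return len(t) == len(s) + 1 and s < t

def is_forman_field(pairs, complex_):
    used = set()
    for s, t in pairs:
        if s not in complex_ or t not in complex_ or not is_facet(s, t):
            return False
        if s in used or t in used:   # each simplex in at most one arrow
            return False
        used.update((s, t))
    return True

V = [(frozenset({0}), frozenset({0, 1})),
     (frozenset({1}), frozenset({1, 2}))]
ok = is_forman_field(V, simplices)                                  # valid field
bad = is_forman_field(V + [(frozenset({0}), frozenset({0, 2}))],
                      simplices)                                    # {0} reused
```

Here the unmatched simplices {2} and {0, 2} are the critical cells, matching the one minimum and one maximum of a smooth Morse function on the circle.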

Multi-scale modeling for complex flows at extreme computational scales

Series
Applied and Computational Mathematics Seminar
Time
Monday, October 10, 2022 - 14:00
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Spencer Bryngelson, Georgia Tech CSE

Many fluid flows display behavior across a wide range of space and time scales. Turbulent and multiphase flows can include small eddies or particles as well as large advected features. This challenge makes some degree of multi-scale modeling or homogenization necessary. Such models are constrained, though: they should be numerically accurate, physically consistent, computationally expedient, and more. I present two tools crafted for this purpose. First, the fast macroscopic forcing method (Fast MFM), which is based on an elliptic pruning procedure that localizes solution operators and sparse matrix-vector sampling. We recover eddy-diffusivity operators with a convergence that beats the best spectral approximation (from the SVD), attenuating the cost of, for example, targeted RANS closures. I also present a moment-based method for closing multiphase flow equations. Buttressed by a recurrent neural network, it is numerically stable and achieves state-of-the-art accuracy. I close with a discussion of conducting these simulations near exascale. Our simulations scale ideally on the entirety of ORNL Summit's GPUs, though the HPC landscape continues to shift.

Efficient Krylov subspace methods for uncertainty quantification

Series
Applied and Computational Mathematics Seminar
Time
Monday, September 19, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Julianne Chung, Emory University

Uncertainty quantification for linear inverse problems remains a challenging task, especially for problems with a very large number of unknown parameters (e.g., dynamic inverse problems), for problems where computation of the square root and inverse of the prior covariance matrix are not feasible, and for hierarchical problems where the mean is not known a priori. This work exploits Krylov subspace methods to develop and analyze new techniques for large-scale uncertainty quantification in inverse problems. We assume that generalized Golub-Kahan based methods have been used to compute an estimate of the solution, and we describe efficient methods to explore the posterior distribution. We present two methods that use the preconditioned Lanczos algorithm to efficiently generate samples from the posterior distribution. Numerical examples from dynamic photoacoustic tomography and atmospheric inverse modeling, including a case study from NASA's Orbiting Carbon Observatory 2 (OCO-2) satellite, demonstrate the effectiveness of the described approaches.
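The Lanczos idea underlying such samplers can be sketched as follows: to draw y ~ N(0, A^{-1}) for a symmetric positive definite precision matrix A, take z ~ N(0, I) and approximate y = A^{-1/2} z using k Lanczos steps via A^{-1/2} z ≈ ||z|| Q_k T_k^{-1/2} e_1. Preconditioning and the generalized Golub-Kahan machinery from the talk are omitted; the test matrix and step count are invented:

```python
import numpy as np

def lanczos_inv_sqrt_apply(A, z, k):
    """Approximate A^{-1/2} z with k Lanczos steps (full reorthogonalization)."""
    n = len(z)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = z / np.linalg.norm(z)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)     # reorthogonalize
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)                 # T^{-1/2} via eigendecomposition
    return np.linalg.norm(z) * (Q @ (evecs @ (evecs[0] / np.sqrt(evals))))

rng = np.random.default_rng(2)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                          # well-conditioned SPD matrix
z = rng.standard_normal(n)
y = lanczos_inv_sqrt_apply(A, z, k=30)

evals, evecs = np.linalg.eigh(A)                     # dense reference solution
exact = evecs @ ((evecs.T @ z) / np.sqrt(evals))
```

The appeal is that only matrix-vector products with A are needed, so the prior or posterior precision never has to be factored explicitly.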

Neural Oracle Search on N-BEST Hypotheses

Series
Applied and Computational Mathematics Seminar
Time
Monday, September 12, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Tongzhou Chen, Google

In this talk, we propose a Neural Oracle Search (NOS) model in Automatic Speech Recognition (ASR) to select the most likely hypothesis using a sequence of acoustic representations and multiple hypotheses as input. The model provides a sequence-level score for each audio-hypothesis pair that is obtained by integrating information from multiple sources, such as the input acoustic representations, N-best hypotheses, additional 1st-pass statistics, and unpaired textual information through an external language model. These scores are then used to map the search problem of identifying the most likely hypothesis to a sequence classification problem. The definition of the proposed model is broad enough to allow its use as an alternative to beam search in the 1st-pass or as a 2nd-pass rescoring step. This model achieves up to 12% relative reductions in Word Error Rate (WER) across several languages over state-of-the-art baselines with relatively few additional parameters. In addition, we investigate the use of the NOS model on a 1st-pass multilingual model and show that similar to the 1st-pass model, the NOS model can be made multilingual.
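The selection step described above reduces to a classification over the N-best list once a model assigns a score to each (audio, hypothesis) pair. A minimal sketch; the hypotheses and scores are made up, and the real model fuses acoustic, first-pass, and language-model information:

```python
import numpy as np

def select_hypothesis(hypotheses, scores):
    probs = np.exp(scores - np.max(scores))
    probs = probs / probs.sum()               # softmax over the N-best list
    return hypotheses[int(np.argmax(probs))], probs

nbest = ["the cat sat", "the cats at", "the cat's hat"]
scores = np.array([2.1, 1.7, -0.3])           # hypothetical sequence-level scores
best, probs = select_hypothesis(nbest, scores)
```

Framing the problem this way lets a standard sequence-classification loss train the scorer end to end.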

Convergence of denoising diffusion models

Series
Applied and Computational Mathematics Seminar
Time
Monday, August 29, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Valentin De Bortoli, CNRS and ENS Ulm

Generative modeling is the task of drawing new samples from an underlying distribution known only via an empirical measure. There exists a myriad of models to tackle this problem, with applications in image and speech processing, medical imaging, forecasting, and protein modeling, to name a few. Among these methods, score-based generative models (or diffusion models) are a new, powerful class of generative models that exhibit remarkable empirical performance. They consist of a "noising" stage, whereby a diffusion is used to gradually add Gaussian noise to data, and a generative model, which entails a "denoising" process defined by approximating the time-reversal of the diffusion.

In this talk I will present some of their theoretical guarantees with an emphasis on their behavior under the so-called manifold hypothesis. Such theoretical guarantees are non-vacuous and provide insight on the empirical behavior of these models. I will show how these results imply generalization bounds on denoising diffusion models. This presentation is based on https://arxiv.org/abs/2208.05314
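The noising/denoising picture can be run end to end in a toy case where the data are themselves Gaussian, so the score of every noised marginal is available in closed form and no learning is needed. Forward process: an Ornstein-Uhlenbeck diffusion toward N(0,1); backward: the time-reversed SDE driven by the exact score. Horizon, step count, and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sig2 = 2.0, 0.25                    # "data" distribution N(mu, sig2)
T, n_steps, n_samples = 4.0, 800, 20000
dt = T / n_steps

def marginal(t):
    # mean and variance of the noised data under dX = -X dt + sqrt(2) dW
    return mu * np.exp(-t), sig2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)

x = rng.standard_normal(n_samples)      # start the reverse process at the prior
for i in range(n_steps):                # Euler-Maruyama from t = T down to 0
    t = T - i * dt
    m, v = marginal(t)
    score = -(x - m) / v                # exact score of N(m, v)
    x = x + (x + 2.0 * score) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n_samples)
# x is now approximately distributed like the data N(mu, sig2)
```

In practice the score is unknown and is replaced by a learned network; the manifold hypothesis in the talk concerns what happens when the data measure is supported on a lower-dimensional set, unlike this full-support toy example.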
