Seminars and Colloquia by Series

Symmetrically processed splitting integrators for enhanced Hamiltonian Monte Carlo sampling

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 19, 2021 - 14:00 for 1 hour (actually 50 minutes)
Location
ONLINE https://bluejeans.com/884917410
Speaker
Prof. Sergio Blanes, Universidad Politécnica de Valencia

We construct integrators to be used in Hamiltonian (or Hybrid) Monte Carlo sampling. The new integrators are easily implementable and, for a given computational budget, may deliver five times as many accepted proposals as standard leapfrog/Verlet without impairing in any way the quality of the samples. They are based on a suitable modification of the processing technique first introduced by J.C. Butcher. The idea of modified processing may also be useful for other purposes, like the construction of high-order splitting integrators with positive coefficients.
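For reference, the leapfrog/Verlet baseline that the new integrators are compared against can be sketched in a few lines. This is a generic illustration on a standard Gaussian target (a minimal sketch of the standard scheme, not the speakers' processed integrators); small energy error along a trajectory is what keeps Metropolis acceptance rates high.

```python
import math

def leapfrog(q, p, grad_U, eps, n_steps):
    """Standard leapfrog/Verlet integrator for H(q, p) = U(q) + p^2/2."""
    p = p - 0.5 * eps * grad_U(q)           # initial half step in momentum
    for _ in range(n_steps - 1):
        q = q + eps * p                     # full step in position
        p = p - eps * grad_U(q)             # full step in momentum
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)           # final half step in momentum
    return q, p

# Standard Gaussian target: U(q) = q^2 / 2.
U = lambda q: 0.5 * q * q
grad_U = lambda q: q

q0, p0 = 1.0, 0.5
qT, pT = leapfrog(q0, p0, grad_U, eps=0.01, n_steps=1000)
H0 = U(q0) + 0.5 * p0 * p0
HT = U(qT) + 0.5 * pT * pT
# The energy error stays O(eps^2) over the whole trajectory, even though
# individual steps are only second-order accurate.
```

The method is volume-preserving and time-reversible, which is what makes it valid as an HMC proposal; the talk's processed integrators keep these properties while shrinking the energy error per unit of computational work.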

Joint work with Mari Paz Calvo, Fernando Casas, and Jesús M. Sanz-Serna.

Optimization in the space of probabilities with MCMC: Uncertainty quantification and sequential decision making

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 5, 2021 - 14:00 for 1 hour (actually 50 minutes)
Location
ONLINE https://bluejeans.com/884917410
Speaker
Prof. Yian Ma, UCSD

I will present MCMC algorithms as optimization over the KL-divergence in the space of probabilities. By incorporating a momentum variable, I will discuss an algorithm which performs accelerated gradient descent over the KL-divergence. Using optimization-like ideas, a suitable Lyapunov function is constructed to prove that an accelerated convergence rate is obtained. I will then discuss how MCMC algorithms compare against variational inference methods in parameterizing the gradient flows in the space of probabilities and how it applies to sequential decision making problems.
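A concrete instance of adding a momentum variable to a sampling scheme is underdamped Langevin dynamics, whose continuous-time limit performs the kind of accelerated flow described above. The sketch below is an illustrative Euler discretization on a standard Gaussian target (step size, friction, and chain length are hypothetical choices, not the speaker's algorithm):

```python
import math
import random

def underdamped_langevin(grad_U, q0, p0, step, gamma, n_iters, rng):
    """Euler discretization of underdamped Langevin dynamics:
       dq = p dt,   dp = -grad_U(q) dt - gamma * p dt + sqrt(2 gamma) dW.
    The momentum p plays the role of the acceleration variable."""
    q, p = q0, p0
    samples = []
    for _ in range(n_iters):
        q = q + step * p
        p = (p - step * grad_U(q) - step * gamma * p
             + math.sqrt(2.0 * gamma * step) * rng.gauss(0.0, 1.0))
        samples.append(q)
    return samples

# Target pi(q) proportional to exp(-q^2/2), so grad_U(q) = q.
rng = random.Random(0)
samples = underdamped_langevin(lambda q: q, 0.0, 0.0,
                               step=0.05, gamma=1.0, n_iters=20000, rng=rng)
burn = samples[5000:]                 # discard burn-in
mean = sum(burn) / len(burn)
var = sum((s - mean) ** 2 for s in burn) / len(burn)
# The empirical moments approach those of the standard Gaussian target.
```

The overdamped (no-momentum) version corresponds to plain gradient flow of the KL-divergence; keeping the momentum variable is what gives the Nesterov-style acceleration analyzed via the Lyapunov function in the talk.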

Incorporating Invariance to Reduce the Complexity of Parametric Models

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 29, 2021 - 14:00 for 1 hour (actually 50 minutes)
Location
https://bluejeans.com/884917410
Speaker
Alex Cloninger, University of California, San Diego

Many scientific problems involve invariant structures and functions that depend on a much lower-dimensional set of features than the data itself. Incorporating these invariances into a parametric model can significantly reduce the model complexity and lead to a vast reduction in the number of labeled examples required to estimate the parameters. We demonstrate this benefit in two settings.

The first setting concerns ReLU networks, and the size of networks and number of points required to learn certain functions and classification regions. Here, we assume that the target function has built-in invariances, namely that it depends only on the projection onto a very low-dimensional, function-defined manifold (with dimension possibly significantly smaller than even the intrinsic dimension of the data). We use this manifold variant of a single- or multi-index model to establish network complexity and ERM rates that beat even the intrinsic dimension of the data. A corollary of this result is a set of intrinsic rates for a manifold-plus-noise data model that does not require the noise distribution to decay exponentially, and we also discuss implications for two-sample testing and statistical distances.

The second setting for building invariances concerns linearized optimal transport (LOT), and using it to build supervised classifiers on distributions. Here, we construct invariances to families of group actions (e.g., shifts and scalings of a fixed distribution), and show that LOT can learn a classifier on group orbits using a simple linear separator. We demonstrate the benefit of this on MNIST by constructing robust classifiers with only a small number of labeled examples. This talk covers joint work with Timo Klock, Xiuyuan Cheng, and Caroline Moosmueller.
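The LOT idea is easiest to see in one dimension, where the optimal transport map between distributions is given by sorting. The sketch below (an illustrative construction with hypothetical sample sizes and class parameters, not the authors' pipeline) embeds random shifts of two Gaussian families, builds in shift invariance by centering each embedding, and separates the two orbits with a linear classifier:

```python
import random

def lot_embedding(samples, reference):
    """1-D LOT: the optimal map is monotone, so matching sorted samples
    to the sorted reference gives the transport-map embedding."""
    return [s - r for s, r in zip(sorted(samples), reference)]

def center(e):
    """Quotient out the shift action: centering makes the embedding
    invariant to translations of the input distribution."""
    m = sum(e) / len(e)
    return [v - m for v in e]

rng = random.Random(1)
n = 200
reference = sorted(rng.gauss(0, 1) for _ in range(n))

def shift_orbit(sigma, n_dists):
    orbit = []
    for _ in range(n_dists):
        shift = rng.uniform(-2, 2)        # one group element per distribution
        orbit.append([shift + rng.gauss(0, sigma) for _ in range(n)])
    return orbit

class_a = [center(lot_embedding(d, reference)) for d in shift_orbit(0.5, 20)]
class_b = [center(lot_embedding(d, reference)) for d in shift_orbit(2.0, 20)]

# A linear separator from the class centroids suffices on the invariant embeddings.
mu_a = [sum(e[i] for e in class_a) / 20 for i in range(n)]
mu_b = [sum(e[i] for e in class_b) / 20 for i in range(n)]
w = [a - b for a, b in zip(mu_a, mu_b)]
bias = -0.5 * sum(wi * (a + b) for wi, a, b in zip(w, mu_a, mu_b))
score = lambda e: sum(wi * xi for wi, xi in zip(w, e)) + bias
accuracy = (sum(score(e) > 0 for e in class_a)
            + sum(score(e) < 0 for e in class_b)) / 40
```

Without the centering step, the random shifts smear each class along a common direction; modding out the group action collapses each orbit to a point cloud that a plain hyperplane separates.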

Group Synchronization via Cycle-Edge Message Passing

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 8, 2021 - 14:00 for 1 hour (actually 50 minutes)
Location
https://bluejeans.com/884917410
Speaker
Gilad Lerman, University of Minnesota

The problem of group synchronization asks to recover states of objects associated with group elements given possibly corrupted relative state measurements (or group ratios) between pairs of objects. This problem arises in important data-related tasks, such as structure from motion, simultaneous localization and mapping, Cryo-EM, community detection and sensor network localization. Two common groups in these problems are the rotation and symmetric groups. We propose a general framework for group synchronization with compact groups. The main part of the talk discusses a novel message passing procedure that uses cycle consistency information in order to estimate the corruption levels of group ratios. Under our mathematical model of adversarial corruption, it can be used to infer the corrupted group ratios and thus to solve the synchronization problem. We first explain why the group cycle consistency information is essential for effectively solving group synchronization problems. We then establish exact recovery and linear convergence guarantees for the proposed message passing procedure under a deterministic setting with adversarial corruption. We also establish the stability of the proposed procedure to sub-Gaussian noise. We further establish competitive theoretical results under a uniform corruption model. Finally, we discuss the MPLS (Message Passing Least Squares), or Minneapolis, framework for solving real scenarios with high levels of noise and nontrivial patterns of corruption. We demonstrate state-of-the-art results for two different problems that occur in structure from motion and involve the rotation and permutation groups.
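The role of cycle consistency can be illustrated for SO(2), written additively as angles: clean 3-cycles of group ratios sum to zero, while cycles through a corrupted edge do not. The sketch below (a simplified single-pass version of the cycle-edge statistic with a hypothetical graph and corruption offsets, not the full iterative CEMP procedure) flags corrupted edges this way:

```python
import math
import random

def wrap(a):
    """Reduce an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

rng = random.Random(0)
n = 8
theta = [rng.uniform(-math.pi, math.pi) for _ in range(n)]  # ground-truth states

# Relative measurements g_ij = theta_i - theta_j on the complete graph,
# with three fixed edges corrupted by a large adversarial offset.
corrupted = {(0, 1), (2, 5), (3, 7)}
ratio = {}
for i in range(n):
    for j in range(i + 1, n):
        g = wrap(theta[i] - theta[j])
        if (i, j) in corrupted:
            g = wrap(g + 2.5)
        ratio[(i, j)] = g

def g_of(i, j):
    """Oriented group ratio: g_ji = -g_ij for SO(2) written additively."""
    return ratio[(i, j)] if i < j else -ratio[(j, i)]

def edge_corruption(i, j):
    """Average inconsistency of the 3-cycles through edge (i, j) -- the
    statistic that the message passing procedure refines iteratively."""
    incons = [abs(wrap(g_of(i, j) + g_of(j, k) + g_of(k, i)))
              for k in range(n) if k not in (i, j)]
    return sum(incons) / len(incons)

scores = {e: edge_corruption(*e) for e in ratio}
flagged = {e for e, s in scores.items() if s > 1.5}  # high score => likely corrupted
```

Every cycle through a corrupted edge here has inconsistency 2.5, while clean edges see at most a minority of contaminated cycles, so thresholding the averaged statistic recovers the corrupted set exactly; the actual procedure reweights cycles iteratively to handle much denser corruption.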

Monte Carlo methods for the Hermitian eigenvalue problem

Series
Applied and Computational Mathematics Seminar
Time
Monday, January 25, 2021 - 14:00 for 1 hour (actually 50 minutes)
Location
ONLINE https://bluejeans.com/884917410
Speaker
Robert Webber, Courant Institute

In quantum mechanics and the analysis of Markov processes, Monte Carlo methods are needed to identify low-lying eigenfunctions of dynamical generators. The standard Monte Carlo approaches for identifying eigenfunctions, however, can be inaccurate or slow to converge. What limits the efficiency of the currently available spectral estimation methods and what is needed to build more efficient methods for the future? Through numerical analysis and computational examples, we begin to answer these questions. We present the first-ever convergence proof and error bounds for the variational approach to conformational dynamics (VAC), the dominant method for estimating eigenfunctions used in biochemistry. Additionally, we analyze and optimize variational Monte Carlo (VMC), which combines Monte Carlo with neural networks to accurately identify low-lying eigenstates of quantum systems.
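The basic mechanism behind such spectral estimators is power/subspace iteration on a transfer operator. The deterministic toy below (a hypothetical 3-state reversible chain standing in for the Monte Carlo estimates used in practice) deflates out the trivial constant eigenfunction and recovers the slowest nontrivial relaxation mode:

```python
import math

# A reversible (symmetric) 3-state chain with metastable sets {0, 1} and {2}.
P = [[0.80, 0.15, 0.05],
     [0.15, 0.80, 0.05],
     [0.05, 0.05, 0.90]]

def matvec(P, v):
    return [sum(P[i][j] * v[j] for j in range(len(v))) for i in range(len(P))]

def deflate(v):
    """Project out the trivial eigenfunction (the constant vector)."""
    m = sum(v) / len(v)
    return [x - m for x in v]

def second_eigenpair(P, v, n_iters=200):
    """Power iteration on the deflated chain: converges to the slowest
    nontrivial mode; the Rayleigh quotient gives its eigenvalue."""
    for _ in range(n_iters):
        v = deflate(matvec(P, v))
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    lam = sum(p * x for p, x in zip(matvec(P, v), v))
    return lam, v

lam, phi = second_eigenpair(P, [1.0, 0.0, 0.0])
# phi is proportional to (1, 1, -2)/sqrt(6): it separates the metastable sets.
```

In VAC and VMC the matrix entries are not known; they must be estimated from trajectory data or sampled configurations, and the errors in those Monte Carlo estimates are exactly what the convergence analysis in the talk quantifies.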

Time-parallel wave propagation in heterogeneous media aided by deep learning

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 23, 2020 - 14:00 for 1 hour (actually 50 minutes)
Location
https://bluejeans.com/884917410
Speaker
Richard Tsai, UT Austin


We present a deep learning framework for learning multiscale wave propagation in heterogeneous media. The framework involves the construction of linear feed-forward networks (experts) that specialize in different media groups and a nonlinear "committee" network that gives an improved approximation of wave propagation in more complicated media.  The framework is then applied to stabilize the "parareal" schemes of Lions, Maday, and Turinici, which are time-parallelization schemes for evolutionary problems. 
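The plain parareal iteration of Lions, Maday, and Turinici alternates a cheap serial coarse propagator G with fine propagations F that can run in parallel across time windows, via the correction U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k). The sketch below applies it to a scalar decay ODE with Euler propagators (a hypothetical stand-in for the wave problems and learned coarse solvers in the talk):

```python
import math

def fine(u, dt, m=100):
    """Fine propagator F: m small explicit Euler steps for u' = -u."""
    h = dt / m
    for _ in range(m):
        u -= h * u
    return u

def coarse(u, dt):
    """Coarse propagator G: a single large Euler step."""
    return u - dt * u

def parareal(u0, T, n, sweeps):
    dt = T / n
    U = [u0]
    for _ in range(n):                    # initial guess from the coarse solver
        U.append(coarse(U[-1], dt))
    for _ in range(sweeps):
        F_old = [fine(U[i], dt) for i in range(n)]    # parallel across windows
        G_old = [coarse(U[i], dt) for i in range(n)]
        V = [u0]
        for i in range(n):                # serial coarse correction sweep
            V.append(coarse(V[i], dt) + F_old[i] - G_old[i])
        U = V
    return U

U = parareal(1.0, 2.0, n=10, sweeps=4)
serial = 1.0
for _ in range(10):
    serial = fine(serial, 0.2)            # fully serial fine solution, for comparison
```

A few sweeps reproduce the serial fine solution while amortizing the expensive fine solves in parallel. For wave propagation the naive coarse solver destabilizes this iteration, which is the failure mode the learned expert/committee propagators are designed to fix.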

Theoretical guarantees of machine learning methods for statistical sampling and PDEs in high dimensions

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 2, 2020 - 16:00 for 1 hour (actually 50 minutes)
Location
https://bluejeans.com/884917410
Speaker
Yulong Lu, University of Massachusetts Amherst

Neural network-based machine learning methods, including most notably deep learning, have achieved extraordinary success in numerous fields. In spite of the rapid development of learning algorithms based on neural networks, their mathematical analysis is far from understood. In particular, it has been a big mystery why neural network-based machine learning methods work extremely well for solving high-dimensional problems.

In this talk, I will demonstrate the power of neural network methods for solving two classes of high-dimensional problems: statistical sampling and PDEs. In the first part of the talk, I will present a universal approximation theorem of deep neural networks for representing high-dimensional probability distributions. In the second part of the talk, I will discuss a generalization error bound of the Deep Ritz Method for solving high-dimensional elliptic problems. For both problems, our theoretical results show that neural network-based methods can overcome the curse of dimensionality.
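The variational principle behind the Deep Ritz Method is easiest to see in one dimension: the solution of -u'' = f with zero boundary values minimizes the Ritz energy E(u) = ∫ (½|u'|² - fu) dx. The sketch below minimizes this energy directly, with a finite-difference ansatz and plain gradient descent standing in (hypothetically) for the neural network ansatz and SGD of the actual method:

```python
import math

# Solve -u'' = f on (0, 1), u(0) = u(1) = 0, with f = pi^2 sin(pi x),
# by minimizing the discrete Ritz energy over nodal values.
N = 32                                   # interior grid points
h = 1.0 / (N + 1)
x = [(i + 1) * h for i in range(N)]
f = [math.pi ** 2 * math.sin(math.pi * xi) for xi in x]

def ritz_energy(u):
    ext = [0.0] + u + [0.0]              # homogeneous boundary conditions
    grad_sq = sum((ext[i + 1] - ext[i]) ** 2 for i in range(N + 1)) / h
    return 0.5 * grad_sq - h * sum(fi * ui for fi, ui in zip(f, u))

def energy_gradient(u):
    ext = [0.0] + u + [0.0]
    return [(2 * ext[i + 1] - ext[i] - ext[i + 2]) / h - h * f[i]
            for i in range(N)]

u = [0.0] * N
lr = 0.4 * h                             # stable step size for explicit descent
for _ in range(5000):
    g = energy_gradient(u)
    u = [ui - lr * gi for ui, gi in zip(u, g)]

err = max(abs(ui - math.sin(math.pi * xi)) for ui, xi in zip(u, x))
```

In high dimensions the grid becomes infeasible: the Deep Ritz Method replaces the nodal values with a neural network and the quadrature with Monte Carlo sampling, and the generalization bound in the talk controls the resulting error without a dimension-dependent rate.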

A Few Thoughts on Deep Learning-Based Scientific Computing

Series
Applied and Computational Mathematics Seminar
Time
Monday, October 26, 2020 - 14:00 for 1 hour (actually 50 minutes)
Location
https://bluejeans.com/884917410
Speaker
Haizhao Yang, Purdue University

The remarkable success of deep learning in computer science has evinced potentially great applications of deep learning in computational and applied mathematics. Understanding the mathematical principles of deep learning is crucial to validating and advancing deep learning-based scientific computing. We present a few thoughts on the theoretical foundation of this topic and our methodology for designing efficient solutions of high-dimensional and highly nonlinear partial differential equations, mainly focusing on the approximation and optimization of deep neural networks.

On the Continuum Between Models, Data-Driven Discovery and Machine Learning: Mapping the Continuum of Molecular Conformations Using Cryo-Electron Microscopy

Series
Applied and Computational Mathematics Seminar
Time
Monday, October 19, 2020 - 14:00 for 1 hour (actually 50 minutes)
Location
https://bluejeans.com/884917410
Speaker
Roy Lederman, Yale University

Cryo-Electron Microscopy (cryo-EM) is an imaging technology that is revolutionizing structural biology. Cryo-electron microscopes produce a large number of very noisy two-dimensional projection images of individual frozen molecules; unlike related methods, such as computed tomography (CT), the viewing direction of each particle image is unknown. The unknown directions, together with extreme levels of noise and additional technical factors, make the determination of the structure of molecules challenging. While other methods for structure determination, such as X-ray crystallography and nuclear magnetic resonance (NMR), measure ensembles of molecules, cryo-electron microscopes produce images of individual molecules. Therefore, cryo-EM could potentially be used to study mixtures of different conformations of molecules. Indeed, current algorithms have been very successful at analyzing homogeneous samples, and can recover some distinct conformations mixed in solutions, but the determination of multiple conformations, and in particular continua of similar conformations (continuous heterogeneity), remains one of the open problems in cryo-EM. In practice, some of the key components in “molecular machines” are flexible and therefore appear as very blurry regions in 3-D reconstructions of macro-molecular structures that are otherwise stunning in resolution and detail.

We will discuss “hyper-molecules,” the mathematical formulation of heterogenous 3-D objects as higher dimensional objects, and the machinery that goes into recovering these “hyper-objects” from data. We will discuss some of the statistical and computational challenges, and how they are addressed by merging data-driven exploration, models and computational tools originally built for deep-learning.

This is joint work with Joakim Andén and Amit Singer.

Numerical methods for solving nonlinear PDEs: from homotopy methods to machine learning

Series
Applied and Computational Mathematics Seminar
Time
Monday, October 12, 2020 - 14:00 for 1 hour (actually 50 minutes)
Location
https://bluejeans.com/884917410
Speaker
Wenrui Hao, Penn State University

Many systems of nonlinear PDEs arise in engineering and biology, and their multiple-solution structures, such as pattern formation, have attracted considerable attention. In this talk, I will present several methods for computing the multiple solutions of nonlinear PDEs. Specifically, I will introduce the homotopy continuation technique for computing the multiple steady states of nonlinear differential equations and for exploring the relationship between the number of steady states and the parameters. I will also introduce a randomized Newton's method for solving the nonlinear systems arising from neural network discretizations of nonlinear PDEs. Several benchmark problems will be used to illustrate these ideas.
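The homotopy continuation idea can be sketched for a single polynomial: deform an easy start system g into the target f through H(x, t) = (1-t)γg(x) + tf(x), and track each known root of g to a root of f with Newton corrections along the way. The example below uses a hypothetical cubic and the standard complex "gamma trick" (production solvers use adaptive predictor-corrector path tracking rather than this fixed-step version):

```python
import cmath

f  = lambda x: x**3 - 2*x**2 - x + 2     # target polynomial; roots are 1, -1, 2
df = lambda x: 3*x**2 - 4*x - 1
g  = lambda x: x**3 - 1                  # start system with known roots
dg = lambda x: 3*x**2

gamma = complex(0.8, 0.6)                # generic complex constant ("gamma trick")
starts = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def newton(h, dh, x, iters=5):
    for _ in range(iters):
        x = x - h(x) / dh(x)
    return x

def track(x, steps=200):
    """Follow one solution path of H(x, t) = (1-t)*gamma*g(x) + t*f(x)
    from t = 0 to t = 1, correcting with Newton at each step."""
    for s in range(1, steps + 1):
        t = s / steps
        h  = lambda z: (1 - t) * gamma * g(z) + t * f(z)
        dh = lambda z: (1 - t) * gamma * dg(z) + t * df(z)
        x = newton(h, dh, x)
    return x

roots = sorted((track(x0) for x0 in starts), key=lambda z: z.real)
# Each start root of g flows to a distinct root of f, so all three
# solutions are found, not just the one nearest an initial guess.
```

The same deformation principle extends to discretized PDE systems, where tracking many start solutions is what exposes the multiple steady states that a single Newton solve would miss.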
