Monday, February 11, 2019 - 13:55
Location: Skiles 005
Prof. Roland Glowinski
University of Houston
roland@math.uh.edu
Organizer: Hao Liu

Monday, December 3, 2018 - 13:55
Location: Skiles 005
Fei Lu
Johns Hopkins University
feilu@math.jhu.edu
Organizer: Wenjing Liao

Self-interacting systems of particles or agents arise in many areas of science, such as particle systems in physics, flocking and swarming models in biology, and opinion dynamics in social science. An interesting question is how to learn the laws of interaction between the particles or agents from trajectory data. In the case of distance-based interaction laws, we present efficient regression algorithms to estimate the interaction kernels, and we develop a nonparametric statistical learning theory addressing learnability, consistency, and optimal rates of convergence of the estimators. In particular, we show that despite the high dimensionality of the systems, optimal learning rates can still be achieved.
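The distance-based setting above admits a compact illustration. The following is a hedged sketch (a toy construction, not the speaker's algorithm): simulate the first-order model dx_i/dt = (1/N) * sum_j phi(|x_j - x_i|)(x_j - x_i) with a known kernel, then recover the kernel from trajectory data by least-squares regression on a piecewise-constant basis. The kernel phi(r) = e^{-r}, the bin count, and all simulation parameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, steps, dt = 8, 2, 200, 0.01
phi_true = lambda r: np.exp(-r)          # assumed ground-truth kernel

# simulate trajectories of the interacting-particle system (forward Euler)
X = rng.normal(size=(N, d))
data = []
for _ in range(steps):
    diff = X[None, :, :] - X[:, None, :]          # diff[i, j] = x_j - x_i
    r = np.linalg.norm(diff, axis=2)
    W = phi_true(r)
    np.fill_diagonal(W, 0.0)                      # no self-interaction
    V = (W[:, :, None] * diff).mean(axis=1)       # velocities dx_i/dt
    data.append((X.copy(), V.copy()))
    X = X + dt * V

# regression: represent phi by indicator functions on K_bins distance bins
K_bins, r_max = 40, 4.0
edges = np.linspace(0.0, r_max, K_bins + 1)
rows_A, rows_b = [], []
for X_t, V_t in data:
    diff = X_t[None, :, :] - X_t[:, None, :]
    r = np.linalg.norm(diff, axis=2)
    bins = np.clip(np.digitize(r, edges) - 1, 0, K_bins - 1)
    for i in range(N):
        A_i = np.zeros((d, K_bins))               # velocity is linear in phi
        for j in range(N):
            if j != i:
                A_i[:, bins[i, j]] += diff[i, j] / N
        rows_A.append(A_i)
        rows_b.append(V_t[i])
A = np.vstack(rows_A)
b = np.concatenate(rows_b)
coef, *_ = np.linalg.lstsq(A, b, rcond=None)      # estimated kernel values per bin
rel_residual = np.linalg.norm(A @ coef - b) / np.linalg.norm(b)
```

Because the velocities depend linearly on the kernel values, the estimation reduces to an ordinary least-squares problem; the small relative residual reflects only the piecewise-constant approximation error of the smooth kernel.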

Monday, November 26, 2018 - 13:55
Location: Skiles 005
Ray Treinen
Texas State University
rt30@txstate.edu
Organizer: John McCuan

We consider one or more volumes of a liquid or semi-molten material sitting on a substrate, while the vapor above is assumed to contain the same medium in suspension. There may be both evaporation and condensation moving mass from one cell to another. We explore possible equilibrium states of such configurations. Our examples include a single sessile drop (or cell) on the plate, connected clusters of cells of the material on the plate, and a periodic configuration of connected cells on the plate. The shape of the configurations depends on the type of energy we take into consideration, and in settings with a vertical gravitational potential energy the clusters are shown to exhibit a preferred granular scale. The majority of our results are in a lower-dimensional setting; however, some results will be presented in 3-D.

Monday, November 12, 2018 - 13:55
Location: Skiles 005
Prof. Xiaoliang Wan
Louisiana State University
Organizer: Molei Tao

In this talk, we will discuss some computational issues that arise when applying large deviation theory to study small-noise-induced rare events in differential equations. We focus on two specific problems: finding the most probable transition path for an ordinary differential equation, and the asymptotically efficient simulation of rare events for an elliptic problem. Both problems are related to large deviation theory. From a computational point of view, the former is a variational problem while the latter is a sampling problem. For the first problem, we have developed an hp-adaptive minimum action method, and for the second, we will present an importance sampling estimator together with a necessary and sufficient condition for its asymptotic efficiency.
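The rare-event sampling half of the talk can be illustrated on a toy scalar problem (this is a standard textbook construction, not the speaker's elliptic-PDE estimator): estimating P(X > a) for X ~ N(0, eps) as the noise eps shrinks. Naive Monte Carlo rarely sees the event, while an exponentially tilted proposal centered at the large-deviation minimizer x* = a keeps the relative error bounded. The values of eps, a, and the sample size are assumptions for the example.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
eps, a, M = 0.05, 1.0, 100_000
sigma = sqrt(eps)

# naive Monte Carlo: the event {x > a} is almost never hit
x = rng.normal(0.0, sigma, M)
p_naive = np.mean(x > a)

# importance sampling: tilt the proposal to N(a, eps) and reweight by the
# likelihood ratio dN(0, eps)/dN(a, eps) evaluated at the samples
y = rng.normal(a, sigma, M)
weights = np.exp((a**2 - 2.0 * a * y) / (2.0 * eps))
p_is = np.mean((y > a) * weights)

# exact tail probability of N(0, eps) for comparison
p_true = 0.5 * erfc((a / sigma) / sqrt(2.0))
```

Centering the proposal at the minimizer of the large-deviation rate function is exactly the change of measure whose asymptotic efficiency the talk's necessary-and-sufficient condition characterizes in the PDE setting.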

Monday, November 5, 2018 - 13:55
Location: Skiles 005
Lizhen Lin
University of Notre Dame
lizhen.lin@nd.edu
Organizer: Wenjing Liao

Hypothesis testing of structure in covariance matrices is of significant importance, but faces great challenges in high-dimensional settings. Although consistent frequentist one-sample covariance tests have been proposed, there is a lack of simple, computationally scalable, and theoretically sound Bayesian testing methods for large covariance matrices. Motivated by this gap and by the need for tests that are powerful against sparse alternatives, we propose a novel testing framework based on the maximum pairwise Bayes factor. Our initial focus is on one-sample covariance testing; the proposed test can optimally distinguish null and alternative hypotheses in a frequentist asymptotic sense. We then propose diagonal tests and a scalable covariance graph selection procedure that are shown to be consistent. Further, our procedure can effectively control false positives. A simulation study evaluates the proposed approach relative to competitors. The performance of our graph selection method is demonstrated through applications to a sonar data set.

Monday, October 29, 2018 - 13:55
Location: Skiles 005
Prof. Tobin Isaac
Georgia Tech, School of Computational Science and Engineering
Organizer: Sung Ha Kang

We are often forced to make important decisions with imperfect and incomplete data. In model-based inference, our efforts to extract useful information from data are aided by models of what occurs where we have no observations: examples range from climate prediction to patient-specific medicine. In many cases, these models take the form of systems of PDEs with critical yet unknown parameter fields, such as initial conditions or material coefficients of heterogeneous media. A concrete example that I will present is making predictions about the Antarctic ice sheet from satellite observations, where we model the ice sheet using a system of nonlinear Stokes equations with a Robin-type boundary condition, governed by a critical, spatially varying coefficient. This talk will present three aspects of the computational stack used to efficiently estimate statistics for this kind of inference problem.

At the top is a posterior-distribution approximation for Bayesian inference that combines Laplace's method with randomized calculations to compute an optimal low-rank representation. Below that, the performance of this approach to inference depends heavily on the efficient and scalable solution of the underlying model equation and its first- and second-order adjoint equations. A high-level description of a problem (in this case, a nonlinear Stokes boundary value problem) may suggest an approach to designing an optimal solver, but this is just the jumping-off point: differences in geometry, boundary conditions, and other considerations will significantly affect performance. I will discuss how the peculiarities of the ice sheet dynamics problem led to the development of an anisotropic multigrid method (available as a plugin to the PETSc library for scientific computing) that improves on standard approaches.

At the bottom, to increase the accuracy per degree of freedom of discretized PDEs, I develop adaptive mesh refinement (AMR) techniques for large-scale problems. I will present my algorithmic contributions to the p4est library for parallel AMR that enable it to scale to concurrencies of O(10^6), as well as recent work commoditizing AMR techniques in PETSc.
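The "randomized calculations" at the top of the stack can be sketched in a few lines. Below is a hedged illustration of a randomized low-rank symmetric eigendecomposition (in the style of Halko, Martinsson, and Tropp), of the kind used to compress a data-misfit Hessian inside a Laplace approximation of a Bayesian posterior. The matrix here is synthetic; in the application, the matrix-vector products would come from PDE and adjoint solves, and the spectrum and rank are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 300, 10

# synthetic symmetric PSD "Hessian" with a rapidly decaying spectrum
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
lam = 10.0 ** (-np.arange(n) / 3.0)           # eigenvalues 1, 10^(-1/3), ...
H = (Q * lam) @ Q.T

def randomized_eigh(matvec, n, k, oversample=10):
    """Randomized rank-k eigendecomposition using only matrix-vector products."""
    Omega = rng.normal(size=(n, k + oversample))
    Y = matvec(Omega)                          # sample the dominant range of H
    Qr, _ = np.linalg.qr(Y)                    # orthonormal basis for that range
    T = Qr.T @ matvec(Qr)                      # small projected matrix
    w, V = np.linalg.eigh(T)
    idx = np.argsort(w)[::-1][:k]              # keep the k largest eigenpairs
    return w[idx], Qr @ V[:, idx]

w, U = randomized_eigh(lambda X: H @ X, n, k)
H_k = (U * w) @ U.T                            # rank-k reconstruction
rel_err = np.linalg.norm(H - H_k, 2) / np.linalg.norm(H, 2)
```

The point of the construction is that only a handful of applications of H are needed, which is what makes it viable when each application costs a pair of adjoint PDE solves.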

Monday, October 22, 2018 - 13:55
Location: Skiles 005
Professor Hans-Werner van Wyk
Auburn University
Organizer: Martin Short

The fractional Laplacian is a non-local spatial operator describing anomalous diffusion processes, which have been observed abundantly in nature. Despite its many similarities with the classical Laplacian in unbounded domains, its definition on bounded regions is more problematic. So is its numerical discretization. Difficulties arise as a result of the integral kernel's singularity at the origin as well as its unbounded support. In this talk, we discuss a novel finite difference method to discretize the fractional Laplacian in hypersingular integral form. By introducing a splitting parameter, we first formulate the fractional Laplacian as the weighted integral of a function with a weaker singularity, and then approximate it by a weighted trapezoidal rule. Our method generalizes the standard finite difference approximation of the classical Laplacian and exhibits the same quadratic convergence rate, for any fractional power in (0, 2), under sufficient regularity conditions. We present theoretical error bounds and demonstrate our method by applying it to the fractional Poisson equation. The accompanying numerical examples verify our results, as well as give additional insight into the convergence behavior of our method.
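As a hedged numerical illustration of the hypersingular-integral form discussed above (a naive midpoint quadrature, not the speaker's weighted trapezoidal scheme), one can use the symmetrized 1D integrand, which removes the principal value: (-Delta)^s u(x) = C(1,s) * ∫_0^∞ (2u(x) - u(x+r) - u(x-r)) / r^(1+2s) dr, with C(1,s) = 4^s * s * Gamma(1/2 + s) / (sqrt(pi) * Gamma(1 - s)) for s in (0, 1). The choice s = 1/2 and the quadrature parameters are assumptions; the check uses u(x) = cos(x), for which (-Delta)^s u = u since the Fourier symbol is |xi|^(2s).

```python
import numpy as np
from math import gamma, pi, sqrt

s = 0.5
# normalization constant C(1, s) of the 1D fractional Laplacian; C(1, 1/2) = 1/pi
C = 4**s * s * gamma(0.5 + s) / (sqrt(pi) * gamma(1.0 - s))

def frac_lap(u, x, h=0.005, R=2000.0):
    """Midpoint-rule quadrature of the symmetrized hypersingular integral.

    The symmetrized difference 2u(x) - u(x+r) - u(x-r) behaves like r^2 near
    r = 0, so for s < 1 the integrand is bounded and midpoint nodes avoid the
    singularity at r = 0; the tail beyond R is truncated.
    """
    r = np.arange(0.5 * h, R, h)
    integrand = (2.0 * u(x) - u(x + r) - u(x - r)) / r**(1.0 + 2.0 * s)
    return C * h * integrand.sum()

# verification against the exact symbol: (-Delta)^(1/2) cos = cos
val0 = frac_lap(np.cos, 0.0)
val1 = frac_lap(np.cos, 1.0)
```

This direct quadrature converges only at the rate of the midpoint rule plus a truncation error; the weighted trapezoidal construction of the talk is designed precisely to recover quadratic convergence uniformly in the fractional power.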

Monday, October 15, 2018 - 13:55
Location: Skiles 005
Prof. Yun Jing
NCSU
Organizer: Molei Tao

In recent years, metamaterials have drawn a great deal of attention in the scientific community due to their unusual properties and useful applications. Metamaterials are artificial materials made of subwavelength microstructures. They are well known to exhibit exotic properties and can manipulate wave propagation in ways that are impossible with natural materials. In this talk, I will present our recent work on membrane-type acoustic metamaterials (AMMs). First, I will discuss how to achieve near-zero density/index AMMs using membranes. We numerically show that such an AMM can be utilized to achieve angular filtering and manipulate wave-fronts. Next, I will discuss the design of an acoustic complementary metamaterial (CMM). Such a CMM can be used to acoustically cancel out aberrating layers so that sound transmission can be greatly enhanced. This material could find use in transcranial ultrasound beam focusing and non-destructive testing through metal layers. I will then discuss our recent work on using membrane-type AMMs for low-frequency noise reduction. We integrated membranes with honeycomb structures to design simultaneously lightweight, strong, and sound-proof AMMs. Experimental results will be shown to demonstrate the effectiveness of such an AMM. Finally, I will discuss how to achieve a broadband hyperbolic AMM using membranes.

Monday, October 1, 2018 - 13:55
Location: Skiles 005
Dr. Andre Wibisono
Georgia Tech CS
Organizer: Molei Tao

Accelerated gradient methods play a central role in optimization, achieving the optimal convergence rates in many settings. While many extensions of Nesterov's original acceleration method have been proposed, it is not yet clear what the natural scope of the acceleration concept is. In this work, we study accelerated methods from a continuous-time perspective. We show that there is a Bregman Lagrangian functional that generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that in continuous time, these accelerated methods correspond to traveling the same curve in spacetime at different speeds. This is in contrast to the family of rescaled gradient flows, which correspond to changing the distance in space. We show how to implement both the rescaled and accelerated gradient methods as algorithms in discrete time with matching convergence rates. These algorithms achieve faster convergence rates for convex optimization under higher-order smoothness assumptions. We will also discuss lower bounds and some open questions. Joint work with Ashia Wilson and Michael Jordan.
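The rate separation the talk builds on can be seen in a few lines. Below is a standard textbook comparison (not an example from the talk): on a smooth convex quadratic, plain gradient descent attains O(1/k) in function value while Nesterov's accelerated method attains O(1/k^2). The diagonal test problem and iteration count are assumptions.

```python
import numpy as np

n, K = 50, 1000
lam = np.logspace(-3, 0, n)        # eigenvalues of a diagonal quadratic
x_star = np.ones(n)                # chosen minimizer
b = lam * x_star                   # so f(x) = 0.5 x' diag(lam) x - b' x has min at x_star
f = lambda x: 0.5 * lam @ (x * x) - b @ x
grad = lambda x: lam * x - b
f_star = f(x_star)
L = lam.max()                      # smoothness (Lipschitz-gradient) constant

x_gd = np.zeros(n)                 # plain gradient descent iterate
x = y = np.zeros(n)                # Nesterov: main iterate x, extrapolated point y
for k in range(1, K + 1):
    x_gd = x_gd - grad(x_gd) / L
    x_new = y - grad(y) / L                         # gradient step at y
    y = x_new + (k - 1) / (k + 2) * (x_new - x)     # momentum extrapolation
    x = x_new

gap_gd = f(x_gd) - f_star          # O(1/k) regime
gap_acc = f(x) - f_star            # O(1/k^2) regime: substantially smaller
```

In the continuous-time picture of the talk, both trajectories follow flows of the same underlying Lagrangian family; the (k - 1)/(k + 2) momentum coefficient is the discrete-time shadow of the damping term in the corresponding ODE.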

Monday, September 24, 2018 - 13:55
Location: Skiles 005
Anthony Yezzi
Georgia Tech, ECE
Organizer: Sung Ha Kang

Following the seminal work of Nesterov, accelerated optimization methods (sometimes referred to as momentum methods) have been used to powerfully boost the performance of first-order, gradient-based parameter estimation in scenarios where second-order optimization strategies are either inapplicable or impractical. Not only does accelerated gradient descent converge considerably faster than traditional gradient descent, but it performs a more robust local search of the parameter space by initially overshooting and then oscillating back as it settles into a final configuration, thereby selecting only local minimizers with an attraction basin large enough to accommodate the initial overshoot. This behavior has made accelerated search methods particularly popular within the machine learning community, where stochastic variants have been proposed as well. So far, however, accelerated optimization methods have been applied to searches over finite-dimensional parameter spaces. We show how a variational setting for these finite-dimensional methods (recently formulated by Wibisono, Wilson, and Jordan) can be extended to the infinite-dimensional setting, both in linear functional spaces as well as to the more complicated manifold of 2D curves and 3D surfaces.