Seminars and Colloquia by Series

Monday, November 27, 2017 - 14:00 , Location: Skiles 005 , Zhiliang Xu , Applied and Computational Mathematics and Statistics Dept, U of Notre Dame , zxu2@nd.edu , Organizer: Yingjie Liu

In this talk, we will present new central and central discontinuous Galerkin (DG) schemes for solving the ideal magnetohydrodynamic (MHD) equations on triangular grids while preserving a globally divergence-free magnetic field. These schemes combine the constrained transport (CT) scheme of Evans and Hawley with central schemes and central DG methods on overlapping cells, and thus avoid solving Riemann problems across cell edges where the numerical solution is discontinuous. The schemes are formally second-order accurate; the main development is the reconstruction of a globally divergence-free magnetic field on the polygonal dual mesh. Moreover, the computational cost is reduced by solving the complete set of governing equations on the primal grid while solving only the magnetic induction equation on the polygonal dual mesh.
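
For reference, in ideal MHD the magnetic field evolves by the induction equation subject to the solenoidal constraint that CT-type reconstructions are designed to preserve discretely:

$$ \frac{\partial \mathbf{B}}{\partial t} = \nabla \times \left( \mathbf{u} \times \mathbf{B} \right), \qquad \nabla \cdot \mathbf{B} = 0, $$

where $\mathbf{u}$ is the fluid velocity. Since the divergence of a curl vanishes identically, an initially divergence-free field stays divergence-free at the continuous level; the reconstruction above enforces an analogous identity at the discrete level.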

Monday, November 20, 2017 - 14:00 , Location: Skiles 005 , Yat Tin Chow , Mathematics, UCLA , ytchow@math.ucla.edu , Organizer: Prasad Tetali

In this talk, we will introduce a family of stochastic processes on the Wasserstein space, together with their infinitesimal generators. One of these processes is modeled after Brownian motion and plays a central role in our work. Its infinitesimal generator defines a partial Laplacian on the space of Borel probability measures, taken as a partial trace of a Hessian. We study the eigenfunctions of this partial Laplacian and develop a theory of Fourier analysis. We also consider the heat flow generated by this partial Laplacian on the Wasserstein space, and discuss the smoothing effect of this flow for a particular class of initial conditions. An integration by parts formula, an Itô formula, and an analogous Feynman-Kac formula will also be discussed.

We note the use of these infinitesimal generators in the theory of Mean Field Games, and we expect they will play an important role in future studies of viscosity solutions of PDEs in the Wasserstein space.
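
As a point of reference (the generic definition, not the specific construction of the talk), the infinitesimal generator of a Markov process $(\mu_t)_{t \ge 0}$ on a space such as the Wasserstein space acts on a test function $f$ by

$$ (\mathcal{A}f)(\mu) = \lim_{t \downarrow 0} \frac{\mathbb{E}\left[ f(\mu_t) \mid \mu_0 = \mu \right] - f(\mu)}{t}, $$

and for Brownian motion on $\mathbb{R}^n$ this recovers $\mathcal{A} = \tfrac{1}{2}\Delta$; the partial Laplacian above plays the analogous role for the measure-valued process.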

Monday, November 6, 2017 - 13:55 , Location: Skiles 005 , Prof. Kevin Lin , University of Arizona , klin@math.arizona.edu , Organizer: Molei Tao

Weighted direct samplers, sometimes also called importance
samplers, are Monte Carlo algorithms for generating
independent, weighted samples from a given target
probability distribution. They are used in, e.g., data
assimilation, state estimation for dynamical systems, and
computational statistical mechanics. One challenge in
designing weighted samplers is to ensure the variance of the
weights, and that of the resulting estimator, are
well-behaved. Recently, Chorin, Tu, Morzfeld, and coworkers
have introduced a class of novel weighted samplers called
implicit samplers, which possess a number of nice empirical
properties. In this talk, I will summarize an asymptotic
analysis of implicit samplers in the small-noise limit and
describe a simple method to obtain higher-order accuracy.
I will also discuss extensions to stochastic differential
equations. This is joint work with Jonathan Goodman, Andrew
Leach, and Matthias Morzfeld.
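
For background on the weighted-sampler setting, here is a minimal importance-sampling sketch (a generic illustration in Python, not the implicit-sampler construction itself; the target and proposal densities are invented for the example):

    import numpy as np

    rng = np.random.default_rng(0)

    def log_target(x):
        # Unnormalized log-density of the target: N(2, 0.5^2), chosen purely
        # for illustration.
        return -0.5 * ((x - 2.0) / 0.5) ** 2

    def log_proposal(x):
        # Log-density of the proposal: standard normal (up to a constant).
        return -0.5 * x ** 2

    # Draw independent samples from the proposal and attach importance weights.
    x = rng.standard_normal(100_000)
    logw = log_target(x) - log_proposal(x)
    w = np.exp(logw - logw.max())      # stabilize before normalizing
    w /= w.sum()

    # Self-normalized estimate of E[X] under the target, plus the effective
    # sample size, a standard diagnostic for weight degeneracy.
    mean_est = np.sum(w * x)
    ess = 1.0 / np.sum(w ** 2)
    print(mean_est, ess)

The effective sample size collapses when the weights have high variance, which is precisely the failure mode that careful sampler design, such as the implicit samplers above, aims to avoid.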

Monday, October 16, 2017 - 14:00 , Location: Skiles 005 , Dr. Barak Sober , Tel Aviv University , barakino@gmail.com , Organizer: Doron Lubinsky

We approximate a function defined over a $d$-dimensional manifold $M \subset \mathbb{R}^n$ utilizing only noisy function values at noisy locations on the manifold. To produce the approximation we do not require any knowledge regarding the manifold other than its dimension $d$. The approximation scheme is based upon the Manifold Moving Least-Squares (MMLS) and is therefore resistant to noise in the domain $M$ as well. Furthermore, the approximant is shown to be smooth and of approximation order $O(h^{m+1})$ for non-noisy data, where $h$ is the mesh size w.r.t. $M$ and $m$ is the degree of the local polynomial approximation. In addition, the proposed algorithm is linear in time with respect to the ambient space dimension $n$, making it useful for cases where $d \ll n$. This assumption, that the high-dimensional data is situated on (or near) a significantly lower-dimensional manifold, is prevalent in many high-dimensional problems. Thus, we put our algorithm to numerical tests against state-of-the-art algorithms for regression over manifolds and show its dominance and potential.
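
To give a flavor of the moving least-squares idea underlying MMLS, here is a one-dimensional toy sketch (not the manifold algorithm itself; the kernel width, degree, and test function are illustrative choices):

    import numpy as np

    def mls_eval(x_query, x_data, f_data, degree=2, h=0.3):
        """Evaluate a moving least-squares fit at x_query: fit a local
        polynomial by weighted least squares, with weights decaying with
        distance from the query point (width h is an illustrative choice)."""
        w = np.exp(-((x_data - x_query) / h) ** 2)    # Gaussian weights
        # polyfit weights multiply residuals, so pass sqrt(w) to weight
        # squared residuals by w.
        coeffs = np.polyfit(x_data, f_data, degree, w=np.sqrt(w))
        return np.polyval(coeffs, x_query)

    # Noisy samples of a smooth function on [0, 2*pi].
    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0, 2 * np.pi, 200))
    f = np.sin(x) + 0.05 * rng.standard_normal(x.size)

    print(mls_eval(np.pi / 2, x, f))   # should be close to sin(pi/2) = 1

MMLS extends this local weighted-polynomial idea from an interval to approximation over an unknown $d$-dimensional manifold.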

Monday, October 2, 2017 - 13:55 , Location: Skiles 005 , Weilin Li , University of Maryland, College Park , wl298@math.umd.edu , Organizer: Wenjing Liao

We formulate
super-resolution as an inverse problem in the space of measures, and
introduce a discrete and a continuous model. For the discrete model, the
problem is to accurately recover a sparse high dimensional vector from
its noisy low-frequency Fourier coefficients. We determine a sharp bound
on the minimax recovery error, which is an immediate consequence of a
sharp bound on the smallest singular value of restricted Fourier
matrices. For the continuous model, we study the total variation
minimization method. We borrow ideas from Beurling in order to determine
general conditions for the recovery of singular measures, even those
that do not satisfy a minimum separation condition. This presentation
includes joint work with John Benedetto and Wenjing Liao.
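
As a concrete instance of the discrete measurement model (an illustrative sketch; the sizes, sparsity, and noise level are arbitrary, and oracle least squares stands in for the sharper estimators analyzed in the talk):

    import numpy as np

    rng = np.random.default_rng(2)

    # Illustrative sizes: signal length N, number of low frequencies kept M.
    N, M = 256, 16
    x = np.zeros(N)
    support = rng.choice(N, size=5, replace=False)
    x[support] = rng.standard_normal(5)          # sparse ground truth

    # Restricted Fourier matrix: rows are the M lowest frequencies.
    freqs = np.arange(M)
    F = np.exp(-2j * np.pi * np.outer(freqs, np.arange(N)) / N) / np.sqrt(N)

    y = F @ x + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

    # Oracle least squares on the true support, to illustrate how the
    # conditioning of the restricted columns F[:, support] controls recovery.
    x_hat = np.zeros(N)
    x_hat[support] = np.linalg.lstsq(F[:, support], y, rcond=None)[0].real
    print(np.linalg.norm(x_hat - x))

The smallest singular value of $F$ restricted to the support columns governs how much the noise is amplified in the recovery, which is exactly the quantity the sharp bounds above control.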

Monday, September 25, 2017 - 13:55 , Location: Skiles 005 , Professor Alessandro Veneziani , Emory Department of Mathematics and Computer Science , Organizer: Martin Short

When we try to bring the vast experience of finite element fluid modeling, collected over more than 25 years, into the treatment of cardiovascular diseases, the risk of getting “lost in translation” is real. The most important issues are the reliability we need to guarantee in order to provide trustworthy decision support to clinicians, and the efficiency we need to guarantee to meet the demand coming from a large volume of patients in Computer Aided Clinical Trials, as well as the short timelines required by special circumstances (emergencies) in Surgical Planning.

In this talk, we will report on some recent activities at Emory to make this transition possible. Reliability requirements call for an appropriate integration of measurements and numerical models, as well as for uncertainty quantification. In particular, image and data processing are critical to feeding mathematical models. However, several challenges remain open, e.g., in simulating blood flow in patient-specific arteries after stent deployment, or in assessing the correct boundary data set to be prescribed in complex vascular districts. The gap between theory and practice is, in this case, apparent, and good simulation and assimilation practices in finite elements for clinical hemodynamics need to be drawn up. The talk will cover these topics.

For computational efficiency, we will cover some numerical techniques currently in use for coronary blood flow, such as Hierarchical Model Reduction, and efficient methods for coping with turbulence in aortic flows. As Clinical Trials are currently one of the most important sources of information for medical research and practice, we envision that meeting these reliability and efficiency requirements will make Computer Aided Clinical Trials (specifically those with a strong Finite-Elements-in-Fluids component) an important source of information with a significant impact on the quality of healthcare. This is joint work with the scholars and students of the Emory Center for Mathematics and Computing in Medicine (E(CM)2), the Emory Biomech Core Lab (Don Giddens and Habib Samady), and the Beta-Lab at the University of Pavia (F. Auricchio). This work is supported by the US National Science Foundation, Projects DMS 1419060, 1412963, and 1620406, Fondazione Cariplo, Abbott Vascular Inc., and the XSEDE Consortium.

Monday, September 18, 2017 - 13:55 , Location: Skiles 005 , Prof. Nathan Kutz , University of Washington, Applied Mathematics , Organizer: Martin Short

The emergence of data methods for the sciences in the last decade has
been enabled by the plummeting costs of sensors, computational power,
and data storage. Such vast quantities of data afford us new
opportunities for data-driven discovery, which has been referred to as
the 4th paradigm of scientific discovery. We demonstrate that we can use
emerging, large-scale time-series data from modern sensors to directly
construct, in an adaptive manner, governing equations, even nonlinear
dynamics, that best model the measured system using modern regression
techniques. Recent innovations also allow for handling multi-scale
physics phenomena and control protocols in an adaptive and robust way.
The overall architecture is equation-free in that the dynamics and
control protocols are discovered directly from data acquired from
sensors. The theory developed is demonstrated on a number of canonical
example problems from physics, biology and engineering.
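
A minimal sketch of the regression idea, in the spirit of the sparse identification of nonlinear dynamics (SINDy) approach associated with the speaker's group (the candidate library, threshold, and test system here are illustrative assumptions):

    import numpy as np

    # Simulate noisy data from a damped oscillator: x' = y, y' = -x - 0.1*y.
    dt, T = 0.01, 10.0
    t = np.arange(0, T, dt)
    X = np.zeros((t.size, 2))
    X[0] = [1.0, 0.0]
    for k in range(t.size - 1):
        x, y = X[k]
        X[k + 1] = X[k] + dt * np.array([y, -x - 0.1 * y])

    dXdt = np.gradient(X, dt, axis=0)            # numerical derivatives

    # Library of candidate terms (illustrative choice): [1, x, y, x^2, xy, y^2].
    x, y = X[:, 0], X[:, 1]
    Theta = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])

    # Sequentially thresholded least squares: regress, zero out small
    # coefficients, and refit on the surviving terms.
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(10):
        small = np.abs(Xi) < 0.05
        Xi[small] = 0.0
        for j in range(2):
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dXdt[:, j],
                                             rcond=None)[0]
    print(Xi)   # nonzero entries should recover x' = y, y' = -x - 0.1*y

The sparsity-promoting refit is what turns a generic regression into the discovery of a parsimonious governing equation.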

Friday, August 25, 2017 - 13:55 , Location: Skiles 005 , Prof. Song Li , Zhejiang University , Organizer: Haomin Zhou

In this talk, I shall provide some optimal RIP bounds, which confirm a conjecture on the optimal RIP bound. Furthermore, I shall also investigate some results on signal recovery with redundant dictionaries, which are also related to statistics and sparse representation.
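
For context, a measurement matrix $A$ satisfies the restricted isometry property (RIP) of order $s$ with constant $\delta_s \in (0,1)$ if

$$ (1 - \delta_s)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_s)\|x\|_2^2 $$

for all $s$-sparse vectors $x$; the optimal bounds in question concern how large $\delta_s$ may be while still guaranteeing exact or stable sparse recovery.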

Monday, April 24, 2017 - 14:05 , Location: Skiles 005 , Prof. George Mohler , IUPUI Computer Science , Organizer: Martin Short

In this talk we focus on classification problems where noisy sensor
measurements collected over a time window must be classified into one or
more categories. For example, mobile phone health and insurance apps
take as input time series from the accelerometer, gyroscope and GPS
radio of the phone and output predictions as to whether the user is
still, walking, running, biking, driving, etc. Standard approaches to
this problem consist of first engineering features from statistics of
the data (or a transform) over a window and then training a
discriminative classifier. For two applications we show how these
features can instead be learned in an end-to-end modeling framework with
the advantages of increased accuracy and decreased modeling and
training time. The first application is reconstructing unobserved neural connections from calcium fluorescence time series, and we introduce a novel convolutional neural network architecture
with an inverse covariance layer to solve the problem. The second
application is driving detection on mobile phones with applications to
car telematics and insurance.
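
A toy end-to-end model of the kind described (a hypothetical sketch in PyTorch; the channel counts, window length, and class set are invented for illustration and are not the architecture from the talk):

    import torch
    import torch.nn as nn

    # Input: windows of 256 time steps from 6 sensor channels (e.g., 3-axis
    # accelerometer + 3-axis gyroscope); output: 5 activity classes.
    # All sizes are illustrative.
    model = nn.Sequential(
        nn.Conv1d(6, 32, kernel_size=7, padding=3),   # features learned end-to-end
        nn.ReLU(),
        nn.MaxPool1d(4),
        nn.Conv1d(32, 64, kernel_size=7, padding=3),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),                      # pool over time
        nn.Flatten(),
        nn.Linear(64, 5),                             # class logits
    )

    x = torch.randn(8, 6, 256)       # batch of 8 sensor windows
    logits = model(x)
    print(logits.shape)              # torch.Size([8, 5])

Training such a model directly on labeled windows replaces hand-engineered statistics of the window with learned features.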

Monday, April 17, 2017 - 14:00 , Location: Skiles 005 , Dr. Andre Souza , Georgia Tech , andre.souza@math.gatech.edu , Organizer: Molei Tao

In this talk we discuss how to compute probabilities of extreme events in stochastic differential equations. One approach would be to perform a large number of simulations and gather statistics, but a more efficient alternative is to minimize the Freidlin-Wentzell action. As a consequence of the analysis, one also determines the most likely trajectory that gave rise to the extreme event. We apply this approach to stochastic systems whose deterministic behavior exhibits chaos (the Lorenz and Kuramoto-Sivashinsky equations), comment on the observed behavior, and discuss the results.
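
For orientation, in the small-noise regime $dX_t = b(X_t)\,dt + \sqrt{\varepsilon}\,dW_t$, Freidlin-Wentzell theory weighs a path $\phi$ by the action (written here for identity noise covariance)

$$ S_T(\phi) = \frac{1}{2} \int_0^T \left| \dot{\phi}(t) - b(\phi(t)) \right|^2 dt, $$

and the probability of an extreme event scales as $\exp\left(-\inf_\phi S_T(\phi)/\varepsilon\right)$, where the infimum runs over paths realizing the event; the minimizer is the most likely trajectory mentioned above.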
