
Monday, November 26, 2018 - 13:55
Location: Skiles 005
Ray Treinen
Texas State University
rt30@txstate.edu
Organizer: John McCuan

Monday, November 12, 2018 - 13:55
Location: Skiles 005
Prof. Xiaoliang Wan
Louisiana State University
Organizer: Molei Tao

In this talk, we will discuss some computational issues that arise when applying large deviation theory to study small-noise-induced rare events in differential equations. We focus on two specific problems: the most probable transition path for an ordinary differential equation and the asymptotically efficient simulation of rare events for an elliptic problem. Both problems are rooted in large deviation theory. From a computational point of view, the former is a variational problem while the latter is a sampling problem. For the first problem, we have developed an hp-adaptive minimum action method, and for the second problem, we will present an importance sampling estimator together with a necessary and sufficient condition for its asymptotic efficiency.
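As a toy illustration of the sampling side (my own, not from the talk), the classic large-deviation recipe for asymptotically efficient importance sampling tilts the sampling distribution toward the rare set. A minimal sketch, assuming a scalar Gaussian X ~ N(0, ε) and the rare event {X > a}:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
eps = 0.05          # noise intensity
a = 1.0             # rare-event threshold: estimate P(X > a), X ~ N(0, eps)
n = 100_000

# Crude Monte Carlo: almost no samples land in the rare set.
x = rng.normal(0.0, np.sqrt(eps), n)
p_crude = np.mean(x > a)

# Importance sampling: sample from the tilted law N(a, eps) (the large-deviation
# minimizer shifts the mean to the rare set) and reweight by the density ratio.
y = rng.normal(a, np.sqrt(eps), n)
w = np.exp((a**2 - 2 * a * y) / (2 * eps))   # dN(0,eps)/dN(a,eps)
p_is = np.mean((y > a) * w)

# Exact value for comparison.
p_exact = 0.5 * erfc(a / sqrt(2 * eps))
print(p_crude, p_is, p_exact)
```

With the same budget, the tilted estimator recovers the tiny probability to within a few percent, while crude Monte Carlo typically sees no hits at all.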

Monday, October 22, 2018 - 13:55
Location: Skiles 005
Professor Hans-Werner van Wyk
Auburn University
Organizer: Martin Short

The fractional Laplacian is a non-local spatial operator describing anomalous diffusion processes, which have been observed abundantly in nature. Despite its many similarities with the classical Laplacian in unbounded domains, its definition on bounded regions is more problematic. So is its numerical discretization. Difficulties arise as a result of the integral kernel's singularity at the origin as well as its unbounded support. In this talk, we discuss a novel finite difference method to discretize the fractional Laplacian in hypersingular integral form. By introducing a splitting parameter, we first formulate the fractional Laplacian as the weighted integral of a function with a weaker singularity, and then approximate it by a weighted trapezoidal rule. Our method generalizes the standard finite difference approximation of the classical Laplacian and exhibits the same quadratic convergence rate, for any fractional power in (0, 2), under sufficient regularity conditions. We present theoretical error bounds and demonstrate our method by applying it to the fractional Poisson equation. The accompanying numerical examples verify our results, as well as give additional insight into the convergence behavior of our method.
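The quadratic-rate claim matches the classical baseline that the talk's method generalizes. A minimal sketch (not the talk's scheme) checking the standard centered second difference on u(x) = sin(πx), whose Laplacian is known in closed form:

```python
import numpy as np

# Observed convergence order of the classical second-difference Laplacian,
# the limiting case of the fractional scheme as the power tends to 2.
def laplacian_error(n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.sin(np.pi * x)
    # centered second difference at interior nodes
    lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    exact = -np.pi**2 * np.sin(np.pi * x[1:-1])
    return np.max(np.abs(lap - exact))

e1, e2 = laplacian_error(100), laplacian_error(200)
rate = np.log2(e1 / e2)   # observed order, expected close to 2
print(e1, e2, rate)
```

Halving the mesh size cuts the error by a factor of four, i.e., the scheme is second order, which is the rate the talk reports for the full fractional range under sufficient regularity.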

Monday, October 15, 2018 - 13:55
Location: Skiles 005
Prof. Yun Jing
NCSU
Organizer: Molei Tao

In recent years, metamaterials have drawn a great deal of attention in the scientific community due to their unusual properties and useful applications. Metamaterials are artificial materials made of subwavelength microstructures. They are well known to exhibit exotic properties and can manipulate wave propagation in ways that are impossible with natural materials. In this talk, I will present our recent work on membrane-type acoustic metamaterials (AMMs). First, I will talk about how to achieve near-zero density/index AMMs using membranes. We numerically show that such an AMM can be utilized to achieve angular filtering and manipulate wave-fronts. Next, I will talk about the design of an acoustic complementary metamaterial (CMM). Such a CMM can be used to acoustically cancel out aberrating layers so that sound transmission can be greatly enhanced. This material could find use in transcranial ultrasound beam focusing and non-destructive testing through metal layers. I will then talk about our recent work on using membrane-type AMMs for low-frequency noise reduction. We integrated membranes with honeycomb structures to design simultaneously lightweight, strong, and sound-proof AMMs. Experimental results will be shown to demonstrate the effectiveness of such an AMM. Finally, I will talk about how to achieve a broadband hyperbolic AMM using membranes.

Monday, October 1, 2018 - 13:55
Location: Skiles 005
Dr. Andre Wibisono
Georgia Tech CS
Organizer: Molei Tao

Accelerated gradient methods play a central role in optimization, achieving the optimal convergence rates in many settings. While many extensions of Nesterov's original acceleration method have been proposed, it is not yet clear what the natural scope of the acceleration concept is. In this work, we study accelerated methods from a continuous-time perspective. We show there is a Bregman Lagrangian functional that generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that in continuous time, these accelerated methods correspond to traveling the same curve in spacetime at different speeds. This is in contrast to the family of rescaled gradient flows, which correspond to changing the distance in space. We show how to implement both the rescaled and accelerated gradient methods as algorithms in discrete time with matching convergence rates. These algorithms achieve faster convergence rates for convex optimization under higher-order smoothness assumptions. We will also discuss lower bounds and some open questions. Joint work with Ashia Wilson and Michael Jordan.
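A minimal numerical sketch (my own, not from the talk) of the speed-up that acceleration buys: plain gradient descent versus Nesterov's method on an ill-conditioned convex quadratic, using the standard step size 1/L and the textbook momentum coefficient k/(k+3):

```python
import numpy as np

# f(x) = 0.5 * x.T A x with an ill-conditioned A, so plain GD is slow.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
f = lambda x: 0.5 * x @ A @ x
s = 1.0 / 100.0                      # step size 1/L, L = largest eigenvalue

x_gd = np.array([1.0, 1.0])
x, y_prev = np.array([1.0, 1.0]), np.array([1.0, 1.0])
for k in range(200):
    x_gd = x_gd - s * grad(x_gd)           # gradient descent
    y = x - s * grad(x)                    # Nesterov: gradient step...
    x = y + (k / (k + 3)) * (y - y_prev)   # ...plus momentum extrapolation
    y_prev = y

print(f(x_gd), f(y_prev))   # accelerated iterate reaches a much lower value
```

After the same number of gradient evaluations, the accelerated iterate is orders of magnitude closer to the minimum, reflecting the O(1/k^2) versus O(1/k)-type rate gap for convex problems.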

Monday, September 24, 2018 - 13:55
Location: Skiles 005
Anthony Yezzi
Georgia Tech, ECE
Organizer: Sung Ha Kang

Following the seminal work of Nesterov, accelerated optimization methods (sometimes referred to as momentum methods) have been used to powerfully boost the performance of first-order, gradient-based parameter estimation in scenarios where second-order optimization strategies are either inapplicable or impractical. Not only does accelerated gradient descent converge considerably faster than traditional gradient descent, but it performs a more robust local search of the parameter space by initially overshooting and then oscillating back as it settles into a final configuration, thereby selecting only local minimizers with an attraction basin large enough to accommodate the initial overshoot. This behavior has made accelerated search methods particularly popular within the machine learning community, where stochastic variants have been proposed as well. So far, however, accelerated optimization methods have been applied only to searches over finite-dimensional parameter spaces. We show how a variational setting for these finite-dimensional methods (recently formulated by Wibisono, Wilson, and Jordan) can be extended to the infinite-dimensional setting, both in linear function spaces and on the more complicated manifolds of 2D curves and 3D surfaces.
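The overshoot-and-oscillate behavior described above is already visible in the continuous-time limit of Nesterov's method, the ODE x'' + (3/t) x' + ∇f(x) = 0 studied by Su, Boyd, and Candes. A rough forward-Euler sketch for the scalar case f(x) = x²/2 (illustrative only; the talk's infinite-dimensional setting is far richer):

```python
import numpy as np

# Integrate x'' + (3/t) x' + x = 0 from x(0)=1, x'(0)=0 with explicit Euler.
dt, t, x, v = 1e-3, 1e-3, 1.0, 0.0
traj = []
for _ in range(20_000):          # integrate up to t = 20
    a = -(3.0 / t) * v - x       # acceleration prescribed by the ODE
    v += dt * a
    x += dt * v
    t += dt
    traj.append(x)

traj = np.array(traj)
# The trajectory overshoots the minimizer x* = 0 (goes negative),
# oscillates, and only then settles toward 0.
print(traj.min(), abs(traj[-1]))
```

The vanishing 3/t damping is what produces the characteristic early overshoot followed by decaying oscillations around the minimizer.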

Monday, September 17, 2018 - 13:55
Location: Skiles 005
Professor Lourenco Beirao da Veiga
Università di Milano-Bicocca
Organizer: Haomin Zhou

This is a joint seminar of the College of Engineering and the School of Mathematics.

The Virtual Element Method (VEM) is a very recent technology, introduced in [Beirao da Veiga, Brezzi, Cangiani, Manzini, Marini, Russo, 2013, M3AS] for the discretization of partial differential equations, that has enjoyed considerable success in recent years. The VEM can be interpreted as a generalization of the Finite Element Method that allows the use of general polygonal and polyhedral meshes, while keeping the same coding complexity and allowing for arbitrary degree of accuracy. The Virtual Element Method makes use of local functions that are not necessarily polynomials and are defined in an implicit way. Nevertheless, by a wise choice of the degrees of freedom and a novel construction of the associated stiffness matrices, the VEM avoids the explicit integration of such shape functions.

In addition to the possibility of handling general polytopal meshes, the flexibility of the above construction yields other interesting properties with respect to more standard Galerkin methods. For instance, the VEM makes it easy to build discrete spaces of arbitrary C^k regularity, or to satisfy exactly the divergence-free constraint for incompressible fluids.

The present talk is an introduction to the VEM, aiming at showing the main ideas of the method. For simplicity we consider a simple elliptic model problem (pure diffusion with variable coefficients), but set ourselves in the more involved 3D setting. In the first part we introduce the adopted Virtual Element space and the associated degrees of freedom, first by addressing the faces of the polyhedra (i.e., polygons) and then building the space in the full volumes. We then describe the construction of the discrete bilinear form and the ensuing discretization of the problem. Furthermore, we show a set of theoretical and numerical results. In the very final part, we will glance at more involved problems, such as magnetostatics (a mixed problem with more complex interacting spaces) and large-deformation elasticity (a nonlinear problem).

Monday, September 10, 2018 - 13:55
Location: Skiles 005
Sergei Avdonin
University of Alaska Fairbanks
s.avdonin@alaska.edu
Organizer: Wenjing Liao

Quantum graphs are metric graphs with differential equations defined on the edges. Recent interest in control and inverse problems for quantum graphs is motivated by applications to important problems of classical and quantum physics, chemistry, biology, and engineering.

In this talk we describe some new controllability and identifiability results for partial differential equations on compact graphs. In particular, we consider graph-like networks of inhomogeneous strings with masses attached at the interior vertices. We show that the wave transmitted through a mass is more regular than the incoming wave. Therefore, the regularity of the solution to the initial boundary value problem on an edge depends on the combinatorial distance of this edge from the source, which makes control and inverse problems for such systems more difficult.

We prove the exact controllability of the systems with the optimal number of controls and propose an algorithm recovering the unknown densities of the strings, lengths of the edges, attached masses, and the topology of the graph. The proofs are based on the boundary control and leaf-peeling methods developed in our previous papers. The boundary control method is a powerful method in inverse theory which uses deep connections between controllability and identifiability of distributed parameter systems and lends itself to straightforward algorithmic implementations.

Monday, July 2, 2018 - 01:55
Location: Skiles 005
Isabelle Kemajou-Brown
Morgan State University
elisabeth.brown@morgan.edu
Organizer: Luca Dieci

We assume the stock is modeled by a Markov regime-switching diffusion process and that the benchmark depends on the economic factor. Then we solve a risk-sensitive benchmarked asset management problem of a firm. Our method consists of finding the portfolio strategy that minimizes the risk sensitivity of an investor in such an environment, using the general maximum principle.

After the above presentation, the speaker will discuss some of her ongoing research.

Monday, April 16, 2018 - 13:55
Location: Skiles 005
Xiuyuan Cheng
Duke University
xiuyuan.cheng@duke.edu
Organizer: Wenjing Liao

Filters in a Convolutional Neural Network (CNN) contain model parameters learned from enormous amounts of data. The properties of the convolutional filters in a trained network directly affect the quality of the data representation being produced. In this talk, we introduce a framework for decomposing convolutional filters over a truncated expansion under pre-fixed bases, where the expansion coefficients are learned from data. Such a structure not only reduces the number of trainable parameters and the computational load but also explicitly imposes filter regularity through basis truncation. Apart from maintaining prediction accuracy across image classification datasets, the decomposed-filter CNN also produces a representation that is stable with respect to input variations, which is proved under generic assumptions on the basis expansion. Joint work with Qiang Qiu, Robert Calderbank, and Guillermo Sapiro.