
Monday, April 16, 2018 - 13:55 ,
Location: Skiles 005 ,
Xiuyuan Cheng ,
Duke University ,
xiuyuan.cheng@duke.edu ,
Organizer: Wenjing Liao

Filters in a Convolutional Neural Network (CNN) contain model parameters learned from enormous amounts of data. The properties of the convolutional filters in a trained network directly affect the quality of the data representation being produced. In this talk, we introduce a framework for decomposing convolutional filters over a truncated expansion under pre-fixed bases, where the expansion coefficients are learned from data. Such a structure not only reduces the number of trainable parameters and the computational load, but also explicitly imposes filter regularity through basis truncation. Apart from maintaining prediction accuracy across image classification datasets, the decomposed-filter CNN also produces a representation that is stable with respect to input variations, which is proved under generic assumptions on the basis expansion. Joint work with Qiang Qiu, Robert Calderbank, and Guillermo Sapiro.
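To make the decomposition concrete, here is a minimal numpy sketch (an illustration under made-up sizes, not the authors' implementation): each filter in a bank is a linear combination of K pre-fixed basis atoms, so only the expansion coefficients are trainable.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, size = 6, 16, 5   # hypothetical: 6 basis atoms, 16 filters of size 5x5

# Pre-fixed (truncated) basis; random here for illustration, whereas the
# talk uses structured bases whose truncation imposes filter regularity.
bases = rng.standard_normal((K, size, size))

# Only the K * M expansion coefficients are learned from data.
coeffs = rng.standard_normal((M, K))

# Each filter is a linear combination of the fixed basis atoms.
filters = np.einsum('mk,kij->mij', coeffs, bases)

print(M * K, "trainable parameters instead of", M * size * size)
```

With these illustrative sizes the filter bank needs 96 coefficients instead of 400 free weights, and every filter inherits whatever regularity the basis atoms have.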

Monday, April 9, 2018 - 13:55 ,
Location: Skiles 005 ,
Prof. Qingshan Chen ,
Department of Mathematical Sciences, Clemson University ,
qsc@clemson.edu ,
Organizer: Yingjie Liu

Large-scale geophysical flows, i.e., the ocean and atmosphere, evolve on spatial scales ranging from meters to thousands of kilometers, and on temporal scales ranging from seconds to decades. These scales interact in a highly nonlinear fashion, making it extremely challenging to reliably and accurately capture the long-term dynamics of these flows in numerical models. In fact, this problem is closely associated with the grand challenges of long-term weather and climate prediction. Unstructured meshes have been gaining popularity in geophysical models in recent years, because they are almost free of polar singularities and remain highly scalable even at eddy-resolving resolutions. However, to unleash the full potential of these meshes, new schemes are needed. This talk starts with a brief introduction to large-scale geophysical flows. It then goes over the main considerations, i.e., the various numerical and algorithmic choices, that one needs to make in designing numerical schemes for these flows. Finally, a new vorticity-divergence based finite volume scheme will be introduced. Its strengths and challenges, together with some numerical results, will be presented and discussed.
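As background for a vorticity-divergence formulation, the two prognostic quantities can be illustrated on a simple uniform periodic grid with centered differences (the talk's scheme targets unstructured meshes; this sketch only defines the quantities, with illustrative grid sizes):

```python
import numpy as np

n, h = 64, 1.0 / 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')

# A divergence-free test field (a periodic array of vortex cells).
u = -np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
v = np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)

def ddx(f):  # centered difference in x, periodic wrap
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * h)

def ddy(f):  # centered difference in y, periodic wrap
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * h)

vorticity = ddx(v) - ddy(u)    # zeta = v_x - u_y
divergence = ddx(u) + ddy(v)   # delta = u_x + v_y

print("max |divergence|:", np.abs(divergence).max())
```

For this test field the discrete divergence vanishes to machine precision, while the vorticity carries all of the dynamics, which is the kind of separation a vorticity-divergence scheme exploits.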

Monday, April 2, 2018 - 13:55 ,
Location: Skiles 005 ,
Tuo Zhao ,
Georgia Institute of Technology ,
Organizer: Wenjing Liao

Nonconvex optimization naturally arises in many machine learning problems. Machine learning researchers exploit various nonconvex formulations to gain modeling flexibility, estimation robustness, adaptivity, and computational scalability. Although classical computational complexity theory has shown that solving nonconvex optimization problems is generally NP-hard in the worst case, practitioners have proposed numerous heuristic optimization algorithms that achieve outstanding empirical performance in real-world applications. To bridge this gap between practice and theory, we propose a new generation of model-based optimization algorithms and theory, which incorporate statistical thinking into modern optimization. Specifically, when designing practical computational algorithms, we take the underlying statistical models into consideration. Our novel algorithms exploit hidden geometric structures behind many nonconvex optimization problems, and can obtain global optima with the desired statistical properties in polynomial time with high probability.
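A toy illustration of this phenomenon (not the speaker's algorithm): plain gradient descent on the nonconvex rank-1 factorization objective min_x ||x x^T - M||_F^2, where M = z z^T is a planted statistical model, reaches a global optimum from a small random start despite the nonconvexity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
z = rng.standard_normal(n)          # planted signal
M = np.outer(z, z)                  # statistical model: a rank-1 matrix

# Nonconvex objective f(x) = ||x x^T - M||_F^2; gradient 4 (x x^T - M) x.
x = 0.1 * rng.standard_normal(n)    # small random initialization
step = 0.1 / np.linalg.norm(M, 2)
for _ in range(500):
    x = x - step * 4 * (np.outer(x, x) - M) @ x

# The only critical points are 0 (a saddle) and the global optima +z and -z.
err = min(np.linalg.norm(x - z), np.linalg.norm(x + z))
print("distance to nearest global optimum:", err)
```

The benign geometry here (no spurious local minima, an escapable saddle at the origin) is exactly the kind of hidden structure the abstract refers to.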

Monday, March 26, 2018 - 13:55 ,
Location: Skiles 005 ,
Mark Iwen ,
Michigan State University ,
iwenmark@msu.edu ,
Organizer: Wenjing Liao

We propose a general phase retrieval approach that uses correlation-based measurements with compactly supported measurement masks. The method admits deterministic measurement constructions together with a robust, fast recovery algorithm that consists of solving a system of linear equations in a lifted space, followed by finding an eigenvector (e.g., via an inverse power iteration). Theoretical reconstruction error guarantees are presented. Numerical experiments demonstrate robustness and a computational efficiency that outperforms competing approaches on large problems. Finally, we show that this approach also extends trivially to phase retrieval problems based on windowed Fourier measurements.
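The eigenvector-extraction step can be sketched as follows, with illustrative sizes and noise (a generic power iteration, not the paper's exact recovery code): given a noisy lifted matrix X that approximates x x^*, the leading eigenvector recovers x up to a global phase.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)

X = np.outer(x_true, x_true.conj())            # ideal lifted matrix x x^*
X += 0.01 * rng.standard_normal((n, n))        # stand-in for measurement noise
X = (X + X.conj().T) / 2                       # keep it Hermitian

# Power iteration for the leading eigenvector.
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
for _ in range(200):
    v = X @ v
    v /= np.linalg.norm(v)

v *= np.sqrt(abs(v.conj() @ X @ v))            # rescale by sqrt(eigenvalue)
v *= np.exp(1j * np.angle(v.conj() @ x_true))  # align the global phase
print("relative error:", np.linalg.norm(v - x_true) / np.linalg.norm(x_true))
```

The global phase is fundamentally unrecoverable in phase retrieval, which is why the comparison above first aligns it.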

Monday, March 5, 2018 - 13:55 ,
Location: Skiles 005 ,
Nick Dexter ,
University of Tennessee ,
ndexter@utk.edu ,
Organizer: Wenjing Liao

We present and analyze a novel sparse polynomial approximation method
for the solution of PDEs with stochastic and parametric inputs. Our
approach treats the parameterized problem as a problem of joint-sparse
signal reconstruction, i.e.,
the simultaneous reconstruction of a set of signals sharing a common
sparsity pattern from a countable, possibly infinite, set of
measurements. Combined with the standard measurement scheme developed
for compressed sensing-based polynomial approximation, this
approach allows for global approximations of the solution over both
physical and parametric domains. In addition, we show that error estimates comparable to the best s-term approximation, in energy norms, are achievable with minimal sample complexity, while requiring only a priori bounds on the polynomial truncation error. We
perform extensive numerical experiments on several high-dimensional
parameterized elliptic PDE models to demonstrate the superior recovery
properties of the proposed approach.
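A toy version of the joint-sparse recovery problem, with hypothetical sizes (a crude one-step support estimate, not the solver analyzed in the talk): several signals share one support, and the shared structure shows up as large row norms of A^T Y.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, q, s = 60, 100, 8, 5         # measurements, dimension, signals, sparsity

support = rng.choice(n, size=s, replace=False)
X = np.zeros((n, q))
X[support] = rng.choice([-1.0, 1.0], size=(s, q))   # shared sparsity pattern

A = rng.standard_normal((m, n)) / np.sqrt(m)
Y = A @ X                           # all signals measured with the same A

# Joint support estimate: rows of A^T Y with the largest l2 norms. Averaging
# over the q signals is what makes the shared pattern stand out.
row_energy = np.linalg.norm(A.T @ Y, axis=1)
est_support = np.sort(np.argsort(row_energy)[-s:])

# Least-squares refit on the estimated common support.
X_hat = np.zeros((n, q))
X_hat[est_support], *_ = np.linalg.lstsq(A[:, est_support], Y, rcond=None)

print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```

Estimating one common support from all q signals at once is the advantage of the joint-sparse formulation over recovering each signal separately.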

Monday, February 26, 2018 - 14:00 ,
Location: Skiles 005 ,
Prof. Hyenkyun Woo ,
Korea University of Technology and Education ,
Organizer: Sung Ha Kang

Bio: Hyenkyun Woo is an assistant professor at KOREATECH (Korea University of Technology and Education). He received his Ph.D. from Yonsei University and held postdoctoral positions at Georgia Tech, the Korea Institute for Advanced Study, and elsewhere.

In machine learning and signal processing, the beta-divergence is well known as a similarity measure between two positive objects. However, it is unclear whether or not the distance-like structure of the beta-divergence is preserved if we extend its domain to the negative region. In this talk, we study the domain of the beta-divergence and its connection to the Bregman divergence associated with a convex function of Legendre type. In fact, we show that the domain of the beta-divergence (and the corresponding Bregman divergence) includes the negative region under a mild condition on the beta value. Additionally, through the relation between the beta-divergence and the Bregman divergence, we can reformulate various variational models appearing in image processing problems into a unified framework, namely the Bregman variational model. This model has a strong advantage over the beta-divergence-based model due to the dual structure of the Bregman divergence. As an example, we demonstrate how to build a convex reformulated variational model with a negative domain for a classic nonconvex problem that typically appears in synthetic aperture radar image processing.
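For reference, the beta-divergence in its standard form (with the usual Kullback-Leibler and Itakura-Saito limits at beta = 1 and beta = 0) can be written down directly; classically x, y > 0, and the talk concerns when the domain extends to negative values. The inputs below are illustrative.

```python
import numpy as np

# d_beta(x, y) = (x^b + (b-1) y^b - b x y^(b-1)) / (b (b-1))  for b not in {0, 1},
# with KL at b = 1 and Itakura-Saito at b = 0 as limiting cases.
def beta_divergence(x, y, beta):
    x, y = np.asarray(x, float), np.asarray(y, float)
    if beta == 1:                      # Kullback-Leibler limit
        return np.sum(x * np.log(x / y) - x + y)
    if beta == 0:                      # Itakura-Saito limit
        return np.sum(x / y - np.log(x / y) - 1)
    return np.sum((x**beta + (beta - 1) * y**beta - beta * x * y**(beta - 1))
                  / (beta * (beta - 1)))

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.5, 2.0, 2.5])
print(beta_divergence(x, y, 2.0))  # beta = 2 reduces to 0.5 * ||x - y||^2
```

Note that for beta = 2 the formula is defined for all real inputs, while the beta = 0 and beta = 1 cases visibly require positivity; characterizing the in-between cases is the question the talk addresses.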

Saturday, February 24, 2018 - 09:30 ,
Location: Helen M. Aderhold Learning Center (ALC), Room 24 (60 Luckie St NW, Atlanta, GA 30303) ,
Wenjing Liao and others ,
GSU, Clemson, UGA, GT, Emory ,
Organizer: Sung Ha Kang

The Georgia Scientific Computing Symposium is a forum for professors,
postdocs, graduate students and other researchers in Georgia to meet in
an informal setting, to exchange ideas, and to highlight local
scientific computing research. The symposium has been held every year
since 2009 and is open to the entire research community. This year, the symposium will be held on Saturday, February 24, 2018, at Georgia State University. More information can be found at: https://math.gsu.edu/xye/public/gscs/gscs2018.html

Friday, February 23, 2018 - 13:55 ,
Location: Skiles 269 ,
Prof. Justin Kakeu ,
Morehouse College ,
Organizer: Sung Ha Kang

We use a stochastic dynamic programming approach to address the following question: Can a homogeneous resource extraction model (one without extraction costs, new discoveries, or technical progress) generate non-increasing resource prices? The traditional answer to that question contends that prices should exhibit an increasing trend as the exhaustible resource is depleted over time (the Hotelling rule). In contrast, we will show that injecting concerns for the temporal resolution of uncertainty into a resource extraction problem can generate a non-increasing trend in the resource price. Indeed, the expected rate of change of the price can become negative if the premium for temporal resolution of uncertainty is negative and outweighs both the positive discount rate and the short-run risk premium. Numerical examples are provided for illustration.
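The Hotelling benchmark referenced above has a simple discrete-time form: with zero extraction costs, no-arbitrage forces the resource price to grow at the discount rate r, so p_t = p_0 (1 + r)^t. A sketch with illustrative numbers:

```python
import numpy as np

# Hotelling rule in discrete time: holding the resource in the ground must
# earn the discount rate, so the price path is p_t = p0 * (1 + r)^t.
# r, p0, and T below are illustrative choices.
r, p0, T = 0.05, 10.0, 20
t = np.arange(T + 1)
price = p0 * (1 + r) ** t

growth = price[1:] / price[:-1] - 1   # per-period growth rate of the price
print("price grows at a constant rate r:", np.allclose(growth, r))
```

The talk's point is that a negative premium for temporal resolution of uncertainty can push this expected growth rate below zero, breaking the monotone benchmark above.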

Monday, February 5, 2018 - 13:55 ,
Location: Skiles 005 ,
Mark A. Davenport ,
Georgia Institute of Technology ,
Organizer: Wenjing Liao

The discrete prolate spheroidal sequences (DPSSs) provide an efficient
representation for discrete signals that are perfectly timelimited and
nearly bandlimited. Due to the high computational complexity of
projecting onto the DPSS basis - also known as the Slepian basis - this
representation is often overlooked in favor of the fast Fourier
transform (FFT). In this talk I will describe novel fast algorithms for
computing approximate projections onto the leading Slepian basis
elements with a complexity comparable to the FFT. I will also highlight
applications of this Fast Slepian Transform in the context of
compressive sensing and processing of sampled multiband signals.
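For contrast with the fast transform, the direct projection onto the leading Slepian elements can be sketched with numpy alone, computing the DPSSs as eigenvectors of the standard symmetric tridiagonal matrix (Slepian's trick); N, the bandwidth, and the number of kept elements are illustrative choices.

```python
import numpy as np

N, W = 256, 4 / 256          # signal length and half-bandwidth (NW = 4)
K = 8                        # keep roughly 2NW leading basis elements

# Slepian's tridiagonal matrix, whose top eigenvectors are the DPSSs.
i = np.arange(N)
T = np.diag(((N - 1 - 2 * i) / 2.0) ** 2 * np.cos(2 * np.pi * W))
off = i[1:] * (N - i[1:]) / 2.0
T += np.diag(off, 1) + np.diag(off, -1)

eigvals, V = np.linalg.eigh(T)
S = V[:, -K:].T              # (K, N): leading Slepian elements, orthonormal rows

x = np.cos(2 * np.pi * 2 * i / N)   # a tone well inside the band |f| < W
coeffs = S @ x                      # direct projection: the O(NK) cost
x_hat = S.T @ coeffs                # reconstruction from K coefficients

print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

The dense eigendecomposition and the O(NK) matrix-vector products are exactly the costs the Fast Slepian Transform is designed to avoid.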

Monday, January 29, 2018 - 13:55 ,
Location: Skiles 005 ,
Prof. Yifei Lou ,
University of Texas at Dallas ,
Organizer: Sung Ha Kang

A fundamental problem in compressive sensing (CS) is to reconstruct a sparse signal from a number of linear measurements far smaller than the physical dimension of the signal. Currently, CS favors incoherent systems, in which any two measurements are as uncorrelated as possible. In reality, however, many problems are coherent, in which case conventional methods, such as L1 minimization, do not work well. In this talk, I will present a novel nonconvex approach, which minimizes the difference of the L1 and L2 norms, denoted as L1-L2, in order to promote sparsity. In addition to theoretical aspects of the L1-L2 approach, I will discuss two minimization algorithms. One is the difference of convex (DC) function methodology, and the other is based on a proximal operator, which makes some L1 algorithms (e.g., ADMM) applicable to L1-L2. Experiments demonstrate that L1-L2 improves on L1 consistently and outperforms Lp (p between 0 and 1) for highly coherent matrices. Some applications will be discussed, including super-resolution, image processing, and low-rank approximation.
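A compact sketch of the DC approach to the L1-L2 penalty (illustrative sizes and parameters, not the speaker's code): each outer step linearizes the concave term -||x||_2 at the current iterate, and the resulting convex L1-penalized surrogate is handled with proximal-gradient (ISTA) steps.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, s = 64, 256, 5               # hypothetical problem sizes
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.choice([-1.0, 1.0], s)
b = A @ x_true

gamma = 0.01                       # weight on the L1 - L2 penalty
step = 1.0 / np.linalg.norm(A, 2) ** 2

def soft(v, t):                    # soft-thresholding: prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(30):                # outer DC iterations
    nx = np.linalg.norm(x)
    q = x / nx if nx > 0 else np.zeros(n)   # subgradient of ||x||_2
    for _ in range(100):           # inner ISTA steps on the convex surrogate
        grad = A.T @ (A @ x - b) - gamma * q
        x = soft(x - step * grad, step * gamma)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The first outer pass (q = 0) is plain L1 minimization; the subsequent linearizations of -||x||_2 are what distinguish the L1-L2 penalty from L1 alone.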