Seminars and Colloquia by Series

Compute Faster and Learn Better: Machine Learning via Nonconvex Optimization

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 2, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Tuo Zhao, Georgia Institute of Technology
Nonconvex optimization arises naturally in many machine learning problems. Machine learning researchers exploit various nonconvex formulations to gain modeling flexibility, estimation robustness, adaptivity, and computational scalability. Although classical computational complexity theory shows that solving nonconvex optimization problems is NP-hard in the worst case, practitioners have proposed numerous heuristic algorithms that achieve outstanding empirical performance in real-world applications. To bridge this gap between practice and theory, we propose a new generation of model-based optimization algorithms and theory that incorporate statistical thinking into modern optimization. Specifically, when designing practical computational algorithms, we take the underlying statistical models into consideration. Our algorithms exploit hidden geometric structures behind many nonconvex optimization problems and can obtain global optima with the desired statistical properties in polynomial time with high probability.

Fast Phase Retrieval from Localized Time-Frequency Measurements

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 26, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Mark Iwen, Michigan State University
We propose a general phase retrieval approach that uses correlation-based measurements with compactly supported measurement masks. The algorithm admits deterministic measurement constructions together with a robust, fast recovery algorithm that consists of solving a system of linear equations in a lifted space, followed by finding an eigenvector (e.g., via an inverse power iteration). Theoretical reconstruction error guarantees are presented. Numerical experiments demonstrate robustness and computational efficiency that outperforms competing approaches on large problems. Finally, we show that this approach also trivially extends to phase retrieval problems based on windowed Fourier measurements.
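The eigenvector step of the recovery pipeline can be illustrated on a toy problem. This is my own sketch, not the speaker's code: for a rank-one lifted matrix X = x x*, the unknown signal x is exactly the leading eigenvector, which a simple power iteration recovers up to the global phase ambiguity inherent to phase retrieval (the talk's algorithm uses an inverse power iteration; plain power iteration suffices for this illustration).

```python
import numpy as np

def leading_eigvec(X, iters=100):
    """Leading eigenvector of a Hermitian PSD matrix via power iteration."""
    v = np.ones(X.shape[0], dtype=complex)
    for _ in range(iters):
        v = X @ v
        v /= np.linalg.norm(v)
    return v

# Toy "lifted" problem: X = x x^* is rank one, so its leading eigenvector is x.
rng = np.random.default_rng(0)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
x /= np.linalg.norm(x)
X = np.outer(x, x.conj())
v = leading_eigvec(X)
# Phase retrieval recovers x only up to a global phase; align before comparing.
v *= np.exp(1j * np.angle(v.conj() @ x))
err = np.linalg.norm(v - x)
```

In the actual algorithm, X is not formed exactly but estimated by solving a linear system in the lifted space, so the eigenvector extraction also serves to denoise the estimate.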

Joint-sparse recovery for high-dimensional parametric PDEs

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 5, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Nick Dexter, University of Tennessee
We present and analyze a novel sparse polynomial approximation method for the solution of PDEs with stochastic and parametric inputs. Our approach treats the parameterized problem as a problem of joint-sparse signal reconstruction, i.e., the simultaneous reconstruction of a set of signals sharing a common sparsity pattern from a countable, possibly infinite, set of measurements. Combined with the standard measurement scheme developed for compressed sensing-based polynomial approximation, this approach allows for global approximations of the solution over both physical and parametric domains. In addition, we are able to show that, with minimal sample complexity, error estimates comparable to the best s-term approximation, in energy norms, are achievable, while requiring only a priori bounds on polynomial truncation error. We perform extensive numerical experiments on several high-dimensional parameterized elliptic PDE models to demonstrate the superior recovery properties of the proposed approach.
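As a point of reference (the notation here is mine, not taken from the talk), joint-sparse recovery in the multiple-measurement-vector setting is commonly posed as a mixed-norm minimization that couples the columns through their shared row support:

```latex
\min_{Z \in \mathbb{R}^{N \times K}} \; \|Z\|_{2,1}
\quad \text{subject to} \quad A Z = Y,
\qquad
\|Z\|_{2,1} := \sum_{j=1}^{N} \Bigl( \sum_{k=1}^{K} Z_{jk}^2 \Bigr)^{1/2},
```

where the $K$ columns of $Z$ (here, polynomial coefficient vectors at different physical degrees of freedom) share a common sparsity pattern, and the row-wise $\ell_2$ norm inside the sum promotes exactly that shared support.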

A characterization of domain of beta-divergence and its connection to Bregman-divergence

Series
Applied and Computational Mathematics Seminar
Time
Monday, February 26, 2018 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Prof. Hyenkyun Woo, Korea University of Technology and Education

Please Note: Bio: Hyenkyun Woo is an assistant professor at KOREATECH (Korea University of Technology and Education). He received his Ph.D. from Yonsei University and held postdoctoral positions at Georgia Tech, the Korea Institute for Advanced Study, and elsewhere.

In machine learning and signal processing, the beta-divergence is well known as a similarity measure between two positive objects. However, it is unclear whether the distance-like structure of the beta-divergence is preserved if we extend its domain to the negative region. In this talk, we study the domain of the beta-divergence and its connection to the Bregman divergence associated with a convex function of Legendre type. In fact, we show that the domain of the beta-divergence (and the corresponding Bregman divergence) includes the negative region under a mild condition on the beta value. Additionally, through the relation between the beta-divergence and the Bregman divergence, we can reformulate various variational models appearing in image processing into a unified framework, namely the Bregman variational model. This model has a strong advantage over the beta-divergence-based model due to the dual structure of the Bregman divergence. As an example, we demonstrate how to build a convex reformulated variational model with a negative domain for a classic nonconvex problem that arises in synthetic aperture radar image processing.
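For reference (standard definitions, not specific to this talk), the scalar beta-divergence on positive arguments, with its usual limiting cases, and the Bregman divergence it specializes are

```latex
d_\beta(x, y) =
\begin{cases}
\dfrac{x^\beta + (\beta - 1)\, y^\beta - \beta\, x\, y^{\beta - 1}}{\beta(\beta - 1)},
  & \beta \neq 0, 1, \\[1.5ex]
x \log \dfrac{x}{y} - x + y, & \beta = 1 \quad \text{(generalized KL)}, \\[1.5ex]
\dfrac{x}{y} - \log \dfrac{x}{y} - 1, & \beta = 0 \quad \text{(Itakura--Saito)},
\end{cases}
\qquad
D_\phi(x, y) = \phi(x) - \phi(y) - \phi'(y)(x - y).
```

For $\beta \neq 0, 1$, $d_\beta$ is the Bregman divergence $D_\phi$ generated by $\phi(t) = t^\beta / (\beta(\beta - 1))$ on the positive reals; the question addressed in the talk is when this structure survives extension of the domain past $t > 0$.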

Georgia Scientific Computing Symposium

Series
Applied and Computational Mathematics Seminar
Time
Saturday, February 24, 2018 - 09:30 for 8 hours (full day)
Location
Helen M. Aderhold Learning Center (ALC), Room 24 (60 Luckie St NW, Atlanta, GA 30303)
Speaker
Wenjing Liao and others, GSU, Clemson, UGA, GT, Emory
The Georgia Scientific Computing Symposium is a forum for professors, postdocs, graduate students and other researchers in Georgia to meet in an informal setting, to exchange ideas, and to highlight local scientific computing research. The symposium has been held every year since 2009 and is open to the entire research community. This year, the symposium will be held on Saturday, February 24, 2018, at Georgia State University. More information can be found at: https://math.gsu.edu/xye/public/gscs/gscs2018.html

[unusual date and room] Temporal Resolution of Uncertainty and Exhaustible Resource Pricing: A Dynamic Programming Approach

Series
Applied and Computational Mathematics Seminar
Time
Friday, February 23, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 269
Speaker
Prof. Justin Kakeu, Morehouse University
We use a stochastic dynamic programming approach to address the following question: can a homogeneous resource extraction model (one without extraction costs, new discoveries, or technical progress) generate non-increasing resource prices? The traditional answer contends that prices should exhibit an increasing trend as the exhaustible resource is depleted over time (the Hotelling rule). In contrast, we will show that injecting concerns for the temporal resolution of uncertainty into a resource extraction problem can generate a non-increasing trend in the resource price. Indeed, the expected rate of change of the price can become negative if the premium for temporal resolution of uncertainty is negative and outweighs both the positive discount rate and the short-run risk premium. Numerical examples are provided for illustration.
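Schematically (the symbols below are mine, not the speaker's), the classical rule and the modified one described in the abstract can be contrasted as

```latex
\underbrace{\frac{1}{p_t}\,\frac{\mathbb{E}_t[\mathrm{d}p_t]}{\mathrm{d}t} \;=\; r}_{\text{classical Hotelling rule}}
\qquad \text{vs.} \qquad
\frac{1}{p_t}\,\frac{\mathbb{E}_t[\mathrm{d}p_t]}{\mathrm{d}t}
  \;=\; r \;+\; \pi_{\mathrm{risk}} \;+\; \pi_{\mathrm{temporal}},
```

where $r$ is the discount rate, $\pi_{\mathrm{risk}}$ the short-run risk premium, and $\pi_{\mathrm{temporal}}$ the premium for temporal resolution of uncertainty. The expected price growth is negative precisely when $\pi_{\mathrm{temporal}} < -(r + \pi_{\mathrm{risk}})$, which is the condition stated in the abstract.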

The Fast Slepian Transform

Series
Applied and Computational Mathematics Seminar
Time
Monday, February 5, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Mark A. Davenport, Georgia Institute of Technology
The discrete prolate spheroidal sequences (DPSS's) provide an efficient representation for discrete signals that are perfectly timelimited and nearly bandlimited. Due to the high computational complexity of projecting onto the DPSS basis - also known as the Slepian basis - this representation is often overlooked in favor of the fast Fourier transform (FFT). In this talk I will describe novel fast algorithms for computing approximate projections onto the leading Slepian basis elements with a complexity comparable to the FFT. I will also highlight applications of this Fast Slepian Transform in the context of compressive sensing and processing of sampled multiband signals.
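The direct (slow) Slepian projection that the Fast Slepian Transform accelerates can be sketched in a few lines; this is my own illustration using SciPy's DPSS routine, not the speaker's implementation. A timelimited signal that is nearly bandlimited to digital frequencies |f| <= W is captured by roughly the first 2NW Slepian basis vectors (a few extra vectors are included here to sharpen the approximation):

```python
import numpy as np
from scipy.signal.windows import dpss

N, W = 256, 1 / 16            # signal length and digital half-bandwidth
K = int(2 * N * W) + 8        # ~2NW leading Slepians span the bandlimited subspace
S = dpss(N, N * W, Kmax=K)    # shape (K, N); rows are orthonormal Slepian vectors

# A timelimited, nearly bandlimited test signal: low-frequency sinusoids
# whose frequencies (0.01, 0.03) lie well inside the band |f| <= W.
n = np.arange(N)
x = np.cos(2 * np.pi * 0.01 * n) + 0.5 * np.sin(2 * np.pi * 0.03 * n)

coeffs = S @ x                # direct O(NK) projection (the FST approximates this)
x_hat = S.T @ coeffs          # reconstruction from the Slepian coefficients
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

The matrix-vector products above cost O(NK); the Fast Slepian Transform replaces them with an approximate projection at a cost comparable to a single FFT.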

Minimizing the Difference of L1 and L2 norms with Applications

Series
Applied and Computational Mathematics Seminar
Time
Monday, January 29, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Prof. Yifei Lou, University of Texas at Dallas
A fundamental problem in compressive sensing (CS) is to reconstruct a sparse signal from a number of linear measurements far smaller than the physical dimension of the signal. CS traditionally favors incoherent systems, in which any two measurements are as uncorrelated as possible. In reality, however, many problems are coherent, and conventional methods such as L1 minimization do not work well. In this talk, I will present a novel non-convex approach that minimizes the difference of the L1 and L2 norms, denoted L1-L2, in order to promote sparsity. In addition to theoretical aspects of the L1-L2 approach, I will discuss two minimization algorithms. One is the difference-of-convex (DC) function methodology; the other is based on a proximal operator, which makes some L1 algorithms (e.g., ADMM) applicable to L1-L2. Experiments demonstrate that L1-L2 consistently improves on L1 and outperforms Lp (0 &lt; p &lt; 1) for highly coherent matrices. Some applications will be discussed, including super-resolution, image processing, and low-rank approximation.
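The DC methodology mentioned above can be sketched as follows; this is a minimal illustration of the general idea, assuming a least-squares data-fit term and an ISTA inner solver, and is not the speaker's code. At each outer step, the concave part -lam*||x||_2 is linearized at the current iterate, leaving a convex L1-regularized subproblem:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_minus_l2_dca(A, b, lam=0.1, outer=20, inner=200):
    """Minimize 0.5||Ax - b||^2 + lam*(||x||_1 - ||x||_2) by the DC algorithm:
    linearize -lam*||x||_2 at the current iterate, then solve the resulting
    convex L1 subproblem with ISTA."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    for _ in range(outer):
        nx = np.linalg.norm(x)
        v = x / nx if nx > 0 else np.zeros(n)   # subgradient of ||.||_2 at x
        for _ in range(inner):              # ISTA on the convex subproblem
            grad = A.T @ (A @ x - b) - lam * v
            x = soft(x - grad / L, lam / L)
    return x

# Small demo: a sparse signal measured by an underdetermined system.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[3, 17, 31]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = l1_minus_l2_dca(A, b, lam=0.05)
```

Since ||x||_2 >= v.T @ x for any unit vector v (Cauchy-Schwarz), each convex subproblem majorizes the true objective at the current iterate, so the outer iterations decrease the L1-L2 objective monotonically.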

Model-Based Multichannel Blind Deconvolution: Mathematical Analysis and Nonconvex Optimization Algorithms

Series
Applied and Computational Mathematics Seminar
Time
Monday, January 22, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Dr. Kiryung Lee, GT ECE
There are numerous modern applications in data science that involve inference from incomplete data. Various geometric prior models, such as sparse vectors or low-rank matrices, have been employed to address the ill-posed inverse problems arising in these applications. Recently, similar ideas were adopted to tackle more challenging nonlinear inverse problems such as phase retrieval and blind deconvolution. In this talk, we consider the blind deconvolution problem, in which the desired information, a time series, is accessed only through indirect observations of a time-invariant system with uncertainty; the measurements are given by the convolution with an unknown kernel. In particular, we study the mathematical theory of multichannel blind deconvolution, where we observe the outputs of multiple channels that are all excited by the same unknown input source. From these observations, we wish to estimate the source and the impulse responses of each channel simultaneously. We show that this problem is well-posed if the channel impulse responses follow a simple geometric model. Under these models, we show how the channel estimates can be found by solving corresponding non-convex optimization problems. We analyze methods for solving these non-convex programs and provide performance guarantees for each.
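A classical identity behind the multichannel setting can be checked numerically; this is a standard illustration of why multiple channels help, not taken from the talk. If y1 = x * h1 and y2 = x * h2 share the same source x, then y1 * h2 = y2 * h1 by commutativity of convolution, so the unknown channel pair lies in the null space of a linear map built purely from the observed outputs:

```python
import numpy as np

# Cross-convolution identity underlying multichannel blind deconvolution:
# y1 * h2 == y2 * h1 whenever y1 = x * h1 and y2 = x * h2.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)          # unknown common input source
h1 = rng.standard_normal(8)          # channel impulse responses
h2 = rng.standard_normal(8)
y1 = np.convolve(x, h1)              # observed channel outputs
y2 = np.convolve(x, h2)
lhs = np.convolve(y1, h2)
rhs = np.convolve(y2, h1)
gap = np.linalg.norm(lhs - rhs)      # zero up to floating-point error
```

Turning this null-space observation into a well-posed estimation problem requires the geometric channel models discussed in the talk, since without them the identity determines the channels only up to ambiguities.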

Portfolio Optimization Problems for Models with Delays

Series
Applied and Computational Mathematics Seminar
Time
Monday, December 4, 2017 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Tao Pang, Department of Mathematics, North Carolina State University
In the real world, the historical performance of a stock may affect its dynamics, which suggests considering models with delays. We consider a portfolio optimization problem of Merton's type in which the risky asset is described by a stochastic delay model. We derive the Hamilton-Jacobi-Bellman (HJB) equation, which turns out to be a nonlinear degenerate partial differential equation of elliptic type. Despite the challenges posed by the nonlinearity and the degeneracy, we establish existence and verification results.
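For orientation (this is the classical no-delay benchmark, not the speaker's delayed model), Merton's HJB equation for the value function $V(t, w)$ of wealth $w$ with terminal utility $U$ reads

```latex
V_t + \sup_{\pi}\Bigl\{ \bigl(r + \pi(\mu - r)\bigr)\, w\, V_w
      + \tfrac{1}{2}\, \pi^2 \sigma^2 w^2\, V_{ww} \Bigr\} = 0,
\qquad V(T, w) = U(w),
```

with risk-free rate $r$, drift $\mu$, volatility $\sigma$, and fraction $\pi$ of wealth in the risky asset. Introducing delay terms into the wealth dynamics enlarges the state and produces the degenerate nonlinear elliptic-type equation described in the abstract.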
