Seminars and Colloquia by Series

Faster convex optimization with higher-order smoothness via rescaled and accelerated gradient flows

Series
Applied and Computational Mathematics Seminar
Time
Monday, October 1, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Dr. Andre Wibisono, Georgia Tech CS
Accelerated gradient methods play a central role in optimization, achieving the optimal convergence rates in many settings. While many extensions of Nesterov's original acceleration method have been proposed, the natural scope of the acceleration concept is not yet clear. In this work, we study accelerated methods from a continuous-time perspective. We show there is a Bregman Lagrangian functional that generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that in continuous time, these accelerated methods correspond to traveling the same curve in spacetime at different speeds. This is in contrast to the family of rescaled gradient flows, which correspond to changing the distance in space. We show how to implement both the rescaled and accelerated gradient methods as algorithms in discrete time with matching convergence rates. These algorithms achieve faster convergence rates for convex optimization under higher-order smoothness assumptions. We will also discuss lower bounds and some open questions. Joint work with Ashia Wilson and Michael Jordan.
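As a point of reference for the acceleration phenomenon discussed above, the sketch below compares plain gradient descent with Nesterov's accelerated variant on a convex quadratic. This is a generic textbook illustration, not the Bregman Lagrangian construction from the talk; the test problem and step sizes are illustrative choices.

```python
import numpy as np

# Minimize f(x) = 0.5 * x^T A x (convex quadratic) with plain and
# Nesterov-accelerated gradient descent, using the same step size 1/L.
np.random.seed(0)
n = 50
Q = np.random.randn(n, n)
A = Q.T @ Q / n + 0.1 * np.eye(n)        # symmetric positive definite
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
L = np.linalg.eigvalsh(A)[-1]            # Lipschitz constant of the gradient

x0 = np.random.randn(n)
steps = 200

# Plain gradient descent: O(1/k) rate for smooth convex f
x = x0.copy()
for k in range(steps):
    x = x - grad(x) / L
f_gd = f(x)

# Nesterov acceleration: O(1/k^2) rate via the momentum sequence y_k
x, y = x0.copy(), x0.copy()
for k in range(1, steps + 1):
    x_next = y - grad(y) / L
    y = x_next + (k - 1) / (k + 2) * (x_next - x)
    x = x_next
f_agd = f(x)

print("gradient descent:", f_gd, " accelerated:", f_agd)
```

After the same number of iterations, the accelerated iterate typically reaches a markedly smaller objective value, reflecting the O(1/k^2) versus O(1/k) gap.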

Accelerated Optimization in the PDE Framework

Series
Applied and Computational Mathematics Seminar
Time
Monday, September 24, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Anthony Yezzi, Georgia Tech ECE
Following the seminal work of Nesterov, accelerated optimization methods (sometimes referred to as momentum methods) have been used to powerfully boost the performance of first-order, gradient-based parameter estimation in scenarios where second-order optimization strategies are either inapplicable or impractical. Not only does accelerated gradient descent converge considerably faster than traditional gradient descent, but it performs a more robust local search of the parameter space by initially overshooting and then oscillating back as it settles into a final configuration, thereby selecting only local minimizers with an attraction basin large enough to accommodate the initial overshoot. This behavior has made accelerated search methods particularly popular within the machine learning community, where stochastic variants have been proposed as well. So far, however, accelerated optimization methods have been applied to searches over finite parameter spaces. We show how a variational setting for these finite-dimensional methods (recently formulated by Wibisono, Wilson, and Jordan) can be extended to the infinite-dimensional setting, both to linear function spaces and to the more complicated manifolds of 2D curves and 3D surfaces.

AN INTRODUCTION TO VIRTUAL ELEMENTS IN 3D

Series
Applied and Computational Mathematics Seminar
Time
Monday, September 17, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Professor Lourenco Beirao da Veiga, Università di Milano-Bicocca

Please Note: This is a joint seminar by College of Engineering and School of Math.

The Virtual Element Method (VEM) is a very recent technology, introduced in [Beirao da Veiga, Brezzi, Cangiani, Manzini, Marini, Russo, 2013, M3AS] for the discretization of partial differential equations, that has enjoyed considerable success in recent years. The VEM can be interpreted as a generalization of the Finite Element Method that allows the use of general polygonal and polyhedral meshes, while keeping the same coding complexity and allowing for arbitrary degrees of accuracy. The Virtual Element Method makes use of local functions that are not necessarily polynomials and are defined in an implicit way. Nevertheless, by a wise choice of the degrees of freedom and a novel construction of the associated stiffness matrices, the VEM avoids the explicit integration of such shape functions. In addition to the possibility of handling general polytopal meshes, the flexibility of the above construction yields other interesting properties with respect to more standard Galerkin methods. For instance, the VEM makes it easy to build discrete spaces of arbitrary C^k regularity, or to satisfy exactly the divergence-free constraint for incompressible fluids.

The present talk is an introduction to the VEM, aiming at showing the main ideas of the method. For simplicity we consider a simple elliptic model problem (pure diffusion with variable coefficients), but set ourselves in the more involved 3D setting. In the first part we introduce the adopted Virtual Element space and the associated degrees of freedom, first by addressing the faces of the polyhedra (i.e. polygons) and then building the space in the full volumes. We then describe the construction of the discrete bilinear form and the ensuing discretization of the problem. Furthermore, we show a set of theoretical and numerical results. In the very final part, we will give a glance at more involved problems, such as magnetostatics (a mixed problem with more complex interacting spaces) and large-deformation elasticity (a nonlinear problem).

Control and Inverse Problems for Differential Equations on Graphs

Series
Applied and Computational Mathematics Seminar
Time
Monday, September 10, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Sergei Avdonin, University of Alaska Fairbanks

Quantum graphs are metric graphs with differential equations defined on the edges. Recent interest in control and inverse problems for quantum graphs is motivated by applications to important problems of classical and quantum physics, chemistry, biology, and engineering.

In this talk we describe some new controllability and identifiability results for partial differential equations on compact graphs. In particular, we consider graph-like networks of inhomogeneous strings with masses attached at the interior vertices. We show that the wave transmitted through a mass is more regular than the incoming wave. Therefore, the regularity of the solution to the initial boundary value problem on an edge depends on the combinatorial distance of this edge from the source, which makes control and inverse problems for such systems more difficult.

We prove the exact controllability of the systems with the optimal number of controls and propose an algorithm recovering the unknown densities of the strings, the lengths of the edges, the attached masses, and the topology of the graph. The proofs are based on the boundary control and leaf-peeling methods developed in our previous papers. The boundary control method is a powerful method in inverse theory which uses deep connections between controllability and identifiability of distributed parameter systems and lends itself to straightforward algorithmic implementations.

Application of stochastic maximum principle. Risk-sensitive regime switching in asset management.

Series
Applied and Computational Mathematics Seminar
Time
Monday, July 2, 2018 - 01:55 for 1.5 hours (actually 80 minutes)
Location
Skiles 005
Speaker
Isabelle Kemajou-Brown, Morgan State University
We assume the stock is modeled by a Markov regime-switching diffusion process and that the benchmark depends on the economic factor. We then solve a risk-sensitive benchmarked asset management problem for a firm. Our method consists of finding the portfolio strategy that minimizes the risk sensitivity of an investor in such an environment, using the general maximum principle. After the presentation, the speaker will discuss some of her ongoing research.

Convolutional Neural Network with Structured Filters

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 16, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Xiuyuan Cheng, Duke University
Filters in a Convolutional Neural Network (CNN) contain model parameters learned from enormous amounts of data. The properties of convolutional filters in a trained network directly affect the quality of the data representation being produced. In this talk, we introduce a framework for decomposing convolutional filters over a truncated expansion under pre-fixed bases, where the expansion coefficients are learned from data. Such a structure not only reduces the number of trainable parameters and the computational load, but also explicitly imposes filter regularity by basis truncation. Apart from maintaining prediction accuracy across image classification datasets, the decomposed-filter CNN also produces a stable representation with respect to input variations, which is proved under generic assumptions on the basis expansion. Joint work with Qiang Qiu, Robert Calderbank, and Guillermo Sapiro.
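The core parameter-reduction idea can be sketched numerically: represent a filter as a linear combination of a few fixed, low-frequency basis elements, so that only the expansion coefficients would be trainable. The 2D DCT basis below is an illustrative stand-in; the talk's framework is stated for general pre-fixed bases, and the particular basis used there may differ.

```python
import numpy as np

def dct_basis_2d(size, K):
    """First K low-frequency 2D DCT atoms, each of shape (size, size)."""
    n = np.arange(size)
    atoms = []
    for p in range(size):
        for q in range(size):
            b = np.outer(np.cos(np.pi * (n + 0.5) * p / size),
                         np.cos(np.pi * (n + 0.5) * q / size))
            atoms.append((p + q, b / np.linalg.norm(b)))
    atoms.sort(key=lambda t: t[0])       # order atoms by total frequency
    return np.stack([b for _, b in atoms[:K]])

size, K = 5, 6
basis = dct_basis_2d(size, K)            # (K, 5, 5), orthonormal atoms

# Project a generic 5x5 filter onto the truncated basis: only K numbers
# (the coefficients) would need to be learned instead of size*size weights.
rng = np.random.default_rng(1)
w = rng.standard_normal((size, size))
coeffs = np.tensordot(basis, w, axes=([1, 2], [0, 1]))    # K coefficients
w_hat = np.tensordot(coeffs, basis, axes=(0, 0))          # regularized filter

print("trainable parameters:", K, "instead of", size * size)
print("relative residual:", np.linalg.norm(w - w_hat) / np.linalg.norm(w))
```

Truncating to low-frequency atoms is also what imposes the filter regularity mentioned in the abstract: high-frequency components of `w` simply cannot appear in `w_hat`.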

Simulating large-scale geophysical flows on unstructured meshes

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 9, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Prof. Qingshan Chen, Department of Mathematical Sciences, Clemson University
Large-scale geophysical flows, i.e. the ocean and atmosphere, evolve on spatial scales ranging from meters to thousands of kilometers, and on temporal scales ranging from seconds to decades. These scales interact in a highly nonlinear fashion, making it extremely challenging to reliably and accurately capture the long-term dynamics of these flows in numerical models. In fact, this problem is closely associated with the grand challenges of long-term weather and climate predictions. Unstructured meshes have been gaining popularity in recent years in geophysical models, thanks to their being almost free of polar singularities and to their remaining highly scalable even at eddy-resolving resolutions. However, to unleash the full potential of these meshes, new schemes are needed. This talk starts with a brief introduction to large-scale geophysical flows. Then it goes over the main considerations, i.e. the various numerical and algorithmic choices, that one needs to make in designing numerical schemes for these flows. Finally, a new vorticity-divergence based finite volume scheme will be introduced. Its strengths and challenges, together with some numerical results, will be presented and discussed.

Compute Faster and Learn Better: Machine Learning via Nonconvex Optimization

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 2, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Tuo Zhao, Georgia Institute of Technology
Nonconvex optimization naturally arises in many machine learning problems. Machine learning researchers exploit various nonconvex formulations to gain modeling flexibility, estimation robustness, adaptivity, and computational scalability. Although classical computational complexity theory has shown that solving nonconvex optimization is generally NP-hard in the worst case, practitioners have proposed numerous heuristic optimization algorithms, which achieve outstanding empirical performance in real-world applications. To bridge this gap between practice and theory, we propose a new generation of model-based optimization algorithms and theory, which incorporate statistical thinking into modern optimization. Specifically, when designing practical computational algorithms, we take the underlying statistical models into consideration. Our novel algorithms exploit hidden geometric structures behind many nonconvex optimization problems, and can obtain global optima with the desired statistical properties in polynomial time with high probability.

Fast Phase Retrieval from Localized Time-Frequency Measurements

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 26, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Mark Iwen, Michigan State University
We propose a general phase retrieval approach that uses correlation-based measurements with compactly supported measurement masks. The algorithm admits deterministic measurement constructions together with a robust, fast recovery algorithm that consists of solving a system of linear equations in a lifted space, followed by finding an eigenvector (e.g., via an inverse power iteration). Theoretical reconstruction error guarantees are presented. Numerical experiments demonstrate robustness and computational efficiency that outperforms competing approaches on large problems. Finally, we show that this approach also trivially extends to phase retrieval problems based on windowed Fourier measurements.
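The final "find an eigenvector" step mentioned in the abstract can be illustrated in isolation: once the linear system in the lifted space has produced an estimate of the rank-one matrix X = x x*, the signal is recovered, up to an unavoidable global phase, as the leading eigenvector scaled by the square root of the leading eigenvalue. The sketch below assumes the idealized noiseless case and exact knowledge of X; it is not the paper's full banded-system pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)   # unknown signal

X = np.outer(x, x.conj())             # lifted rank-one matrix X = x x*
vals, vecs = np.linalg.eigh(X)        # Hermitian eigendecomposition, ascending
v = vecs[:, -1] * np.sqrt(vals[-1])   # leading eigenpair -> estimate of x

# x is only determined up to a global phase; align it before comparing.
phase = np.vdot(v, x) / abs(np.vdot(v, x))
err = np.linalg.norm(x - v * phase) / np.linalg.norm(x)
print("relative error after phase alignment:", err)
```

In practice the inverse power iteration mentioned in the abstract replaces the full eigendecomposition, since only the single extremal eigenvector is needed.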

Joint-sparse recovery for high-dimensional parametric PDEs

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 5, 2018 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Nick Dexter, University of Tennessee
We present and analyze a novel sparse polynomial approximation method for the solution of PDEs with stochastic and parametric inputs. Our approach treats the parameterized problem as a problem of joint-sparse signal reconstruction, i.e., the simultaneous reconstruction of a set of signals sharing a common sparsity pattern from a countable, possibly infinite, set of measurements. Combined with the standard measurement scheme developed for compressed sensing-based polynomial approximation, this approach allows for global approximations of the solution over both physical and parametric domains. In addition, we are able to show that, with minimal sample complexity, error estimates comparable to the best s-term approximation, in energy norms, are achievable, while requiring only a priori bounds on polynomial truncation error. We perform extensive numerical experiments on several high-dimensional parameterized elliptic PDE models to demonstrate the superior recovery properties of the proposed approach.
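The joint-sparse (multiple-measurement-vector) setting described above can be illustrated with a minimal greedy solver. Simultaneous OMP below is a generic stand-in chosen for brevity, not necessarily the reconstruction algorithm used in the talk; the dimensions and Gaussian measurement matrix are illustrative assumptions.

```python
import numpy as np

# Several signals (columns of X) share one sparse support; we observe
# Y = A X and recover the common support jointly.
rng = np.random.default_rng(3)
m, n, nsig, s = 40, 100, 8, 5        # measurements, ambient dim, signals, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, s, replace=False)
X = np.zeros((n, nsig))
X[support] = rng.standard_normal((s, nsig))
Y = A @ X

# Simultaneous OMP: pick the atom with the largest aggregate correlation
# across all signals, then re-solve the joint least-squares problem.
S, R = [], Y.copy()
for _ in range(s):
    scores = np.linalg.norm(A.T @ R, axis=1)
    scores[S] = 0                    # never pick the same atom twice
    S.append(int(np.argmax(scores)))
    coef, *_ = np.linalg.lstsq(A[:, S], Y, rcond=None)
    R = Y - A[:, S] @ coef

X_hat = np.zeros_like(X)
X_hat[S] = coef
print("support recovered:", sorted(S) == sorted(support.tolist()))
```

Pooling the correlations across signals is what exploits the shared sparsity pattern: an atom that is weakly expressed in one signal can still be selected confidently if it appears across the ensemble.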
