Accelerated Optimization in the PDE Framework

Series
Applied and Computational Mathematics Seminar
Time
Monday, September 24, 2018, 1:55pm – 2:45pm (50 minutes)
Location
Skiles 005
Speaker
Anthony Yezzi – Georgia Tech, ECE – https://www.ece.gatech.edu/faculty-staff-directory/anthony-joseph-yezzi
Organizer
Sung Ha Kang
Following the seminal work of Nesterov, accelerated optimization methods (sometimes referred to as momentum methods) have been used to powerfully boost the performance of first-order, gradient-based parameter estimation in scenarios where second-order optimization strategies are either inapplicable or impractical. Not only does accelerated gradient descent converge considerably faster than traditional gradient descent, but it also performs a more robust local search of the parameter space by initially overshooting and then oscillating back as it settles into a final configuration, thereby selecting only local minimizers with an attraction basin large enough to accommodate the initial overshoot. This behavior has made accelerated search methods particularly popular within the machine learning community, where stochastic variants have been proposed as well. So far, however, accelerated optimization methods have been applied only to searches over finite-dimensional parameter spaces. We show how a variational setting for these finite-dimensional methods (recently formulated by Wibisono, Wilson, and Jordan) can be extended to the infinite-dimensional setting, both to linear functional spaces and to the more complicated manifolds of 2D curves and 3D surfaces.
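
For readers unfamiliar with the overshoot-and-settle behavior described in the abstract, the following is a minimal sketch (not taken from the talk) comparing plain gradient descent with a standard Nesterov-style momentum update on a simple ill-conditioned quadratic. The test function, step size, and momentum schedule are illustrative choices only, not the speaker's PDE formulation.

```python
import numpy as np

# Illustrative quadratic f(x) = 0.5 * x^T A x with minimizer at the origin.
A = np.diag([1.0, 25.0])
grad = lambda x: A @ x  # gradient of f

def gradient_descent(x0, step=0.03, iters=200):
    x = x0.copy()
    traj = [x.copy()]
    for _ in range(iters):
        x = x - step * grad(x)
        traj.append(x.copy())
    return np.array(traj)

def nesterov(x0, step=0.03, iters=200):
    # Gradient evaluated at a "look-ahead" point, with the usual
    # (k - 1) / (k + 2) momentum coefficient.
    x, x_prev = x0.copy(), x0.copy()
    traj = [x.copy()]
    for k in range(1, iters + 1):
        momentum = (k - 1) / (k + 2)
        y = x + momentum * (x - x_prev)
        x_prev = x
        x = y - step * grad(y)
        traj.append(x.copy())
    return np.array(traj)

x0 = np.array([10.0, 1.0])
for label, traj in [("GD", gradient_descent(x0)), ("Nesterov", nesterov(x0))]:
    # f along the trajectory: the accelerated iterates overshoot the
    # minimizer and oscillate back, while plain gradient descent
    # approaches it monotonically but more slowly.
    f_vals = 0.5 * np.einsum("ij,jk,ik->i", traj, A, traj)
    print("%-8s f after 50 iters: %.2e, after 200 iters: %.2e"
          % (label, f_vals[50], f_vals[-1]))
```

In the continuum limit, updates of this form correspond to a second-order ODE in time (the viewpoint underlying the variational framework of Wibisono, Wilson, and Jordan referenced above); the talk concerns extending that variational picture from finite-dimensional parameter vectors to function spaces and to curve and surface evolution.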