Faster convex optimization with higher-order smoothness via rescaled and accelerated gradient flows

Series
Applied and Computational Mathematics Seminar
Time
Monday, October 1, 2018 - 1:55pm for 50 minutes
Location
Skiles 005
Speaker
Dr. Andre Wibisono – Georgia Tech CS
Organizer
Molei Tao
Accelerated gradient methods play a central role in optimization, achieving the optimal convergence rates in many settings. While many extensions of Nesterov's original acceleration method have been proposed, it is not yet clear what the natural scope of the acceleration concept is. In this work, we study accelerated methods from a continuous-time perspective. We show that there is a Bregman Lagrangian functional which generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. In continuous time, these accelerated methods correspond to traveling the same curve in spacetime at different speeds; this is in contrast to the family of rescaled gradient flows, which correspond to changing the distance in space. We show how to implement both the rescaled and accelerated gradient methods as discrete-time algorithms with matching convergence rates, and these algorithms achieve faster convergence rates for convex optimization under higher-order smoothness assumptions. We will also discuss lower bounds and some open questions. Joint work with Ashia Wilson and Michael Jordan.
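
For readers unfamiliar with the framework, the following is a minimal sketch of the Bregman Lagrangian from the speaker's earlier work with Wilson and Jordan; the notation (convex distance-generating function h, objective f, and time-dependent scaling functions \alpha_t, \beta_t, \gamma_t) follows that line of work and is not defined in the abstract above.

\[
\mathcal{L}(X, \dot{X}, t) \;=\; e^{\gamma_t + \alpha_t}\left( D_h\!\left(X + e^{-\alpha_t}\dot{X},\, X\right) - e^{\beta_t} f(X) \right),
\qquad
D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle .
\]

Under the ideal scaling conditions \(\dot{\beta}_t \le e^{\alpha_t}\) and \(\dot{\gamma}_t = e^{\alpha_t}\), solutions \(X_t\) of the associated Euler-Lagrange equation satisfy \(f(X_t) - f(x^*) \le O(e^{-\beta_t})\); reparameterizing time changes \(\beta_t\) (the speed along the trajectory) without changing the curve itself, which is the sense in which the accelerated methods above travel the same curve in spacetime at different speeds. The specific rates obtained under higher-order smoothness, and the matching discrete-time algorithms, are the subject of the talk and are not reproduced here.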