Damped Proximal Augmented Lagrangian Method for Weakly-Convex Problems with Convex Constraints

Series
Applied and Computational Mathematics Seminar
Time
Wednesday, November 13, 2024 - 2:00pm (50 minutes)
Location
2443 Classroom Klaus and https://gatech.zoom.us/j/94954654170
Speaker
Yangyang Xu – Rensselaer Polytechnic Institute – xuy21@rpi.edu – https://xu-yangyang.github.io/
Organizer
Wei Zhu

In this talk, I will present a damped proximal augmented Lagrangian method (DPALM) for solving problems with a weakly-convex objective and convex linear/nonlinear constraints. Instead of taking a full dual stepsize, DPALM adopts a damped one. DPALM can produce a (near) eps-KKT point within O(eps^{-2}) outer iterations if each DPALM subproblem is solved to a proper accuracy. In addition, I will show the overall iteration complexity of DPALM when the objective is either a regularized smooth function or in a regularized compositional form. In the former case, DPALM achieves an overall complexity of O(eps^{-2.5}) to produce an eps-KKT point by applying an accelerated proximal gradient (APG) method to each DPALM subproblem. In the latter case, DPALM achieves an overall complexity of O(eps^{-3}) to produce a near eps-KKT point by applying APG to a Moreau-envelope-smoothed version of each subproblem. Our outer-iteration and overall complexity results either generalize the best existing ones from unconstrained or linearly constrained problems to convex-constrained ones, or improve over the best-known results for problems of the same structure. Furthermore, numerical experiments on linearly/quadratically constrained non-convex quadratic programs and linearly constrained robust nonlinear least squares demonstrate the empirical efficiency of the proposed DPALM over several state-of-the-art methods.
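
To make the damped dual update concrete, below is a minimal numerical sketch of the kind of iteration the abstract describes, specialized to a non-convex quadratic objective with linear equality constraints Ax = b. It is an illustration under stated assumptions, not the speaker's exact algorithm: the subproblem is solved inexactly by plain gradient descent rather than the APG solver analyzed in the talk, and the function name, stepsizes, and stopping rules are all illustrative choices.

```python
import numpy as np

def dpalm(Q, c, A, b, beta=10.0, theta=0.5,
          outer_iters=100, inner_iters=200, tol=1e-6, seed=0):
    """Illustrative damped proximal ALM for
        min_x 0.5*x'Qx + c'x   subject to  Ax = b,
    with Q symmetric and possibly indefinite (weakly-convex objective).
    theta in (0, 1) is the damped dual stepsize; theta = 1 would give
    the classical full dual step.  Not the speaker's exact method."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.standard_normal(n)
    y = np.zeros(A.shape[0])

    # Weak-convexity modulus of the objective: the most negative
    # eigenvalue of Q (zero if Q is already positive semidefinite).
    mu = max(0.0, -np.linalg.eigvalsh(Q).min())
    rho = 2.0 * mu + 1.0  # proximal weight making each subproblem strongly convex

    # Hessian and gradient-Lipschitz constant of the strongly convex subproblem.
    H = Q + beta * A.T @ A + rho * np.eye(n)
    L = np.linalg.eigvalsh(H).max()

    for _ in range(outer_iters):
        x_ref = x.copy()  # proximal center for this subproblem
        # Inexact subproblem solve: minimize the augmented Lagrangian
        # plus the proximal term (gradient descent stands in for APG).
        for _ in range(inner_iters):
            grad = (Q @ x + c
                    + A.T @ (y + beta * (A @ x - b))
                    + rho * (x - x_ref))
            if np.linalg.norm(grad) <= tol:
                break
            x = x - grad / L

        # Damped dual update: stepsize theta * beta instead of the full beta.
        residual = A @ x - b
        y = y + theta * beta * residual
        if np.linalg.norm(residual) <= tol:
            break
    return x, y

# Tiny usage example on a random indefinite QP with one linear constraint.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.standard_normal((5, 5))
    Q = (M + M.T) / 2            # symmetric, generally indefinite
    c = rng.standard_normal(5)
    A = rng.standard_normal((1, 5))
    b = np.array([1.0])
    x, y = dpalm(Q, c, A, b)
    print("feasibility:", np.linalg.norm(A @ x - b))
```

Setting theta = 1 in this sketch recovers the classical full dual step of the augmented Lagrangian method; the damping factor theta in (0, 1) applied to the dual update is what the "damped" in DPALM refers to.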