Stochastic Methods for Matrix Games and Their Applications

Series
ACO Student Seminar
Time
Friday, September 17, 2021 - 1:00pm for 1 hour (actually 50 minutes)
Location
Skiles 314
Speaker
Yujia Jin – Stanford University – yujiajin@stanford.edu – https://web.stanford.edu/~yujiajin/
Organizer
Abhishek Dhawan

Please Note: Stream online at https://bluejeans.com/520769740/

In this talk, I will introduce some recent advances in designing stochastic primal-dual methods for bilinear saddle-point problems of the form min_x max_y y^T A x, under different geometries for x and y. These problems are prominent in economics, linear programming, machine learning, and reinforcement learning. In particular, our methods apply to Markov decision processes (MDPs), linear regression, and computational geometry tasks.
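
For concreteness, here is a minimal sketch, assuming numpy and simplex-constrained strategies, of the bilinear objective y^T A x and the duality gap that such methods drive below a target accuracy epsilon (this example is illustrative and not taken from the talk):

import numpy as np

def duality_gap(A, x, y):
    # For x, y on probability simplices, the max player's best response to x picks
    # the largest entry of A @ x, and the min player's best response to y picks the
    # smallest entry of A.T @ y; their difference is the (nonnegative) duality gap,
    # which equals zero exactly at a Nash equilibrium of the matrix game.
    return float(np.max(A @ x) - np.min(A.T @ y))

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
x = np.full(4, 1 / 4)  # uniform strategy for the min player
y = np.full(5, 1 / 5)  # uniform strategy for the max player
print(duality_gap(A, x, y))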

In our work, we propose a variance-reduced framework for solving convex-concave saddle-point problems, given a gradient estimator satisfying certain local properties. Further, we show how to design such gradient estimators for bilinear objectives under different geometries, including simplex (l_1), Euclidean ball (l_2), and box (l_inf) domains. For a matrix A with larger dimension n, nnz nonzero entries, and target accuracy epsilon, our variance-reduced primal-dual methods obtain a runtime of nnz + \sqrt{nnz * n}/epsilon, improving over exact gradient methods and fully stochastic methods in the high-accuracy and/or sparse regime (when epsilon < n/nnz). For finite-sum saddle-point problems sum_{k=1}^K f_k(x,y), where each f_k is 1-smooth, we show how to obtain an epsilon-optimal saddle point within a gradient query complexity of K + \sqrt{K}/epsilon.
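
As a rough illustration of the gradient estimators involved (a sketch under my own naming and sampling choices, not the exact scheme from the papers), one can cache the exact product A x_0 at a reference point x_0 and then estimate A x by sampling a single column with probability proportional to |x_j - x0_j|; the estimate is unbiased and its variance shrinks as x approaches x_0:

import numpy as np

def vr_estimate_Ax(A, x, x0, Ax0, rng):
    # Unbiased estimate of A @ x from one sampled column plus the cached A @ x0.
    diff = x - x0
    total = np.abs(diff).sum()
    if total == 0:
        return Ax0.copy()                 # x == x0, so the cached product is exact
    p = np.abs(diff) / total              # sample column j with probability ~ |x_j - x0_j|
    j = rng.choice(len(x), p=p)
    # E[A[:, j] * diff[j] / p[j]] = A @ (x - x0), so the estimate is unbiased for A @ x.
    return Ax0 + A[:, j] * (diff[j] / p[j])

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 5))
x0 = np.full(5, 1 / 5)
Ax0 = A @ x0                              # one exact matrix-vector product at the reference point
x = x0 + 0.01 * rng.standard_normal(5)
avg = np.mean([vr_estimate_Ax(A, x, x0, Ax0, rng) for _ in range(2000)], axis=0)
print(np.linalg.norm(avg - A @ x))        # small, since the estimator is unbiased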

Moreover, we provide a class of coordinate methods for solving bilinear saddle-point problems. These algorithms use either O(1)-sparse gradient estimators, to obtain improved sublinear complexity over fully stochastic methods, or their variance-reduced counterparts, to obtain improved nearly-linear complexity for sparse and numerically sparse matrices A.
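
For intuition about the coordinate methods (again a hypothetical sketch with my own names, not the papers' algorithm), an O(1)-sparse unbiased estimator of the x-side gradient A^T y can sample one row index from y (a probability distribution when y lies on the simplex), then one column within that row, and return a single scaled coordinate, so each query touches only one entry of A:

import numpy as np

def sparse_grad_x(A, y, rng):
    # Return (j, v) such that E[v * e_j] = A.T @ y, reading a single entry of A.
    i = rng.choice(A.shape[0], p=y)       # sample a row index i with probability y_i
    row = A[i]
    q = np.abs(row) / np.abs(row).sum()   # sample a column within row i with probability ~ |A_ij|
    j = rng.choice(A.shape[1], p=q)
    return j, row[j] / q[j]               # one nonzero coordinate, rescaled for unbiasedness

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
y = np.full(4, 1 / 4)
avg = np.zeros(3)
for _ in range(5000):
    j, v = sparse_grad_x(A, y, rng)
    avg[j] += v / 5000
print(np.linalg.norm(avg - A.T @ y))      # small, since the sparse estimator is unbiased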

This talk is based on several joint works with Yair Carmon, Aaron Sidford, and Kevin Tian; the papers are linked below:

Variance Reduction for Matrix Games

Coordinate Methods for Matrix Games

Efficiently Solving MDPs using Stochastic Mirror Descent

Bio of the speaker: Yujia Jin is a fourth-year Ph.D. student in the Department of Management Science and Engineering at Stanford University, working with Aaron Sidford. She is interested in designing efficient continuous optimization methods, which often run in nearly-linear or sublinear time and find broad applications in machine learning, data analysis, reinforcement learning, and graph problems.