- Series
- Stochastics Seminar
- Time
- Thursday, October 5, 2023 - 3:30pm for 1 hour (actually 50 minutes)
- Location
- Skiles 006
- Speaker
- Siva Theja Maguluri – Georgia Tech
- Organizer
- Cheng Mao
Abstract: Motivated by applications in Reinforcement Learning (RL), this talk focuses on the Stochastic Approximation (SA) method to find fixed points of a contractive operator. First proposed by Robbins and Monro, SA is a popular approach for solving fixed point equations when the information is corrupted by noise. We consider the SA algorithm for operators that are contractive under arbitrary norms (especially the l-infinity norm). We present finite sample bounds on the mean square error, which are established using a Lyapunov framework based on infimal convolution and generalized Moreau envelope. We then present our more recent result on concentration of the tail error, even when the iterates are not bounded by a constant. These tail bounds are obtained using exponential supermartingales in conjunction with the Moreau envelope and a novel bootstrapping approach. Our results immediately imply the state-of-the-art sample complexity results for a large class of RL algorithms.
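For readers unfamiliar with the setup, a minimal sketch of the standard Robbins-Monro-style SA update for such a fixed-point problem is (notation is illustrative and not taken from the talk):

x_{k+1} = x_k + \alpha_k \big( T(x_k) - x_k + w_k \big),

where T is the operator satisfying \|T(x) - T(y)\| \le \gamma \|x - y\| for some contraction factor \gamma < 1 under the relevant norm (e.g., the l-infinity norm), \alpha_k are step sizes, and w_k is the noise in the observed operator evaluation.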
Bio: Siva Theja Maguluri is the Fouts Family Early Career Professor and an Associate Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. He obtained his Ph.D. and M.S. in ECE, as well as an M.S. in Applied Math, from UIUC, and his B.Tech in Electrical Engineering from IIT Madras. His research interests span the areas of Control, Optimization, Algorithms, and Applied Probability, and include Reinforcement Learning theory and Stochastic Networks. His research and teaching are recognized through several awards, including the Best Publication in Applied Probability award, the NSF CAREER award, second place in the INFORMS JFIG best paper competition, the student best paper award at IFIP Performance, the CTL/BP Junior Faculty Teaching Excellence Award, and the Student Recognition of Excellence in Teaching: Class of 1934 CIOS Award.