- Series
- ACO Student Seminar
- Time
- Friday, November 2, 2018 - 12:20pm for 1 hour (actually 50 minutes)
- Location
- Skiles 005
- Speaker
- Thinh Doan – ISyE/ECE, Georgia Tech – thinh.doan@isye.gatech.edu – https://sites.google.com/site/thinhdoan210/
- Organizer
- He Guo
Abstract

In this talk, I will present a popular distributed method, the distributed consensus-based gradient (DCG) method, for solving optimal learning problems over a network of agents. Such problems arise in many applications, such as finding optimal parameters over a large dataset distributed among a network of processors, or seeking an optimal policy for coverage control problems in robotic networks. The
focus is to present our recent results, in which we study the performance of DCG when the agents are only allowed to exchange quantized values of their estimates due to finite communication bandwidth. In particular, we develop a novel quantization method, which we refer to as adaptive quantization. The main idea of our approach is to quantize the nodes' estimates based on the progress of the algorithm, which helps to eliminate the quantization error. Under
adaptive quantization, we then derive bounds on the convergence rates of the proposed method as a function of the communication bandwidths and the underlying network topology, for both convex and strongly convex objective functions. Our results suggest that, under adaptive quantization, the convergence rates of DCG with and without quantization are the same, up to a factor that captures the number of quantization bits. To the best of the authors' knowledge, these results improve upon all existing results for DCG under quantization.
This is based on a joint work with Siva Theja Maguluri and Justin Romberg.
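The flavor of the approach described above can be sketched in a short simulation: each agent mixes its neighbors' quantized estimates through a doubly stochastic weight matrix and takes a local gradient step, while the quantizer encodes each estimate relative to the agent's previously transmitted value over a range that shrinks as the run progresses. This is only an illustrative sketch under assumed design choices (the quantizer, step sizes, and constants below are my own), not the speaker's exact algorithm.

```python
import numpy as np

def quantize(x, center, radius, bits):
    # Uniform quantizer with 2**bits levels on [center - radius, center + radius].
    # Values outside the range are clipped (an illustrative design choice).
    levels = 2 ** bits - 1
    lo = center - radius
    step = 2 * radius / levels
    idx = np.round(np.clip(x - lo, 0.0, 2 * radius) / step)
    return lo + idx * step

def dcg_quantized(grad, W, x0, bits=10, radius0=5.0, decay=0.98, T=300):
    # Consensus-based gradient sketch with quantized communication.
    # grad: maps the (n, d) stack of agent iterates to the stacked local gradients.
    # W: (n, n) doubly stochastic mixing matrix of the communication graph.
    x = x0.copy()
    q_prev = x0.copy()                  # last values each agent transmitted
    radius = radius0
    for t in range(T):
        alpha = 1.0 / (t + 2)           # diminishing step size
        # "Adaptive" quantization: encode each estimate relative to the last
        # transmitted value, over a range that shrinks as iterates converge.
        q = quantize(x, q_prev, radius, bits)
        x = W @ q - alpha * grad(x)     # consensus step + local gradient step
        q_prev = q
        radius *= decay
    return x

# Toy problem: f_i(z) = 0.5 * ||z - c_i||^2, so the network-wide minimizer is mean(c).
c = np.array([[1.0, 2.0], [3.0, -1.0], [0.0, 1.0], [2.0, 2.0]])
W = np.full((4, 4), 0.25)               # complete graph with uniform weights
x = dcg_quantized(lambda z: z - c, W, np.zeros((4, 2)))
```

The shrinking range is what keeps quantization error from dominating: as the iterates converge, each transmitted value stays close to the previous one, so the same number of bits buys ever finer resolution.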
Bio

Thinh T. Doan is a TRIAD postdoctoral fellow at the Georgia Institute of Technology, joint between the School of Industrial and Systems Engineering and the School of Electrical and Computer Engineering (ECE).
He was born in Vietnam, where he received his Bachelor's degree in Automatic Control from the Hanoi University of Science and Technology in 2008. He then obtained his Master's degree in ECE from the University of Oklahoma in 2013 and his Ph.D. in ECE from the University of Illinois at Urbana-Champaign in 2018. His research interests
lie at the intersection of control theory, optimization, distributed
algorithms, and applied probability, with the main applications in
machine learning, reinforcement learning, power networks, and
multi-agent systems.