Data Compression in Distributed Learning

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 15, 2021 - 2:00pm for 50 minutes
Location
https://bluejeans.com/457724603/4379
Speaker
Ming Yan – Michigan State University – myan@msu.edu – https://mingyan08.github.io/
Organizer
Wenjing Liao

Large-scale machine learning models are trained by parallel (stochastic) gradient descent algorithms on distributed systems. The communication required for gradient aggregation and model synchronization becomes the major obstacle to efficient learning as the number of nodes and the model dimension scale up. In this talk, I will introduce several ways to compress the transferred data and reduce the overall communication so that this obstacle is greatly mitigated. More specifically, I will introduce methods that reduce or eliminate the compression error without additional communication.
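One standard way to reduce compression error without extra communication, in the spirit of the methods the abstract mentions, is error feedback: each worker accumulates the part of the gradient dropped by the compressor and adds it back before the next round. The sketch below (my own illustration, not the speaker's specific algorithm; function names and the top-k compressor choice are assumptions) shows a single worker's compress-and-remember loop in NumPy.

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def error_feedback_step(grad, memory, k):
    """Compress grad plus accumulated error; remember what was dropped.

    Returns (compressed message to send, updated local memory).
    """
    corrected = grad + memory
    msg = topk_compress(corrected, k)
    new_memory = corrected - msg  # compression error, fed back next round
    return msg, new_memory

# Toy run: one worker, five rounds with the same gradient.
rng = np.random.default_rng(0)
g = rng.standard_normal(10)
mem = np.zeros_like(g)
sent_total = np.zeros_like(g)
for _ in range(5):
    msg, mem = error_feedback_step(g, mem, k=3)
    sent_total += msg

# Each message is sparse (at most k nonzeros), yet the sum of all
# sent messages plus the remaining memory equals the sum of the true
# gradients, so no information is permanently lost to compression.
```

The key invariant is that, per round, (message sent) + (new memory) = (true gradient) + (old memory); summing over rounds shows the residual never grows beyond the current memory, which is why such schemes can recover the convergence of uncompressed SGD.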