Memory bounds for continual learning

Series
ACO Student Seminar
Time
Friday, January 13, 2023, 1:00pm–1:50pm
Location
Skiles 005
Speaker
Binghui Peng – Columbia University – bp2601@columbia.edu – http://www.cs.columbia.edu/~binghuip/
Organizer
Abhishek Dhawan


Abstract: Continual learning, or lifelong learning, is a formidable current challenge to machine learning. It requires the learner to solve a sequence of k different learning tasks, one after the other, while retaining its aptitude for earlier tasks as each new task is learned; the continual learner should scale better than the obvious solution of developing and maintaining a separate learner for each of the k tasks. We embark on a complexity-theoretic study of continual learning in the PAC framework. We make novel use of communication complexity to establish that any continual learner, even an improper one, needs memory that grows linearly with k, strongly suggesting that the problem is intractable. When logarithmically many passes over the learning tasks are allowed, we provide an algorithm based on multiplicative weights update whose memory requirement scales well; we also establish that improper learning is necessary for such performance. We conjecture that these results may lead to new promising approaches to continual learning.
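The multiplicative weights update method mentioned in the abstract can be illustrated in its generic "experts" form; the sketch below is a hypothetical illustration of the technique itself, not the authors' continual-learning algorithm (the function name, learning rate `eta`, and loss format are all assumptions for illustration):

```python
import math

def multiplicative_weights(losses, eta=0.5):
    """Generic multiplicative weights update over n 'experts'.

    losses: a list of rounds, each round a list of per-expert losses in [0, 1].
    Returns the final normalized weight vector, which concentrates on
    experts that accumulate low loss.
    """
    n = len(losses[0])
    weights = [1.0] * n  # start with uniform weights
    for round_losses in losses:
        # Penalize each expert multiplicatively according to its loss this round.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
    total = sum(weights)
    return [w / total for w in weights]

# Example: expert 0 always incurs loss 0, expert 1 always incurs loss 1;
# after 10 rounds, nearly all weight sits on expert 0.
final = multiplicative_weights([[0.0, 1.0]] * 10)
```

The appeal of the method in memory-constrained settings is that the learner maintains only one weight per expert, updated in a single pass per round.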


Based on joint work with Xi Chen and Christos Papadimitriou.