- Series
- ACO Student Seminar
- Time
- Friday, January 28, 2022 - 1:00pm (50 minutes)
- Location
- ONLINE
- Speaker
- Max Hopkins – UCSD – nmhopkin@eng.ucsd.edu – https://cseweb.ucsd.edu/~nmhopkin/
- Organizer
- Abhishek Dhawan
Link: https://bluejeans.com/520769740/
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. Variants of this equivalence appear everywhere from classical settings like PAC learning and regression to recent trends such as adversarially robust and private learning, yet we still lack a unifying theory explaining these results.
In this talk, we'll introduce exactly such a framework: a simple, model-independent black-box reduction between agnostic and realizable learnability that explains their equivalence across a wide range of classical models. We'll discuss how this reduction extends our understanding to traditionally difficult settings, such as learning with arbitrary distributional assumptions or with general loss functions, and we'll look at some applications beyond agnostic learning as well (e.g. to privacy). Finally, we'll end by surveying a few nice open problems in the area.
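The abstract doesn't spell out the reduction, but to give a flavor of what a black-box reduction of this kind can look like, here is a minimal sketch (an assumption on our part, not necessarily the speaker's construction): given a learner that succeeds whenever the data is realizable, run it on every possible labeling of a small subsample to generate a finite pool of candidate hypotheses, then return the candidate with the lowest empirical error on the full (possibly noisy) sample. The function names and the toy threshold class below are purely illustrative.

```python
import itertools
import random

def agnostic_from_realizable(realizable_learner, sample, k):
    """Illustrative black-box reduction (a sketch, not the paper's exact
    construction): feed every binary labeling of a size-k subsample to the
    realizable learner, collect the hypotheses it outputs, then pick the
    empirically best one on the full sample (ERM over a finite pool)."""
    xs = [x for x, _ in sample]
    sub = random.sample(xs, min(k, len(xs)))
    candidates = []
    for labels in itertools.product([0, 1], repeat=len(sub)):
        h = realizable_learner(list(zip(sub, labels)))
        if h is not None:  # learner may reject non-realizable labelings
            candidates.append(h)
    # ERM step: minimize empirical 0-1 error over the candidate pool.
    return min(candidates, key=lambda h: sum(h(x) != y for x, y in sample))

def threshold_learner(data):
    """Toy realizable learner for 1-D thresholds h_t(x) = [x >= t]:
    returns a consistent hypothesis if one exists, else None."""
    ones = [x for x, y in data if y == 1]
    zeros = [x for x, y in data if y == 0]
    t = min(ones) if ones else float("inf")
    if any(x >= t for x in zeros):
        return None  # no threshold is consistent with this labeling
    return lambda x, t=t: int(x >= t)

# Usage: recover a threshold concept through the reduction.
sample = [(x / 10, int(x / 10 >= 0.5)) for x in range(10)]
h = agnostic_from_realizable(threshold_learner, sample, k=10)
```

The point of the sketch is the shape of the argument: the realizable learner is only ever invoked as an oracle, so the reduction is model-independent, and the agnostic guarantee comes from ERM over the finite hypothesis pool the oracle produces.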
Based on joint work with Daniel Kane, Shachar Lovett, and Gaurav Mahajan.