Two Phases of Scaling Laws for Nearest Neighbor Classifiers

Series
Applied and Computational Mathematics Seminar
Time
Thursday, May 25, 2023 - 10:30am for 1 hour (actually 50 minutes)
Location
https://gatech.zoom.us/j/98355006347
Speaker
Jingzhao Zhang – Tsinghua University
Organizer
Molei Tao

Please Note: Special time & day. Remote only.

A scaling law refers to the observation that the test performance of a model improves as the amount of training data increases. A fast scaling law implies that one can solve machine learning problems simply by increasing the data and model sizes. Yet, in many cases, the benefit of adding more data can be negligible. In this work, we study the rates of scaling laws for nearest neighbor classifiers. We show that a scaling law can have two phases: in the first phase, the generalization error depends polynomially on the data dimension and decreases quickly; in the second phase, the error depends exponentially on the data dimension and decreases slowly. Our analysis highlights the role of the data distribution's complexity in determining the generalization error. When the data distribution is benign, our result suggests that the nearest neighbor classifier can achieve a generalization error that depends polynomially, rather than exponentially, on the data dimension.
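A minimal empirical sketch of the kind of scaling curve the abstract describes (not the speaker's analysis): it measures the test error of a 1-nearest-neighbor classifier as the training set grows. The synthetic data, dimension, and sample sizes are illustrative assumptions, not taken from the talk.

```python
# Illustrative sketch: test error of a 1-NN classifier versus training-set size.
# All distributional choices below (uniform features, linear labeling rule,
# dimension d, sample sizes) are hypothetical and only serve to show the setup.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
d = 10          # data dimension (illustrative choice)
n_test = 2000   # held-out test points

def sample(n):
    # Features drawn uniformly from [-1, 1]^d; labels from a simple smooth rule.
    X = rng.uniform(-1.0, 1.0, size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    return X, y

X_test, y_test = sample(n_test)
for n in [100, 400, 1600, 6400, 25600]:
    X_train, y_train = sample(n)
    clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
    err = 1.0 - clf.score(X_test, y_test)
    print(f"n = {n:6d}   test error = {err:.3f}")
```

Plotting the printed errors against n on a log-log scale gives an empirical scaling curve; how quickly it flattens, and how that depends on d, is the question the talk addresses theoretically.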