Seminars and Colloquia Schedule

Symmetric nonnegative polynomials and sums of squares: mean roads to infinity

Series
Dissertation Defense
Time
Wednesday, May 24, 2023 - 11:00 for 1.5 hours (actually 80 minutes)
Location
Skiles 006
Speaker
Jose Acevedo, Georgia Tech
We study the limits of the cones of symmetric nonnegative polynomials and symmetric sums of squares of fixed degree, when expressed in the power-mean or monomial-mean basis. These limits correspond to forms with a stable expression in power-mean polynomials that are globally nonnegative (resp. sums of squares) regardless of the number of variables. Using some elements of the representation theory of the symmetric group, we introduce a partial symmetry reduction to describe the limit cone of symmetric sums of squares, which simultaneously allows us to tropicalize its dual cone. Using tropical convexity to describe the tropicalization of the dual cone of symmetric nonnegative forms, we then compare the two tropicalizations, which turn out to be convex polyhedral cones. We then show that the cones differ for all degrees larger than $4$. For even symmetric forms we show that the cones agree up to degree $8$ and differ starting at degree $10$. We also find, via tropicalization, explicit examples of symmetric forms that are nonnegative but not sums of squares in the limit.
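
As a simple illustration (not taken from the abstract): writing $p_k = \frac{1}{n}\sum_{i=1}^{n} x_i^k$ for one common normalization of the $k$-th power mean in $n$ variables, the degree-$4$ expression $p_4 - p_2^2$ is nonnegative for every $n$ by the Cauchy-Schwarz inequality, so it is an example of a form with a stable power-mean expression that remains nonnegative regardless of the number of variables.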

Two Phases of Scaling Laws for Nearest Neighbor Classifiers

Series
Applied and Computational Mathematics Seminar
Time
Thursday, May 25, 2023 - 10:30 for 1 hour (actually 50 minutes)
Location
https://gatech.zoom.us/j/98355006347
Speaker
Jingzhao Zhang, Tsinghua University

Special time & day. Remote only.

A scaling law refers to the observation that the test performance of a model improves as the amount of training data increases. A fast scaling law implies that one can solve machine learning problems simply by scaling up the data and model sizes. Yet, in many cases, the benefit of adding more data can be negligible. In this work, we study the rate of scaling laws of nearest neighbor classifiers. We show that a scaling law can have two phases: in the first phase, the generalization error depends polynomially on the data dimension and decreases quickly; whereas in the second phase, the error depends exponentially on the data dimension and decreases slowly. Our analysis highlights the complexity of the data distribution in determining the generalization error. When the data distribution is benign, our result suggests that the nearest neighbor classifier can achieve a generalization error that depends polynomially, instead of exponentially, on the data dimension.
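
As a rough, purely illustrative sketch (not part of the talk), the following Python snippet estimates the test error of a 1-nearest-neighbor classifier as the training set grows, on a hypothetical synthetic distribution (two Gaussian classes in R^d); the resulting error-versus-sample-size curve is the kind of empirical scaling law the abstract refers to.

# Illustrative sketch only: trace 1-NN test error as the training set grows.
# The two-Gaussian data model below is a hypothetical stand-in, not one of the
# distributions analyzed in the talk.
import numpy as np

rng = np.random.default_rng(0)

def sample(n, d):
    # Two classes in R^d, separated along the first coordinate.
    y = rng.integers(0, 2, size=n)
    x = rng.normal(size=(n, d))
    x[:, 0] += 2.0 * y
    return x, y

def one_nn_error(n_train, n_test=1000, d=5):
    xtr, ytr = sample(n_train, d)
    xte, yte = sample(n_test, d)
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = ((xte**2).sum(1)[:, None] + (xtr**2).sum(1)[None, :]
          - 2.0 * xte @ xtr.T)
    pred = ytr[d2.argmin(axis=1)]       # label of the nearest training point
    return float((pred != yte).mean())  # empirical test error

for n in [100, 400, 1600, 6400]:
    print(n, one_nn_error(n))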