The Approximation Properties of Convex Hulls, Greedy Algorithms, and Applications to Neural Networks

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 4, 2022 - 2:00pm (50 minutes)
Location
Hybrid: Skiles 005 and https://gatech.zoom.us/j/96551543941
Speaker
Jonathan Siegel – Penn State Mathematics Department – jus1949@psu.edu – http://personal.psu.edu/jus1949/
Organizer
Haomin Zhou and Wenjing Liao

Given a collection of functions in a Banach space, typically called a dictionary in machine learning, we study the approximation properties of its convex hull. Specifically, we develop techniques for bounding the metric entropy and n-widths, fundamental quantities in approximation theory that control the limits of linear and non-linear approximation. Our results generalize existing methods by taking the smoothness of the dictionary into account and, in particular, give sharp estimates for shallow neural networks. Consequences of these results include the optimal approximation rates attainable by shallow neural networks, the fact that shallow neural networks dramatically outperform linear methods of approximation, and indeed that they outperform all continuous methods of approximation on the associated convex hull. Next, we discuss greedy algorithms for constructing approximations by non-linear dictionary expansions. Specifically, we give sharp rates for the orthogonal greedy algorithm applied to dictionaries with small metric entropy, and for the pure greedy algorithm. Finally, we present numerical examples showing that greedy algorithms can be used to solve PDEs with shallow neural networks.
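
For readers less familiar with the quantities named in the abstract, the following LaTeX sketch records the standard definitions; the notation (X for the Banach space, \mathbb{D} for the dictionary, K for a compact set) is assumed here for illustration and is not taken from the talk.

% Standard definitions; notation (X, \mathbb{D}, K) is assumed, not from the talk.
% Symmetric convex hull of a dictionary \mathbb{D} in a Banach space X:
\[
  B_1(\mathbb{D}) = \overline{\left\{ \sum_{j=1}^{n} a_j d_j \;:\; n \in \mathbb{N},\ d_j \in \mathbb{D},\ \sum_{j=1}^{n} |a_j| \le 1 \right\}}.
\]
% Kolmogorov n-width of K: the best worst-case error achievable by any
% n-dimensional linear subspace V_n of X (a benchmark for linear methods).
\[
  d_n(K)_X = \inf_{\dim V_n \le n} \; \sup_{f \in K} \; \inf_{g \in V_n} \| f - g \|_X.
\]
% Entropy numbers (metric entropy): the smallest radius at which K can be
% covered by 2^n balls in X (a benchmark for non-linear methods).
\[
  \epsilon_n(K)_X = \inf \left\{ \epsilon > 0 \;:\; K \text{ is covered by } 2^n \text{ balls of radius } \epsilon \right\}.
\]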
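To make the greedy step concrete, here is a minimal Python sketch of the orthogonal greedy algorithm (orthogonal matching pursuit) for a finite dictionary of unit vectors in R^m under the Euclidean inner product. The function name and the toy setup are assumptions made for illustration; the talk's setting (general dictionaries in a Hilbert or Banach space, e.g. shallow network atoms) is far more general. As noted in the comments, the pure greedy algorithm differs only in omitting the re-projection step.

import numpy as np

def orthogonal_greedy(f, dictionary, n_iter):
    # Sketch of the orthogonal greedy algorithm for a target f in R^m
    # and a finite dictionary whose rows are unit-norm atoms. Returns
    # the n_iter-term approximation and the indices of selected atoms.
    residual = f.copy()
    selected = []
    approx = np.zeros_like(f)
    for _ in range(n_iter):
        # Greedy step: choose the atom most correlated with the residual.
        k = int(np.argmax(np.abs(dictionary @ residual)))
        if k not in selected:
            selected.append(k)
        # Orthogonal step: re-project f onto the span of all selected
        # atoms. (The pure greedy algorithm omits this re-projection and
        # instead subtracts the best single-atom update each iteration.)
        A = dictionary[selected].T                 # shape (m, #selected)
        coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
        approx = A @ coeffs
        residual = f - approx
    return approx, selected

# Toy usage on random data (assumed setup, for illustration only).
rng = np.random.default_rng(0)
D = rng.standard_normal((200, 50))
D /= np.linalg.norm(D, axis=1, keepdims=True)      # normalize atoms
f = rng.standard_normal(50)
approx, idx = orthogonal_greedy(f, D, n_iter=10)
print(np.linalg.norm(f - approx))                  # residual shrinks as n_iter grows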