Integral neural networks with weight penalization

Series
Analysis Seminar
Time
Tuesday, September 1, 2020 - 2:00pm for 1 hour (actually 50 minutes)
Location
https://us02web.zoom.us/j/87104893132
Speaker
Armenak Petrosyan – Georgia Tech – petrosyan@mail.gatech.edu – https://petrosyan.page
Organizer
Ben Jaye

Artificial neural networks have gained widespread adoption in recent years as a powerful tool for various machine learning tasks. Training a neural network to approximate a target function involves solving an inherently non-convex problem; in practice, this is done using stochastic gradient descent with random initialization. For the approximation problem with neural networks, error rate guarantees have been established for different classes of functions; however, these rates are not always achieved in practice because the resulting optimization problem has many local minima. A minimal sketch of this standard training setup follows below.
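
The following is a minimal illustrative sketch (not the speaker's construction) of the setup just described: a shallow ReLU network trained by stochastic gradient descent from a random initialization to approximate a one-dimensional target function. PyTorch is assumed, and the target function, width, learning rate, and step count are all illustrative choices, not values from the talk.

import torch

torch.manual_seed(0)
target = lambda x: torch.sin(3 * x)          # illustrative target function

width = 32                                   # number of hidden neurons
model = torch.nn.Sequential(
    torch.nn.Linear(1, width),               # inner weights/biases, random init
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),               # outer coefficients
)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for step in range(5000):
    x = torch.rand(64, 1) * 2 - 1            # minibatch sampled from [-1, 1]
    loss = ((model(x) - target(x)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()                               # may converge to a poor local minimum

Because the loss is non-convex in the network parameters, different random initializations can lead gradient descent to different local minima, which is exactly the gap between theoretical approximation rates and practical performance mentioned above.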

The challenge we address in this work is the following: we want to find small shallow neural networks that can be trained algorithmically and that achieve guaranteed approximation speed and precision. To keep the network small, we apply penalties to its weights. We show that, under minimal requirements, all local minima of the resulting problem are well behaved and yield networks of the desired small size without sacrificing precision. We adopt the integral neural network framework and use techniques from optimization theory and harmonic analysis to prove our results. In this talk, we will discuss our existing work as well as promising directions in which this approach could be adopted in the future.
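
As a hedged illustration of weight penalization, the sketch below adds an l1 penalty on the outer coefficients of a deliberately over-provisioned shallow network. The abstract does not specify which penalty the speaker uses; the l1 choice, the penalty strength lam, and the pruning threshold are all assumptions made for illustration, chosen because an l1 penalty is a standard way to drive many neurons' contributions to zero and thereby shrink the effective network size.

import torch

torch.manual_seed(0)
target = lambda x: torch.sin(3 * x)          # illustrative target function

width = 64                                   # deliberately over-provisioned
model = torch.nn.Sequential(
    torch.nn.Linear(1, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
lam = 1e-3                                   # penalty strength (illustrative)

for step in range(5000):
    x = torch.rand(64, 1) * 2 - 1
    fit = ((model(x) - target(x)) ** 2).mean()
    penalty = model[2].weight.abs().sum()    # l1 penalty on outer coefficients
    opt.zero_grad()
    (fit + lam * penalty).backward()         # data fit + weight penalty
    opt.step()

# Neurons whose outer coefficient has shrunk to (near) zero can be pruned
# afterwards, leaving a small network without giving up the fit.
active = (model[2].weight.abs() > 1e-4).sum().item()
print(f"active neurons: {active} / {width}")

The design intent the penalty captures is that the optimizer, rather than a manual architecture search, decides how many neurons the approximation actually needs.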