- Series
- Geometry Topology Seminar
- Time
- Monday, April 22, 2019 - 3:30pm for 1 hour (actually 50 minutes)
- Location
- Skiles 006
- Speaker
- Eli Grigsby – Boston College
- Organizer
- Caitlin Leverson
One can regard a (trained) feedforward neural network as a particular type of function $F: \mathbb{R}^n \rightarrow (0,1)$, where $\mathbb{R}^n$ is a (typically high-dimensional) Euclidean space parameterizing some data set, and the value of the function on a data point is the probability that the answer to a particular yes/no question is "yes." It is a classical result in the subject that a sufficiently complex neural network can approximate any continuous function on a bounded set. Last year, J. Johnson proved that universality results of this kind depend on the architecture of the neural network (the number and dimensions of its hidden layers). His argument was novel in that it provided an explicit topological obstruction to the representability of a function by a neural network, subject to certain simple constraints on its architecture. I will tell you just enough about neural networks to understand how Johnson's result follows from some very simple ideas in piecewise linear geometry. Time permitting, I will also describe some joint work in progress with K. Lindsey aimed at developing a general theory of how the architecture of a neural network constrains its topological expressiveness.
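As a concrete illustration of this viewpoint, the sketch below builds a small fully connected ReLU network with a sigmoid output: the composition of affine maps and ReLU activations is a piecewise linear function on $\mathbb{R}^n$, and the final sigmoid turns its value into a number in $(0,1)$ read as a "yes" probability. The layer widths and random weights here are hypothetical placeholders for illustration only, not taken from the talk.

```python
import numpy as np

def relu(x):
    # ReLU is piecewise linear, so each hidden layer is a piecewise-linear map
    return np.maximum(x, 0.0)

def sigmoid(x):
    # Squashes the final piecewise-linear score into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def feedforward(x, weights, biases):
    """Evaluate a fully connected ReLU network at a point x in R^n.

    The hidden layers compute a piecewise linear function; the sigmoid
    at the output interprets its value as a probability in (0, 1).
    """
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)
    W, b = weights[-1], biases[-1]
    return sigmoid(W @ a + b)

# Hypothetical architecture: input dimension 4, two hidden layers of width 8, scalar output.
rng = np.random.default_rng(0)
dims = [4, 8, 8, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(dims[:-1], dims[1:])]
biases = [rng.standard_normal(m) for m in dims[1:]]

x = rng.standard_normal(dims[0])          # a data point in R^4
p_yes = feedforward(x, weights, biases)   # probability that the answer is "yes"
print(p_yes.item())
```

The architecture, in the sense used above, is exactly the list of layer widths `dims`; Johnson-style results ask which functions such a network can represent once those widths are constrained.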