- Series
- GT-MAP Seminar
- Time
- Friday, March 30, 2018 - 3:00pm for 2 hours
- Location
- Skiles 006
- Speaker
- Chethan Pandarinath – GT BME – http://snel.gatech.edu/
- Organizer
- Sung Ha Kang
Since its inception, neuroscience has largely focused on the neuron as the functional unit of the nervous system. However, recent evidence demonstrates that populations of neurons within a brain area collectively show emergent functional properties ("dynamics"), properties that are not apparent at the level of individual neurons. These emergent dynamics likely serve as the brain's fundamental computational mechanism. This shift compels neuroscientists to characterize emergent properties – that is, interactions between neurons – to understand computation in brain networks. Yet this introduces a daunting challenge – with millions of neurons in any given brain area, characterizing interactions within an area, and further, between brain areas, rapidly becomes intractable.

I will demonstrate a novel unsupervised tool, Latent Factor Analysis via Dynamical Systems ("LFADS"), that can accurately and succinctly capture the emergent dynamics of large neural populations from limited sampling. LFADS is built on deep learning architectures (variational sequential auto-encoders) and models an observed neural population's dynamics using a nonlinear dynamical system (a recurrent neural network). When applied to neuronal ensemble recordings (~200 neurons) from macaque primary motor cortex (M1), we find that modeling population dynamics yields accurate estimates of the state of M1, as well as accurate predictions of the animal's motor behavior, on millisecond timescales. I will also demonstrate how our approach allows us to infer perturbations to the dynamical system (i.e., unobserved inputs to the neural population), and further allows us to leverage population recordings across long timescales (months) to build more accurate models of M1's dynamics.

This approach demonstrates the power of deep learning tools to model nonlinear dynamical systems and infer accurate estimates of the states of large biological networks. In addition, we will discuss future directions, where we aim to pry open the "black box" of the trained recurrent neural networks in order to understand the computations being performed by the modeled neural populations.

Pre-print available: lfads.github.io
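For readers curious about the model class mentioned above (a variational sequential auto-encoder whose decoder is a recurrent neural network), the sketch below is a minimal, hypothetical illustration in PyTorch. It is not the authors' LFADS implementation (see the pre-print at lfads.github.io for that); it simply assumes binned spike counts, a bidirectional GRU encoder, a GRU generator, and a Poisson emission model, with all module and variable names chosen for illustration.

```python
# Minimal sketch (not the authors' code): a sequential VAE that encodes binned
# spike counts into an initial latent state, runs a recurrent "generator" forward
# as a nonlinear dynamical system, and reads out low-dimensional factors that
# parameterize Poisson firing rates.
import torch
import torch.nn as nn

class SequentialAutoencoder(nn.Module):
    def __init__(self, n_neurons, enc_dim=64, gen_dim=64, factor_dim=8):
        super().__init__()
        # Bidirectional encoder summarizes the whole spike-count sequence.
        self.encoder = nn.GRU(n_neurons, enc_dim, batch_first=True, bidirectional=True)
        self.to_mu = nn.Linear(2 * enc_dim, gen_dim)        # mean of initial state g0
        self.to_logvar = nn.Linear(2 * enc_dim, gen_dim)     # log-variance of g0
        # Generator RNN evolves from the sampled initial state with no external input.
        self.generator = nn.GRU(1, gen_dim, batch_first=True)
        self.to_factors = nn.Linear(gen_dim, factor_dim)     # low-dimensional factors
        self.to_lograte = nn.Linear(factor_dim, n_neurons)   # per-neuron log firing rate

    def forward(self, spikes):
        # spikes: [batch, time, n_neurons] binned counts
        _, h_n = self.encoder(spikes)                          # h_n: [2, batch, enc_dim]
        summary = torch.cat([h_n[0], h_n[1]], dim=-1)          # [batch, 2*enc_dim]
        mu, logvar = self.to_mu(summary), self.to_logvar(summary)
        g0 = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        batch, T, _ = spikes.shape
        null_input = torch.zeros(batch, T, 1)                  # autonomous dynamics
        states, _ = self.generator(null_input, g0.unsqueeze(0))   # [batch, T, gen_dim]
        factors = self.to_factors(states)
        log_rates = self.to_lograte(factors)
        return log_rates, mu, logvar

def elbo_loss(log_rates, spikes, mu, logvar):
    # Poisson reconstruction term plus a standard-normal KL penalty on g0.
    recon = nn.PoissonNLLLoss(log_input=True)(log_rates, spikes)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Usage on toy data: 4 trials, 50 time bins, 20 neurons.
spikes = torch.poisson(torch.full((4, 50, 20), 2.0))
model = SequentialAutoencoder(n_neurons=20)
log_rates, mu, logvar = model(spikes)
loss = elbo_loss(log_rates, spikes, mu, logvar)
loss.backward()
```

The full model described in the pre-print additionally infers time-varying inputs to the generator (the "perturbations" mentioned in the abstract); that component is omitted here for brevity.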