Noise and variability in the brain
Brains seem noisy at multiple levels. We seek to understand the role of this noise in neural computation via three complementary hypotheses (all of which are likely to be partially true, depending on the computation and brain region).
- “Noise” reflects the encoding of unknown variables. Neural variability at least partially reflects our ignorance of which variables are actually driving responses, and we develop tools to discover these latent variables in neural population responses.
- Noise is averaged away or corrected. We use models to understand how microscopic variability in the brain can coexist with simpler emergent macroscopic structure, and to determine which dimensions of microscopic variability matter at larger spatial or temporal scales (a minimal illustration of averaging follows this list). We also explore whether the brain uses error-correction strategies more sophisticated than simple averaging, such as those underlying error-correcting codes.
- Noise is used for computation. Randomness is a powerful computational resource, and randomized algorithms are often simple to implement and naturally parallel, making them an excellent fit for computation by large, noisy networks of neurons. We draw on recent mathematical frameworks built on large, sparse graphs to model randomized computation in the brain.
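As a toy illustration of the averaging hypothesis above, the sketch below (entirely our construction; the stimulus, noise level, and population sizes are hypothetical) shows a population of N neurons each encoding the same scalar stimulus with independent noise, so that a simple averaged readout recovers the stimulus with error shrinking like 1/sqrt(N):

```python
# Toy illustration (not from the source) of the "noise is averaged away"
# hypothesis: N neurons each encode the same scalar stimulus s with
# independent noise; an averaged readout recovers s with error ~ 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
s = 1.0          # hypothetical scalar stimulus
noise_sd = 0.5   # hypothetical per-neuron noise standard deviation

for n in [10, 100, 1000]:
    # 2000 trials of an n-neuron population response to the stimulus
    responses = rng.normal(loc=s, scale=noise_sd, size=(2000, n))
    readout = responses.mean(axis=1)   # average across the population
    print(f"N={n:4d}  empirical readout error {readout.std():.4f}  "
          f"vs. predicted {noise_sd / np.sqrt(n):.4f}")
```

Of course, this only works when the noise is independent across neurons; correlated variability is precisely what makes the question of which microscopic dimensions survive averaging interesting.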
Computation via dynamics in recurrent networks
We blend approaches from dynamical systems and control with those from information theory and computation to study recurrent networks as dynamical systems that compute. Interests include recurrent networks that carry out canonical computations such as memory storage, decision making, and temporal patterning, as well as mathematical tools for analyzing these networks in the strongly coupled regime.
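As a concrete instance of such a dynamical system, the sketch below simulates a standard randomly coupled rate network, tau * dx/dt = -x + g * J * tanh(x); for coupling strength g > 1, networks of this family are known to enter a strongly coupled, chaotic regime. All parameter values here are illustrative, not drawn from the source.

```python
# Minimal sketch of a canonical randomly coupled rate network treated as
# a dynamical system: tau * dx/dt = -x + g * J @ tanh(x). Parameters are
# illustrative; g > 1 puts networks of this family in a strongly coupled,
# chaotic regime.
import numpy as np

rng = np.random.default_rng(1)
N, g, tau, dt = 200, 1.5, 1.0, 0.01
J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # random coupling matrix
x = rng.normal(size=N)                               # initial network state

for _ in range(5000):
    x += (dt / tau) * (-x + g * J @ np.tanh(x))      # forward Euler step

print("activity range after 50 time constants:", x.min(), x.max())
```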
Large-scale organization of neural computation
Information processing in the brain is distributed across regions that receive a variety of inputs and use differing strategies to encode information and learn from experience. We model interactions between such disparate brain regions, motivated by the principle that understanding how information is translated between regions will not only yield insight into inter-areal interactions but also provide context for understanding the computational strategies used by individual regions.
Geometry and dynamics of data and learning
Geometry and dynamical systems formalize our intuitions of space and time, providing powerful mathematical frameworks to approach problems in the natural sciences. We work on data analysis methods and models of learning that draw on geometric and dynamical systems perspectives.
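As one small example of the dynamical-systems perspective on learning, gradient descent can be read as a forward-Euler discretization of the gradient flow d(theta)/dt = -grad L(theta). The sketch below (a toy least-squares problem of our own choosing, not a method from the source) makes this concrete:

```python
# Toy example (our construction) of learning as a dynamical system:
# gradient descent on a least-squares loss is a forward-Euler
# discretization of the gradient flow d(theta)/dt = -grad L(theta).
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 10))   # hypothetical design matrix
y = rng.normal(size=50)         # hypothetical targets
theta = np.zeros(10)
lr = 0.01                       # Euler step size

def grad(theta):
    # Gradient of L(theta) = 0.5 * ||A @ theta - y||^2
    return A.T @ (A @ theta - y)

for _ in range(2000):
    theta -= lr * grad(theta)   # one Euler step along -grad L

print("residual norm:", np.linalg.norm(A @ theta - y))
```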