Computation in the brain is distributed across very large numbers of neurons. Each neuron is error-prone, sees only a small piece of any given computation, and has limited capacity to represent and transform information. Yet, acting collectively, neurons build sophisticated and robust representations of the world, store them over time, and manipulate them to guide complex, adaptive behavior. Our research seeks to uncover the representations and information-processing strategies that make such collective neural computation possible.
We work on a range of questions and approaches, from general theoretical principles of neural computation, to models of specific brain regions, to methods for analyzing large neural data sets. We collaborate closely with experimentalists on several of these topics. We also work on more abstract mathematical questions inspired by problems in neuroscience. A major mathematical theme is dynamical systems that compute on networks, drawing on ideas at the intersection of the theory of computation and complexity, dynamical systems, and graph theory.
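To give a concrete flavor of "dynamical systems that compute on networks," here is a minimal sketch using a classic Hopfield network (a standard textbook model, not one specific to our work): simple units with no global view, interacting only through a weight matrix, collectively store a pattern and recover it from a corrupted cue.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # number of units

# Store one binary (+/-1) pattern with the Hebbian outer-product rule.
pattern = rng.choice([-1, 1], size=n)
W = np.outer(pattern, pattern) / n
np.fill_diagonal(W, 0)  # no self-connections

# Corrupt 20% of the units to simulate a noisy, partial cue.
state = pattern.copy()
flipped = rng.choice(n, size=20, replace=False)
state[flipped] *= -1

# Network dynamics: each unit updates from the weighted "votes"
# of all the others; no unit ever sees the whole pattern.
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1  # break ties deterministically

print("pattern recovered:", np.array_equal(state, pattern))
```

Running this prints `pattern recovered: True`: the stored pattern is a stable attractor of the dynamics, so the collective state converges back to it despite the corrupted start, illustrating robustness that no single unit possesses on its own.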