Computation in the brain is distributed across very large numbers of neurons. Each neuron is error-prone, has access to only a small piece of a given computation, and has limited capacity to represent and transform information. Yet, acting collectively, neurons build sophisticated and robust representations of the world, store them over time, and manipulate them to guide complex, adaptive behavior. Our research seeks to uncover the representations and information-processing strategies that make such collective neural computation possible.
We work on a range of questions and approaches, from studying general theoretical principles of neural computation, to modeling specific brain regions, to developing methods for analyzing large-scale neural datasets. We collaborate closely with experimentalists on several of these topics. We also work on more abstract mathematical questions inspired by problems in neuroscience. For an overview of our research themes, see Research; for a list of papers, see Publications.