We hear sounds across a wide range of listening conditions, few of which are under our control. The location of a source, its intensity, and the presence of noise and competing sounds all alter the pressure waveform that impinges on our eardrums. Yet we are able to interpret these varied physical waveforms as arising from the same underlying sound: our perception is invariant to a large number of nuisance parameters. The research thrust of our laboratory is to study the mechanisms by which invariant representations of sounds are generated in primary and higher auditory cortex.
Inspired by Marr's levels of analysis, we use a multifaceted approach to determine how invariant representations are constructed at multiple stages of the auditory processing hierarchy.
Behavioral/Computational: We focus on vocalizations, a class of sounds that is behaviorally critical for vocal animals, allowing us to define a clear computational goal for auditory processing. Using behavioral techniques such as pupillometry and operant training, we probe the limits of behavior and thereby develop constraints for our models.
Theoretical/Algorithmic: We develop theoretical models to determine what algorithms are necessary for optimal performance of these behaviors, and derive from them specific predictions of neural response properties.
Experimental/Implementation: We use large-scale electrophysiological recordings to determine how these algorithms might be implemented in the auditory pathway, and cell-type-specific optogenetics to causally perturb neural circuits and determine the mechanisms of implementation. We expect to add imaging techniques to this experimental suite in the near future. Together, this integrative approach allows us to characterize neural representations at multiple stages of auditory processing and determine how they are transformed from one stage to the next.
Our research is aimed at a complete understanding of how the auditory system gradually transforms the neural representation of sounds from one based on the sensing of individual frequencies to one that can support complex behaviors such as speech perception and, ultimately, social cognition. Individuals with communication disorders such as dyslexia or sensory aphasia, the hearing impaired, and the elderly with age-related hearing decline all face significant challenges in perceiving complex sounds in realistic conditions. By uncovering the circuit mechanisms through which the brain extracts meaningful signals from noise, our work will provide fundamental insights into these disorders.