The lab investigates the neurocomputational mechanisms underlying human object recognition and learning as a gateway to understanding the neural bases of intelligent behavior. Object recognition is a fundamental cognitive task in every sensory modality, e.g., for friend/foe discrimination, social communication, reading, or speech perception, and its loss or impairment is associated with a number of neural disorders. Yet, despite the apparent ease with which we see and hear, object recognition is widely acknowledged to be a very difficult computational problem. Understanding it from a biological systems perspective is more difficult still, since it requires bridging several levels of analysis, from the computational level, through the levels of cellular and biophysical mechanisms and of neuronal circuits, up to the level of behavior.
In our work, we combine computational models with human behavioral, functional magnetic resonance imaging (fMRI), and electroencephalography (EEG) data. This comprehensive approach addresses one of the major challenges in neuroscience today: the need to combine experimental data from a range of approaches in order to develop a rigorous and predictive model of human brain function that quantitatively and mechanistically links neurons to behavior. This is of interest not only for basic research, but also for investigating the neural bases of behavioral deficits in disorders. Understanding the neural mechanisms underlying object recognition tasks, and how these skills are acquired, is also highly relevant for Artificial Intelligence, as the capabilities of engineered pattern recognition systems (e.g., in machine vision or speech recognition) still lag far behind those of their human counterparts in robustness, flexibility, and the ability to learn from few exemplars. Finally, a mechanistic understanding of the neural processing networks that enable the brain to make sense of stimuli across different senses opens the door to supporting and extending human cognitive abilities through, for instance, hybrid brain-machine systems ("augmented cognition") and novel technologies, e.g., for sensory substitution.
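As one concrete illustration of how a computational model can be linked quantitatively to neuroimaging data, the minimal sketch below uses representational similarity analysis (RSA), a standard model-data comparison technique in this field. All names, array shapes, and metric choices here are hypothetical assumptions chosen for the example; they are not a description of the lab's actual analysis pipeline.

```python
# Hypothetical sketch: comparing a computational model's stimulus
# representations to fMRI response patterns via representational
# similarity analysis (RSA). All data here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_stimuli = 40  # e.g., 40 object images shown in the scanner (assumed)
model_features = rng.standard_normal((n_stimuli, 512))  # model-layer activations (assumed)
voxel_patterns = rng.standard_normal((n_stimuli, 200))  # per-stimulus fMRI voxel responses (assumed)

# Representational dissimilarity matrices (RDMs): pairwise correlation
# distances between stimulus representations, in condensed (vector) form.
model_rdm = pdist(model_features, metric="correlation")
brain_rdm = pdist(voxel_patterns, metric="correlation")

# The rank correlation between the two RDMs quantifies how well the
# model's representational geometry matches the measured brain responses.
rho, p = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3g}")
```

With real data, the random arrays would be replaced by model activations for the experimental stimuli and by the corresponding fMRI (or EEG) response patterns; comparing RDMs rather than raw activations sidesteps the problem that model units and measured neural signals live in different spaces.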
Most of the work in the lab has traditionally focused on the domain of vision, reflecting its status as the most accessible sensory modality. However, since similar problems of task learning, specificity, and invariance must be solved in other sensory modalities as well, it is likely that similar computational principles underlie processing in those domains, and we are interested in understanding the commonalities and differences in processing across modalities. A new research thrust in the lab applies our understanding of auditory recognition to investigate the neural mechanisms underlying speech production and the interaction between speech perception and production in the learning of speech production, as predicted by computational models.
Learn more about our ongoing research projects.
Explore our publications.