The human brain's plasticity and learning capacity have produced astonishing examples of adaptation to novel types of sensory information. One example is that of deaf-blind individuals who learned to perceive spoken language through their sense of touch, by placing a hand on the face and throat of someone producing speech. This example tells us that the somatosensory system can be adapted to a new function, even speech perception, which is often thought to be the domain of hearing. Funded by NSF, our lab, in collaboration with Lynne Bernstein's group at George Washington University, is using advanced functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) together with a novel high-dimensional vibrotactile stimulator to investigate the neural basis of sensory learning through the sense of touch. This project focuses on category learning, which is needed for understanding language and for recognizing objects and their qualities. Regardless of whether stimuli arrive through touch, hearing, or vision, category learning requires the brain to recognize that some very similar stimuli nevertheless belong to different categories, while other stimuli with larger differences among them can nevertheless belong to the same category. While much progress has been made in understanding learning in the visual and auditory systems, the neural mechanisms underlying learning in the somatosensory system, the third major sensory system in the human brain, are still poorly understood. A basic question is whether each of the senses learns and carries out its categorization functions using similar neural mechanisms. We are studying somatosensory learning by training human participants on a categorization task involving morphed vibrotactile stimuli grouped into artificial categories (Aim 1) and natural speech stimuli that have been transformed into complex vibrotactile stimuli (Aim 2).
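The logic of category learning described above can be sketched with a toy example. The code below is purely illustrative (it is not the lab's stimulus set or analysis code): it assumes hypothetical stimuli on a one-dimensional morph continuum in [0, 1] with an artificial category boundary at 0.5, and shows how two physically similar stimuli can fall in different categories while two more dissimilar stimuli share a category.

```python
# Illustrative sketch only: hypothetical morphed stimuli on a 1-D continuum,
# split into two artificial categories by a boundary at 0.5.

def category(morph_level, boundary=0.5):
    """Assign a stimulus to category 'A' or 'B' by its position on the continuum."""
    return "A" if morph_level < boundary else "B"

# Two stimuli that are physically very similar but straddle the boundary...
near_a, near_b = 0.45, 0.55
# ...and two stimuli that differ far more yet share a category.
far_1, far_2 = 0.05, 0.45

print(category(near_a), category(near_b))        # different categories
print(round(abs(near_a - near_b), 2))            # small physical difference
print(category(far_1), category(far_2))          # same category
print(round(abs(far_1 - far_2), 2))              # larger physical difference
```

The categorization task trains participants to honor the boundary rather than raw physical similarity, which is exactly what makes the learning problem nontrivial for the somatosensory system.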
The stimuli are presented on the trainees' forearms. Before and after training, fMRI and scalp-based EEG measures are applied to the trainees. The artificial vibrotactile stimuli give insights into the fundamental principles of category learning through touch. The speech stimuli are designed to address questions about cross-sensory learning and the linking of speech categories across hearing and vision. Language is a powerful domain for testing hypotheses about sensory learning, given its complexity, with consonants and vowels combining into syllables and whole words. It also allows us to test the extent to which the auditory system is unique in its ability to support speech perception.
Understanding the general principles of sensory processing in the brain, and in particular the commonalities and differences in the underlying neural mechanisms across sensory modalities, is of great interest for practical applications such as the design of neuroprostheses for hearing or vision disorders. For example, patients with auditory or visual sensory system damage may benefit from devices that substitute vibrotactile stimuli for information no longer available through their damaged sensory systems. Evidence that the somatosensory cortex can be recruited for speech perception would have important implications for rehabilitation following stroke. Vibrotactile stimuli could also be combined with visual or auditory stimuli to improve speech perception in noisy situations such as the cockpit of a plane.
The human visual system can solve the complex task of detecting objects in natural scenes within a fraction of a second. Computational simulations along with EEG and intracranial recording studies have indicated that such "rapid" object recognition can be accomplished in a single pass through the visual hierarchy, from primary visual cortex to task circuits in prefrontal cortex, in about 150-180 ms. Within this computational model, it is generally assumed that there is a progression from relatively simple features such as edges at the first cortical stages, to combinations of these simple features at intermediate levels, to "objects" at the top of the system. However, this "Standard Model" was recently challenged by behavioral demonstrations that reliable saccades to images containing animals were initiated as early as 120-130 ms after image onset, with even faster saccades to faces, within 100 ms. Given that saccadic programming and execution presumably require at least 20 ms, the underlying visual processing must have completed within 80-100 ms. These ultra-rapid detection times thus pose major problems for the current "Standard Model" of visual processing. In an NEI-funded research project, we are testing the hypothesis that the visual system can increase its processing speed on particular tasks by basing task-relevant decisions on signals that originate from intermediate processing levels, rather than requiring that stimuli be processed by the entire visual hierarchy. We are testing this hypothesis using a tightly integrated multidisciplinary approach: behavioral studies using eye tracking to determine the capabilities of human ultra-rapid object detection, EEG and fMRI studies to determine when and where in the brain object-selective responses occur, and computational modeling studies to determine whether such multilevel object mechanisms can account for human performance levels.
In contrast to the classic hierarchical model, in which objects can only be coded at the very top of the system, this project will show how "objects" can be detected by neurons located in early visual areas, especially when those objects are behaviorally very important and need to be localized accurately, with fundamental implications for our understanding of the role of early and intermediate visual areas in object detection.
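The timing argument behind this hypothesis can be made concrete with a back-of-the-envelope sketch. The stage names and per-stage latencies below are illustrative assumptions, not measured values: summing latencies along a fully feedforward hierarchy yields detection times near the classic 150-180 ms estimate, while reading out task-relevant signals at an intermediate stage yields times compatible with the ultra-rapid saccades described above.

```python
# Illustrative timing sketch (stage latencies are assumed, not measured).
SACCADE_MOTOR_MS = 20  # assumed minimum for saccade programming and execution

# Hypothetical per-stage processing latencies along the ventral stream.
stages = [("V1", 40), ("V2", 20), ("V4", 20), ("IT", 40), ("PFC", 40)]

def detection_latency(readout_stage):
    """Cumulative latency up to and including the readout stage, plus motor time."""
    total = 0
    for name, ms in stages:
        total += ms
        if name == readout_stage:
            return total + SACCADE_MOTOR_MS
    raise ValueError(f"unknown stage: {readout_stage}")

print(detection_latency("PFC"))  # full-hierarchy readout: 160 + 20 = 180 ms
print(detection_latency("V4"))   # intermediate readout: 80 + 20 = 100 ms
```

Under these assumed numbers, only the intermediate-readout route is fast enough to drive saccades initiated 100-130 ms after image onset, which is the core of the multilevel-readout hypothesis.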
One of the most impressive aspects of human cognition is its ability to convert, through practice, effortful tasks into ones that can be performed with little effort, even in parallel with other, attention-demanding tasks. The recent proliferation of technologies permitting constant engagement in our culture has made multitasking increasingly prevalent, giving rise to new policy challenges as people's attempts at multitasking exceed their cognitive capabilities, such as when texting while driving. In addition, it is increasingly appreciated that automaticity itself can have negative consequences, as automatic processes can reduce flexibility in responding due to a lack of conscious awareness and decreased control. A better understanding of the brain's capabilities and limitations in automating tasks is therefore highly desirable. With funding from NSF, we are using a combination of fMRI rapid adaptation (fMRI-RA) and EEG-RA to test the key hypothesis that under certain conditions, familiar tasks can be "offloaded" to parietal circuits, thus freeing up the frontal system to simultaneously perform additional, attention-demanding tasks.
Reading written words is an excellent domain through which to study how the brain assigns meaning to sensory stimuli. Given the cultural recency of reading and the variability of lexica across languages, reading arguably needs to depend on specific neural representations that are acquired through experience with written words, and this orthographic input must eventually be connected to semantic concept information, which likewise has to be acquired by learning the mapping from objects to their appropriate labels. We are currently investigating these questions using fMRI with NSF funding, and are also investigating, with NICHD funding, how the circuits supporting the reading of written words differ in dyslexic individuals.