A primary focus of our lab is on how people comprehend words, sentences, and larger units of language.
Language provides one of the most powerful windows onto the brain's storehouse of world knowledge. Language contexts play a crucial role in shaping the processing of meaning: during comprehension, semantic and structural constraints make some words more likely to occur than others. These context effects have a profound influence on the speed and accuracy of word processing, as manifested not only behaviorally but also neurophysiologically.
In particular, meaningful stimuli elicit a brain electrical response known as the "N400". This brain activity, arising in part from medial temporal cortical regions, has been closely linked to language comprehension. The N400 is elicited by meaningful stimuli of all kinds, and its amplitude is reduced when contextual information is congruent.
We have used N400 responses to examine how words and pictures are linked to meaning, and how context information guides the search through long-term memory, primarily in monolingual speakers but also in bilinguals.
We have found that the organization of information in long-term memory has a profound impact on language processing and, moreover, that, when it can, the brain uses context information to predict (i.e., to anticipate and prepare for) the semantic and even perceptual features of upcoming items. This stands in contrast to most mainstream language comprehension models, which assume that word access and integration are largely bottom-up processes that initially receive very little guidance from context information. Our data instead suggest that top-down context information impacts even early stages of processing and is an integral part of normal, efficient language comprehension – a part that is, however, susceptible to age- and disease-related deterioration (outlined in the aging section).
In addition to investigating the processes that are used to build meaning representations during language comprehension, we have explored the nature of the semantic representations that are formed on-line and ultimately stored in long-term memory. During comprehension, information derived from different modalities must rapidly come together to yield a coherent, conceptual-level understanding. There has long been an assumption that such concepts are “amodal” and relatively static in nature. In a series of studies, we have examined how semantic memory is structured – and used on-line – as a function of modality, stimulus type, and mood. The results of these studies suggest that meaning arises from synchronized activity in distributed brain networks that perform modality-specific processing, and that factors like representation type, experience, and mood can all change the nature of the semantic information that is accessed in response to the same stimulus.
Our N400 data thus suggest that semantic memory may consist of feature mosaics distributed across multiple, higher-order perceptual and motor processing areas, from which meaning emerges by virtue of temporally coincident and functionally similar activity within a number of brain areas. Somewhat different activity patterns arise for different types of stimuli (e.g., pictures and words) and even for the same stimulus as a function of factors such as context, experience, or mood.