Multisensory semantic integration in inferential comprehension

Incubator Fund, Cambridge Language Sciences. University of Cambridge (UK)

Multisensory integration refers to the neural processes by which information from different sensory modalities is synthesized into a coherent message. Language comprehension is usually multimodal (e.g. audiovisual integration during film comprehension), which makes multisensory perception a promising research area for disentangling the cognitive mechanisms underlying language. Importantly, language comprehension requires inferencing, a process central to comprehension, learning and decision-making. To illustrate with a multisensory example: if an actor in a thriller says “the victim was tied with red tape”, and a subsequent scene shows red tape in a policeman’s car, we will probably infer that the policeman is the murderer. Despite previous research on semantic congruence across modalities, to our knowledge no study has investigated multisensory semantic integration in comprehension. The main goal of the present research proposal is therefore to study whether the integration of information across different sensory modalities facilitates and/or interferes with the comprehension of inferential information in monolingual adults.

Principal Investigator

Ana I. Pérez Muñoz

Lines of research

Individual differences in memory and language