When we see a tiger, read the word “tiger”, or hear the word “tiger” spoken, how do we know that these different sensory inputs refer to the same thing? Initial evidence suggests that multisensory information about meaning is represented in the anterior temporal lobe as neural patterns that are distributed across space and change rapidly in time during processing. Confirming this hypothesis would have important consequences for neuroimaging: methods that lack either high spatial resolution (EEG, MEG) or high temporal resolution (fMRI) may currently be unable to reveal this type of code. This would raise the question of whether neuroimaging research on other brain processes has missed similar detail. My PhD research will seek to provide further evidence that the brain can represent information in a distributed, dynamic way; to assess the capacity of different neuroimaging methods to reveal semantic codes; and to develop new methods that allow distributed, dynamic codes to be studied.
Magnetic resonance imaging (MRI)