Neurocognitive perspectives on language comprehension
Submission status: Open
Submission deadline
The Editors at Communications Psychology, Nature Communications, and Scientific Reports invite submissions providing neurocognitive perspectives on language comprehension.
Communicating through spoken, signed, and written language is an integral part of human experience. The neuroscience of language comprehension provides insights into the complex cognitive processes that enable linguistic perception and can inform the development of interventions for language- and speech-related disorders.
Topics of interest for this Collection include, but are not limited to, semantic processing, syntactic processing, prediction, and second language comprehension in spoken, written, and signed languages.
This Collection aims to bring together high-quality research using behavioural and neural measures, psychophysics, computational modelling, neuroimaging, and neurostimulation. The journals will consider submissions of research Articles, Registered Reports, and Resources on the topic. More information on the different formats can be found here. Review and opinion pieces are welcome, but presubmission enquiries are recommended given space constraints.
We will highlight relevant publications in this Collection.
The Person Perception from Voices (PPV) model provides a unified account of person perception beyond identity recognition, incorporating the perception of other person characteristics, or personae.
Speech brain-computer interfaces face challenges scaling across individuals with different brain organization. Using minimally invasive recordings from 25 patients, the authors developed transfer learning methods that enable robust speech decoding even with incomplete brain coverage.
Using electrical recordings taken from the surface of the brain, researchers decode what words neurosurgical patients are saying and show that the brain plans words in a different order than they are ultimately spoken.
Zhu et al. combine neonatal and twin imaging to unveil innate functional connectivity patterns in the temporal pole, a hub region for semantic memory. These patterns are present in neonates, heritable, and linked to semantic functions in adults.
How the brain supports speaking and listening during natural conversation remains poorly understood. Here, by combining intracranial EEG recordings with natural language processing, the authors show that broadly distributed frontotemporal neural signals encode context-dependent linguistic information during both speaking and listening.
Using intracerebral recordings, the authors find that abstract prosodic categories in continuous speech are encoded differently from segmental features by Heschl’s gyrus, suggesting specialized cortical processing early in the auditory processing hierarchy.
Temporal integration throughout the human auditory cortex is predominantly locked to absolute time and does not vary with the duration of speech structures such as phonemes or words.
This study links acoustic, speech, and linguistic data with brain activity during real-life conversations to create a model that predicts neural responses during speech with high accuracy.
In natural conversations, people are able to stop speaking at any time. Using high-density electrocorticography, Zhao et al. found a distinct neural signal in the human premotor cortex that inhibits speech output to achieve abrupt stopping.