Decoding inner language to treat speech disorders
- Date: January 12, 2022
- Source: Université de Genève
- Summary: What if it were possible to decode the internal language of individuals deprived of the ability to express themselves? Researchers have now managed to identify promising neural signals to capture our internal monologues. They were also able to identify the brain areas to be observed to try to decipher them in the future.
When human beings speak, several areas of their brain must be activated, but the function of these regions can be seriously impaired by damage to the nervous system. Amyotrophic lateral sclerosis (Charcot's disease), for example, can completely paralyze the muscles used to speak. In other cases, such as after a stroke, the brain areas responsible for language themselves are affected: this is called aphasia. Yet in many of these cases, patients' ability to imagine words and sentences remains partly functional.
Decoding our internal speech is therefore of great interest to neuroscience researchers. But the task is far from easy, as Timothée Proix, a researcher in the Department of Basic Neuroscience at the UNIGE Faculty of Medicine, explains: "Several studies have been conducted on the decoding of spoken language, but much less on the decoding of imagined speech. This is because, in the latter case, the associated neural signals are weak and variable compared to explicit speech. They are therefore difficult to decode by learning algorithms." That is, by computer programs.
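For a concrete picture of what "decoding by a learning algorithm" means here, the sketch below trains a simple classifier to map per-trial neural features (for example, band power per electrode) to one of two candidate words. Everything in it, including the feature counts, trial counts, and random stand-in data, is a hypothetical illustration and not the study's actual pipeline:

```python
# Purely illustrative: a classifier learns to map neural features to words.
# The data are random stand-ins, so accuracy will sit near chance, much as
# weak, variable imagined-speech signals make real decoding difficult.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 120, 40                     # assumed trial/feature counts
X = rng.standard_normal((n_trials, n_features))    # one feature vector per trial
y = rng.integers(0, 2, size=n_trials)              # which of two words was (imagined to be) said

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("cross-validated decoding accuracy:", scores.mean())
```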
A well-hidden speech
When a person speaks aloud, he or she produces sounds at precise moments in time. Researchers can therefore relate these tangible elements to the brain regions involved. With imagined speech, the task is much less straightforward: scientists have no obvious information about the sequencing and tempo of the words or sentences an individual formulates internally, and the brain areas recruited are fewer and less active.
To detect the neural signals of this very particular type of speech, the UNIGE team worked with thirteen hospitalized patients, in collaboration with two American hospitals. Data were collected via electrodes implanted directly in the patients' brains as part of the assessment of their epilepsy. "We asked these people to say words and then to imagine them. Each time, we examined several frequency bands of brain activity known to be involved in language," explains Anne-Lise Giraud, professor in the Department of Basic Neuroscience at the UNIGE Faculty of Medicine and newly appointed director of the Institut de l'Audition in Paris.
Tapping into the right frequency
The researchers observed several types of frequencies produced by different brain areas when these patients spoke, whether aloud or internally. "First, the oscillations called theta (4-8 Hz), which correspond to the average rhythm of syllable articulation. Then the gamma frequencies (25-35 Hz), observed in the brain areas where speech sounds are formed. Third, the beta waves (12-18 Hz), associated with regions recruited for more demanding cognitive work, for example to anticipate and predict how a conversation will unfold. Finally, the high frequencies (80-150 Hz), which are observed when a person speaks aloud," explains Pierre Mégevand, assistant professor in the Department of Clinical Neurosciences at the UNIGE Faculty of Medicine and associate physician at the HUG.
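As a rough illustration of how such frequency bands can be isolated from a recorded signal, the sketch below band-pass filters a single channel into the four bands named above. The sampling rate and the stand-in data are assumptions made for the example; the study's actual preprocessing may differ:

```python
# Illustrative only: splitting one iEEG channel into the bands named above.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # assumed sampling rate in Hz

BANDS = {
    "theta": (4, 8),          # syllable-rate rhythm
    "beta": (12, 18),         # predictive / top-down activity
    "gamma": (25, 35),        # speech-sound formation
    "high_gamma": (80, 150),  # overt speech production
}

def bandpass(signal, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter for one channel."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)

# One second of random data standing in for a recorded iEEG channel.
rng = np.random.default_rng(0)
channel = rng.standard_normal(FS)

band_signals = {name: bandpass(channel, lo, hi) for name, (lo, hi) in BANDS.items()}
for name, sig in band_signals.items():
    print(name, sig.shape)
```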
Thanks to these observations, the scientists were able to show that the low frequencies and the coupling between certain frequencies (beta and gamma in particular) contain essential information for the decoding of imagined speech. Their research also reveals that the temporal cortex is an important area for the eventual decoding of internal speech. Located in the left lateral part of the brain, this specific cerebral region is involved in the processing of information related to hearing and memory, but it also houses a part of Wernicke's area, responsible for the perception of words and language symbols.
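One widely used way to quantify this kind of cross-frequency coupling is phase-amplitude coupling, for instance the mean vector length between beta phase and gamma amplitude. The sketch below computes that measure on stand-in data; it illustrates the type of feature described, not necessarily the exact metric used in the study:

```python
# Hedged sketch of beta-phase / gamma-amplitude coupling (mean vector length).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(signal, low, high, fs, order=4):
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)

def beta_gamma_coupling(channel, fs=1000):
    """Strength of coupling between beta phase and gamma amplitude."""
    beta = bandpass(channel, 12, 18, fs)     # beta band, as defined above
    gamma = bandpass(channel, 25, 35, fs)    # gamma band, as defined above
    phase = np.angle(hilbert(beta))          # instantaneous beta phase
    amplitude = np.abs(hilbert(gamma))       # instantaneous gamma amplitude
    # Mean vector length: large when gamma amplitude is locked to beta phase.
    return np.abs(np.mean(amplitude * np.exp(1j * phase)))

rng = np.random.default_rng(1)
channel = rng.standard_normal(2000)  # stand-in for 2 s of one iEEG channel
print(f"beta-gamma coupling: {beta_gamma_coupling(channel):.4f}")
```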
These results are a major advance in the reconstruction of speech from neural activity. "But we are still a long way from being able to decode imagined language," concludes the research team.
Story Source:
Materials provided by Université de Genève. Note: Content may be edited for style and length.
Journal Reference:
- Timothée Proix, Jaime Delgado Saa, Andy Christen, Stephanie Martin, Brian N. Pasley, Robert T. Knight, Xing Tian, David Poeppel, Werner K. Doyle, Orrin Devinsky, Luc H. Arnal, Pierre Mégevand, Anne-Lise Giraud. Imagined speech can be decoded from low- and cross-frequency intracranial EEG features. Nature Communications, 2022; 13 (1) DOI: 10.1038/s41467-021-27725-3