
Unlocking the secret of how the brain encodes speech

Progress toward decoding speech to help completely paralyzed people like Stephen Hawking talk again

Date:
September 26, 2018
Source:
Northwestern University
Summary:
People like the late Stephen Hawking are unable to speak because their muscles are paralyzed. Scientists want to help these individuals communicate by developing a brain machine interface to decode the commands the brain is sending to the tongue, palate, lips and larynx. New research has moved science closer by unlocking new information about how the brain encodes speech. They discovered the brain controls speech in a similar way to how it controls arm movements.

People like the late Stephen Hawking can think about what they want to say, but are unable to speak because their muscles are paralyzed. In order to communicate, they can use devices that sense a person's eye or cheek movements to spell out words one letter at a time. However, this process is slow and unnatural.

Scientists want to help these completely paralyzed, or "locked-in," individuals communicate more intuitively by developing a brain machine interface to decode the commands the brain is sending to the tongue, palate, lips and larynx (the articulators).

The person would simply try to say words, and the brain machine interface (BMI) would translate those attempts into speech.

New research from Northwestern Medicine and Weinberg College of Arts and Sciences has moved science closer to creating speech-brain machine interfaces by unlocking new information about how the brain encodes speech.

Scientists have discovered that the brain controls speech production in a similar manner to how it controls the production of arm and hand movements. To make this discovery, researchers recorded signals from two parts of the brain and decoded what those signals represented. They found that the brain represents both the goals of what we are trying to say (speech sounds like "pa" and "ba") and the individual movements we use to achieve those goals (how we move our lips, palate, tongue and larynx). These two representations occur in different parts of the brain.

"This can help us build better speech decoders for BMIs, which will move us closer to our goal of helping people that are locked-in speak again," said lead author Dr. Marc Slutzky, associate professor of neurology and of physiology at Northwestern University Feinberg School of Medicine and a Northwestern Medicine neurologist.

The study will be published Sept. 26 in the Journal of Neuroscience.

The discovery could also potentially help people with other speech disorders, such as apraxia of speech, which occurs in children as well as in adults after stroke. In apraxia of speech, an individual has difficulty translating speech messages from the brain into spoken language.

How words are translated from your brain into speech

Speech is composed of individual sounds, called phonemes, that are produced by coordinated movements of the lips, tongue, palate and larynx. However, scientists didn't know exactly how these movements, called articulatory gestures, are planned by the brain. In particular, it was not fully understood how the cerebral cortex controls speech production, and no evidence of gesture representation in the brain had been shown.

"We hypothesized speech motor areas of the brain would have a similar organization to arm motor areas of the brain," Slutzky said. "The precentral cortex would represent movements (gestures) of the lips, tongue, palate and larynx, and the higher level cortical areas would represent the phonemes to a greater extent."

That's exactly what they found.

"We studied two parts of the brain that help to produce speech," Slutzky said. "The precentral cortex represented gestures to a greater extent than phonemes. The inferior frontal cortex, which is a higher level speech area, represented both phonemes and gestures."

Chatting up patients in brain surgery to decode their brain signals

Northwestern scientists recorded brain signals from the cortical surface using electrodes placed on the brains of patients undergoing surgery to remove brain tumors. The patients had to be awake during their surgery, so researchers asked them to read words from a screen.

After the surgery, scientists marked the times when the patients produced phonemes and gestures. They then used the recorded brain signals from each cortical area to decode which phonemes and gestures had been produced, and measured the decoding accuracy. Signals from the precentral cortex decoded gestures more accurately than phonemes, while signals from the inferior frontal cortex decoded phonemes and gestures equally well. This information helped support linguistic models of speech production. It will also help guide engineers in designing brain machine interfaces to decode speech from these brain areas.
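
For readers curious how such a decoding comparison can work in practice, here is a minimal sketch in Python. It is not the study's actual analysis: the feature matrices, labels and linear classifier below are stand-ins used purely to illustrate how cross-validated decoding accuracy from two brain areas could be compared.

    # Minimal, hypothetical sketch of comparing decoding accuracy across brain areas.
    # Synthetic data and a standard linear classifier stand in for the study's
    # actual recordings and methods, which are not described in this article.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def decoding_accuracy(features, labels):
        """Mean cross-validated accuracy of a linear decoder on neural features."""
        return cross_val_score(LogisticRegression(max_iter=1000), features, labels, cv=5).mean()

    # Stand-in recordings: one (trials x channels) feature matrix per cortical area.
    n_trials, n_channels = 200, 32
    areas = {
        "precentral": rng.normal(size=(n_trials, n_channels)),
        "inferior frontal": rng.normal(size=(n_trials, n_channels)),
    }

    # Stand-in labels: which phoneme and which articulatory gesture occurred on each trial.
    phoneme_labels = rng.integers(0, 4, size=n_trials)  # e.g., "pa", "ba", "ta", "da"
    gesture_labels = rng.integers(0, 4, size=n_trials)  # e.g., lip closure, tongue raise, ...

    for area, signals in areas.items():
        print(area,
              "| phoneme accuracy:", round(decoding_accuracy(signals, phoneme_labels), 2),
              "| gesture accuracy:", round(decoding_accuracy(signals, gesture_labels), 2))

With real recordings, a comparison of this kind is what reveals whether a brain area represents gestures, phonemes, or both.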

The next step for the research is to develop an algorithm for brain machine interfaces that would not only decode gestures but also combine those decoded gestures to form words.
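
As a rough illustration of that idea, and not the authors' algorithm, the sketch below shows how a decoded sequence of articulatory gesture groups might be mapped onto phonemes and joined into a word; the gesture names and the lookup table are hypothetical.

    # Hypothetical illustration: turning decoded gesture groups into a word.
    # The gesture names and the gesture-to-phoneme mapping are invented for this example.
    GESTURE_TO_PHONEME = {
        ("lip closure", "voicing"): "b",
        ("lip closure",): "p",
        ("tongue tip raise", "voicing"): "d",
        ("vowel posture",): "a",
    }

    def gestures_to_word(gesture_groups):
        """Map each decoded group of gestures to a phoneme and join the phonemes into a word."""
        return "".join(GESTURE_TO_PHONEME.get(tuple(group), "?") for group in gesture_groups)

    # Example: gesture groups a decoder might output for the syllable "ba".
    print(gestures_to_word([["lip closure", "voicing"], ["vowel posture"]]))  # prints "ba"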

This was an interdisciplinary, cross-campus investigation; authors included a neurosurgeon, a neurologist, a computer scientist, a linguist, and biomedical engineers. In addition to Slutzky, Northwestern authors are Emily M. Mugler, Matthew C. Tate (neurological surgery), Jessica W. Templer (neurology) and Matthew A. Goldrick (linguistics).

The paper is titled "Differential Representation of Articulatory Gestures and Phonemes in Precentral and Inferior Frontal Gyri."

This work was supported in part by the Doris Duke Charitable Foundation; the Northwestern Memorial Foundation Dixon Translational Research Award (including partial funding from the National Center for Advancing Translational Sciences, UL1TR000150 and UL1TR001422); NIH grants F32DC015708 and R01NS094748; and National Science Foundation grant 1321015.


Story Source:

Materials provided by Northwestern University. Note: Content may be edited for style and length.


Journal Reference:

  1. Emily M. Mugler, Matthew C. Tate, Karen Livescu, Jessica W. Templer, Matthew A. Goldrick, Marc W. Slutzky. Differential Representation of Articulatory Gestures and Phonemes in Precentral and Inferior Frontal Gyri. Journal of Neuroscience, 2018; DOI: 10.1523/JNEUROSCI.1206-18.2018

