How the brain controls our speech
- Date: June 10, 2020
- Source: Goethe University Frankfurt
- Summary: Speaking requires both sides of the brain. Each hemisphere takes on part of the complex task of forming sounds, modulating the voice and monitoring what has been said. However, the division of labour is different from what was previously assumed, as an interdisciplinary team of neuroscientists and phoneticians has discovered.
Speaking requires both sides of the brain. Each hemisphere takes on part of the complex task of forming sounds, modulating the voice and monitoring what has been said. However, the division of labour is different from what was previously assumed, as an interdisciplinary team of neuroscientists and phoneticians at Goethe University Frankfurt and the Leibniz-Centre General Linguistics Berlin has discovered: it is not just the right hemisphere that analyses how we speak -- the left hemisphere also plays a role.
Until now, it has been assumed that the spoken word arises in the left side of the brain and is analysed by the right side. According to this accepted doctrine, when we learn to speak English and, for example, practise the sound corresponding to "th," the left side of the brain controls the motor function of articulators such as the tongue, while the right side analyses whether the produced sound actually sounds as intended.
The division of labour actually follows different principles, as Dr Christian Kell from the Department of Neurology at Goethe University explains: "While the left side of the brain controls temporal aspects such as the transition between speech sounds, the right hemisphere is responsible for the control of the sound spectrum. When you say 'mother', for example, the left hemisphere primarily controls the dynamic transitions between 'th' and the vowels, while the right hemisphere primarily controls the sounds themselves." His team, together with the phonetician Dr Susanne Fuchs, demonstrated this division of labour between temporal and spectral control of speech for the first time in studies in which speakers talked while their brain activity was recorded using functional magnetic resonance imaging.
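The temporal/spectral distinction can be made concrete in signal-processing terms. The Python sketch below is purely illustrative and is not the study's analysis pipeline; the toy signal, sample rate and feature choices are assumptions. It separates a speech-like signal into a fast-changing amplitude envelope (the "temporal" aspect, where sound-to-sound transitions live) and a short-time spectrum (the "spectral" aspect, the make-up of the sound itself).

```python
# Minimal sketch (not the study's method): decomposing a toy speech-like
# signal into temporal and spectral descriptors.
import numpy as np
from scipy.signal import hilbert, stft

fs = 16000                      # sample rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)   # 500 ms of signal

# Toy signal: a 220 Hz voiced tone whose amplitude rises and falls,
# crudely mimicking consonant-vowel transitions.
envelope_true = np.clip(np.sin(2 * np.pi * 4 * t), 0, None)
signal = envelope_true * np.sin(2 * np.pi * 220 * t)

# Temporal aspect: the amplitude envelope and how quickly it changes.
envelope = np.abs(hilbert(signal))
transition_rate = np.gradient(envelope, 1 / fs)

# Spectral aspect: short-time spectrum (which frequencies make up the sound).
freqs, times, Z = stft(signal, fs=fs, nperseg=512)
spectrum = np.abs(Z).mean(axis=1)  # average spectral envelope

print("peak envelope change (a 'temporal' feature):", np.max(np.abs(transition_rate)))
print("dominant frequency (a 'spectral' feature): %.0f Hz" % freqs[np.argmax(spectrum)])
```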
A possible explanation for this division of labour between the two sides of the brain is that the left hemisphere generally analyses fast processes, such as the transitions between speech sounds, better than the right hemisphere, while the right hemisphere could be better at controlling the slower processes required for analysing the sound spectrum. A previous study on hand motor function, published in the journal eLife, demonstrates that this is in fact the case. Kell and his team wanted to understand why the right hand is preferred for fast actions and the left hand for slow ones. When cutting bread, for example, the right hand slices with the knife while the left hand holds the bread.
In that experiment, the scientists had right-handed participants tap with both hands to the rhythm of a metronome. In one condition they were to tap on every beat, in another only on every fourth beat. As it turned out, the right hand was more precise during the fast tapping sequence, and the left hemisphere, which controls the right side of the body, exhibited increased activity. Conversely, the left hand matched the slower rhythm more closely, and the right hemisphere exhibited increased activity.
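One common way to quantify tapping precision is the variability of tap-to-beat asynchronies. The sketch below uses made-up numbers, not the study's data; the jitter values are hypothetical and merely chosen to mirror the reported pattern (right hand more precise at the fast tempo, left hand at the slow one).

```python
# Minimal sketch with simulated data (not the study's measurements):
# tapping precision as the standard deviation of tap-minus-beat offsets.
import numpy as np

rng = np.random.default_rng(0)

def tap_asynchronies(n_taps, jitter_ms):
    """Simulated tap-minus-beat offsets in milliseconds."""
    return rng.normal(loc=0.0, scale=jitter_ms, size=n_taps)

# Hypothetical jitter values, assumed only for illustration.
conditions = {
    ("right", "fast"): tap_asynchronies(200, jitter_ms=15),
    ("left",  "fast"): tap_asynchronies(200, jitter_ms=25),
    ("right", "slow"): tap_asynchronies(50,  jitter_ms=30),
    ("left",  "slow"): tap_asynchronies(50,  jitter_ms=22),
}

for (hand, tempo), async_ms in conditions.items():
    # Lower SD means more precise tapping in that condition.
    print(f"{hand:>5} hand, {tempo} tempo: SD = {np.std(async_ms):5.1f} ms")
```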
Taken together, the two studies paint a convincing picture of how complex behaviours -- hand motor function and speech -- are controlled by both cerebral hemispheres: the left side of the brain preferentially controls fast processes, while the right side, working in parallel, tends to control slower ones.
Story Source:
Materials provided by Goethe University Frankfurt. Note: Content may be edited for style and length.
Journal Reference:
- Mareike Floegel, Susanne Fuchs, Christian A. Kell. Differential contributions of the two cerebral hemispheres to temporal and spectral speech feedback control. Nature Communications, 2020; 11 (1) DOI: 10.1038/s41467-020-16743-2