
Components of speech recognition pathway in humans identified

Date:
June 29, 2011
Source:
Georgetown University Medical Center
Summary:
Neuroscientists have defined, for the first time, three different processing stages that a human brain needs to identify sounds such as speech -- and discovered that they are the same as ones identified in non-human primates.

Neuroscientists at Georgetown University Medical Center (GUMC) have defined, for the first time, three different processing stages that a human brain needs to identify sounds such as speech -- and discovered that they are the same as ones identified in non-human primates.

In the June 22 issue of the Journal of Neuroscience, the researchers say their discovery -- made possible with the help of 13 human volunteers who spent time in a functional MRI machine -- could offer important insights into what goes wrong when someone has difficulty speaking, an act that involves hearing one's own voice-generated sounds, or difficulty understanding the speech of others.

But more than that, the findings help shed light on the complex, and extraordinarily elegant, workings of the human auditory brain, says Josef Rauschecker, PhD, a professor in the departments of physiology/biophysics and neuroscience and a member of the Georgetown Institute for Cognitive and Computational Sciences at GUMC.

"This is the first time we have been able to identify three discrete brain areas that help people recognize and understand the sounds they are hearing," says Rauschecker. "These sounds, such as speech, are vitally important to humans, and it is critical that we understand how they are processed in the human brain."

Rauschecker and his colleagues at Georgetown have been instrumental in building a unified theory of how the human brain processes speech and language. They have shown that both human and non-human primates process speech along two parallel pathways, each of which runs from lower- to higher-functioning neural regions.

These pathways, dubbed the "what" and "where" streams, are roughly analogous to the two streams the brain uses to process sight, though they run through different regions. The "where" stream localizes a sound; the "what" stream identifies it.

Both pathways begin with the processing of signals in the auditory cortex, located inside a deep fissure on the side of the brain underneath the temples -- the so-called temporal lobe. Information processed by the "what" pathway then flows forward along the outside of the temporal lobe; that pathway's job is to recognize complex auditory signals, including communication sounds and their meaning (semantics). The "where" pathway lies mostly in the parietal lobe, above the temporal lobe, and processes the spatial aspects of a sound -- its location and its motion in space -- but is also involved in providing feedback during the act of speaking.

Auditory perception -- the processing and interpretation of sound information -- is tied to anatomical structures; signals move from lower to higher brain regions, Rauschecker says. "Sound as a whole enters the ear canal and is first broken down into single tone frequencies, then higher-up neurons respond only to more complex sounds, including those used in the recognition of speech, as the neural representation of the sound moves through the various brain regions," he says.
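
As a loose computational analogy -- not part of the study itself -- the first "breaking down" step Rauschecker describes resembles a Fourier decomposition of sound into its component frequencies. The short Python sketch below is purely illustrative; the signal and every parameter in it are hypothetical.

```python
import numpy as np

# Illustrative analogy only: the earliest auditory processing stage is
# often compared to a frequency decomposition. Everything here is a
# hypothetical example, not the study's method.
fs = 16_000                        # sample rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)      # half a second of signal

# A compound sound built from three pure tones
signal = (np.sin(2 * np.pi * 220 * t)
          + 0.5 * np.sin(2 * np.pi * 440 * t)
          + 0.25 * np.sin(2 * np.pi * 880 * t))

# "Broken down into single tone frequencies": take the magnitude spectrum
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)

# The three strongest components recover the original tones
peaks = freqs[np.argsort(spectrum)[-3:]]
print(np.sort(peaks))              # -> [220. 440. 880.]
```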

In this study, Rauschecker and his colleagues -- computational neuroscientist Maximilian Riesenhuber, PhD, and Mark Chevillet, a student in the Interdisciplinary Program in Neuroscience -- identified, in humans, the three distinct areas of the "what" pathway that had been seen in non-human primates. Only two had been identified in previous human studies.

The first, and most basic, is the "core," which analyzes tones at the level of simple frequencies. The second area, the "belt," wraps around the core and integrates several tones -- "like buzz sounds," Rauschecker says -- that lie close to each other. The third area, the "parabelt," responds to speech sounds such as vowels, which are essentially complex bursts of multiple frequencies.
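
To make the three-stage idea concrete, here is a toy sketch -- again purely illustrative, and in no way the authors' model -- in which a "core" stage reports single frequencies, a "belt" stage pools neighboring frequencies into bands, and a "parabelt" stage responds only when a vowel-like combination of bands is present. The band width and "formant" frequencies are assumptions chosen for the example.

```python
import numpy as np

def core(signal, fs):
    """Stage 1 (core): respond to simple, single frequencies."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    return freqs, spectrum

def belt(freqs, spectrum, bandwidth=200.0):
    """Stage 2 (belt): integrate several tones that lie close together."""
    starts = np.arange(0.0, freqs[-1], bandwidth)
    energy = np.array([spectrum[(freqs >= lo) & (freqs < lo + bandwidth)].sum()
                       for lo in starts])
    return starts, energy

def parabelt(starts, energy, formants=(700.0, 1100.0)):
    """Stage 3 (parabelt): respond only to a complex burst of multiple
    frequencies, the way a vowel combines several bands at once."""
    idx = [int(np.searchsorted(starts, f, side="right")) - 1 for f in formants]
    return all(energy[i] > energy.mean() for i in idx)

fs = 16_000
t = np.arange(0, 0.5, 1 / fs)
vowel_like = np.sin(2 * np.pi * 700 * t) + np.sin(2 * np.pi * 1100 * t)

freqs, spec = core(vowel_like, fs)
starts, energy = belt(freqs, spec)
print(parabelt(starts, energy))    # -> True for this vowel-like input
```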

Rauschecker is fascinated by the fact that although speech and language are considered to be uniquely human abilities, the emerging picture of brain processing of language suggests "in evolution, language must have emerged from neural mechanisms at least partially available in animals," he says. "There appears to be a conservation of certain processing pathways through evolution in humans and nonhuman primates."

The study was funded by a National Science Foundation grant awarded to Rauschecker and Riesenhuber.


Story Source:

Materials provided by Georgetown University Medical Center. Note: Content may be edited for style and length.


Journal Reference:

  1. M. Chevillet, M. Riesenhuber, J. P. Rauschecker. Functional Correlates of the Anterolateral Processing Hierarchy in Human Auditory Cortex. Journal of Neuroscience, 2011; 31 (25): 9345 DOI: 10.1523/JNEUROSCI.1448-11.2011

