Linguist Tunes In To Pitch Processing In Brain
- Date: February 20, 2008
- Source: Purdue University
- Summary: More of the brain is busy processing pitch from language and other sounds than previously thought, according to a researcher in neurophonetics. New data reveal that the melody of speech is processed in neither a single region nor a specific hemisphere, but engages multiple areas that form large-scale networks involving both hemispheres of the brain.
More of the brain is busy processing pitch from language and other sounds than previously thought, according to a researcher in neurophonetics at Purdue University.
"By studying brain activity at different stages of processing pitch patterns in tonal languages, we have found that early activity in the brainstem is shaped by a person's language experience, even while the person is asleep, and consequently, we now believe it plays a much greater role in speech perception that we thought before," said linguistics professor Jackson T. Gandour.
Gandour is presenting information from several of his pitch-processing studies at the Feb. 16 "Brain Basis of Speech" session during the annual meeting of the American Association for the Advancement of Science.
"Everyone has a brainstem, but it's tuned differently depending on what sounds are behaviorally relevant to a person, for example, the sounds of his or her mother tongue," Gandour said.
The brainstem sits early along the auditory pathway, responding about 7-9 milliseconds after the auditory signal enters the ear. This is close to where pitch processing begins, in the cochlea and the auditory nerve, at about 0-2 milliseconds.
"We now know that there are regions of the brain involved in processing the sounds of language that we didn't know about before," he said. "We know even less about how pitch information is analyzed, transformed and represented at different levels of the brain in the translation from sound to meaning. A fuller understanding will give us a better idea what roles the brain regions are playing, and this information could help people with communication disorders or brain injuries."
Gandour collaborated with Purdue auditory electrophysiologist Ananthanarayan Ravi Krishnan on the brainstem studies, which compared electrical activity in young adult speakers of Mandarin, a tonal language, with that in speakers of English, a non-tonal language. The majority of the world's languages are tonal: they use inflections of pitch on syllables to distinguish one word from another. For example, in Mandarin the syllable "ma" means "mother" with a level tone, "hemp" with a rising tone, "horse" with a falling-rising tone and "scold" with a falling tone.
"Never did I expect we would find that language experience would shape the way the brainstem works," Gandour said. "The idea is that this sensory signal undergoes a set of transformations that are far more complicated than we originally thought. I feel like we have broken new ground and that we have just begun to go down a new avenue of research."
Gandour also collaborated with Purdue biomedical engineer Thomas Talavage as well as colleagues at the Indiana University Medical Center to apply the functional brain imaging techniques positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) to display blood flow activity at the level of the cerebral cortex.
These data reveal that the melody of speech is processed in neither a single region nor a specific hemisphere, but engages multiple areas that form large-scale networks involving both hemispheres.
"And moreover, we find that these networks are not circumscribed to language processes, but instead interact with more general sensory-motor and cognitive process in addition to those associated with language," he said.
Gandour and his colleagues have shown that when the melody of speech is processed there is a dynamic interplay between the left and right hemispheres of the brain. The processing of pitch information engages neural mechanisms in the brain's right hemisphere, while left-hemisphere regions mediate the processing of linguistic information, he said.
Gandour compares the evolution of his research program on brain and language to exploring a house, trying to figure out the structure and function of its different parts. The view through the attic window shows theories about elements, rules and representations of language, minus the brain. Moving down to the second floor offers the first look at the neurobiology of language. Scientists on this floor assess deficits in patients' language abilities that result from damage to one or the other side of the brain to determine which areas are necessary for normal language functioning.
"While the windows on the second floor are important, it's only when we get to the first floor that we begin to see actual brain activity," Gandour said. "By using brain imaging techniques, we can view activity in both hemispheres simultaneously while subjects are performing language tasks, telling us what areas on either side of the brain participate in language functions in the normal human brain.
"That leaves the cellar, and what do you find in the cellar" In a house, it's fine wine. In a human, it's the midbrain. That's where we tune our fine tones. And just as fine wines take time, so too does it take time for our brain to construct fine tones."
Gandour's studies on neurophonetics span nearly three decades: brain lesion deficits, 1979-2000; functional neuroimaging, 1998-present; and auditory electrophysiology, 2004-present.
Story Source:
Materials provided by Purdue University. Note: Content may be edited for style and length.