
How The Brain Sorts Babble Into Auditory Streams

Date:
October 11, 2005
Source:
Cell Press
Summary:
Known as "the cocktail party problem," the ability of the brain's auditory processing centers to sort a babble of different sounds, like cocktail party chatter, into identifiable individual voices has long been a mystery. Now, researchers analyzing how both humans and monkeys perceive sequences of tones have created a model that can predict the central features of this process, offering a new approach to studying its mechanisms.

Known as "the cocktail party problem," the ability of the brain'sauditory processing centers to sort a babble of different sounds, likecocktail party chatter, into identifiable individual voices has longbeen a mystery.

Now, researchers analyzing how both humans and monkeys perceive sequences of tones have created a model that can predict the central features of this process, offering a new approach to studying its mechanisms.

The research team--Christophe Micheyl, Biao Tian, Robert Carlyon, and Josef Rauschecker--published their findings in the October 6, 2005, issue of Neuron.

For both the humans and the monkeys, the researchers used an experimental method in which they played repetitive triplet sequences of tones of two alternating frequencies. Researchers know that when the frequencies are close together and alternate slowly, the listener perceives a single stream that sounds like a galloping horse. However, when the tones are at widely separated frequencies or played in rapid succession, the listener perceives two separate streams of beeps.
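To make the paradigm concrete, here is a minimal Python sketch of how such an alternating-tone triplet sequence could be generated. It is not the authors' stimulus code; the base frequency, tone duration, onset ramps, and the 6-semitone separation are illustrative assumptions, and only the A-B-A-plus-gap structure comes from the description above.

# Minimal sketch of the "ABA-" triplet paradigm described above (assumed
# parameter values, not the authors' actual stimuli): two pure tones A and B,
# separated by a chosen number of semitones, repeated as A-B-A plus a gap.
import numpy as np

def pure_tone(freq_hz, dur_s, sr=44100):
    """Generate a pure tone with short raised-cosine ramps to avoid clicks."""
    t = np.arange(int(dur_s * sr)) / sr
    tone = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * sr)  # 5 ms on/off ramps (assumed)
    on = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env = np.ones_like(tone)
    env[:ramp] = on
    env[-ramp:] = on[::-1]
    return tone * env

def aba_sequence(f_a=1000.0, semitone_sep=6, tone_dur=0.125,
                 n_triplets=20, sr=44100):
    """Build a repeating A-B-A-(silence) triplet sequence.

    Small separations at slow rates tend to be heard as one "galloping"
    stream; large separations or fast rates tend to split into two streams.
    """
    f_b = f_a * 2 ** (semitone_sep / 12)    # B tone, semitone_sep above A
    a = pure_tone(f_a, tone_dur, sr)
    b = pure_tone(f_b, tone_dur, sr)
    silence = np.zeros(int(tone_dur * sr))  # gap completing each triplet
    triplet = np.concatenate([a, b, a, silence])
    return np.tile(triplet, n_triplets)

# Example: an intermediate separation, where perception can flip over time.
signal = aba_sequence(semitone_sep=6)

Varying semitone_sep and tone_dur in a script like this reproduces the three regimes described here: small, slow frequency differences gallop as one stream, large or fast ones split into two, and intermediate settings can flip between the two percepts.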

Importantly, at intermediate frequency separations or speeds, after a few seconds the listeners' perceptions can shift from the single galloping sound to the two streams of beeps. This phenomenon let the researchers probe the neurobiology of auditory stream perception, because they could examine how perception changed while the stimulus itself stayed the same.

In the human studies, Micheyl, working in the MIT laboratory of Andrew Oxenham, asked subjects to listen to such tone sequences and signal when their perceptions changed. The researchers found that the subjects showed the characteristic perception changes at the intermediate frequency differences and speeds.

Then, Tian, working in Rauschecker's laboratory at Georgetown University Medical Center, recorded signals from neurons in the auditory cortex of monkeys as the same sequences of tones were played to the animals. These neuronal signals could be used to indicate the monkeys' perceptions of the tone sequences.

From the data on the monkeys, the researchers developed a model that aimed to predict, in humans, the change in perception between one and two auditory streams under different frequency separations and tone presentation rates.
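The press release does not describe the model's mathematics, so the following is only a hypothetical toy illustration, not the model published in Neuron. It encodes just the qualitative trend reported here: the probability of hearing two streams builds up over listening time and grows with frequency separation and presentation rate. The function name, the exponential build-up form, and the constant k are all assumptions made for illustration.

# Toy illustration only (assumed functional form, not the authors' model):
# probability of a "two streams" percept builds up over listening time,
# faster for larger frequency separations and faster presentation rates.
import numpy as np

def p_two_streams(t_s, semitone_sep, tones_per_s, k=0.02):
    """Hypothetical build-up function for the 'two streams' percept."""
    drive = k * semitone_sep * tones_per_s        # assumed combined drive
    return 1.0 - np.exp(-drive * np.asarray(t_s, dtype=float))

t = np.linspace(0, 10, 6)  # seconds of listening
print(np.round(p_two_streams(t, semitone_sep=6, tones_per_s=8), 2))  # segregates quickly
print(np.round(p_two_streams(t, semitone_sep=1, tones_per_s=4), 2))  # mostly stays fused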

"Using this approach, we demonstrate a striking correspondencebetween the temporal dynamics of neural responses to alternating-tonesequences in the primary cortex ... of awake rhesus monkeys and theperceptual build-up of auditory stream segregation measured in humanslistening to similar sound sequences," concluded the researchers.

In a commentary on the paper in the same issue of Neuron, Michael DeWeese and Anthony Zador wrote that the new approach "promises to elucidate the neural mechanisms underlying both our conscious experience of the auditory world and our impressive ability to extract useful auditory streams from a sea of distracters."


The researchers include Christophe Micheyl of Massachusetts Institute of Technology; Biao Tian and Josef P. Rauschecker of Georgetown University Medical Center; and Robert P. Carlyon of MRC Cognition and Brain Sciences Unit. The research was supported by an Engineering and Physical Sciences Research Council research grant via HearNet, NIH grants, and CNRS.

Micheyl et al.: "Perceptual organization of tone sequences in the auditory cortex of awake macaques." Published in Neuron, Vol. 48, 139-148, October 6, 2005. DOI: 10.1016/j.neuron.2005.08.039. www.neuron.org


Story Source:

Materials provided by Cell Press. Note: Content may be edited for style and length.


