
When your eyes override your ears: New insights into the McGurk effect

New model shows how the brain combines information from multiple senses

Date: February 16, 2017
Source: PLOS
Summary: Seeing is not always believing -- visual speech (mouth movements) mismatched with auditory speech (sounds) can result in the perception of an entirely different message. This mysterious illusion is known as the McGurk effect. Neuroscience researchers have created an algorithm to reveal key insight into why the brain can sometimes muddle up one of the most fundamental aspects of the human experience.

Seeing is not always believing -- visual speech (mouth movements) mismatched with auditory speech (sounds) can result in the perception of an entirely different message. This mysterious illusion is known as the McGurk effect. In new research, published in PLOS Computational Biology, neuroscience researchers have created an algorithm to reveal key insight into why the brain can sometimes muddle up one of the most fundamental aspects of the human experience.

The findings will be useful in understanding patients with speech perception deficits and in building computers able to understand auditory and visual speech.

"All humans grow up listening to tens of thousands of speech examples, with the result that our brains contain a comprehensive mapping of the likelihood that any given pair of mouth movements and speech sounds go together," said Dr. Michael Beauchamp, professor of neurosurgery at Baylor College of Medicine and senior author on the paper with John Magnotti, postdoctoral research fellow at Baylor. "In everyday situations we are frequently confronted with multiple talkers emitting auditory and visual speech cues, and the brain must decide whether or not to integrate a particular combination of voice and face."

"Even though our senses are constantly bombarded with information, our brain effortlessly selects the verbal and nonverbal speech of our conversation partners from this cacophony," Magnotti said.

The McGurk effect is an example of when this goes wrong. It occurs when the mouth movements a person sees override what they hear, causing them to perceive a different sound than the one actually being spoken. Only when the eyes are closed, so that the sound is heard on its own, is the correct message perceived. For example, the visual "ga" combined with the auditory "ba" results in the perception of "da."

Magnotti and Beauchamp created an algorithmic model of multisensory speech perception based on the principle of causal inference: given a particular pair of auditory and visual syllables, the brain calculates the likelihood that they come from a single talker or from multiple talkers and uses this likelihood to determine the final speech percept.
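
To give a flavor of that computation, here is a minimal sketch in Python. It is not the authors' published model: the one-dimensional "syllable" axis, the Gaussian noise levels, the prior over syllables, and the 50/50 prior on a common cause are all illustrative assumptions chosen for readability.

import numpy as np

def gauss(x, mu, sigma):
    # Likelihood of a noisy cue x given a true value mu with sensory noise sigma.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def prob_one_talker(x_aud, x_vis, sigma_aud=1.0, sigma_vis=1.0, prior_common=0.5):
    # Posterior probability that the auditory cue x_aud and the visual cue x_vis
    # were produced by a single talker, found by numerically integrating over a
    # one-dimensional "syllable" axis (an illustrative simplification).
    s = np.linspace(-10.0, 10.0, 2001)
    ds = s[1] - s[0]
    prior_s = gauss(s, 0.0, 3.0)  # assumed prior over possible syllables

    # One talker: both cues are generated by the same hidden syllable.
    like_common = np.sum(gauss(x_aud, s, sigma_aud) * gauss(x_vis, s, sigma_vis) * prior_s) * ds

    # Two talkers: each cue is generated by its own, independent syllable.
    like_separate = ((np.sum(gauss(x_aud, s, sigma_aud) * prior_s) * ds)
                     * (np.sum(gauss(x_vis, s, sigma_vis) * prior_s) * ds))

    # Bayes' rule over the two possible causal structures.
    evidence_common = like_common * prior_common
    evidence_separate = like_separate * (1.0 - prior_common)
    return evidence_common / (evidence_common + evidence_separate)

With these toy settings, cues that nearly agree (say, prob_one_talker(0.0, 0.3)) give a probability of roughly 0.7, favoring a single talker and hence integration, whereas widely conflicting cues (say, prob_one_talker(0.0, 6.0)) drive the probability close to zero.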

"We compared our model with an alternative model that is identical, except that it always integrates the available cues, meaning there is no casual inference of speech perception," said Beauchamp, who also is director of the Core for Advanced MRI at Baylor. "Using data from a large number of subjects, the model with causal inference better predicted how humans would or would not integrate audiovisual speech syllables."

"The results suggest a fundamental role for a causal inference type calculation going on in the brain during multisensory speech perception," Magnotti said.

Researchers already have an idea of how and where the brain separately encodes auditory and visual speech, but this algorithm sheds light on how the two are integrated. It will serve as a guide for pinpointing the specific brain regions essential for multisensory speech perception.

"Understanding how the brain combines information from multiple senses will provide insight into ways to improve declines in speech perception due to typical aging and even to develop devices that could enhance hearing across the life span," Beauchamp said.

(See http://openwetware.org/wiki/Beauchamp:McGurk_CI_Stimuli for examples.)


Story Source:

Materials provided by PLOS. Note: Content may be edited for style and length.


Journal Reference:

  1. John F. Magnotti, Michael S. Beauchamp. A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech. PLOS Computational Biology, 2017; 13 (2): e1005229 DOI: 10.1371/journal.pcbi.1005229

Cite This Page:

PLOS. "When your eyes override your ears: New insights into the McGurk effect." ScienceDaily. ScienceDaily, 16 February 2017. <www.sciencedaily.com/releases/2017/02/170216143941.htm>.
PLOS. (2017, February 16). When your eyes override your ears: New insights into the McGurk effect. ScienceDaily. Retrieved November 22, 2024 from www.sciencedaily.com/releases/2017/02/170216143941.htm
PLOS. "When your eyes override your ears: New insights into the McGurk effect." ScienceDaily. www.sciencedaily.com/releases/2017/02/170216143941.htm (accessed November 22, 2024).
