New model for the way humans localize sounds

Date:
November 6, 2019
Source:
New Jersey Institute of Technology
Summary:
One of the enduring puzzles of hearing loss is the decline in a person's ability to determine where a sound originates, a key survival faculty that allows animals to pinpoint the location of danger, prey and group members. Researchers are proposing a model based on a more dynamic neural code.

One of the enduring puzzles of hearing loss is the decline in a person's ability to determine where a sound originates, a key survival faculty that allows animals -- from lizards to humans -- to pinpoint the location of danger, prey and group members. In modern times, tracking down a lost cell phone with the "Find My Device" app, only to discover it has slipped under a sofa pillow, relies on minute differences in the ringing sound as it reaches each ear.

Unlike other sensory perceptions, such as feeling where raindrops hit the skin or being able to distinguish high notes from low on the piano, the direction of a sound must be computed; the brain estimates it by processing the difference in arrival time across the two ears, the so-called interaural time difference (ITD). A longstanding consensus among biomedical engineers is that humans localize sounds with a scheme akin to a spatial map or compass, with neurons aligned from left to right that fire individually when activated by a sound coming from a given angle -- say, at 30 degrees leftward from the center of the head.
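
To give a sense of the scale of this cue, here is a minimal sketch (not from the article) using the classic Woodworth approximation, which relates a distant source's azimuth to the ITD it produces. The head radius and speed of sound are assumed round numbers, and the helper name is hypothetical.

```python
import math

HEAD_RADIUS_M = 0.0875      # assumed average adult head radius, meters
SPEED_OF_SOUND_M_S = 343.0  # assumed speed of sound in air, m/s

def woodworth_itd_seconds(azimuth_deg: float) -> float:
    """Approximate ITD for a distant source at the given azimuth
    (0 = straight ahead, positive = toward the listener's right)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))

print(f"{woodworth_itd_seconds(-30.0) * 1e6:.0f} microseconds")  # about -260
```

For a source 30 degrees to the left of center, the angle used in the example above, this works out to roughly 260 microseconds, with the sound reaching the left ear first.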

But in research published this month in the journal eLife, Antje Ihlefeld, director of NJIT's Neural Engineering for Speech and Hearing Laboratory, is proposing a different model based on a more dynamic neural code. The discovery offers new hope, she says, that engineers may one day devise hearing aids, now notoriously poor in restoring sound direction, to correct this deficit.

"If there is a static map in the brain that degrades and can't be fixed, that presents a daunting hurdle. It means people likely can't "relearn" to localize sounds well. But if this perceptual capability is based on a dynamic neural code, it gives us more hope of retraining peoples' brains," Ihlefeld notes. "We would program hearing aids and cochlear implants not just to compensate for an individual's hearing loss, but also based upon how well that person could adapt to using cues from their devices. This is particularly important for situations with background sound, where no hearing device can currently restore the ability to single out the target sound. We know that providing cues to restore sound direction would really help."

What led her to this conclusion is a journey of scholarly detective work that began with a conversation with Robert Shapley, an eminent neurophysiologist at NYU who remarked on a peculiarity of human binocular depth perception -- the ability to determine how far away a visual object is -- that also depends on a computation comparing input received by both eyes. Shapley noted that these distance estimates are systematically less accurate for low-contrast stimuli (images that are more difficult to distinguish from their surroundings) than for high-contrast ones.

Ihlefeld and Shapley wondered whether the same neural principle applies to sound localization: is it less accurate for softer sounds than for louder ones? That would depart from the prevailing spatial-map theory, known as the Jeffress model, which holds that sounds of all volumes are processed -- and therefore perceived -- the same way. Physiologists have long disagreed with that model, proposing instead that mammals rely on a more dynamic scheme: neurons fire at different rates depending on directional signals, and the brain compares these rates across sets of neurons to dynamically build up a map of the sound environment.
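
To make the contrast concrete, the toy sketch below (an illustrative assumption, not the authors' model or either study's data) decodes a single ITD two ways: a Jeffress-style labeled-line readout, which picks the preferred ITD of the most active narrowly tuned unit and is unaffected by overall level, and a two-channel rate readout, in which a decoder calibrated for one sound level is biased toward the midline when the sound is softer. All tuning parameters are made up for illustration.

```python
import numpy as np

itd_axis_us = np.linspace(-700, 700, 141)  # candidate ITDs, microseconds

def decode_place_code(stimulus_itd_us: float, gain: float = 1.0) -> float:
    """Jeffress-style labeled line: each unit prefers one ITD, and the decoded
    direction is the preferred ITD of the most active unit. Scaling all rates
    by `gain` (overall sound level) does not move the winner."""
    tuning_width_us = 100.0  # assumed tuning-curve width
    rates = gain * np.exp(-0.5 * ((itd_axis_us - stimulus_itd_us) / tuning_width_us) ** 2)
    return float(itd_axis_us[np.argmax(rates)])

def decode_rate_code(stimulus_itd_us: float, gain: float = 1.0) -> float:
    """Two-channel rate code: left- and right-tuned populations fire at rates
    that grow linearly with ITD and scale with level (`gain`). A readout that
    assumes a fixed level maps the rate difference back to an ITD, so softer
    sounds (smaller gain) bias the estimate toward zero."""
    slope = 0.05     # assumed spikes/s per microsecond of ITD
    baseline = 50.0  # assumed baseline firing rate, spikes/s
    rate_right = gain * (baseline + slope * stimulus_itd_us)
    rate_left = gain * (baseline - slope * stimulus_itd_us)
    return (rate_right - rate_left) / (2.0 * slope)  # calibrated for gain == 1

for gain in (1.0, 0.3):  # loud vs. soft
    print(gain, decode_place_code(-260.0, gain), decode_rate_code(-260.0, gain))
# place code: -260 us at both levels; rate code: -260 us when loud, -78 us when soft
```

The particular numbers do not matter; the point is the qualitative prediction that a pure place code yields level-independent estimates, whereas a rate code naturally predicts the level dependence the researchers went looking for.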

"The challenge in proving or disproving these theories is that we can't look directly at the neural code for these perceptions because the relevant neurons are located in the human brainstem, so we cannot obtain high-resolution images of them," she says. "But we had a hunch that the two models would give different sound location predictions at a very low volume."

They searched the literature and found only two papers that had recorded from neural tissue at such low sound levels. One study was in barn owls -- a species thought to rely on the Jeffress model, based on high-resolution recordings of the birds' brain tissue -- and the other was in a mammal, the rhesus macaque, an animal thought to use dynamic rate coding. They then carefully reconstructed the firing properties of the neurons recorded in those earlier studies and used the reconstructions to estimate sound direction as a function of both ITD and volume.

"We expected that for the barn owl data, it really should not matter how loud a source is -- the predicted sound direction should be really accurate no matter the sound volume -- and we were able to confirm that. However, what we found for the monkey data is that predicted sound direction depended on both ITD and volume," she said. "We then searched the human literature for studies on perceived sound direction as a function of ITD, which was also thought not to depend on volume, but surprisingly found no evidence to back up this long-held belief."

She and her graduate student, Nima Alamatsaz, then enlisted volunteers on the NJIT campus to test their hypothesis, measuring how volume affects where people perceive a sound to originate.

"We built an extremely quiet, sound-shielded room with specialized calibrated equipment that allowed us to present sounds with high precision to our volunteers and record where they perceived the sound to originate. And sure enough, people misidentified the softer sounds," notes Alamatsaz.

"To date, we are unable to describe sound localization computations in the brain precisely," adds Ihlefeld. "However, the current results are inconsistent with the notion that the human brain relies on a Jeffress-like computation. Instead, we seem to rely on a slightly less accurate mechanism.

More broadly, the researchers say, their studies point to previously overlooked parallels between hearing and visual perception, suggesting that rate-based coding is a basic underlying operation whenever the brain computes spatial dimensions from two sensory inputs.

"Because our work discovers unifying principles across the two senses, we anticipate that interested audiences will include cognitive scientists, physiologists and computational modeling experts in both hearing and vision," Ihlefeld says. "It is fascinating to compare how the brain uses the information reaching our eyes and ears to make sense of the world around us and to discover that two seemingly unconnected perceptions -- vision and hearing -- may in fact be quite similar after all."


Story Source:

Materials provided by New Jersey Institute of Technology. Original written by Tracey Regan. Note: Content may be edited for style and length.


Journal Reference:

  1. Antje Ihlefeld, Nima Alamatsaz, Robert M Shapley. Population rate-coding predicts correctly that human sound localization depends on sound intensity. eLife, 2019; 8 DOI: 10.7554/eLife.47027

