
All in the timing: Mapping auditory brain cells for maximum hearing precision

Date:
September 12, 2016
Source:
Lehigh University
Summary:
Scientists have identified the specific synaptic and postsynaptic characteristics that allow auditory neurons to compute with temporal precision, revealing the optimal arrangement of inputs and electrical properties needed for neurons to process their 'preferred' frequency with maximum precision.

When it comes to hearing, precision is important. Because vertebrates, such as birds and humans, have two ears -- and sounds from either side travel different distances to arrive at each one -- localizing sound involves discerning subtle differences in when sounds arrive. The brain has to keep time better than a Swiss watch in order to locate where sound is coming from.
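
To get a sense of how fine this timing is, a rough back-of-the-envelope estimate helps. The head width and speed-of-sound figures in the sketch below are generic textbook values, not numbers from the study:

    # Rough estimate of the largest interaural time difference (ITD):
    # the extra distance a sound travels around the head, divided by the
    # speed of sound. Both values are illustrative assumptions.
    HEAD_WIDTH_M = 0.20          # assumed ear-to-ear distance, in meters
    SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air at about 20 C

    max_itd_s = HEAD_WIDTH_M / SPEED_OF_SOUND_M_S
    print(f"Largest ITD is roughly {max_itd_s * 1e6:.0f} microseconds")
    # Prints roughly 580 microseconds; the differences the brain actually
    # resolves are far smaller still.

Even this worst-case difference between the two ears is well under a millisecond, which is why the brain's timekeeping invites the Swiss-watch comparison.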

In fact, this processing precision is a limiting factor in how well one can locate sounds and perceive speech.

A team of researchers led by R. Michael Burger, neuroscientist and associate professor in Lehigh University's Department of Biological Sciences, has identified the specific synaptic and postsynaptic characteristics that allow auditory neurons to compute with temporal precision -- ultimately revealing the optimal arrangement of both input and electrical properties needed for neurons to process their "preferred" frequency with maximum precision.

In order for birds and mammals to hear, hair cells in the cochlea -- the auditory portion of the inner ear -- vibrate in response to sounds and thereby convert sound into electrical activity. Each hair cell is tuned to a particular frequency, which humans ultimately experience as pitch.

Every hair cell in the cochlea is partnered with several neurons that convey information from the ear to the brain in an orderly way. The tone responses in the cochlea are, essentially, "remapped" to the cochlear nucleus, the first brain center to process sounds.

This unique spatial arrangement of how sounds of different frequencies are processed in the brain is called tonotopy. It can be visualized as a kind of sound map: tones that are close to each other in terms of frequency are represented by neighboring neurons of the cochlear nucleus.

Timing precision is important to cochlear nucleus neurons because their firing pattern is specific for each sound frequency. That is, their output pattern is akin to a digital code that is unique for each tone.

"In the absence of sound, neurons fire randomly and at a high rate," says Burger. "In the presence of sound, neurons fire in a highly stereotyped manner known as phase-locking -- which is the tendency for a neuron to fire at a particular phase of a periodic stimulus or sound wave."

Previous research by Burger and Stefan Oline -- a former Ph.D. candidate at Lehigh, now a postdoctoral fellow at New York University Medical School -- demonstrated for the first time that synaptic inputs -- the messages being sent between cells -- are distinct across frequencies and that these different impulse patterns are "mapped" onto the cells of the cochlear nucleus. They further established the computational processes by which neurons "tuned" to process low frequency sound actually improve the phase-locking precision of the impulses they receive. However, the mechanisms that allow neurons to respond properly to these frequency-specific incoming messages remained poorly understood.

In new research, published in an article in The Journal of Neuroscience, Burger and Oline -- along with Dr. Go Ashida of the University of Oldenburg in Germany -- have investigated auditory brain cell membrane selectivity and observed that the neurons "tuned" to receive high-frequency sound preferentially select faster input than their low-frequency-processing counterparts -- and that this preference is tolerant of changes to the inputs being received.

"A low frequency cell will tolerate a slow input and still be able to fire -- but a high frequency cell requires a very rapid input and rejects slow input," says Burger. "The neurons essentially demand that the high-frequency input be more precise."

"What I find really striking is that the tuning of these neurons helps them uniquely deal with the constraints of the ear," says Oline. "Neurons responding to low frequency input can average their inputs from hair cells to improve their resolution. But hair cells aren't very good at responding to high frequency tones as they introduce a lot of timing errors. Because of this, and because they occur at such a high rate, averaging these inputs is impossible and would smear information across multiple sound waves. So, instead, the high-frequency-processing cells use an entirely different strategy: they are as picky as possible to avoid averaging at all costs."

Burger and his colleagues built a computer simulation of low frequency and high frequency neurons, based on observations of physiological activity. They then used these computational models to test which combinations of properties are crucial to phase-locking. The model predicted that the optimal arrangement of synaptic and cell membrane properties for phase-locking is specific to stimulus frequency. These computational predictions were then tested physiologically in the neurons.
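
The authors' simulation is far more detailed than anything that fits here, but the basic intuition -- that a fast, leaky membrane responds only to inputs arriving nearly together, while a slower membrane can sum inputs spread out in time -- can be illustrated with a generic leaky integrate-and-fire sketch. The parameters below (time constants, input times, synaptic weight) are invented for illustration and are not taken from the paper:

    import numpy as np

    def lif_spike_count(input_times_s, tau_m_s, w=0.6, threshold=1.0,
                        dt=1e-5, t_max_s=0.02):
        """Minimal leaky integrate-and-fire neuron: each input adds a
        subthreshold jump w to the membrane variable v, which decays with
        time constant tau_m_s; crossing threshold counts as a spike and
        resets v. Returns the number of spikes fired."""
        v, spikes, idx = 0.0, 0, 0
        inputs = sorted(input_times_s)
        for step in range(int(t_max_s / dt)):
            t = step * dt
            v *= np.exp(-dt / tau_m_s)        # passive decay toward rest
            while idx < len(inputs) and inputs[idx] <= t:
                v += w                        # a synaptic input arrives
                idx += 1
            if v >= threshold:
                spikes += 1
                v = 0.0                       # reset after firing
        return spikes

    # Two subthreshold inputs arriving nearly together vs. 1 ms apart.
    coincident = [0.00500, 0.00505]    # 50 microseconds apart
    spread_out = [0.00500, 0.00600]    # 1 millisecond apart

    for label, tau_m_s in [("fast membrane (high-frequency-like)", 0.0002),
                           ("slow membrane (low-frequency-like) ", 0.0030)]:
        print(label,
              "| near-coincident inputs:", lif_spike_count(coincident, tau_m_s), "spike(s)",
              "| spread-out inputs:", lif_spike_count(spread_out, tau_m_s), "spike(s)")

In this toy setting the "fast membrane" cell fires only when its two inputs arrive within tens of microseconds of each other, while the "slow membrane" cell fires in both cases -- mirroring Burger's description of high-frequency cells demanding rapid input and low-frequency cells tolerating slower input.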

The team's model is useful not only for determining how the brain responds to sounds, but also for revealing general features of input-output optimization that apply to any brain cell that processes time-varying input.

Paving the way to more precise hearing

Understanding the mechanisms that allow cells of the cochlear nucleus to compute with temporal precision has implications for understanding the evolution of the auditory system.

"It's really the high frequency-processing cells that have uniquely evolved in mammals," explains Burger.

Understanding these processes may also be important for advancing the technology used to make cochlear implants. A cochlear implant is an electronic medical device that helps provide a sense of sound to someone who is deaf or has severe hearing loss. It replaces the function of the damaged inner ear by sending electrical impulses directly to the auditory nerve. These impulses, in turn, are interpreted by the brain as sound.

Though an established and effective treatment for many, cochlear implants cannot currently reproduce the precision of sound experienced by those with a naturally developed auditory system. The sound processing lacks the clarity of natural hearing, especially across frequencies.

"Ideally, what you want -- whether in your natural auditory system or through a cochlear implant -- is the most precise representation in the brain of the various frequencies," says Burger.

Burger and his colleagues have assembled what is known about the optimal electrical properties and synaptic inputs into a single cohesive model, laying the groundwork needed to investigate some of the big questions in the field of auditory neuroscience. Resolving these questions may someday lead scientists and medical professionals to a better understanding of how to preserve the natural organization of the auditory structures in the brain for those who are born with profound hearing loss.


Story Source:

Materials provided by Lehigh University. Note: Content may be edited for style and length.


Journal Reference:

  1. S. N. Oline, G. Ashida, R. M. Burger. Tonotopic Optimization for Temporal Processing in the Cochlear Nucleus. Journal of Neuroscience, 2016; 36 (32): 8500 DOI: 10.1523/JNEUROSCI.4449-15.2016

