How Little Gray Cells Process Sound: They're Really A Series Of Computers
- Date: November 25, 1997
- Source: University Of Washington
Hearing is a far more complicated process than once imagined. But neuroscientists are beginning to unravel the ways individual brain cells continually perform complex computational tasks to help creatures as diverse as humans, gerbils, bats and birds distinguish what a sound is and where it is coming from.
Individual neurons, or brain cells, do not just relay information from one point to another, according to a group of researchers from across the United States who discussed new insights into the process of hearing at a symposium held last month at the Society for Neuroscience's annual meeting in New Orleans. Instead, they said, each neuron could be compared to a tiny computer that compiles information from many sources and makes a decision based on that information.
"In hearing, the brain does not function as one big computer, but rather as a series of small computers working in series and in parallel. Now, for the first time, we are getting a good idea of how individual neurons work as computers," said Ellen Covey, an assistant professor of psychology at the University of Washington and organizer of the symposium.
Other members of the panel were Dan Sanes, associate professor of neural science and biology at New York University; George Pollak, professor of zoology at the University of Texas; and William Spain, associate professor of neurology at the University of Washington.
In New Orleans, the researchers reported on new techniques that for the first time permit them to record and monitor low-level electrical activity in single neurons of awake animals. They also discussed a number of findings showing how neurons analyze and integrate information from different sources.
Understanding the mechanisms of sound recognition in the brain and in single neurons is basic neuroscience, but Covey said it may permit researchers to design better processors for hearing aids for the hearing-impaired and the totally deaf. The research also has implications for improving sonar devices and creating speech-recognition systems for computers.
Here are highlights of what each panelist discussed:
While the bat's awake: Covey works with the widely distributed North American big brown bat (Eptesicus fuscus) and reported on the first successful use of a technique utilizing tiny glass electrodes one micron in diameter to record very low-level, sound-evoked electrical activity in single neurons in awake bats.
The auditory system in mammals and birds initially is divided into parallel pathways so different types of information can be extracted from a complex signal, Covey explained. To fully analyze a signal or set of simultaneous signals, the results of the calculations in the different pathways must be integrated. An important center for this activity is a portion of the midbrain called the inferior colliculus, where many auditory pathways converge.
The outputs from some pathways excite the cells on which they terminate, making those cells more likely to respond to a signal. Other inputs inhibit cells, making them less likely to respond. Covey said it is the computations arising from the interaction between these excitatory and inhibitory inputs that ultimately tell an animal not only where a sound is coming from but also what the sound is.
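This push-and-pull computation can be sketched in a few lines of code. The toy example below is not a model from the symposium; the weights and the threshold are arbitrary illustrative values. It simply treats a neuron as a unit that fires only when summed excitation outweighs summed inhibition:

```python
# A minimal sketch: a neuron as a tiny "computer" that sums excitatory
# and inhibitory inputs and fires only when the net drive crosses a
# threshold. All input strengths and the threshold are illustrative.

def neuron_decision(excitatory_inputs, inhibitory_inputs, threshold=1.0):
    """Return True (spike) if net input exceeds the firing threshold."""
    net_drive = sum(excitatory_inputs) - sum(inhibitory_inputs)
    return net_drive > threshold

# Strong excitation with weak inhibition -> the cell responds.
print(neuron_decision([0.8, 0.6], [0.2]))   # True
# The same excitation with stronger inhibition -> the response is vetoed.
print(neuron_decision([0.8, 0.6], [0.9]))   # False
```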
Big brown bats echolocate by emitting calls and listening to the echoes reflected from objects in their environment. Echolocation calls, while higher in frequency, possess many of the characteristics of human speech. The bats' auditory pathways are similar to those of humans. Because of these similarities, it is possible that some of the same mechanisms used by bats to process echolocation sounds also are used by humans to process speech signals, she added.
Tracking moving sound: Sanes' research lab developed a method for understanding how neurons respond to a sound moving into an animal's field of hearing by measuring the excitatory and inhibitory responses of individual brain cells of gerbils (Meriones unguiculatus).
Sanes unexpectedly found that when a sound is moved, neurons can become unusually sensitive to its new location through a process called release from inhibition. Inhibition initially decreases the responsiveness of the cell but subsequently raises its level of excitability. Release from inhibition can last for several seconds, which by auditory standards is a long time.
Sanes said that the process not only occurs under conditions of natural sound stimulation, but also can be created artificially by applying the inhibitory neurotransmitters glycine and GABA.
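One hedged way to picture release from inhibition is as a time course of excitability: suppressed while inhibition is active, then transiently elevated above baseline for a few seconds before decaying back. The gain values and time constants below are made up purely for illustration and are not measurements from the gerbil experiments:

```python
# A toy timeline of "release from inhibition": the cell is suppressed
# while inhibition is active, then becomes MORE excitable than baseline
# (1.0) for a few seconds afterward. All numbers are illustrative.

import math

def responsiveness(t, inhibition_end=1.0, rebound_gain=1.5, tau=2.0):
    """Relative excitability of the cell at time t (seconds)."""
    if t < inhibition_end:
        return 0.3  # suppressed while inhibition is active
    # After inhibition ends, excitability rebounds above baseline
    # and decays back over a few seconds.
    return 1.0 + (rebound_gain - 1.0) * math.exp(-(t - inhibition_end) / tau)

for t in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"t={t:>4}s  excitability={responsiveness(t):.2f}")
```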
The suppression of sound: Pollak's research team has been studying how animals locate a sound source by initially processing information in the brain stem, a lower region of the brain, and then sending the processed information to a series of higher regions.
He found a response similar to the so-called precedence effect, which enables humans sitting in an auditorium at a concert to hear the primary sound originating from an instrument or singer and ignore the echoes bouncing off the walls and ceiling. Without this effect, the primary sound and the echoes would be perceived as originating from different locations.
Working with Jamaican mustached bats (Pteronotus parnellii), Pollak discovered that neurons in one brain stem nucleus create a precedence-like effect: a long-lasting inhibition that suppresses sounds occurring during the period of the inhibition. Thus, he said, the initial sound inhibits this nucleus from sending any information to higher regions of the brain.
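The suppressive logic Pollak describes can be sketched as a simple gate: the first sound to arrive opens an inhibition window, and later sounds falling inside that window are blocked. The window length below is a placeholder, not a value measured in the study:

```python
# A hedged sketch of precedence-like suppression: the first sound
# triggers a long-lasting inhibition, and any sound arriving inside
# that window is suppressed. The 5 ms window is a placeholder.

def filter_echoes(arrival_times_ms, inhibition_window_ms=5.0):
    """Keep only sounds falling outside the inhibition of a prior sound."""
    passed = []
    inhibited_until = float("-inf")
    for t in sorted(arrival_times_ms):
        if t >= inhibited_until:
            passed.append(t)                            # sound gets through...
            inhibited_until = t + inhibition_window_ms  # ...and inhibits followers
    return passed

# Direct sound at 0 ms, echoes at 1.5 and 3 ms, a new sound at 20 ms.
print(filter_echoes([0.0, 1.5, 3.0, 20.0]))  # -> [0.0, 20.0]
```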
Telling the time of sound: Spain's research involves using chicken embryo cells to study how cells in the brain stem can calculate the spatial location of a sound source based on signals from an animal's two ears that are received microseconds apart.
In order to detect the very small difference in a sound's arrival time at the two ears, neurons must be able to sense timing differences as small as 1/2000th of a second, Spain said. Sound detected independently by each ear is converted into an electrical signal, and the timing of the two signals is checked for coincidence. Spain and his colleagues have begun to investigate how coincidence detection is accomplished inside individual cells.
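Coincidence detection is often pictured as a bank of detectors, each testing a different candidate delay between the two ears; the detector whose delay best cancels the interaural difference registers the most coincidences. The sketch below assumes such a delay-line scheme (the article does not specify Spain's exact mechanism), with hypothetical spike times in microseconds:

```python
# A minimal delay-line sketch of coincidence detection (an assumed
# scheme, not necessarily Spain's mechanism). Each candidate delay is
# applied to one ear's spike train; the delay producing the most
# coincidences estimates the interaural time difference (ITD).

def count_coincidences(left_us, right_us, delay_us, window_us=20):
    """Count left/right spike pairs that coincide after delaying the left ear."""
    return sum(1 for l in left_us for r in right_us
               if abs((l + delay_us) - r) <= window_us)

def estimate_itd(left_us, right_us, candidate_delays_us):
    """Return the candidate delay that yields the most coincidences."""
    return max(candidate_delays_us,
               key=lambda d: count_coincidences(left_us, right_us, d))

# Hypothetical spike trains: sound reaches the right ear 300 us late.
left = [1000, 2000, 3000]
right = [1300, 2300, 3300]
print(estimate_itd(left, right, range(-500, 501, 100)))  # -> 300
```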
Story Source:
Materials provided by University Of Washington. Note: Content may be edited for style and length.