Scientists tuning in to how you tune out noise
- Date: May 8, 2012
- Source: Acoustical Society of America (ASA)
- Summary: Although we have little awareness that we are doing it, we spend most of our lives filtering out many of the sounds that permeate our lives and acutely focusing on others -- a phenomenon known as auditory selective attention. Hearing scientists are attempting to tease apart the process.
Although we have little awareness that we are doing it, we spend most of our lives filtering out many of the sounds that permeate our lives and acutely focusing on others -- a phenomenon known as auditory selective attention. In research that could someday lead to improved devices that let users control things like wheelchairs through thought alone, hearing scientists at the University of Washington (UW) are attempting to tease apart the process.
The work will be presented at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), Acoustical Society of China, Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics.
Auditory selective attention is extremely important in everyday life, notes UW postdoctoral researcher Ross Maddox. "In situations as mundane as ordering your morning cup of coffee, you must focus on the barista while tuning out the loud hiss of the espresso machine and the annoying cell phone conversation happening in line right behind you," says Maddox. "However, the mechanisms behind selective attention are still not well understood." In addition, some individuals suffer from Central Auditory Processing Disorder (CAPD), "which means they have normal hearing when tested by an audiologist," he says, "but they are completely lost in loud settings like restaurants and airports."
To determine how auditory selective attention works -- and perhaps how it fails in people with CAPD -- Maddox, along with Adrian K.C. Lee, an assistant professor of speech and hearing sciences, and colleague Willy Cheung, created laboratory situations that promoted the breakdown of the process. The researchers had 10 subjects try to focus their attention on just one target sound -- a continuously repeating utterance of a single letter -- among a total of 4, 6, 8, or 12 such sounds. The subjects had to determine when an "oddball" item (the letter "R," chosen because it doesn't rhyme with any other letter) was inserted into the target sound stream.
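The trial structure Maddox and colleagues describe is straightforward to sketch. The short Python example below is a toy illustration of that design, not the researchers' stimulus code; the letter pool, oddball rate, number of repetitions, and scoring tolerance are all assumptions made for the sketch.

```python
# Toy sketch of the oddball-detection trial design described above:
# one target stream of a repeating letter among several competing
# streams, with the oddball "R" occasionally swapped into the target.
import random
import string

# Any letter except "R" can serve as a stream's repeating token
# (the actual letter selection in the study is not detailed here).
LETTER_POOL = [c for c in string.ascii_uppercase if c != "R"]

def make_trial(n_streams=6, n_repeats=20, oddball_prob=0.15, seed=None):
    """Build one trial with n_streams concurrent repeating-letter streams.

    Returns the streams, the index of the target stream, and the
    repetition slots where the oddball "R" replaces the target letter.
    """
    rng = random.Random(seed)
    letters = rng.sample(LETTER_POOL, n_streams)
    target = rng.randrange(n_streams)
    streams = [[letter] * n_repeats for letter in letters]
    oddball_slots = [i for i in range(1, n_repeats) if rng.random() < oddball_prob]
    for slot in oddball_slots:
        streams[target][slot] = "R"  # oddball appears only in the target stream
    return streams, target, oddball_slots

def hit_rate(responses, oddball_slots, tolerance=1):
    """Fraction of oddballs with a response within `tolerance` slots."""
    if not oddball_slots:
        return float("nan")
    hits = sum(any(abs(r - o) <= tolerance for r in responses)
               for o in oddball_slots)
    return hits / len(oddball_slots)

streams, target, oddballs = make_trial(n_streams=12, seed=1)
print(f"attend the '{streams[target][0]}' stream; oddballs at slots {oddballs}")
print(f"hit rate for perfect responses: {hit_rate(oddballs, oddballs):.0%}")
```

Varying `n_streams` over 4, 6, 8, and 12, as in the study, is then just a matter of running trials at each set size and comparing hit rates.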
"Most studies systematically degrade sounds and measure the effects on listeners' performance," Maddox explains. "Here, we made the target sound as easy to distinguish from all the other sounds present as possible, and tested the upper limit on the number of sounds a listener could tune out, given all these acoustical advantages."
Unsurprisingly, tuning in to just one stream becomes harder as the number of streams increases. Even so, study subjects did better than expected, successfully identifying the target 70 percent of the time in the most difficult conditions. Repeating the letters faster also made the task harder, although faster repetition helps listeners learn more quickly what their target letter sounds like, "so there is a tradeoff involved when deciding on repetition speed," Maddox says.
The work, Maddox and colleagues say, is a first step toward developing an auditory brain-computer interface (BCI) -- a device that reads brain activity to allow users to control computers or machines such as wheelchairs. "We hope to create a system that presents a user with an auditory 'menu' of sounds -- similar to the letter streams here -- and allows the listener to make a choice by reading their brainwaves to determine which sound they are focusing on. The more sound streams a user is able to tune out, the more menu options we can present at a single time."
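The article does not specify how such an interface would read out attention, so the following is a purely illustrative sketch of one commonly cited idea from auditory attention decoding: correlate the recorded neural signal with the amplitude envelope of each candidate sound stream and pick the best match. All of the data below are simulated, and the correlation decoder is an assumption, not the UW group's method.

```python
# Toy auditory-BCI "menu" decoder: choose the sound stream whose
# envelope best correlates with a (simulated) neural recording.
import numpy as np

rng = np.random.default_rng(0)
n_streams, n_samples = 4, 2000  # four menu options (assumed sizes)

# White-noise stand-ins for each stream's amplitude envelope.
envelopes = rng.standard_normal((n_streams, n_samples))

attended = 2  # ground truth: the stream the simulated listener attends
# Fake "EEG": the attended envelope buried in heavy measurement noise.
neural = envelopes[attended] + 3.0 * rng.standard_normal(n_samples)

# Decode by correlating the neural trace with every candidate envelope;
# the best-correlated stream is taken as the listener's menu choice.
scores = np.array([np.corrcoef(neural, env)[0, 1] for env in envelopes])
choice = int(scores.argmax())
print(f"decoded choice: {choice} (true: {attended}); r = {scores.round(3)}")
```

In this framing, Maddox's point about tuning out more streams translates directly into more decodable menu options, at the cost of weaker per-stream correlations.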
Story Source:
Materials provided by Acoustical Society of America (ASA). Note: Content may be edited for style and length.