Get Ready For Next Generation Surround Sound
- Date:
- April 20, 2005
- Source:
- Engineering And Physical Sciences Research Council
- Summary:
- Ultra-realistic surround sound is a step closer for everyone thanks to a new method that will cheaply and efficiently compute the way individuals hear things.
Currently, creating accurate ‘virtual sound fields’ through headphones is almost exclusively the domain of high-budget military technologies and involves lengthy and awkward acoustic measurements. The new approach eliminates the acoustic measurement step altogether and promises to produce the required results in mere minutes.
The breakthrough has been made by researchers at the University of York’s Department of Electronics, funded by the EPSRC. The researchers are working in collaboration with colleagues at the University of Sydney, Australia.
The team is now working to commercialise the idea. Tony Tew, lead researcher at York, explains: “We envisage booths in the high street, like those used for passport photos, where customers can have the shape of their head and ears measured easily. The shape information will be used to quickly compute an individual’s spatial filters.”
Spatial filters encapsulate how an individual’s features alter sounds before they reach the eardrum. The changes vary with direction and so supply the brain with the information it needs to work out where a sound is coming from. Tew’s booth would record the spatial filter measurements on to a smart card, readable by next-generation sound systems. The result: sounds heard through headphones should be indistinguishable from the same sounds heard live.
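As a rough illustration (not the York-Sydney method itself, which computes the filters from head and ear shape), the sketch below shows how a pair of spatial filters, in their time-domain form as head-related impulse responses, might be applied to make a mono sound appear to come from a particular direction over headphones. The function name and the placeholder filter data are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono_signal, hrir_left, hrir_right):
    """Convolve a mono source with left- and right-ear spatial filters so
    that, over headphones, the sound appears to arrive from the direction
    the filters were measured (or computed) for."""
    left = fftconvolve(mono_signal, hrir_left)
    right = fftconvolve(mono_signal, hrir_right)
    return np.stack([left, right], axis=-1)  # stereo output: (samples, 2)

# Illustrative usage with placeholder data; real filters would come from
# acoustic measurement or, as in the approach described here, from
# computation based on the listener's head and ear shape.
fs = 44100
mono = np.random.randn(fs)                 # one second of noise as a test source
hrir_l = np.zeros(256); hrir_l[0] = 1.0    # dummy impulse response, near ear
hrir_r = np.zeros(256); hrir_r[20] = 0.7   # delayed, attenuated copy at the far ear
stereo = render_binaural(mono, hrir_l, hrir_r)
```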
Rapidly growing portable technologies, such as mobile communications, wearable computers and personal entertainment systems, largely depend on earphones of one sort or another for sound reproduction. Earphones are ideal for creating a virtual sound field using the York-Sydney team’s method. Realism is only one benefit; the ability to place virtual sounds anywhere around the head has applications in computer games and in producing earcons (the acoustic equivalent of icons on a visual display). Next-generation hearing aids programmed with the wearer’s spatial filters will be able to exploit the directional information created by the ear flaps and so help to target one sound while rejecting others.
Tony Tew says, “Our main goal is for personalised spatial filters to figure in a wide range of consumer technologies, making their benefits available to everyone.”
Story Source:
Materials provided by Engineering And Physical Sciences Research Council. Note: Content may be edited for style and length.