Next-generation Sound Systems To Minimize Background Noise
- Date: July 28, 2009
- Source: ICT Results
- Summary: The whole listening experience in cars, cinemas, theatres, and even during video conferences is likely to improve radically thanks to a new set of tools for application development.
The whole listening experience in cars, cinemas, theatres, and even during videoconferences is likely to improve radically thanks to a new set of tools for application development being assembled by European researchers.
Automobiles increasingly use embedded systems to maximise efficiency in all sorts of ways, including in their sound systems.
An embedded system is a special-purpose computer system designed to perform a single function or a few dedicated functions. An MP3 player, for example, is driven by an embedded system, unlike a general-purpose PC, which can perform many different tasks depending on the software installed.
An embedded system typically consists of hardware, at the heart of which is usually a circuit board with microprocessors mounted on it, and a software application or applications.
As the architecture of IT systems, including embedded systems, becomes ever more complex, designing applications to run across platforms – which may incorporate many different types of processor with different performance characteristics – is becoming increasingly difficult.
Untying the developers' hands
From the developers’ viewpoint, writing code to make new applications work with a variety of different processors is a difficult, expensive and time-consuming process.
The EU-funded HARTES project was set up to automate as much of this process as possible, making it quicker and more cost-effective to bring new applications to market: developers are freed to concentrate on high-level creative work while a new tool chain takes care of the drudgery.
HARTES’ office manager, Roberto Marega, says the core of the project is developing the tool chain, which comprises a series of specialist tools for application development. “These take the developer’s high-level software algorithm and code it to run on the different processors and parts of processors,” he explains.
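To illustrate the general idea behind such a tool chain, the sketch below shows an ordinary high-level C function with an annotation hinting which processing element should run it. The pragma name and target label here are purely illustrative assumptions, not the actual HARTES syntax; the point is only that the developer writes plain code and the tools handle the mapping to heterogeneous hardware.

```c
/* Minimal sketch of the idea: write a plain C audio kernel, let the tool
 * chain decide how to map it onto the available processors. The pragma
 * name below is hypothetical, not the real HARTES annotation. */

#define FRAME 256

/* A high-level description of an audio kernel: a simple gain stage. */
void apply_gain(float *samples, int n, float gain)
{
    for (int i = 0; i < n; i++)
        samples[i] *= gain;
}

void process_frame(float *frame)
{
    /* Hypothetical mapping hint: offload this call to a DSP rather than
     * running it on the general-purpose processor. */
    #pragma map_to_hw(dsp)
    apply_gain(frame, FRAME, 0.8f);
}
```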
Although the implications are important across a number of sectors and industries, the researchers chose a car as their laboratory, using it as a testbed for the tool chain and for their proof-of-concept demonstrations.
The car lab took the form of a specially kitted-out Mercedes SUV, and the applications validated in it were all in the field of infotainment, specifically those with an audio component.
Enhancing in-car audio quality
“We chose to look at applications which focused on audio improvement inside the vehicle,” says Marega. “For example, if you listen to music in a car you also listen to other noise: the wind, the outside environment, tyres on the road, all sorts of extraneous noises which degrade your listening.”
To accomplish this, dozens of microphones were positioned inside the car, along with dozens of speakers to distribute sound evenly.
The high-level algorithms from application developers then processed the audio to greatly enhance sound quality while minimising the impact of the noise.
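One common building block for this kind of noise reduction is an adaptive filter: a reference microphone picks up mostly noise, and the filter learns to predict that noise in the listening position's signal and subtract it. The sketch below is a generic textbook least-mean-squares (LMS) formulation, offered only as an illustration; it is not claimed to be the project's actual algorithm, and the tap count and step size are placeholder values.

```c
/* Illustrative LMS adaptive noise canceller (textbook form, not the
 * HARTES algorithm). d = microphone at the listener, ref = noise
 * reference microphone; returns the cleaned sample. */

#define TAPS 32

typedef struct {
    float w[TAPS];   /* adaptive filter weights          */
    float x[TAPS];   /* delay line of reference samples  */
    float mu;        /* adaptation step size             */
} lms_t;

float lms_step(lms_t *f, float d, float ref)
{
    /* shift the new reference sample into the delay line */
    for (int i = TAPS - 1; i > 0; i--)
        f->x[i] = f->x[i - 1];
    f->x[0] = ref;

    /* estimate the noise component present in the listener signal */
    float y = 0.0f;
    for (int i = 0; i < TAPS; i++)
        y += f->w[i] * f->x[i];

    /* the error is the signal with the estimated noise removed */
    float e = d - y;

    /* adapt the weights to reduce the residual noise next time */
    for (int i = 0; i < TAPS; i++)
        f->w[i] += f->mu * e * f->x[i];

    return e;
}
```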
Other clever bits of software improve the audio quality by taking into account the influence of different features, such as the texture of the seats, the shape of the cockpit and the presence of passengers.
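A standard way to account for such cabin features is to pass the audio through a correction filter whose coefficients are derived from acoustic measurements of the interior. The minimal FIR sketch below assumes such measured coefficients exist; the filter length and coefficient source are assumptions for illustration, not details taken from the project.

```c
/* Sketch of FIR equalisation: convolve the audio with a correction
 * filter measured for the cabin (seats, cockpit shape, occupancy).
 * The coefficients would come from those measurements. */

#define EQ_TAPS 64

void fir_equalise(const float *in, float *out, int n,
                  const float coeff[EQ_TAPS])
{
    for (int i = 0; i < n; i++) {
        float acc = 0.0f;
        for (int k = 0; k < EQ_TAPS && k <= i; k++)
            acc += coeff[k] * in[i - k];
        out[i] = acc;
    }
}
```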
Ensuring that people in the back of the car can hear as well as people in the front – in other words equal distribution of the enhanced sound quality throughout the car – was another achievement. This was not just for listening to music but also for optimising the sound of conversations between people in different positions in the car and of incoming phone calls.
Template for future car audio
According to Marega, the car lab was chosen because infotainment applications are typical embedded applications requiring complex, usually heterogeneous, hardware architectures.
“The hardware in the car lab uses a lot of different components, including a variety of general purpose processors and reconfigurable processors. The applications are processed through the HARTES tool chain, and we are able to validate and demonstrate inside the car the algorithms and synthesisers for this application platform,” he explains.
“This is a real-life case, not just theoretical work on small pieces of code. The developers were able to concentrate on what the applications were supposed to be, using the car’s embedded systems… while the tool chain made sure everything else was handled automatically.”
He says cars of the future will almost certainly have this type of sophisticated sound system as standard thanks, at least partially, to the work done by the project. There will also be closely related applications in cinema, theatre and concert halls with an array of loudspeakers ensuring everybody hears the same quality and level of sound.
“Videoconferencing is another way to use this, making it seem like you are sitting around a table and voices are coming from different directions as in real life,” according to Marega.
He emphasises that the tool chain has many other potential uses besides audio, and that the tools developed will be freely available to download as open source from the project website at the end of the project, for anybody who wants to use them.
The industrial partners of the project are also working to commercialise the tool chain or adapt it to work on their own platforms.
HARTES is funded under the ICT strand of the EU’s Sixth Framework Programme for research.
Story Source:
Materials provided by ICT Results. Note: Content may be edited for style and length.