Gesture-driven Computers Will Take Computer Gaming To New Level
- Date:
- March 7, 2008
- Source:
- Fraunhofer-Gesellschaft
- Summary:
- A man stands in front of a large screen gesticulating in a seemingly hectic manner. As if by magic, images suddenly appear on the display. Their movements follow the actor's gestures, rotate at the slightest turn of a finger, and become larger or smaller as desired. This scene will look familiar to anyone who has watched the science fiction film 'Minority Report'.
A man stands in front of a large screen gesticulating in a seemingly hectic manner. As if by magic, images suddenly appear on the display. Their movements follow the actor’s gestures, rotate at the slightest turn of a finger, and become larger or smaller as desired. This scene will look familiar to anyone who has watched the science fiction film ‘Minority Report’. Paul Chojecki, scientist and project manager at the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut, HHI in Berlin, explains the iPoint Presenter in a manner reminiscent of the John Anderton character played by Tom Cruise.
At the heart of the system is a set of cameras which enable the computer to observe the person standing in front of the projection screen. The moment this person moves their hands, the computer reacts without being touched at all. “It begins by determining the position of the user’s index finger, then follows its movements,” Chojecki explains. The user can point to buttons or use gestures to move virtual objects. Through ‘multipointing interaction’, i.e. commands using multiple fingers, they can rotate, enlarge or minimize objects. This requires neither special gloves nor any particular markings. Anyone can intuitively operate the device with their bare hands without any preparation whatsoever.
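The article does not describe the mathematics behind the multipointing commands, but the idea can be illustrated with a minimal sketch: assuming the camera tracker already delivers the coordinates of two fingertips in successive frames, the change in their spacing and orientation maps directly to zoom and rotation of a virtual object. The function and variable names below are hypothetical, not part of the iPoint Presenter.

```python
import math

def multipoint_transform(prev_pts, curr_pts):
    """Derive scale and rotation from two tracked fingertips.

    prev_pts, curr_pts: [(x, y), (x, y)] fingertip positions (e.g. thumb
    and index finger) in two successive camera frames.
    Returns (scale_factor, rotation_in_radians).
    """
    (ax0, ay0), (bx0, by0) = prev_pts
    (ax1, ay1), (bx1, by1) = curr_pts

    # Vector between the two fingertips in each frame.
    v0 = (bx0 - ax0, by0 - ay0)
    v1 = (bx1 - ax1, by1 - ay1)

    # Enlarging/minimizing: ratio of fingertip distances.
    d0 = math.hypot(*v0)
    d1 = math.hypot(*v1)
    scale = d1 / d0 if d0 else 1.0

    # Rotating: change in the angle of the fingertip axis.
    rotation = math.atan2(v1[1], v1[0]) - math.atan2(v0[1], v0[0])
    return scale, rotation

# Example: the fingers move apart and twist slightly -> zoom in and rotate.
scale, rot = multipoint_transform([(100, 100), (200, 100)],
                                  [(90, 110), (220, 80)])
print(f"scale x{scale:.2f}, rotate {math.degrees(rot):.1f} degrees")
```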
The iPoint Presenter will be demonstrated for the first time at CeBIT 2008, using the example of an interactive game and a photo viewer. But these are just two of its many possible applications. It could replace touch screens at info terminals, for example, or help to edit and organize photos. “What is special about it is that the human-computer communication is entirely contact-free. The system is therefore ideal for scenarios in which contact between the user and the computer is not allowed or not possible, for example in an operating theater,” says Chojecki.
The system would also be ideal for presentations in large auditoriums. The speaker would no longer need a mouse or a laser pointer, and could click through the presentation and highlight important information simply by pointing. A particularly useful feature for situations like these is that the system can be extended to as many as nine cameras. This immensely increases the user’s operating range and enables them to interact with very large screens, for example at trade fairs or advertising events.
Identifying gestures
Gestures enable people of different nationalities to communicate without the need for spoken words. How useful would it be if this type of communication were also possible between humans and technical devices? This form of giving commands would make many situations safer and more pleasant than they are today. Drivers, for example, could operate their car radios and navigation systems more easily, and TV viewers at home in their armchairs would no longer need a remote control to flick through the channels.
A whole new generation of video games could be created if the technology involved were able to identify and interpret human gestures. Even machines, household appliances or video conference systems could be controlled by mere hand signals. The system could also be of help to physically disabled people, enabling them to interact with a computer without the need for a mouse and keyboard.
To translate these scenarios into reality as soon as possible, researchers at the Fraunhofer Institute for Digital Media Technology IDMT in Ilmenau are now teaching computers to understand human gestures, and are developing a method of automatically recognizing different hand signals. “Our work is based on optical pattern recognition,” explains IDMT project manager Valiantsin Hardzeyeu.
“This technique mimics the way in which humans see things. To this end, we modeled the processes taking place in the human visual apparatus – from the point where photons hit the retina to the processing stages in the visual cortex – in a computer simulation.” A first prototype, which comprises an ‘intelligent’ camera connected to a computer with this new type of pattern recognition software, will be presented at the Fraunhofer stand at the CeBIT trade fair in Hanover, Germany, March 4-9, 2008. The camera will record visitors’ gestures, and the software behind it will analyze them and convert the hand signals into machine commands.
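The IDMT researchers do not detail their recognition method beyond saying it is modeled on the human visual pathway, but a common textbook stand-in for the orientation-selective cells of the visual cortex is a bank of Gabor filters. The sketch below, with entirely hypothetical names, shows how such a filter bank could turn a grayscale hand image into a small feature vector that a classifier might then map to a machine command; it is an illustration of the general idea, not the institute's actual software.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, wavelength=8.0, sigma=3.0, size=15):
    """A Gabor filter: a simple model of orientation-selective cells
    in the primary visual cortex."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

def gesture_features(image, orientations=8):
    """Filter a grayscale hand image with a bank of oriented Gabor
    kernels and pool the responses into a feature vector that a
    classifier could map to a command."""
    features = []
    for i in range(orientations):
        kernel = gabor_kernel(theta=i * np.pi / orientations)
        response = convolve2d(image, kernel, mode="same", boundary="symm")
        features.append(np.abs(response).mean())  # crude average pooling
    return np.array(features)

# Example with a synthetic 64x64 silhouette: a roughly vertical raised finger.
frame = np.zeros((64, 64))
frame[20:50, 28:36] = 1.0
print(gesture_features(frame).round(3))
```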
Story Source:
Materials provided by Fraunhofer-Gesellschaft. Note: Content may be edited for style and length.