Body-mounted cameras turn motion capture inside out
- Date: August 9, 2011
- Source: Carnegie Mellon University
- Summary: Traditional motion capture techniques use cameras to meticulously record the movements of actors inside studios, enabling those movements to be translated into digital models. But by turning the cameras around -- mounting almost two dozen outward-facing cameras on the actors themselves -- scientists have shown that motion capture can occur almost anywhere -- in natural environments, over large areas and outdoors.
Traditional motion capture techniques use cameras to meticulously record the movements of actors inside studios, enabling those movements to be translated into digital models. But by turning the cameras around -- mounting almost two dozen outward-facing cameras on the actors themselves -- scientists at Disney Research, Pittsburgh (DRP), and Carnegie Mellon University (CMU) have shown that motion capture can occur almost anywhere -- in natural environments, over large areas and outdoors.
Motion capture makes possible scenes such as those in "Pirates of the Caribbean: Dead Man's Chest," where the movements of actor Bill Nighy were translated into a digitally created Davy Jones with octopus-like tentacles forming his beard. But body-mounted cameras enable the capture of motions, such as running outside or swinging on monkey bars, that would otherwise be difficult, if not impossible, to record, said Takaaki Shiratori, a post-doctoral associate at DRP.
"This could be the future of motion capture," said Shiratori, who will make a presentation about the new technique today (Aug. 8) at SIGGRAPH 2011, the International Conference on Computer Graphics and Interactive Techniques in Vancouver. As video cameras become ever smaller and cheaper, "I think anyone will be able to do motion capture in the not-so-distant future," he said.
Other researchers on the project include Jessica Hodgins, DRP director and a CMU professor of robotics and computer science; Hyun Soo Park, a Ph.D. student in mechanical engineering at CMU; Leonid Sigal, DRP researcher; and Yaser Sheikh, assistant research professor in CMU's Robotics Institute.
The wearable camera system makes it possible to reconstruct both the relative and the global motion of an actor using a process called structure from motion (SfM). Takeo Kanade, a CMU professor of computer science and robotics and a pioneer in computer vision, developed SfM 20 years ago as a means of determining the three-dimensional structure of an object by analyzing the images from a camera as it moves around the object, or as the object moves past the camera.
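To make the SfM idea concrete, here is a minimal two-view sketch in Python using OpenCV: features are matched between two frames, the essential matrix yields the relative camera pose, and inlier matches are triangulated into sparse 3D points. The frame filenames and the intrinsic matrix K are placeholders, and this is the generic textbook pipeline, not the researchers' implementation.

```python
# Minimal two-view structure-from-motion sketch (illustrative only;
# not the Disney/CMU pipeline). Assumes OpenCV and NumPy are installed
# and that the camera intrinsics K are known from calibration.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],   # placeholder intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# Detect and match features between the two views.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix and recover the relative camera pose.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate the correspondences into sparse 3D scene points.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T  # dehomogenize

print("Relative rotation:\n", R)
print("Recovered", len(pts3d), "sparse 3D points")
```

In the body-mounted setting, the same machinery runs for each camera as it sweeps through the scene, recovering both the camera's trajectory and a sparse map of the surroundings.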
In this application, SfM is used not primarily to analyze objects in a person's surroundings, but to estimate the pose of the cameras mounted on the person. The researchers used Velcro to mount 20 lightweight cameras on the limbs and trunk of each subject. Each camera was calibrated with respect to a reference structure, and each person then performed a range-of-motion exercise that allowed the system to automatically build a digital skeleton and estimate the positions of the cameras with respect to that skeleton.
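One way to picture this calibration step: once a camera's world pose (from SfM) and the pose of the limb segment it is strapped to are both known at some frame of the range-of-motion exercise, the fixed mounting offset between them follows by composing rigid transforms. The sketch below uses made-up 4x4 poses and is an assumed formulation for illustration, not the paper's exact procedure.

```python
# Illustrative camera-to-limb calibration (assumed approach, not the
# paper's exact method): recover the fixed mounting offset of a camera
# relative to the limb it is attached to, given both poses in world
# coordinates at one calibration frame.
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous rigid transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical world poses at one frame of the range-of-motion exercise:
T_world_cam = make_pose(np.eye(3), [0.3, 1.2, 0.0])     # camera pose from SfM
T_world_limb = make_pose(np.eye(3), [0.25, 1.15, 0.0])  # limb pose from the skeleton fit

# The mounting offset is constant while the camera stays strapped on:
#   T_world_cam = T_world_limb @ T_limb_cam
T_limb_cam = np.linalg.inv(T_world_limb) @ T_world_cam

# Later, any estimated camera pose can be mapped back to a limb pose:
T_limb_from_cam = T_world_cam @ np.linalg.inv(T_limb_cam)
print(np.allclose(T_limb_from_cam, T_world_limb))  # True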
SfM is used to estimate the rough position and orientation of the limbs as the actor moves through an environment, and to collect sparse 3D information about that environment, which provides context for the captured motion. The rough limb estimates serve as an initial guess for a refinement step that optimizes the configuration of the body and its location in the environment, producing the final motion capture result.
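A greatly simplified sketch of such a refinement step: starting from a rough estimate, a nonlinear least-squares solver minimizes reprojection error against sparse 3D scene points. The real system jointly optimizes the whole articulated body across all cameras; here, for illustration only, a single camera's 6-DoF pose is refined with SciPy on synthetic data.

```python
# Sketch of the refinement idea (heavily simplified; the actual system
# jointly optimizes the full articulated body). Here we refine one
# camera's 6-DoF pose against known sparse 3D scene points, starting
# from a rough SfM-style estimate.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[700.0, 0.0, 320.0],  # placeholder intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def reprojection_residuals(params, points3d, points2d):
    """Residuals between observed 2D features and projected 3D points.

    params = [rx, ry, rz, tx, ty, tz] (rotation vector + translation).
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    cam = (R @ points3d.T).T + t        # world -> camera coordinates
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]   # perspective divide
    return (proj - points2d).ravel()

# Synthetic data: sparse scene points and their "observed" projections.
rng = np.random.default_rng(0)
points3d = rng.uniform(-1, 1, (30, 3)) + [0, 0, 5]
true_params = np.array([0.05, -0.02, 0.01, 0.1, -0.05, 0.2])
# Projecting against zero observations yields the pixel coordinates themselves.
points2d = reprojection_residuals(true_params, points3d,
                                  np.zeros((30, 2))).reshape(-1, 2)

rough_guess = true_params + 0.05  # stand-in for the rough SfM estimate
result = least_squares(reprojection_residuals, rough_guess,
                       args=(points3d, points2d))
print("Refined pose parameters:", result.x)
```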
The quality of motion capture from body-mounted cameras does not yet match the fidelity of traditional motion capture, Shiratori said, but it should improve as small video cameras continue to gain resolution.
The technique requires a significant amount of computational power; a minute of motion capture can currently take an entire day to process. Future work will include efforts to find computational shortcuts, such as performing many of the steps simultaneously through parallel processing.
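As an illustration of the kind of shortcut meant here: per-frame work such as feature extraction and pose estimation is largely independent across frames, so it can be distributed over CPU cores. In the sketch below, `process_frame` is a hypothetical stand-in for the real per-frame computation, not a function from the researchers' system.

```python
# Illustrative parallelization sketch: distribute independent per-frame
# processing (e.g., feature extraction and camera pose estimation for
# all 20 cameras) across CPU cores. process_frame is a hypothetical
# stand-in for the actual per-frame work.
from multiprocessing import Pool

def process_frame(frame_index):
    # Placeholder for expensive per-frame computation.
    return frame_index, sum(i * i for i in range(10_000))

if __name__ == "__main__":
    frame_indices = range(1800)  # e.g., one minute of video at 30 fps
    with Pool() as pool:
        results = pool.map(process_frame, frame_indices)
    print("Processed", len(results), "frames")
```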
For more information and to see a video, visit the project website at: http://drp.disneyresearch.com/projects/mocap/
Story Source:
Materials provided by Carnegie Mellon University. Note: Content may be edited for style and length.