Computer Scientists Create 'Light Field Camera,' Banishing Fuzzy Photos
- Date:
- November 14, 2005
- Source:
- Stanford University
- Summary:
- We've all done it. Lost that priceless Kodak moment by snapping a photo that was grainy, dark, overexposed or out of focus. While user ineptitude is often at the root of our blurry snapshots, the limits of conventional cameras can be to blame as well. But Stanford computer scientists are now making strides to combat the fuzzy photo by bringing photographic technology into sharp focus.
Ren Ng, a computer science graduate student in the lab of Pat Hanrahan, the Canon USA Professor in the School of Engineering, has developed a "light field camera" capable of producing photographs in which subjects at every depth appear in finely tuned focus. Adapted from a conventional camera, the light field camera overcomes low-light and high-speed conditions that plague photography and foreshadows potential improvements to current scientific microscopy, security surveillance and sports and commercial photography.
"Currently, cameras have to make decisions about the focus before taking the exposure, which engineering-wise can be very difficult," said Ng. "With the light field camera, you can take one exposure, capture a lot more information about the light and make focusing decisions after you've already taken the shot. It is more flexible."
The light field camera, sometimes referred to as a "plenoptic camera," looks and operates exactly like an ordinary handheld digital camera. The difference lies inside. In a conventional camera, rays of light are corralled through the camera's main lens and converge on the film or digital photosensor directly behind it. Each point on the resulting two-dimensional photo is the sum of all the light rays striking that location.
The light field camera adds one more element, a microlens array, inserted between the main lens and the photosensor. Resembling the multi-faceted compound eye of an insect, the microlens array is a square panel composed of nearly 90,000 miniature lenses. Each lenslet separates the converged light rays arriving from the main lens back into their component directions before they hit the photosensor, changing the way the light information is digitally recorded. Custom processing software manipulates this "expanded light field" and traces where each ray would have landed if the camera had been focused at many different depths. The final output is a synthetic image in which the subjects have been digitally refocused.
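In code, that refocusing step is often described as a "shift-and-add" over the directional samples recorded beneath each microlens: each view through a different part of the main lens is translated slightly and the views are averaged. The Python sketch below is a simplified illustration of that idea, not the Stanford group's actual processing software; the array layout, the refocus parameter alpha and the function names are assumptions made for the example.

import numpy as np
from scipy.ndimage import shift  # sub-pixel image translation

def refocus(light_field, alpha):
    """Synthetically refocus a 4D light field by shift-and-add.

    light_field : array of shape (U, V, S, T), where (u, v) index the
        direction a ray arrived from (its position on the main lens)
        and (s, t) index the microlens (spatial) position.
    alpha : float, ratio of the virtual focal depth to the original one;
        alpha = 1 reproduces the photo the camera actually took.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    # Centre the directional coordinates so zero shift means
    # "ray through the middle of the main lens".
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    for u in range(U):
        for v in range(V):
            # Translate each sub-aperture view in proportion to how far
            # off-centre its direction is, then accumulate.
            du = (u - u0) * (1.0 - 1.0 / alpha)
            dv = (v - v0) * (1.0 - 1.0 / alpha)
            out += shift(light_field[u, v], (du, dv), order=1)
    return out / (U * V)

# Example with a random stand-in light field (~9x9 directions per microlens).
lf = np.random.rand(9, 9, 64, 64)
refocused = refocus(lf, alpha=0.8)

Values of alpha other than 1 move the virtual focal plane, producing the digitally refocused images described above.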
Tweaking tradition
Expanding the light field demands that the rules of traditional photography be tweaked. Ordinarily, a tradeoff exists between aperture size, which determines the amount of light reaching the film or photosensor, and depth of field, which determines which objects in an image will be sharp and which will be fuzzy. As the aperture size increases, more light passes through the lens and the depth of field becomes shallower, bringing into focus only subjects close to the chosen focal plane and severely blurring everything else.
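For a conventional camera, that tradeoff can be made concrete with the textbook thin-lens approximation: total depth of field is roughly 2·N·c·u²/f², where N is the f-number, c the acceptable circle of confusion, u the subject distance and f the focal length. The short sketch below simply evaluates that standard formula with illustrative numbers; none of the values come from the Stanford work.

def depth_of_field(f_number, focal_length_mm, subject_dist_mm, coc_mm=0.03):
    """Approximate total depth of field (mm) for a thin lens,
    valid when the subject is well inside the hyperfocal distance."""
    return 2.0 * f_number * coc_mm * subject_dist_mm**2 / focal_length_mm**2

# A 50 mm lens focused at 2 m: opening up from f/8 to f/2
print(depth_of_field(8, 50, 2000))   # ~768 mm in acceptable focus
print(depth_of_field(2, 50, 2000))   # ~192 mm: four times shallower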
The light field camera decouples aperture size and depth of field. The microlens array harnesses the additional light to reveal the depth of each object in the image and project tiny, sharp subimages onto the photosensor. The blurry halo typically surrounding the centrally focused subject is "un-blurred." In this way, the benefits of large apertures—increased light, shorter exposure time, reduced graininess—can be exploited without sacrificing the depth of field or sharpness of the image.
Extending the depth of field while maintaining a wide aperture may provide significant benefits to several industries, such as security surveillance. Often mounted in crowded or dimly lit areas, such as congested airport security lines and backdoor exits, monitoring cameras notoriously produce grainy, indiscernible images.
"Let's say it's nighttime and the security camera is trying to focus on something," said Ng. "If someone comes and they are moving around, the camera will have trouble tracking them. Or if there are two people, whom does it choose to track? The typical camera will close down its aperture to try capturing a sharp image of both people, but the small aperture will produce video that is dark and grainy."
The idea behind the light field camera is not new. Its conceptual roots date back nearly a century, and several variants have been devised over the years, each with slight differences in its optical system. Other models that rely on refocusing light fields have been slow and bulky and have generated gaps in the light fields, known as aliasing. Ng's camera, compact and portable with drastically reduced aliasing, displays greater commercial utility.
Marc Levoy, professor of computer science and electrical engineering; Mark Horowitz, the Yahoo! Founders Professor in the School of Engineering; Mathieu Bredif, M.S. '05 in computer science; and Gene Duval, B.S. '75, M.S. '78 in mechanical engineering and founder of Duval Design, also contributed to this work.
The research was supported by the Office of Technology Licensing Birdseed Fund, which provides small grants for the prototype development of unlicensed technologies. A manuscript detailing the theoretical performance of the light field camera appeared in Transactions on Graphics, published by the Association for Computing Machinery in July, and subsequently was presented at the 2005 ACM SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques) conference in August in Los Angeles.
Story Source:
Materials provided by Stanford University. Note: Content may be edited for style and length.