New angles on visual cloaking of everyday objects
- Date: May 19, 2016
- Source: University of Rochester
- Summary: Using the same mathematical framework as the Rochester Cloak, researchers have been able to use flat-screen displays to extend the range of angles that can be hidden from view. Their method lays out how cloaks of arbitrary shapes that work from multiple viewpoints may be practically realized in the near future using commercially available digital devices.
Using the same mathematical framework as the Rochester Cloak, researchers at the University of Rochester have been able to use flat-screen displays to extend the range of angles that can be hidden from view. Their method lays out how cloaks of arbitrary shapes that work from multiple viewpoints may be practically realized in the near future using commercially available digital devices.
The Rochester researchers have shown a proof-of-concept demonstration of such a setup, which still has much lower resolution than the nearly perfect imaging achieved by the Rochester Cloak lenses. But as higher-resolution displays become available, the "digital integral cloak" they describe in their new Optica paper will continue to improve.
While the Rochester Cloak offered a simple way of cloaking, it worked only over a small range of angles, and cloaking large objects would have required large, expensive lenses.
By breaking up the information into distinct pieces, it becomes possible to use currently available digital cameras and digital displays. The Rochester researchers use a camera to scan a background and then encode the information in such a way that every pixel on a screen offers a unique view of a given point on the background for a given position of a viewer. By doing this for many views and using lenticular lenses -- a sheet of plastic with an array of thin, parallel semicylindrical lenses -- they can recreate multiple images of the background, each corresponding to a viewer at a different position. So if the viewer moves from side to side, every part of the background moves accordingly as if the screen was not there, "cloaking" anything in the space between the screen and the background.
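The per-pixel encoding can be pictured as a simple ray-tracing assignment: each subpixel behind a lenticular lenslet corresponds to one viewing angle, and it should show whatever background point a ray at that angle would reach if the screen were not there. The sketch below illustrates only that idea, in one dimension; the function, parameter names, and numbers are invented for this example and are not taken from the researchers' paper.

```python
import numpy as np

def build_cloak_row(background_row, n_lenses, views_per_lens,
                    lens_pitch_mm, half_angle_deg, screen_to_bg_mm,
                    bg_mm_per_pixel):
    """Choose which background pixel each subpixel of one display row shows.

    Each lenslet covers `views_per_lens` subpixels; the subpixel index
    selects a viewing angle between -half_angle_deg and +half_angle_deg.
    The subpixel is assigned the background point that a straight ray
    leaving the lenslet centre at that angle would hit, i.e. the view an
    observer at that angle would see if the screen were absent.
    """
    display_row = np.zeros(n_lenses * views_per_lens, dtype=background_row.dtype)
    angles = np.deg2rad(np.linspace(-half_angle_deg, half_angle_deg, views_per_lens))

    for lens in range(n_lenses):
        x_lens_mm = (lens + 0.5) * lens_pitch_mm  # lenslet centre on the screen
        for view, theta in enumerate(angles):
            # Continue the viewer's ray straight through the screen to the background.
            x_bg_mm = x_lens_mm + screen_to_bg_mm * np.tan(theta)
            bg_index = int(round(x_bg_mm / bg_mm_per_pixel))
            bg_index = np.clip(bg_index, 0, background_row.size - 1)
            display_row[lens * views_per_lens + view] = background_row[bg_index]
    return display_row

# Toy usage: a 1-D "background" gradient, 200 lenslets, 8 views per lenslet.
background = np.linspace(0, 255, 4000)
row = build_cloak_row(background, n_lenses=200, views_per_lens=8,
                      lens_pitch_mm=1.0, half_angle_deg=15.0,
                      screen_to_bg_mm=500.0, bg_mm_per_pixel=0.1)
print(row.shape)  # (1600,) subpixel values for one display row
```

Repeating this assignment for every lenslet and view angle is what lets each observer position see a consistent, parallax-correct slice of the background through the lenticular sheet.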
In the current system, it takes PhD student Joseph Choi and his advisor, Professor of Physics John Howell, several minutes to scan, process, and update the image on the screen, that is, to update the background. But Choi explains that they hope soon to be able to do this in real time, even if at lower resolution.
Their mathematical framework and their proof-of-concept setup also demonstrate how any object of fixed shape and size can be cloaked, even while in motion, so long as the object does not deform. To do this, one side of the object would be covered in an array of sensors -- effectively cameras -- and the other side in pixels with tiny lenses over them. Choi and Howell's approach could then be used to identify which sensors need to feed into which pixels so as to show the background as if the object were not there. A similar trick has been used in advertising, but for one viewing angle only. By using the Rochester group's setup, however, a car, for example, could be made invisible to viewers at multiple positions, not just to a person at a predetermined position.
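In the ray-optics picture the researchers use, an ideal cloak simply reproduces straight-line propagation across the cloaked region, which is what makes the sensor-to-pixel assignment well defined. The following is a loose one-dimensional sketch of that mapping under such a straight-line propagation assumption; the function name, arguments, and numbers are hypothetical and not taken from the paper.

```python
import numpy as np

def sensor_for_pixel(x_out_mm, theta_out_deg, cloak_length_mm,
                     sensor_pitch_mm, n_sensors):
    """Which input sensor should feed a given output (pixel, view angle)?

    An ideal cloak behaves like empty space: a ray leaving the output
    side at position x_out with angle theta must look as if it travelled
    in a straight line across the cloaked region of thickness L, so it
    entered at x_in = x_out - L * tan(theta), with the same angle.
    """
    theta = np.deg2rad(theta_out_deg)
    x_in_mm = x_out_mm - cloak_length_mm * np.tan(theta)
    sensor_index = int(round(x_in_mm / sensor_pitch_mm))
    if 0 <= sensor_index < n_sensors:
        return sensor_index, theta_out_deg  # same angle on the input side
    return None                             # ray misses the sensor array

# Example: a cloaked region 2 m thick, sensors every 5 mm along a 4 m side.
print(sensor_for_pixel(x_out_mm=1000.0, theta_out_deg=10.0,
                       cloak_length_mm=2000.0, sensor_pitch_mm=5.0,
                       n_sensors=800))
```

Because the mapping depends only on the geometry of the object's surface, not on where the object is, a rigid object wired this way would remain cloaked as it moves.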
Story Source:
Materials provided by University of Rochester. Note: Content may be edited for style and length.
Journal Reference:
- Joseph S. Choi, John C. Howell. Paraxial ray optics cloaking. Optics Express, 2014; 22 (24): 29465 DOI: 10.1364/OE.22.029465