ONR Looks To Human Visual System To Improve Satellite Images
- Date:
- April 27, 2000
- Source:
- Office Of Naval Research
- Summary:
- An advanced neural network technique to increase the level of detail captured in Landsat satellite images has been developed by ONR researchers. The technique, which uses the human visual system as a model, reveals the composition of information contained in a single pixel.
An advanced neural network technique to increase the level of detail captured in Landsat satellite images has been developed by ONR researchers. The technique, which uses the human visual system as a model, reveals the composition of information contained in a single pixel.
"It essentially increases the resolution of satellite images," said ONR Program Officer Harold Szu, who presented the results of this research today at the International Society for Optical Engineering symposium in Orlando, Fla. "In the past, information presented at the pixel level followed the 'winner take all' classification," Szu added. "If a sensor saw mostly trees and one small, man-made structure, the pixel showed only trees." The new more sophisticated wavelet technique reveals objects that traditional techniques might overlook by using highly sophisticated neural network technology to eliminate distracting "background noise" and make subtle judgments about the remaining information.
The key to the ONR breakthrough is a smart sensing capability modeled on the human visual system. Two or more sensors survey the scene, comparing and contrasting their data before the final image is created, much as a person's visual system uses a pair of eyes to generate a single image. Biological sensing systems typically contain two sensors so that the brain has an immediate comparison that doesn't depend on past memory or detailed instructions from a teacher. When the sensors agree, the brain treats the data as information; when they disagree, it discards the data as noise.
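A toy numerical sketch of that agreement test, with invented signals and noise levels, might look like this:

```python
# Illustrative sketch of the sensor-pair idea: what the two readings agree on
# is treated as information, what they disagree on is treated as noise.
import numpy as np

rng = np.random.default_rng(0)
scene = np.sin(np.linspace(0, 4 * np.pi, 200))       # the "true" scene signal

# Two sensors view the same scene but carry independent noise.
sensor_a = scene + 0.3 * rng.standard_normal(scene.shape)
sensor_b = scene + 0.3 * rng.standard_normal(scene.shape)

agreement = 0.5 * (sensor_a + sensor_b)     # common part: kept as information
disagreement = 0.5 * (sensor_a - sensor_b)  # differing part: discarded as noise

print("noise power of one sensor      :", np.var(sensor_a - scene))
print("noise power after agreement    :", np.var(agreement - scene))
```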
Szu and his colleague Dr. James Buss, also with ONR, describe a new learning paradigm in their paper "ICA Neural Net to Refine Remote Sensing with Multiple Labels," whereby a pair of sensors can identify any common information and leave behind anything not useful. "It's not the 'garbage in, garbage out' processing of a typical computer, which is a faithful processor but not a very smart one," Buss said. The smart sensing system can dig out the nuggets of information even if they're buried in garbage.
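The independent component analysis (ICA) idea behind that paradigm can be sketched with an off-the-shelf routine. The example below uses scikit-learn's FastICA on a simulated sensor pair; it stands in for the authors' neural network rather than reproducing it:

```python
# Illustrative sketch, assuming scikit-learn is available: blind separation of
# a useful signal from clutter mixed into two sensor readings.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 5 * t)              # useful scene information
clutter = np.sign(np.sin(2 * np.pi * 13 * t))   # "garbage" mixed into both sensors

# Each sensor sees a different, unknown mixture of the two sources.
mixing = np.array([[1.0, 0.6],
                   [0.8, 1.2]])
observations = np.column_stack([signal, clutter]) @ mixing.T   # shape (2000, 2)

# ICA recovers statistically independent sources blindly, without a teacher
# or labeled training data.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observations)     # columns ~ signal and clutter
print("recovered source shape:", recovered.shape)
```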
"Sensor pairs are no doubt a survival mechanism that gives us the maximum amount of feedback in the shortest span of time," Szu said. "Sensor pairs allow a child to learn about the world without a lesson plan or the constant presence of a teacher."
Remote sensing is the first real-world application of smart sensor processing. It will provide detailed compositions of information within each pixel on Landsat images. The technology will be installed in F-18 military aircraft to assist pilots in passive surveillance. Smart sensing will provide a statistical warning to the pilot, who can then follow up with an active surveillance tool such as LIDAR to pinpoint and analyze objects.
The sensing system can also be used to monitor environmental change such as rain forest deforestation. Most developing countries do not have the people or equipment to conduct earth resource management studies. This kind of tool could help them protect valuable natural resources. Commercially, the smart sensor could be used to rid cell phones of background noise.
Story Source:
Materials provided by Office Of Naval Research. Note: Content may be edited for style and length.