Computers 'Taught' To Search For Photos Based On Their Contents
- Date: October 9, 2008
- Source: Penn State
- Summary: A new statistical approach that one day could make it easier to search the Internet for photographs has been given a patent, and its accuracy is now being improved with public participation. Called Automatic Linguistic Indexing of Pictures, the system works by teaching computers to recognize the contents of photographs rather than by searching for keywords in the surrounding text, as most current image-retrieval systems do.
A pair of Penn State researchers has developed a statistical approach, called Automatic Linguistic Indexing of Pictures in Real-Time (ALIPR), that one day could make it easier to search the Internet for photographs.
The public can participate in improving ALIPR's accuracy by visiting a designated Web site (http://www.alipr.com), uploading photographs, and evaluating whether the keywords that ALIPR uses to describe the photographs are appropriate.
ALIPR works by teaching computers to recognize the contents of photographs, such as buildings, people, or landscapes, rather than by searching for keywords in the surrounding text, as is done with most current image-retrieval systems. The team recently received a patent for an earlier version of the approach, called ALIP, and is in the process of obtaining another patent for the more sophisticated ALIPR. They hope that eventually ALIPR can be used in industry for automatic tagging or as part of Internet search engines.
"Our basic approach is to take a large number of photos -- we started with 60,000 photos -- and to manually tag them with a variety of keywords that describe their contents. For example, we might select 100 photos of national parks and tag them with the following keywords: national park, landscape, and tree," said Jia Li, an associate professor of statistics at Penn State. "We then would build a statistical model to teach the computer to recognize patterns in color and texture among these 100 photos and to assign our keywords to new photos that seem to contain national parks, landscapes, and/or trees. Eventually, we hope to reverse the process so that a person can use the keywords to search the Web for relevant images."
In one example, ALIPR assigned the following keywords to a photo of a dinosaur exhibit at the American Museum of Natural History in New York, New York: rock, animal, landscape, man-made, people, cave, wildlife, indoor, interior, lizard, texture, design, grass, car, and building.
Li said that most current image-retrieval systems search for keywords in the text surrounding a photo or in the photo's file name. That technique often misses relevant photos and retrieves irrelevant ones. Li's approach instead trains computers to recognize the semantics of an image from pixel information alone.
Li, who developed ALIPR with her colleague James Wang, a Penn State associate professor of information sciences and technology, said that the approach assigns at least one appropriate keyword among the seven it suggests for a photo about 90 percent of the time. But, she added, the measured accuracy really depends on the evaluator. "It depends on how specific the evaluator expects the approach to be," she said. "For example, ALIPR often distinguishes people from animals, but rarely distinguishes children from adults."
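Taken literally, that figure is a top-seven hit rate: the fraction of photos for which at least one of the seven suggested keywords is judged appropriate. The article does not give the exact evaluation protocol, so the helper below is only one plausible reading of the measure; its inputs and names are hypothetical.

```python
def top_k_hit_rate(suggested, relevant, k=7):
    """Fraction of photos where at least one of the top-k suggested
    keywords appears in the set judged appropriate for that photo.

    suggested: list of ranked keyword lists, one per photo.
    relevant:  list of sets of keywords judged appropriate per photo.
    """
    hits = sum(1 for s, r in zip(suggested, relevant) if set(s[:k]) & r)
    return hits / len(suggested)

# Toy usage: the first photo is a hit, the second a miss -> rate 0.5.
print(top_k_hit_rate(
    [["animal", "wildlife", "grass"], ["car", "building"]],
    [{"wildlife"}, {"people"}],
))
```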
Although the team's goal is to improve ALIPR's accuracy, Li said she does not believe the approach ever will be 100-percent accurate. "There are so many images out there and so many variations on the images' contents that I don't think it will be possible for ALIPR to be 100-percent accurate," she said. "ALIPR works by recognizing patterns in color and texture. For example, if a cat in a photo is wearing a red coat, the red coat may lead ALIPR to tag the photo with words that are irrelevant to the cat. There is just too much variability out there." Li currently is pursuing some new ideas that may help her to achieve better recognition of image semantics.
This work is being supported by the National Science Foundation.
Story Source:
Materials provided by Penn State.