Facial recognition technology aimed at spotting terrorists
- Date: September 16, 2010
- Source: University of Texas at Dallas
- Summary: Rapid improvements in facial-recognition software mean airport security workers might one day know with near certainty whether they're looking at a stressed-out tourist or staring a terrorist in the eye. Researchers are evaluating how well these rapidly evolving recognition programs work, comparing the software's success rates with those of non-technological, but presumably "expert," human evaluation.
Rapid improvements in facial-recognition software mean airport security workers might one day know with near certainty whether they're looking at a stressed-out tourist or staring a terrorist in the eye.
A research team led by Dr. Alice O'Toole, a professor in The University of Texas at Dallas' School of Behavioral and Brain Sciences, is evaluating how well these rapidly evolving recognition programs work. The researchers are comparing the software's success rates with those of non-technological, but presumably "expert," human evaluation.
"The government is interested in spotting people who might pose a danger," O'Toole said. "But they also don't want to have too many false alarms and detain people who are not real risks."
The studies in the Face Perception and Research Laboratories are funded by the U.S. Department of Defense, which is seeking the most accurate and cost-effective way to recognize individuals who might pose a security risk to the nation.
Algorithms -- the step-by-step procedures that allow computers to "recognize" faces -- vary greatly from one software developer to another, and most have not yet faced real-world challenges. So O'Toole and her team are carefully examining where the algorithms succeed and where they come up short. They use point-by-point comparisons to examine similarities among millions of faces captured in a database, then compare the results with the algorithms' determinations.
In the studies, humans and algorithms decided whether pairs of face images, taken under different illumination conditions, were pictures of the same person or different people.
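The article does not describe any particular vendor's internals, but the same-or-different decision can be sketched in a few lines: assume a hypothetical feature extractor has already turned each photo into a numeric vector, and declare a match when the vectors' cosine similarity clears a threshold. This is an illustrative sketch, not any of the tested algorithms:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-feature vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(feat_a: np.ndarray, feat_b: np.ndarray,
                threshold: float = 0.8) -> bool:
    """Declare a match when similarity clears the threshold.

    Raising the threshold trades missed matches for fewer false
    alarms -- the balance O'Toole describes the government weighing.
    """
    return cosine_similarity(feat_a, feat_b) >= threshold

# Stand-in vectors for illustration; a real system would compute
# them from the two photographs with a trained feature extractor.
probe, gallery = np.random.rand(128), np.random.rand(128)
print(same_person(probe, gallery))
```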
The UT Dallas researchers have worked with algorithms that match up still photos and are now moving into comparisons involving more challenging images, such as faces caught on video or photographs taken under poor lighting conditions.
"Many of the images that security people have to work with are not high-quality," O'Toole said. "They may be taken off closed-circuit television or other low-resolution equipment."
The study is likely to continue through several more phases as more and better software programs are presented for review. So far, the man-versus-machine results have been a bit surprising, O'Toole said.
"In fact, the very best algorithms performed better than humans at identifying faces," she said. "Because most security applications rely primarily on human comparisons up until now, the results are encouraging about the prospect of using face recognition software in important environments."
The real success comes when the software is combined with human evaluation, O'Toole said. When the software flagged potential high-risk individuals and its judgments were then fused with a person's, nearly 100 percent of matching faces were identified.
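The article does not say how the two kinds of judgment were fused; one simple scheme, shown here purely as an illustration, is to put the algorithm's similarity scores and the human ratings on a common scale and average them before thresholding:

```python
import numpy as np

def zscore(scores: np.ndarray) -> np.ndarray:
    """Put scores from different raters on a common scale."""
    return (scores - scores.mean()) / scores.std()

def fuse(machine_scores: np.ndarray, human_scores: np.ndarray) -> np.ndarray:
    """Average normalized machine and human similarity ratings.

    Each array holds one score per face pair; higher means
    'more likely the same person.'
    """
    return (zscore(machine_scores) + zscore(human_scores)) / 2.0

machine = np.array([0.91, 0.42, 0.77, 0.30])  # algorithm similarity scores
human = np.array([4.0, 2.0, 5.0, 1.0])        # 1-5 human ratings
print(fuse(machine, human) > 0.0)             # pairs judged the same person
```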
The researchers also are interested in the role race plays in humans' ability to spot similar facial features. O'Toole said many studies indicate that individuals recognize faces of their own race more accurately than faces of other races. But there is little research evaluating how technological tools differ in recognizing faces of varying races.
In a paper to be published soon in ACM Transactions on Applied Perception, O'Toole reports that the "other race effect" occurs for algorithms tested in a recent international competition for state-of-the-art face recognition algorithms. The study involved a Western algorithm made by fusing eight algorithms from Western countries and an East Asian algorithm made by fusing five algorithms from East Asian countries. At the low false-accept rates required for most security applications, the Western algorithm recognized Caucasian faces more accurately than East Asian faces, and the East Asian algorithm recognized East Asian faces more accurately than Caucasian faces.
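A "low false-accept rate" is an operating point on the score distributions: the match threshold is set so that only a tiny fraction of different-person (impostor) pairs score above it, and accuracy is then measured as the fraction of same-person pairs that still clear that bar. A minimal sketch with synthetic scores, for illustration only:

```python
import numpy as np

def true_accept_rate_at_far(genuine: np.ndarray, impostor: np.ndarray,
                            far: float = 0.001) -> float:
    """Fraction of same-person pairs accepted at a fixed false-accept rate.

    The threshold is the (1 - far) quantile of impostor scores, so only
    `far` of different-person pairs would slip past it.
    """
    threshold = np.quantile(impostor, 1.0 - far)
    return float(np.mean(genuine >= threshold))

# Synthetic similarity scores standing in for real algorithm output.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 10_000)    # same-person pairs
impostor = rng.normal(0.3, 0.1, 10_000)   # different-person pairs
print(true_accept_rate_at_far(genuine, impostor, far=0.001))
```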
Next, using a test that spanned all false-alarm rates, O'Toole's team compared the algorithms with humans of Caucasian and East Asian descent matching face identity in an identical stimulus set. In this case, both algorithms performed better on the Caucasian faces, the "majority" race in the database. The Caucasian face advantage was far larger for the Western algorithm than for the East Asian algorithm.
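A test that "spans all false-alarm rates" sweeps the threshold across every possible value, which amounts to measuring the area under the ROC curve -- the probability that a random same-person pair outscores a random different-person pair. A rank-based sketch, again with synthetic scores:

```python
import numpy as np

def auc(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.

    Equals the probability that a random same-person pair outscores a
    random different-person pair, aggregating over every threshold.
    """
    scores = np.concatenate([genuine, impostor])
    ranks = scores.argsort().argsort() + 1          # 1-based ranks
    n_g, n_i = len(genuine), len(impostor)
    u = ranks[:n_g].sum() - n_g * (n_g + 1) / 2     # Mann-Whitney U
    return float(u / (n_g * n_i))

rng = np.random.default_rng(1)
genuine = rng.normal(0.7, 0.1, 5_000)
impostor = rng.normal(0.3, 0.1, 5_000)
print(auc(genuine, impostor))   # near 1.0 means strong separation
```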
Humans showed the standard other-race effect for these faces, but showed more stable performance than the algorithms over changes in the race of the test faces. These findings indicate that state-of-the-art face-recognition algorithms, like humans, struggle with "other-race face" recognition, O'Toole said.
The companies that develop the most reliable facial recognition software are likely to reap big profits down the line. Although governments may be their most obvious clients, there is also a great deal of interest from other major industries.
"Casinos have been some of the first users of face recognition software," O'Toole said. "They obviously want to be able to spot people who are counting cards and trying to cheat the casino."
O'Toole collaborated on the research with Dr. P. Jonathon Phillips of the National Institute of Standards and Technology, Dr. Fang Jiang of the University of Washington, and Dr. Abhijit Narvekar of Alcon Labs.
Story Source:
Materials provided by University of Texas at Dallas. Note: Content may be edited for style and length.