
Findings About Veracity Of Peripheral Vision Could Lead To Better Robotic Eyes

Date:
October 18, 2009
Source:
Kansas State University
Summary:
Psychology researchers have found that peripheral vision is most important for telling us what type of scene we're looking at. Examining how people take in scene information paves the way for building better robots.

Two Kansas State University psychology researchers have found that although central vision allows our eyes to discern the details of a scene, our peripheral vision is most important for telling us what type of scene we're looking at in the first place, such as whether it is a street, a mountain or a kitchen.

"I think the most surprising part of the study is that we didn't really expect peripheral vision to do that well at getting the gist of a scene," said Adam Larson, K-State master's student in psychology. "What makes our study interesting is that it's showing that peripheral vision is really important for perceiving scenes."

Larson is working with Lester Loschky, a K-State assistant professor of psychology, to research how people understand and label what they see around them in the world. This study of peripheral versus central vision appeared in the Journal of Vision.

The researchers showed people two kinds of photographs of everyday scenes, ones in which the periphery was obscured and others in which the center of the image was obscured. View the images online at http://ow.ly/ueEu

"We found that your peripheral vision is important for taking in the gist of a scene and that you can remove the central portion of an image, where your visual acuity is best, and still do just fine at identifying the scene," Larson said.

Loschky and Larson also showed images in which less of the picture was obscured. They found that people's central vision benefited more from just a few additional pixels than their peripheral vision did, suggesting that different regions of the visual field use information differently.
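The masking manipulation itself is straightforward to reproduce. Below is a minimal sketch in Python, using NumPy and Pillow, that blanks out either a central circle or everything outside it; the radius fraction, the mid-gray fill and the file names are illustrative assumptions, not the parameters used in the published study.

```python
import numpy as np
from PIL import Image

def mask_scene(path, radius_frac=0.3, keep="periphery"):
    """Obscure either the center or the periphery of a photograph
    with mid-gray, mimicking the two stimulus conditions described
    in the article. radius_frac and the gray level are illustrative
    choices, not the study's actual parameters.
    """
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.uint8).copy()
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    # Boolean mask: True inside a circle centered on the image.
    center = (xx - w / 2) ** 2 + (yy - h / 2) ** 2 <= (radius_frac * min(h, w)) ** 2
    gray = 128
    if keep == "periphery":
        img[center] = gray   # central vision obscured, periphery visible
    else:
        img[~center] = gray  # periphery obscured, central window visible
    return Image.fromarray(img)

# Example: produce both conditions for one (hypothetical) photograph.
# mask_scene("kitchen.jpg", keep="periphery").save("center_obscured.jpg")
# mask_scene("kitchen.jpg", keep="center").save("periphery_obscured.jpg")
```

Running both conditions on the same photograph yields the paired stimuli described above: one image that tests peripheral vision alone, and one that tests central vision alone.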

Loschky said the human eye moves on average three times each second. The brief periods when the eyes are still, called fixations, are when our brains take in information; metaphorically, each fixation is a snapshot of the scene. When the eyes are moving, he said, it is like panning a video camera too fast: the images are blurry, so our brains don't take in much information.

"The question we're trying to answer is, how do we get the information about our environment from these

snapshots?" Loschky said. "It turns out that the first meaningful information you can get from fixation is probably to give it a very short label."

That's what we do when we channel surf, he said.

"People tend to do that pretty quickly, particularly the ones who drive everybody else crazy," Loschky said. "To decide that quickly whether to watch a program, they have to get an overall impression of what they're looking at, even though they don't know many of the details of it."

Loschky said that examining how people take in scene information paves the way for a better understanding of eyewitness testimony in court cases, as well as possible applications in advertising and marketing. But he said the most important contribution of this research could be in building better robots.

"If you want to build a machine that can understand what it's looking at, we have to understand how humans do it first," he said. "Suppose you give your robot two video camera eyes. How are you going to get the robot to the next critical step, which is to understand what it sees? That's a really difficult problem that computer scientists have been working on since the 1950s, and they're just starting to have some success now."

In future research, Larson and Loschky will study how people make sense of events that unfold over time, such as seeing that someone is washing the dishes, taking a walk or robbing a bank. Whereas the scene gist work examined visual experience using single photographs, the researchers said the new studies will use sequences of photographs or movies.


Story Source:

Materials provided by Kansas State University. Note: Content may be edited for style and length.


Journal Reference:

  1. Adam M. Larson, Lester C. Loschky. The contributions of central versus peripheral vision to scene gist recognition. Journal of Vision, 2009; 9(10): 6. DOI: 10.1167/9.10.6

