
Visual search function: Where scene context happens in our brain

Date:
May 21, 2013
Source:
University of California - Santa Barbara
Summary:
Though a seemingly simple and intuitive strategy, visual search -- a process that takes mere seconds for the human brain -- is still something a computer cannot do as accurately as the human brain. Over millennia of human evolution, our brains developed a pattern of search based largely on environmental cues and scene context. It's an ability that not only helped us find food and avoid danger in humankind's earliest days, but continues to aid us today. Where this -- the search for objects using the scene and other objects -- occurs in the brain has been little understood, and is discussed for the first time in a new paper.

In a remote fishing community in Venezuela, a lone fisherman sits on a cliff overlooking the southern Caribbean Sea. This man -- the lookout -- is responsible for directing his comrades on the water, who are too close to their target to detect their next catch. Using abilities honed by years of scanning the water's surface, he can tell by shadows, ripples, and even the behavior of seabirds where the fish are schooling, and what kind of fish they might be, without actually seeing the fish. This, in turn, changes where the boats go and how the men fish.

Though a seemingly simple and intuitive strategy, the lookout's visual search -- a process that takes mere seconds for the human brain -- is still something that a computer, despite technological advances, can't do as accurately as the human brain.

"Behind what seems to be automatic is a lot of sophisticated machinery in our brain," said Miguel Eckstein, professor in UC Santa Barbara's Department of Psychological & Brain Sciences. "A great part of our brain is dedicated to vision."

Over millennia of human evolution, our brains developed a pattern of search based largely on environmental cues and scene context. It's an ability that not only helped us find food and avoid danger in humankind's earliest days, but continues to aid us today, in tasks as banal as driving to work or shopping, or as specialized as reading X-rays.

Where this -- the search for objects using the scene and other objects -- occurs in the brain has been little understood, and is discussed for the first time in the paper, "Neural Representations of Contextual Guidance in Visual Search of Real-World Scenes," published recently in the Journal of Neuroscience.

The researchers flashed hundreds of images of indoor and outdoor scenes before observers, instructing them to search for certain objects consistent with those scenes. Half of the images, however, did not contain the target object. During the trials, the subjects indicated whether the target object was present in the scene.

The researchers were particularly interested in the images that did not contain the target. A separate measure determined where subjects expected specific objects to be in these target-absent scenes. Invariably, the subjects indicated similar areas: if presented with a living room scene and told to look for a clock or a painting, they would point to the wall; if shown a photo of a bathroom and asked where to expect hand soap or a toothbrush, they would indicate the sink.

The searched object's contextual location in a scene, according to the study, is represented in an area called the lateral occipital complex (LOC), which corresponds roughly to the lower back portion of the head, toward the side. This area, according to Eckstein, can account for other objects in the scene that often appear in close spatial proximity to the searched object -- something computers are only recently being taught to do.

"So, if you're looking for a computer mouse on a cluttered desk, a machine would be looking for things shaped like a mouse. It might find it, but it might see other objects of similar shape, and classify that as a mouse," Eckstein said. Computer vision systems might also not associate their target with specific locations or other objects. So, to a machine, the floor is just as likely a place for a mouse as a desk.

The LOC, on the other hand, would contain the information the brain needs to direct a person's attention and gaze first toward the most likely place that a mouse might be, such as on top of the desk, or near the keyboard. From there, other visual parts of the brain go to work, searching for particular characteristics, or determining the target's presence.

So strong is scene context in biasing search, said Eckstein, that if another similar-looking object were placed where the mouse is likely to be, and that scene were briefly flashed before your eyes, you would likely -- erroneously -- interpret that object as the mouse.
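
To make this concrete, here is a minimal, hypothetical sketch in Python of the kind of prior-times-evidence computation described above: a context prior over likely locations is combined with shape evidence at each candidate. The locations, numbers, and the simple multiplicative rule are illustrative assumptions for exposition, not the study's actual model:

    # A toy sketch of contextual guidance in visual search.
    # All names and numbers are illustrative, not from the study.
    candidates = {
        # location: (context_prior, shape_similarity)
        "on the desk, near keyboard": (0.70, 0.6),  # mouse-shaped stapler
        "on the floor":               (0.05, 0.9),  # the actual mouse
        "on a shelf":                 (0.25, 0.3),  # a coffee cup
    }

    def score(prior: float, similarity: float) -> float:
        """Combine prior belief about location with shape evidence."""
        return prior * similarity

    scores = {loc: score(p, s) for loc, (p, s) in candidates.items()}
    best = max(scores, key=scores.get)
    print(scores)
    print("First place to look / likely 'mouse':", best)

    # With only a brief glimpse (weak shape evidence), the strong context
    # prior dominates: the mouse-shaped object on the desk (0.70 * 0.6 = 0.42)
    # outscores the real mouse on the floor (0.05 * 0.9 = 0.045), reproducing
    # the kind of context-driven error described above.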

While scene context information is most strongly represented in the LOC, other visual areas of the brain are also influenced by context to varying degrees, including the intraparietal sulcus, located near the top of the head, and the retrosplenial cortex, found deeper within the brain.

"Since contextual guidance is a critical strategy that allows humans to rapidly find objects in scenes, studying the brain areas involved in normal humans might help us to gain a better understanding of neural areas involved in those with visual search deficits, such as brain-damaged patients and the elderly," Eckstein said. "Also, a large component of becoming an expert searcher -- like radiologists or fishermen -- is exploiting contextual relationships to search. Thus, understanding the neural basis of contextual guidance might allow us to gain a better understanding about what brain areas are critical to gain search expertise."

The study's coauthors are first author Tim Preston, a visiting researcher, along with Fei Guo, Koel Das, and Barry Giesbrecht, all of the Department of Psychological & Brain Sciences and the Institute for Collaborative Biotechnologies at UCSB.


Story Source:

Materials provided by University of California - Santa Barbara. Note: Content may be edited for style and length.


Journal Reference:

  1. T. J. Preston, F. Guo, K. Das, B. Giesbrecht, M. P. Eckstein. Neural Representations of Contextual Guidance in Visual Search of Real-World Scenes. Journal of Neuroscience, 2013; 33 (18): 7846 DOI: 10.1523/JNEUROSCI.5840-12.2013

