
New Theory Of Visual Computation Reveals How Brain Makes Sense Of Natural Scenes

Date:
November 20, 2008
Source:
Carnegie Mellon University
Summary:
Computational neuroscientists have developed a computational model that provides insight into the function of the brain's visual cortex and the information processing that enables people to perceive contours and surfaces, and understand what they see in the world around them.

Computational neuroscientists at Carnegie Mellon University have developed a computational model that provides insight into the function of the brain's visual cortex and the information processing that enables people to perceive contours and surfaces, and understand what they see in the world around them.

Visual neurons known as simple cells can detect lines, or edges, but the computation they perform is insufficient to make sense of natural scenes, said Michael S. Lewicki, associate professor in Carnegie Mellon's Computer Science Department and the Center for the Neural Basis of Cognition. Edges often are obscured by variations in the foreground and background surfaces within a scene, he said, so more sophisticated processing is necessary to understand the complete picture. But little is known about how the visual system accomplishes this feat.
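For readers unfamiliar with the term, simple cells are conventionally modeled as oriented Gabor filters; that is a textbook account from visual neuroscience, not a detail taken from this study. The short sketch below, with illustrative sizes and parameters, shows how such a linear filter responds strongly to an edge of its preferred orientation and hardly at all to the orthogonal one:

```python
import numpy as np

def gabor(size=16, theta=0.0, freq=0.0625, sigma=4.0):
    """Oriented Gabor filter: a sinusoid along one direction, windowed by a
    Gaussian. This is the textbook model of a simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate rotated by theta
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * freq * xr)

# A synthetic 16x16 patch containing a vertical luminance edge.
patch = np.zeros((16, 16))
patch[:, 8:] = 1.0

# A simple cell's response is a weighted sum of the patch with its filter:
# the vertically oriented filter (theta=0, modulation along x) fires strongly,
# while the horizontal one (theta=pi/2) is nearly silent.
print("vertical filter:  ", float(np.sum(gabor(theta=0.0) * patch)))
print("horizontal filter:", float(np.sum(gabor(theta=np.pi / 2) * patch)))
```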

In a paper published online by the journal Nature, Lewicki and his former graduate student Yan Karklin outline their computational model of this visual processing. The model employs an algorithm that analyzes the myriad local patterns composing natural scenes and statistically characterizes them, determining which patterns are most likely to occur together.
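The article gives no equations, but one way to make the idea concrete is a two-layer generative sketch in which higher-order variables describe the joint statistics of lower-level feature activations rather than pixels themselves. Everything below, including the dimensions, the Gaussian choices, and the matrices A and B, is an illustrative assumption in the spirit of the article's description, not the paper's actual model or fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 16x16-pixel patches, 256 edge-like linear features,
# and 64 higher-order variables summarizing pattern statistics.
n_pix, n_feat, n_hi = 256, 256, 64
A = rng.normal(scale=0.1, size=(n_pix, n_feat))  # linear features (edge-like basis)
B = rng.normal(scale=0.5, size=(n_feat, n_hi))   # maps pattern variables to feature variances

def sample_patch():
    """Two-layer generative sketch: the higher-order variables y do not encode
    pixels directly. Instead they set the variances of the feature activations,
    capturing which features tend to be active together in a given kind of
    texture (bark, grass, sky)."""
    y = rng.normal(size=n_hi)                             # abstract pattern description
    s = rng.normal(size=n_feat) * np.exp(0.5 * (B @ y))   # feature activations
    return A @ s                                          # rendered 256-pixel patch

print(sample_patch().shape)  # (256,)
```

In a model of this shape, two patches of bark can have very different pixel values yet map to similar higher-order descriptions, which is one way to read the generalization the article describes.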

The bark of a tree, for instance, is composed of a multitude of different local image patterns, but the computational model can determine that all of these local patches represent bark and belong to the same tree, and that they are not part of a bush in the foreground or the hill behind it.

"Our model takes a statistical approach to making these generalizations about each patch in the image," said Lewicki, who currently is on sabbatical at the Institute for Advanced Study in Berlin. "We don't know if the visual system computes exactly in this way, but it is behaving as if it is."

Lewicki and Karklin report that the responses of their model neurons to images used in physiological experiments closely match the responses of neurons in higher visual stages. These "complex cells," so named for their more complex response properties, have been studied extensively, but the role they play in visual processing has been elusive. "We were astonished that the model reproduced so many of the properties of these cells just as a result of solving this computational problem," Lewicki said.
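The article does not spell out what those response properties are. The classical "energy model" of a complex cell, a standard description from the literature rather than the authors' model, illustrates the kind of behavior being matched: responses that are selective for orientation yet invariant to the exact position, or phase, of a stimulus. A minimal sketch:

```python
import numpy as np

def gabor(theta, phase, size=16, freq=0.0625, sigma=4.0):
    # A simple-cell filter like the earlier sketch's, with an explicit phase.
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr + phase)

def complex_cell(patch, theta=0.0):
    """Energy model: square and sum the outputs of two simple cells 90 degrees
    out of phase. The result keeps the orientation preference but discards the
    exact stimulus position, the hallmark complex-cell property."""
    even = np.sum(gabor(theta, 0.0) * patch)
    odd = np.sum(gabor(theta, np.pi / 2) * patch)
    return even**2 + odd**2

# Shift a vertical grating across the receptive field: the single simple
# cell's response swings with stimulus position, the complex cell's barely moves.
y, x = np.mgrid[-8:8, -8:8]
for phi in (0.0, np.pi / 4, np.pi / 2):
    grating = np.sin(2 * np.pi * 0.0625 * x + phi)
    simple = float(np.sum(gabor(0.0, 0.0) * grating))
    print(f"phase {phi:4.2f}: simple = {simple:7.2f}, complex = {complex_cell(grating):7.2f}")
```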

The human brain makes these interpretations of visual stimuli effortlessly, but computer scientists have long struggled to program computers to do the same. "We don't have computer vision algorithms that function as well as the brain," Lewicki said, so computers often have trouble recognizing objects, understanding their three-dimensional nature and appreciating how the objects they see are juxtaposed across a landscape. A deeper understanding of how the brain perceives the world could translate into improved computer vision systems.

In the meantime, the functional explanation of complex cells suggested by the computational model will enable scientists to develop new ways of investigating the visual system and other brain areas. "It's still a theory, after all, so naturally you want to test it further," Lewicki noted. But if the model is confirmed, it could establish a new paradigm for how we derive the general from the specific.

Karklin, who earned his Ph.D. in computational neuroscience, machine learning and computer science in 2007, is now a postdoctoral fellow at New York University.


Story Source:

Materials provided by Carnegie Mellon University. Note: Content may be edited for style and length.


