
Explaining How The Brain Recognizes Faces

Date:
April 9, 2006
Source:
Cell Press
Summary:
The mechanism by which the brain recognizes faces has long fascinated neurobiologists, many of whom believe that the brain perceives faces as "special" and very different from other visual objects. For example, classic studies found that turning the image of a face upside down compromises recognition much more than does similarly inverting other objects.

The mechanism by which the brain recognizes faces has long fascinated neurobiologists, many of whom believe that the brain perceives faces as "special" and very different from other visual objects. For example, classic studies found that turning the image of a face upside down compromises recognition much more than does similarly inverting other objects.

More recent studies have suggested that there may even be individual neurons tuned to the identity of one particular person. These neurons, according to that theory, lie in the "fusiform face area" (FFA), a region known to be particularly active when a person encounters a face.

However, in the April 6, 2006 issue of Neuron, Maximilian Riesenhuber of Georgetown University Medical Center and his colleagues (Jiang et al.) report evidence for a different theory: that the FFA contains tightly integrated circuitry that recognizes faces through selective processing of the shapes of facial features.

In their studies, the researchers first constructed a computational model that represented how their hypothesized neuronal circuitry would work. The model aimed to predict how that circuitry could give rise to the perception of faces, which includes both the shape of specific features (eyes, noses, and mouths) and the "configuration" of those features, that is, their position on the face.
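
To make the idea concrete, here is a minimal sketch, in Python and not the authors' code, of how a hierarchical, shape-based model of this kind can show sensitivity to feature placement without any explicit coding of configuration. All function names, feature parameters, and tuning widths below are illustrative assumptions.

```python
import numpy as np

def feature_unit_response(feature_vec, preferred_vec, sigma=0.1):
    """Gaussian tuning: response falls off as an observed facial feature
    (described by a few shape-and-position parameters) deviates from the
    unit's preferred feature."""
    d = np.linalg.norm(np.asarray(feature_vec, dtype=float)
                       - np.asarray(preferred_vec, dtype=float))
    return np.exp(-d**2 / (2 * sigma**2))

def face_unit_response(face, preferred_face, sigma_feature=0.1, sigma_face=0.5):
    """A face-tuned unit pools feature-tuned units. 'face' and 'preferred_face'
    map feature names to parameter vectors; nothing here codes 'configuration'
    explicitly, yet sensitivity to feature placement emerges because position
    is part of each feature's parameters."""
    responses = np.array([
        feature_unit_response(face[name], preferred_face[name], sigma_feature)
        for name in preferred_face
    ])
    # The face unit is itself Gaussian-tuned to its preferred pattern of
    # feature-unit activity (here, all features matching perfectly).
    d = np.linalg.norm(responses - 1.0)
    return np.exp(-d**2 / (2 * sigma_face**2))

# Toy example: shifting the mouth downward weakens the face unit's response
# even though the mouth's shape is unchanged, a "configural" effect without
# explicit configural coding. Each feature's parameters are [x, y, width].
preferred = {"left_eye":  [0.3, 0.7, 0.1],
             "right_eye": [0.7, 0.7, 0.1],
             "mouth":     [0.5, 0.2, 0.3]}
same_face    = {name: list(vec) for name, vec in preferred.items()}
shifted_face = dict(preferred, mouth=[0.5, 0.0, 0.3])

print(face_unit_response(same_face, preferred))     # ~1.0
print(face_unit_response(shifted_face, preferred))  # noticeably lower
```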

The researchers found that their model captured these aspects of face perception, even though the circuitry in the model did not explicitly code for them. To demonstrate that the same kind of circuitry could be similarly tuned to other objects, they also tested how the model behaved when it encountered images of cars. They found that it worked just as well, producing the same recognition characteristics for cars as for faces.

Riesenhuber and his colleagues tested their "shape-based" model experimentally by showing volunteers images of faces that could be precisely "morphed" with a computer program to subtly alter the facial features. At the same time, the subjects' brains were scanned using functional magnetic resonance imaging (fMRI) to detect patterns of activity in the FFA. The fMRI technique uses harmless magnetic fields and radio waves to measure blood flow in brain regions, which reflects their activity. The researchers found that the results of the fMRI studies agreed with those of the computational model.
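
The morphing manipulation can also be sketched in miniature. The study used a dedicated face-morphing program; the linear blend of made-up shape parameters below is only a conceptual stand-in showing how a graded continuum of stimuli, whose physical difference from a starting face is precisely controlled, can be generated.

```python
import numpy as np

def morph(face_a, face_b, alpha):
    """Blend two faces' shape-parameter vectors: alpha=0 returns face_a,
    alpha=1 returns face_b, and intermediate values give intermediate faces."""
    a = np.asarray(face_a, dtype=float)
    b = np.asarray(face_b, dtype=float)
    return (1 - alpha) * a + alpha * b

face_a = np.array([0.2, 0.8, 0.5, 0.1])   # hypothetical shape parameters
face_b = np.array([0.6, 0.4, 0.3, 0.7])

# A continuum of stimuli from 0% to 100% morph in 10% steps, of the kind one
# might use to probe how neural responses or behavioral discrimination change
# with graded physical differences between faces.
for alpha in np.linspace(0.0, 1.0, 11):
    stimulus = morph(face_a, face_b, alpha)
    distance_from_a = np.linalg.norm(stimulus - face_a)
    print(f"{alpha:.1f} morph, distance from face A: {distance_from_a:.3f}")
```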

The researchers concluded that "we have shown that a computational implementation of a physiologically plausible neural model of face processing can quantitatively account for key data, leading to the prediction that human face discrimination is based on a sparse population code of sharply tuned face neurons.

"In particular, we have shown that a shape-based model can account for the face inversion effect, can produce 'configural' effects without explicit configural coding, and can quantitatively account for the experimental data. The model thus constitutes a computational counterexample to theories that posit that human face discrimination necessarily relies on face-specific processes."

In a preview of the paper in the same issue of Neuron, Tzvi Ganel wrote that Riesenhuber and colleagues "provide a compelling array of evidence supporting the idea that the processing of faces and objects do not rely on qualitatively different mechanisms. In a series of experiments, Jiang et al. present and integrate findings from neural modeling, behavior, and fMRI, showing that face classification, similarly to object classification, can be achieved by a simple-to-complex architecture based on hierarchical shape detectors.

"Jiang et al.'s modeling and behavioral findings have strong implications for understanding how faces and objects are processed in the human brain," wrote Ganel.

The researchers include Xiong Jiang, Ezra Rosen, Maximilian Riesenhuber, Thomas Zeffiro, and John VanMeter of Georgetown University Medical Center in Washington, D.C., and Volker Blanz of the Max-Planck-Institut für Informatik in Saarbrücken, Germany. This research was supported in part by NIMH grants 1P20MH66239-01A1 and 1R01MH076281-01 and by an NSF CAREER Award (#0449743).

Jiang et al.: "Evaluation of a Shape-Based Model of Human Face Discrimination Using fMRI and Behavioral Techniques." Neuron 50, 159-172, April 6, 2006. DOI: 10.1016/j.neuron.2006.03.012


Story Source:

Materials provided by Cell Press. Note: Content may be edited for style and length.


Cite This Page:

Cell Press. "Explaining How The Brain Recognizes Faces." ScienceDaily. ScienceDaily, 9 April 2006. <www.sciencedaily.com/releases/2006/04/060409153926.htm>.
Cell Press. (2006, April 9). Explaining How The Brain Recognizes Faces. ScienceDaily. Retrieved December 21, 2024 from www.sciencedaily.com/releases/2006/04/060409153926.htm
Cell Press. "Explaining How The Brain Recognizes Faces." ScienceDaily. www.sciencedaily.com/releases/2006/04/060409153926.htm (accessed December 21, 2024).
