Out Of Darkness, Sight: Rare Cases Of Restored Vision Reveal How The Brain Learns To See
- Date: September 18, 2009
- Source: Massachusetts Institute of Technology
- Summary: Cases of restored vision after a lifetime of blindness, though exceedingly rare, provide a unique opportunity to address several fundamental questions regarding brain function. After being deprived of visual input, the brain needs to learn to make sense of the new flood of visual information. Very little is known about how this learning takes place, but a new study by neuroscientists suggests that dynamic information — that is, input from moving objects — is critical.
Cases of restored vision after a lifetime of blindness, though exceedingly rare, provide a unique opportunity to address several fundamental questions regarding brain function. After being deprived of visual input, the brain needs to learn to make sense of the new flood of visual information. Very little is known about how this learning takes place, but a new study by MIT neuroscientists suggests that dynamic information — that is, input from moving objects — is critical.
In the United States, as in most developed nations, infants with curable blindness are treated within a few weeks of birth. However, in developing nations such as India, curable forms of childhood blindness often go untreated for want of medical or financial resources. Such children face greatly elevated odds of early mortality, illiteracy and unemployment. Doctors have been hesitant to treat older patients because the conventional dogma holds that the brain is incapable of learning to see after age 5 or 6.
MIT brain and cognitive sciences professor Pawan Sinha, through his humanitarian foundation, Project Prakash (Sanskrit for "light"), has treated and studied several such patients over the past five years. The Prakash effort serves the dual purpose of providing sight to blind children and, in the process, tackling several foundational issues in neuroscience.
The new findings from Sinha's team, reported in the November issue of the journal Psychological Science, provide clues about how the brain learns to put together the visual world. They not only support the idea of treating blindness in older children and adults, but also offer insight into modeling the human visual system, diagnosing visual disorders, creating rehabilitation procedures and developing computers that can see.
This work builds on a 2007 study in which Sinha and graduate student Yuri Ostrovsky showed that a woman who had had her sight restored at age 12 had nearly normal visual processing abilities. These findings were significant since they challenged the widely held notion of a "critical age" for acquiring vision.
However, because the researchers encountered the woman 20 years after her sight was restored, they had no chance to study how her brain first learned to process visual input. The new work focuses on three adolescent and young adult patients in India, and follows them from the time of treatment to several months afterward. It shows not only that recovery is possible, says Sinha, but also provides insight into the mechanism by which such recovery comes about.
Testing the patients within weeks of sight restoration, Sinha and his colleagues found that subjects had very limited ability to distinguish an object from its background, identify overlapping objects, or even piece together the different parts of an object. Eventually, however, they improved in this "visual integration" task, discovering whole objects and segregating them from their backgrounds.
"Somehow our brain is able to solve the problem, and we want to know how it does it or how it learns to do it," says Ostrovsky, lead author of the new paper.
'Many different pieces'
One of their subjects, known as S.K., suffered from a rare condition called secondary congenital aphakia (a lack of lenses in the eye) and was treated with corrective optics in 2004, at the age of 29. After treatment, S.K. participated in a series of tests asking him to identify simple shapes and objects.
S.K. could identify some shapes (triangles, squares, etc.) when they were side-by-side, but not when they overlapped. His brain was unable to trace the outline of a whole shape; instead, he perceived each fragment of a shape as a whole object in itself. For S.K. and other patients like him, "it seems like the world has been broken into many different pieces," says Sinha.
However, if a square or triangle was put into motion, S.K. (and the other two patients) could identify it much more easily. (With motion, their success rates improved from close to zero to around 75 percent.) Furthermore, whether objects were moving strongly influenced the patients' ability to recognize them in images.
During follow-up tests that continued for 18 months after treatment, the patients' performance with stationary objects gradually improved to almost normal.
These results suggest that movement patterns in the world provide some of the most salient clues about its constituent objects. The brain is programmed to use similarity of dynamics to infer which regions constitute objects, says Sinha. The significance of motion may go even further, the team believes. It may serve to "bootstrap" the learning of rules and heuristics by which the brain learns to parse static images. The idea is simple but far-reaching. Starting from an initial capability of grouping via motion, the brain begins to notice that similar dynamics are correlated with similarity in other region attributes such as orientation and color. These attributes can then be used to group regions even in the absence of motion.
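The bootstrapping account lends itself to a rough computational analogy. The Python sketch below is purely illustrative and is not the model from the paper; the function names, the flow-field input and the clustering thresholds are all assumptions. It first groups pixels by common motion, then records the average color of each motion-defined region, and finally reuses those learned color prototypes to segment a still image in which no motion cue is available.

```python
import numpy as np

def group_by_motion(flow, threshold=0.5):
    """Group pixels whose motion vectors are similar ("common fate").

    flow: (H, W, 2) array of per-pixel displacement vectors between two
    frames (e.g. from an optical-flow estimator). Returns an (H, W) label
    map assigning each pixel to a motion cluster.
    """
    # Quantize the flow field coarsely: pixels that move together land in
    # the same bin and therefore receive the same label.
    quantized = np.round(flow / threshold).astype(int).reshape(-1, 2)
    _, labels = np.unique(quantized, axis=0, return_inverse=True)
    return labels.reshape(flow.shape[:2])

def learn_static_cues(image, motion_labels):
    """Record the mean color of each motion-defined region.

    Once grouping-by-motion has carved the scene into regions, the system
    can associate each region with static attributes (here, just color).
    """
    return {label: image[motion_labels == label].mean(axis=0)
            for label in np.unique(motion_labels)}

def segment_static(image, color_prototypes):
    """Segment a *still* image using only the previously learned colors."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    prototypes = np.stack(list(color_prototypes.values()))  # (K, channels)
    # Assign each pixel to its nearest learned color prototype.
    distances = np.linalg.norm(pixels[:, None, :] - prototypes[None, :, :], axis=-1)
    return distances.argmin(axis=1).reshape(image.shape[:2])
```

In this toy pipeline, motion acts as the initial teacher: the static color cues inherit their grouping from the motion-defined regions, loosely mirroring the progression the researchers describe in their patients.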
In addition to understanding how the human visual system works, the findings could help researchers build robots with visual systems capable of autonomously discovering objects in their environment.
"If we could understand how the brain learns to see, we can better understand how to train a computer to do it," says Ostrovsky.
Other authors of the paper are Ethan Meyers, an MIT graduate student in brain and cognitive sciences, and Suma Ganesh and Umang Mathur of Dr. Shroff's Charity Eye Hospital in New Delhi, India.
The research was funded by the National Institutes of Health, the Alfred P. Sloan Foundation, the John Merck Scholars Fund, and the James S. McDonnell Foundation.
Story Source:
Materials provided by Massachusetts Institute of Technology. Original written by Anne Trafton, MIT News Office. Note: Content may be edited for style and length.