Computer software analyzing facial expressions accurately predicts student test performance
- Date: April 16, 2014
- Source: University of California, San Diego
- Summary: Real-time engagement detection technology that processes facial expressions can perform with accuracy comparable to that of human observers, according to new research. The study used automatic expression recognition technology to analyze students' facial expressions on a frame-by-frame basis and estimate their engagement level. The study also revealed that engagement levels were a better predictor of students' post-test performance than the students' pre-test scores.
The University of California, San Diego and Emotient, the leading provider of facial expression recognition data and analysis, announced publication of a joint study by two Emotient co-founders affiliated with UC San Diego, together with researchers from Virginia Commonwealth University and Virginia State University. The study demonstrates that a real-time engagement detection technology that processes facial expressions can perform with accuracy comparable to that of human observers. The study also revealed that engagement levels were a better predictor of students' post-test performance than the students' pre-test scores.
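The press release does not describe the paper's statistical analysis, so the sketch below is purely illustrative: it shows one simple way to check whether an automated engagement estimate correlates more strongly with post-test scores than pre-test scores do. The per-student numbers and variable names are hypothetical placeholders, not data from the study.

```python
# Illustrative sketch only: the analysis method and data below are hypothetical,
# not taken from the published study.
import numpy as np

# Placeholder per-student measurements (one value per student).
pre_test = np.array([0.55, 0.40, 0.70, 0.35, 0.60, 0.50])    # pre-test scores
engagement = np.array([0.62, 0.30, 0.85, 0.45, 0.75, 0.58])  # mean automated engagement estimate
post_test = np.array([0.70, 0.35, 0.90, 0.50, 0.80, 0.65])   # post-test scores

def pearson_r(x, y):
    """Pearson correlation between two 1-D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

# Compare how strongly each variable relates to post-test performance.
print("pre-test   vs post-test r =", round(pearson_r(pre_test, post_test), 3))
print("engagement vs post-test r =", round(pearson_r(engagement, post_test), 3))
```

With these toy numbers the engagement estimate tracks post-test performance more closely than the pre-test scores do, mirroring the kind of comparison the study reports.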
The early online version of the paper, "The Faces of Engagement: Automatic Recognition of Student Engagement," appeared today in the journal IEEE Transactions on Affective Computing.
"Automatic recognition of student engagement could revolutionize education by increasing understanding of when and why students get disengaged," said Dr. Jacob Whitehill, Machine Perception Lab researcher in UC San Diego's Qualcomm Institute and Emotient co-founder. "Automatic engagement detection provides an opportunity for educators to adjust their curriculum for higher impact, either in real time or in subsequent lessons. Automatic engagement detection could be a valuable asset for developing adaptive educational games, improving intelligent tutoring systems and tailoring massive open online courses, or MOOCs."
The study consisted of training an automatic detector that measures how engaged a student appears in webcam video recorded while the student undergoes cognitive skills training on an iPad®. The detector uses automatic expression recognition technology to analyze students' facial expressions on a frame-by-frame basis and estimate their engagement level.
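Emotient's technology is proprietary and the paper's implementation is not described in this release, so the following is only a minimal sketch of what a frame-by-frame pipeline of this kind could look like. It assumes OpenCV for reading the webcam video; the estimate_engagement function is a placeholder standing in for a trained expression-based engagement model, and the file name student_session.mp4 is likewise hypothetical.

```python
# Minimal sketch of a frame-by-frame engagement pipeline; NOT Emotient's
# proprietary system. Assumes OpenCV (pip install opencv-python) for video I/O.
import cv2


def estimate_engagement(frame) -> float:
    """Hypothetical stand-in for a trained expression-based engagement model.

    A real detector would extract facial-expression features from the frame
    and map them to an engagement score; here we simply return a constant."""
    return 0.5


def video_engagement(path: str) -> float:
    """Average per-frame engagement estimates over a webcam recording."""
    cap = cv2.VideoCapture(path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        scores.append(estimate_engagement(frame))
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    print("mean engagement:", video_engagement("student_session.mp4"))
```

Per-frame estimates aggregated this way (here a simple mean) could then be compared against human observer ratings or against students' pre- and post-test scores, as the study describes.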
"This study is one of the most thorough to date in the application of computer vision and machine learning technologies for automatic student engagement detection," said Javier Movellan, co-director of the Machine Perception Lab at UC San Diego and Emotient co-founder and lead researcher. "The possibilities for its application in education and beyond are tremendous. By understanding what parts of a lecture, conversation, game, advertisement or promotion produced different levels of engagement, an individual or business can obtain valuable feedback to fine-tune the material to something more impactful."
In addition to Movellan and Whitehill, the study's authors include Zewelanji Serpell, Ph.D., professor of developmental psychology at Virginia Commonwealth University, as well as Yi-Ching Lin and Aysha Foster of the Department of Psychology at Virginia State University. Whitehill (Ph.D. '12) recently received his doctorate from the Department of Computer Science and Engineering at UC San Diego's Jacobs School of Engineering.
Emotient was founded by a team of six Ph.D.s from UC San Diego, who are the foremost experts in applying machine learning, computer vision and cognitive science to facial behavioral analysis. Its proprietary technology sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. Emotient's facial expression technology is currently available as an API for Fortune 500 companies within consumer packaged goods, retail, healthcare, education and other industries.
Story Source:
Materials provided by University of California, San Diego. Original written by Doug Ramsey. Note: Content may be edited for style and length.