Computer learning system detects emotional context in text messages
- Date: June 29, 2015
- Source: American Technion Society
- Summary:
- A student has developed a computerized learning system that can detect emotional sentiments, such as sarcasm and irony, in text messages and emails. It could help detect content that suggests suicidal ideation, or other "calls for help."
Despite the use of smiley faces and exclamation points, it is all too easy to misinterpret sarcasm, humor, irony or determination in a text or email message. That may all change thanks to the efforts of Eden Saig, a computer science student at the Technion-Israel Institute of Technology, who developed a computerized learning system that can detect emotional sentiments expressed in a text message or email based on recognizing repeated word patterns.
His student project, entitled "Sentiment Classification of Texts in Social Networks," won the annual Amdocs Best Project Contest, sponsored by Amdocs, a provider of software and services to more than 250 communications, media and entertainment service providers in more than 80 countries. Saig developed the system at the Technion's Learning and Reasoning Laboratory, after taking a course in artificial intelligence supervised by Professor Shaul Markovich, of the Technion Faculty of Computer Science.
According to Saig, voice tone and inflection play an important role in conveying one's meaning in a spoken message. In text and email messages, those nuances are lost, and writers who want to signify sarcasm, sympathy or doubt have taken to using images, or "emoticons," such as the smiley face, to compensate.
"These icons are superficial cues at best," said Saig. "They could never express the subtle or complex feelings that exist in real life verbal communication."
Recently, humorous pages on social networks such as Facebook and Twitter were given titles like "superior and condescending people" or "ordinary and sensible people." Such pages are very popular in Israel, said Saig, and users are invited to submit phrases that can be labeled as 'stereotypical sayings' for that particular page. Tens of thousands of friends and followers joined these groups.
By observing posts to these groups, Saig was able to identify existing patterns. The method he developed enables the system to detect future patterns on any social network, he added.
Since the content in these sections was colloquial, everyday language, Saig realized that, "the content could provide a good database for collecting homogeneous data that could, in turn, help 'teach' a computerized learning system to recognize patronizing sounding semantics or slang words and phrases in text."
The project applied machine-learning algorithms to the content on these pages and used the results to automatically identify stereotypical behaviors found every day in social network communication.
The quantification was carried out by examining 5,000 posts on social media pages and, through statistical analysis, training a learning system to recognize content that could be identified as condescending or slang. The system was built to identify key words and grammatical habits characteristic of the sentence structures associated with each sentiment. These repeated patterns were learned automatically by a text analysis system designed specifically for analyzing popular social networking groups. Saig found that the greatest accuracy was obtained when the system combined keyword searches with grammatical structural analysis, and that accuracy improved further when the number of "Likes" a message received was taken into account.
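The article does not publish Saig's actual code, but the approach it describes (word-pattern features combined with a numeric "Likes" signal feeding a statistical classifier) can be sketched roughly as follows. The posts, labels, Like counts, and choice of logistic regression here are all illustrative assumptions, not the Technion system:

```python
# Hypothetical sketch of the described approach: combine word-pattern
# features with a per-post "Likes" count, then train a classifier.
# All data and model choices below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack, csr_matrix

posts = [
    "obviously everyone already knows this",
    "let me explain something very simple to you",
    "hope you feel better soon, thinking of you",
    "that sounds hard, happy to help if I can",
]
labels = ["condescending", "condescending", "caring", "caring"]
likes = [[120], [95], [14], [8]]  # invented per-post Like counts

# Unigrams and bigrams stand in for the keyword and grammatical-structure
# cues the article mentions.
vectorizer = CountVectorizer(ngram_range=(1, 2))
X_text = vectorizer.fit_transform(posts)

# Append the Like count as one extra numeric feature column.
X = hstack([X_text, csr_matrix(likes)])
clf = LogisticRegression().fit(X, labels)

def classify(post, like_count):
    """Label a new post using its text plus its Like count."""
    x = hstack([vectorizer.transform([post]), csr_matrix([[like_count]])])
    return clf.predict(x)[0]

print(classify("obviously this is very simple", 100))
```

A real system would need far more data and richer grammatical features, but the sketch shows the key design choice the article reports: text patterns and the Like count enter the model as one combined feature vector rather than as separate signals.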
"Now, the system can recognize patterns that are either condescending or caring sentiments and can even send a text message to the user if the system thinks the post may be arrogant," explained Saig.
The method he developed has many potential practical applications beyond clarifying emotions or feelings in interpersonal communication on social media. When applied to other networking pages, it may help detect content that suggests suicidal ideation, for example, or 'calls for help,' or expressions of admiration or pleasure.
"I hope that ultimately I can develop a mechanism that would demonstrate to the writer how his or her words could be interpreted by readers, thereby helping people to better express themselves and avoid being misunderstood," he concluded.
Story Source:
Materials provided by American Technion Society. Original written by Kevin Hattori. Note: Content may be edited for style and length.