Words can deceive, but tone of voice cannot
Voice tone analyses of therapy sessions accurately predict whether relationships will improve
- Date: November 23, 2015
- Source: University of Southern California
- Summary: An analysis of the tone of voice used by couples during therapy allowed a computer algorithm to predict whether a relationship would improve. In fact, the algorithm did a better job of predicting marital success of couples with serious marital issues than descriptions of the therapy sessions provided by relationship experts.
A new computer algorithm can predict, with nearly 79 percent accuracy, whether you and your spouse will have an improved or worsened relationship based on the tone of voice you use when speaking to each other.
In fact, the algorithm did a better job of predicting marital success of couples with serious marital issues than descriptions of the therapy sessions provided by relationship experts. The research was published in Proceedings of Interspeech on September 6, 2015.
Researchers recorded hundreds of conversations from more than one hundred couples during marriage therapy sessions over two years, and then tracked the couples' marital status for five years.
An interdisciplinary team -- led by Shrikanth Narayanan and Panayiotis Georgiou of the USC Viterbi School of Engineering with their doctoral student Md Nasir and collaborator Brian Baucom of the University of Utah -- then developed an algorithm that broke the recordings into acoustic features using speech-processing techniques. These included pitch, intensity, "jitter," and "shimmer," among many others -- features that capture things like warbles in the voice that can indicate moments of high emotion.
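To make the idea of acoustic features concrete, here is a minimal sketch of extracting the kinds of measurements the article mentions from a recorded conversation. This is not the team's actual pipeline: it assumes a Python environment with the librosa library, "session.wav" is a placeholder file name, and the jitter and shimmer values are crude frame-level approximations of the cycle-to-cycle measures used in speech research.

```python
# Sketch only: illustrative acoustic feature extraction, not the study's method.
import numpy as np
import librosa

# Placeholder audio file for one therapy-session recording.
y, sr = librosa.load("session.wav", sr=16000)

# Pitch (fundamental frequency) track via probabilistic YIN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Intensity proxy: frame-wise root-mean-square energy.
rms = librosa.feature.rms(y=y)[0]

# Crude jitter proxy: mean relative frame-to-frame change in pitch
# (true jitter is measured cycle to cycle; this is a simplification).
f0_voiced = f0[~np.isnan(f0)]
jitter = np.mean(np.abs(np.diff(f0_voiced)) / f0_voiced[:-1])

# Crude shimmer proxy: mean relative frame-to-frame change in amplitude.
shimmer = np.mean(np.abs(np.diff(rms)) / (rms[:-1] + 1e-8))

features = {
    "f0_mean": float(np.nanmean(f0)),
    "f0_std": float(np.nanstd(f0)),
    "rms_mean": float(rms.mean()),
    "jitter_approx": float(jitter),
    "shimmer_approx": float(shimmer),
}
print(features)
```

In a study like this, per-speaker summaries of such features would typically be computed for each stretch of speech and then fed to a statistical model; the exact features and model in the published work are described in the Interspeech paper cited below.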
"What you say is not the only thing that matters, it's very important how you say it. Our study confirms that it holds for a couple's relationship as well," Nasir said. Taken together, the vocal acoustic features offered the team's program a proxy for the subject's communicative state, and the changes to that state over the course of a single therapy and across therapy sessions.
These features weren't analyzed in isolation -- rather, the impact of one partner upon the other over multiple therapy sessions was studied.
"It's not just about studying your emotions," Narayanan said. "It's about studying the impact of what your partner says on your emotions."
"Looking at one instance of a couple's behavior limits our observational power," Georgiou said. "However, looking at multiple points in time and looking at both the individuals and the dynamics of the dyad can help identify trajectories of the their relationship."
Once it was fine-tuned, the program was then tested against behavioral analyses made by human experts, who had coded the sessions for positive qualities like "acceptance" or negative qualities like "blame." The team found that studying voice directly -- rather than the expert-created behavioral codes -- offered a more accurate glimpse at a couple's future.
"Psychological practitioners and researchers have long known that the way that partners talk about and discuss problems has important implications for the health of their relationships. However, the lack of efficient and reliable tools for measuring the important elements in those conversations has been a major impediment in their widespread clinical use. These findings represent a major step forward in making objective measurement of behavior practical and feasible for couple therapists," Baucom said.
Next, using behavioral signal processing -- a framework developed by Narayanan for computationally understanding human behavior -- the team plans to use language (e.g., spoken words) and nonverbal information (e.g., body language) to improve the prediction of how effective treatments will be.
Story Source:
Materials provided by University of Southern California. Note: Content may be edited for style and length.
Journal Reference:
- Md Nasir, Wei Xia, Bo Xiao, Brian Baucom, Shrikanth S. Narayanan, and Panayiotis Georgiou. Still Together?: The Role of Acoustic Features in Predicting Marital Outcome. Proceedings of Interspeech, 2015.