If you are working on a natural language processing application, you have probably come across sentiment analysis at some point. MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Institute of Medical Engineering and Science (IMES) have now released a prototype of a wearable device that can predict the tone of a conversation.
The device recognizes speech, transcribes the audio into text, and combines the result with physiological signals to determine the tone of the conversation in real time with 83% accuracy. Sentiment analysis is available for five-second intervals during a conversation. The algorithm deliberately runs locally on the device, keeping users’ privacy in mind.
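The pipeline above can be sketched as follows. This is a minimal, hypothetical illustration of fusing text and physiological features per five-second window; the feature names, weights, and threshold are all invented for the example and are not from the MIT paper.

```python
# Hypothetical sketch: multimodal sentiment scoring over five-second
# windows. All feature names, weights, and the decision threshold are
# invented for illustration; they do not come from the research.

from dataclasses import dataclass

@dataclass
class Window:
    """Features for one five-second slice of conversation (hypothetical)."""
    text_valence: float      # e.g. averaged word-level sentiment, in [-1, 1]
    heart_rate_delta: float  # change in bpm relative to the speaker's baseline
    skin_conductance: float  # arousal proxy, in [0, 1]

def score_window(w: Window) -> str:
    """Fuse text and physiological signals with made-up linear weights."""
    score = (0.6 * w.text_valence
             - 0.02 * w.heart_rate_delta
             - 0.3 * w.skin_conductance)
    return "positive" if score >= 0 else "negative"

def label_conversation(windows: list[Window]) -> list[str]:
    # One label per five-second interval, computed locally on the device
    # (no cloud calls), mirroring the privacy-preserving design above.
    return [score_window(w) for w in windows]

if __name__ == "__main__":
    windows = [
        Window(text_valence=0.8, heart_rate_delta=2.0, skin_conductance=0.1),
        Window(text_valence=-0.5, heart_rate_delta=10.0, skin_conductance=0.7),
    ]
    print(label_conversation(windows))  # → ['positive', 'negative']
```

In practice the fusion would be learned (the paper uses a neural network rather than fixed weights), but the window-by-window labeling structure is the same.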
The research team found that their device is approximately 18% more accurate than pure chance, a significant 7.5% improvement over existing approaches. Tuka Alhanai, a co-author of the research paper, plans to improve the algorithm by tweaking the neural network so that different features (text vs. physiological data) are organized at different layers of the network.
Alhanai also added that the team plans to collect more data, deploy on more commercial devices (e.g., the Apple Watch), and improve the accuracy to “call out boring, tense, and excited moments, rather than just labeling interactions as ‘positive’ or ‘negative.’” The ultimate goal is to develop the algorithm into a social coaching tool. A wearable device could be a discreet aid that helps people with anxiety or Asperger’s navigate difficult social situations.