Emotional AI: Are Algorithms Smart Enough to Decipher Human Emotions?
Guest Writer
People prefer to conduct their business online today, and with the lack of face-to-face communication, it's hard for companies to understand what their employees and customers are feeling. In this situation, enhancing your operations with emotional AI that can distinguish different sentiments could be a game-changer. But this brings up several questions about the potential for misuse of power:
Is it acceptable for companies and government entities to surveil people, detect their emotions, and take corresponding actions? And is artificial intelligence smart enough to read human emotions accurately?
'There is still a lot of controversy around AI emotions. Emotional AI has its benefits, but you should be aware of the challenges when considering potential applications at your organization.' - Nadejda Alkhaldi
Despite these concerns, AI consulting firms are thriving: the global emotional AI market is expected to reach $173.81 billion by 2025, growing at a CAGR of 34.05 percent from 2019. Let’s take a look at what emotional artificial intelligence can do and what hurdles you can expect along the way should you decide to implement emotional AI in your organization.
Emotional AI, also referred to as affective computing or artificial emotional intelligence, is the branch of artificial intelligence that deals with processing and replicating human emotions. The field dates back to 1995, when MIT's Rosalind Picard coined the term "affective computing," and it aims to make human-machine interactions more authentic and natural. Emotional AI can read people’s feelings through text, voice tone, facial expressions, and gestures and adjust its demeanor accordingly.
People have the upper hand in recognizing different emotions, but AI is catching up with its ability to analyze large volumes of data. Emotional AI employs computer vision, speech science, and deep learning algorithms, among other techniques. It can listen to human speech and detect voice inflections that correspond to anger, stress, etc. The technology can spot facial micro-expressions that disappear so rapidly that the human eye wouldn’t be able to catch them.
Let’s look at three main types of emotional artificial intelligence.
Extracting emotions from text is also called sentiment analysis or opinion mining. It uses natural language processing and machine learning to determine whether the prevailing sentiment in a text sample is positive, negative, or neutral. One common application is sifting through reviews of a hotel or a restaurant to understand how customers feel about it.
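To make this concrete, here is a minimal sketch of review-level sentiment analysis using the open-source Hugging Face transformers library. The library is our choice for illustration, not something the vendors mentioned here necessarily use, and the default pipeline model is a general-purpose one; a production system would pick a model suited to its domain.

```python
from transformers import pipeline  # pip install transformers

# Load a general-purpose pre-trained sentiment model (the default is illustrative)
classifier = pipeline("sentiment-analysis")

reviews = [
    "The room was spotless and the staff went out of their way to help.",
    "We waited 40 minutes for cold food.",
]

for review in reviews:
    result = classifier(review)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.99}
    print(f"{result['label']:>8} ({result['score']:.2f})  {review}")
```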
When analyzing sentiment, it is crucial to understand what exactly elicits negative or positive feelings. Seeing negative sentiment towards a particular restaurant doesn’t give the owners actionable tips. They need to understand what exactly went wrong, whether it was the food, the service, or the lighting.
Clarabridge, an AI-driven text and speech analysis company, performs sentiment analysis and can extract emotions, such as happiness and sadness, and evaluate them on a scale of -10 to 10, reflecting the intensity of the exhibited feeling. For example, “I’m angry” would be less intense than “I’m furious.”
Often, the sentiment hidden in words depends on the context. One word can have different emotions attached to it. For example, describing a phone as “thin” is positive, while the same characteristic of bedsheets in a hotel room would hold a negative sentiment.
There are pre-trained models that you can deploy out of the box, but it’s strongly recommended to go the extra mile and fine-tune those algorithms on a dataset that is specific to your application area. As a result, you will have a customized version with the desired accuracy.
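As a sketch of what that fine-tuning might look like with the Hugging Face transformers and datasets libraries (the file name hotel_reviews.csv, the base model, and the three-class label scheme are all assumptions for illustration):

```python
from datasets import load_dataset  # pip install datasets transformers
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Hypothetical CSV of domain-specific reviews with columns "text" and "label" (0/1/2)
data = load_dataset("csv", data_files={"train": "hotel_reviews.csv"})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3)  # negative / neutral / positive

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sentiment-ft", num_train_epochs=3),
    train_dataset=tokenized["train"],
)
trainer.train()
```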
Emotional AI analyzes a person’s speech through various factors, including intonation, voice tone, vocal pitch, speed, and elongated pauses, to determine their sentiment. This technology can detect hidden agitation even if someone uses their dry sense of humor and says the opposite of what they actually mean.
Advanced techniques can work with sarcasm, hyperbole, and other linguistic habits. One of the main applications of voice emotional artificial intelligence is to spot disappointed customers in call centers and redirect them to someone who can soothe their feelings.
California-based Behavioral Signals developed an emotional AI-powered solution for call centers. It matches agents to clients, finding the best representative to serve a particular customer. The tool operates solely on vocal information and ignores the content of conversations.
It is not always easy to differentiate between anger and excitement, as both are expressed with a high-pitched tone. This can be overcome by using 20 to 50 hours of conversational training data to establish a baseline.
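To illustrate the kind of acoustic cues involved, here is a rough sketch using the open-source librosa library; the feature set and the silence threshold are our assumptions, not a production pipeline:

```python
import librosa  # pip install librosa
import numpy as np

def voice_features(path):
    """Extract simple acoustic cues associated with arousal: pitch, energy, pauses."""
    y, sr = librosa.load(path, sr=16000)

    # Fundamental frequency (pitch) contour; high mean/variance suggests arousal
    f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]

    # Short-term energy as a loudness proxy
    rms = librosa.feature.rms(y=y)[0]

    # Rough pause ratio: fraction of the clip below a 30 dB silence threshold
    voiced_intervals = librosa.effects.split(y, top_db=30)
    voiced_time = sum(end - start for start, end in voiced_intervals) / len(y)

    return {
        "pitch_mean": float(np.mean(f0)) if f0.size else 0.0,
        "pitch_std": float(np.std(f0)) if f0.size else 0.0,
        "energy_mean": float(np.mean(rms)),
        "pause_ratio": 1.0 - voiced_time,
    }

# Note: these features alone cannot separate anger from excitement (both are
# high-pitch, high-energy), which is why a baseline learned from hours of
# labeled conversation is needed downstream.
```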
One example of thorough training is the Thai Speech Emotion Recognition Model developed by Chulalongkorn University. For the training dataset, researchers asked 200 male and female actors to perform monologues and dialogues expressing five emotional states: happiness, sadness, anger, frustration, and a neutral tone. A team of directors monitored the performances and helped the actors deliver realistic speech.
Ensure your training datasets are representative of the cultures, dialects, and habits of the target population. When you purchase ready-made emotional AI algorithms, make sure the vendor considers your audience and is open to recalibrating their tool to fit your needs.
AI facial emotion recognition analyzes facial expressions in videos to gauge the prevailing sentiment, using technologies such as facial recognition and computer vision. It can even help detect the difference between lying and telling the truth.
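A minimal sketch of the frame-by-frame pipeline, using the open-source OpenCV library for face detection; the classify_emotion stub is a placeholder for whatever trained model a vendor would actually supply:

```python
import cv2  # pip install opencv-python

# OpenCV ships a Haar cascade for face detection; the emotion classifier
# below is a hypothetical stand-in, not part of OpenCV.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_img):
    # Placeholder: a deployed system would run a trained CNN here and
    # return a label such as "happy", "angry", or "neutral".
    return "neutral"

def emotions_in_video(path, every_nth_frame=10):
    capture = cv2.VideoCapture(path)
    labels = []
    frame_idx = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_idx % every_nth_frame == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
                labels.append(classify_emotion(gray[y:y + h, x:x + w]))
        frame_idx += 1
    capture.release()
    return labels
```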
This type of emotional artificial intelligence can be used in interview settings to detect whether candidates are nervous, confident, genuine, etc. Financial institutions can potentially use it to examine loan applicants.
It is challenging to get a clear picture of people’s faces when they are moving, as the lighting and the background change, not to mention the fact that people keep altering their appearances by tanning, growing facial hair, etc. Also, some people are less expressive than others, making it harder to spot any emotional signals.
Many AI emotion detection video applications focus on group responses. For instance, in marketing, this technology is applied to observe people’s reactions to different advertisement campaigns. Instead of focusing on individual users, marketers capture all this data and average it.
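A toy sketch of that averaging step, with made-up numbers, could look like this:

```python
import numpy as np

# Hypothetical per-viewer emotion scores: rows = viewers, columns = seconds
# of the ad, values = probability of a "positive" expression in that second.
scores = np.array([
    [0.2, 0.4, 0.8, 0.9, 0.3],   # viewer 1
    [0.1, 0.5, 0.7, 0.8, 0.2],   # viewer 2
    [0.3, 0.3, 0.9, 0.7, 0.4],   # viewer 3
])

# Average across viewers to get the audience-level reaction curve
audience_curve = scores.mean(axis=0)
peak_second = int(audience_curve.argmax())
print(f"Audience reaction peaks at second {peak_second}: "
      f"{audience_curve[peak_second]:.2f}")
```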
However, if you want to see every individual reaction in the context of your use case, it’s a good practice to customize your model and train it on specific users.
AI emotion recognition algorithms help marketers understand which ads resonate better with the target audience and what features they should include in their videos to yield a better outcome.
One example comes from Affectiva, an emotional AI company based in Boston. It developed a solution that helps marketers gauge customers’ reactions to different advertisements. With users’ permission, the tool analyzes their facial expressions, body language, and speech to understand their emotional state. Affectiva’s AI model was trained on a diverse dataset of 6 million faces from 87 countries and can achieve an accuracy of 90 percent.
The healthcare sector deploys emotional AI to help both doctors and patients. For medical personnel, the technology is paired with voice assistants to detect stress levels and respond appropriately. For patients, it is used to monitor their emotions in examination rooms and during surgical procedures.
Additionally, AI-enhanced solutions can be deployed to measure people’s vitals. For example, the American Heart Association used emotional AI models to build an application that measures blood pressure from short videos instead of an arm cuff. This solution is believed to have an accuracy of approximately 95 percent.
Companies deploy emotion recognition AI at their call centers to enhance customer service. Such solutions have many benefits. They can pick the best-fitting agent for a specific client, give real-time feedback to agents and notify them when they start to lose control of a conversation, and respond appropriately to a frustrated customer. Artificial intelligence algorithms can also analyze incoming support tickets and identify clients on the verge of cutting ties with the company, as the sketch below illustrates.
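A simple illustration of the ticket-triage idea, reusing the text-sentiment approach from earlier; the threshold and the escalation rule are assumptions, not any vendor's actual logic:

```python
from transformers import pipeline  # pip install transformers

classifier = pipeline("sentiment-analysis")

tickets = [
    {"id": 101, "text": "Third time reporting this bug. "
                        "If it isn't fixed this week, I'm cancelling."},
    {"id": 102, "text": "Quick question about upgrading my plan, thanks!"},
]

# Flag tickets whose negative sentiment exceeds an (assumed) churn-risk threshold
CHURN_THRESHOLD = 0.9
for ticket in tickets:
    result = classifier(ticket["text"])[0]
    if result["label"] == "NEGATIVE" and result["score"] > CHURN_THRESHOLD:
        print(f"Ticket {ticket['id']}: escalate to retention team")
```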
The American insurance corporation MetLife deployed Cogito’s emotional AI coaching solution in ten of its call centers. The solution could understand customers’ emotional states and provide agents with conversation and conflict-resolution tips in real time. As a result, MetLife witnessed a 17 percent reduction in call duration and a 6.3 percent improvement in issue resolution.
There is no standard for how people express emotions. For example, in Japan a smile is a sign of politeness, and a smiling person may still be in great distress. So if a Japanese tourist needs assistance somewhere in Europe, emotional AI might give them a low urgency score just because of the smile.
Facial recognition technologies generally struggle with racial bias. Google once mistakenly tagged black faces as “gorillas”, cameras have identified Asian people as “blinking”, and not long ago, an emotional AI algorithm ranked black people as “angrier” than their white counterparts.
Moreover, studies have revealed bias against elderly people. As our faces age, it becomes harder to identify different emotions, and the technology can penalize the corresponding population groups. If, for instance, insurance companies wanted to use emotional AI to identify driver fatigue based on facial expressions, older drivers would be more likely to match the criteria even when they don’t experience any physical symptoms. As a result, insurers might demand that older drivers pay larger premiums.
Do people agree to be monitored and to have their emotions (mis)interpreted? ImageNet, one of the most extensive datasets used to train facial detection and recognition algorithms, was forced to blur faces in 1.5 million pictures in response to privacy concerns. Even though this improved privacy, the research team wonders how training on faceless data will affect algorithms’ performance in the real world.
There is still a lot of controversy around AI emotions. Emotional AI has its benefits, but we should be careful when considering potential applications. For now, it may not make sense to involve this technology in life-altering decisions.
But the question remains: Is it right to penalize people for their feelings? Is it acceptable for job candidates to be judged on how their voices sound and for students to be punished because their faces look angry? All of these concerns have to be taken into consideration before implementing emotional AI in your organization.