The Uncanny Valley: Why Consumers Distrust Lifelike AI

Riley Panko

Despite the rise of voice assistants like Amazon Alexa, people are uncomfortable with lifelike AI. For example, Google unveiled the “Duplex” feature for the Google Assistant last year. The human-sounding AI could make simple phone calls on behalf of users, mainly for booking restaurant reservations.

The AI sounded too lifelike. Call recipients reported feeling “creeped out” by the Duplex bot because it was almost indistinguishable from a human. This is an example of “the uncanny valley”: the eerie feeling people get when human-like AI imitates a person yet falls short of seeming completely real. That gap in realism leads to feelings of revulsion and mistrust.

Creating warm, trustworthy relationships with customers requires careful design that only skilled developers can deliver. For AI to gain consumers’ trust going forward, and to achieve better business outcomes, developers need a solid grasp of the uncanny valley and its consequences.

Before adopting AI, businesses should weigh the many ways it may affect consumers’ trust.

Inside the Uncanny Valley

AI’s increased realism is unnerving, but this negative emotional response is nothing new. Lifelike dolls, corpses, and even prosthetic limbs can trigger the same effect, because lifeless yet human-like objects remind us of our own mortality. Sci-fi and horror films use this phenomenon to great effect, conjuring images that are too close for comfort.

Lifelike AI is also disturbing because humans are biologically primed to avoid those who look sick, unhealthy, or ‘off.’ This instinct, known as “pathogen avoidance,” evolved to protect us against dangerous diseases. Lifelike AI seems almost human, but almost human isn’t enough.

People Neither Trust Nor Understand AI

Humans have evolved to control their environment. As a result, we hesitate to delegate tasks to algorithms that are neither fully understood nor fail-safe. So when AI fails to perform to human standards, which is often, we’re acutely aware of it.

For example, Uber’s self-driving cars have yet to operate safely without human oversight. And according to research from UC Berkeley, one AI lending system charged minority homeowners higher interest rates on home loans.

Even in the case of Google Duplex, users doubted whether the AI could correctly understand the simple details of their restaurant reservation.

AI is perceived as untrustworthy because, no matter how often it succeeds, the handful of times it fails are what stick out. Though convenience is appealing, consumers demand reliability, control, and comfort when using the technology.

Voice assistants like Amazon Alexa occupy a happy medium for users: the AI isn’t too lifelike, and it’s easy to understand how to control it. People only trust what they understand, and lifelike AI, like most AI, isn’t yet well understood.

Differentiation and Understanding Critical to Trust

To gain trust, AI developers and businesses must create a more comfortable AI experience for users. Foremost, this means the AI should look and sound less human.

People want technology such as Google Duplex to announce itself as AI; doing so would make them more comfortable with the technology. Visually, AI can be designed to appear cute rather than anatomically accurate. If the AI is easily distinguishable from a human, people are more likely to adopt it.
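As a rough illustration of that first recommendation, here is a minimal sketch (entirely hypothetical, and not how Google Duplex is actually implemented) of a call-handling routine that leads with an explicit disclosure before making its request. The `say` and `book_reservation` callables are stand-ins for whatever telephony and booking APIs a real system would use.

```python
# Hypothetical sketch: a voice agent that discloses it is automated
# before handling the user's task.

DISCLOSURE = (
    "Hi, this is an automated assistant calling on behalf of a customer. "
    "This call may be recorded."
)

def handle_call(say, book_reservation, details):
    """Open with a disclosure, then proceed with the booking request."""
    say(DISCLOSURE)  # identify as AI before anything else
    say(
        f"I'd like to book a table for {details['party_size']} "
        f"at {details['time']} under the name {details['name']}."
    )
    return book_reservation(details)  # hand off the structured request

# Example usage with stand-in functions:
if __name__ == "__main__":
    result = handle_call(
        say=print,
        book_reservation=lambda d: f"Booked: {d}",
        details={"party_size": 2, "time": "7pm", "name": "Riley"},
    )
    print(result)
```

The design choice is simply ordering: the disclosure is the first thing the agent says, so the person on the other end never has to guess whether they are talking to a human.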

Although many machine-learning models are too complex for users to fully understand, transparency and explainability engender trust. To this end, sharing information about how an AI system reaches its decisions can shine a light into the “black box” of machine-learning algorithms. In one study, people were more likely to trust and use AI in the future if they were allowed to tweak the algorithm to their satisfaction.

This suggests that both a sense of control and familiarity are key to fostering acceptance for lifelike AI.
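To make those two levers concrete, the sketch below uses made-up feature names and weights to show what explanation and user control can look like in practice: the model’s per-feature contributions are shown to the user rather than hidden, and the user can adjust the decision threshold and immediately see how the outcome changes. It is an illustration only, not a recommendation for any particular scoring scheme.

```python
# Hypothetical scoring model with invented features and weights,
# used only to illustrate explanation plus user-adjustable control.

WEIGHTS = {"on_time_payments": 0.6, "income_ratio": 0.3, "account_age": 0.1}

def score(applicant):
    """Return the overall score plus a per-feature breakdown."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

def explain(applicant, threshold=0.5):
    """Show how each feature contributed and whether the score clears the threshold."""
    total, contributions = score(applicant)
    for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
        print(f"  {feature}: +{value:.2f}")
    decision = "approved" if total >= threshold else "declined"
    print(f"  total = {total:.2f} (threshold {threshold}) -> {decision}\n")

applicant = {"on_time_payments": 0.9, "income_ratio": 0.4, "account_age": 0.5}
explain(applicant, threshold=0.5)   # default setting
explain(applicant, threshold=0.8)   # stricter threshold chosen by the user
```

Showing the breakdown addresses transparency; letting the user move the threshold gives them the sense of control that the study above found increases willingness to use AI again.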

Finally, if consumers will not trust a business’s AI system, revert to the old-fashioned way and use humans to communicate with customers, and seek help from third-party sources such as virtual assistants so the task doesn’t become overwhelming.

Why Consumers Distrust Lifelike AI

To open people up to lifelike AI, companies must avoid the uncanny valley. Familiarity, education, and visual distinction are needed to help people feel comfortable in the presence of humanoid technology.

Author
Riley Panko - Senior Content Developer, Clutch.co
