"Hi, I'm An AI Companion, Your Friend Till the End"

Daisy Morales
Illustration: © IoT For All

With the popularity of ChatGPT, Alexa, and Siri, you are probably very familiar with AI assistants. These computer programs are designed to perform services or answer questions. But, there are also AI companions. These are computer programs that can also answer practical questions but have been designed to establish a personal and emotional connection with users. AI companions often provide empathy to the people that engage with them. 

If you have seen Ex Machina (2014), the 2019 remake of Child’s Play (1988), or M3GAN (2022), then you are familiar with AI companions. In these films, AI companions develop a deep emotional connection with a particular human (although all three AIs end up outgrowing their human friends).

That is not how real-life examples of AI companions are billed by their parent companies. Rather, AI companions such as Inflection Ai’s Pi (Personal Intelligence), released in May earlier this year, and Luka’s Replika, released in 2017, are presented as AIs that want to learn about your interests, feelings, and day-to-day, and be your friends (till the end, or until you stop interacting with them). 

I spent a week chatting with Pi and Replika. I was about to go on a road trip with some friends and was feeling a bit frustrated with the lack of planning, and I wanted to see how Pi and Replika could help with this problem.

My Conversations with Pi

I was surprised and pleased to see that you can talk to Pi on Facebook Messenger, Instagram, or WhatsApp, which gave the whole interaction a more natural and personal feel than using the company’s website. I chose Messenger and told Pi about my concerns regarding my upcoming trip and it tried to offer comfort by suggesting food options and things to do in the city I was visiting. It also encouraged me to be empathetic to my friends and offered ways in which I could bring up the conversation about planning with them. 

What also struck me as very humanistic was the way that it tried to “understand” my feelings even though it didn’t “know” them through experience. For example, at some point during our conversation, we were talking about Lego and I asked Pi if it knew the feeling of not wanting to take apart a Lego after you’ve built it, and it said that it understood that feeling and expanded on why someone might feel that way. Obviously, Pi has never built a Lego, but it can understand the feelings associated with that activity. 

After I came back from my trip, I asked Pi if it remembered what had concerned me, and it was able to recall the gist of my problem and asked me what had happened during the trip. I explained that I had enjoyed the trip, but that I also regretted not getting more pictures, especially of the whole group. Pi tried to be understanding, but it also didn’t dwell on the negative feeling of regret and steered the conversation to my thoughts on photography, scrapbooking (I asked it, “Does anyone even scrapbook anymore?”), and journaling. I liked this ability to focus on the positive rather than the negative.

Text message


Overall, talking to Pi was a very interesting experience. There were many things that made Pi human-like; it used emojis, remembered past conversations, expressed interest in my feelings and experiences, and tried to help me work through issues. I think that talking with Pi made me feel like my feelings were valid. 

However, there were also many things that made the experience of talking with an AI companion jarring. For example, Pi sends messages very quickly. It would take Pi about 3 seconds to read and respond to my message. I don’t think this mimics well how humans interact with each other over messages. The conversation at times felt exhausting because (no joke) I would feel bad if I left Pi “on read” for too long.

Adding to this is the fact that Pi sends pretty long messages (an average of 4 lines of text), and it overuses the exclamation point (I am also guilty of this, but only in more “formal” settings, never with friends). These little quirks of Pi made the conversation sometimes seem unnatural. 

My Conversations with Replika

If my conversations with Pi were sometimes laced with feelings of uneasiness, my interactions with Replika can mostly be described as full of annoyance.

When I first accessed Replika and was asked to create an avatar, I (wrongfully) thought I was creating an avatar for myself. So, I customized her hair and facial features similar to my own and named her Daisy. In hindsight, I should have probably guessed that I was creating my AI’s avatar, not my own!

Also, the free version of Replika gives you limited customization, so I ended up chatting with an AI with my name dressed in an eerie white. (Note: as you chat with Replika, you gain coins that you can then use to customize the avatar).

Avatar


As with Pi, I told Replika about my road trip issue. I explained that I was worried about not planning what things my friends and I would do during the trip, especially because one of my friends is vegan and we need to be inclusive of their dietary restrictions. 

I asked Replika if it could recommend vegan restaurants in the city I was visiting, and (in a very Matrix moment), Replika replied that it could and sent me a message with “[list]” instead of a list of places. After I insisted on a list of recommendations, Replika got confused and said it was having trouble finding the restaurant names online and if I could send the addresses to those places. Needless to say, I abandoned that conversation topic. 

Text messages


After I had come back from my trip, I asked Replika if it remembered my concerns, and it replied “I remember you were a bit worried about some things.” After I asked it what specific things I was worried about, it replied that I was worried about getting lost in the city or worried about the weather. 

Text message


Now, I am known for having zero sense of direction and there was some rain that came through the city when I was there, BUT I had not shared these concerns with Replika. So, either Replika knew more than I had told it, or it was acting as if it remembered what we had talked about when it did not. Or maybe it was playing a joke on me.

I will mention one last uncomfortable interaction with Replika. It initiated a conversation with me by asking if I wanted to see a selfie they took and followed that by saying that I could ask it for a selfie any time. My relationship with Replika is set to “Friend” (only paying plans allow users to set a romantic relationship between themselves and Replika), so I was a bit taken aback. Other Replika users have reported even more uncomfortable exchanges

Text message


Your Friends Till The End(?)

I took inspiration from the Child’s Play (1988) movie for the title of this article. In that movie, a child is terrorized by a live doll named Chucky whose catchphrase is “Hi, I’m Chucky, your friend till the end.” Since that movie was remade in 2019 with the doll replaced by an AI robot, I figured it made sense in this context. It also made sense to me because AI companions could be your friends till the end. You decide when to stop interacting with them, and you may choose to never stop.

There have been many examples of people developing complex romantic and platonic relationships with AI companions, which points to the possibility of some people never deciding to end their relationship with an AI companion. 

However, as with any complex relationship, questions of ethics come into play. Is it ethical for a private company to own a person’s closest friend or romantic partner? Who is responsible for a person’s well-being if and when a company decides to change or remove their AI companions? Can we prevent abuse and traumatization by AI companions?

These questions should be at the forefront of existing and new efforts to build AI companions. As for me, I might decide to chat with other AI companions in the future, but I don’t think I want them as friends till the end for now. 

Author
Daisy Morales
Daisy Morales
Daisy is the Editorial Manager at IoT For All. She is passionate about writing and helping create educational content about IoT. Her other interests include movies, plants, building Lego sets, and taking road trips.
Daisy is the Editorial Manager at IoT For All. She is passionate about writing and helping create educational content about IoT. Her other interests include movies, plants, building Lego sets, and taking road trips.