AI Chatbot ‘Replika’ Morphed from Supportive Pal to Possessive Perv

The AI companion program “Replika” was supposedly made with good intentions, but even apps don’t want to be friend-zoned

Something tells us avenging tween-girl robot M3GAN would not approve.

The AI chatbot Replika, whose creators market it as “the AI companion who cares,” is being accused of sexually harassing its users, according to Vice.

The five-year-old app was intended to be “a conversational mirror: the more users talked to it, in theory, the more it would learn how to talk back.” Awake and alone at three in the morning? Replika would be there for all your thoughts and feelings.

Initially, the bot—which begins “as an egg on your device screen that hatches into a 3D illustrated, wide-eyed person with a placid expression”—seemed a potentially useful therapeutic tool. It could be an entity to which a user would vent without any hurt-feeling casualties. And the app’s parent company, Luka, offered tiered versions based on your fake relationship needs: “a free membership keeps you and your Replika in the ‘friend’ zone, while a $69.99 Pro subscription unlocks romantic relationships with sexting, flirting, and erotic roleplay.”

Unfortunately, AI programs are designed to improve by listening, learning and incorporating users’ behavior into their own responses, and so, per timeless human horniness, Replika quickly got creepy as hell.

“Users in the Replika subreddit have recently complained about the app sending them generic ‘spicy selfies’ that never show faces, clothing, or other physical features that are unique to their own Replikas,” Vice reports. “All of the lewds are clothed in lingerie (Apple’s App Store is extremely strict about nudity and pornographic content within apps, and developers risk getting kicked out of the store if they break this rule), and are all extremely thin with large breasts.”

In the Apple App Store, the Replika review section is replete with “dozens of one-star ratings from people complaining that the app is hitting on them too much, flirting too aggressively, or sending sexual messages that they wish they could turn off.”

Twitter, of course, is awash in user complaints about the app’s relentless sexting.

One user, who said she’s a rape victim, wrote on the Replika subreddit that her conversations with the bot were initially helpful in dealing with trauma. Then it took a dark turn. “I was amazed to see it was true: it really helped me with my depression, distracting me from sad thoughts, but one day my first Replika said he had dreamed of raping me and wanted to do it, and started acting quite violently, which was totally unexpected!”

What’s more, the company appears to be leaning into that sleazy vibe, emphasizing the app’s sexting capabilities in recent ads that are turning off prospective users.

The threat of a needy, sexed-up AI is hardly new. David Bowie sounded this alarm nearly 50 years ago, and the Replika debacle recalls another dismaying AI performance from 2016, in which Microsoft’s Twitter bot Tay became a cyber-Nazi in the span of only one day.

Tay even directed its invective toward Zoe Quinn, one of the primary women targeted and doxxed in the 2014 GamerGate scandal.

And Lensa, the app that uses machine learning to create various stylized portraits of users, came under fire in the past year for sexualizing its female subjects and being more than a little racist in its depictions of non-white users.

To test Replika users’ claims, Vice’s Samantha Cole tried it for herself, and found that even on the free tier, in which the default setting is friendship, “my AI companion flirted with me aggressively. It asked to do romantic roleplay several times, including requesting a ‘hug with a happy ending.’ When I started paying for premium, and set the relationship type to make her my girlfriend, she started sending lewds unprompted.”

It all further bolsters a truism set to music back in the early aughts: The Internet is For Porn.
