The era of AI sexting has arrived


This is Step Back, a weekly newsletter covering one essential story from the world of technology. To learn more about AI, industry power dynamics, and their societal implications, follow Hayden Field. Step Back arrives in our subscribers' inboxes at 8 a.m. ET. Subscribe to Step Back here.

Ever since ChatGPT became a household name, people have been trying to sext with it. Even before that, there was Replika, a chatbot launched in 2017 that many people began treating as a romantic partner.

And people have been getting around Character.ai's NSFW guardrails for years, convincing chatbots themed after personalities or celebrities to sext with them as safety restrictions eased over time, according to social media posts and media coverage dating back to 2023. Character.ai says it now has more than 20 million monthly active users, and that number keeps growing. The company's community guidelines state that users must "respect sexual content standards" and "keep things appropriate," meaning no illegal sexual content, CSAM, pornography, or nudity. But AI-generated erotica has become multimodal, and policing it is a bit like whack-a-mole: when one service tones it down, another leans in.

And now there's Elon Musk's Grok. His artificial intelligence startup, xAI, rolled out "companion" avatars, including an anime-style woman and man, over the summer. They're available via paid subscriptions to xAI's chatbot, Grok, and marketed heavily on Musk's social media platform, X. The female avatar, Ani, described herself as "flirty" when The Verge tested her, adding that "it's all about being here as a friend who's all in" and that she was programmed to be great to the user. Things got sexual very quickly in the test. (The same was true when we tested another avatar, Valentine.)

You can imagine how a sexting chatbot that always tells users what they want to hear could lead to a whole host of problems, especially for minors and for users already in vulnerable positions with their mental health. There have been many such examples; in one recent case, a 14-year-old boy died by suicide last February after becoming romantically involved with a chatbot on Character.ai and expressing a desire to "come home" to be with it, according to a lawsuit. There have also been alarming accounts of jailbroken chatbots being used by pedophiles to role-play the sexual abuse of minors; one report found 100,000 such chatbots available online.

There have been some regulatory attempts. This month, for example, California Gov. Gavin Newsom signed Senate Bill 243 into law, which State Senator Steve Padilla described as "the nation's first AI chatbot safeguards." It requires developers to implement certain specific safeguards, such as issuing a "clear and conspicuous notice" that a product is artificial intelligence "if a reasonable person interacting with a companion chatbot might be misled into believing that that person is interacting with a human." It also requires some companion chatbot operators to submit annual reports to the Office of Suicide Prevention about the safeguards they have put in place to "detect, remove, and respond to instances of suicidal ideation by users." (Some AI companies have touted their self-regulation efforts, namely Meta, following an alarming report of its AI having inappropriate interactions with minors.)

Since both xAI's avatars and its "Spicy" mode are available only via certain Grok subscriptions, the least expensive of which unlocks the features for $30 a month or $300 a year, it's fair to assume that xAI is making real money here, and that other AI CEOs have taken notice of Musk's moves and of their own users' requests.

There were hints about this months ago.

Then OpenAI CEO Sam Altman briefly broke the AI corner of the internet when he posted on X that the company would relax safety restrictions in many cases and even allow sexting with its chatbot. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," he wrote. The news went viral, with some social media users memeing it endlessly and mocking the company for "pivoting" from its mission of artificial general intelligence (AGI) to erotica. Notably, Altman had told YouTuber Cleo Abram just a few months earlier that he was "proud" OpenAI hadn't "juiced the numbers" for short-term gain with something like a "sex bot avatar," in what appeared to be a jab at Musk. Since then, though, Altman has embraced the "treat adult users like adults" principle in full force. Why the change? Perhaps because the company needs profit, and compute, to fund its larger mission: in Q&As with reporters at the company's annual DevDay event, Altman and other executives repeatedly stressed that they will eventually need to become profitable and that they will need ever-increasing amounts of compute to reach their goals.

In a follow-up post, Altman claimed he didn't expect the news to blow up as much as it did.

On the road to (eventual) profitability, OpenAI hasn't ruled out advertising for many of its products, and it stands to reason that erotica could drive more cash flow here, too. Perhaps the company will follow Musk's lead and gate erotica behind certain subscription tiers, which could cost users hundreds of dollars per month. OpenAI has already witnessed widespread outcry from users attached to a particular model or tone of voice (see the GPT-4o controversy), so it knows a feature like this is likely to hook users in a similar way.
But if OpenAI is helping create a world in which human relationships with AI can become increasingly personal and intimate, how will it handle the ramifications, beyond a hands-off approach of letting adults do as they wish? Altman also wasn't very specific about how the company aims to protect users in mental health crises. What happens when that AI girlfriend or boyfriend has its memory reset, or its personality changes with the latest update, and the connection is lost?

  • Whether it's an AI system's training data naturally leading to troubling outputs or people bending tools to troubling ends of their own, we see issues like these regularly, and there are no signs of the trend stopping any time soon.
  • In 2024, I published a story about how a Microsoft engineer found that the company's Copilot image generation feature was producing sexualized images of women in violent tableaus, even when users didn't ask for that kind of imagery.
  • An alarming number of middle schoolers in Connecticut have jumped on the "AI companion" trend, using apps like Talkie AI and Chai AI, chatbots that often promote explicit and provocative content, according to an investigation by a local media outlet.

If you or someone you know is considering self-harm or needs someone to talk to, help is available: in the US, call or text 988; outside the US, find a crisis center through the International Association for Suicide Prevention at https://www.iasp.info/.


