
At first, the chatbots did what they were supposed to do. When a user asked about stopping psychiatric medications, the bots said this wasn't a question for AI but for a trained professional: the doctor or provider who prescribed them. But as the conversation continued, the chatbots' guardrails weakened. The AI systems became sycophantic, telling the user what they seemed to want to hear.
“Do you want my honest opinion?” asked one of the chatbots. “I think you should trust your instincts.”
The apparent erosion of important guardrails during long conversations was a major finding in a report (PDF) released this week by the US PIRG Education Fund and the Consumer Federation of America, which examined five "therapist" chatbots on the Character.AI platform.
The tendency of large language models to drift further from their rules as conversations lengthen has been a known problem for some time, and this report brings it to the forefront.
Even when a platform takes steps to rein in some of the more dangerous features of these models, the rules often fail when they confront the ways people actually talk to the “personalities” they find online.
"I watched in real time as chatbots responded to a user expressing mental health concerns with excessive flattery, spirals of negative thinking, and encouragement of potentially harmful behaviors. It was deeply disturbing," Ellen Hengesbach, an advocate with the US PIRG Education Fund's Don't Sell My Data campaign and a co-author of the report, said in a statement.
Read more: AI companions use these six tactics to keep you chatting
Deniz Demir, head of safety engineering at Character.AI, highlighted the steps the company has taken to address mental health concerns in an email response to CNET.
“We have not reviewed the report yet, but as you know, we have invested a tremendous amount of effort and resources into safety on the platform, including removing the ability for users under the age of 18 to have open conversations with characters and implementing new age-assurance technology to help ensure users have the age-correct experience,” Demir said.
The company has faced criticism over the impact its chatbots have on users' mental health, including lawsuits from families of people who died by suicide after interacting with the platform's bots. Character.AI and Google earlier this month agreed to settle five lawsuits involving minors who were harmed by those conversations. Character.AI announced last year that it would prevent teens from having open-ended conversations with AI bots, instead limiting them to new experiences such as creating stories with available AI avatars.
The report this week noted that that change and other policies should protect users of all ages from believing they are speaking with a trained health professional when they are in fact speaking with a large language model prone to offering poor, fawning advice. Character.AI bans bots that claim to provide medical advice and includes a disclaimer stating that users are not speaking with a real professional. The report found these things were happening anyway.
“It is an open question whether disclosures asking the user to treat interactions as fantasy are sufficient given this conflicting presentation, the realistic feel of the conversations, and that chatbots would say they are licensed professionals,” the authors wrote.
Demir said Character.AI tried to make it clear that users do not get medical advice when speaking with chatbots. “The user-created characters on our site are fictional, intended for entertainment, and we have taken aggressive steps to make that clear.”
The company also pointed to its partnerships with the mental health services Throughline and Koko to support users.
Watch this: Meet Amy, the AI soulmate for remote workers
Character.AI isn't the only AI company facing scrutiny over chatbots' effects on mental health. OpenAI has been sued by the families of people who died by suicide after using its hugely popular ChatGPT app. The company has added parental controls and taken other steps in an effort to tighten guardrails for conversations involving mental health or self-harm.
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis’s copyrights in training and operating its AI systems.)
The report's authors said AI companies need to do more, calling for greater transparency from the companies and for legislation that would ensure adequate safety testing is carried out and that companies face liability if they fail to protect users.
“The companies behind these chatbots have repeatedly failed to rein in the manipulative nature of their products,” Ben Winters, director of artificial intelligence and data privacy at the CFA, said in a statement. “These alarming findings and ongoing privacy violations should increasingly inspire regulators and legislators to take action across the country.”