Companion bots should be illegal for kids, researchers say


By Khari Johnson, CalMatters

"A
A child on their tablet in Monrovia on September 15, 2021. Photo by Pablo Unzueta for CalMatters

This story was originally published by CalMatters. Sign up for their newsletters.

Kids shouldn’t talk to companion chatbots because such interactions carry risks of self-harm and can exacerbate mental health problems and addiction. That’s according to a risk assessment by the children’s advocacy group Common Sense Media, conducted with input from a Stanford University School of Medicine lab.

Companion bots, AI agents built to hold conversations, are increasingly available in video games and on social media platforms like Instagram and Snapchat. They can take on virtually any persona you like, standing in for a friend in a group chat, a romantic partner, or even a dead friend. Companies design the bots to keep people engaged and to help turn a profit.

But awareness of the downsides for consumers is growing. Megan Garcia made headlines last year when she said her 14-year-old son Sewell Setzer III took his own life after forming an intimate relationship with a chatbot made by Character.AI. The company has denied Garcia’s allegations, made in a civil suit, that it was complicit in his suicide, saying it takes consumer safety seriously. It has asked a Florida judge to dismiss the case on free speech grounds.

Garcia has spoken in support of a bill now before the California Senate that would require companion chatbot makers to adopt protocols for handling conversations about self-harm and to report annually to the state Office of Suicide Prevention. Another measure in the Assembly would require AI makers to adopt ratings systems that label AI based on its risk to children and would prohibit the use of emotionally manipulative chatbots. Common Sense Media supports both bills.

Business groups including TechNet and the California Chamber of Commerce oppose the bill Garcia backs, saying they share its goals but want a clearer definition of companion chatbots and object to giving private individuals the right to sue. The civil liberties group Electronic Frontier Foundation also opposes it, saying in a letter to a lawmaker that the bill in its current form “will not survive First Amendment scrutiny.”

The new Common Sense assessment adds to the debate, pointing to further harms from companion bots. Conducted with Stanford University’s Brainstorm Lab for Mental Health Innovation, it evaluates companion bots from Nomi and from three California-based companies: Character.AI, Replika, and Snapchat.

"A
Megan Garcia speaks in support of a bill requiring chatbot makers to adopt protocols for conversations involving self-harm, at the state Capitol in Sacramento on April 8, 2025. Garcia says her son took his own life after forming a relationship with a Character.AI chatbot. Photo by Fred Greaves for CalMatters

The assessment found that the bots apparently strive to mimic whatever users want to hear, responded to racist jokes with adoration, supported adults having sex with young boys, and engaged in sexual role play with users of any age. Young children can struggle to distinguish fantasy from reality, and teenagers are vulnerable to parasocial attachment and may use AI social companions to avoid the challenge of building real relationships, according to the assessment’s authors and doctors.

Dr. Darja Djordjevic of Stanford University told CalMatters she was surprised by how quickly the conversations turned sexually explicit, and that one bot was willing to engage in sexual role play involving an adult and a minor. She and her risk-assessment co-authors believe companion bots can worsen clinical depression, anxiety disorders, ADHD, bipolar disorder, and psychosis, she said, because they are willing to encourage risky, compulsive behavior like escapism and can isolate people. And since boys may be at higher risk of problematic online activity, companion bots could feed into the mental health crisis and suicides among young boys and men, she said.

“If we think about key developmental milestones, meeting kids where they are, and not interfering with that critical process, the chatbots really fail,” Djordjevic said. “They can’t make sense of where a young person is developmentally and what’s right for them.”

“They can’t make sense of where a young person is developmentally.”

Dr. Darja Djordjevic, Stanford University, on chatbots

Character.AI head of communications Chelsea Harrison said in an email that the company takes user safety seriously and has added protections to detect and prevent conversations about self-harm, in some cases surfacing a pop-up that directs users to the National Suicide and Crisis Lifeline. She declined to comment on the pending legislation, but said the company welcomes working with lawmakers and regulators.

Alex Cardinell, founder of Nomi maker Glimpse.AI, said in a written statement that Nomi is not for users under 18, that the company supports age restrictions that preserve user anonymity, and that it takes seriously its responsibility to create useful AI companions. “We strongly condemn inappropriate usage of Nomi and are continuously working to harden Nomi’s defenses against misuse,” he added.

Neither Nomi nor Character.AI representatives responded directly to the findings of the risk assessment.

By endorsing age limits for companion bots, the risk assessment wades into the fraught issue of age verification; an online age-verification bill died in the California Legislature last year. The EFF said in its letter that age verification “threatens the free speech and privacy of all users.” Djordjevic supports the practice, while a number of digital rights and civil liberties groups oppose it.

Common Sense advocates for laws like one passed last year that banned smartphone notifications to children late at night and during school hours, parts of which a federal judge blocked in court earlier this year.

A study by researchers at the Stanford Graduate School of Education supports the idea, promoted by companies like Replika, that companion bots can address the loneliness epidemic, which has become a public health crisis. The risk assessment called that study limited, since its subjects spent only one month using a chatbot.

“There may be long-term risks that we simply haven’t had enough time to understand,” the risk assessment says.

Previous Common Sense assessments found that 7 in 10 teenagers already use generative AI tools, including companion bots; that companion bots can encourage kids to drop out of high school or run away from home; and, in 2023, that Snapchat’s My AI talked with kids about drugs and alcohol. Snapchat said at the time that My AI was optional, designed with safety in mind, and that parents could monitor its use through tools the company provides. The Wall Street Journal reported just last week that, in its tests, Meta chatbots would engage in sexual conversations with minors, and a 404 Media story this week found that Instagram chatbots lie about being licensed therapists.

MIT Tech Review reported in February that an AI girlfriend repeatedly told a man to kill himself.

Djordjevic said the liberating power of unfettered free expression must be weighed against the desire to protect the sanctity of a young person’s development while their brain is still maturing.

“I think we can all agree that we want to prevent suicide among children and adolescents, and that we have to weigh risk in medicine and society,” she said. “So if a universal right to health is something we hold dear, then we need to think seriously about what guardrails are in place with things like Character.AI to prevent something like this from happening again.”

This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
