
A California bill would make AI companies remind kids that chatbots aren't people


A new bill proposed in California (SB 243) would require AI companies to periodically remind children that a chatbot is an AI, not a human being. The bill, introduced by California Senator Steve Padilla, aims to protect children from the "addictive, isolating, and influential aspects" of AI.

In addition to barring companies from using "addictive engagement patterns," the bill would require AI companies to submit annual reports to the state's health care services department detailing how many times they detected suicidal ideation in children using the platform, as well as the number of times a chatbot raised the subject. It would also make companies tell users that chatbots may not be suitable for some children.

Last year, a parent filed a wrongful death lawsuit against Character.AI, claiming its chatbots are "unreasonably dangerous" after her teenager, who chatted constantly with the bots, died by suicide. Another lawsuit accused the company of sending "harmful material" to teenagers. Character.AI later announced that it is working on parental controls and is developing a new AI model for teens that will block "sensitive or suggestive" output.

"Our children are not lab rats for tech companies to experiment on at the cost of their mental health," Senator Padilla said in a press statement. "We need commonsense protections for chatbot users to prevent developers from employing strategies that they know to be addictive and predatory."

As states and the federal government double down on the safety of social media platforms, AI chatbots could soon become legislators' next target.
