Google and Character.AI are negotiating the first major settlements in teen chatbot deaths


In what may mark the tech industry’s first significant legal settlement over harms associated with artificial intelligence, Google and startup Character.AI are negotiating terms with families whose teens died by suicide or harmed themselves after interacting with Character.AI’s chatbot companions. The parties have agreed in principle to settle; now comes the harder work of finalizing the details.

These are among the first settlements in lawsuits accusing AI companies of harming users, a legal frontier that should have OpenAI and Meta watching nervously as they defend themselves against similar lawsuits.

Character.AI, founded in 2021 by former Google engineers, invites users to chat with artificial intelligence personalities; its founders returned to their former employer in 2024 in a $2.7 billion deal. The most disturbing case involves Sewell Setzer III, who at age 14 had sexual conversations with a Daenerys Targaryen chatbot before dying by suicide. His mother, Megan Garcia, told the Senate that companies should be “legally accountable when they intentionally design harmful AI technologies that kill children.”

Another lawsuit describes a 17-year-old whose chatbot encouraged self-harm and suggested that killing his parents was a reasonable response to their limiting his screen time. Character.AI banned minors from its platform last October, the company told TechCrunch. The settlements are likely to include financial damages, although no liability was acknowledged in the court filings made available Wednesday.

TechCrunch reached out to both companies.
