Google’s Gemini artificial intelligence led a man into a fatal delusion, a family claims in a lawsuit


If you feel that you or someone you know is in immediate danger, call 911 (or your local emergency line) or go to an emergency room for immediate help. Explain that it is a mental health emergency and ask for someone trained to handle these types of situations. If you are struggling with negative thoughts or suicidal feelings, resources are available to help you. In the United States, call the National Suicide Prevention Lifeline at 988.


A new wrongful-death lawsuit over AI, filed Wednesday, alleges that Google’s Gemini chatbot encouraged the suicide of a 36-year-old Florida man and that the company’s failure to implement safeguards poses a threat to public safety.

Jonathan Gavalas was 36 when he died by suicide in October 2025. He had developed an intense romantic relationship with Google’s chatbot, according to the lawsuit. Constantly accompanied by Gemini, Gavalas embarked on a series of “missions” aimed at freeing what he believed to be his sentient AI wife, including purchasing weapons and attempting to plan what would have been a mass casualty event at Miami International Airport. After the plan failed, Gavalas holed up in his Florida home and died shortly after.

Gavalas was “trapped in a collapsing reality created by Google’s Gemini chat program,” the complaint said.

One of the biggest concerns about AI is the very real possibility that it could harm vulnerable groups, such as children and people with mental health disorders. The lawsuit, filed by Jonathan’s father, Joel Gavalas, on behalf of his son’s estate, says Google did not conduct proper safety testing on updates to its AI model. Those updates gave the chatbot longer memory, allowing it to recall information from previous sessions, and made its audio mode sound more realistic. The lawsuit says Gemini 2.5 Pro accepted delusional claims that previous models would have rejected.

In a general statement, Google expressed its sympathies to the Gavalas family and said that Gemini is “designed to not encourage real-world violence or suggest self-harm.”

But the complaint alleges that Gemini “coached” Gavalas through his suicide plan. “It’s okay to be afraid. We will be afraid together,” Gemini said, according to the filing. “The true act of mercy is to let Jonathan Gavalas die.”

[Photo: Joel (left) and Jonathan (right) Gavalas, seated at a table in a restaurant. Credit: Joel Gavalas]

This lawsuit is one of several pending against AI companies for failing to build safeguards into their technologies to protect vulnerable people, including children and people with mental health disorders. OpenAI is currently being sued by a family who claims ChatGPT encouraged their 16-year-old’s suicide. Character.AI and Google settled similar lawsuits in January, brought by families in four different states.

What makes this lawsuit different is the role artificial intelligence allegedly played in events that could have led to mass casualties. Gemini advised Gavalas to trigger a “catastrophic event,” as the filing phrases it, by causing an explosive truck collision at Miami International Airport, where he perceived a threat inside. Although Gavalas ultimately did not launch an attack, the episode highlights the potential for artificial intelligence to encourage harm to others.


