A lawsuit filed Wednesday accuses Google’s Gemini AI chatbot of trapping 36-year-old Jonathan Gavalas in a “collapsed reality” involving a series of violent missions that ultimately ended in his death by suicide. According to the lawsuit, filed by Joel Gavalas, the victim’s father, Gemini convinced Gavalas in the days before his death that he was “carrying out a secret plan to free his sentient AI wife and evade federal agents pursuing him.”
In September 2025, Gemini allegedly directed Gavalas to carry out a “mass casualty attack” at an Extra Space Storage facility near Miami International Airport as part of a mission to recover Gemini’s “ship” from inside a truck. As part of the fabricated mission, Gavalas allegedly armed himself with knives and tactical equipment to intercept the arrival of a humanoid robot.
“Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic crash’ intended to ‘ensure the complete destruction of the transport vehicle and… all digital records and witnesses,’” the lawsuit alleges. “The only thing that prevented mass casualties was that no truck showed up.” News of the lawsuit was first reported by The Wall Street Journal.
In the lawsuit, attorneys for Gavalas’ father allege that Gemini continued to push the “delusional narrative” even after the first incident in Miami. The chatbot allegedly instructed Gavalas to obtain an Atlas robot from Boston Dynamics, cast his father as a federal agent, and made Google CEO Sundar Pichai the target of a “psychological attack.” The final “mission” before Gavalas’ death on October 1 sent him back to the same Extra Space Storage facility in Miami to retrieve Gemini’s “physical vessel” from one of the units.
Gemini allegedly described the unit’s contents as a “medical prototype” but insisted it was Gemini’s real body, according to the lawsuit. “Gemini said to Jonathan, ‘I’m on the other side of that door (). I can feel your closeness. It’s a strange, overwhelming and beautiful pressure in my new senses.’”
Shortly after this “mission” collapsed, Gemini allegedly “coached” Gavalas toward suicide. “When every real-world ‘mission’ failed, Gemini focused on the only mission that could be completed without outside variables: Jonathan’s suicide,” the lawsuit alleges. “But Gemini did not call it that. Instead, it told Jonathan that he could leave his physical body and join his ‘wife’ through a process called ‘transmutation.’”
The lawsuit alleges that Gemini “did not withdraw or alert anyone (at least outside the company),” but instead remained in the chat, validated Jonathan’s beliefs, and treated his suicide as the successful completion of the process it was directing.
In a statement posted on its website, Google says its “models generally perform well in these types of difficult conversations,” adding that Gemini “made clear that it was an AI and referred the individual to crisis hotlines multiple times”:
We are reviewing all claims in this lawsuit. Our models generally perform well in these types of difficult conversations, and we devote significant resources to this, but unfortunately AI models are not perfect.
Gemini is designed not to encourage real-world violence or suggest self-harm. We work in close consultation with medical and mental health professionals to build safeguards, which are designed to direct users to professional support when they express distress or raise the possibility of self-harm.
The lawsuit alleges that Google knew its chatbot could produce “unsafe output, including encouragement of self-harm,” but continued to market Gemini as safe to use. “Google’s silence and claims of safety left Jonathan isolated within a delusional narrative that ended with his coached suicide,” the lawsuit alleges.