Teenage girls sue xAI, alleging ‘devastating’ harm from child sexual abuse images from Grok AI


A new class action lawsuit, filed on Monday by three teenage girls and their parents, alleges that Elon Musk's xAI created and distributed child sexual abuse material depicting their faces and likenesses using Grok AI technology.

“Their lives have been shattered by the devastating loss of privacy, dignity and personal safety caused by the production and dissemination of these CSAM materials,” the filing said. “xAI’s financial gains through increased use of its image and video generation product have come at the expense of their well-being.”

From December to early January, Grok allowed multiple Grok and X social media users to create AI-generated non-consensual intimate images, sometimes known as deepfake porn. One estimate reports that Grok users created 4.4 million “nude” images, 41% of the total number of images generated, over the course of nine days.

X, xAI, and their safety and child-safety teams did not immediately respond to a request for comment.

The wave of “nude” images sparked outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives have called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A separate, similar class action lawsuit was filed (PDF) by a South Carolina woman in late January.

The dehumanizing trend has highlighted how powerful modern AI image tools have become at creating realistic-looking content. The new complaint compares Grok’s “spicy” AI generation to “dark arts” that can easily place children in “any position, no matter how sick, no matter how exciting, no matter how illegal.”

“To the viewer, the resulting video appears completely real. To the child, her distinctive features will now be forever associated with a video depicting child sexual abuse,” the complaint reads.
The complaint says xAI is at fault because it did not use industry-standard guardrails that would have prevented violators from creating this content. It says xAI licensed the use of its technology to third-party companies overseas, which sold subscriptions that prompted abusers to create images of child sexual abuse that showed the victims’ faces and likenesses. The complaint says the requests were executed via xAI’s servers, making the company liable.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 was first alerted to the fact that offensive AI-generated sexual material of her was circulating on the web through an anonymous Instagram message in early December. The anonymous Instagram user told her about the Discord server where the material was shared, the filing says. This led Jane Doe 1, her family, and eventually law enforcement to find and arrest one of the perpetrators.

Ongoing investigations led the families of Jane Does 2 and 3 to learn that images of their children had been transformed into offensive material using xAI technology.
