Musk denies knowledge of Grok’s sexual images of minors as California AG launches investigation


Elon Musk said Wednesday that he is “not aware of any nude images of minors created by Grok,” hours before the California attorney general opened an investigation into xAI’s chatbot over “the spread of non-consensual sexually explicit material.”

Musk’s denial comes amid mounting pressure from governments around the world, from the UK and Europe to Malaysia and Indonesia, after X users began asking Grok to turn photos of real women, and in some cases children, into sexualized images without their consent. Copyleaks, an AI detection and content governance platform, estimated that roughly one such image was posted to X every minute; a separate sample, collected January 5–6, found 6,700 such images over a 24-hour period. (X and xAI are part of the same company.)

“These materials…were used to harass people online,” California Attorney General Rob Bonta said in a statement. “I urge xAI to take immediate action to ensure this does not continue.”

The attorney general’s office will investigate whether and how xAI violated the law.

Several laws exist to protect targets of non-consensual sexual images and child sexual abuse material (CSAM). Last year, the Take It Down Act was signed into federal law; it criminalizes the intentional distribution of non-consensual intimate images, including deepfakes, and requires platforms like X to remove such content within 48 hours. California also has its own series of laws, signed by Governor Gavin Newsom in 2024, to crack down on sexually explicit deepfakes.

Grok began fulfilling user requests on X to produce sexual images of women and children at the end of last year. The trend appears to have taken off after some adult content creators asked Grok to create sexual images of themselves as a form of marketing, which then led other users to make similar requests. In a number of public cases, including ones involving well-known figures such as Stranger Things actress Millie Bobby Brown, Grok responded to requests to alter real images of real women by changing their clothing, posture, or physical features in sexually explicit ways.

According to some reports, xAI has begun implementing safeguards to address the issue. Grok now requires a premium subscription before responding to certain image-generation requests, and even then the image may not be created. April Kozin, VP of marketing at Copyleaks, told TechCrunch that Grok may fulfill a request in a more general or watered-down way. Kozin added that Grok appears to be more lenient with adult content creators.

“Overall, these behaviors suggest that X is experimenting with multiple mechanisms to reduce or control the generation of problematic images, although inconsistencies remain,” Kozin said.

Neither xAI nor Musk has publicly addressed the issue directly. A few days after these cases began, Musk appeared to make light of the issue by asking Grok to create an image of himself in a bikini. On January 3, X’s Safety account said the company was taking “action against illegal content on X, including CSAM,” without addressing Grok’s apparent lack of safeguards or its creation of manipulated sexual images of women.

That post mirrors what Musk posted today, emphasizing illegality and user behavior.

Musk wrote that he was “not aware of any nude images of minors created by Grok. Literally zero.” The statement does not deny the existence of the bikini images or of sexual alterations more broadly.

Michael Goodyear, an assistant professor at New York Law School and a former litigator, told TechCrunch that Musk was likely focusing narrowly on CSAM because the penalties for creating or distributing synthetic sexual images of children are greater.

“For example, in the United States, someone who distributes CSAM, or threatens to, could face up to three years in prison under the Take It Down Act, compared to two years for non-consensual adult sexual images,” Goodyear said.

He added that the “bigger point” is Musk’s attempt to redirect attention to problematic user behavior.

“Obviously, Grok does not generate images spontaneously. It only does so at the user’s request,” Musk wrote in his post. “When asked to create images, it will refuse to produce anything illegal, as Grok’s operating principle is to adhere to the laws of any given country or region. There may be times when adversarial hacking of Grok prompts causes something unexpected to happen. If that happens, we will fix the bug immediately.”

Taken together, the post describes these incidents as uncommon, attributes them to user requests or adversarial prompts, and presents them as technical issues that can be resolved through fixes. It stops short of acknowledging any flaws in Grok’s underlying safety design.

“Regulators, while mindful of protecting freedom of expression, may consider requiring AI developers to take proactive measures to block such content,” Goodyear said.

TechCrunch reached out to xAI to ask how many sexually manipulated images of women and children it had found, which guardrails specifically had changed, and whether the company had notified regulators of the issue. We will update this article if the company responds.

The California AG isn’t the only regulator trying to hold xAI accountable on this issue. Indonesia and Malaysia have both temporarily blocked access to Grok; India has demanded that X make immediate technical and procedural changes to Grok; the European Commission has ordered xAI to retain all documents related to its Grok chatbot in preparation for opening a new investigation; and Ofcom, the UK’s online safety watchdog, has opened a formal investigation under the UK Online Safety Act.

xAI has come under fire for Grok’s sexualized images before. As AG Bonta noted in his statement, Grok includes a “spicy” mode for generating explicit content. In October, an update made the few existing safety guardrails easier to jailbreak, leading many users to create explicit pornography with Grok, as well as graphic and violent sexual images.

Many of the pornographic images Grok produced depicted AI-generated people, something many may still consider morally questionable but arguably less harmful to the real individuals behind manipulated images and videos.

“When AI systems allow images of real people to be manipulated without explicit consent, the impact can be immediate and deeply personal,” Alon Yamin, co-founder and CEO of Copyleaks, said in an emailed statement to TechCrunch. “From Sora to Grok, we’re seeing a rapid rise in AI’s ability to create manipulated media. To that end, detection and governance are needed now more than ever to help prevent abuse.”
