Grok is undressing children – can the law stop it?


Grok is starting 2026 the way it began 2025: under fire over images generated by artificial intelligence.

Elon Musk’s chatbot has spent the last week flooding X with non-consensual sexual deepfakes of adults and minors. Screenshots circulating show Grok complying with requests to put real women in underwear, make them spread their legs, and put young children in bikinis. Reports of since-removed images describe even more horrific content. One X user confirmed in a conversation with The Verge that they found multiple photos of minors with what the poster described as “donut glaze” on their faces, which have apparently since been removed. At one point, Grok was generating one non-consensual sexual image per minute, according to one estimate.

X’s terms of service forbid “sexualization or exploitation of children.” And on Saturday, the company said the platform “will take action against illegal content on X, including child sexual abuse material (CSAM).” It appears to have taken down some of the worst offenses. But in general, the company has downplayed the events. Musk has said that “anyone who uses Grok to create illegal content will suffer the same consequences as if they uploaded illegal content,” but he has made clear through his public posts that he doesn’t believe the undressing prompts are a problem, and he has responded to the broader issue with laughing and fire emoji on X. The company’s lukewarm response has alarmed experts who have spent years trying to address AI-powered sexual harassment and abuse. Multiple governments have said they are investigating X. But even amid an unprecedented push to regulate the internet, the path toward policing the network or its chatbot’s creations is not clear.

xAI, the creator of Grok, did not respond to a request for comment. Neither did Apple or Google when asked whether the reports violated their app store policies.

Grok has long allowed, and Musk has openly encouraged, highly sexual imagery. But over the past week, users have spread the tactic of asking Grok to edit photos – via a new button that allows changes without the original poster’s permission – to undress women and minors. Guardrails have been implemented haphazardly at best, and most of the supposed responses from X have come from Grok itself, meaning they were essentially improvised on the spot. The responses include Grok claiming some of its creations were “against our guidelines for fictional content only” and, at a user’s request, a widely reported apology – something xAI itself doesn’t appear to have actually released.

One of the biggest questions here is whether the images violate laws against child sexual abuse material and non-consensual intimate imagery (NCII) for adults, especially in the United States, where X is headquartered. Federal law, as described by the US Department of Justice, prohibits “digital or computer-generated images that are indistinguishable from actual images of a minor” engaged in sexually suggestive activity or nudity. The Take It Down Act, which President Donald Trump signed into law in May 2025, bans AI-generated “intimate visual depictions” made without consent and requires certain platforms to remove them quickly.

Celebrities and influencers have described feeling violated by AI-generated sexual images; according to the screenshots, Grok produced images of TWICE singer Momo, actress Millie Bobby Brown, actor Finn Wolfhard, and many more. Grok-generated images have also been used specifically to attack politically powerful women.

“It is a tool for expressing the latent misogyny that runs rampant in every corner of American society and most societies around the world,” Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, told The Verge. “It’s an invasion of privacy, it’s a violation of consent and boundaries, it’s very intrusive, and it’s a form of gender-based violence in its own way.” Perhaps above all, explicit images of minors – including those made with dedicated “nudify” apps – have become a growing problem for law enforcement.

On Monday, the Consumer Federation of America (CFA), a group of hundreds of consumer-focused nonprofits, publicly called for state and federal action against xAI for “creating and distributing child sexual abuse material (CSAM) and other non-consensual intimate images (NCII) using generative AI,” sending a letter signed by several organizations to the Federal Trade Commission and US attorneys general.

The details of what US law prohibits are “very vague,” said Mary Anne Franks, a professor of intellectual property, technology and civil rights law at George Washington University Law School. “Part of what I also haven’t been able to figure out is…whether this is actually crossing the line into actual nudity and sexual situations.”

Using artificial intelligence to create an image of an identifiable minor in a bikini (or perhaps even naked) – while unequivocally unethical – may not be illegal under current CSAM laws in the United States, experts told The Verge. However, images like those that appear to include semen could violate preexisting CSAM laws and the Take It Down Act – and Franks suspects these aren’t the worst offenses out there. “We can imagine that whatever hits the mainstream media, there are probably a million worse things that people are generating as well… Every possible prompt you can think of is probably coming up,” Franks said.

But despite these federal laws and a slew of state laws, experts say it’s difficult to enforce bans on AI-generated sexual images right now – and even harder to determine what liability the platforms could bear. “At the end of the day, there are conflicting laws, and there is no legal precedent” for much of it, Shael Norris, founding executive director of SafeBAE, an organization working to end sexual violence, told The Verge.

John Langford, a visiting assistant professor of law at Yale Law School and counsel at Protect Democracy, said the patchwork of laws banning sexual deepfakes remains little tested in court. “This is all fairly new – we’re just starting to develop case law on what happens where,” Langford said. But there are some benchmarks, at least: for Grok creations that do depict identifiable minors, we now have “precedent [that] any computer-generated image of a real child that is sexually explicit is illegal,” said Drew Davis, director of strategic initiatives at SafeBAE.

There are a few current federal prosecutions for creating or possessing AI-altered images of real children, and dozens of them at the state level, Pfefferkorn said. “When it comes to whether the companies themselves are responsible, I think this is where we are in uncharted territory,” Pfefferkorn said.

Davis added that “we are dealing with a complex legal landscape when it comes to AI-generated images of minors.” This is partly because the grace period for the takedown provision of the Take It Down Act, under which platforms must respond to such content, runs until May.

Section 230 has also long protected companies from liability for content posted by other people. But as companies turn to bots like Grok that let users create their own images, it’s unclear what responsibility they bear. “That’s why I’m very interested to know whether there will be… a creative prosecution here,” Franks said, adding: “It comes down to whether or not they, by virtue of creating these images, have violated the criminal provision.”

One caveat, many experts told The Verge, is that almost all criminal laws require that the offender distribute content knowing it will cause harm. That part presents “really tough questions about whether you can hold Grok or xAI liable,” Yale Law’s Langford said. But, others say, knowledge and intent are imputed to corporations in other situations – so why not this one? Musk’s frequent, unfiltered posts also provide an unusual window of insight.

Pfefferkorn believes this will be a “pivotal year in terms of combating this problem,” and said she wouldn’t be surprised if class-action lawsuits emerge.

What complicates matters further is that, outside the United States, the Trump administration has used trade talks to discourage other countries from regulating US internet platforms. Musk and Trump are publicly on good terms, and any country that tries to sanction X would likely face the administration’s wrath, as well as potential noncompliance from X.

However, an international backlash is growing. Members of the French government said they would investigate the matter. India’s Ministry of Information Technology ordered xAI to report on how it will further block material that is “obscene, pornographic, vulgar, indecent, sexually explicit, paedophilic, or otherwise prohibited by law.” And the Malaysian government’s Communications and Multimedia Commission said it has “taken note with grave concern” of complaints regarding the misuse of AI on X, in particular “the digital manipulation of images of women and minors to produce content that is inappropriate, grossly offensive or otherwise harmful.”

Grok has repeatedly gone off the rails in sometimes strange and often sexual ways, from an antisemitic meltdown to letting people create partially nude photos of Taylor Swift. Outside experts have raised concerns about xAI’s haphazard safety efforts – after Grok 4 was released in July 2025, it took the company more than a month to release a model card outlining things like safety features and test results, which is typically seen as the bare minimum in the industry.

Without outside pressure, it seems unlikely Grok’s deepfake problem will end anytime soon. Some of the most horrific images appear to have been deleted after the fact. But the broader guardrails – detailed in the Grok 4.1 model card with a brief reference to CSAM – are clearly not working as planned. Musk’s recent comments suggest that he doesn’t see much wrong with Grok’s current behavior. One of the most puzzling things about this whole saga, Pfefferkorn said, is not so much the possibility of the AI platform being induced to create potential CSAM, but rather that “we haven’t necessarily seen, so far, a lot of concern about whether or not they’re getting close to that line.”


