For more than two years, an app called ClothOff has been terrorizing young women online, and it has been extremely difficult to stop. The app has been removed from the two major app stores and banned from most social media platforms, but it is still available on the web and through a Telegram bot. In October, a clinic at Yale Law School filed a lawsuit that seeks to shut the app down entirely, forcing its owners to delete all photos and cease operations. But just finding the accused has been a challenge.
“It is incorporated in the British Virgin Islands, but we believe it is run by a brother and sister in Belarus. It may even be part of a larger network around the world,” explains Professor John Langford, the lead lawyer on the lawsuit.
It is a bitter lesson in the wake of the recent deluge of non-consensual pornography created by Elon Musk’s xAI, many of whose victims were minors. Child sexual abuse material is the most legally toxic content on the internet: it is illegal to produce, transmit, or store, and every major cloud service regularly scans for it. But despite that severe legal prohibition, there are still few ways to rein in image generators like ClothOff, as Langford’s case demonstrates. Individual users can be sued, but platforms like ClothOff and Grok are much harder to hold accountable, leaving few options for victims hoping to get justice in court.
The clinic’s complaint, which is available online, paints a disturbing picture. The plaintiff is an anonymous high school student in New Jersey whose classmates used ClothOff to alter her Instagram photos. She was 14 when the original photos were taken, meaning the AI-edited versions are legally classified as child sexual abuse material. But even though the altered images are clearly illegal, local authorities have declined to prosecute the case, citing the difficulty of obtaining evidence from the suspects’ devices.
“Neither the school nor law enforcement ever determined how widely the CSAM of Jane Doe and the other girls was distributed,” the complaint states.
The court case is moving slowly, however. The complaint was filed in October, and in the months since, Langford and his colleagues have been working to serve notice on the defendants, a difficult task given the company’s global footprint. Once the defendants are served, the clinic can push for a court appearance and, ultimately, a judgment, but in the meantime the legal system has offered little respite to ClothOff’s victims.
The Grok issue might seem like an easier problem to fix. Elon Musk’s xAI is no mystery, and there is plenty of money waiting for lawyers who can win a lawsuit against it. But Grok is a general-purpose tool, which makes it far harder to hold accountable in court.
“ClothOff is designed and marketed specifically as a deepfake porn image and video generator,” Langford told me. “When you sue a general-purpose system that users can query for all sorts of things, it becomes more complicated.”
A number of US laws already ban deepfake porn, most notably the Take It Down Act. But while specific users clearly violate those laws, it is much harder to hold an entire platform accountable. Current laws require clear evidence of intent to harm, which means proving that xAI knew its tools would be used to produce non-consensual pornography. Without that evidence, xAI’s First Amendment rights provide significant legal protection.
“In terms of the First Amendment, it’s pretty clear that child sexual abuse material is not protected expression,” Langford says. “So when you’re designing a system to create that kind of content, you’re clearly operating outside the scope of what the First Amendment protects. But when you’re a public system where users can query all kinds of things, it’s not so clear-cut.”
The easiest way around those issues is to show that xAI deliberately ignored the problem. That is a real possibility, given recent reports that Musk directed employees to weaken Grok’s safeguards. But even then, it would be a much harder case to make.
“Reasonable people would say we knew this was a problem years ago,” Langford says. “How could you not have stricter controls to make sure that didn’t happen? That gets at reckless or knowing conduct, but it’s just a more complicated case.”
These First Amendment issues are why the biggest backlash against xAI has come from legal systems without such strong protections for free speech. Both Indonesia and Malaysia have taken steps to block access to the Grok chatbot, while UK regulators have opened an investigation that could lead to a similar ban. The European Commission, France, Ireland, India, and Brazil have all taken preliminary steps of their own. In contrast, no US regulatory agency has issued an official response.
It is impossible to say how those investigations will pan out, but at the very least, the deluge of images raises plenty of questions for regulators to investigate, and the answers could be damning.
“If you are producing, publishing, or distributing child sexual abuse material, you are violating the criminal prohibition and can be held accountable,” Langford says. “The difficult question is, what did X know? What did X do or not do? What are they doing now in response?”