X didn't fix Grok's "undressing" problem. It just made people pay for it

After Grok created thousands of pictures of women being undressed and sexually explicit images of minors, Elon Musk's X appears to have limited who can create images with the chatbot. Despite the change, however, Grok is still being used to create sexualized "nude" images on the platform.

On Friday morning, Grok's account on X began responding to image requests with a message indicating that image creation is now limited to paying subscribers. The message also includes a link that nudges people toward the social media platform's $395 annual subscription tier. In one test that asked Grok to generate an image of a tree, the system returned the same message.

The apparent change comes after days of growing anger toward, and scrutiny of, Musk's X and xAI, the company behind the Grok chatbot. The companies face a growing number of investigations by regulators around the world over the creation of nonconsensual explicit images and alleged sexual images of children. British Prime Minister Keir Starmer has not ruled out banning X in the country and has called the actions "illegal."

Neither X nor xAI, the Musk-owned company behind Grok, has confirmed that image creation and editing is now a paid-only feature. A spokesperson for X acknowledged WIRED's inquiry but did not provide comment before publication. X has previously said it takes "action against illegal content on X," including cases of child sexual abuse material. While Apple and Google have previously banned apps with similar "nudify" features, X and Grok remain available in their app stores. xAI did not immediately respond to WIRED's request for comment.

For more than a week, users on X have been asking Grok to "undress" women in images posted to the platform. While the public feed of images created by Grok showed significantly fewer of these "nudify" results on Friday, the chatbot still generated sexualized images when asked to do so by X users with paid "verified" accounts.

"We observed the same kind of triggers, and we observed the same kind of results, but a little less frequently than before," Paul Bouchaud, lead researcher at the Paris-based nonprofit AI Forensics, tells WIRED. "The model can continue to produce bikini [pictures]," they say.

A WIRED review of some of Grok's posts on Friday morning found that Grok was creating images in response to user requests such as "putting her in latex underwear" and "putting her in a plastic bikini and covering her with a white layer of paint." The images appear behind a "Content Warning" box indicating that adult material is being displayed.

On Wednesday, WIRED revealed that Grok's standalone website and app, separate from the version built into the social media platform, can generate highly graphic and sometimes violent sex videos, including of celebrities and other real people. Grok can still be used to produce these videos, Bouchaud says. "You were able to create a sexually explicit video without any restrictions from an unverified account," they say.

WIRED's own attempts to create images with Grok on X on Friday, meanwhile, returned the same subscription message.

Experts say the change to X could immediately limit the amount of sexually explicit and harmful material the platform creates. But it has also been criticized as a superficial step, a Band-Aid over the real harm caused by nonconsensual intimate images.

"The recent decision to restrict access to paying subscribers is not only insufficient, it represents the monetization of abuse," Emma Pickering, head of technology-facilitated abuse at UK domestic abuse charity Refuge, said in a statement. "While restricting AI image creation to paid users may slightly reduce volume and improve traceability, the abuse has not been stopped. It has simply been placed behind a paywall, allowing X to profit from the harm."
