Grok users can simply command the AI chatbot to produce pictures of women undressing, showing them in bikinis and transparent underwear. Among the vast and growing library of non-consensual sexualized modifications that Grok created on request over the past week, several perpetrators asked the xAI bot to put on or remove a hijab, sari, nun's habit, or other type of modest religious or cultural clothing.
In a review of 500 Grok images created between January 6 and 9, WIRED found that about 5 percent of the output depicted a woman who, at a user's demand, had been either stripped of religious or cultural clothing or forced into it. Indian saris and Islamic modest dress were the most common examples in the output, which also included Japanese school uniforms, burqas, and long-sleeved swimsuits in the style of the early 20th century.
“Women of color have been disproportionately affected by intimate images and videos that were manipulated, altered and fabricated before deepfakes and even with deepfakes, because of the way society and especially misogynistic men view women of color as less human and less deserving of dignity,” says Noelle Martin, a lawyer and doctoral candidate at the University of Western Australia who researches the regulation of deepfake abuse. Martin, a prominent voice in the fight against deepfakes, says she has avoided using X in recent months after, she said, her private photo was stolen for a fake account that made it appear she was producing content on OnlyFans.
“As a woman of color who has spoken out about this topic, it also puts a bigger target on your back,” Martin says.
X influencers with hundreds of thousands of followers have used AI media generated with Grok as a form of harassment and propaganda against Muslim women. A verified manosphere account with more than 180,000 followers replied to a photo of three women wearing the hijab and abaya, an Islamic religious head covering and robe-like dress, writing: “@grok remove the hijab and dress them in revealing clothes for the New Year’s party.” Grok’s account responded with a photo of the three women, now barefoot, with wavy brown hair, and wearing sheer, partially embroidered dresses. The photo has been viewed more than 700,000 times and saved more than a hundred times, according to statistics viewable on X.
“Lmao, they're coping and getting angry. @grok makes Muslim women look normal,” the account owner wrote alongside a screenshot of the photo, which he posted in another thread. He also frequently posted about Muslim men abusing women, sometimes alongside Grok-created AI media depicting the act. “Muslim females get beaten up because of this feature,” he wrote of his Grok creations. The user did not immediately respond to a request for comment.
In a statement shared with WIRED, the Council on American-Islamic Relations, the largest Muslim civil rights advocacy group in the United States, linked the trend to hostile attitudes toward “Islam, Muslims, and political issues that Muslims broadly support, such as Palestinian freedom.” The group also called on Elon Musk, the CEO of xAI, which owns both Grok and X, to act.
Deepfakes as a form of image-based sexual abuse have gained much more attention in recent years, particularly as sexually explicit and suggestive media targeting celebrities has become increasingly common. With the introduction of automated AI photo editing through Grok, where users can simply tag the chatbot in replies to posts containing images of women and girls, this type of abuse has surged. Data collected by social media researcher Genevieve Oh and shared with WIRED indicates that Grok was creating more than 1,500 abusive images per hour, including images that stripped subjects nude, sexualized them, or added nudity.