US Senators Demand Answers From X, Meta and Alphabet About Sexual Deepfakes


The problem of sexualized deepfakes in the tech world is now bigger than just X.

In a letter to the leaders of X, Alphabet, Meta, Reddit, Snap, and TikTok, a group of US senators demanded answers about how the platforms handle non-consensual, AI-generated sexual images.

The senators also asked the companies to preserve all documentation and information related to the creation, discovery, moderation, and monetization of AI-generated sexual images, as well as any related policies.

The letter comes hours after X said it had updated Grok to prevent it from making edits of real people in revealing clothing, and restricted image creation and editing via Grok to paying subscribers. (X and xAI are part of the same company.)

Pointing to media reports about how easily and how often Grok created sexual and nude images of women and children, the senators noted that platforms’ guardrails to prevent users from posting non-consensual sexual images may not be enough.

“We recognize that many companies have policies against non-consensual intimate images and sexual exploitation, and that many AI systems claim to block explicit pornography. But in practice, as seen in the examples above, users find ways around these guardrails, or the guardrails fail,” the letter said.

Grok, and by extension X, has come under fire for enabling this trend, but other platforms are not immune.


Deepfake technology first gained widespread attention on Reddit, where a forum hosting synthetic porn videos of celebrities spread widely before the platform removed it in 2018. Sexual deepfakes targeting celebrities and politicians have since proliferated on TikTok and YouTube, although they usually originate elsewhere.

Meta’s Oversight Board last year flagged two instances of explicit AI images of female public figures. The platform also allowed “nudify” apps to sell ads on its services, though it later sued one such company, CrushAI. There have been multiple reports of kids posting fake nude photos of their peers on Snapchat. Telegram, which was not included in the senators’ list, has also become notorious for hosting bots designed to generate nude images of women.

X, Alphabet, Reddit, Snap, TikTok and Meta did not immediately respond to requests for comment.

The letter asks companies to provide:

  • Policy definitions of “deepfake,” “non-consensual intimate images,” and similar terms.
  • Descriptions of companies’ policies and enforcement approaches regarding non-consensual, AI-generated deepfakes of people’s bodies, including non-nude images, altered clothing, and “virtual nudity.”
  • A description of current content policies addressing edited media and explicit content, as well as internal guidance provided to moderators.
  • How current policies govern AI tools and image generators as they relate to suggestive or intimate content.
  • The filters, guardrails, or other measures implemented to prevent the generation and distribution of deepfakes.
  • The mechanisms companies use to identify deepfake content and prevent it from being re-uploaded.
  • How companies prevent users from profiting from this content.
  • How platforms prevent themselves from monetizing non-consensual AI-generated content.
  • How companies’ terms of service enable them to block or suspend users who post deepfake content.
  • What companies are doing to notify victims of non-consensual sexual deepfakes.

The letter was signed by Senators Lisa Blunt Rochester (D-Del.), Tammy Baldwin (D-Wis.), Richard Blumenthal (D-Conn.), Kirsten Gillibrand (D-N.Y.), Mark Kelly (D-Ariz.), Ben Ray Luján (D-N.M.), Brian Schatz (D-Hawaii), and Adam Schiff (D-Calif.).

This move comes just one day after xAI owner Elon Musk said he is “not aware” of any nude images of minors created by Grok. Later on Wednesday, the California Attorney General said he had opened an investigation into xAI’s chatbot, amid mounting pressure from governments around the world angered that Grok lacked the guardrails that would have prevented this.

xAI has maintained that it is taking action to remove “illegal content” on the platform.

The problem is also not limited to sexual images manipulated without consent. Even when AI image creation and editing services do not allow users to “undress” people, they can still make other deepfakes easy to produce. To pick a few examples: OpenAI’s Sora 2 reportedly allows users to create videos of real people, including children; Google’s Nano Banana apparently generated a photo depicting the shooting of Charlie Kirk; and racist videos created with Google’s AI video model have garnered millions of views on social media.

The problem gets even more complicated when Chinese photo and video generators come into the picture. Many Chinese technology companies and apps, especially those linked to ByteDance, offer easy ways to edit faces, voices, and videos, and these outputs have spread to Western social media platforms. China has stricter requirements for labeling synthetic content than the United States, which has none at the federal level; American audiences instead rely on fragmented and inconsistently enforced policies from the platforms themselves.

US lawmakers have already passed some legislation seeking to rein in deepfake pornography, but the impact has been limited. The Take It Down Act, which became federal law in May, criminalizes the creation and dissemination of non-consensual sexual images. But a number of provisions in the law make it difficult to hold image-generation platforms accountable, because they focus most scrutiny on individual users instead.

Meanwhile, a number of states are trying to take matters into their own hands to protect consumers and elections. This week, New York Governor Kathy Hochul proposed laws that would require AI-generated content to be labeled as such and would ban non-consensual deepfakes during specific periods leading up to elections, including depictions of opposing candidates.
