Elon Musk teases a new image labeling system for X… we think?


Elon Musk’s X is the latest social network to roll out a feature labeling edited photos as “manipulated media,” if Musk’s post is to be believed. But the company hasn’t explained how it will make that determination, or whether it will cover images edited with traditional tools, such as Adobe Photoshop.

So far, the only details about the new feature come from a cryptic X post in which Elon Musk wrote “Warning modified visuals” while reposting the feature’s announcement from the X account DogeDesigner. That account often serves as a proxy for introducing new X features, as Musk will repost from it to share news.

Beyond that, details about the new system remain scant. The DogeDesigner post claimed the feature could make it harder for legacy media groups to post misleading clips or images, and that the feature is new to X.

Before it was acquired and renamed X, the company, then known as Twitter, labeled tweets that used tampered with, deceptively altered, or fabricated media as an alternative to removing them. Yoel Roth, the site’s head of safety at the time, said in 2020 that the policy was not limited to AI-generated content but also covered things like “specific editing, cropping, slowing down, overdubbing, or manipulating subtitles.”

It’s unclear whether X will adopt the same rules or make significant changes to address AI. Its help documents currently indicate there is a policy against sharing non-original media, but it is rarely enforced, as seen in the recent deepfake debacle in which users shared nonconsensual nude images that were widely viewed. On top of that, even the White House is now sharing manipulated photos.

Calling something “manipulated media” or an “AI image” needs to be accurate.

Given that X is a playground for political propaganda, both internal and external, some understanding of how the company determines what is “edited,” or perhaps AI-generated or manipulated, needs to be documented. Users should also know whether there is any kind of dispute process beyond X’s crowdsourced community feedback.


As Meta discovered when it introduced AI image labeling in 2024, it’s easy for detection systems to go awry. In its case, Meta was found to be incorrectly labeling real photos as “Made with AI,” even though they were not created using generative AI.

This happened because AI features are increasingly being built into the creative tools used by photographers and graphic artists. (Apple’s new Creator Studio suite, which launched today, is one recent example.)

As it turned out, this confused Meta’s detection tools. For example, Adobe’s crop tool flattened images before saving them as JPEGs, which triggered Meta’s AI detector. In another example, Adobe’s Generative AI Fill, which is used to remove objects like wrinkles in a shirt or an unwanted reflection, was also causing images to be classified as “Made with AI,” even when AI tools were used only for those small edits.

Meta eventually updated the label to say “AI info,” so that images aren’t explicitly labeled as “Made with AI” when they’re not.

Today, there is a standards-setting body for verifying the authenticity and provenance of digital content, known as the C2PA (Coalition for Content Provenance and Authenticity). There are also related initiatives, such as the CAI (Content Authenticity Initiative) and Project Origin, that focus on adding tamper-evident provenance metadata to media content.
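For the curious, that provenance data typically travels inside the image file itself. As a rough illustration only (this is not X’s or Meta’s actual pipeline, and it assumes the standard C2PA-in-JPEG embedding, where the manifest is stored as JUMBF boxes inside APP11 segments), a few lines of Python can check whether a JPEG appears to carry a C2PA manifest at all. Actually verifying one, signatures and all, is a job for dedicated tooling such as the open-source c2patool.

```python
# Heuristic sketch: walk a JPEG's segment headers and look for an APP11
# (0xFFEB) segment containing a "c2pa" label, which is where C2PA
# manifests are embedded as JUMBF boxes. This only detects the presence
# of a manifest; it does not validate its signature or content hashes.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):        # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                     # lost sync with segment markers
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):              # EOI or start of scan data: stop
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:   # APP11 segment with C2PA JUMBF
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    for name in sys.argv[1:]:
        status = "C2PA manifest found" if has_c2pa_manifest(name) else "no manifest detected"
        print(f"{name}: {status}")
```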

One would hope X’s implementation adheres to some sort of established process for identifying AI content, but X’s owner also didn’t clarify whether he was specifically talking about AI-generated photos, or anything other than a photo uploaded to X directly from a smartphone’s camera. It is also unclear whether the feature is entirely new, as DogeDesigner claims.

X isn’t the only outlet grappling with manipulated media. In addition to Meta, TikTok also labels AI content. Streaming services such as Deezer and Spotify are expanding initiatives to identify and label AI music as well. Google Photos uses C2PA metadata to indicate how photos on its platform were made. Microsoft, the BBC, Adobe, Arm, Intel, Sony, OpenAI, and others sit on the C2PA steering committee, while many more companies have joined as members.

X is not currently listed among the members, although we reached out to C2PA to see if that has changed recently. X doesn’t typically respond to requests for comment, but we asked anyway.
