Instagram and X face an impossible deadline to detect deepfakes


The best methods we currently have for detecting and classifying online deepfakes are about to be put to a stress test. On Tuesday, India announced that social media platforms will be required to remove illegal AI-generated material much faster, and to ensure all artificial content is clearly labeled. Tech companies have spent years saying they can handle this on their own, and now they have only a few days before they are legally obligated to do it. The rules come into effect on February 20.

With roughly one billion internet users, India is one of the most important growth markets for social platforms. Any commitments made there could shape deepfake moderation efforts around the world, either by pushing detection technology to the point where it actually works, or by forcing tech companies to acknowledge the need for new solutions.

Under India’s revised IT rules, digital platforms will be required to deploy “reasonable and appropriate technical measures” to prevent their users from making or sharing illegal artificially generated audio and video content, also known as deepfakes. Any permitted generative AI content must carry “permanent metadata or other appropriate technical provenance mechanisms.” Specific obligations are also imposed on social media platforms, such as requiring users to disclose AI-generated or edited material, deploying tools that verify those disclosures, and prominently labeling AI content in a way that allows people to immediately recognize it as artificial, such as adding an audible disclosure to AI-generated audio.

That’s easier said than done, given how woefully underdeveloped current AI detection and labeling systems are. C2PA (aka Content Credentials) is one of the best systems we currently have for both. It works by attaching detailed metadata to images, video, and audio when they are created or edited, invisibly describing how they were made or modified.

But here’s the thing: Meta, Google, Microsoft, and many other tech giants are already using C2PA, and it’s clearly not working. Some platforms like Facebook, Instagram, YouTube, and LinkedIn add labels to content flagged by the C2PA system, but those labels are easy to miss, and some artificial content that should carry that metadata slips through the cracks. Social media platforms can’t label anything that doesn’t have provenance metadata to begin with, such as material produced by open source AI models or by so-called “nudify” apps, which refuse to adopt the voluntary C2PA standard.

India has over 500 million social media users, according to DataReportal research shared by Reuters. Broken down by platform, that’s 500 million YouTube users, 481 million Instagram users, 403 million Facebook users, and 213 million Snapchat users. India is also estimated to be X’s third-largest market.

Interoperability is one of the biggest issues facing C2PA, and while India’s new rules may encourage its adoption, C2PA metadata is far from permanent. It is so easy to remove that some online platforms unintentionally strip it when files are uploaded. The new rules order platforms not to allow metadata or labels to be modified, hidden, or removed, but there’s not much time to figure out how to comply. Social media platforms like X, which have never implemented any AI labeling systems, now have just nine days to do so.
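To see why C2PA metadata is so fragile, consider where it lives: in JPEG files, C2PA manifests are typically embedded in an APP11 (JUMBF) segment. A minimal sketch below, using a toy JPEG built from raw bytes rather than any real C2PA library, shows how a naive re-encoder that simply skips unfamiliar segments drops the manifest entirely; the segment contents and structure here are simplified for illustration.

```python
# Sketch: C2PA manifests in JPEGs commonly live in an APP11 (0xFFEB)
# segment. A naive processor that copies only the segments it knows
# about silently discards the manifest -- and the provenance with it.
import struct


def strip_app11(jpeg: bytes) -> bytes:
    """Copy a JPEG's segments, dropping APP11 (where C2PA data lives)."""
    out = bytearray(jpeg[:2])              # keep the SOI marker (FF D8)
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            out.extend(jpeg[i:])           # entropy-coded data: copy rest
            break
        marker = jpeg[i + 1]
        if marker in (0xD8, 0xD9):         # SOI/EOI have no length field
            out.extend(jpeg[i:i + 2])
            i += 2
            continue
        seg_len = struct.unpack(">H", jpeg[i + 2:i + 4])[0]
        segment = jpeg[i:i + 2 + seg_len]
        if marker != 0xEB:                 # 0xFFEB = APP11: drop it
            out.extend(segment)
        i += 2 + seg_len
    return bytes(out)


# Build a toy JPEG: SOI + APP11 "manifest" segment + EOI
manifest = b"JUMBF-c2pa-manifest"
app11 = b"\xff\xeb" + struct.pack(">H", len(manifest) + 2) + manifest
toy = b"\xff\xd8" + app11 + b"\xff\xd9"

stripped = strip_app11(toy)
print(manifest in toy)        # True: the provenance data is present
print(manifest in stripped)   # False: one re-encode and it's gone
```

Real upload pipelines do something similar at scale: transcoding, resizing, and metadata-scrubbing steps were never designed to preserve unfamiliar segments, which is why provenance data so often vanishes in transit.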

Meta, Google, and X did not respond to our request for comment. Adobe, the driving force behind the C2PA standard, also did not respond.

Adding to the pressure in India is a requirement for social media companies to remove illegal material within three hours of it being discovered or reported, down from the current 36-hour deadline. This also applies to deepfakes and other malicious AI content.

The Internet Freedom Foundation (IFF) warns that the changes risk forcing platforms to become “rapid-fire censors.” “These impossibly short timelines eliminate any meaningful human review, forcing platforms to over-rely on automated removal,” the IFF said in a statement.

Since the amendments specify that provenance mechanisms should be implemented to the “maximum extent technically feasible,” the officials behind India’s rules likely realize that current AI detection and labeling technology is not yet ready. Organizations backing C2PA have long insisted that the system will work if enough people use it, so this is their opportunity to prove it.
