India orders social media platforms to remove deepfakes faster


India has ordered social media platforms to step up oversight of deepfakes and other AI-generated impersonations, while significantly shortening the time they have to comply with takedown orders. It’s a move that could reshape how global technology companies moderate content in one of the world’s largest and fastest-growing internet markets.

The changes, published (PDF) on Tuesday as amendments to India’s IT Rules, 2021, bring deepfakes under a formal regulatory framework, mandating the labeling and traceability of synthetic audio and video content while shortening compliance timelines for platforms, including a three-hour deadline for formal takedown orders and a two-hour window for certain urgent user complaints.

India’s importance as a digital market increases the impact of the new rules. With over a billion internet users and a majority young population, the South Asian country is an important market for platforms like Meta and YouTube, making it likely that compliance measures adopted in India could impact global product and moderation practices.

Under the revised rules, social media platforms that allow users to upload or share audio and video content must require disclosures about whether the material was artificially created, deploy tools to verify those claims, and ensure deepfakes are clearly labeled and accompanied by traceable source data.

Certain categories of synthetic content – including deceptive impersonation, non-consensual intimate images, and material associated with serious crimes – are prohibited outright under the rules. Non-compliance, especially in cases flagged by authorities or users, could expose companies to greater legal liability by jeopardizing their safe harbor protections under Indian law.

The rules rely heavily on automated systems to fulfill these obligations. Platforms are expected to deploy technical tools to verify user disclosures, identify and classify deepfakes, and prevent the creation or sharing of prohibited synthetic content in the first place.

“The revised IT rules represent a more calibrated approach to regulating AI-generated deepfakes,” said Rohit Kumar, co-founder of New Delhi-based political consulting firm The Quantum Hub. “Significantly compressed grievance timelines – such as two- to three-hour takedown windows – will materially increase compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbor protections.”


The rules now focus on AI-generated audio and visual content rather than all online information, with exceptions for routine, cosmetic, or efficiency-related uses of AI, said Aparajita Rana, partner at AZB & Partners, a leading Indian corporate law firm. However, she warned that requiring intermediaries to remove content within three hours of becoming aware of it deviates from well-established principles of freedom of expression.

“However, the law still requires intermediaries to remove content upon becoming aware or receiving actual knowledge, and that too within three hours,” Rana said, adding that labeling requirements will apply across formats to limit the spread of child sexual abuse material and deceptive content.

New Delhi-based digital advocacy group Internet Freedom Foundation said the rules risk accelerating censorship by significantly compressing takedown timelines, leaving little room for human review and pushing platforms toward overly automated takedowns. In a statement published on X, the group also raised concerns about the expanded categories of prohibited content and provisions that allow platforms to disclose user identities to private complainants without judicial oversight.

“These extremely short timelines eliminate any meaningful human review,” the group said, warning that the changes could undermine protections for free expression and due process.

Two industry sources told TechCrunch that the amendments followed a limited consultation process, with only a narrow set of suggestions reflected in the final rules. While the Indian government appears to have taken on board proposals to narrow the scope of information covered – focusing on AI-generated audiovisual content rather than all online material – other recommendations were not adopted. The scale of the changes between the draft and final rules warrants another round of consultations to give companies clearer guidance on compliance expectations, the sources said.

Government removal powers have already been a point of contention in India. Social media platforms and civil society groups have long criticized the breadth and opacity of content removal orders, and Elon Musk’s X has challenged New Delhi’s directions to block or remove posts in court, arguing they amount to overreach and lack adequate safeguards.

Meta, Google, Snap, X and India’s Ministry of Information Technology did not respond to requests for comment.

The latest changes come a few months after the Indian government, in October 2025, reduced the number of officials authorized to request the removal of online content, in response to a legal challenge by X over the scope and transparency of removal powers.

The revised rules take effect on February 20, giving platforms little time to adjust their compliance systems. The rollout coincides with India hosting the AI Impact Summit in New Delhi from February 16 to 20, which is expected to draw senior global technology executives and policymakers to the country.
