India orders social media platforms to take down deepfakes faster

India has ordered social media platforms to step up policing of deepfakes and other AI-generated impersonations, while sharply shortening the time they have to comply with takedown orders. It’s a move that could reshape how global tech firms moderate content in one of the world’s largest and fastest-growing markets for internet services.
The changes, published on Tuesday as amendments to India’s 2021 IT Rules, bring deepfakes under a formal regulatory framework, mandating the labeling and traceability of synthetic audio and visual content. They also slash compliance timelines for platforms, imposing a three-hour deadline for official takedown orders and a two-hour window for certain urgent user complaints.
India’s importance as a digital market amplifies the impact of the new rules. With over a billion internet users and a predominantly young population, the South Asian nation is a critical market for platforms like Meta and YouTube, making it likely that compliance measures adopted in India will influence global product and moderation practices.
Under the amended rules, social media platforms that allow users to upload or share audio-visual content must require users to disclose whether material is synthetically generated, deploy tools to verify those disclosures, and ensure that deepfakes are clearly labeled and embedded with traceable provenance data.
Certain categories of synthetic content — including deceptive impersonations, non-consensual intimate imagery, and material linked to serious crimes — are barred outright under the rules. Non-compliance, particularly in cases flagged by authorities or users, can expose companies to greater legal liability by jeopardizing their safe-harbor protections under Indian law.
The rules lean heavily on automated systems to meet those obligations. Platforms are expected to deploy technical tools to verify user disclosures, identify and label deepfakes, and prevent the creation or sharing of prohibited synthetic content in the first place.
“The amended IT Rules mark a more calibrated approach to regulating AI-generated deepfakes,” said Rohit Kumar, founding partner at New Delhi-based policy consulting firm The Quantum Hub. “The significantly compressed grievance timelines — such as the two- to three-hour takedown windows — will materially raise compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbor protections.”
Aprajita Rana, a partner at AZB & Partners, a leading Indian corporate law firm, said the rules now focus on AI-generated audio-visual content rather than all online information, while carving out exceptions for routine, cosmetic, or efficiency-related uses of AI. However, she cautioned that the requirement for intermediaries to remove content within three hours once they become aware of it departs from established free-speech principles.
“The law, however, continues to require intermediaries to remove content upon being aware or receiving actual knowledge, that too within three hours,” Rana said, adding that the labeling requirements would apply across formats to curb the spread of child sexual abuse material and deceptive content.
New Delhi-based digital advocacy group Internet Freedom Foundation said the rules risk accelerating censorship by drastically compressing takedown timelines, leaving little scope for human review and pushing platforms toward automated over-removal. In a statement posted on X, the group also raised concerns about the expansion of prohibited content categories and provisions that allow platforms to disclose the identities of users to private complainants without judicial oversight.
“These impossibly short timelines eliminate any meaningful human review,” the group said, warning that the changes could undermine free-speech protections and due process.
Two industry sources told TechCrunch that the amendments followed a limited consultation process, with only a narrow set of suggestions reflected in the final rules. While the Indian government appears to have taken on board proposals to narrow the scope of information covered — focusing on AI-generated audio-visual content rather than all online material — other recommendations were not adopted. The scale of changes between the draft and final rules warranted another round of consultation to give companies clearer guidance on compliance expectations, the sources said.
Government takedown powers have already been a point of contention in India. Social media platforms and civil-society groups have long criticized the breadth and opacity of content removal orders, and even Elon Musk’s X challenged New Delhi in court over directives to block or remove posts, arguing that they amounted to overreach and lacked adequate safeguards.
Meta, Google, Snap, X, and the Indian IT ministry did not respond to requests for comment.
The latest changes come just months after the Indian government, in October 2025, reduced the number of officials authorized to order content removals from the internet in response to a legal challenge by X over the scope and transparency of takedown powers.
The amended rules will come into effect on February 20, giving platforms little time to adjust compliance systems. The rollout coincides with India’s hosting of the AI Impact Summit in New Delhi from February 16 to 20, which is expected to draw senior global technology executives and policymakers to the country.