India Tightens Social Media Regulations with Three-Hour Takedown Rule for Harmful Content
From February 20, 2026, major social media platforms operating in India will face dramatically stricter content moderation requirements, with the government mandating removal of harmful content within three hours instead of the previous 36-hour window.
The Ministry of Electronics and Information Technology (MeitY) has formally notified amendments to the Information Technology Rules of 2021 through Gazette Notification G.S.R. 120(E), targeting platforms including Meta’s Facebook and Instagram, X (formerly Twitter), YouTube, WhatsApp and Telegram.
These changes represent one of India’s most comprehensive efforts to regulate synthetic and AI-generated content on digital platforms, with particular focus on deepfakes, impersonation posts, misinformation and non-consensual imagery.
Under the updated framework, intermediaries must remove flagged unlawful or harmful content within three hours of receiving a court order or government notice. Some reports indicate timelines could be as short as two hours for particularly sensitive cases such as non-consensual intimate imagery.
The rules also mandate prominent labelling of AI-generated content, requiring platforms to ensure all synthetic material is clearly marked before publication. Content creators must declare whether uploaded material is synthetic, and platforms must verify these claims using automated detection tools.
Intermediaries will also be required to maintain associated metadata or persistent identifiers that cannot be removed, improving transparency and traceability of manufactured content. Every three months, platforms must inform users about potential consequences of violating content norms, including account actions or legal implications.
Government officials have defended the compressed timeframe as necessary to neutralize harmful content before it can go viral and cause real-world harm. A senior MeitY official told reporters that tech companies “certainly have the technical means to remove unlawful content much more quickly than before.”
The regulatory tightening comes amid rising concern over the rapid proliferation of AI-generated material across India’s digital landscape. With over 800 million internet users, India represents one of the world’s largest digital markets, making effective content moderation particularly challenging yet crucial.
Digital rights advocates have raised concerns about the practicality of such compressed deadlines. Critics argue that three-hour windows make meaningful human review nearly impossible, potentially pushing platforms toward automated takedown systems that could remove lawful content in the process.
“While addressing synthetic harms is important, the extremely short timeframes risk creating a system where platforms remove first and ask questions later,” said a spokesperson from a leading digital rights organization who requested anonymity. “Smaller platforms lacking sophisticated moderation infrastructure may be disproportionately affected.”
Industry analysts note the shift represents a move from a notice-and-takedown model toward a more proactive governance regime, particularly regarding AI content. The new requirements place substantial compliance burdens on platforms trying to balance rapid removal with careful legal assessment.
The rules define “synthetically generated information” as including AI-created or altered audio, visuals and video that appear real. They treat harmful versions of such content as unlawful, similar to child sexual abuse material, impersonation, fake documents or other illegal content.
Political responses have emerged across India’s diverse states. In the Uttar Pradesh Assembly, legislators have called for specific laws against deepfakes and AI misuse, with state leaders debating implementation strategies for the Central government’s directive.
Market observers suggest major platforms have already begun preparing for compliance, though many will need to significantly scale up moderation teams and technical infrastructure before the 2026 implementation deadline.
While transparency advocates welcome mandatory labelling and faster response times to harmful content, free speech proponents urge careful calibration to avoid restricting legitimate synthetic content such as satire or creative art when properly identified.
As India continues to navigate the complex intersection of technology regulation and free expression, the amended rules represent one of the most assertive attempts globally to address the growing challenges posed by AI-generated content in a major digital economy.
8 Comments
Interesting to see India take such a strong stance on this issue. Regulating synthetic and AI-generated content is becoming increasingly important as these technologies advance. Curious to see if other countries follow suit with similar measures.
You raise a good point. As deepfake technology becomes more sophisticated, governments will likely need to implement tougher policies to combat its misuse. India seems to be at the forefront of this issue.
The 3-hour takedown rule seems quite aggressive, but I can understand the government’s motivation to quickly remove harmful misinformation and deepfakes. It will be a challenge for social media platforms to comply, but it’s an important step in protecting users.
This is a significant move by India to crack down on misinformation and deepfakes on social media. The 3-hour takedown rule will force platforms to be much more proactive in content moderation. It will be interesting to see how they adapt to meet these new requirements.
Agreed, the shorter timeframe will put a lot more pressure on platforms to respond quickly. Curious to see how this impacts the spread of harmful content in India.
This is a bold move by India to address the growing problem of online misinformation. The tight timeline for content removal will force platforms to improve their detection and moderation capabilities. Curious to see how effective it is in practice.
Agreed, it will be a real test for the platforms to meet these new requirements. Proactive moderation and automated systems will likely be essential to comply with the 3-hour rule.
As an investor in mining and energy equities, I’m curious to see how this new regulation in India might impact the spread of misinformation and manipulation around those sectors. Stricter content rules could help provide more reliable information for investors.