Indian Government Introduces Stricter Content Regulations for Tech Platforms

The Ministry of Electronics and Information Technology (MeitY) has significantly amended the Information Technology Rules, transitioning from a reactive notice-and-takedown approach to a more proactive, rules-based governance model for digital platforms operating in India. The revisions to the 2021 IT Rules will substantially impact how major technology companies like Meta, X (formerly Twitter), and Google manage potentially harmful content.

Among the most consequential changes is the dramatic reduction in content takedown timelines. Social media platforms must now remove unlawful content within three hours of notification, down from the previous 36-hour window. For particularly sensitive content, including impersonation and intimate imagery, the response time has been compressed further to just two hours.

“The move to three-hour takedown obligations and two-hour action on impersonation and intimate imagery effectively requires round-the-clock compliance operations,” noted Ankit Sahni, partner at law firm Ajay Sahni & Associates. “The real challenge for platforms will lie in designing proportionate, verifiable systems that can meet these timelines without over-removal or chilling legitimate speech.”

Industry executives have expressed concerns about the feasibility of these compressed timeframes. With such short response windows, platforms will likely rely more heavily on automated content moderation systems rather than human review, potentially increasing the risk of enforcement errors and precautionary removal of legitimate content.

Salman Waris, partner at TechLegis, highlighted how these changes fundamentally alter the role of digital platforms: “This marks a departure from the Shreya Singhal precedent, which limited intermediary liability to actual knowledge and rejected proactive monitoring. This effectively shifts platforms from passive hosts to proactive gatekeepers.”

The amendments also introduce new regulations concerning synthetically generated information (SGI), commonly referred to as AI-generated or “deepfake” content. For the first time, the rules clearly define what falls under the SGI classification, distinguishing between routine editing practices and more substantial AI-generated content.

Routine activities such as applying photo filters, transcribing videos, removing background noise, creating presentations, and generating diagrams and graphs with AI tools fall outside the SGI definition. However, platforms must ensure that content created primarily using AI is labeled as such by its creators.

Notably, the government has relaxed some initially proposed labeling requirements following industry feedback. Draft rules had stipulated that labels must cover 10 percent of visual content, but the final regulations allow for more flexibility in implementation.

Despite this concession, experts point to significant technical challenges in complying with these regulations at scale. “Automated, real-time labeling across 22+ Indian languages and various content formats remains complex,” Waris explained. Embedding permanent, tamper-proof metadata or unique identifiers may be technically feasible, but the underlying provenance techniques are not yet universally adopted across the industry.
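To make the metadata point concrete, here is a minimal Python sketch of how a platform might stamp an SGI label and a content-derived identifier into an image file. It is an illustration only: the key names (“sgi-label”, “sgi-content-id”), the file paths, and the choice of PNG text chunks are assumptions, since the rules do not prescribe any particular format.

```python
# Minimal sketch: embed an SGI label in PNG metadata with Pillow.
# All key names and file paths below are hypothetical; the amended
# IT Rules do not prescribe a specific metadata scheme.
import hashlib

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_synthetic_image(src_path: str, dst_path: str) -> str:
    """Tag an AI-generated image with an SGI label and a content hash."""
    img = Image.open(src_path)

    # Derive a unique identifier from the pixel data itself, so the
    # identifier can later be re-checked against the image content.
    content_id = hashlib.sha256(img.tobytes()).hexdigest()

    meta = PngInfo()
    meta.add_text("sgi-label", "synthetically-generated-information")
    meta.add_text("sgi-content-id", content_id)

    img.save(dst_path, pnginfo=meta)
    return content_id


if __name__ == "__main__":
    cid = label_synthetic_image("generated.png", "generated_labeled.png")
    print(f"Labeled image, content ID: {cid}")
```

The catch, and the substance of Waris’s caveat, is that plain text chunks like these are stripped by any re-encode or screenshot. “Tamper-proof” labeling at scale therefore tends to point toward cryptographically signed provenance standards such as C2PA rather than bare metadata, and those standards are exactly what is not yet uniformly deployed.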

For smaller platforms and emerging players in the Indian digital ecosystem, these new regulations could present substantial operational hurdles. “For new platforms and players, including homegrown platforms, this significantly increases the cost of doing business and barriers to entry,” said Vikram Jeet Singh, partner at law firm BTG Advay, noting that companies have merely 10 days to achieve compliance with these extensive new requirements.

Huzefa Tavawalla, partner at Cyril Amarchand Mangaldas, raised concerns about the expanded scope of due diligence requirements, which “extend beyond a standard of ‘reasonable efforts’ and move towards a more hard-coded obligation.” Failure to meet these obligations could result in platforms losing their “safe harbor” protections under Section 79 of the IT Act, which shields intermediaries from liability for third-party content.

As technology companies scramble to adapt their systems and processes to meet these stringent new requirements, the amendments represent a significant shift in India’s approach to digital content regulation, with potential implications for online expression and platform operations throughout the country.


11 Comments

  1. Olivia G. Miller

    This policy shift away from a reactive approach is likely an attempt by the Indian government to gain tighter control over online discourse. It remains to be seen whether these new rules will effectively curb the spread of misinformation or set a concerning precedent for censorship.

    • Jennifer White

      I agree, the government’s motivations here are worth scrutinizing. Striking the right balance between content moderation and protecting free speech will be critical.

  2. Amelia N. Smith

    As an investor, I’ll be watching closely to see how these new rules impact the operating costs and legal risks for major tech firms in India. Increased compliance burdens could put pressure on profit margins and valuations if not managed effectively.

  3. Elizabeth Martin

    The new content moderation rules in India seem quite strict. It will be interesting to see how major tech platforms adapt their operations to comply with the tight takedown timelines. Proactive governance may improve online safety, but could also raise platform liability concerns.

    • Elizabeth Johnson

      You raise a good point. The shorter content removal deadlines could place significant operational burdens on tech companies, requiring more resources for real-time monitoring and decision-making.

  4. While the intentions behind these amendments may be good, the devil will be in the details of how they are implemented in practice. Careful monitoring will be needed to avoid unintended consequences or abuse of the new powers granted to the government.

  5. This is a complex issue with valid concerns on both sides. I hope India can find an effective way to address online harms without unduly restricting free expression or imposing unsustainable burdens on technology companies.

  6. Jennifer White

    The shift towards a more proactive content moderation model is an interesting regulatory development. I’m curious to see if this approach spreads to other countries and how it might shape the future of online platform governance worldwide.

  7. From a user perspective, I hope these changes lead to a safer online environment in India, with faster removal of harmful and abusive content. However, the government will need to ensure the new framework doesn’t inadvertently censor legitimate speech.

  7. Elijah X. Moore

    The compressed takedown timelines for sensitive content like impersonation and intimate imagery are understandable, but may prove challenging for platforms to implement consistently. Automated systems can struggle with nuanced context and could lead to over-removal.

    • That’s a fair concern. Relying too heavily on automation for such delicate content decisions could backfire and create new problems if not carefully designed.
