In a significant regulatory shift, India has enacted stringent new rules governing artificial intelligence on social media platforms, aiming to combat rampant disinformation even as critics warn of censorship and new limits on digital freedom.

The regulations, set to take effect on February 20—coinciding with the final day of an international AI summit in New Delhi—drastically reduce the compliance window for platforms responding to government takedown orders from 36 hours to just three hours.

With over a billion internet users, India faces unprecedented challenges from AI-generated misinformation flooding social media channels. Major platforms including Instagram, Facebook, and X (formerly Twitter) will now face much tighter timeframes to remove content deemed problematic by authorities, a measure designed to prevent harmful posts from rapidly spreading.

This regulatory tightening in the world’s most populous nation increases pressure on social media giants already confronting growing public anxiety and regulatory scrutiny globally over AI misuse, including misinformation propagation and the creation of sexualized imagery of children.

Digital rights advocates, however, warn that overly broad AI regulation risks eroding freedom of expression. The Internet Freedom Foundation (IFF), a prominent digital rights group, has criticized the compressed timeframe, arguing it would force platforms to become “rapid-fire censors” with little time for proper content evaluation.

Prime Minister Narendra Modi’s government has previously faced criticism from rights organizations regarding alleged restrictions on free expression targeting activists and opposition voices—claims the administration consistently denies. Critics also note that India’s global press freedom rankings have declined during Modi’s tenure.

Last year, Indian authorities launched an online portal called “Sahyog” (meaning “cooperate” in Hindi) to automate the process of issuing takedown notices to platforms. The latest rules expand coverage to content “created, generated, modified or altered through any computer resource,” exempting only material changed during routine or good-faith editing processes.

Under the new framework, platforms must clearly and permanently label synthetic or AI-manipulated media with markings that cannot be removed or suppressed. Problematic content could disappear almost immediately following a government notification.

“The timelines are so tight that meaningful human review becomes structurally impossible at scale,” said IFF chief Apar Gupta, adding that the system shifts control “decisively away from users,” while “grievance processes and appeals operate on slower clocks.” He noted that most internet users are not informed when authorities order their content deleted.

Digital rights activist Nikhil Pahwa was more direct, calling the mechanism “automated censorship.” The regulations also require platforms to deploy automated tools to prevent the spread of illegal content, including forged documents and sexually abusive material.

Pahwa questioned the practicality of the requirements, stating, “Unique identifiers are unenforceable. It’s impossible to do for infinite synthetic content being generated.” Gupta similarly expressed doubts about labeling effectiveness, noting, “Metadata is routinely stripped when content is edited, compressed, screen-recorded, or cross-posted. Detection is error-prone.”

The US-based Center for the Study of Organized Hate (CSOH), in a report co-authored with the IFF, warned that the laws “may encourage proactive monitoring of content which may lead to collateral censorship,” with platforms likely erring on the side of caution to avoid penalties.

The regulations define synthetic content as information that “appears to be real” or is “likely to be perceived as indistinguishable from a natural person or real-world event.” This broad definition has raised concerns that legitimate content such as satire, parody, and political commentary using realistic synthetic media could be inappropriately targeted.

Despite these concerns, widespread access to AI tools has undeniably “enabled a new wave of online hate facilitated by photorealistic images, videos, and caricatures that reinforce and reproduce harmful stereotypes,” according to the CSOH report.

A recent high-profile incident involved Elon Musk’s AI chatbot Grok, which sparked outrage in January when users manipulated it to create millions of sexualized images of women and children by altering online images of real people.

“The government had to act because platforms are not behaving responsibly,” Pahwa acknowledged, while adding that the rules appear to have been implemented “without thought” to their broader implications.

10 Comments

  1. India’s tightening of social media content rules in the face of AI disinformation is understandable, but the accelerated takedown timeline is concerning. Upholding digital rights will require nuance.

  2. These new regulations underscore the global struggle to mitigate the spread of harmful AI-generated content. It will be important to monitor their effectiveness and impact on digital rights.

    • Absolutely. Tight content takedown timelines could lead to over-censorship if not implemented carefully. Transparency and recourse mechanisms will be crucial.

  3. Emma K. Taylor: Interesting move by India to address the challenges of AI-powered disinformation on social media. Balancing content moderation and free expression will be critical as they implement these new rules.

    • Agreed, it’s a delicate balance. India will need to ensure the rules are applied fairly and transparently to avoid concerns over censorship.

  4. Robert Rodriguez: As the world’s largest democracy, India’s approach to governing AI and social media will be closely watched. Striking the right balance between public safety and individual freedoms is no easy task.

    • Elizabeth Jackson: You make a good point. India’s solution could serve as a model or cautionary tale for other nations grappling with similar challenges around AI and disinformation.

  5. The reduction in content takedown time from 36 hours to 3 hours is an aggressive move. Platforms will need robust systems to comply, while guarding against erroneous removals.

  6. Mary N. Martin: Combating AI-fueled disinformation is a global priority, but India’s approach raises valid concerns about potential overreach. Careful monitoring will be essential.

  7. Elizabeth Moore: With over a billion internet users, India faces a daunting task in reining in the spread of AI-generated misinformation. These new rules aim to act quickly, but could backfire if not implemented thoughtfully.
