
India Tightens Rules on AI-Generated Content as Deepfake Concerns Mount

The Supreme Court of India has intensified scrutiny of AI-generated content as the government announces stricter regulations to combat the growing threat of deepfakes. Under new amendments to the Information Technology Rules, 2021, social media platforms including YouTube, Instagram, and X will be required to remove flagged synthetic content within three hours of notification by authorities.

The regulations, set to take effect on February 20, mandate that all AI-generated media carry clear labels and permanent identifiers or metadata marking the content as artificial. This regulatory shift represents India’s most significant step yet toward addressing the proliferation of manipulated media that has increasingly undermined public trust.

“India’s move to explicitly bring synthetically generated information within the IT Rules is a significant regulatory step,” said Malcolm Gomes, Chief Operating Officer at Privy by IDfy. “It recognises that deepfakes and AI-generated media are no longer fringe concerns but real risks to public trust and institutional credibility.”

Experts warn, however, that labeling requirements alone may prove insufficient against the rapid spread of synthetic content. Gomes emphasized that by the time content is identified and labeled, it may have already caused substantial damage.

“Synthetic content spreads extremely fast across platforms, and by the time it is labelled, it may already have influenced opinion, disrupted markets, or caused reputational harm,” he noted. “The real safeguard has to sit much earlier in the process, through proactive deepfake detection tools, stronger content verification systems, and continuous monitoring.”

The judiciary itself has not been immune to AI-related challenges. Supreme Court lawyer Atul Kumar pointed to recent cases that highlight the technology’s disruptive potential in legal proceedings.

“In Deepak Raheja vs Omkara, AI generated non-existent case references in a rejoinder affidavit,” Kumar explained. “Similarly, the Supreme Court questioned the Election Commission’s use of AI-driven software, observing that such tools were not based on ground realities.”

Kumar believes more comprehensive measures are necessary. “One directive from a court or a government order will not solve the deepfake challenge. We need detailed guidelines and stricter laws,” he said, adding that while AI can assist with administrative matters, decision-making in legal contexts must remain human-led.

The rapid evolution of deepfake technology presents unique challenges for organizations beyond regulatory compliance. What was once considered a distant threat has evolved into an immediate risk that could impact brand reputation, operational security, and financial stability.

“AI governance is quickly becoming a board-level trust priority, whether organisations are ready for it or not,” Gomes observed. “The misuse of AI-generated media can create regulatory exposure, financial liability, operational disruption, and long-term brand damage. This is not just a content moderation issue, it is an enterprise risk issue.”

Corporate boards increasingly need transparency into AI deployment, verification processes, and third-party risk management. Organizations that approach AI governance as an ongoing risk management priority rather than merely checking compliance boxes will be better positioned to maintain stakeholder trust.

India’s regulatory approach reflects a growing global concern about deepfakes. Similar initiatives have emerged in other jurisdictions, including the European Union’s Digital Services Act and various U.S. state laws targeting synthetic media. However, the effectiveness of these measures remains uncertain as the technology continues to advance.

The three-hour takedown requirement stands out as particularly stringent compared to international standards, raising questions about implementation challenges for platforms operating across multiple time zones and jurisdictions.

As deepfake technology becomes increasingly sophisticated and accessible, the race between regulation and innovation continues. While India’s new rules establish an important foundation, experts agree that a more comprehensive framework combining proactive detection, judicial oversight, and legislative authority will be necessary to effectively combat synthetic media threats in the long term.

Without such robust measures, labeling requirements may ultimately prove inadequate against the growing tide of AI-generated deception that threatens to undermine public discourse and institutional trust.


9 Comments

  1. The Supreme Court’s scrutiny of this issue underscores the growing threat of synthetic media to public trust. Mandatory labeling could be a useful step, but I’m curious to see if it will be enough to address the complex challenge of AI-fueled disinformation.

    • Michael Martinez:

      Agreed, the labeling requirement is a start, but tackling AI-generated misinformation will likely require a multi-pronged approach. Curious to see how this evolves and whether other countries follow suit.

  2. This move by the Indian government is a step in the right direction, but the effectiveness of labeling requirements will depend on how well they are enforced and whether the public can reliably distinguish authentic from synthetic content.

  3. Interesting move by the Indian government to combat the rise of deepfakes and AI-generated content. Labeling requirements seem like a sensible approach, but I wonder about the practical challenges of enforcement and effectiveness in curbing misinformation.

  4. William Thompson:

    The Supreme Court’s involvement in this issue underscores the gravity of the deepfake challenge. Mandatory labeling is a reasonable approach, but I wonder if it will be sufficient to counter the potential for large-scale manipulation of information.

  5. As the use of AI-generated content continues to grow, it’s encouraging to see the Indian government taking proactive steps to combat misinformation. Labeling requirements could be a useful tool, but ongoing vigilance and adaptation will be key.

    • Elizabeth Johnson:

      Agreed, this is a complex issue that will require a sustained effort to address. Curious to see how the new regulations are implemented and if they prove effective in restoring public trust.

  6. William Williams:

    Stricter regulations on AI-generated content are necessary as deepfakes become more sophisticated and widespread. Labeling requirements could help, but combating misinformation will require a multi-faceted approach involving technological, legal, and educational measures.

  7. This regulatory shift in India highlights the urgent need to address the proliferation of deepfakes and synthetic media. Mandatory labeling seems like a prudent measure, but the effectiveness remains to be seen.
