Deepfakes and Digital Trust: India’s Regulatory Response to Synthetic Media Threats

A recent viral Instagram video featuring Finance Minister Nirmala Sitharaman promoting a scheme that promised extraordinary investment returns was promptly identified as a deepfake by the government’s fact-checking unit. This incident highlights a growing concern in India’s digital landscape, where synthetic media is increasingly blurring the line between reality and fiction.

The problem is widespread and accelerating. According to a 2024 survey, over 75% of Indian respondents reported encountering deepfake content within the past year, and 38% had been targeted by scams using the technology. From celebrities to government ministers, the victims of malicious deepfakes span India’s public sphere.

While deepfake technology has legitimate applications in entertainment and satire, its misuse for impersonation, reputation damage, and deliberate misinformation has triggered alarm among authorities and the public alike. The risks are particularly acute in India due to its massive online population, varying levels of media literacy, fragile information ecosystem, and complex socio-political environment.

In response to these challenges, the Indian government released draft amendments to the Information Technology Rules on October 22, 2025. The proposed changes would require artificial intelligence platforms and social media intermediaries to clearly label “synthetic information,” with the stated aim of creating what officials describe as an “open, safe, trusted, and accountable internet.”

The draft amendments envision a three-tier labeling system: first, at the point of content generation by AI tools; second, at the user level when sharing such content; and third, at the platform level, where major social media companies must ensure proper labeling of synthetic content.
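To make the division of responsibility concrete, here is a minimal Python sketch of how the three tiers might compose in software. The tier names, fields, and methods are illustrative assumptions for this article, not terminology from the draft itself:

```python
from dataclasses import dataclass, field
from enum import Enum


class LabelTier(Enum):
    """The three tiers at which the draft amendments envision labeling."""
    GENERATION = "ai_tool"         # tier 1: applied by the AI tool that creates the content
    USER_DECLARATION = "uploader"  # tier 2: declared by the user when sharing
    PLATFORM = "intermediary"      # tier 3: enforced by the social media platform


@dataclass
class SyntheticLabel:
    content_id: str
    applied_at: list[LabelTier] = field(default_factory=list)

    def apply(self, tier: LabelTier) -> None:
        if tier not in self.applied_at:
            self.applied_at.append(tier)

    def is_fully_labeled(self) -> bool:
        # Under the draft scheme, all three tiers should eventually attest.
        return set(self.applied_at) == set(LabelTier)


label = SyntheticLabel(content_id="video-123")
label.apply(LabelTier.GENERATION)
label.apply(LabelTier.USER_DECLARATION)
label.apply(LabelTier.PLATFORM)
print(label.is_fully_labeled())  # True
```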

While the government’s intent appears sound, experts have raised concerns about the amendments’ broad and ambiguous scope. The definition of “synthetically generated information” covers content “artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true.”

This sweeping definition fails to distinguish between harmless edits like removing a stranger from a photo and malicious deepfakes impersonating public figures. It raises questions about whether users must disclose minor adjustments like color correction or only significant alterations.

The phrase “reasonably appears to be authentic or true” introduces further complications. It remains unclear how this standard would apply across different media formats and contexts. Would satirical AI-generated content that looks realistic require labeling? The determination of “reasonableness” would likely fall to social media platforms, potentially transforming them from mere conduits of information into active content gatekeepers as they attempt to avoid liability.

Technical requirements in the draft amendments specify that disclaimers on synthetic content must cover at least 10% of the visual space, or occupy the first 10% of an audio clip’s duration. Critics have questioned the arbitrary nature of these percentages and the lack of evidence supporting them. The amendments also fail to specify the necessary level of detail in these labels: whether a simple “AI-generated” disclaimer suffices, or whether specific manipulations must be identified.
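Because the draft does not define “visual space” (total frame area? height? width?), even a direct implementation of the 10% rule forces an interpretive choice. A short sketch under one plausible reading; the function names and the frame-area interpretation are assumptions of this illustration, not the rule’s text:

```python
def visual_label_area(width_px: int, height_px: int, fraction: float = 0.10) -> int:
    """Minimum on-screen area (in pixels) the disclaimer must cover,
    reading '10% of visual space' as 10% of total frame area."""
    return int(width_px * height_px * fraction)


def audio_disclaimer_seconds(duration_s: float, fraction: float = 0.10) -> float:
    """Length of the opening disclaimer for an audio clip:
    the first 10% of its running time."""
    return duration_s * fraction


# A 1080p frame: 1920 * 1080 * 0.10 = 207,360 px of mandatory label area.
print(visual_label_area(1920, 1080))    # 207360
# A 3-minute voice clip: the disclaimer occupies the first 18 seconds.
print(audio_disclaimer_seconds(180.0))  # 18.0
```

Reading “10% of visual space” as 10% of frame height instead would yield a very different label, which is precisely the kind of ambiguity critics point to.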

As AI technology advances, distinguishing synthetic media from authentic content will become increasingly difficult for both users and platforms. Social media companies must not only identify synthetic content but also determine whether it causes harm, a complex judgment that current detection technologies cannot reliably support. This could lead platforms to over-censor legitimate content to avoid liability.

Detection systems themselves remain imperfect. Meta’s AI labeling system recently misidentified genuine content from the Kolkata Knight Riders cricket team as AI-generated, demonstrating how false positives can undermine trust. Conversely, unlabeled synthetic content may be perceived as authentic, further distorting the information landscape.
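The Kolkata Knight Riders incident also hints at a base-rate problem: when genuinely synthetic content is a small fraction of uploads, even an accurate detector produces mostly false alarms among the items it flags. A worked calculation with illustrative numbers (these are not figures from Meta or the draft):

```python
def false_positive_share(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Of all items a detector flags as synthetic, what fraction are
    actually genuine? A standard Bayes / base-rate calculation."""
    true_flags = prevalence * sensitivity           # synthetic items correctly flagged
    false_flags = (1 - prevalence) * (1 - specificity)  # genuine items wrongly flagged
    return false_flags / (true_flags + false_flags)


# If 1% of uploads are synthetic and the detector is 95% sensitive and
# 95% specific, roughly 84% of everything it flags is actually genuine.
print(round(false_positive_share(0.01, 0.95, 0.95), 3))  # 0.839
```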

Beyond visible labeling, the draft amendments propose embedding unique metadata or identifiers to verify content authenticity. However, the proposal lacks essential privacy safeguards regarding how this data would be stored and anonymized, raising concerns about user privacy and anonymity.
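The draft specifies no mechanism for these identifiers, but a minimal sketch, assuming an HMAC-based scheme invented here purely for illustration, shows both the appeal and the privacy tension: binding a provenance record to the content bytes means any later edit breaks verification, yet it also means someone must hold a register linking content to its originator.

```python
import hashlib
import hmac
import json

# Placeholder key: a real scheme would need key management, and the
# question of WHO holds these records is exactly what the draft leaves open.
SIGNING_KEY = b"platform-held-secret"


def make_provenance_tag(content: bytes, generator_id: str) -> dict:
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator_id,  # the field where anonymity concerns arise
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hmac"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance_tag(content: bytes, record: dict) -> bool:
    claimed = dict(record)
    mac = claimed.pop("hmac")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(mac, expected)
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())


tag = make_provenance_tag(b"frame bytes...", generator_id="tool-xyz")
print(verify_provenance_tag(b"frame bytes...", tag))  # True
print(verify_provenance_tag(b"edited bytes", tag))    # False: content changed
```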

Critics argue that monitoring all users of synthetic media effectively treats everyone as a potential offender. This approach could particularly harm vulnerable groups who rely on anonymity online for safety, such as survivors of domestic violence.

The fundamental question remains whether these amendments can effectively address the challenges posed by synthetic content. Many experts believe the answer is no, as the current approach is reactive rather than preventative. Harmful synthetic content can spread widely before being detected and labeled, and even after removal, the damage to victims often persists.

India’s strategy for managing synthetic media risks would benefit from stronger evidence-based policies, comprehensive impact assessments on labeling effectiveness, and greater investment in media literacy education. As the country navigates these complex digital challenges, strengthening public trust in institutions may prove more effective than relying solely on technological interventions like content labeling.

7 Comments

  1. Isabella Jackson

    This article emphasizes the urgent need for India to address the deepfake challenge. The risks to digital trust and the potential for scams and reputation damage are significant. I’m curious to learn more about the specific policy and technological solutions being explored to combat this issue.

  2. Interesting article on the challenges India faces in combating synthetic media. Deepfakes can be incredibly deceptive and the scale of the problem seems daunting. Curious to hear more about the regulatory responses being considered to address this threat to digital trust.

  3. William Taylor

    The rise of deepfakes is certainly a concerning trend, especially given India’s large online population and varying levels of media literacy. Fact-checking efforts will be crucial, but the speed at which these synthetic media can spread poses a real challenge. I wonder what other proactive measures the government is exploring.

    • James Martinez

      You raise a good point. The speed of deepfake dissemination is a major hurdle. Effective regulation and public awareness campaigns will likely be needed to stay ahead of this problem.

  4. William Jackson

    This article highlights an important issue that goes beyond just India. Deepfakes pose a global threat to digital trust and integrity. I’m curious to learn more about the technical and policy solutions being developed to combat this emerging form of disinformation.

  5. Elizabeth Jones

    The impact of deepfakes on public figures like the Finance Minister is concerning. Malicious actors can use this technology to undermine trust in institutions and spread misinformation. I hope India’s regulatory response evolves quickly to stay ahead of this rapidly evolving threat.

  6. The scale of the deepfake problem in India is quite alarming. With over 75% of respondents encountering this content, it’s clear the government needs to take decisive action. Labeling alone may not be enough – a multifaceted approach targeting both the technology and the incentives behind it will likely be required.
