India’s technology ministry has proposed new regulations that would require digital platforms to clearly label and identify AI-generated content, marking a significant expansion of the country’s digital governance framework amid growing concerns about synthetic media.

The Ministry of Electronics and Information Technology (MeitY) recently released draft amendments to the Information Technology Rules that introduce stringent requirements for handling artificially generated or modified content across social media platforms operating in India. The proposed changes, expected to take effect later this year, represent the government’s response to the rapid proliferation of AI tools capable of creating deceptively realistic content.

Under the draft amendment, “synthetically generated information” encompasses any digital content that has been created, altered, or modified using computer resources in ways that make it appear authentic. The rules would mandate that platforms prominently label such AI-generated material, with labels required to cover at least 10 percent of the visual display area or, for audio content, the first 10 percent of the clip’s duration.
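
To illustrate what that proportionality threshold implies in practice, the short Python sketch below computes the minimum label footprint for a video frame and an audio clip. The 10 percent figure comes from the draft as reported above; the frame size, clip length, and function names are hypothetical examples chosen for illustration, not values or terms from the rules.

```python
# Illustrative sketch only: the 10 percent threshold is from the reported draft;
# all dimensions, durations, and names here are hypothetical examples.

LABEL_COVERAGE_RATIO = 0.10  # at least 10% of the visual area / initial audio duration


def min_label_area(frame_width_px: int, frame_height_px: int) -> int:
    """Minimum on-screen area (in pixels) an AI-content label would need to cover."""
    return int(frame_width_px * frame_height_px * LABEL_COVERAGE_RATIO)


def min_audio_disclosure_seconds(clip_seconds: float) -> float:
    """Minimum initial portion of an audio clip that would carry the disclosure."""
    return clip_seconds * LABEL_COVERAGE_RATIO


if __name__ == "__main__":
    # Example: a 1920x1080 video frame and a 60-second audio clip.
    print(min_label_area(1920, 1080))          # 207360 square pixels
    print(min_audio_disclosure_seconds(60.0))  # 6.0 seconds
```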

Social media companies would also need to embed permanent unique identifiers or metadata in synthetic content that cannot be removed or altered. This technical requirement aims to ensure transparency even when content is shared across multiple platforms.
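
The draft, as reported, does not prescribe a particular embedding mechanism. Purely as a simplified sketch of the idea, the Python below attaches a provenance identifier to a PNG image’s metadata using Pillow. The field names are hypothetical, and plain metadata of this kind can be stripped, so a compliant implementation would more likely rely on robust watermarking or a provenance standard such as C2PA rather than this approach.

```python
# Simplified illustration of attaching a provenance identifier to an image.
# The metadata keys ("synthetic-content-id", "generator") are hypothetical;
# plain PNG text chunks can be removed, so this alone would NOT satisfy a
# "cannot be removed or altered" requirement.
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_synthetic_image(src_path: str, dst_path: str, generator: str) -> str:
    """Embed a unique identifier in a PNG's text metadata and return it."""
    content_id = str(uuid.uuid4())
    metadata = PngInfo()
    metadata.add_text("synthetic-content-id", content_id)
    metadata.add_text("generator", generator)

    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=metadata)
    return content_id


def read_synthetic_tag(path: str) -> dict:
    """Read back any text metadata embedded in the PNG."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}) or {})
```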

For major social media platforms with more than 5 million Indian users—classified as “significant social media intermediaries” (SSMIs)—additional responsibilities would apply. These companies, which include Facebook, YouTube, and Snapchat, would need to implement verification systems and automated tools to validate user declarations about synthetic content.
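
The reported draft leaves the design of such checks to the platforms themselves. As a sketch only, the Python below shows one way a declaration check could be structured: a user’s self-declaration is compared against the score of an automated detector (a hypothetical stand-in, not any tool named in the draft), and mismatches are escalated for human review. The threshold and field names are illustrative assumptions.

```python
# Hypothetical sketch of validating a user's "is this AI-generated?" declaration.
# detector_score stands in for whatever automated classifier a platform might use;
# the threshold and field names are illustrative, not drawn from the draft rules.
from dataclasses import dataclass


@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool   # what the uploader declared
    detector_score: float      # 0.0 (likely authentic) .. 1.0 (likely synthetic)


def validate_declaration(upload: Upload, threshold: float = 0.8) -> str:
    """Return an action: label the content, accept it, or escalate for review."""
    likely_synthetic = upload.detector_score >= threshold
    if upload.declared_synthetic:
        return "label-as-synthetic"
    if likely_synthetic:
        # Declaration says "not synthetic" but the automated check disagrees.
        return "escalate-for-review"
    return "accept"


if __name__ == "__main__":
    print(validate_declaration(Upload("vid-123", False, 0.93)))  # escalate-for-review
```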

“If a platform knowingly allows unlabelled or falsely declared AI-generated content, it will be deemed to have failed in exercising due diligence under the IT Act,” the draft states, potentially exposing companies to legal consequences for non-compliance.

The amendments also clarify that removing or disabling access to improperly labeled synthetic content through established grievance mechanisms would not jeopardize the intermediary liability protections, or “safe harbour,” that platforms currently enjoy.

These proposed rules build upon India’s existing digital regulation framework, which was first established in February 2021 and subsequently updated in October 2022 and April 2023. The 2025 amendment specifically addresses the challenges posed by generative AI technologies that have become increasingly sophisticated and accessible to users.

Industry experts note that the regulations come at a time when concerns about AI-generated misinformation are particularly acute, with several states in India scheduled for elections in the coming months and national elections expected next year.

In its statement, MeitY emphasized that the regulations are designed to maintain an “open, safe, trusted and accountable Internet” while addressing growing risks of misinformation, impersonation, and election manipulation driven by generative AI technologies.

The rules form part of a global trend toward regulating synthetic media. Several jurisdictions, including the European Union through its AI Act, have begun implementing similar transparency requirements for AI-generated content.

For technology companies operating in India’s booming digital market, these regulations will likely necessitate significant technical adjustments and potentially increased content moderation costs. Small and medium-sized platforms may face particular challenges in implementing the required verification systems.

The ministry has invited stakeholder feedback on the draft amendment until November 6, 2025, providing an opportunity for industry participants to shape the final version of the regulations before implementation.

