India has unveiled a significant regulatory framework targeting AI-generated content on digital platforms, as the Ministry of Electronics and Information Technology (MeitY) announced draft amendments to the Information Technology Rules aimed at enhancing transparency and accountability in the digital space.
The proposed Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, which would amend the 2021 Rules, introduce comprehensive requirements for labeling and identifying synthetically generated content. These amendments are expected to be implemented later this year amid growing concerns about misinformation and manipulation in India’s vast online ecosystem.
Under the new framework, “synthetically generated information” encompasses any content created, modified, or altered using computer resources in a way that makes it appear authentic. Social media platforms will be required to prominently label AI-generated content and embed permanent unique identifiers or metadata that cannot be removed or altered, ensuring users can distinguish between authentic and artificially created content.
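The draft does not prescribe a specific metadata format, only that a permanent unique identifier or metadata accompany synthetic content. As an illustration of what such a provenance record might look like, here is a minimal sketch; every field name below is hypothetical, not drawn from the rules themselves:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_synthetic_content_record(content_bytes: bytes, tool_name: str) -> dict:
    """Build a hypothetical provenance record for AI-generated content.

    The field names are illustrative only; the draft rules require a
    permanent identifier or metadata but do not define a schema."""
    return {
        "synthetic": True,  # declares the content as AI-generated
        # Hash the content so the record is bound to these exact bytes:
        # any edit to the content breaks the match.
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        "generator": tool_name,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

record = make_synthetic_content_record(b"example image bytes", "example-gen-model")
print(json.dumps(record, indent=2))
```

Note that a plain metadata record like this can be stripped; meeting the "cannot be removed or altered" requirement in practice would likely involve techniques such as cryptographic signing or watermarking, which are beyond this sketch.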
The draft rules specifically target Significant Social Media Intermediaries (SSMIs) – platforms with more than 5 million registered users in India – such as Facebook, YouTube, and Snapchat. These platforms will face additional compliance requirements, including verifying user declarations about synthetic content through appropriate technical measures and automated tools.
According to the proposal, visual labels identifying synthetic content must cover at least 10 percent of the screen area, while audio disclosures must play at the beginning of the content, making AI-generated material immediately recognizable to users.
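To make the 10 percent figure concrete, the following sketch computes the minimum height of a label rendered as a full-width banner. The banner layout is an assumption for illustration; the draft specifies only the coverage ratio, not the label's shape or placement:

```python
import math

def min_label_height(screen_w: int, screen_h: int, coverage_pct: int = 10) -> int:
    """Minimum pixel height of a full-width banner label whose area is at
    least coverage_pct percent of the screen (the draft's figure is 10).

    The full-width banner is an illustrative assumption, not a layout
    mandated by the draft rules."""
    # A full-width banner of height h covers (screen_w * h) pixels of a
    # (screen_w * screen_h) screen, so the required height is independent
    # of the screen width.
    return math.ceil(screen_h * coverage_pct / 100)

# On a 1080x1920 portrait display, a compliant banner would need to be
# at least 192 px tall.
print(min_label_height(1080, 1920))
```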
The amendments also introduce a notable shift in platform liability. If a platform knowingly allows unlabelled or falsely declared AI-generated content to circulate, it will be considered a failure to exercise due diligence under the IT Act. This potentially exposes companies to greater legal consequences for mishandled synthetic content.
However, the draft includes a protective clause clarifying that removing or disabling access to synthetically generated content in compliance with grievance mechanisms will not amount to a violation of existing intermediary liability protections.
These amendments represent the latest evolution of India’s digital regulatory framework, which began with the original IT rules published in February 2021 and underwent previous amendments in October 2022 and April 2023. The 2025 amendment specifically addresses the rapid advancement of AI technology and its growing prevalence across digital platforms.
The timing of these regulations is particularly significant as generative AI tools become increasingly sophisticated and accessible to the general public. The ministry emphasized that the move is part of its broader strategy to maintain an “open, safe, trusted and accountable Internet” while addressing growing risks of misinformation, impersonation, and election manipulation driven by generative AI technologies.
Industry experts note that India joins a growing list of countries implementing regulations specifically targeting AI-generated content. The approach reflects global concerns about the potential for synthetic media to undermine trust in digital information, particularly as the technology becomes more convincing and widespread.
The implications for tech companies operating in India could be substantial, potentially requiring significant updates to their content moderation systems and user interfaces to comply with the new labeling and verification requirements.
MeitY has invited stakeholder feedback on the draft amendments until November 6, 2025, via email at itrules.consultation@meity.gov.in, indicating an openness to industry and public input before finalizing the regulations.
As India continues to cement its position as one of the world’s largest digital markets, these regulations may set important precedents for how AI-generated content is managed and identified across the global digital landscape.
12 Comments
Regulating AI-generated content is a complex challenge, but these draft rules from MeitY appear to strike a balance between enabling innovation and ensuring accountability. I’m curious to see how they’re implemented in practice.
Yes, the devil will be in the details. Effective enforcement and clear guidelines for platforms and users will be crucial for these rules to have the desired impact.
While these rules focus on social media, I wonder if they could be expanded to cover other digital platforms and content types in the future. The challenge of synthetic content extends beyond just social media.
While deepfakes and synthetic content can be used for malicious purposes, they also have legitimate applications. I hope the final rules find a way to support beneficial uses while mitigating the risks.
That’s a fair point. Striking the right balance between innovation and risk mitigation will be key. Flexible, adaptable regulations may be needed as this technology continues to evolve.
This is an important step in addressing the growing problem of deepfakes and synthetic content online. Transparency and clear labeling will help users distinguish authentic from artificially created information.
Agreed. The proposed rules seem like a reasonable approach to tackle this issue and protect the integrity of online discourse.
Combating misinformation and manipulation online is a global challenge. It’s good to see India taking proactive steps with these draft IT rules. Curious to see how other countries respond to similar issues.
Absolutely. Coordinated, international efforts to address synthetic content and deepfakes will likely be more effective than fragmented, national approaches. This could set an important precedent.
The proposed unique identifiers and metadata requirements seem like a good way to ensure the provenance of online content can be verified. Transparency is key to building trust in the digital ecosystem.
Enforcing these rules effectively will be crucial. I hope MeitY works closely with platforms and experts to develop clear, practical guidelines that can be consistently applied.
Clearly labeling AI-generated content is a sensible requirement. Users deserve to know when they’re engaging with artificially created information, not just authentic human-generated content.