India’s government has enacted significant changes to its digital media regulations, requiring platforms to clearly label AI-generated content as part of a broader push to combat deepfakes and misinformation.
The Union Government has formally amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandating that AI-generated and synthetic content across digital platforms must now carry prominent and visible labels, according to a notification issued by the Ministry of Electronics and Information Technology (MeitY).
Under the new regulations, digital platforms providing tools for creating or sharing “synthetic content” must not only ensure clear labeling but also, where technically feasible, embed permanent metadata or provenance identifiers to help trace content origins. This measure aims to provide transparency to users about the artificial nature of the content they encounter online.
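The notification does not prescribe a specific provenance format, but the general idea of a "permanent metadata or provenance identifier" can be illustrated with a minimal sketch: a record that binds a content hash to a synthetic-content label, sealed with a keyed signature so tampering is detectable. All names here (the signing key, the record fields) are illustrative assumptions, not anything mandated by the rules; production systems would more likely use an established standard such as C2PA.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key -- illustrative only; a real
# deployment would use a managed secret or asymmetric signatures.
PLATFORM_KEY = b"demo-signing-key"

def make_provenance_record(content: bytes, tool_name: str) -> dict:
    """Build a simple provenance record: a SHA-256 hash of the content,
    a synthetic-content label, and an HMAC tag binding them together."""
    record = {
        "label": "synthetically-generated",
        "tool": tool_name,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hmac"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance_record(content: bytes, record: dict) -> bool:
    """Check that the record matches the content and its HMAC tag is intact."""
    claimed = dict(record)
    tag = claimed.pop("hmac", "")
    if claimed.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

A record like this, embedded in a file's metadata or stored alongside it, lets a downstream platform check both that the content is declared synthetic and that neither the content nor the declaration has been altered since signing.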
The amendments introduce formal definitions of “audio, visual or audio-visual information” and “synthetically generated information,” describing them as content artificially created, modified, or altered using computer resources in a manner that makes them appear authentic or indistinguishable from real people or events.
“Synthetically generated information means audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true,” the notification explains. The definition specifically targets content that could be “perceived as indistinguishable from a natural person or real-world event.”
Importantly, the government has carved out exceptions for routine editing, accessibility enhancements, and good-faith formatting, recognizing the legitimate uses of content modification technologies.
The regulatory changes come amid growing concerns about the misuse of artificial intelligence to create deepfakes—highly realistic but fabricated videos or audio recordings—that can spread misinformation or impersonate individuals without consent. Such technologies have created new challenges for content moderation and information integrity across global digital platforms.
Major social media intermediaries will face enhanced due-diligence obligations under the new framework. These include implementing automated systems to prevent the creation and circulation of unlawful synthetic content, particularly material involving child sexual abuse, misleading impersonations, and falsified electronic records.
The notification explicitly prohibits synthetically generated content that “contains child sexual exploitative and abuse material, non-consensual intimate imagery content, or is obscene, pornographic, paedophilic, invasive of another person’s privacy, including bodily privacy, vulgar, indecent or sexually explicit.”
Digital platforms will also need to collect declarations from users regarding whether uploaded content is AI-generated and verify these disclosures through appropriate mechanisms—adding another layer of accountability to the content creation process.
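The rules leave the "appropriate mechanisms" for verifying user declarations unspecified. One plausible pattern, sketched below purely as an assumption, is to cross-check each upload's declaration against an automated synthetic-content detector and route disagreements to human review. The `detector_score` field and the `0.8` threshold are hypothetical placeholders, not values from the notification.

```python
from dataclasses import dataclass

# Assumed cutoff above which the (hypothetical) detector treats
# content as likely synthetic.
SYNTHETIC_THRESHOLD = 0.8

@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool   # the user's declaration at upload time
    detector_score: float      # hypothetical classifier output in [0, 1]

def triage(upload: Upload) -> str:
    """Decide how to handle an upload by comparing the user's
    declaration with the automated detector's verdict."""
    detected = upload.detector_score >= SYNTHETIC_THRESHOLD
    if upload.declared_synthetic:
        # Declaration present: publish with the mandated visible label.
        return "publish-with-label"
    if detected:
        # User denied synthetic origin but the detector disagrees:
        # escalate rather than auto-publish.
        return "flag-for-review"
    return "publish"
```

In this sketch, honest declarations are always honored with a label, while the only escalation path is a mismatch between the declaration and the detector, which keeps reviewer load focused on potential misdeclarations.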
The amendments significantly tighten compliance timelines. Intermediaries must now respond to lawful takedown orders within three hours in certain cases, while grievance redressal timeframes have also been shortened, reflecting the government’s emphasis on swift action against potentially harmful content.
These changes represent India’s response to the global challenge of regulating AI-generated content. Similar initiatives are being pursued in other jurisdictions, including the European Union’s AI Act and various U.S. state laws addressing deepfakes.
Industry experts suggest these regulations could reshape content moderation practices across India’s digital ecosystem, which serves over 800 million internet users. Platform operators will need to update their technologies and policies to comply with the new labeling and verification requirements.
The amended rules are scheduled to take effect from February 20, 2026, giving digital platforms a transition window to implement the necessary changes to their systems and practices. This implementation period acknowledges the technical complexities involved while maintaining the regulatory push toward greater transparency and accountability in India’s digital space.