India Moves to Regulate Deepfakes and Synthetic Content Amid Rising Concerns
The Indian government is considering amendments to its Information Technology laws to combat the growing threat of AI-generated synthetic content and deepfakes. The proposed changes would impose greater accountability on major social media platforms like Facebook, Instagram, Google, YouTube, and X to prevent the spread of misinformation.
IT Minister Ashwini Vaishnaw announced on Wednesday that the government has received numerous requests to address synthetic content and deepfakes. “In Parliament as well as many other fora, people have demanded that something should be done about the deepfakes which are harming society,” Vaishnaw stated. He emphasized that deepfakes using prominent individuals’ images are harming personal lives and privacy and creating misconceptions.
The central aim of the proposed regulations is to ensure transparency for users. “The step we are taking is making sure that users get to know whether something is synthetic or real. Once users know, they can take a call,” Vaishnaw explained.
In its draft rules seeking stakeholder comments, the IT Ministry defined synthetically generated information as “information which is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true.” The ministry emphasized its commitment to ensuring an open, safe, trusted, and accountable internet environment.
The proposed amendments include several key provisions. They would require clear labeling and metadata embedding for synthetic content to help users distinguish between artificial and authentic material. Visual markers would need to cover at least 10% of the display area of synthetic images and videos, while audio disclosures would be required at the start of modified recordings.
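The draft rules describe the intended outcome, a prominent visual marker plus embedded metadata, rather than any particular implementation. Purely as an illustration, the sketch below shows one way an image could be stamped with a banner covering 10% of its area and tagged with a machine-readable flag; the label text, the metadata key name, and the file names are assumptions for this example, not terms taken from the draft.

```python
# Illustrative sketch only (hypothetical implementation of the draft's labeling idea).
# Requires Pillow. The "synthetic_content" key and label wording are assumptions.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")
    w, h = img.size

    # Visible marker: a full-width banner whose height is 10% of the image,
    # so the labelled region covers 10% of the total image area.
    band_h = max(1, int(0.10 * h))
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, h - band_h), (w, h)], fill=(0, 0, 0))
    draw.text((10, h - band_h + band_h // 3),
              "AI-GENERATED / SYNTHETIC CONTENT", fill=(255, 255, 255))

    # Machine-readable metadata: a PNG text chunk flagging the file as synthetic.
    meta = PngInfo()
    meta.add_text("synthetic_content", "true")
    img.save(dst_path, "PNG", pnginfo=meta)

label_synthetic_image("generated.png", "generated_labeled.png")
```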
Social media platforms would face enhanced verification and declaration obligations, including implementing technical measures to identify and appropriately label synthetic content. Ministry officials indicated that “appropriate action” could be taken against platforms that fail to credibly address such information.
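Continuing the illustration above, a platform-side check might simply read back that hypothetical metadata flag before deciding how to present an upload; again, the key name and the handling logic are assumptions made for the sketch, not measures specified by the ministry.

```python
# Illustrative sketch only: inspect an uploaded PNG for the hypothetical
# "synthetic_content" flag embedded in the previous example.
from PIL import Image

def is_declared_synthetic(path: str) -> bool:
    """Return True if the file's metadata already declares it as synthetic."""
    with Image.open(path) as img:
        return str(img.info.get("synthetic_content", "")).lower() == "true"

if __name__ == "__main__":
    if is_declared_synthetic("upload.png"):
        print("Upload is declared synthetic: show the required label to users.")
    else:
        print("No declaration found: prompt the uploader or run further checks.")
```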
“These amendments are intended to promote user awareness, enhance traceability, and ensure accountability while maintaining an enabling environment for innovation in AI-driven technologies,” the ministry stated in its announcement.
The move comes amid growing global concern about deepfakes and synthetic media. Recent incidents of convincing AI-generated content going viral on social platforms have highlighted risks ranging from reputation damage to election manipulation and financial fraud.
India’s digital landscape has been particularly vulnerable to misinformation, with its vast and rapidly growing internet user base. The country’s upcoming elections and diverse social fabric make addressing synthetic content a pressing priority for authorities.
The ministry has invited feedback and comments on the draft amendments until November 6, giving stakeholders an opportunity to weigh in before the rules are finalized.
In a related development, the IT Ministry also announced measures to streamline the content takedown process for social media platforms. The new rules specify that requests for removal of “unlawful information” can only be issued by senior officials and must include precise details and justifications. All such directives will undergo monthly review by high-ranking government secretaries to ensure actions remain “necessary, proportionate, and consistent with law.”
These dual initiatives reflect the Indian government’s expanding efforts to regulate digital content while balancing innovation, free expression, and public safety concerns. The move places India among a growing number of countries worldwide implementing regulatory frameworks to address the challenges posed by rapidly advancing AI technologies.