India’s government proposed new regulations on Wednesday requiring artificial intelligence companies and social media platforms to clearly label AI-generated content, aiming to combat the spread of deepfakes and misinformation across the country.
The draft rules mandate that platforms must ensure AI-generated visual content carries markers covering at least 10 percent of the surface area, while audio clips must identify AI origins in the initial 10 percent of their duration. The regulations would apply to major technology companies including OpenAI, Meta, X (formerly Twitter), and Google.
Under the proposed framework, social media companies would be required to obtain user declarations confirming whether uploaded information is AI-generated and implement “reasonable technical measures” to verify this content. The rules aim to “ensure visible labelling, metadata traceability, and transparency for all public-facing AI-generated media,” according to India’s Information Technology Ministry.
The ministry has invited public and industry feedback on the proposal until November 6, acknowledging that the potential for misuse of generative AI tools “to cause user harm, spread misinformation, manipulate elections, or impersonate individuals has grown significantly.”
India’s move follows similar regulatory initiatives by the European Union and China, highlighting growing global concerns about AI-generated content. With nearly one billion internet users, India faces unique challenges given its diverse ethnic and religious communities where misinformation can potentially trigger serious social unrest.
Policy experts note that India’s approach represents a significant regulatory development in the global AI landscape. Dhruv Garg, founding partner at the Indian Governance and Policy Project, called the 10 percent surface area requirement “among the first explicit attempts globally to prescribe a quantifiable visibility standard.” If implemented, these rules would require AI platforms operating in India to build automated labelling systems that identify and mark AI-generated content at the point of creation.
The regulations come as India has already witnessed alarming cases of deepfakes during recent elections. Legal battles over AI-generated content are also underway in Indian courts, including a high-profile case involving Bollywood stars Abhishek Bachchan and Aishwarya Rai Bachchan, who have asked a New Delhi court to order the removal of AI videos that allegedly infringe on their intellectual property rights and to bar further such content. The celebrities have also challenged YouTube’s AI training policy as part of their legal action.
For major technology companies, India represents a crucial market. OpenAI CEO Sam Altman noted earlier this year that India stands as the company’s second-largest market by number of users, with its user base having tripled over the past year. This rapid growth underscores both the commercial importance of the Indian market and the potential impact of new regulatory requirements on AI companies’ operations.
The proposed regulations signal India’s growing determination to establish guardrails around emerging technologies while balancing innovation with public safety concerns. By placing more responsibility on technology platforms, the government aims to create accountability mechanisms that could help mitigate the risks associated with increasingly sophisticated AI-generated content.
OpenAI, Google, and Meta did not immediately respond to media requests for comment on India’s proposed regulations.