Indian Parliament Receives Committee Report on Combating Fake News
India’s parliamentary oversight of online misinformation took a formal step forward this week as the Standing Committee on Communications and Information Technology’s report on fake news was officially tabled in the Lok Sabha on December 2. The comprehensive document, first published in October, now enters the parliamentary record, positioning its recommendations for potential inclusion in future media regulations.
The committee's 22nd report outlines a sweeping agenda aimed at strengthening India's approach to combating misinformation across traditional and digital media platforms. Among its key proposals, the committee calls for establishing a clear legal definition of fake news that could be incorporated into regulatory frameworks spanning the print, broadcast, and digital media sectors.
The recommendations include stricter penalties for spreading misinformation, including the possible cancellation of accreditation for journalists found guilty of creating or disseminating fake news. News organizations would be required to implement mandatory internal fact-checking systems and appoint ombudsmen as part of a strengthened self-regulatory framework.
The committee emphasizes improved coordination between the Ministry of Information and Broadcasting (MIB) and the Ministry of Electronics and Information Technology (MeitY), particularly in areas where their regulatory jurisdictions overlap in the digital ecosystem. This reflects growing recognition of how misinformation crosses traditional media boundaries.
In a notable development that could reshape platform liability, the report suggests reviewing safe harbor protections currently provided to intermediaries under Section 79 of the Information Technology Act. The committee specifically questions whether these protections should apply when platforms algorithmically amplify misleading content, potentially shifting greater responsibility to social media companies.
The report also addresses emerging challenges posed by artificial intelligence and cross-border misinformation, highlighting the need for nationwide media literacy programs and collaboration with independent fact-checkers.
Stakeholder submissions included in the report reveal significant disagreement on fundamental questions. The Editors Guild of India advocated for a narrow definition of fake news as “deliberately fabricated or manipulated content disseminated with the intent to mislead or harm,” while the News Broadcasters and Digital Association warned about the term’s inherent ambiguity.
Media organizations including Hindustan Times, TV Today Network, and Network-18 emphasized the importance of distinguishing between deliberate deception and honest journalistic errors. Several stakeholders cautioned that overly broad definitions could threaten legitimate reporting and free speech.
“Any legal definition must remain both precise and workable,” noted The Indian Express, highlighting how the fake news label now encompasses everything from factual errors to AI-generated content.
The MIB proposed separating government-related misinformation—to be handled by the Press Information Bureau’s Fact Check Unit—from broader content issues that would fall under enhanced self-regulation.
Stakeholders identified various systemic weaknesses driving misinformation, including declining media credibility, engagement-driven algorithms that reward sensational content, and low digital literacy. The Press Council of India and TV Today Network pointed to threats against public order, election integrity, and social cohesion.
There was notable disagreement on whether existing legal frameworks are sufficient. While the MIB cited current statutory mechanisms, including the Information Technology Act and the Bharatiya Nyaya Sanhita, the Editors Guild warned about the “liberal use of criminal provisions against journalists whose reporting is critical of the establishment.”
Multiple organizations flagged artificial intelligence as both a risk and potential solution. The MIB advocated for hybrid models combining AI detection with human verification, while stakeholders widely supported mandatory labeling of AI-generated content.
Platform algorithms received particular scrutiny, with stakeholders agreeing that engagement-based recommendation systems amplify sensational and potentially misleading content. Hindustan Times argued that social media companies should be treated more like media entities, with safe harbor protections reconsidered when design choices drive the amplification of harmful content.
The tabling of this report, alongside recent draft amendments targeting deepfakes, signals a shift toward more interventionist content regulation in India. Together, these developments move the country toward clearer definitions of misinformation, stronger penalties, and greater scrutiny of how algorithms shape public information.
The government’s response in the coming months will reveal whether these recommendations become the foundation for targeted rule changes or a broader overhaul of India’s approach to platform liability and misinformation governance.