Parliamentary Committee Calls for Overhaul of Safe Harbour Provisions to Combat Fake News
A parliamentary committee has raised significant concerns about the effectiveness of Safe Harbour provisions that currently shield social media platforms from liability for user-generated content, according to its draft report on fake news.
The Standing Committee on Communications and Information Technology’s analysis highlights how the existing regulatory framework has failed to address the growing challenge of misinformation spreading on major platforms such as Facebook, X (formerly Twitter), Instagram, YouTube, and WhatsApp.
Under the Information Technology Rules of 2021, social media intermediaries are protected from liability for content posted by users, provided they exercise due diligence and maintain grievance redressal mechanisms. However, the committee found this has created an accountability vacuum where platforms have neither proactively prevented nor been held sufficiently responsible for the proliferation of misleading content.
The Ministry of Electronics and Information Technology (MeitY) emphasized that intermediaries are required to make reasonable efforts to prevent users from posting deceptive information, particularly regarding government business. Yet the practical implementation of these provisions has come under intense scrutiny.
A significant legal setback occurred in September 2024 when the Bombay High Court struck down Rule 3(1)(b)(v) of the IT Rules, which had empowered government-appointed authorities to direct platforms to remove content deemed false or misleading. The court ruled that the provision violated constitutional guarantees under Articles 14 and 19, declaring it ultra vires the IT Act, 2000.
MeitY is now preparing to challenge this ruling through a Special Leave Petition to the Supreme Court. The ministry argues that a statutory Fact Check Unit (FCU) under the Press Information Bureau (PIB) is essential to combat misinformation about government policies and programs.
The PIB’s existing Fact Check Unit, established in November 2019, follows a four-step FACT model—Find, Assess, Create, Target—to counter government-related misinformation. With over 320,000 followers on X alone, the FCU posts verified information and debunks false claims on its social media channels. However, its impact is limited as it lacks statutory enforcement powers, functioning primarily as an awareness tool rather than an enforcement mechanism.
Industry experts have identified several critical gaps in the current Safe Harbour regulations. There is no statutory requirement for intermediaries to appoint designated nodal officers accountable for action against fake news. The guidelines also lack strict deadlines for content removal, allowing for potentially harmful delays in addressing misinformation.
Cross-border challenges further complicate enforcement efforts. Many sources of fake news operate from foreign jurisdictions like Guatemala and Nigeria, placing them effectively beyond the reach of Indian authorities. Additionally, the rise of sophisticated deepfake videos and AI-generated misinformation has made distinguishing fact from fiction increasingly difficult.
The Editors Guild of India has proposed creating a mandatory system requiring platforms to designate nodal officers who would address flagged content within six hours. They also recommended licensing AI content generators under strict conditions to prevent misuse.
The committee’s draft report acknowledges that while the government has taken steps by establishing the FCU and issuing the IT Rules, these measures remain insufficient. To be genuinely effective, the Safe Harbour provisions must be reformed to include stronger accountability measures, time-bound action requirements, and statutory powers for fact-checking units.
The report emphasizes that addressing fake news requires a multi-stakeholder approach rather than relying solely on government agencies. It calls for a comprehensive strategy that includes incorporating digital literacy into formal education, fostering cooperation between government departments, civil society, technology companies, and fact-checking organizations, and launching public awareness campaigns about misinformation tactics.
The committee also highlighted the need to reconsider how “fake news” is defined and integrated into regulatory frameworks across print, electronic, and digital media. They cautioned that vague terminology around fake news could be exploited to delegitimize media or suppress dissent.
As misinformation continues to threaten public discourse and democratic processes, the committee’s findings represent an urgent call to action for both policymakers and social media platforms to collaborate on developing more robust safeguards against the spread of fake news in India’s rapidly evolving digital landscape.