A parliamentary panel has called for comprehensive regulations to govern online media and OTT platforms, citing urgent concerns about misinformation and harmful content circulating across digital channels.

A parliamentary standing committee chaired by BJP MP Nishikant Dubey submitted its twenty-sixth report on Tuesday, recommending a significant expansion of regulatory oversight for digital media platforms. The committee’s findings highlight growing regulatory gaps that it believes require immediate attention.

“The increasing spread of misinformation and manipulated content across various media platforms necessitates strengthening of institutional mechanisms for timely verification, quick detection and blocking of fake news,” the committee stated in its report tabled in parliament.

The panel specifically recommended enhancing the capabilities of the Press Information Bureau’s Fact Check Unit (PIB-FCU), which was established in 2019 to combat misinformation. While a separate central Fact Check Unit under the Information Technology Act remains under judicial review by the Supreme Court, the committee emphasized that the existing PIB unit should be strengthened through expanded resources and technological capabilities.

Among the committee’s recommendations is the implementation of artificial intelligence tools for real-time monitoring of misinformation trends. The report also calls for improved coordination between government agencies and social media platforms to facilitate faster identification and removal of misleading content.

The Ministry of Information and Broadcasting acknowledged to the committee that “regulatory gaps” currently exist regarding online media and OTT platforms. Officials reported that 25 OTT platforms were initially blocked last year, followed by an additional 18, under provisions of the Bharatiya Nyaya Sanhita and the Indecent Representation of Women Act.

“At present, blocking orders are continuously being issued against OTT apps and content circulating in online media,” the ministry informed the committee.

The report highlighted the government’s recent enforcement activities, noting that during “Operation Sindoor,” the PIB Fact Check Unit “worked round the clock to curb fake news, quickly detected and issued swift rebuttals to edited videos, misinformation, and propaganda.” This operation reportedly resulted in approximately 1,400 URLs being blocked.

Recent reporting has revealed that takedown orders are now handled through direct communication between government agencies and social media companies via the Ministry of Home Affairs’ Sahyog portal. According to previously published documents, this system allows government agencies, including police, to request content removals in bulk quantities with limited oversight.

The parliamentary committee’s recommendations go beyond strengthening existing mechanisms. They call for “quantifiable targets” for the PIB Fact Check Unit, including trained personnel for regional languages and quarterly transparency reports detailing misinformation identified, corrective actions taken, and engagement with media platforms.

The push for expanded regulatory frameworks comes amid ongoing legal challenges. On March 10, the Supreme Court refused to stay the Bombay High Court’s decision striking down the notification that established the controversial central Fact Check Unit. During those proceedings, the Supreme Court emphasized the need to balance protection against fake online content with safeguards for free speech rights.

Critics have expressed concerns about potential overreach in government regulation of online speech, while supporters argue that misinformation poses serious threats to public safety and democratic discourse that require coordinated intervention.

The committee’s report ultimately recommends a “comprehensive regulatory framework to effectively address issues relating to misinformation, harmful content and public grievances,” while coordinating with relevant ministries to close regulatory gaps and develop structured grievance redressal mechanisms for online and OTT content.



Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.