Indian Government Scrutinizes Bot Accounts on Social Media Platforms

The Union government has launched an investigation into automated social media accounts as concerns grow about their role in spreading misinformation. The Ministry of Electronics and Information Technology (MeitY) recently met with major tech platforms to discuss how they detect and manage bot-run accounts, according to a Moneycontrol report.

On March 11, IT ministry officials held a 45-minute meeting chaired by Secretary S Krishnan and attended by cyber laws group coordinator Deepak Goel. Representatives from leading technology companies including Google, Meta, OpenAI, Snap, Sharechat, and Zupee participated in the discussions.

The primary focus of the meeting was understanding how automated accounts can rapidly amplify content across digital platforms. Officials expressed concern about how bots are programmed to instantly retweet posts and tag influential users, creating what they described as a “manufactured reality” of high engagement that can mislead users about content popularity.
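The "manufactured reality" pattern described by officials can be illustrated with a toy heuristic: coordinated bots tend to retweet within seconds of a post going live, at machine-like speed. The data, account names, and threshold below are hypothetical, invented for illustration only, and do not reflect any detection method discussed at the meeting.

```python
from statistics import median

# Hypothetical sample data: seconds between an original post and each
# retweet by a given account. Coordinated bots typically react near-instantly.
retweets = {
    "acct_a": [2, 1, 3, 2, 2],      # near-instant, uniform -> bot-like
    "acct_b": [340, 95, 1200, 18],  # varied, human-scale delays
}

def flag_instant_amplifiers(latencies_by_account, threshold_s=5):
    """Flag accounts whose median retweet latency is implausibly fast."""
    return {acct for acct, lats in latencies_by_account.items()
            if median(lats) <= threshold_s}

print(flag_instant_amplifiers(retweets))  # {'acct_a'}
```

Real platform systems combine many such signals; a single latency cutoff would be trivially evaded by adding random delays.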

During the meeting, government representatives questioned tech companies about their existing infrastructure for detecting and controlling bot networks. Officials also explored whether new government policies or regulations might be necessary to address the issue effectively. MeitY has requested that industry stakeholders submit detailed inputs on the matter.

The scrutiny comes amid growing global concerns about the impact of automated accounts on public discourse. Bot networks have been implicated in spreading misinformation during elections, amplifying divisive content, and creating artificial trending topics that can manipulate public opinion.

Beyond bot accounts, the meeting also addressed the emerging challenge of deepfakes—highly realistic but fabricated videos or images created using artificial intelligence. Officials discussed whether existing copyright regulations and data protection frameworks could provide legal tools to combat this growing problem.

The government’s initiative follows recommendations made in a 2025 Indian parliamentary panel report that urged authorities to establish comprehensive legal and technical frameworks to combat AI-generated fake news and bot-driven misinformation campaigns.

In 2026, the Parliamentary Standing Committee on Communications and Information Technology issued stark warnings about the threats posed by AI-generated misinformation to public order and democratic processes. The committee outlined several countermeasures, including mandatory labeling of AI-generated content, stricter penalties for non-compliant platforms, and the development of AI-powered monitoring tools.

One such proposed tool was SAMVAD, an AI dashboard designed to track online trends in real time and identify coordinated inauthentic behavior across platforms.

India’s move to regulate bot accounts follows similar efforts in other countries. The European Union’s Digital Services Act requires platforms to assess and mitigate systemic risks, including those posed by coordinated manipulation. Meanwhile, the United States has been debating legislation to increase transparency around automated accounts.

For technology platforms, detecting and managing bot accounts presents significant technical challenges. While some automated accounts are easily identifiable through behavioral patterns, sophisticated bots can mimic human behavior in ways that make detection difficult without compromising user privacy.
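One example of the behavioral patterns mentioned above is posting-interval regularity: simple bots often post on a fixed schedule, while human activity is bursty and irregular. The sketch below is a minimal, assumed illustration of that idea (the timestamps and the regularity measure are hypothetical, not an actual platform's detection logic):

```python
from statistics import mean, pstdev

def interval_regularity(timestamps):
    """Coefficient of variation of gaps between consecutive posts.
    Values near 0 indicate clockwork-regular posting, a common bot signal."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps)

bot_like = [0, 600, 1200, 1800, 2400]    # posts exactly every 10 minutes
human_like = [0, 420, 2900, 3100, 9000]  # irregular, bursty gaps

print(interval_regularity(bot_like))    # 0.0
print(interval_regularity(human_like))  # well above zero
```

As the article notes, sophisticated bots defeat such simple heuristics by jittering their timing, which is why detection in practice relies on many correlated signals rather than any one metric.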

The government’s examination of bot networks comes at a time when India is strengthening its digital regulatory framework. The Digital Personal Data Protection Act of 2023 and the upcoming Digital India Act are part of efforts to create a comprehensive legal structure for the country’s rapidly evolving digital ecosystem.

As artificial intelligence becomes more sophisticated, distinguishing between genuine and automated content presents increasing challenges for both platforms and regulators. The outcome of this government initiative could significantly shape how India approaches the regulation of automated content and artificial intelligence in public spaces.


