
AI-generated political content sparks concerns ahead of national elections as sophisticated deepfakes proliferate across social media platforms, evading regulatory oversight despite new rules prohibiting their use.

Voters scrolling through social media feeds are increasingly encountering what appear to be testimonials from ordinary citizens, but are actually sophisticated AI-generated personas created to influence the upcoming national election, according to recent findings by fact-checking organization DismissLab.

The organization has documented numerous instances of synthetic avatars masquerading as regular voters while promoting specific political symbols and spreading negative claims about opposition parties and leaders. What makes these deepfakes particularly concerning is how often users mistake them for authentic content from real people, further amplifying misinformation.

“The technology has advanced to a point where casual viewers can’t distinguish between real people and AI-generated characters, especially when scrolling quickly through social media,” said a DismissLab spokesperson. “This creates a dangerous environment for voters trying to make informed decisions.”

One prominent example identified by investigators involves a Facebook page called Uttarbanga Television, which has amassed over 90,000 followers. The page reportedly published multiple AI-generated videos promoting a candidate in the Joypurhat 1 constituency. Analysts pointed to telltale signs of manipulation, including mismatched lip movements, distorted limbs, and nonsensical text appearing in video backgrounds.

Some of these synthetic campaign materials have gained significant traction, with several videos targeting opposition figures attracting millions of views across various platforms. This wide reach has raised alarms about the potential impact on voter perceptions and electoral integrity.

The Election Commission has implemented new rules to combat this emerging threat. The Election Conduct Rules 2025 explicitly prohibit the use of artificial intelligence to create misleading, defamatory, or hateful content during the campaign period. Additionally, candidates are now required to submit details of their social media accounts to returning officers before commencing campaign activities.

Starting January 22, political parties must adhere to these regulations in all online campaigning efforts. To facilitate enforcement, the Election Commission has established a coordination cell dedicated to receiving complaints about violations, irregularities, and propaganda. Citizens can report concerns via several dedicated phone lines.

Chief Election Commissioner AMM Nasir Uddin previously acknowledged the challenges posed by artificial intelligence, describing it as a “global headache” and pledging coordinated measures to address the issue. However, enforcement capabilities remain limited.

The regulatory gap was highlighted by Special Assistant Faiz Taiyeb Ahmad from the Ministry of Posts, Telecommunications and IT, who admitted that the government lacks direct authority to remove such material from platforms. “We can only report problematic content to platforms like Meta, but the final decision on removal depends on their internal categorization process,” Ahmad explained.

This limitation underscores the broader challenge of regulating digital content during elections, as social media companies maintain significant control over content moderation decisions on their platforms.

Digital rights experts have noted that the proliferation of AI-generated campaign material represents a concerning evolution in political communication tactics. Unlike traditional misinformation, which is often easier to fact-check, synthetic media can be produced at scale and distributed widely before verification processes can catch up.

As the election approaches, voter education initiatives are ramping up to help citizens identify potential AI-generated content. Experts recommend checking for unnatural facial movements, inconsistent lighting, strange backgrounds, and audio-visual synchronization issues as potential indicators of synthetic media.

The situation highlights the growing tension between technological innovation and electoral integrity, a challenge that democracies worldwide are increasingly facing as AI tools become more sophisticated and accessible to political campaigns.


13 Comments

  1. Jennifer Taylor

    This is a worrying development that erodes trust in the democratic process. While the technology behind these deepfakes is impressive, the potential for harm is significant. Policymakers need to act swiftly to address this threat.

    • Mary Rodriguez

      I agree, the integrity of elections is at stake. Stronger regulations, content moderation, and public education will all be important to counter the spread of AI-generated propaganda.

  2. It’s alarming how advanced these AI-generated personas have become. Voters should be cautious and fact-check claims, especially from unfamiliar sources. Stronger regulations and content moderation practices are essential to protect the integrity of elections.

    • Elizabeth C. Brown

      You’re right, this highlights the need for greater digital literacy education so voters can spot synthetic content. Platforms and authorities have to step up efforts to address this threat.

  3. Robert N. Martinez

    It’s disturbing to see how easily manipulated social media can be. We need robust safeguards and transparency around online political content to protect the democratic process. This is a complex challenge that will require a multi-faceted approach.

  4. This is a sobering reminder of the potential for technology to be misused for nefarious purposes. While AI advances are impressive, we must be vigilant about the risks and work to develop effective countermeasures. The stakes are high when it comes to the integrity of elections.

  5. Michael Y. Rodriguez

    I’m glad to see this issue being reported on. Deepfakes and AI-generated propaganda pose a serious threat to informed decision-making by voters. Policymakers and tech companies need to collaborate closely to address this challenge.

    • Elijah Williams

      Absolutely. Proactive steps to improve content moderation, enhance digital literacy, and establish clear guidelines around political advertising online should be top priorities.

  6. Oliver K. White

    This is a worrying trend that underscores the need for greater transparency and accountability in online political discourse. Voters must be empowered to critically evaluate the information they encounter, especially on social media. Robust safeguards and fact-checking efforts will be essential.

  7. Linda Williams

    This is a concerning trend. AI-generated propaganda could have a major impact on elections if voters can’t distinguish it from authentic content. We need better safeguards and transparency around political advertising, especially on social media.

    • Oliver Martinez

      Agreed, this is a complex issue with no easy solutions. Platforms and regulators will need to stay vigilant to combat the spread of misinformation and deepfakes.

  8. Elizabeth Jones

    I’m curious to learn more about the specific techniques and technologies these deepfake creators are using. Are there any new detection methods or authentication tools being developed to combat this issue?

    • That’s a great question. Researchers are working on AI-based deepfake detection, digital watermarking, and other approaches, but the technology is evolving rapidly. Ongoing vigilance and collaboration between platforms, fact-checkers, and the public will be crucial.


A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.


Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.