As Assam voters head to the polls Thursday in a deeply polarized political climate, a troubling report reveals the state’s election campaign marks the “first industrialized AI disinformation operation in an Indian state election.” The study, published by the Netherlands-based Diaspora in Action for Human Rights and Democracy (DAHRD), paints a concerning picture of technology being weaponized to influence democratic processes.

The 72-page report titled “AI-Weaponised Disinformation, Systematic Exclusion, and Institutional Failure in India’s 2026 Assam Assembly Election” suggests this campaign is far from conventional. According to the researchers, it represents “a documented architecture of disinformation that manufactures an altered reality—one in which an entire community is simultaneously dehumanised, disenfranchised, displaced, and erased from cultural memory.”

DAHRD researchers analyzed 273 social media accounts across Facebook, Instagram, and X with a combined reach exceeding 407 million followers—a number comparable to the European Union’s population. They identified 432 AI-generated posts across Facebook and Instagram that garnered more than 45 million views. One Instagram account alone, “politoons,” accounted for 88 percent of all AI content views, generating over 40 million views across 102 posts.

The scale of this operation suggests meticulous planning rather than ad hoc tactics. With just a 1 percent engagement rate, a single post from this network could potentially reach four million individuals—representing approximately one-sixth of Assam’s registered voters.

“The operation was industrialized, not improvised: a six-tier content ecosystem produced synthetic images, deepfake videos, and AI-generated communal content at volume,” the report states. The campaign was front-loaded with 70 AI posts in January, 58 in February, and 18 in March, ensuring a fully developed narrative was established before the Model Code of Conduct took effect on March 15.

The report identifies Congress party’s chief ministerial candidate Gaurav Gogoi as a primary target, with 31 deepfakes portraying him as a “Pakistani agent” and “Muslim sympathizer.” These fabricated videos were distributed through official BJP accounts, including a verified handle belonging to a state cabinet minister. In an unprecedented move, the campaign also targeted Gogoi’s British wife, Elizabeth Colburn—a private citizen not involved in politics—with six AI-fabricated videos depicting “intimate and communal scenarios.”

The DAHRD report outlines what it calls an “Exclusion Architecture” targeting Assam’s Muslim communities through four simultaneous operations: dehumanization through AI content and verified statements by Chief Minister Himanta Biswa Sarma; voter-roll purges removing 243,000 names alongside redistricting that reduced Muslim-majority constituencies from 35 to 20; a 68-post campaign promoting evictions; and systematic erasure of the 17th-century Sufi saint Azan Fakir from Assamese cultural identity.

Perhaps most concerning, the report suggests AI propaganda directly influenced legislation within a single electoral cycle. The narrative of "Land Jihad" evolved into a law restricting property rights, effectively codifying propaganda into the legal framework.

Institutional accountability mechanisms appear to have failed, according to DAHRD. The report documents 119 breaches of the Model Code of Conduct that resulted in no enforcement actions by the Election Commission. No content was removed from platforms, and the judiciary scheduled related hearings for April 21—twelve days after polling.

Chief Minister Himanta Biswa Sarma publicly acknowledged the calculated nature of his campaign rhetoric on March 12, stating: "We had not added the word Bangladeshi—it was constitutionally and legally wrong. But we will correct it and post it again."

The researchers warn this approach is not contained to Assam. Similar mechanisms affecting voter rolls have been observed in neighboring West Bengal, which holds elections later this month. The report concludes with a stark assessment: “Assam is the laboratory. The rest of India is the intended market.”

The findings raise profound questions about electoral integrity and the evolving role of artificial intelligence in democratic processes—not just in Assam but potentially across India’s political landscape. As technology continues advancing, the ability to detect, counter, and regulate AI-driven disinformation campaigns may become a defining challenge for election authorities worldwide.

