Election Commission Issues Directive Against AI Deepfakes in Bihar Elections

The Election Commission of India (ECI) has issued a comprehensive directive urging political parties to comply with the Model Code of Conduct ahead of the Bihar legislative assembly elections and by-polls in other constituencies. The directive specifically warns against the use of artificial intelligence tools to create deepfakes or spread misinformation, emphasizing the need to protect electoral integrity.

While the ECI’s instructions outline precautionary measures that political parties should take, critics note that the directive lacks clarity on what specific content would warrant removal and the exact process for such takedowns.

According to the ECI’s June 2024 press release, social media platforms must remove flagged fake content—including deepfakes, misinformation, and other synthetic media—within three hours of notification. However, neither the Commission’s Handbook on Media Matters nor its guidelines provide clear methods for identifying and confirming whether flagged content qualifies as fake news.

The Media Certification and Monitoring Committee (MCMC), which is responsible for certifying political advertisements, includes Social Media Experts who are tasked with identifying fake news. Yet regulatory authorities currently provide no clear definition of what constitutes a deepfake or how to effectively regulate AI-generated content.

“The current framework has significant gaps when it comes to emerging technologies like deepfakes,” notes a digital rights advocate who requested anonymity. “While there are established guidelines for paid news, the same clarity doesn’t exist for generative AI content.”

Rising Takedown Orders During Elections

Content takedown orders have been increasing, particularly during election periods. During the 2019 general elections, social media platforms removed more than 900 posts, with Facebook taking down 650, Twitter deleting 220, and ShareChat removing 31. Five YouTube videos were also removed, and three WhatsApp accounts were disabled.

More recently, the Ministry of Home Affairs’ Indian Cyber Crime Coordination Centre (I4C) has issued 426 content takedown notices to online platforms since March 2024, targeting over 110,000 URLs and accounts. WhatsApp received the highest number of notices (78), directing the removal of 83,673 accounts or groups. Instagram followed with 73 notices affecting 22,150 URLs and accounts associated with deepfakes, investment scams, fake trading platforms, and misinformation.

These figures were disclosed in the government’s submission to the Karnataka High Court during a case filed by X (formerly Twitter) challenging content takedown orders under Section 79(3)(b) of the Information Technology Act.

MCMC’s Role and Composition

The MCMC requires pre-certification for political advertisements on social media before they can be posted. However, critics point out that “advertisement” is loosely defined, creating potential loopholes. For instance, the guidelines specify that “Any political content in the form of messages/comments/photos/videos uploaded on ‘blogs/self accounts’/websites/social media platforms will not be treated as political advertisement,” regardless of who uploads it.

The Social Media Expert on the MCMC should preferably be a government officer with at least five years of experience in IT or social media. If a private individual is appointed, they must have a master’s degree in IT, at least 10 years of relevant experience, and demonstrated neutrality.

Detecting Deepfakes: A Complex Challenge

Experts highlight numerous challenges in identifying AI-generated content. During MediaNama’s “Deepfakes and Democracy” event, specialists outlined several detection methods, including machine learning algorithms that can identify manipulation with up to 90% accuracy in controlled environments.

However, practical implementation faces significant hurdles. When deepfakes are shared online, platform-level transcoding alters their properties, making detection more difficult. Analyzing every uploaded video or image would require enormous processing power, rendering real-time detection impractical for platforms with billions of posts.

“As researchers publish detection models, deepfake creators quickly adapt and improve their algorithms to evade detection,” explained Gautham Koorma from UC Berkeley at the event. “It’s a constant technological cat-and-mouse game.”
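The transcoding problem described above can be made concrete with a minimal, stdlib-only sketch. It is an illustration, not any platform's actual pipeline: the tiny 4x4 "image" and the +3 brightness shift standing in for lossy re-encoding are invented for the example. It shows why exact-match (cryptographic hash) detection breaks the moment a platform re-encodes a file, while a simple perceptual "average hash" stays stable under small pixel changes.

```python
import hashlib

def average_hash(pixels):
    """Toy perceptual 'average hash': one bit per pixel, recording whether
    the pixel is brighter than the image's mean. Small edits flip few bits."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x4 grayscale image, flattened to 16 pixel values (0-255).
original = [10, 200, 30, 180, 20, 190, 40, 170,
            15, 210, 25, 185, 30, 195, 35, 175]

# Simulate platform transcoding: lossy re-encoding nudges every pixel value.
transcoded = [p + 3 for p in original]

# Cryptographic hashes diverge completely after transcoding...
h1 = hashlib.sha256(bytes(original)).hexdigest()
h2 = hashlib.sha256(bytes(transcoded)).hexdigest()
print(h1 == h2)  # False: exact-match takedown lists no longer recognise the file

# ...while the perceptual hash barely moves.
d = hamming(average_hash(original), average_hash(transcoded))
print(d)  # 0: the brightness pattern, and hence the hash, is unchanged
```

Real detectors are far more sophisticated (pHash, learned embeddings, forensic classifiers), but the same trade-off applies: robustness to benign re-encoding versus fragility to adversarial edits, which is the cat-and-mouse dynamic Koorma describes.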

The ECI’s advisory to political parties recommends avoiding AI-generated or manipulated audio or video, refraining from spreading misinformation, and promptly reporting fake accounts to platforms. However, the detailed advisory referenced in the documentation appears to be inaccessible on the ECI’s website.

As Bihar heads to the polls, the effectiveness of these measures against sophisticated AI-generated content remains to be seen, highlighting the growing challenge of maintaining electoral integrity in the age of advanced artificial intelligence.




© 2026 Disinformation Commission LLC. All rights reserved.