Innovative Strategies to Fight Electoral Disinformation Campaigns

Meet La Chama and El Pana, two AI-powered news anchors who introduced themselves to reporters at the 14th Global Investigative Journalism Conference in Malaysia. Unlike most AI applications making headlines today, these digital personalities weren’t created to spread disinformation, but to combat it.

“They wanted to leverage the trust that media outlets were willing to place in two avatars to disseminate the content,” explained one of the AI anchors in a Venezuelan accent during a video presentation at the conference’s “Collaborating to Fight Electoral Disinformation Campaigns” panel.

Created by CONNECTAS, a Latin American investigative journalism network, these AI-generated avatars presented news to counter President Nicolás Maduro’s media crackdown during Venezuela’s contentious 2024 presidential election. The initiative represented a creative response to the extreme hostility and censorship journalists face in the country.

Carlos Eduardo Huertas, director of CONNECTAS, demonstrated how journalists can repurpose technology more often used for harm. The timing is apt: the World Economic Forum’s “Global Risks Report 2025” identifies misinformation and disinformation as the top short-term global risk over the next two years.

“AI could be a disturbing tool during election processes because it’s easy to create misinformation and spread lies,” Huertas told GIJN after the panel. “But if journalists work together in innovative ways, they can use artificial intelligence to protect journalism.”

La Chama and El Pana were part of CONNECTAS’s broader efforts called “Venezuela Vota” and “#LaHoraDeVenezuela,” created to counter government disinformation. When traditional reporting became dangerous for journalists, the network turned to AI as a protective measure. In total, 14 media outlets, information platforms, and independent organizations collaborated, using their own resources to counter official propaganda during the election.

The panel also featured Kwaku Krobea Asante, program manager at the Media Foundation for West Africa, and Sonia Bhaskar, program head at DataLEADS. They shared strategies implemented in Ghana and India, respectively, where elections were also held in 2024. Mexican investigative reporter Nayeli Roldán from Animal Político moderated the discussion.

In Ghana, members from Fact-Check Ghana, Ghana Fact, and Dubawa joined forces to form the Ghana Fact-Checking Coalition. Asante explained that they incorporated civil society organizations focused on information integrity to strengthen the collaboration.

“When you try to work alone in silos, you achieve very little and may end up duplicating work,” Asante noted. “When you join forces, you can reach more and address issues more effectively.”

The coalition established two media situation rooms—one in northern Ghana and another in the southern region—during the December 2024 elections. These centers conducted live monitoring of election-related narratives to identify misinformation, disinformation, and polarizing content, producing real-time fact-checking reports to counter false information.

“We found coordinated networks engaging in disinformation astroturfing on X, and recycling of old photos and videos,” Asante revealed, highlighting the sophisticated nature of disinformation campaigns they encountered.

India faced similar challenges during its 2024 general election. Shakti: India Fact-Checking Collective, a consortium of more than 100 fact-checkers and news publishers, worked collaboratively to detect online misinformation and deepfakes early.

The collective distributed and amplified over 6,600 fact-checks across more than 10 languages, according to Shakti’s website. This massive effort was supported by deepfake and synthetic media experts, as well as lawyers. DataLEADS led the initiative in collaboration with The Quint, Vishvas News, BOOM, Factly, and the Press Trust of India, among other leading fact-checking organizations.

The panelists emphasized that effective collaboration against disinformation depends on trust among member organizations and clearly defined strategies.

“It’s important to ensure there’s enough transparency on where funding is coming from, who is doing what exactly, and what the structure and flow of work is,” Asante shared. The panelists also stressed the importance of sustainability in fact-checking initiatives beyond election cycles.

Despite the sophisticated AI tools available for detecting disinformation, the experts unanimously agreed that traditional journalism skills remain fundamental in the fight against false information.

“The tools have evolved so much that detection tools are really playing catch-up,” Bhaskar explained after the panel. “Detection tools can only help supplement, but they’re not good enough. We are very far away from our silver bullet.”

“We are still relying heavily on our journalistic skills,” she added, underscoring that technology alone cannot replace thorough reporting, source verification, and critical analysis—skills that have always been at the heart of quality journalism.

As technology continues to evolve, these collaborative initiatives represent promising models for how journalists worldwide might adapt to increasingly sophisticated disinformation threats while preserving the core values of their profession.


