AI’s Nuanced Role in Democracy: Empowerment, Manipulation and Governance Challenges
Artificial intelligence is radically reshaping democratic processes worldwide, creating both unprecedented opportunities for civic engagement and dangerous new avenues for manipulation. These dual impacts were the focus of a recent webinar hosted by the Team Europe Democracy Secretariat, which brought together experts from International IDEA, Article 19, Safer Internet Lab, and Germany’s Federal Ministry for Economic Cooperation and Development (BMZ).
The discussion, moderated by Julia Keutgen from International IDEA and opened by Jakob Rieken of BMZ, explored AI’s growing influence on elections and democratic participation across multiple continents.
In some regions, AI-powered civic technology is strengthening democratic participation. Taiwan has developed online deliberation tools that enable citizens to directly influence legislation, while Kenya has deployed digital platforms for election monitoring and legislative tracking. AI translation technologies are making political discourse more accessible in multilingual societies, potentially expanding political inclusion.
However, the experts warned that AI is simultaneously amplifying existing threats to democracy. Information pollution—a toxic mix of facts, half-truths, and falsehoods—has become increasingly sophisticated, making it nearly impossible for citizens to distinguish reliable information from manipulation.
“While disinformation doesn’t require AI to be effective, artificial intelligence significantly raises the stakes by supercharging the sophistication of manipulation campaigns,” noted one participant. This pattern has been observed in 2024 elections across diverse contexts, from Bangladesh to South Africa.
Recent elections in Romania, Indonesia, and Mexico have all featured AI-generated or AI-amplified disinformation campaigns that eroded public trust. The technology is also enabling authoritarian regimes to enhance mass surveillance and repression, while women and minorities face increasingly sophisticated targeted harassment.
The “liars’ dividend”—where legitimate information can be dismissed as AI fakery—further compounds these challenges by undermining factual reporting.
Barbora Bukovská, Senior Director for Law and Policy at Article 19, highlighted findings from a BMZ-commissioned report showing how the global AI landscape is shaped by geopolitical competition. The United States, China, and European Union dominate AI development, often sidelining democratic oversight and leaving countries in the Global South particularly vulnerable.
“A handful of tech companies now control the AI ecosystem, frequently prioritizing profit over public interest and sometimes compromising ethical research standards,” Bukovská explained. She emphasized that Western AI governance frameworks often fail to address the specific needs of developing countries, underscoring the importance of local expertise and context-sensitive regulations.
Country case studies revealed the real-world impacts of these challenges. In Indonesia, Alia Yofira Karunian from Safer Internet Lab described how domestic “buzzer” networks deploy automated accounts to spread propaganda, frequently targeting women and minorities. The case of the “ibu berjilbab pink” (the “woman in a pink hijab”) protester illustrated the gendered nature of online attacks and exposed limitations in AI detection tools, which perform poorly when confronted with Indonesia’s linguistic diversity.
More encouraging developments were seen in Mexico and South Africa, where electoral authorities have partnered with civil society and technology platforms to combat disinformation. Thijs Heinmaa from International IDEA described Mexico’s creation of fact-checking hubs and the INE WhatsApp bot “Inés,” while South Africa’s Electoral Commission established cooperation agreements with Google, Meta, TikTok, and civil society groups to support initiatives like the Real411 reporting platform.
Nevertheless, these voluntary partnerships highlight the limits of self-regulation and the need for binding rules governing technology platforms.
The experts outlined a three-pronged approach to addressing these challenges: First, developing effective regulations through multi-stakeholder processes that mandate transparency for AI-generated content and prohibit privacy-violating surveillance; second, investing in digital literacy and independent journalism; and third, establishing international cooperation frameworks rooted in human rights principles.
BMZ representatives emphasized the urgent need to update the “guardrails of democracy” through rights-based, people-centered approaches consistent with international standards.
As AI continues to transform civic space, the experts concluded that safeguarding democracy requires both technical solutions and deeper social engagement—not just protecting democratic institutions from AI’s potential dangers, but actively shaping technology development to strengthen citizen empowerment and democratic resilience worldwide.