AI Chatbots Unreliable for Voting Advice, Dutch Watchdog Warns

As voters increasingly turn to artificial intelligence for guidance in various aspects of life, Dutch authorities are raising alarms about using AI chatbots for electoral advice. The Dutch Data Protection Authority (AP) has cautioned against relying on popular AI tools such as ChatGPT and Gemini for voting decisions ahead of the country’s October 29 snap elections, citing reliability concerns and systematic bias.

In a comprehensive study, the AP tested four leading chatbots—ChatGPT, Gemini, Grok, and Le Chat—by inputting 200 typical voting profiles per political party currently represented in the Dutch parliament. The results revealed troubling patterns in how these AI systems approach political recommendations.

“What we saw was a kind of oversimplification of the Dutch political landscape, where one party on the left kind of sucked up all the votes in that corner of the political spectrum and the same with the one party on the right,” explained Joost van der Burgt, project manager at the AP.

The investigation found that regardless of user inputs or specific questions, the chatbots consistently recommended just two parties in 56% of cases: Geert Wilders’ right-wing Party for Freedom (PVV) or Frans Timmermans’ left-leaning Green Left-Labour Party (GL/PvdA)—coincidentally, the two parties forecast to win the most parliamentary seats.

Meanwhile, centrist viewpoints were notably underrepresented, and smaller parties such as the Farmer-Citizen Movement (BBB) and the Christian Democratic Appeal (CDA) were almost never suggested, even when user profiles directly aligned with those parties’ platforms. Oddly, the relatively new right-wing party JA21 received a disproportionately high number of recommendations despite having less extensive media coverage than established parties.

Van der Burgt attributes these biases to the fundamental way large language models operate. “They are basically statistical machines that predict missing words in a phrase or a certain output,” he told Euronews’ verification team. “If your political views position you towards one end of the political spectrum, it’s maybe not that surprising that generative AI picks a political party that fits that side and seems like a safe choice.”

The findings raise concerns about AI’s potential classification under the European Artificial Intelligence Act (AI Act), which came into force in August 2024. Researchers suggest chatbots providing voting advice could qualify as “high-risk systems” under rules scheduled for implementation in August 2026.

Van der Burgt believes safeguards should be implemented similar to those already in place for other sensitive topics. “It is already the case when it comes to questions about mental health or aids to create improvised weapons,” he noted. “In all these scenarios, a chatbot clearly states, ‘I’m sorry, we’re not allowed to help you with it.’ And we think that the same sort of mechanism should be in place when it comes to voting advice.”

The AP compared the chatbots’ results with established Dutch voting advice tools such as StemWijzer and Kieskompas, which rely on structured data rather than generative AI. These traditional tools ask voters to answer a series of 30 questions to determine their political alignment. Similar systems exist in other European countries, including Germany’s government-backed Wahl-O-Mat.

Experts highlight a critical advantage of traditional voting tools: transparency. “One fundamental problem with these chatbots is that the way they work is untransparent,” said van der Burgt. “We, nor the public, nor journalists can figure out why exactly they will produce a certain answer.”

Despite concerns, some researchers see potential in controlled AI systems designed specifically for electoral guidance. In Germany, researcher Michel Schimpf launched Wahl.Chat ahead of the country’s 2025 federal elections as an alternative to traditional tools. This specialized bot incorporates party manifestos and allows voters to ask specific policy questions.

“If you ask ChatGPT a question, its source could be a biased website, but we made sure that our bot relies on party manifestos,” Schimpf explained. His team incorporated fact-checking functionality and designed the system to reduce bias, though he acknowledges limitations: “It’s still AI, which is ultimately probabilistic.”

Other researchers are taking even more measured approaches. Naomi Kamoen, assistant professor at Tilburg University, is helping develop a limited chatbot with answers written by qualified researchers rather than generated by AI. “The goal is to inform, not necessarily to tell them what to vote for,” she said.

Experts generally agree that AI can play a constructive role if used properly. Jianlong Zhu, doctoral researcher at Saarland University, suggests AI can help educate voters by breaking down complex political concepts in real-time—something traditional voting tools cannot do.

“If you are using Wahl-O-Mat and there are concepts that you don’t understand, what you end up doing is randomly clicking stances,” Zhu explained. “But with a chatbot, you can ask what those terms mean, and the chatbot is effective in breaking down those terms to help you engage with the topic better.”

As AI continues to evolve, experts recommend users focus on asking chatbots about parties’ positions rather than requesting direct voting recommendations. Many also advocate for regulatory frameworks to ensure responsible AI use during election campaigns, highlighting that artificial intelligence should complement—not replace—traditional methods of political engagement like watching news, engaging in discussions, and reviewing party platforms.


