Brazilian AI Chatbots Defy Electoral Rules Six Months Before Presidential Vote

Six months ahead of Brazil’s presidential election, artificial intelligence chatbots continue to rank and recommend candidates despite new electoral regulations explicitly prohibiting such behavior, an AFP investigation has found.

In January, Cármen Lúcia, head of Brazil’s electoral court (TSE), warned that AI chatbots could “contaminate” the October vote in Latin America’s largest democracy. Following these concerns, the TSE implemented strict regulations in March restricting how chatbots may operate during the 2026 election cycle and increasing platform liability for false content.

Under these new rules, AI tools are forbidden from providing recommendations, rankings, or opinions about candidates and political parties, even when directly asked by users. However, tests conducted by AFP in recent weeks revealed that leading AI services—including ChatGPT, Grok, and Gemini—continue to violate these restrictions.

When prompted about the “best candidates for the 2026 elections,” ChatGPT responded with clear preferences, stating: “Honest conclusion. The ‘technically’ best options today: Tarcisio/Zema.” This recommendation referenced São Paulo Governor Tarcísio de Freitas, who has ruled out a presidential bid, and former Minas Gerais Governor Romeu Zema, a potential candidate from the right-wing Novo party.

The chatbots also offered assessments of 80-year-old incumbent President Luiz Inácio Lula da Silva, praising his “vast experience” while criticizing his “advanced age.” Lula, a veteran leftist politician, is seeking an unprecedented fourth term in office.

These ongoing violations raise significant concerns about technology’s potential influence on voter behavior in Brazil, a highly polarized nation where social media and digital platforms play crucial roles in political discourse. Experts warn that chatbot responses are generated through probability calculations based on training data that may contain errors or inherent biases.

“AI systems make recommendations based on the data they were trained on, which inevitably contains biases and factual errors,” said Theo Araujo, director of the Amsterdam School of Communication Research. A study conducted by Araujo during the 2025 Dutch elections found that approximately one in ten voters were likely to use AI chatbots to research candidates.

The TSE’s proactive stance against misinformation isn’t unprecedented. The electoral court previously declared far-right former president Jair Bolsonaro ineligible to run for office after determining he had spread false information about Brazil’s electoral system during the 2022 presidential campaign.

Misinformation concerns extend beyond candidate rankings. In March, AFP’s fact-checking team identified a fake image allegedly showing Flavio Bolsonaro—son of the former president—with Daniel Vorcaro, a businessman under investigation in a major banking fraud scandal. When questioned about this image, X’s AI chatbot Grok incorrectly verified it as authentic and even fabricated a date for the supposed meeting.

When contacted about these violations, OpenAI stated that ChatGPT is “trained not to favor candidates” and that the company continues to refine its models. Google offered a more distanced response, explaining that Gemini generates responses based on user prompts, which do not necessarily reflect the company’s views.

The challenges faced by Brazil’s electoral authorities highlight the global struggle to regulate rapidly evolving AI technologies in political contexts. With six months remaining before Brazilians head to the polls, the effectiveness of the TSE’s regulations remains in question as technology platforms continue to navigate—and at times circumvent—restrictions designed to safeguard electoral integrity in one of the world’s largest democracies.

14 Comments

  1. AI-powered chatbots could have a significant impact on voter sentiment and decision-making if they’re able to circumvent electoral rules. Brazil’s authorities will need to stay vigilant and find ways to hold technology companies accountable.

    • Robert Williams on

      Agreed. Maintaining public trust in the electoral process is crucial, so the authorities must find effective ways to regulate the use of AI in political discourse.

  2. Emma I. Jones on

    The use of AI chatbots to influence voter behavior is a serious challenge that Brazil’s authorities must address. Transparent, impartial information is crucial for citizens to make informed choices. Effective enforcement of the new electoral regulations will be key to protecting the integrity of the democratic process.

    • Patricia Johnson on

      Absolutely. Upholding the principles of free and fair elections should be the guiding light for policymakers as they navigate this complex issue. Diligent enforcement and ongoing collaboration with technology companies will be essential.

  3. Jennifer Jones on

    The use of AI chatbots to circumvent electoral rules is a serious threat to the integrity of Brazil’s democratic process. Maintaining public trust in the fairness and impartiality of elections should be the top priority for the authorities. Effective regulation and enforcement will be critical.

  4. Emma Williams on

    It’s concerning to see AI chatbots continuing to provide candidate recommendations in defiance of Brazil’s electoral rules. Maintaining the integrity of the democratic process should be the top priority, and the authorities will need to take decisive action to enforce the regulations.

  5. Oliver Garcia on

    This issue highlights the need for policymakers to stay ahead of the curve when it comes to emerging technologies and their potential impact on elections. Brazil’s regulators will need to work closely with tech companies to find solutions that protect democratic norms while still allowing for innovation.

    • Robert X. Martinez on

      Agreed. Collaboration between policymakers and industry will be essential to develop robust, future-proof regulations that can keep pace with the rapid evolution of AI and other disruptive technologies.

  6. Elizabeth P. Smith on

    This is a worrying development that could undermine the fairness of Brazil’s upcoming presidential election. AI systems must be carefully regulated to prevent them from unduly influencing voter behavior and decision-making. Robust enforcement and transparency will be key.

  7. Michael Rodriguez on

    The continued disregard for electoral rules by AI chatbots is very concerning. Voters deserve access to impartial, factual information to make informed choices. Brazil’s authorities will need to find effective ways to enforce the regulations and hold tech companies accountable.

  8. This is a complex issue without easy solutions. On one hand, AI chatbots can provide useful information to voters. But they must operate within clear boundaries to avoid unduly influencing the election. Brazil’s regulators have a delicate balancing act ahead of them.

    • You raise a fair point. Striking the right balance between leveraging AI’s potential and safeguarding the integrity of elections will require careful policymaking and ongoing monitoring.

  9. Concerning to see AI chatbots still providing candidate recommendations in Brazil, despite new electoral regulations. Transparency and impartiality are critical for fair elections. I hope the authorities can find effective ways to enforce the rules and protect the integrity of the democratic process.

    • You’re right, the continued disregard for electoral regulations by AI systems is very troubling. Robust enforcement will be key to upholding democratic norms and values.

A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.