The use of artificial intelligence in spreading disinformation across Europe and Latvia is expected to intensify this year, according to Jānis Sarts, director of the NATO Strategic Communications Centre of Excellence.

Sarts warned that an increasing portion of digital content will be generated by AI rather than humans, continuing established narrative patterns. These include portraying Russia as a powerful nation while depicting Ukraine as weak, alongside attempts to create divisions between Europe and the United States.

With Latvia’s parliamentary elections approaching next year, the country faces elevated disinformation risks. “Elections are one of Russia’s primary targets,” Sarts explained. “Their goal is to destabilize the situation and create chaos, making Latvia particularly vulnerable.”

The methods used to influence public opinion are likely to be diverse and sophisticated, including leveraging artificial intelligence and manipulating algorithmic systems—tactics already documented in other countries. Sarts highlighted a concerning technique known as “data poisoning,” where adversaries attempt to steer AI models toward producing responses that favor Russian interests.

While Russia may experiment with new approaches, Sarts believes the country will largely rely on previously successful disinformation strategies. “The main methods of Russian information warfare are well established,” he noted. “They tend to continue using what has worked before.” These efforts extend beyond election interference to include exploiting regional security incidents.

AI-powered disinformation is already widespread, according to Sarts. Current applications include automated social media accounts designed to mimic human behavior and artificially generated websites that influence how large language models respond to queries about specific topics.

The proliferation of AI tools has made personalized manipulation increasingly feasible. “Every person can be manipulated under certain circumstances,” Sarts cautioned. “New technologies now allow for tailoring manipulation methods to target specific individuals.”

This individualized approach represents a significant evolution in information warfare. Previously, disinformation campaigns relied on broad messaging hoping to influence certain demographic groups. With AI-powered analysis of personal data, bad actors can now craft messages designed to exploit the specific psychological vulnerabilities of individual targets.

The Baltic states, including Latvia, have long been at the forefront of confronting Russian information operations. Their geographical proximity to Russia and significant Russian-speaking populations make them frequent targets for disinformation campaigns. Latvia’s upcoming elections present a particularly appealing opportunity for interference.

Cybersecurity experts have noted that the cost barrier for creating sophisticated disinformation has dropped dramatically with the advent of generative AI tools. What once required significant resources from state actors can now be accomplished with relatively accessible consumer technology.

Sarts urged citizens to exercise vigilance, particularly when using social media during emotionally charged situations. He specifically advised against making important decisions hastily when confronted with provocative content online, as emotional reactions are often the target of manipulation campaigns.

The warnings come amid growing concerns across Europe about election security, with numerous significant polls scheduled across the continent in the coming years. As AI technologies evolve rapidly, distinguishing authentic information from sophisticated fakes becomes increasingly difficult for average citizens, presenting a fundamental challenge to democratic processes.


21 Comments

  1. John I. Davis on

    Concerning to hear about the potential for AI-driven disinformation in Latvia and Europe. We’ll need robust fact-checking and media literacy efforts to counter these threats to democratic discourse.

    • Agreed, maintaining trust in information sources will be crucial. Strengthening cybersecurity and digital transparency safeguards should also be priorities.

  2. This highlights the need for robust digital security and media literacy initiatives, particularly ahead of Latvia’s upcoming elections. Maintaining faith in democratic institutions will be critical.

    • William Smith on

      Agreed. The threat of AI-driven disinformation is a complex challenge, but one that must be taken seriously to safeguard democratic processes.

  3. The threat of AI-fueled disinformation is a complex challenge that demands a multifaceted approach. Enhancing digital literacy, platform accountability, and international cooperation will all be crucial.

  4. Ava Rodriguez on

    This underscores the ongoing challenge of combating AI-driven disinformation campaigns. Enhancing platform accountability, promoting digital literacy, and strengthening international cooperation will be key to protecting democratic processes.

    • Oliver Y. Miller on

      Absolutely. Maintaining public trust in information sources and democratic institutions should be a top priority as these threats continue to evolve.

  5. The use of AI to spread false narratives and sow division is quite worrying. Proactive public awareness campaigns and stronger platform regulations will be essential in combating this challenge.

    • Michael E. Moore on

      Absolutely. Vigilance and a coordinated response from government, tech companies, and civil society will be vital to protect the integrity of elections and public discourse.

  6. This highlights the importance of building societal resilience to counter the spread of AI-driven disinformation. Strengthening media literacy, supporting independent journalism, and promoting platform transparency should be top priorities.

    • Agreed. Maintaining public trust in information sources and democratic institutions will be essential as these threats continue to evolve.

  7. Noah Williams on

    The potential for AI to be weaponized for disinformation is deeply concerning. Strengthening media literacy and supporting independent journalism will be key to building societal resilience.

  8. This underscores the urgent need for a coordinated, multi-stakeholder response to combat AI-driven disinformation. Strengthening cybersecurity, digital literacy, and platform transparency should be key priorities.

    • James Rodriguez on

      Absolutely. Protecting the integrity of democratic processes in the face of these evolving threats will require sustained commitment and collaboration across sectors.

  9. Oliver Miller on

    This underscores the ongoing battle against malicious actors seeking to exploit emerging technologies for political gain. Collaborative, cross-border efforts to counter these threats are essential.

    • Absolutely. Sharing best practices and coordinating responses across Europe will be crucial to effectively combat AI-fueled disinformation campaigns.

  10. Elizabeth Davis on

    The use of AI to spread disinformation is a worrying trend that requires a multifaceted response. Strengthening digital literacy, bolstering platform transparency, and enhancing cybersecurity will all be vital.

  11. This is a complex challenge that highlights the need for robust, collaborative efforts to protect the integrity of democratic processes. Proactive measures to counter AI-driven disinformation will be essential.

  12. Isabella Davis on

    The potential for AI to fuel disinformation is deeply concerning, especially with elections on the horizon. Robust fact-checking, media literacy, and platform accountability will be critical to addressing this challenge.

  13. The potential for AI to be used to spread disinformation is deeply concerning, especially in the context of upcoming elections. A coordinated, cross-border response focused on digital security and media literacy will be critical.

A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.