In a rapidly evolving digital landscape, the nature of foreign interference campaigns is undergoing a fundamental transformation, according to Laura Jasper, a leading expert on Foreign Information Manipulation and Interference (FIMI) at The Hague Centre for Strategic Studies (HCSS).
During a recent interview with Antidisinfo.net, Jasper identified generative artificial intelligence as a game-changing tool that has dramatically altered the disinformation ecosystem, creating unprecedented challenges for democracies worldwide.
“Put very simply, GenAI poses challenges on three key aspects,” Jasper explained. “The speed at which disinformation is disseminated, the scale at which it is spread, and how it allows for the ‘personalization’ of messages.” This technology enables malicious actors to tailor propaganda at a massive scale for different target audiences, making traditional countermeasures increasingly difficult to implement.
The question of attribution in modern disinformation campaigns has also grown more complex. According to Jasper, analysts now speak in terms of probability rather than certainty when identifying the sources behind sophisticated attacks.
“The question of attribution is more often one of probability than a binary decision,” she noted. “This is because adversaries increasingly make use of proxies, false flags, and commercial tools, including GenAI.”
Jasper recommends that analysts assign confidence levels (low/medium/high) to their attributions rather than claiming absolute certainty. “Publishing the basis of the evidence that analysts gather is a way we can preserve credibility and also build our knowledge base by sharing this with other parties,” she added.
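To make this practice concrete, the minimal Python sketch below shows what such a probabilistic attribution record could look like, pairing a suspected actor with a confidence level and the evidence basis that supports it. The class and field names are illustrative assumptions, not an established standard or HCSS tooling.

```python
from dataclasses import dataclass, field
from enum import Enum

class Confidence(Enum):
    """The low/medium/high confidence scale Jasper describes."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class Attribution:
    """A hypothetical attribution record: a suspected actor, a stated
    confidence level, and the published evidence behind the claim."""
    suspected_actor: str
    confidence: Confidence
    evidence: list[str] = field(default_factory=list)

    def summary(self) -> str:
        # State the confidence level alongside the claim, never as a bare assertion.
        return (f"Attributed to {self.suspected_actor} "
                f"with {self.confidence.value} confidence, "
                f"based on {len(self.evidence)} published evidence item(s).")

# Invented example of a record an analyst might share with other parties.
record = Attribution(
    suspected_actor="Actor X (via proxy network)",
    confidence=Confidence.MEDIUM,
    evidence=["shared hosting infrastructure", "reused GenAI text artifacts"],
)
print(record.summary())
```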
In recent research published by HCSS, including “Building Bridges: Euro-Indo-Pacific Cooperation for resilient FIMI Strategies” and “FIMI in Focus: Navigating Information Threats in the Indo-Pacific and Europe,” Jasper and her colleagues identified common vulnerabilities that foreign actors exploit across different regions.
“The main shared vulnerabilities are high dependency on commercial platforms combined with social trust fractures, such as polarization and low institutional trust,” Jasper observed. “The most dangerous element is the exploitation of existing social trust fractures, which are amplified by hostile actors.”
When measuring the effectiveness of disinformation campaigns, Jasper emphasized the importance of focusing on behavioral outcomes rather than just changes in opinion. “Behavior is driven by opinions,” she explained. “Someone might have changed their opinion, but this change is not visible in the physical world until the person’s behavior changes due to that shift in opinion.”
For researchers and security experts, this requires a more sophisticated approach to measurement. “Disinformation’s real goal is to change behavior, so analysts must first define the specific behavioral end-state they want to measure,” Jasper said. Examples include reduced voter turnout or increased protest participation.
Effective measurement requires establishing clear baselines and counterfactuals to determine whether behavior actually shifted following a disinformation campaign. Analysts typically combine quantitative data like polling and participation records with qualitative insights from interviews and focus groups to connect observed actions to exposure to disinformation.
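As a rough illustration of that baseline-and-counterfactual logic, the Python sketch below applies a simple difference-in-differences comparison to voter turnout figures. All numbers are invented for illustration, and a real analysis would require far more careful controls and matched comparison groups.

```python
# Turnout rates (0-1) before and after a suspected campaign, in a region
# exposed to the disinformation and a comparable unexposed control region.
# These figures are entirely hypothetical.
before = {"exposed": 0.62, "control": 0.61}
after = {"exposed": 0.55, "control": 0.60}

def difference_in_differences(before: dict, after: dict) -> float:
    """Change in the exposed group minus change in the control group.
    A negative value is consistent with, though not proof of, an effect
    of the campaign on behavior."""
    exposed_change = after["exposed"] - before["exposed"]
    control_change = after["control"] - before["control"]
    return exposed_change - control_change

effect = difference_in_differences(before, after)
print(f"Estimated behavioral shift associated with exposure: {effect:+.3f}")
```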
True societal resilience, according to Jasper, is evident when communities recover quickly from attempted manipulation: the intended behavioral changes either fail to materialize or fade rapidly.
When asked about how governments should respond to foreign influence operations that fall into “grey zones”—activities that are harmful but not strictly illegal—Jasper cautioned against approaches that operate outside legal frameworks.
Instead, she advocated for a whole-of-society approach: “The strength lies in engaging more local actors across borders to build trust within societies. By ‘local,’ I mean community builders, investigative journalists, and others. I believe this should not come solely top-down from the government but rather be handled at a more granular level throughout the whole of society.”
As generative AI continues to evolve, Jasper’s insights underscore the need for innovative, collaborative approaches to counter increasingly sophisticated disinformation campaigns that threaten democratic institutions globally.
10 Comments
The shift from broad-based fake news to personalized political warfare is a worrying trend. We must understand these new tactics in order to develop effective countermeasures.
Agreed. This requires a multifaceted approach, involving policy, technology, and educational initiatives. Collaboration between stakeholders will be crucial.
This is quite a concerning development. The personalization of political disinformation through AI raises serious challenges for democracies. We’ll need innovative strategies to counter these rapidly evolving tactics.
Agreed. The speed and scale of dissemination is alarming. Identifying sources will be increasingly difficult, making targeted responses more complex.
It’s alarming to see how quickly the disinformation landscape is evolving. The ability to tailor propaganda at scale using AI poses significant risks to democratic discourse.
Absolutely. This underscores the need for robust fact-checking, media literacy programs, and transparency measures to combat these emerging threats.
Generative AI has undoubtedly become a powerful tool in the hands of bad actors. Tailoring propaganda to specific audiences is a troubling trend that will require vigilance and new solutions.
Absolutely. Democracies must stay ahead of these developments and find ways to build resilience against personalized disinformation campaigns.
This highlights the importance of media literacy and critical thinking skills. As the disinformation landscape evolves, the public needs to be better equipped to spot and resist these targeted manipulations.
Well said. Empowering citizens to navigate the online information space is key. Governments and tech companies have a responsibility to address this challenge.