Foreign interference is undergoing a fundamental transformation, according to Laura Jasper, a leading expert on Foreign Information Manipulation and Interference (FIMI) at The Hague Centre for Strategic Studies (HCSS). Digital threats, she argues, are now evolving faster than the defenses designed to counter them.
Speaking in a recent interview with Antidisinfo.net, Jasper warns that artificial intelligence has pushed disinformation campaigns beyond simple fake-news distribution toward something far more insidious: personalized political warfare that targets individual citizens with unprecedented precision.
“The landscape of information operations has shifted dramatically,” Jasper explains. “What we’re witnessing isn’t just an incremental improvement in existing techniques, but a complete reimagining of how foreign actors can influence domestic politics.”
Jasper points to the growing sophistication of AI-powered tools that can analyze vast amounts of personal data to create highly tailored disinformation narratives. Unlike traditional propaganda that casts a wide net, these new approaches can micro-target individuals based on their specific vulnerabilities, beliefs, and psychological profiles.
“The real danger lies in how invisible these operations have become,” she notes. “When disinformation is precisely calibrated to align with someone’s existing worldview, it’s far less likely to trigger skepticism or critical thinking. The most effective manipulation doesn’t feel like manipulation at all.”
The HCSS researcher highlights several real-world examples where sophisticated AI has already been deployed in political contexts across Europe and North America. In one case study she presents, a foreign operation created thousands of convincing synthetic identities that infiltrated local community groups over months before gradually introducing polarizing content during a municipal election.
Industry experts have long warned about the potential misuse of generative AI in political contexts, but Jasper’s research suggests these capabilities are already being weaponized at scale. The technology has reached a point where distinguishing between authentic grassroots political activity and orchestrated foreign interference has become extraordinarily difficult, even for trained analysts.
Particularly concerning is the asymmetric nature of the threat. “Democratic societies are uniquely vulnerable to these tactics,” Jasper emphasizes. “Open discourse and free expression are foundational values that these operations exploit. The most effective foreign interference doesn’t shut down conversation—it hijacks and redirects it toward objectives that serve foreign interests.”
The stakes extend beyond individual elections or policy debates. According to Jasper, sustained AI-driven influence operations aim to erode trust in democratic institutions themselves. By amplifying existing societal tensions and undermining faith in shared facts, these campaigns can fracture social cohesion in ways that persist long after any single political contest ends.
Traditional countermeasures have proven insufficient against these evolving threats. Media literacy initiatives, while valuable, struggle to keep pace with technologies specifically designed to bypass critical thinking. Platform-level content moderation faces similar challenges when influence operations occur across multiple channels simultaneously or manifest through seemingly authentic community engagement rather than obvious disinformation.
“We need a fundamental rethinking of our defensive posture,” Jasper argues. “This isn’t just about identifying and removing false content anymore—it’s about understanding complex influence ecosystems and developing resilience at both institutional and individual levels.”
The HCSS has advocated for stronger international cooperation, including intelligence sharing about emerging tactics and coordinated responses to identified foreign operations. Jasper also emphasizes the importance of meaningful regulation of AI development that addresses security implications without stifling innovation.
“The technology itself is neutral, but its applications aren’t,” she notes. “We need frameworks that acknowledge the dual-use nature of these tools and establish clear boundaries for their ethical deployment.”
As election seasons approach in several Western democracies, Jasper’s warnings take on added urgency. The personalized nature of modern information warfare means traditional signals of foreign interference may go undetected until significant damage has already occurred. Both citizens and institutions will need to develop new forms of vigilance appropriate to this transformed threat landscape.
“This isn’t a future concern—it’s our present reality,” Jasper concludes. “The question isn’t whether these technologies will be deployed to influence democratic processes, but how effectively we’ll recognize and counter them when they are.”