AI Battling AI: Europe’s Fight Against Digital Disinformation

Last winter, as Christmas markets opened across Europe, social media was flooded with alarming videos. Posts claimed that radical Islamists were “invading” Christmas markets. One clip purportedly showed people “disrupting” the opening of the Brussels Christmas market, while a separate photo displayed a market surrounded by heavy security. The message was clear: Christian traditions were supposedly under threat.

The reality, however, was entirely different. The videos depicted peaceful demonstrations unrelated to religious attacks, and the concerning security photo had been artificially generated. What appeared convincing at first glance was either misleading or completely fabricated.

This scenario exemplifies today’s complex information landscape. According to a recent European Commission survey, nearly two-thirds of respondents reported encountering disinformation or fake news within the previous week alone. With artificial intelligence tools now capable of generating highly realistic images, videos, and text, distinguishing between authentic and fabricated content has become increasingly challenging.

In response to this growing threat, a multinational team of researchers and media specialists, backed by EU funding, has developed technological countermeasures to combat AI-generated disinformation.

“There is an urgent need to develop AI techniques for the media sector,” said Yiannis Kompatsiaris, research director at the Centre for Research & Technology Hellas (CERTH), who coordinated AI4Media, a four-year EU-funded initiative launched in 2020.

The project brought together experts from universities, media organizations, and technology companies to create AI tools that help journalists and fact-checkers verify digital content quickly and reliably.

AI has dramatically lowered barriers to producing convincing fake content. Anyone with access to generative AI can now create fabricated images, cloned voices, or realistic-looking news articles that social media platforms can amplify at unprecedented speed.

“When a fake story is supported by realistic images, it becomes much easier to believe – and more tempting to share because the content generates higher views,” Kompatsiaris noted.

The AI4Media team developed verification tools designed to integrate directly into newsroom workflows. Major European media organizations including Germany’s Deutsche Welle and Belgium’s VRT tested these tools in real-world settings.

“Fact-checkers and journalists face suspicious images every day,” explained Akis Papadopoulos, a CERTH researcher who worked on the project. He characterized the technology as a “first line of defense” – not replacing human judgment but helping flag potentially manipulated content efficiently.

According to the European Digital Media Observatory, AI-generated disinformation has increased steadily in recent months. The implications extend far beyond isolated hoaxes. Coordinated disinformation campaigns can influence elections, distort public debate, and undermine trust in institutions across the European Union.

In a parallel EU-funded project called AI4Trust, researchers at Italy’s Fondazione Bruno Kessler (FBK) partnered with universities and media organizations across Europe to analyze the broader dynamics of online disinformation.

“We are in a continuous loop of trying to understand and catch up with the latest technology,” said Riccardo Gallotti, head of the Complex Behavior Unit at FBK.

While AI4Media focused on detecting manipulated media and integrating verification tools into newsrooms, AI4Trust built a hybrid human-machine system to monitor and analyze disinformation at scale. Its platform tracks multiple social media and news sites in near real-time, using advanced AI algorithms to process multilingual content across text, audio, and images.

Because the volume of online material far exceeds human capacity, the system filters and flags high-risk posts for review by professional fact-checkers, whose verified assessments then improve the system’s performance.
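The loop described above can be sketched in a few lines. This is purely an illustrative assumption, not the actual AI4Trust platform: the `Post` and `ReviewQueue` names, the keyword heuristic standing in for a multilingual classifier, and the threshold nudge standing in for model retraining are all invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    risk: float = 0.0  # model-assigned risk score in [0, 1]

def score(post: Post) -> float:
    # Placeholder for a multilingual ML classifier; a crude keyword
    # heuristic stands in so the sketch runs end to end.
    suspicious = ("invading", "fabricated", "hoax")
    hits = sum(word in post.text.lower() for word in suspicious)
    return min(1.0, hits / 2)

class ReviewQueue:
    """Filter incoming posts; route only high-risk ones to human fact-checkers."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.flagged: list[Post] = []

    def ingest(self, posts: list[Post]) -> None:
        # The machine side: score everything, queue only what crosses
        # the threshold, so humans see a manageable fraction.
        for post in posts:
            post.risk = score(post)
            if post.risk >= self.threshold:
                self.flagged.append(post)

    def record_verdict(self, post: Post, is_disinfo: bool) -> None:
        # The human side: a fact-checker's verdict feeds back into the
        # system. A real platform would retrain the model; here we
        # merely nudge the threshold as a stand-in.
        self.threshold += -0.05 if is_disinfo else 0.05

queue = ReviewQueue()
queue.ingest([
    Post("Radicals are invading the market, a total hoax!"),
    Post("The market opens today."),
])
# Only the first post is queued for human review.
```

The key design point the sketch illustrates is the division of labor: the model never issues verdicts, it only triages; the fact-checkers never see the full firehose, only the flagged residue; and each verified assessment flows back to sharpen the next round of triage.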

“Using AI to detect AI might sound ironic, but it’s like an arms race,” Kompatsiaris said. Generative AI models are evolving at extraordinary speed. When AI4Media began, tools like ChatGPT were still in their infancy. Since then, the quality and realism of AI-generated content have advanced dramatically.

“We have entered a new era where the acceleration is hard for the human mind to keep up with,” Papadopoulos explained. “To keep up with AI, you need to be using AI.”

The European Union is bolstering these technological efforts with regulatory frameworks. Under the Digital Services Act, large online platforms must assess and mitigate systemic risks, including disinformation. The Artificial Intelligence Act introduces transparency obligations for certain generative AI systems, including requirements to label AI-generated content.

Additionally, the European Media Freedom Act establishes safeguards ensuring that professional media content is recognized and protected on major platforms. Large platforms must notify recognized media outlets before removing journalistic content and explain their reasoning, giving organizations time to respond.

“We need tools, but we also need policies and rules,” emphasized Kompatsiaris. “There is no single solution. We need a combination of AI tools, transparency, regulation, and awareness if we want to be more effective against disinformation.”

As the technology behind fake content continues to evolve, so too must the systems designed to detect it. For Europe’s information ecosystem, keeping detection in step with generation while preserving trust in professional media remains a critical challenge for the foreseeable future.

© 2026 Disinformation Commission LLC. All rights reserved.