The Arms Race Against Disinformation: Using AI to Combat AI-Generated Falsehoods
Last winter, as Christmas markets opened across Europe, social media platforms were flooded with alarming videos allegedly showing radical Islamists “invading” Christmas markets. One clip purported to show people disrupting the Brussels Christmas market, while another image displayed a market surrounded by heavy security barriers, suggesting Christian traditions were under threat.
The reality, however, was entirely different. The videos were taken from peaceful demonstrations unrelated to the markets, and the security-surrounded market image was completely fabricated using artificial intelligence. What appeared convincing at first glance was either misleading or entirely fake.
This scenario exemplifies today’s complex information landscape. According to a recent European Commission survey, nearly two-thirds of respondents reported encountering disinformation or fake news within the previous week. As AI tools become increasingly sophisticated at generating realistic images, videos, and text, distinguishing fact from fiction has become more challenging than ever.
In response to this growing threat, a multinational team of researchers and media specialists has employed an unusual strategy: fighting AI with AI.
In 2020, experts from universities, media houses, and technology companies united in a four-year EU-funded initiative called AI4Media. Their goal was to develop AI tools that could help journalists and fact-checkers verify digital content quickly and reliably.
“There is an urgent need to develop AI techniques for the media sector,” explained Yiannis Kompatsiaris, research director at the Centre for Research & Technology Hellas (CERTH), who coordinated the initiative.
The democratization of AI has dramatically lowered barriers to producing convincing fake content. Today, anyone with access to generative AI tools can create fabricated images, clone voices, or produce realistic-looking news articles—content that social media platforms can then amplify at unprecedented speed.
“When a fake story is supported by realistic images, it becomes much easier to believe – and more tempting to share, because such content attracts more views,” Kompatsiaris noted.
AI4Media built verification tools designed to integrate directly into newsroom workflows. Major media organizations, including Germany’s Deutsche Welle and Belgium’s VRT, tested these tools in real-world settings.
“Fact-checkers and journalists face suspicious images every day,” said Akis Papadopoulos, a CERTH researcher who worked on the project. He described the technology as a “first line of defense”—not replacing human judgment but helping flag potentially manipulated content quickly.
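The "first line of defense" idea can be sketched in a few lines: a detector model scores each incoming image, and anything above a review threshold is escalated to a human fact-checker rather than auto-labeled. This is a minimal illustrative sketch, not AI4Media's actual tooling; the class, function, and threshold are hypothetical.

```python
# Hypothetical triage step: flag high-scoring images for human review.
# Scores, names, and the 0.7 threshold are illustrative assumptions,
# not part of any real AI4Media API.
from dataclasses import dataclass

@dataclass
class ImageCheck:
    url: str
    manipulation_score: float  # 0.0 (likely authentic) .. 1.0 (likely manipulated)

def triage(images, review_threshold=0.7):
    """Return the images that should be escalated to a human fact-checker."""
    return [img for img in images if img.manipulation_score >= review_threshold]

incoming = [
    ImageCheck("https://example.org/market.jpg", 0.92),
    ImageCheck("https://example.org/crowd.jpg", 0.31),
]
flagged = triage(incoming)
# Only the high-score image is escalated; the final judgment stays with a human.
```

The key design point, echoed in Papadopoulos's remark, is that the tool filters rather than decides: a threshold tuned for high recall surfaces suspect material quickly, while authenticity verdicts remain with journalists.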
According to the European Digital Media Observatory, an independent EU-funded hub monitoring disinformation across member states, AI-generated disinformation has steadily increased in recent months. These are not merely isolated hoaxes but often part of coordinated campaigns capable of influencing elections, distorting public debate, and undermining institutional trust.
Identifying manipulated content represents only one facet of the challenge. Understanding how disinformation spreads—who amplifies it, how narratives evolve, and whether campaigns are coordinated—is equally crucial.
“We are in a continuous loop of trying to understand and catch up with the latest technology,” said Riccardo Gallotti, head of the Complex Behavior Unit at Fondazione Bruno Kessler (FBK), an Italian research center known for its work in digital innovation and AI.
In a parallel EU-funded project called AI4Trust, FBK partnered with universities and media organizations across Europe to analyze the broader dynamics of online disinformation. Partners included Euractiv in Belgium, Sky Italia, and fact-checking services like Maldita.es in Spain and Demagog in Poland.
While AI4Media focused on detecting manipulated media and integrating verification tools into newsrooms, AI4Trust built a hybrid human-machine system to monitor and analyze disinformation at scale. Their platform tracks multiple social media and news sites in near real-time, using advanced AI algorithms to process multilingual content across text, audio, and images.
Since the volume of online material vastly exceeds human capacity to review, the system filters and flags high-risk content for professional fact-checkers to evaluate. Their verified assessments then feed back into the system, improving its performance over time.
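The filter-and-feedback loop described above can be sketched as two steps: AI risk scores prioritize a review queue sized to the fact-checkers' capacity, and each verified verdict is stored as a labeled example for future model retraining. This is an illustrative sketch under assumed names and data shapes, not AI4Trust's actual platform.

```python
# Illustrative sketch of a hybrid human-machine loop (names and structures
# are assumptions, not AI4Trust's real system).

def build_review_queue(items, capacity=2):
    """Keep only the highest-risk items, up to the fact-checkers' capacity."""
    return sorted(items, key=lambda it: it["risk"], reverse=True)[:capacity]

def record_verdict(training_data, item, verdict):
    """Store a human verdict as a labeled example for later retraining."""
    training_data.append({"text": item["text"], "label": verdict})

items = [
    {"text": "claim A", "risk": 0.95},
    {"text": "claim B", "risk": 0.20},
    {"text": "claim C", "risk": 0.80},
]
queue = build_review_queue(items)  # the two highest-risk claims
training_data = []
record_verdict(training_data, queue[0], "false")
```

The loop closes when the accumulated labeled verdicts are used to retrain or recalibrate the risk model, which is how the system "improves its performance over time."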
“It is indeed funny, but it’s like an arms race,” Kompatsiaris said of using AI to detect AI-generated content.
Generative AI models are evolving at extraordinary speed. When AI4Media began, tools like ChatGPT were still in their infancy. Since then, the quality and realism of AI-generated content have advanced dramatically.
“We have entered a new era where the acceleration is hard for the human mind to keep up with,” Papadopoulos said. “To keep up with AI, you need to be using AI.”
The researchers acknowledge that technology alone isn’t sufficient. The European Union has implemented various regulatory frameworks, including the Digital Services Act, which requires large online platforms to assess and mitigate systemic risks, including disinformation. The Artificial Intelligence Act introduces transparency obligations for generative AI systems, including content labeling requirements.
Additionally, a draft Code of Practice on transparency for AI-generated content aims to encourage clearer disclosure standards, while the European Media Freedom Act provides safeguards to ensure that professional media content is recognized and protected on major online platforms.
“There is no single solution,” Kompatsiaris emphasized. “We need a combination of AI tools, transparency, regulation, and awareness if we want to be more effective against disinformation.”
As the arms race continues, researchers, journalists, and policymakers face the ongoing challenge of adapting to ever-more-sophisticated AI capabilities. In this rapidly evolving landscape, the combination of technological innovation, regulatory frameworks, and public education represents humanity’s best defense against the growing tide of synthetic disinformation.