In a world increasingly shaped by technology, misinformation and disinformation remain the foremost technological risks globally, according to the World Economic Forum’s recently published Global Risks Report 2026. The report also highlights adverse outcomes of artificial intelligence and cyber insecurity as significant threats, ranking them 8th and 9th respectively among the top 10 global risks.
Saadia Zahidi, Managing Director of the WEF, emphasized in her preface that “technological acceleration, while driving unprecedented opportunities, is also generating significant risks in the form of misinformation and disinformation.” She noted particular concern about “potentially adverse long-term outcomes of AI,” which shows the sharpest increase in rank between short-term and long-term risks among all 33 threats covered in the report.
As generative AI capabilities for creating deceptive audio, video, images, and text have advanced rapidly in recent years, the technology’s potential for amplifying misinformation has become increasingly apparent. A countertrend has emerged in response: AI tools designed specifically to detect AI-generated content.
Google DeepMind’s SynthID represents one of the most promising developments in this arena. The watermarking technology embeds imperceptible signals in content generated with Google AI, which a companion detector can later identify. The detector reports detailed results, including localized identification of generated regions and a confidence level indicating how likely the material is to be AI-generated.
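The production SynthID systems for images, audio, and video are not public, but Google DeepMind has open-sourced the text variant, and watermarking support ships in the Hugging Face Transformers library (v4.46+). The sketch below shows the embedding side only, under stated assumptions: the model name, prompt, key values, and generation settings are illustrative placeholders, and detection requires a separate classifier trained against the same private keys.

```python
# A minimal sketch of SynthID text watermarking via Hugging Face Transformers.
# Model name, prompt, and keys below are illustrative placeholders.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

model_name = "google/gemma-2-2b"  # assumption: any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The watermark is seeded by a private list of integer keys; only a party
# holding the same keys can later test text for the signature they induce.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],  # illustrative
    ngram_len=5,  # length of the token context used when biasing choices
)

inputs = tokenizer(["Write a short note about rainfall."], return_tensors="pt")
# Sampling must be enabled: the watermark works by nudging token probabilities.
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,
    do_sample=True,
    max_new_tokens=100,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

The decoded text reads normally to a human; the watermark lives in the statistics of which tokens were chosen, which is why detection is probabilistic and reported as a confidence level rather than a binary certainty.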
This technology has proven valuable for fact-checkers and journalists combating online misinformation. In one notable case from January 2026, when former Ghanaian finance minister Ken Ofori-Atta was reportedly detained by U.S. Immigration and Customs Enforcement (ICE), social media platforms were flooded with purported images and videos of his arrest. These visuals circulated widely across Facebook, Instagram, Twitter, and TikTok, despite no credible media outlets or official sources having released such footage.
When fact-checkers ran the suspicious content through Google’s SynthID detector, the system flagged the material with “very high confidence” that it was AI-generated, helping debunk the fabricated visuals before they could mislead the public further.
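For images, one programmatic route is Google Cloud’s Vertex AI SDK, which exposes a watermark-verification model that reports whether an image carries a SynthID watermark. Below is a minimal sketch of that call, assuming a configured Google Cloud project; the project ID and file name are placeholders, and this cloud API is distinct from the SynthID Detector portal the fact-checkers used.

```python
# A hedged sketch of SynthID image-watermark verification via the Vertex AI
# Python SDK. "my-gcp-project" and "suspect_image.png" are placeholders.
import vertexai
from vertexai.preview.vision_models import Image, WatermarkVerificationModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = WatermarkVerificationModel.from_pretrained("imageverification@001")
image = Image.load_from_file(location="suspect_image.png")

response = model.verify_image(image)
# The result is "ACCEPT" if a SynthID watermark is detected, "REJECT" otherwise.
print(f"Watermark verification result: {response.watermark_verification_result}")
```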
Similar technology proved crucial in debunking a viral social media claim that Nigerian footballer Victor Osimhen had proposed to actress Regina Daniels on the pitch following Nigeria’s semi-final match against Morocco at the 2025 Africa Cup of Nations. Google Gemini’s analysis identified “significant signs of being AI-generated,” pointing to anatomical distortions and inconsistent digital textures in the fabricated image.
Even political campaigns have begun incorporating AI-generated content, as observed during Cameroon’s October 2025 presidential election. Fact-checkers discovered that campaign videos from both incumbent Paul Biya and opposition candidate Joshua Osih of the Social Democratic Front contained significant portions of AI-generated footage. When analyzed through SynthID, the tool identified specific sections that were created using Google’s AI technology.
Pushmeet Kohli, VP of Science and Strategic Initiatives at Google DeepMind, highlighted the importance of these detection tools in a May 2025 blog post, describing their verification portal as a means “to quickly and efficiently identify AI-generated content made with Google AI.” He emphasized that such tools provide “essential transparency in the rapidly evolving landscape of generative media.”
As AI technology continues to advance, the arms race between those creating misleading content and those developing detection tools intensifies. For now, fact-checking organizations worldwide are leveraging AI detection platforms to combat AI-generated misinformation, though the challenge remains substantial as these technologies become more sophisticated and accessible.
The WEF report’s ranking of misinformation as a top global risk underscores the urgency of developing robust detection tools and media literacy initiatives to help the public navigate an increasingly complex information landscape where the line between authentic and artificially generated content grows ever more blurred.