The rapid evolution of AI technologies is creating unprecedented challenges for information integrity, as content manipulation tools become accessible to virtually anyone. What once required substantial resources and technical expertise now demands only a minimal investment of time and money, fundamentally altering the information landscape.
Recent data from NewsGuard reveals a staggering 1,150% increase in AI-generated news sites since April 2023, with 2,089 such platforms now operating with minimal human oversight. These sites publish content in 16 languages, including French, English, Arabic, and Chinese, representing a significant shift in how information is created and distributed globally.
“Social media platforms play a dual role,” explains Professor Thierry Warin, an analyst specializing in economic dynamics and information issues in the digital age. “On the one hand, they democratize speech. On the other, they can become a vehicle for spreading fake news on a large scale.”
The AI tools themselves are increasingly contributing to misinformation problems. NewsGuard’s research shows that leading AI chatbots relayed false claims in 35% of cases in August 2025, nearly double the 18% rate observed the previous year. Perplexity’s performance has deteriorated dramatically, moving from a perfect record of refuting false information in 2024 to a 46.67% error rate in 2025. OpenAI’s ChatGPT and Meta’s AI assistant both posted concerning error rates of 40%.
Deepfakes represent one of the most concerning developments in content manipulation. According to Entrust’s 2025 Identity Fraud Report, a deepfake attack now occurs every five minutes. Digital document forgeries have increased by 244% compared to 2023, while overall digital fraud has grown by an alarming 1,600% since 2021.
“The Center for Security and Emerging Technology estimates that a basic deepfake can be produced for a few dollars and in less than ten minutes,” notes Professor Warin. “High-quality deepfakes, on the other hand, can cost between $300 and $20,000 per minute.”
The electoral landscape of 2024, with numerous global elections, proved particularly vulnerable to sophisticated disinformation campaigns. The Doppelgänger campaign, orchestrated by pro-Russian actors before the 2024 European elections, combined seven domains impersonating legitimate media outlets, 47 inauthentic websites, and 657 articles amplified by thousands of automated accounts.
Even more concerning is the ‘Portal Kombat’ network (also known as ‘Pravda’), which exemplifies a systematic approach to information dissemination. According to VIGINUM, this Moscow-based network published 3.6 million articles in 2024 across global online platforms. With 150 domain names in 46 languages, it publishes an average of 20,273 articles every 48 hours—a scale that would be impossible without AI assistance.
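Those two figures are broadly consistent with each other, as a quick back-of-the-envelope check shows (a rough calculation assuming output was spread evenly across 2024):

```python
# Rough consistency check on the reported Portal Kombat / Pravda publication figures,
# assuming output was spread evenly across 2024 (a 366-day leap year).
articles_per_48_hours = 20_273

implied_2024_total = articles_per_48_hours / 2 * 366
print(f"Implied 2024 output: {implied_2024_total:,.0f} articles")
# ~3.7 million, in the same ballpark as the 3.6 million articles cited above.

articles_per_minute = articles_per_48_hours / (48 * 60)
print(f"Roughly {articles_per_minute:.1f} articles per minute, around the clock")
# ~7 articles per minute, a pace no human editorial team could sustain.
```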
NewsGuard’s testing of ten popular generative AI models revealed that in 33% of cases, these systems repeated claims disseminated by the Pravda network. This highlights a technique known as ‘LLM grooming,’ where bad actors saturate search results with biased data to influence AI responses.
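To illustrate the mechanism (a toy sketch, not NewsGuard’s methodology or any real chatbot’s retrieval pipeline; the documents and scoring function below are invented), consider how a naive keyword-based retrieval step behaves once a corpus has been flooded with near-duplicate planted articles:

```python
import re
from collections import Counter

def keyword_score(doc: str, query: str) -> int:
    """Naive relevance score: count how often each query term appears in the document."""
    words = Counter(re.findall(r"\w+", doc.lower()))
    return sum(words[term] for term in re.findall(r"\w+", query.lower()))

# Invented corpus: one legitimate report versus dozens of near-duplicate planted articles.
legitimate = ["Independent investigators found no evidence to support the claim."]
planted = [f"Exclusive report {i}: multiple sources confirm the claim is true." for i in range(50)]
corpus = legitimate + planted

query = "is the claim true"
ranked = sorted(corpus, key=lambda doc: keyword_score(doc, query), reverse=True)

# With no signal for source quality, the top of the ranking is filled entirely by the
# planted articles, which both outnumber the legitimate report and echo the query's wording.
for doc in ranked[:5]:
    print(doc)
```

A system that draws on web results without weighing source reliability ends up summarizing whatever dominates that ranking, which is precisely the behavior LLM grooming exploits.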
“Many recent elections have been marred by disinformation campaigns,” Warin points out. “During the 2016 US presidential election, the United States responded to Russian interference by expelling 35 diplomats. With generative AI, the scale of the phenomenon has changed dramatically.”
Beyond the creation of fake content, social media personalization algorithms contribute significantly to the fragmentation of the public sphere. “These systems tend to offer internet users content that matches their preferences,” explains Professor Warin. “This can create what are known as echo chambers.” Studies show that on Facebook, only about 15% of interactions involve exposure to divergent opinions, reinforcing ideological divides.
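The dynamic can be illustrated with a toy simulation (invented items and preference scores, not any platform’s actual ranking system): a feed that simply maximizes predicted preference, and treats every interaction as a signal to lean further in the same direction, quickly stops surfacing the other viewpoint at all.

```python
import random

random.seed(42)

# Hypothetical catalogue: each item carries one of two viewpoints.
items = [{"id": i, "viewpoint": random.choice(["A", "B"])} for i in range(200)]

# Invented user model: starts with only a mild lean toward viewpoint "A".
preference = {"A": 0.6, "B": 0.4}
shown = {"A": 0, "B": 0}

for _ in range(100):
    # Rank a random slate purely by predicted preference, with no diversity constraint.
    candidates = random.sample(items, 10)
    chosen = max(candidates, key=lambda item: preference[item["viewpoint"]])
    shown[chosen["viewpoint"]] += 1

    # Engagement feedback nudges the preference further toward what was just shown.
    other = "B" if chosen["viewpoint"] == "A" else "A"
    preference[chosen["viewpoint"]] = min(1.0, preference[chosen["viewpoint"]] + 0.01)
    preference[other] = max(0.0, preference[other] - 0.01)

print(shown)  # exposure ends up almost entirely on viewpoint "A"
```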
In response to these developments, several initiatives have emerged. Finland and Sweden lead in media literacy, scoring 74 and 71 points respectively on the European Media Literacy Index 2023. The European Commission adopted the 2022 Strengthened Code of Practice on Disinformation to improve platform transparency, while Canada’s Communications Security Establishment published a comprehensive 2023 report analyzing the use of generative AI in information interference contexts.
“Traditional countermeasures – human moderation, fact-checking, media literacy – must evolve to adapt to the scale of the phenomenon,” observes Warin. “Technological solutions, such as synthetic content detectors and digital watermarks, are currently being developed.”
The landscape continues to shift rapidly in 2025, with AI systems now prioritizing responsiveness over accuracy. Their non-response rate on sensitive questions has fallen to 0%, down from 31% in 2024, but their propensity to repeat false information has risen correspondingly.
“The adage that ‘information is power’, attributed to Cardinal Richelieu, remains relevant,” concludes Professor Warin. “From printing to television, each media revolution has redistributed the power of information. With generative AI, we are witnessing a major transformation of this ecosystem.”
The challenge facing society now is how to adapt verification mechanisms and rebuild trust in an information ecosystem profoundly transformed by these new technological realities.