UK Government Urged to Develop AI Crisis Response Strategy Against Disinformation
The UK Government faces mounting pressure to create a dedicated AI-specific crisis response strategy as artificial intelligence is increasingly weaponized to spread disinformation in the aftermath of major incidents.
A comprehensive new report from the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS) warns that existing government frameworks are inadequate to combat the sophisticated AI-driven information threats that emerge in the aftermath of terror attacks and national security crises.
The study highlights how AI tools are being deployed to create and amplify conspiracy theories, incite violence, and undermine democratic stability—all at speeds and scales that current response systems cannot effectively address.
“Crisis events are unpredictable and volatile scenarios,” said Sam Stockwell, Senior Research Associate at CETaS. “Combined with a poorly understood AI threat landscape, this means that we are not currently equipped to deal with this growing threat to public safety.”
The researchers are calling for clear protocols defining how government should respond when AI is used to manipulate public understanding during fast-moving crises. Their recommendations include establishing monitoring indicators and severity thresholds to assess when AI information threats require escalation, alongside formalized data-sharing processes with AI companies for rapid intervention.
Since July 2024, the researchers have identified at least 15 major international crisis events where AI-enabled information threats played significant roles. Notable examples include the Southport murders in July 2024 and the Bondi Beach terrorist attack in December 2025. In these cases, malicious actors deployed a range of sophisticated AI tactics including convincing deepfakes designed to promote false narratives, data poisoning attacks aimed at corrupting AI training sources, and AI-powered bot networks that mimicked human behavior to sway public opinion.
The consequences of these AI-enabled disinformation campaigns have been serious, with the report noting instances where misleading AI-generated content complicated law enforcement responses, fueled harmful conspiracy theories, and encouraged real-world violence. This activity has been linked to both domestic actors and coordinated foreign networks.
However, the researchers also emphasize that AI tools could play a constructive role in crisis response if properly implemented. The same technologies could be deployed to detect and remove harmful content before it spreads widely, while AI chatbots might help amplify accurate, authoritative information during emergencies.
The report extends its recommendations beyond government to include the technology industry and regulatory bodies. It suggests that AI companies should improve transparency around chatbot limitations during live crises, including implementing prominent warnings when users query information about unfolding events.
The tech sector is also urged to strengthen incident response mechanisms, with the report specifically calling on the Frontier Model Forum to establish new channels for sharing threat intelligence. Additionally, the researchers recommend that Ofcom examine the financial incentives behind AI-enabled disinformation as part of its upcoming consultation on fraudulent advertising.
With future incidents likely, the report stresses that sustained monitoring and information-sharing will be critical to addressing these evolving threats. CETaS plans further research into how terrorists may use chatbots to support attack planning, how AI could support debunking efforts during crises, and effective methods to counter AI data poisoning attacks.
“While we need to address the critical risks associated with AI tools in this context, we must also recognize that the same technology can help to strengthen democratic resilience in times of crisis,” Stockwell added.
As AI continues to advance and become more accessible, the challenge for governments worldwide will be developing nimble, effective response mechanisms that can address the dark side of these technologies while harnessing their potential benefits for public safety.
7 Comments
This report highlights the urgent need for the UK government to prioritize AI strategy development, particularly in the realm of crisis response and disinformation mitigation. Failing to do so could have severe consequences for national security and public trust.
Absolutely. The government can’t afford to be reactive on this issue – a comprehensive, forward-looking AI strategy is required to stay one step ahead of adversaries exploiting these technologies.
As AI technology continues to advance, the potential for malicious actors to leverage it for nefarious purposes will only increase. Proactive policymaking is essential to get ahead of these emerging threats to public safety and democratic institutions.
I’m curious to learn more about the specific protocols and frameworks the researchers are advocating for. Clearly defined government response plans could help limit the damage caused by AI-driven disinformation during critical incidents.
Interesting that the report highlights the speed and scale at which AI can be used to amplify conspiracy theories and incite violence. This underscores the need for nimble, AI-focused crisis response capabilities to stay ahead of evolving threats.
Agreed. The government will need to invest in advanced AI monitoring and mitigation tools to effectively combat these challenges in real-time.
This is an important issue that deserves attention. AI is a powerful tool that can be misused to spread disinformation, especially during crises. A comprehensive government strategy to counter these threats is crucial for safeguarding public trust and democratic stability.