UK Urged to Develop AI Crisis Response Strategy Amid Growing Disinformation Threats

Government officials are facing mounting pressure to create a dedicated AI-specific crisis response framework as artificial intelligence is increasingly weaponized to spread disinformation during national emergencies and security incidents.

A comprehensive new report from the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS) reveals that AI tools are being systematically deployed in the aftermath of terror attacks and other crisis events to disseminate conspiracy theories, incite violence, and undermine democratic institutions.

The research indicates that existing government protocols are inadequate to counter the unprecedented speed, scale, and sophistication of AI-driven information threats. Experts are calling for clearly defined response mechanisms that set out precisely how authorities should react when AI is used to manipulate public understanding during rapidly evolving situations.

“Crisis events are unpredictable and volatile scenarios,” said Sam Stockwell, Senior Research Associate at CETaS. “Combined with a poorly understood AI threat landscape, this means that we are not currently equipped to deal with this growing threat to public safety.”

Among the report’s key recommendations is the establishment of monitoring indicators and severity thresholds that would enable officials to determine when AI-powered information threats warrant escalation. Additionally, researchers advocate for formalized data-sharing processes with AI companies to facilitate swift intervention when disinformation campaigns emerge.
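To make the idea of severity thresholds concrete, here is a minimal, purely illustrative sketch of how an escalation check might be coded. Every indicator name, weight, and cutoff below is a hypothetical assumption for illustration; the CETaS report does not prescribe specific metrics or values.

```python
from dataclasses import dataclass

# Hypothetical monitoring indicators for an AI-driven information threat.
# Names, weights, and thresholds are illustrative assumptions only.
@dataclass
class ThreatIndicators:
    synthetic_content_share: float   # fraction of trending posts flagged as AI-generated (0-1)
    spread_velocity: float           # reposts per minute of the leading false narrative
    bot_network_score: float         # 0-1 estimate of coordinated inauthentic activity
    ties_to_live_incident: bool      # narrative references an ongoing emergency

def severity_level(ind: ThreatIndicators) -> str:
    """Map raw indicators to an escalation tier (illustrative thresholds)."""
    score = (
        0.4 * ind.synthetic_content_share
        + 0.3 * min(ind.spread_velocity / 100.0, 1.0)  # cap velocity contribution
        + 0.3 * ind.bot_network_score
    )
    if ind.ties_to_live_incident and score >= 0.6:
        return "ESCALATE: notify crisis response cell and platform contacts"
    if score >= 0.3:
        return "MONITOR: increase sampling and request platform data"
    return "ROUTINE: log and continue baseline monitoring"

if __name__ == "__main__":
    obs = ThreatIndicators(0.55, 120.0, 0.7, ties_to_live_incident=True)
    print(severity_level(obs))  # -> ESCALATE tier
```

The point of such a scheme is that escalation decisions become auditable: officials can point to which indicators crossed which thresholds rather than relying on ad hoc judgment mid-crisis.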

The study’s methodology included an extensive review of existing research, interviews with 25 experts spanning government, industry, and academia, and a simulated AI-driven security incident to assess real-time threat evolution.

Since July 2024, researchers have documented at least 15 major international incidents where AI information threats played a significant role. Notable examples include the Southport murders in July 2024 and the Bondi Beach terrorist attack in December 2025, where AI tools were deployed to spread false narratives and inflame public sentiment.

The tactics identified in the report include sophisticated deepfakes designed to promote fabricated storylines, data poisoning attacks aimed at corrupting AI training sources, and AI-powered bot networks that convincingly mimic human behavior to manipulate public discourse.

In several instances, these AI-generated disinformation campaigns have directly impacted crisis response efforts, complicated law enforcement operations, and triggered real-world violence. The report links such activities to both domestic actors and coordinated foreign influence operations.

Despite these grave concerns, researchers emphasize that AI technologies could also serve constructive purposes in crisis scenarios. The same tools might be leveraged to detect and remove harmful content before widespread circulation or to amplify accurate, authoritative information through chatbots during emergencies.

The report extends its recommendations beyond government to include industry stakeholders and regulatory bodies. It suggests that AI companies should improve transparency regarding chatbot limitations during unfolding crises, potentially implementing prominent warnings when users query information about ongoing events.
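As a rough illustration of the kind of safeguard envisioned, the sketch below shows a chatbot wrapper that prepends a prominent warning when a query touches a live incident. The keyword list, function, and wording are hypothetical assumptions, not any vendor's actual implementation; the report recommends warnings but specifies no mechanism.

```python
# Hypothetical keyword list, assumed to be maintained by a trust-and-safety
# team and updated as incidents unfold; illustrative only.
ACTIVE_INCIDENT_TERMS = {"southport", "bondi beach"}

BREAKING_NEWS_WARNING = (
    "This may relate to a developing incident. Information about ongoing "
    "events can be incomplete or wrong, and this assistant's knowledge may "
    "be out of date. Check official sources before acting on it."
)

def wrap_response(user_query: str, model_answer: str) -> str:
    """Prepend a prominent warning when a query touches an active incident."""
    query = user_query.lower()
    if any(term in query for term in ACTIVE_INCIDENT_TERMS):
        return f"{BREAKING_NEWS_WARNING}\n\n{model_answer}"
    return model_answer

# Example: a query about a live incident gets the warning prepended.
print(wrap_response("What happened at Bondi Beach?", "Reports are still emerging."))
```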

The tech sector is urged to strengthen incident response mechanisms, with the Frontier Model Forum encouraged to establish new channels for sharing threat intelligence. Additionally, the report recommends that Ofcom examine the financial incentives driving AI-enabled disinformation as part of its upcoming consultation on fraudulent advertising.

Looking ahead, the researchers emphasize that continuous monitoring and information-sharing will be essential as the threat landscape evolves. Future research initiatives will explore how terrorists might utilize chatbots to support attack planning, how AI could enhance debunking efforts during crises, and effective countermeasures against data poisoning attacks.

“While we need to address the critical risks associated with AI tools in this context, we must also recognize that the same technology can help to strengthen democratic resilience in times of crisis,” Stockwell added, highlighting the dual-use nature of these powerful technologies.

As the UK navigates this complex technological landscape, the report serves as a timely reminder that governance frameworks must evolve rapidly to address emerging digital threats that can profoundly impact national security and social cohesion.

