Weaponizing Artificial Intelligence: Experts to Examine AI’s Role in Disinformation Campaigns
Artificial intelligence (AI) is rapidly transforming how information spreads globally, creating both innovative opportunities and dangerous vulnerabilities. As large language models, image generators, and other generative AI tools enhance creativity and workflow efficiency, they simultaneously hand powerful capabilities to those seeking to create and disseminate false narratives designed to manipulate public opinion.
In response to these growing concerns, a panel of experts will gather to analyze AI’s emerging role in information warfare and its implications for national security. The event will specifically focus on how AI technologies can be weaponized to conduct sophisticated misinformation and disinformation campaigns that threaten social cohesion and state security.
“We’re seeing adversarial states increasingly explore AI capabilities to enhance their influence operations,” said Jenny Town, Senior Fellow and Director of the 38 North Program at the Stimson Center, who will moderate the discussion. “Understanding these evolving threats is critical for developing effective countermeasures.”
The panel will examine concrete examples of how malicious actors leverage AI to create convincing fake content, from deepfake videos to artificially generated news articles that can be nearly indistinguishable from legitimate sources. These technologies make it increasingly difficult for average citizens to discern fact from fiction, potentially undermining trust in democratic institutions.
Among the featured speakers is Dr. Jieun Shin, Associate Professor in the Department of Media Production, Management, and Technology at the University of Florida. Dr. Shin’s research focuses on how algorithmic systems shape information flow and public discourse, providing valuable insights into AI’s impact on media ecosystems.
The discussion comes at a critical time as nations worldwide grapple with the dual-use nature of AI technologies. Recent incidents of AI-generated content influencing election campaigns, spreading false information during crises, and creating convincing impersonations of public figures have highlighted the urgency of developing coordinated responses to these threats.
A key focus of the event will be exploring how the United States and the Republic of Korea can strengthen bilateral cooperation to counter AI-enabled disinformation. Both countries face similar challenges from regional actors who have demonstrated sophisticated capabilities in information operations.
“South Korea’s experience with targeted disinformation campaigns makes it an important partner in developing joint strategies,” noted one of the organizers. “Its advanced technological infrastructure and democratic values align well with American interests in creating responsible AI governance frameworks.”
The panel will also address potential defensive measures, including technological solutions like content authentication systems, regulatory approaches, public education initiatives, and international cooperation frameworks that balance innovation with security concerns.
Experts suggest that successful counter-strategies will require unprecedented collaboration between government agencies, technology companies, academic institutions, and civil society organizations. Traditional approaches to information security have proven inadequate against the scale and sophistication of AI-powered disinformation campaigns.
The event coincides with ongoing debates about AI governance in legislative bodies worldwide, including discussions of the EU’s AI Act, proposed regulations in the United States, and international efforts to establish norms for responsible AI development.
This research initiative and the accompanying event are made possible through generous support from the Korea Foundation, which has consistently supported scholarly exchanges and policy dialogues between the United States and South Korea on emerging security challenges.
The discussion is part of a broader effort to understand and mitigate the risks associated with rapid AI advancement while preserving the benefits these technologies offer for economic development, scientific research, and legitimate creative expression.
As AI capabilities continue to evolve at an unprecedented pace, forums like this provide essential opportunities for experts to share insights and develop collaborative approaches to safeguarding the information environment that underpins democratic societies.