AI Disclosure Agreement Proposal Aims to Combat Synthetic Media Threats

The rapid advancement of generative AI technology over the past decade has fundamentally transformed how people work, communicate, and access information. While these systems have delivered unprecedented convenience and productivity, they have simultaneously created a dangerous vulnerability: the ability to produce synthetic media that convincingly mimics authentic human communication, at scale and at minimal cost.

Security experts are increasingly concerned about this growing threat. As predicted by Chinese scholar Li Bicheng in 2019, we now face a reality where AI systems can adopt convincing personas capable of manipulating public opinion and advancing hidden agendas. Modern AI can generate increasingly realistic content at a pace that overwhelms traditional disinformation countermeasures.

“The danger isn’t simply that synthetic media exists, but that it circulates without disclosure,” explains cybersecurity analyst Thomas Helmus, who has studied the problem extensively. “When you can’t distinguish between authentic and artificial content, the foundation of trust that underpins our information ecosystem begins to crumble.”

Current efforts to combat this problem remain inconsistent and insufficient. Warning labels, while helpful in reducing belief in false content, vary widely in effectiveness depending on design and implementation. Private platforms, influenced by corporate priorities and economic incentives, apply labeling standards inconsistently. Even initiatives like the European Commission’s Code of Practice on Disinformation, which has improved transparency in Europe, cannot fully address foreign manipulation or content originating outside EU jurisdiction.

The security implications are already visible in ongoing conflicts. The Russia-Ukraine war has featured fabricated videos of combat operations, falsified diplomatic communications, and generated images of attacks. While many current deepfakes remain relatively easy to identify, experts warn this is changing rapidly as the technology advances.

“What we’re seeing is just the beginning,” notes information warfare specialist Kate Kostyuk. “Without coordinated international policy, military personnel, civilians, and policymakers all become vulnerable to sophisticated psychological manipulation campaigns that can escalate tensions and destabilize regions.”

This vulnerability extends far beyond military conflict. Synthetic media can be weaponized to fabricate policy announcements, interfere with democratic processes, and enable precise targeting of vulnerable populations with tailored disinformation. The global accessibility of generative tools means regulation cannot focus solely on specific actors or governments.

In response to these growing threats, security experts are proposing a multilateral Synthetic Media Disclosure Agreement modeled after existing international frameworks like the Geneva Conventions and nuclear arms treaties. Rather than restricting AI development, the agreement would focus on transparency and accountability.

The proposed agreement contains three key pillars. First, it would mandate clear labeling of all AI-generated or AI-altered media intended for public distribution, using standardized disclosure markers to flag synthetic content. Second, it would establish individual accountability, requiring states to adopt domestic legal frameworks prohibiting individuals in positions of influence from distributing synthetic content without proper disclosure. Finally, it would outline enforcement mechanisms including coordinated diplomatic pressure, sanctions, or tariffs to encourage compliance.

“This isn’t about censorship,” clarifies digital rights advocate Maria Romero. “The framework preserves freedom of expression while promoting transparency. Synthetic media would remain legal for artistic, educational, and commercial purposes. The agreement targets deception, not creation.”

Proponents argue the approach is feasible because it builds on international security models that states already recognize. The EU’s Code of Practice demonstrates that transparency reforms can be implemented at scale, while NATO’s continued coordination shows states are equipped for multilateral cooperation.

Challenges remain, particularly concerning states or content creators who may refuse to join such an agreement. Additionally, labeling requirements cannot address synthetic content already in circulation. However, supporters maintain that continuous monitoring, international coordination, and credible consequences can help manage these challenges.

As generative AI continues its rapid advancement, the proposal's advocates argue that clear international norms mandating disclosure offer the most realistic path to restoring transparency while preserving the legitimate benefits of artificial intelligence.


© 2026 Disinformation Commission LLC. All rights reserved.