As artificial intelligence technology advances, experts are warning of a fundamental shift in propaganda methods: away from broad messaging and toward highly personalized psychological manipulation that targets individual vulnerabilities.

In response to growing concerns, X (formerly Twitter) has updated its creator policy specifically addressing AI-generated war videos. Under the new guidelines, creators who fail to clearly label synthetic war content as artificial face serious consequences, including loss of monetization privileges and potential permanent expulsion from the platform’s revenue sharing program.

“Historically, propaganda was amplified through television debates, newspapers, or mass forwards on messaging platforms. AI has transformed propaganda from a loud broadcast into a personalised whisper,” said Kartik Gupta, Instructor of AI and Machine Learning at Newton School of Technology.

The evolution of AI systems has enabled unprecedented capabilities for analyzing personal data. Modern algorithms can process behavioral patterns, linguistic preferences, browsing history and social engagement metrics to identify specific vulnerabilities in individuals. This allows for the creation of tailored narratives precisely calibrated to resonate with a person’s cultural background, religious beliefs, or political preferences.
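The targeting loop described above can be illustrated with a toy sketch. All profile fields, rules, and message strings here are invented for illustration; real micro-targeting systems use learned models over far richer behavioral data, not hand-written rules:

```python
# Toy illustration of preference-based message tailoring.
# Field names and framing rules are hypothetical.

def tailor_message(profile: dict) -> str:
    """Pick the framing most likely to resonate with this profile."""
    if profile.get("top_interest") == "economy":
        return "Rising prices are the real story behind this conflict."
    if profile.get("top_interest") == "security":
        return "Your community's safety depends on acting now."
    return "Here is what everyone is missing about this story."

alice = {"top_interest": "economy", "region": "midwest"}
bob = {"top_interest": "security", "region": "coast"}

print(tailor_message(alice))
print(tailor_message(bob))
```

The point of the sketch is structural: once a system can map a profile to a framing, the same event can be narrated differently to every recipient, which is what distinguishes this from broadcast-era propaganda.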

Making matters worse, the quality of synthetic media has improved dramatically. A biometric study conducted in late 2025 revealed that only a small percentage of participants could accurately distinguish between authentic content and AI-generated material, highlighting how AI-driven propaganda can erode trust in information before manipulation is even detected.

Social media platforms have become the primary vectors for distributing AI-generated misinformation, according to Atul Rai, Co-founder and CEO of Staqu Technologies. “Deepfakes and synthetic media gain traction because platform algorithms prioritize engagement, allowing manipulated content to reach large audiences,” Rai explained.

He emphasized that platforms must leverage their technological capabilities to implement advanced AI systems capable of identifying manipulated audio, synthetic video, and coordinated bot networks designed to amplify propaganda. Additionally, stronger governance frameworks are needed, including rapid response protocols during geopolitical crises, transparent labeling of AI-generated content, and partnerships with fact-checking organizations.
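One of the simpler signals behind coordinated-bot detection can be approximated with a crude heuristic: many distinct accounts posting identical text within a short window. The sketch below is a minimal illustration under assumed thresholds and data shapes; production systems combine many such signals with learned models:

```python
from collections import defaultdict

# Crude coordinated-amplification heuristic: flag any message text
# posted by at least `min_accounts` distinct accounts within
# `window_seconds` of each other. Thresholds are assumptions.

def flag_coordinated(posts, min_accounts=3, window_seconds=60):
    """posts: list of (account_id, text, unix_timestamp) tuples."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((ts, account))
    flagged = set()
    for text, events in by_text.items():
        events.sort()
        for i in range(len(events)):
            accounts = {a for ts, a in events[i:]
                        if ts - events[i][0] <= window_seconds}
            if len(accounts) >= min_accounts:
                flagged.add(text)
                break
    return flagged

posts = [
    ("bot1", "Share this NOW", 0),
    ("bot2", "Share this NOW", 10),
    ("bot3", "Share this NOW", 25),
    ("user9", "Lovely weather today", 12),
]
print(flag_coordinated(posts))  # flags the repeated slogan
```

Real amplification networks vary their wording to evade exactly this kind of check, which is why the experts quoted here call for advanced AI systems rather than fixed rules.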

Responsibility extends beyond platforms alone, noted Kaushal Bheda, Director at Pelorus Technology. Content creators who deliberately generate disinformation or propaganda bear direct accountability for resulting harm. Meanwhile, platform developers and operators must implement effective preventative measures and respond quickly to law enforcement requests.

When authorities identify active harm campaigns, delays in providing data or suspending accounts allow damage to compound. “Immediate cooperation with investigations, expedited legal compliance, and proactive intelligence sharing are non-negotiable responsibilities for platforms operating at global scale,” Bheda said.

The verification of digital content is becoming increasingly critical as society enters an era where images, voices, and videos can no longer be presumed authentic by default. Gupta noted that verification must transition from an individual responsibility to a systemic approach. This requires governments, platforms, and educational institutions to develop stronger early-warning systems, authentication protocols, and rapid response frameworks, particularly during high-risk periods like elections or natural disasters.
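Systemic verification of this kind typically rests on cryptographic provenance: a publisher attaches a tag derived from the media bytes at capture time, and anyone downstream can check that the content was not altered. The sketch below uses an HMAC as a simplified stand-in; real provenance standards such as C2PA use public-key signatures and richer metadata, and the key here is purely hypothetical:

```python
import hashlib
import hmac

# Minimal provenance sketch: the publisher tags the media bytes;
# a verifier holding the same key can detect any alteration.
# Real systems use asymmetric signatures, not a shared secret.

KEY = b"publisher-secret-key"  # hypothetical key for this sketch

def sign(media: bytes) -> str:
    return hmac.new(KEY, media, hashlib.sha256).hexdigest()

def verify(media: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(media), tag)

original = b"\x89PNG...frame data..."
tag = sign(original)

print(verify(original, tag))          # True: untouched
print(verify(original + b"x", tag))   # False: tampered
```

The design choice worth noting is that verification becomes a property of the content itself rather than a judgment each viewer must make, which is what Gupta means by moving from individual to systemic responsibility.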

“There may be difficult debates ahead around temporary amplification controls during national emergencies. While controversial, such measures reflect a broader tension between open digital ecosystems and public safety,” Gupta added.

Garry Singh, President of IIRIS, noted that while major platforms have implemented mechanisms to identify AI-generated propaganda, the speed of content moderation remains a significant challenge. “The mechanism to address complaints and remove bad content is slow, causing concerns of spreading before the content can be taken down,” Singh explained.

The potential risks extend far beyond misinformation, encompassing threats to life safety, financial security, resource allocation, and public opinion formation. As AI-generated content becomes increasingly sophisticated, the line between reality and manipulation continues to blur, creating unprecedented challenges for social platforms, regulators, and society at large.


9 Comments

  1. Michael N. Lopez

    The development of AI-powered propaganda is a worrying trend that could have far-reaching consequences for individual autonomy and the integrity of public discourse. Rigorous regulations and transparency measures are needed to address this challenge.

  2. Noah W. Martinez

    This highlights the double-edged nature of AI technology. While it can be a powerful tool for good, the potential for abuse in the realm of propaganda is deeply unsettling. Diligent efforts to stay ahead of these threats are crucial.

  3. William Q. Martinez

    The shift from broad messaging to personalized psychological manipulation is deeply unsettling. This development highlights the urgent need for effective detection methods and clear guidelines to mitigate the risks of AI-powered propaganda.

  4. The ability to target individuals with personalized propaganda is alarming. While AI advances bring many benefits, the potential for misuse in the wrong hands is worrying. Robust detection methods and clear policies are needed to mitigate these risks.

    • Lucas Garcia

      I agree. The evolution of AI-powered propaganda is a serious threat that needs to be addressed head-on. Proactive measures to regulate synthetic media and protect users are essential.

  5. Amelia Smith

    This issue speaks to the double-edged sword of technological progress. While AI presents many beneficial applications, the potential for malicious use in propaganda is a serious concern that must be proactively addressed.

    • William White

      I agree completely. The risks of AI-driven propaganda cannot be overlooked, and policymakers must act quickly to establish robust safeguards and accountability measures.

  6. Robert Rodriguez

    The personalized nature of AI-driven propaganda is particularly insidious. Effectively countering this will require a multifaceted approach, including robust content moderation, user education, and ongoing innovation in detection capabilities.

  7. Elijah Johnson

    This is a concerning development. AI-driven propaganda could be a powerful tool for manipulation and the erosion of truth. Proper labeling and accountability for synthetic media is critical to maintain trust and transparency.


© 2026 Disinformation Commission LLC. All rights reserved.