In a troubling development that highlights the growing menace of AI-generated political disinformation, fabricated images depicting former Israeli Prime Ministers Naftali Bennett and Yair Lapid alongside members of Arab political parties appeared this week on official Likud party social media accounts.

The manipulated images spread rapidly through WhatsApp and Telegram groups, reaching widespread public awareness before being identified as fake. This incident represents a calculated attempt to manipulate public perception rather than an isolated mishap, as politicians increasingly harness AI technology to create alternative realities for electoral advantage.

Political analysts have begun referring to such AI-generated content as “slopaganda”—a blend of “slop” and “propaganda”—describing realistic but fabricated content mass-produced by artificial intelligence. These sophisticated fakes are specifically designed to polarize voters, smear opponents, and disseminate disinformation at an unprecedented scale.

The power of these manipulations lies in the fundamental human tendency to trust visual evidence. In today’s information-saturated environment, few voters have the time or resources to verify the authenticity of every image they encounter, making visual disinformation particularly effective at shaping public opinion.

This phenomenon extends far beyond Israel’s borders. During recent U.S. elections, artificially generated audio mimicking President Joe Biden’s voice instructed voters to stay away from polling stations. In India, deepfake videos featuring Bollywood celebrities urged citizens to vote against Prime Minister Narendra Modi. In Slovakia, elections were marred by fabricated audio recordings that cast doubt on electoral integrity and were debunked only after they had spread widely.

With Israel potentially facing elections, the timing of this incident raises urgent questions about the country’s preparedness to combat sophisticated AI-driven electoral manipulation. Similar incidents in other countries have included fabricated videos of candidates withdrawing from races and even false declarations of surrender during conflicts.

In response to the fabricated images, Bennett’s campaign headquarters filed a police complaint while Lapid’s team sent a formal letter to the Central Elections Committee. However, these actions remain largely symbolic without substantive regulatory frameworks in place.

Legal experts suggest amending Israel’s election propaganda laws to require clear labeling of AI-generated content, though the political feasibility of passing such legislation remains questionable. Even with labeling requirements, the distinction between legitimately enhanced content and manipulated disinformation would remain challenging.

Israel’s Privacy Protection Authority has already established guidelines classifying deepfakes used to humiliate individuals as privacy infringements, which can constitute both civil and criminal offenses and may warrant compensation. While this represents progress, it fails to address the specific challenges of electoral disinformation.

Social media platforms have proven unreliable in policing such content. Research indicates that fewer than one-third of AI-generated images and videos across major platforms like Instagram, LinkedIn, Pinterest, TikTok, and YouTube receive appropriate labeling.

Legal scholars argue that in the absence of updated legislation, Israel’s Central Elections Committee should take immediate action by establishing a rule prohibiting the publication of AI-generated materials depicting political opponents as part of election propaganda. Such manipulations could be classified as unfair electoral interference under Section 13 of the Election Propaganda Law.

While politicians’ use of AI to enhance their own images raises ethical questions, generating false depictions of opponents crosses a critical line that demands regulatory intervention.

Without clear boundaries and enforcement mechanisms, experts warn that upcoming elections risk being decided not by the strength of ideas or policies, but by the sophistication of digital deception—potentially undermining the fundamental integrity of democratic processes in Israel and beyond.


8 Comments

  1. This highlights the importance of developing effective countermeasures against AI-generated disinformation. Fact-checking, digital forensics, and public education will be key to combating this threat.

    • Agreed. Strengthening media literacy and critical thinking skills in the public is crucial, so people can better identify manipulated content and seek out reliable information sources.

  2. Interesting to see how AI-generated propaganda is being used to manipulate political narratives. It’s a concerning trend that highlights the need for greater media literacy and fact-checking, especially around visual content.

    • Absolutely, the ability to create realistic but fabricated images at scale is a serious threat to informed democracy. We need robust safeguards and transparency around the use of these technologies.

  3. As someone who follows the mining and commodities space, I’m curious how this type of disinformation could impact public perceptions and policy decisions related to critical minerals and energy resources.

    • That’s a good point. Misleading narratives around sensitive topics like mining and energy could sway public opinion and skew policy discussions in problematic ways. Fact-based, impartial reporting is essential.

  4. Isabella Taylor

    While the use of AI in political propaganda is concerning, I’m hopeful that advancements in detection and mitigation techniques can help address this challenge over time. Maintaining a free and fair democratic process is essential.

  5. Isabella Thompson

    Disturbing to see how easily AI can be weaponized for political gain. Protecting the integrity of elections should be a top priority as these technologies become more sophisticated.
