Russian propaganda campaigns using AI-generated imagery have targeted Ukrainian cash-in-transit guards recently detained in Hungary, according to Hungarian fact-checkers who identified a coordinated disinformation operation.
The Hungarian fact-checking organization Vastagbőr revealed that Ripost, a tabloid controlled by Prime Minister Viktor Orbán’s Fidesz party, published artificial intelligence-generated images related to the detention of Ukrainian cash-in-transit personnel. These fabricated images were then amplified across social media platforms in what appears to be a sophisticated influence operation.
“What stood out immediately was the unusual engagement metrics,” a representative from Vastagbőr explained. “Ripost posts typically receive between 10 to 200 reactions, but these fabricated stories garnered over 48,000 interactions—with our analysis showing that approximately 99% came from bot accounts.”
The fact-checkers noted a distinct pattern in the bot accounts’ profiles. “Most displayed Romanian or Moldovan names, suggesting Russian operators are repurposing fake profiles originally created to influence elections in Moldova,” Vastagbőr stated. This recycling of digital assets points to an established disinformation infrastructure being deployed opportunistically against Ukraine.
The incident stems from a diplomatic flare-up that occurred on March 6, when Hungarian authorities detained seven Oschadbank cash-in-transit guards who were transporting currency and valuables from Austria to Ukraine. Hungarian officials published photos of the seized items and announced an investigation into possible “money laundering,” stating they intended to expel the detained Ukrainian personnel.
The detention prompted a sharp diplomatic response from Ukraine. Foreign Minister Andrii Sybiha characterized Hungary’s actions as “hostage-taking and state terrorism” and advised Ukrainian citizens to avoid travel to Hungary. The situation was partially defused when the seven Ukrainian workers returned home later that same evening.
Serhii Sydorenko, an editor at European Pravda, corroborated the findings regarding the AI-generated imagery, highlighting how quickly misinformation spread across multiple platforms before fact-checkers could intervene.
This incident represents the latest chapter in increasingly strained relations between Hungary and Ukraine. Prime Minister Orbán has maintained closer ties with Moscow than other European leaders since Russia’s full-scale invasion of Ukraine, frequently criticizing sanctions against Russia and delaying European Union aid packages to Kyiv.
Media and disinformation experts warn this case demonstrates how AI tools have accelerated the creation and dissemination of false narratives during geopolitical tensions. The technology enables propagandists to rapidly produce convincing fake images that can be deployed to shape public opinion before verification processes can catch up.
“What makes this particularly concerning is the coordination between state-controlled media outlets and bot networks,” noted one European disinformation researcher who requested anonymity. “It suggests a deliberate strategy to exploit minor diplomatic incidents to drive wedges between Ukraine and EU nations at a critical time.”
Hungarian-Ukrainian relations have been complicated by other issues, including disputes over the rights of the Hungarian minority in western Ukraine and Hungary’s energy deals with Russia. This latest incident and the accompanying disinformation campaign further complicate an already tense relationship between the neighboring countries.
The Hungarian fact-checkers have urged social media platforms to take stronger measures against coordinated inauthentic behavior, particularly when it involves AI-generated content that can be difficult for average users to identify as fake.