In a trend that reflects the growing sophistication of artificial intelligence, 2025 has witnessed a surge in AI-generated misinformation targeting sports figures, celebrities, and political events. These fabricated videos and images, often indistinguishable from genuine content, have reached millions across social media platforms despite being entirely fictional.

In March, a manipulated TikTok video claiming Egyptian football star Mohamed Salah had announced his departure from Liverpool went viral, accumulating hundreds of thousands of engagements. The video purported to show Salah blaming teammates Darwin Núñez and Diogo Jota for his decision to leave following Liverpool’s Champions League exit.

Fact-checkers quickly identified the deception. The visuals came from a genuine January 2025 post-match interview after Liverpool's victory over Lille, but had been digitally altered: mirrored and paired with manipulated audio. Although the clip carried a disclaimer acknowledging it was created with AI tools, its realistic appearance fooled many viewers.

The music industry became another target when a Facebook post circulated claiming Afrobeats star Wizkid had built a school offering free education. The post included what appeared to be photographic evidence of the “Wizkid FC School.” Investigation revealed the image was AI-generated, bearing Meta AI watermarks and scoring 98 percent on AI-detection tools. Visual anomalies, including misspelled signage, further confirmed its fabricated nature.

Political misinformation wasn’t far behind. In April, a TikTok video falsely claimed that Burkina Faso’s military leader Ibrahim Traoré had declared the country tax-free, eliminating all taxes on salaries, businesses, and foreign investments. The video, which garnered over 150,000 likes and 14,000 shares, featured a newscast-style voiceover over unrelated footage.

The claim contradicted official records and credible media sources. In reality, Burkina Faso’s 2025 Finance Act, passed in December 2024, had expanded the country’s tax base to include e-commerce taxation. The video was partially AI-generated, with visual and audio inconsistencies such as staccato movements and out-of-sync speech.

Following the tragic death of Liverpool footballer Diogo Jota, a viral video claimed that Barbadian singer Rihanna had released a tribute song. The nearly four-minute clip showed a woman presented as Rihanna in various settings, but it displayed the jerky movements and inconsistent gestures characteristic of AI generation. Fact-checkers confirmed that Rihanna's most recent solo release was "Lift Me Up" in October 2022 and that the supposed tribute appeared on none of her verified channels.

Infrastructure also became a target for AI-generated misinformation. A viral collage claimed a newly constructed N10 billion flyover in Lafia, Nasarawa State, collapsed just weeks after commissioning. The images were traced to an AI-generated video from a Facebook page known for synthetic content. The false claim gained traction by piggybacking on a real incident in Keffi, where part of a different flyover was damaged after being struck by an overloaded truck.

In August, a hyper-realistic AI-generated video showing luxury cars being ferried through floodwaters in Lekki, Lagos, sparked widespread attention. Despite being labeled as AI-generated satire, the clip amassed nearly one million views within days. Comments revealed many viewers believed it depicted an actual flooding incident—an assumption made more plausible by Lekki’s history of severe flooding.

That same month, another AI-generated video falsely purported to show the abandoned home of late Access Bank CEO Herbert Wigwe. The video, marked with a “Jester AI” watermark, originated from a comedian’s Facebook post as a generic philosophical skit with no reference to Wigwe. Archived footage of Wigwe’s actual residence confirmed the viral video misrepresented his property.

These incidents highlight a troubling reality: as AI-generated content becomes increasingly sophisticated, the line between fact and fiction continues to blur. Particularly concerning is that older and less digitally literate audiences often struggle to interpret AI labels or may not understand what “AI-generated” means—a vulnerability that misinformation campaigns are increasingly exploiting.


© 2026 Disinformation Commission LLC. All rights reserved.