
False Celebrity Death Claims Spark Growing Social Media Crisis

A disturbing trend is sweeping across social media platforms, with fabricated posts falsely claiming that well-known celebrities are dying of cancer or have suffered tragic losses in their families. These AI-generated hoaxes cause significant distress to the people they target while exposing critical flaws in how platforms handle misinformation.

Manchester City footballer Phil Foden recently found himself at the center of such malicious fabrications. Doctored images showing him in tears circulated online with captions claiming one of his children had died and another was battling cancer, allegations that were entirely false. The footballer sought legal counsel as the fabricated stories spread widely across social media platforms.

Rebecca Cooke, the mother of Foden’s children, was forced to publicly address the situation. “We are aware of the pages and accounts spreading these stories,” she said. “They are completely false and very disturbing. I don’t understand how people can make up these things about anyone, especially children. It’s sickening.”

This incident represents just one example of a much larger problem. An investigation has uncovered dozens of Facebook pages using artificial intelligence to create convincingly doctored images of celebrities paired with entirely false narratives. These pages, often boasting thousands of followers, use phrases like “verified information” and “just confirmed” to enhance their veneer of credibility.

One such page, UK Celeb Update, which has 42,000 followers, posted an AI-generated image of a well-known TV presenter with a bald head and the caption “Final Days of a Beloved Icon.” The post falsely claimed she was in the “heartbreaking final chapters of her life” while “bravely battling breast cancer” – none of which had been publicly confirmed by the celebrity. The post received over 7,000 likes and 1,700 comments, many expressing sympathy and well-wishes.

Another page, The British A List, with 26,000 followers, shared an artificially edited image showing a TV couple holding a baby scan with the caption suggesting they had been “quietly protecting the biggest secret of their lives.” The couple has never confirmed any pregnancy.

Football-focused pages, including one with 65,000 followers, have similarly used AI to generate false stories about Manchester City players, including fabricated “health battles” involving young children, loss of unborn children, and “top secret shock medical examinations.”

Social media expert Hannah O’Donoghue-Hobbs, founder of January 92 consultancy firm, explains why these posts are particularly problematic: “They’re deliberately designed to look credible, using AI-generated imagery, recognizable faces, emotionally charged language and vague ‘breaking news’ framing to stop people scrolling and trigger an instinctive reaction before critical thinking kicks in.”

The damage extends beyond simple misinformation. “The real harm isn’t just misinformation. It’s reputational and emotional damage,” O’Donoghue-Hobbs notes. “Announcing false terminal illnesses, deaths or pregnancies about real people is deeply distressing for the individuals involved and their families, and it erodes public trust more broadly.”

She points to social media algorithms as a key part of the problem, explaining that platforms currently “reward engagement above accuracy, meaning emotionally manipulative content is actively amplified.”

James Bore, a Chartered Security Professional, echoes these concerns, stating that Meta, which owns Facebook, is “taking no visible action” and that their approach “actively promotes and encourages” misinformation. “The algorithm is built to prioritize engagement, not veracity,” he explains.

When contacted, Meta directed inquiries to an online statement indicating they are “committed to fighting the spread of false information” and are making changes to how manipulated media is handled. The company says it adds “AI info” labels to content when it detects industry-standard AI image indicators or when users disclose they’re uploading AI-generated content.

Meta also claims that pages repeatedly sharing false information will see reduced distribution and removed advertising abilities. However, the investigation found that pages posting false content have been operating for months without apparent consequences.

As AI technology becomes increasingly sophisticated, the line between truth and deception grows ever more blurred. Without stronger platform safeguards and greater public awareness, this crisis of digital misinformation threatens to undermine public trust while causing real harm to those caught in its crosshairs.


