In a digital landscape where the lines between reality and fabrication continue to blur, artificial intelligence tools are now capable of creating convincing musical tributes attributed to famous artists—often without their knowledge or consent.
Recent examples include numerous emotional YouTube videos featuring AI-generated songs supposedly performed by celebrities like Adele, Ed Sheeran, and Justin Bieber, mourning the death of right-wing activist Charlie Kirk. These videos have collectively amassed millions of views, even though the vocal performances bear little resemblance to the artists they imitate.
“Rest in peace, Charlie Kirk! The angels sing your name. Your story’s written in the stars, a fire that won’t wane,” intones one AI voice in a tribute video showing Kirk, a prominent ally of President Donald Trump.
Many viewers appear unaware that these emotional tributes are AI-generated, as evidenced by thousands of comments thanking the artists for songs they never actually performed. While YouTube policy requires creators to disclose when content has been altered or synthetically created using AI tools, these disclosures are often buried in video descriptions where viewers are unlikely to notice them unless they expand the text.
“I’m concerned that what made the internet so cool to begin with—really weird, creative people doing things they’re passionate about for fun, is gone. It’s been replaced by AI slop created by grifters aiming to make money,” explains Alex Mahadevan from the nonprofit media institute Poynter.
The proliferation of such content highlights a growing concern about passive consumption in the digital age. “We’re becoming passive consumers of ‘content’ and not active, conscious digital citizens,” Mahadevan adds.
These videos are just one example of how widely accessible AI music generators are transforming ordinary users into virtual musicians. Platforms like Suno advertise their ability to “make any song you can imagine,” offering suggestions such as “make a jazz song about watering my plants” or “make a house song about quitting your job.”
When tested, these tools can generate multiple song options within seconds based on simple text prompts. The technology has advanced to the point where AI “bands” like The Velvet Sundown have established verified Spotify accounts with hundreds of thousands of listeners. The group describes itself as “not quite human. Not quite machine.”
This trend raises significant questions about copyright protection for vocal and visual likenesses. “I absolutely think that someone’s likeness should be protected from replication in AI tools. That goes for dead people, too,” says Mahadevan.
Lucas Hansen, co-founder of nonprofit CivAI, suggests that while outright bans on likeness generation are unlikely, legal restrictions on commercialization may be forthcoming. “There might also be restrictions on distribution, but existing laws are much less strict towards non-monetized content,” he notes.
The music industry has already begun taking legal action. In June, the Recording Industry Association of America announced that major record companies had sued two music generators, including Suno, alleging copyright infringement.
Last year, more than 200 artists, including Katy Perry and Nicki Minaj, signed an open letter to AI developers and technology platforms, warning that training tools on existing songs “will degrade the value of our work and prevent us from being fairly compensated.”
The letter described the situation as an “assault on human creativity” that “must be stopped,” calling for protection “against the predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem.”
As AI technology continues to advance, the tension between creative innovation and intellectual property rights will likely intensify. Regulators, platforms, and content creators are left to navigate an environment in which distinguishing authentic content from synthetic content grows more challenging by the day.