New Zealand authorities are raising alarms over the growing threat of AI-generated fake news, particularly during natural disasters when accurate information becomes critical for public safety.
Following the deadly Mount Maunganui landslide that claimed six lives, the National Emergency Management Agency (Nema) issued warnings about artificial intelligence-created content circulating on social media platforms. Nema emphasized that mainstream media remains their primary channel for disseminating reliable emergency information.
The problem resurfaced last month when severe weather devastated northern parts of the Gisborne district. Fake news and AI-generated images spread rapidly across social media from pages masquerading as legitimate news outlets. The situation became so problematic that Tairāwhiti Civil Defence and Gisborne District Council published their own alerts, including screenshots of false claims, to help residents identify misinformation.
These authorities advised the public to pause before sharing content, especially posts designed to provoke anger or panic. They encouraged readers to verify sources and report suspicious content. Troublingly, the warning posts received significantly less engagement than the fake news they were attempting to debunk.
“Gone are the days when only the technologically illiterate were fooled by digitally manipulated images,” one official noted. The rapid evolution of AI technology has outpaced the public’s ability to detect sophisticated fakes, creating a dangerous information environment during crises.
An investigation by RNZ explainer editor Nik Dirga for the Australian Associated Press revealed a Facebook page called “NZ News Hub” with nearly 5,000 followers, including some politicians, before it was apparently removed from the platform. Among its disturbing content was an AI-manipulated video that animated a still photo of 15-year-old Mount Maunganui landslide victim Sharon Maccanico, making it appear as though she was dancing—a particularly egregious ethical violation.
Beyond the spread of misinformation, these fake news outlets frequently incorporate original reporting and images stolen from legitimate media organizations before manipulating them with AI tools. Currently, social media platforms do not require these pages to label AI-generated content or disclose their sources. They operate outside any independent regulatory framework that might enforce ethical guidelines or codes of conduct.
This proliferation of counterfeit news stories and images, often designed to imitate mainstream media outlets, contributes significantly to the erosion of public trust in journalism overall. While legitimate news organizations certainly have their flaws, they differ fundamentally from these AI-content farms in their accountability structures.
Established media outlets maintain editorial guidelines—including specific policies governing AI usage—that are publicly available and allow audiences to hold them accountable. When mistakes occur, legitimate news organizations issue corrections and face scrutiny from other media outlets, creating a self-regulating ecosystem.
By contrast, fake news pages on social media platforms lack transparency and accountability. Many are managed by anonymous accounts whose primary goal is accumulating attention, engagement, and followers through any means necessary—even deliberately misleading or enraging audiences who may be consuming the content in good faith.
As AI technology continues to advance, distinguishing authentic from fabricated content becomes increasingly difficult for the average social media user. The trend is particularly dangerous during emergencies, when accurate information can mean the difference between safety and harm, and it raises urgent questions about how social media platforms should regulate AI-generated content at the moments when communities are most vulnerable.