The erosion of truth in an AI-dominated world has accelerated in recent days, highlighted by a diplomatic incident between Donald Trump and Iran that exemplifies our new relationship with reality.
This week, Trump posted an ultimatum on Truth Social threatening to “hit and obliterate” Iran’s power plants if the Strait of Hormuz wasn’t reopened within 48 hours. Iran’s response was defiant, suggesting a country with little to lose in the confrontation.
Hours later, Trump posted again, claiming “VERY GOOD AND PRODUCTIVE CONVERSATIONS” had taken place with Iran. “I didn’t call them. They called me,” he boasted with characteristic bravado.
Iranian officials quickly contradicted this account, stating unequivocally that there had been no call, no negotiations, and that Trump had “backed down out of fear.” Trump subsequently deleted his post, leaving two entirely different versions of events hanging in the digital ether.
This incident reflects our new normal – competing realities existing simultaneously, with audiences simply selecting the version that aligns with their existing beliefs. The most troubling aspect isn’t the contradiction itself but our collective indifference to it. Society has grown accustomed to Trump’s pattern of posting, retracting, and reshaping reality in real time, effectively raising the threshold for what counts as a demonstrable falsehood.
“Fake news” was never simply about questioning media reliability. It systematically trained audiences to doubt everything, creating an environment where, when actual misinformation arrives, the frameworks for evaluating truth have already been dismantled.
The Middle East conflict has become a testing ground for AI-generated content at an unprecedented scale. Just last week, the internet was consumed by debates over whether videos of Benjamin Netanyahu were AI-generated, raising questions about the Israeli prime minister’s wellbeing.
Even respected journalists have shared footage only to delete it later with apologies when it was revealed to be fabricated. The situation has become so dire that users on X (formerly Twitter) regularly ask “Grok,” the platform’s AI tool, to verify the authenticity of posts – essentially asking one algorithm to evaluate another.
While social media grapples with AI-generated misinformation, corporations are embracing the technology for more tangible purposes. Atlassian recently cut 1,600 jobs – approximately 10% of its workforce – with CEO Mike Cannon-Brookes directly citing AI capabilities as the reason. “We need fewer people in certain roles because AI now does the work,” he acknowledged plainly.
The market response was telling: Atlassian’s stock rose on the announcement of the layoffs. The pattern is repeating across industries, with AI-related job cuts pushing U.S. job losses beyond one million in 2025 and 2026 projected to be worse. Australian workers are increasingly vulnerable to the same trend.
Women face a disproportionate impact from these technological shifts. They are overrepresented in the administrative, communications, and coordination roles that AI is consuming first – not because of any difference in capability, but because these historically undervalued positions now lack protection against automation. There is a bitter irony in jobs never deemed “important enough” to compensate properly suddenly being important enough to automate.
The convergence of these trends has created a perfect storm for truth and economic stability. War zone footage cannot be trusted. Presidential communications contain verifiable falsehoods. News articles themselves may be AI-generated. And many media professionals who might have helped navigate this landscape have received redundancy notices.
This situation didn’t materialize overnight. Society arrived here gradually through each tolerated lie, each excused deepfake, each dismissal of a deleted post as inconsequential. Advanced technology merely accelerated what was already underway. The pressing question isn’t whether we can reverse course, but whether we can acknowledge what’s happening with clear-eyed honesty.
As these dynamics continue to unfold, the digital world will not pause for society to adjust. It will simply continue posting, deleting, and posting again in an endless cycle that further blurs the boundaries between fact and fiction.