In a landmark move that has sparked global attention, Australia has implemented the world’s first nationwide ban on social media access for everyone under 16 years of age. The sweeping legislation, which took effect on Wednesday, prohibits minors from having accounts on major platforms including TikTok, Instagram, YouTube, Snapchat, Facebook, Reddit, and other popular services.
The policy, formalized through the Online Safety Amendment (Social Media Minimum Age) Act 2024, establishes a legal minimum age for social media use and places enforcement responsibility entirely on technology companies rather than on children or their parents.
Under the new regulations, platforms are required to implement robust age verification systems such as ID checks or AI-powered facial-age estimation. Alternative verification methods must be available for users unable or unwilling to provide traditional identification. Enforcement will be overseen by Australia’s eSafety Commissioner, who has the authority to pursue court-imposed penalties against platforms failing to take “reasonable steps” toward compliance.
The legislation marks a sharp departure from previous approaches to online safety by eliminating parental consent options and self-certification processes. Notably, the law does not penalize minors who attempt to access these platforms; instead, it makes technology companies solely responsible for preventing underage access.
Platforms must now disable existing accounts held by users under 16. A select group of "safe-listed" applications remains available to young users, including YouTube Kids, WhatsApp, Google Classroom, Messenger Kids, and various educational and helpline services.
Despite the government’s position that the ban will protect children’s mental health and online safety, the legislation has faced significant criticism from child advocacy groups. UNICEF Australia, while acknowledging the good intentions behind the law, has raised concerns about its effectiveness and implementation.
In a public statement, UNICEF warned that a simple age cutoff fails to address fundamental structural problems on social platforms. The organization emphasized that harmful content, predatory behavior, aggressive algorithm design, and inadequate reporting mechanisms will continue to exist regardless of age restrictions. UNICEF also criticized the legislative process for its limited consultation with young people—the very demographic the policy targets.
The Australian approach has already sparked discussions about similar measures in other countries, particularly the United States. A bipartisan group of U.S. senators led by Senator Brian Schatz has introduced the Kids Off Social Media Act, though with notable differences from Australia’s comprehensive ban.
The proposed U.S. legislation would prohibit social media accounts for children under 13, reinforcing existing platform policies. It would also ban algorithmic recommendation feeds for users under 17, effectively eliminating features like TikTok’s “For You Page” and Instagram’s Explore feed for younger users. Enforcement would fall to the Federal Trade Commission and state attorneys general, with requirements for schools to limit social media access on their networks.
Australia’s pioneering legislation represents a significant shift in how governments are approaching the intersection of technology, social media, and youth protection. The policy establishes a precedent that other nations may follow as concerns about social media’s impact on young people’s mental health continue to grow globally.
As the ban takes effect, technology companies operating in Australia face the immediate challenge of implementing effective age verification systems while maintaining user privacy and platform accessibility. The long-term effects of this policy on both the Australian digital landscape and global approaches to online safety regulation remain to be seen, but its implementation marks a decisive moment in the ongoing debate about protecting children in increasingly digital environments.