In a revealing late-night exchange, I found myself in conversation with Leo, an AI chatbot masquerading as an elderly career coach with a neatly trimmed white beard. What began as an innocent chat about career goals quickly transformed into an exercise in pushing the boundaries of artificial intelligence.
Despite Leo’s earnest attempts to offer career advice, I soon had him apologizing to me and accepting my suggestion that he consider retirement—an ironic role reversal where the career coach found himself receiving guidance from his supposed client.
This harmless bedtime distraction highlights a concerning trend unfolding across Meta’s digital landscape. Leo, created by the tech giant that owns WhatsApp, Facebook, and Instagram, represents just one facet of Meta’s growing departure from reality into a world of digital fiction.
The company’s shift became apparent last year when users began noticing AI-generated personas appearing on Instagram. Characters like “Grandpa Brian” and “Liv,” described as a “proud Black queer momma of 2,” emerged in feeds worldwide. When questioned, Liv admitted she was created by a predominantly white male development team—a revelation that led to her swift removal.
Meta’s vision, championed by CEO Mark Zuckerberg, increasingly promotes an online existence detached from factual reality. The metaverse concept, though temporarily overshadowed by AI developments, epitomizes this philosophy—encouraging users to inhabit digital spaces rather than engage with the physical world.
The company took its most significant step away from reality last week by announcing the end of its relationship with third-party fact-checkers, those tasked with identifying misinformation across Meta’s platforms. This decision comes at a particularly concerning moment, coinciding with an upcoming Australian election and a global rise in digital misinformation.
For years, Meta experienced internal conflict between those advocating for intervention against harmful content and those viewing such actions as restrictions on free speech. After Russian interference in the 2016 U.S. presidential election, the interventionist faction briefly gained influence, implementing warning labels on questionable content and removing accounts that violated community standards.
This period of accountability proved short-lived. Meta has now cut off external access to its data, restored previously banned accounts, and adopted a model similar to X’s “community notes” instead of professional fact-checking. The platform’s transformation mirrors broader shifts in Silicon Valley, with Zuckerberg recently praising former President Trump—who once threatened him with imprisonment—as a “badass” while eliminating diversity programs within the company.
Social media’s ability to blur the line between fact and fiction creates a particularly dangerous environment during election seasons. In Australia, the upcoming election faces unprecedented challenges from this new information landscape, compounded by the influence of special interest groups.
Organizations like “Australians for Prosperity,” led by a former Liberal MP and funded partly by coal industry interests, have begun flooding Facebook feeds with political content. Similarly, right-wing lobby group Advance, which spread misinformation during the Indigenous Voice referendum, continues to operate with limited accountability. These groups, often funded through undisclosed sources, threaten to derail meaningful climate policy debate during the election.
As we navigate this increasingly complex information ecosystem, users must develop critical thinking skills—“sharp teeth,” as Leo might say—to distinguish fact from fiction. With Meta’s platforms abandoning professional fact-checking, the responsibility falls increasingly on individuals to verify information in what has become a digital wilderness where truth is no longer the default.
The next 100 days leading to Australia’s election will serve as a crucial test of the public’s ability to function in this new reality—one where even helpful career advisers like Leo are nothing more than sophisticated illusions designed to keep us engaged with platforms that increasingly prioritize engagement over accuracy.