A recent wave of AI-generated content is challenging the foundations of social media and raising serious concerns about digital authenticity.
The phenomenon known as “AI slop” – artificially created images and videos designed to maximize engagement – has gained significant traction in recent weeks. One viral example featured supposed dashcam images of a 12-year-old girl driving to save her sick puppy. Heartwarming as the story was, observant users noticed peculiar details: a right-hand steering wheel and missing dashboard components. The image had appeared on Facebook with no traceable source.
This relatively harmless example has been overshadowed by more sophisticated content following the release of OpenAI’s Sora 2, an advanced text-to-video model that became the most downloaded free app in Apple’s App Store within a week of launch. Users have created everything from obviously fabricated content, such as Pope John Paul II wrestling Tupac, to more convincing scenarios, such as a boy swept away by a tornado or homeless people digitally inserted into footage of private homes.
Sam Altman, OpenAI’s CEO, has positioned these videos as “fun and new” experiences that simultaneously help train AI systems to understand the physical world. Critics, however, see them as potentially devastating to social media’s core purpose of fostering genuine human connection.
“For years, the internet has been a place where people go to feel connected. But if everything online starts to feel fake, and our For You pages are all Sora-generated videos, people will start retreating back into what’s physically provable,” says Kashyap Rajesh, a vice president at youth organization Encode.
The pursuit of realistic AI-generated visuals has become a major focus for tech companies. Meta, Google, and ByteDance have all released competing video generation platforms this year. These companies need vast amounts of user-created content as training data, creating what industry observers describe as a flywheel effect – greater usage improves the technology, which then attracts more users.
Some AI video content has found significant success. The Spanish-language series “Gnomo Palomo,” featuring a GoPro-wearing gnome on magical adventures, has accumulated hundreds of thousands of followers in just four months. Video game adaptations of the Italian Brainrot cinematic universe have broken records on platforms like Roblox and Fortnite.
However, Ben Colman, CEO of deepfake detection company Reality Defender, warns this success may be short-lived: “I think history has proven this kind of race to the bottom in terms of quality of content tends to be negative for the platforms themselves.” He points to MySpace’s decline after prioritizing advertising over user experience.
The societal implications extend beyond platform economics. Critics argue that widespread AI video threatens our collective understanding of reality. Videos once served as proof of events – like the footage of George Floyd’s killing that sparked global protests – but now real events may be dismissed as fake, while fabricated scenes gain credibility.
“It’s making a lot of our feeds high-noise, low-trust spaces, where every emotional moment becomes suspect,” Rajesh explains. “It creates this low-level paranoia within people that kills the spontaneity and magic of social media to begin with.”
While Sora videos include watermarks indicating their AI origins, tools already exist to add or remove such identifiers. This enables concerning applications like fake dashcam footage for insurance fraud. Colman’s team demonstrated they could create AI impersonations of celebrities that appeared to originate from the real individuals themselves.
In response to this digital environment, some people are abandoning smartphones entirely. Grant Besner co-organized “Month Offline,” a program encouraging participants to disconnect from their devices for thirty days. Promotional posters around Washington, DC, highlighted the growing discontent: “fake images of real people, real images of fake people, discontent with content… Ditch the doomscroll.”
Other initiatives include the Aspen Institute’s “Airplane Mode” gathering and former presidential candidate Andrew Yang’s phone-free parties in New York. Yang has also launched Noble Mobile, a phone plan that reimburses users for unused data.
As Besner suggests, the arrival of hyper-realistic AI video “may be the breaking point where humans kind of reclaim some of their agency” in the digital world – potentially sparking a movement toward more authentic, meaningful connections beyond the increasingly untrustworthy realm of social media.