Deepfakes and Cheap Fakes: Technical Solutions Alone Won’t Solve the Problem, Research Finds
A comprehensive report by Data & Society researchers Britt Paris and Joan Donovan argues that combating manipulated audio-visual content requires more than technological fixes. Addressing the growing threat of deepfakes and what they term "cheap fakes," the researchers warn, demands technical innovation paired with social policy reform.
Deepfakes represent a sophisticated form of audio-visual manipulation created using experimental machine learning techniques. Recently, a strikingly realistic video showing comedian Bill Hader morphing into Tom Cruise went viral on YouTube, demonstrating the technology’s evolving capabilities. In another high-profile instance, artists Bill Posters and Daniel Howe, in collaboration with advertising company Canny, created a deepfake video of Facebook founder Mark Zuckerberg delivering an ominous speech about Facebook’s power—marking what many consider the first prominent “white hat” deepfake operation.
But the researchers emphasize that not all manipulated content requires advanced AI. “Cheap fakes,” a term coined in the report, can be created using accessible software like Photoshop or even no software at all—through techniques such as using lookalikes, re-contextualizing footage, or simply altering video playback speeds.
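The report's point about accessibility is easy to demonstrate. The sketch below is purely illustrative and not drawn from the report: it shows how a single ffmpeg invocation can re-time a clip to 75% speed, the same class of trivial edit behind widely shared slowed-down political videos. The file names and the `slow_down` helper are hypothetical, and ffmpeg is assumed to be installed.

```python
# Illustrative only: re-times a clip with ffmpeg to produce the kind of
# playback-speed "cheap fake" the report describes. File names are
# hypothetical; assumes the ffmpeg binary is on PATH.
import subprocess

def slow_down(src: str, dst: str, factor: float = 0.75) -> None:
    """Slow both video and audio to `factor` of the original speed."""
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            "-filter_complex",
            # setpts stretches video timestamps by 1/factor;
            # atempo re-times the audio (valid per-pass range is 0.5-2.0).
            f"[0:v]setpts={1 / factor}*PTS[v];[0:a]atempo={factor}[a]",
            "-map", "[v]", "-map", "[a]",
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    slow_down("speech.mp4", "speech_slowed.mp4")
```

Re-timing the audio alongside the video is what keeps the speech intelligible, which is precisely why edits this simple can pass casual inspection.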
Paris and Donovan place these emerging technologies within a broader historical context of media manipulation. They argue that technical solutions alone, particularly AI-driven content filters, could create new problems while attempting to solve existing ones. “They make things better for some but could make things worse for others,” the researchers note. “Designing new technical models creates openings for companies to capture all sorts of images and create a repository of online life.”
The report highlights concerning disparities in who bears the brunt of harm from manipulated media. “Those without the power to negotiate truth—including people of color, women, and the LGBTQA+ community—will be left vulnerable to increased harms,” the authors warn. This power imbalance underscores why purely technological approaches are insufficient.
Instead, the researchers advocate for a multi-faceted approach that includes legal frameworks to prosecute bad actors and prevent the spread of manipulated content. The report calls for policy solutions that penalize harmful individual behavior while also implementing federal measures to hold corporations accountable for the societal impact of their platforms and technologies.
“It’s a massive project, but we need to find solutions that are social as well as political so people without power aren’t left out of the equation,” the researchers emphasize. They maintain a pragmatic outlook, acknowledging that “Deepfakes aren’t going to disappear,” and that mitigation strategies must focus on limiting harm rather than eliminating the problem entirely.
The report concludes with a call for a deeper understanding of how evidence and truth are socially constructed: “Limiting the harm of AV manipulation will require an understanding of the history of evidence, and the social processes that produce truth, in order to avoid new consolidations of power for those who can claim exclusive expertise.”
This research emerges amid broader scrutiny of the technology industry. The House Judiciary Antitrust Subcommittee has requested extensive documentation, including private emails, from Amazon, Facebook, Alphabet, and Apple as part of ongoing antitrust investigations, and the UK's National Cyber Security Centre has issued reports highlighting significant ransomware, phishing, and supply chain threats facing businesses. On a different front, Mozilla, Coil, and Creative Commons recently launched a $100 million "Grant for the Web" initiative to promote innovation in web monetization.
As manipulated media becomes increasingly sophisticated and accessible, the research suggests that addressing these challenges will require collaboration across technical, social, legal, and policy domains.