The Deepfake Dilemma: A Journalist’s Perspective on Truth in the AI Era

Fifty years ago this month, I covered my first political campaign. In the decades that followed, I came to see most politicians as among the most charming people on the planet. Many are incredibly likable. They could sell eggs to a hen. That’s their business.

I was taught that there were two sides to every story. Both were usually somewhat plausible. My job was to present the facts and let the readers decide.

Years ago, the top commissioner in the Pennsylvania county I covered, the state’s richest, was furious that a subordinate had revealed an embarrassing detail about the county budget to the press.

“All I did was tell the truth,” the subordinate explained, as the commissioner recounted to me later.

The commissioner—a lawyer—stood up, walked over to a window and replied, “Don’t you know there are at least five different ways to tell the truth?” He paused, then added, “I bet you never heard it put quite that way, did you?”

This memory resonates deeply today as we confront a far more troubling challenge to truth: deepfake videos. If you think American politics are already shattered, this powerful technology threatens to inflict even greater damage on public discourse.

The combination of compelling video storytelling with the desperation to win elections creates a dangerous incentive for grand fakery. Political campaigns have evolved dramatically over the decades. Old-time political bosses passed out turkeys during holidays. Later generations handed out literature and produced issue-oriented “white papers” on hot topics.

Those practices now seem quaint. Today’s political operatives need only an AI creator with a goal: make it appear as though someone said something they never actually said.

The threat extends far beyond politics. Consumers will increasingly face sophisticated deepfake sales videos. Fake celebrity endorsements will promote bogus products. Investment scammers will deploy realistic AI-generated content. Both reputations and life savings stand to be devastated by these technological deceptions.

As the New York Times recently warned in a headline, “A.I. videos are so good you can no longer trust your eyes.” The publication reported that “any video you see on an app that involves scrolling through short videos, such as TikTok, Instagram’s Reels, YouTube Shorts and Snapchat now [has] a high likelihood of being fake.”

In previous eras, disinformation campaigns were typically handled through intermediaries to maintain plausible deniability. But that norm is changing. Recently, a deepfake video targeting Senator Charles Schumer was proudly claimed by the National Republican Senatorial Committee. As Washington Free Beacon reporter Jon Levine observed: “We are [in] a terrifying new world.”

The situation appears poised to worsen with OpenAI’s pending release of Sora, video-generation software that can digitize users’ faces and voices. Industry observers describe it as a game changer because of how sharp its output looks and how quickly it can produce convincing fakes, reportedly in just minutes.

Legislative efforts to address the problem are underway but remain limited. Earlier this year, Congress passed the Take It Down Act, which targets the nonconsensual publication of intimate images, including AI-generated deepfakes, and requires platforms to remove them on request. Denmark has gone further, recently passing a law that effectively allows citizens to copyright their face, voice, and body to prevent unauthorized use.

For the average person, protection begins with skepticism. Assume any video could be fake. Look carefully at whether words sync perfectly with a speaker’s mouth, though even this telltale sign is becoming less reliable as AI technology advances.

When encountering product promotions in videos, avoid clicking embedded links. Instead, independently research both the product and the company. The results will often surprise you—for better or worse.

In this new era where seeing is no longer believing, truth demands more than honesty. It requires constant vigilance from both media professionals and citizens.

If that Pennsylvania politician from years ago had five ways to tell the truth, imagine how many more ways—both legitimate and deceptive—exist today. The challenge for journalists and citizens alike is to navigate this increasingly complex landscape while preserving our shared understanding of reality.
