In an era where AI-generated content floods social media platforms, tech companies are struggling to establish systems that help users distinguish between real and artificial imagery. The industry’s most prominent effort, C2PA (Coalition for Content Provenance and Authenticity), is facing significant challenges despite backing from major tech companies.

C2PA was designed as a metadata standard that embeds information about a photo or video’s origin, editing history, and creation process. Spearheaded by Adobe and supported by companies including Microsoft, Meta, OpenAI, and Google, the system aims to provide transparency about content authenticity by tracking a file’s journey from capture through any subsequent modifications.

“At the point that you take a picture on a camera, you upload that image into Photoshop, all of these instances would be recorded in the metadata,” explains Jess Weatherbed, who covers creative tools for The Verge. “And then as a two-part process, all of that information could then hypothetically be read by online platforms.”
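To make the two-part process Weatherbed describes concrete, here is a minimal toy sketch in Python of the underlying idea: a manifest records the tool and the action taken, is bound to the exact image bytes by a hash, and is signed so later tampering is detectable. This is an illustration of the general concept only, not the real C2PA format, which packages manifests in JUMBF containers and signs them with certificate chains; the key, field names, and values below are hypothetical.

    import hashlib
    import hmac
    import json

    # Stand-in signing key; real C2PA manifests use X.509 certificate chains.
    SIGNING_KEY = b"demo-signing-key"

    def make_manifest(image_bytes: bytes, tool: str, action: str) -> dict:
        """Part one: build a signed provenance record bound to the image bytes."""
        manifest = {
            "claim_generator": tool,          # e.g. a camera model or "Photoshop"
            "actions": [{"action": action}],  # e.g. "created" or "edited"
            "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return manifest

    def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
        """Part two: a platform re-checks the signature and the content hash."""
        claim = dict(manifest)
        signature = claim.pop("signature")
        payload = json.dumps(claim, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(signature, expected)
                and claim["content_hash"] == hashlib.sha256(image_bytes).hexdigest())

    photo = b"...raw image bytes..."
    record = make_manifest(photo, "ExampleCam", "created")
    assert verify_manifest(photo, record)                  # intact: passes
    assert not verify_manifest(photo + b"tamper", record)  # edited bytes: fails

The catch is that this record travels alongside the file, so anything that drops it, deliberately or during a routine re-encode, silently erases the provenance.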

The system’s proponents claim the metadata is tamper-proof, but evidence suggests otherwise: even OpenAI, a member of the C2PA steering committee, has acknowledged that the metadata can be easily stripped, sometimes accidentally, during the normal upload process to social platforms.
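Part of why stripping is so easy is that in JPEG files the C2PA manifest travels in ordinary APP11 marker segments (as JUMBF boxes), which any re-encoder can simply omit. Here is a simplified sketch, assuming a well-formed JPEG and a hypothetical filename, that walks a file’s marker segments and reports which APPn segments it still carries:

    import struct

    def list_jpeg_app_segments(path: str) -> list:
        """Walk a JPEG's marker segments and report the APPn segments found.

        Simplified: assumes a well-formed file and stops at start-of-scan.
        C2PA manifests are carried in APP11 (0xFFEB) segments as JUMBF boxes.
        """
        segments = []
        with open(path, "rb") as f:
            if f.read(2) != b"\xff\xd8":  # every JPEG opens with the SOI marker
                raise ValueError("not a JPEG")
            while True:
                marker = f.read(2)
                if len(marker) < 2 or marker[0] != 0xFF:
                    break  # truncated or malformed stream
                if marker[1] == 0xDA:  # SOS: compressed image data follows
                    break
                (length,) = struct.unpack(">H", f.read(2))
                if 0xE0 <= marker[1] <= 0xEF:  # APP0..APP15
                    segments.append((f"APP{marker[1] - 0xE0}", length))
                f.seek(length - 2, 1)  # skip the rest of the segment payload
        return segments

    print(list_jpeg_app_segments("photo.jpg"))  # hypothetical filename

A file fresh from a C2PA-enabled camera or a Content Credentials export would list an APP11 entry here; after a typical platform round-trip it usually will not.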

Camera manufacturers including Sony, Nikon, and Leica have embraced the standard in newer models, but there’s little movement to update existing cameras. Meanwhile, Apple, perhaps the most influential camera maker given the iPhone’s ubiquity, has remained conspicuously absent from the initiative.

The distribution side presents even greater obstacles. Even when content is properly labeled at creation, social platforms often strip this metadata during uploads. While Instagram, LinkedIn, and Threads claim to support the standard, implementation has been inconsistent at best.
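To illustrate how an upload pipeline loses the data without anyone intending it, here is a minimal sketch of a platform-style re-encode using Pillow (assumed installed; filenames hypothetical). Decoding to pixels and writing a fresh JPEG does not carry application segments over by default, so the APP11 data holding the C2PA manifest is discarded, along with EXIF unless it is explicitly re-attached:

    from PIL import Image  # Pillow, assumed installed

    # Many platforms resize and recompress on upload. No metadata is passed
    # to save(), so EXIF and the APP11 segments carrying the C2PA manifest
    # are simply not written into the new file.
    img = Image.open("original_with_c2pa.jpg")           # hypothetical input
    img = img.resize((img.width // 2, img.height // 2))  # typical downscale
    img.save("reencoded.jpg", quality=85)

Running the APP-segment walker from the previous sketch on both files would show APP11 present in the original and absent from the re-encode.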

“Unless they can all come to an agreement, every platform, literally every platform that we access and use online… there needs to be that uniform, total uniform conformity for a system like this to actually make a difference,” Weatherbed notes.

The absence of key players further undermines the system’s effectiveness. Twitter (now X) was a founding member of the initiative but abandoned it after Elon Musk’s acquisition. TikTok’s participation has been limited, and YouTube’s implementation remains spotty despite parent company Google’s involvement with both C2PA and its own SynthID technology.

Beyond technical challenges, there’s a fundamental tension between labeling content as AI-generated and maintaining its perceived value. Many creators resist such labels, believing they diminish their work. Platforms, meanwhile, have financial incentives to avoid stigmatizing AI content they’re increasingly promoting and profiting from.

Instagram head Adam Mosseri recently acknowledged this new reality in a startling admission: “For most of my life, I could safely assume photographs or videos were largely accurate captures of moments that happened. This is clearly no longer the case and it’s going to take us years to adapt. We’re going to move from assuming what we see as real by default to starting with skepticism.”

This shift has profound implications for society, particularly regarding documentation of important events like protests or government actions. The ability to trust visual evidence has been foundational to social movements and public accountability.

“We’re in a position now where there’s more online than we’ve ever seen because everything is being funneled out,” Weatherbed explains. “Why would they want to harm that profit stream, effectively, by having to slam on the brakes of development until they can figure out how they are going to effectively be able to call out when deepfakes are proving to be a problem?”

The challenge is compounded by bad-faith actors deliberately creating misinformation, including government entities. Recent examples of AI-manipulated imagery from U.S. government accounts highlight the urgent need for reliable authentication systems.

Without significant regulatory pressure or consumer demand, the outlook appears bleak. “I would say this has failed,” Weatherbed concludes about C2PA. “It’s dead in the water. It’s never going to get to a universal solution.”

As AI technology continues advancing, establishing trust in digital media will likely require a combination of improved technical standards, platform cooperation, regulatory frameworks, and a fundamental shift in how users evaluate visual information—a complex challenge with no simple solution in sight.
