As artificial intelligence rapidly transforms industries worldwide, the news media sector finds itself at a critical crossroads, balancing AI’s benefits against significant ethical concerns and workforce implications. This tension has reached a boiling point at ProPublica, where journalists recently took to picket lines in what could become the first strike in journalism history with AI as the central issue.

The ProPublica dispute highlights a growing divide between management and journalists over how AI should be implemented in newsrooms. Union members, represented by the NewsGuild of New York, have voted overwhelmingly to authorize a strike if their demands aren’t met, seeking contractual guarantees about AI disclosure and the role of human journalists—protections increasingly common in the industry.

“It feels to me pretty monumental when we think about the trajectory of AI and journalism,” said Alex Mahadevan, an expert at the Poynter Institute journalism think tank.

ProPublica has resisted these contractual limitations, with company spokesman Tyson Evans explaining they’re “approaching AI with both curiosity and skepticism” but believe it would be “a mistake to freeze editorial decisions in a contract that will last years,” particularly as technology evolves so rapidly.

The reluctance to commit to specific AI policies reflects broader industry uncertainty. According to Trusting News, less than half of U.S. news organizations have public AI policies, despite research showing consumers overwhelmingly want transparency when AI is used in news creation.

AI has undeniably improved certain aspects of journalism. News organizations have employed it to analyze vast document collections like the Epstein files, generate headline suggestions, summarize stories, and transcribe interviews—tasks that previously consumed significant time and resources.

However, the rush to implement these tools has led to embarrassing errors. Bloomberg has issued multiple corrections for mistakes in AI-generated summaries. Business Insider and Wired were forced to remove articles attributed to a fabricated author. The Los Angeles Times encountered problems with AI-generated opinion pieces, while Ars Technica acknowledged AI had fabricated quotes in its content.

Jon Schleuss, president of The NewsGuild-CWA, notes that 57 of 283 contracts at U.S. news organizations now contain language related to artificial intelligence, with the first such agreements appearing just last year. The Associated Press was among the early adopters incorporating AI provisions.

“I think it is becoming harder,” Schleuss said, “because too many newsrooms are being run by the greedy side of the organization and not by the journalism side of the organization.”

The debate extends beyond newsrooms to statehouses. Two New York Democratic lawmakers recently introduced legislation requiring clear disclaimers when AI is used in published content, potentially setting a precedent in the nation’s publishing capital.

Public attitudes present a paradox for news organizations. Research by Benjamin Toff, director of the Minnesota Journalism Center, reveals that while most Americans believe it’s crucial for newsrooms to disclose AI use, such transparency actually decreases trust in the content.

More concerning still, nearly a third of Americans oppose any AI use in journalism—a position that conflicts with industry direction. In Cleveland, Plain Dealer editor Chris Quinn recently criticized a job candidate who declined an offer over ethical concerns about AI, defending his newspaper’s practice of having reporters collect information that AI then synthesizes into articles.

The disclosure question becomes increasingly complex as AI integration deepens. “There are just so many uses of AI in journalism, from the very beginning of the reporting process to when you hit publish, that broadly declaring when AI is used in the newsgathering process actually seems like a disservice to the reader in some cases,” Mahadevan explained.

As AI capabilities advance exponentially, the fundamental nature of journalism appears poised for transformation. “Speaking realistically, the newsroom of the future is going to look completely different than it does today,” Mahadevan said. “Which means people will lose jobs. There will be new jobs.”

This reality underscores why the ProPublica dispute represents more than a single labor disagreement—it signals the beginning of a profound reckoning with technology that will reshape one of democracy’s essential institutions.
