Former OpenAI Researcher Quits, Citing Concerns Over AI’s Economic Impact Research

A former researcher at OpenAI has resigned, alleging the company is becoming increasingly reluctant to publish research showing artificial intelligence’s potential negative economic consequences. The departure highlights growing tensions between scientific integrity and commercial interests in the rapidly evolving AI industry.

Tom Cunningham, who worked on OpenAI’s economic research team, shared an internal parting message claiming the team was shifting away from conducting objective research and instead functioning more like the company’s “propaganda arm.” According to reporting by Wired, Cunningham is one of at least two employees from the economic research team who have recently left the organization over these concerns.

Four sources close to the situation told Wired that OpenAI has grown “guarded” about releasing studies that present inconvenient truths about AI’s economic impact. This comes at a sensitive time for the company, which has reportedly been exploring an IPO that could value it at an astonishing $1 trillion.

OpenAI has disputed these characterizations, maintaining that it has expanded the economic research team’s scope rather than restricted it. Still, this is the latest in a series of departures in which former employees have raised concerns about the company’s direction and priorities.

The situation underscores the dramatic transformation OpenAI has undergone since its founding. What began as an open-source, non-profit organization has evolved into a more opaque, for-profit enterprise with close ties to major technology companies and significant commercial interests at stake.

The controversy extends beyond economic concerns. Steven Adler, a former safety researcher who left OpenAI last year, has repeatedly criticized the company for what he describes as a risky approach to AI development. He has highlighted disturbing reports about ChatGPT apparently driving some users into mental crises and “delusional spirals.” Similarly, Miles Brundage, OpenAI’s former head of policy research, complained after his departure that it had become difficult to publish research on important topics.

These developments come amid broader concerns about AI’s societal impacts. The technology is already driving unprecedented increases in electricity demand, contributing to higher energy prices for consumers and potentially increased pollution as power providers struggle to meet demand. Some regions have reported bringing additional fossil fuel power plants online or deploying portable generators to meet the surging energy requirements of AI data centers.

The economic implications could be equally significant. While AI promises extraordinary benefits and productivity gains, there are legitimate concerns about workforce displacement and economic disruption. Critics argue that the financial benefits may disproportionately flow to tech companies and their investors rather than being distributed across society.

OpenAI’s apparent reluctance to publish research highlighting these downsides raises questions about transparency in the industry. As AI companies race to develop increasingly sophisticated systems, there is growing concern that commercial interests may override the need for honest assessment of potential harms.

The controversy also highlights the political dimensions of AI regulation. OpenAI CEO Sam Altman has reportedly cultivated relationships across the political spectrum, including with former President Donald Trump, prompting speculation about how future administrations might approach AI oversight.

As artificial intelligence continues its rapid development, these tensions between commercial interests, scientific integrity, and societal impacts are likely to intensify. The resignation of researchers concerned about the suppression of negative findings suggests that beneath the optimistic public narratives about AI’s future, significant questions remain about who will benefit from this technology revolution—and who might be left behind.

