
OpenAI’s Altman Questions Social Media Authenticity Amid Bot Proliferation

OpenAI CEO Sam Altman expressed growing concern about the authenticity of social media content on Monday, noting that the line between human and AI-generated posts has become increasingly blurred. “I have had the strangest experience reading this: I assume it’s all fake/bots, even though in this case I know codex growth is really strong and the trend here is real,” Altman posted on X.

His comments came after browsing the r/Claudecode subreddit, where numerous users have been sharing their experiences switching from Anthropic’s Claude Code to OpenAI’s recently launched Codex programming service. The subreddit has become so flooded with migration testimonials that one user sarcastically asked: “Is it possible to switch to codex without posting a topic on Reddit?”

Altman, who remains a significant Reddit shareholder following the platform’s IPO last year, reflected on several factors contributing to the authenticity crisis. He noted that “real people have picked up quirks of LLM-speak” and “the Extremely Online crowd drifts together in very correlated ways,” creating an environment where human communication increasingly resembles AI-generated content, an ironic reversal given that LLMs were designed to mimic human communication in the first place.

The irony hasn’t been lost on observers that OpenAI’s models were trained on Reddit content while Altman served as a board member through 2022. This feedback loop—where AI learns from humans who then adopt AI speech patterns—has accelerated the convergence between human and machine communication styles.

This phenomenon extends beyond linguistic patterns. Altman highlighted how social media incentive structures—particularly those tied to monetization and engagement metrics—have further distorted online discourse. He also acknowledged the impact of “astroturfing,” suggesting that OpenAI itself has been targeted by coordinated campaigns possibly orchestrated by competitors.

The timing of Altman’s observations is particularly notable following OpenAI’s troubled GPT-5 launch. Instead of the expected wave of enthusiasm, many users took to Reddit and X to voice frustrations about the model’s performance issues, from its altered personality to inefficient credit consumption. Altman subsequently conducted a Reddit AMA acknowledging the rollout problems and promising improvements, but user sentiment has remained largely negative.

“The net effect is somehow AI twitter/AI Reddit feels very fake in a way it really didn’t a year or two ago,” Altman concluded in his post.

The proliferation of AI-generated content across the internet has reached concerning proportions. Data security company Imperva reported that over half of all internet traffic in 2024 was non-human, primarily due to the widespread use of large language models. On X specifically, the platform’s own AI assistant Grok has acknowledged that “hundreds of millions of bots” likely populate the service.

This digital authenticity crisis has spread beyond social media into education, journalism, and even legal proceedings. Schools struggle with AI-generated assignments, news organizations face scrutiny over AI content, and courts have encountered AI-fabricated legal submissions.

Some industry observers have speculated that Altman’s commentary might be laying groundwork for OpenAI’s rumored social media platform. Reports from April indicated the company was in early stages of developing a service to compete with established platforms like X and Facebook.

However, creating a bot-free social experience presents significant challenges. Even if OpenAI attempted to build such a platform, the distinction between human and AI communication continues to blur. A University of Amsterdam study demonstrated that even in a social network composed entirely of AI bots, participants quickly formed the same problematic behaviors seen in human networks—creating cliques, echo chambers, and exhibiting tribal behavior.

As AI capabilities advance, distinguishing between authentic human expression and synthetic content may become increasingly difficult, raising fundamental questions about the future of online communication and digital trust.


