In a high-stakes legal battle that has captivated Silicon Valley, Elon Musk and OpenAI CEO Sam Altman are facing off in an Oakland courtroom over fundamental questions about artificial intelligence’s future and the original mission of the company they once built together.
The trial, which began last week in federal court, centers on Musk’s allegations that Altman betrayed the founding principles of OpenAI by pivoting from a nonprofit structure to a for-profit model. Altman’s team counters that Musk is simply attempting to undermine a competitor to benefit his own AI ventures.
While Judge Yvonne Gonzalez Rogers has explicitly warned attorneys not to get “sidetracked” by broader debates about AI’s dangers, the existential questions about advanced artificial intelligence have nonetheless permeated the proceedings. Witness testimony has repeatedly touched on concerns ranging from workforce disruption to the potential catastrophic risks of superhuman AI.
AI pioneer Stuart Russell, brought by Musk’s legal team as an expert witness at $5,000 per hour, outlined numerous AI-related threats during his testimony. The University of California, Berkeley computer scientist detailed risks including racial and gender discrimination, job displacement, misinformation, and concerning emotional attachments formed by some chatbot users that can lead to psychological harm.
Russell emphasized the winner-take-all nature of the race toward artificial general intelligence (AGI), systems that would surpass human capabilities across most tasks. “Whichever company develops AGI first would have a very big advantage” and an increasingly insurmountable lead over competitors, he told the court.
At the heart of the dispute is the 2015 establishment of OpenAI as a nonprofit startup primarily funded by Musk, the world’s richest person. Both Musk and Altman claim they envisioned OpenAI safely developing advanced AI for humanity’s benefit, not for individual profit or control. Each side now accuses the other of attempting to dominate the organization.
Despite the judge’s early admonishment that “this is not a trial on the safety risks of artificial intelligence” or “whether or not AI has damaged humanity,” Musk managed to express his concerns during testimony. He described artificial general intelligence as technology “as smart as any human,” adding that “we are getting close to that point” with superintelligent systems potentially emerging as soon as next year.
“I was concerned AI would be a double-edged sword,” Musk testified, explaining that he wanted OpenAI to serve as a “counterpoint” to Google, which he characterized as having “all the money, all the computers and all the talent” for AI development with no balancing force.
Musk repeatedly emphasized that he deliberately chose to establish OpenAI as a nonprofit “for the public good” rather than creating a for-profit venture like his other companies. However, Judge Gonzalez Rogers expressed skepticism about this position, noting that Musk, “despite these risks, is creating a company that is in the exact same space,” referring to xAI, the billionaire’s AI company launched in 2023 that has since merged with his social media company X.
For their part, OpenAI’s representatives maintain that their mission remains focused on public benefit. Co-founder and president Greg Brockman, also named as a defendant alongside Altman and OpenAI, testified that he considered the technology they were developing to be “transformative” — bigger than corporate structures or any individual. “It was about humanity as a whole,” he stated.
Brockman further testified that his primary goal was always OpenAI’s mission, while claiming it was Musk who sought unilateral control. He recounted a meeting where Musk initially seemed receptive to Altman serving as CEO but ultimately declared that “people needed to know he was in charge.”
Beyond monetary damages, Musk is seeking Altman’s removal from OpenAI’s board. A victory for Musk could potentially derail OpenAI’s reported plans for an initial public offering.
The nine-person jury, selected from the San Francisco Bay Area, will ultimately determine which Silicon Valley titan’s version of events is more credible in a case that has broader implications for the future governance and direction of AI development.