Conservative activist Robby Starbuck has filed a $15 million defamation lawsuit against Google, alleging the company’s artificial intelligence tools falsely linked him to sexual assault allegations and white nationalist Richard Spencer. The lawsuit, filed in Delaware Superior Court on October 22, 2025, represents Starbuck’s second legal action against a major technology company over AI-generated misinformation.
The Wall Street Journal broke the news, reporting that Starbuck claims Google’s AI search tools produced defamatory content about him. Google spokesperson José Castañeda responded the same day through the company’s official social media channels, stating the issues “mostly deal with claims related to hallucinations in Bard that we addressed in 2023.”
“We know LLMs aren’t perfect, and hallucinations are a known issue, which we disclose and work hard to minimize,” Castañeda wrote, acknowledging the technical challenges inherent in large language models while defending Google’s approach to addressing them.
The complaint follows a similar pattern established earlier this year. In April 2025, Starbuck sued Meta, claiming its AI falsely insisted he participated in the January 6th attack on the Capitol and had been arrested for a misdemeanor. That case was resolved when Meta settled by hiring Starbuck as an advisor to combat “ideological and political bias” in its chatbot. The exact terms of that settlement remain undisclosed.
Google’s defense strategy centers on the technical limitations of AI systems. “But it’s also true that if you’re creative enough, you can prompt a chatbot to say something misleading,” Castañeda stated in the company’s response. Google referenced an independent study showing it has “the least biased LLM” among competitors, though the specific study was not identified.
The legal landscape for AI defamation remains largely uncharted territory. According to The Wall Street Journal, no court in the United States has awarded damages in a defamation suit involving an AI chatbot. In a similar case filed in 2023, conservative radio host Mark Walters sued OpenAI, claiming ChatGPT defamed him by falsely linking him to fraud and embezzlement accusations. The court ruled in favor of OpenAI, determining Walters had failed to prove “actual malice.”
Starbuck has built a public profile through online campaigns targeting corporate diversity initiatives. His social media presence focuses on pressuring companies to modify or eliminate diversity, equity, and inclusion programs. The activist’s legal strategy appears aimed at securing influence within technology companies rather than solely pursuing financial compensation.
Google attempted to resolve the matter before litigation. “We did try to work with the complainant’s lawyers to address their concerns,” Castañeda noted in the statement. The company indicated it would “review the complaint” once formally received.
The timing coincides with growing scrutiny of how AI systems handle personal information and generate content. Just last month, Penske Media Corporation filed a 101-page federal antitrust lawsuit against Google, alleging the search giant systematically coerces publishers into providing content for AI systems without compensation. That complaint examines the same suite of products implicated in Starbuck’s defamation claims.
Technical research on AI hallucinations published by OpenAI researchers on September 4, 2025, reveals fundamental statistical causes behind false but convincing AI-generated information. The research demonstrates that language models hallucinate because they function like students taking exams—rewarded for guessing when uncertain rather than admitting ignorance.
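To make the exam analogy concrete, the toy sketch below (a hypothetical illustration, not code or data from the OpenAI paper) compares the expected score of guessing versus answering “I don’t know” under accuracy-only grading, the incentive structure the researchers identify as a root cause of hallucination.

```python
# Toy illustration (not from the cited OpenAI paper) of why accuracy-only
# grading rewards confident guessing over admitting uncertainty.

def expected_score(p_correct: float, abstain: bool, wrong_penalty: float = 0.0) -> float:
    """Expected score for a single exam-style question.

    p_correct:     model's probability of guessing the right answer
    abstain:       if True, the model answers "I don't know" and scores 0
    wrong_penalty: points subtracted for a wrong answer (0 under accuracy-only grading)
    """
    if abstain:
        return 0.0
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

for p in (0.1, 0.3, 0.5):
    guess = expected_score(p, abstain=False)                       # accuracy-only grading
    guess_pen = expected_score(p, abstain=False, wrong_penalty=1.0)  # wrong answers penalized
    print(f"p={p:.1f}  guess={guess:+.2f}  abstain=+0.00  guess_with_penalty={guess_pen:+.2f}")

# Under accuracy-only grading, a guess beats abstaining even at p=0.1,
# so a system trained on such signals learns to produce confident answers
# rather than admit ignorance; only penalizing wrong answers makes
# abstention competitive.
```

The sketch is deliberately simplistic, but it captures the statistical point: when evaluation never rewards “I don’t know,” generating a plausible-sounding falsehood is the score-maximizing behavior.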
The complaint’s focus on Bard carries particular significance: Google’s response attributes the claims largely to Bard hallucinations it says were addressed in 2023, suggesting the alleged defamatory content originated from an earlier iteration of Google’s AI systems rather than current implementations.
Based on the precedent set with Meta, Starbuck may prioritize securing an advisory position at Google over monetary compensation. That settlement granted him influence over Meta’s AI development processes, particularly regarding alleged ideological and political bias.
Multiple technology companies face mounting legal pressure over AI systems. Courts are increasingly grappling with accountability when AI produces false information, as evidenced by recent cases involving AI-generated fake legal citations that resulted in sanctions against attorneys.
The Delaware Superior Court will need to determine whether existing defamation frameworks adequately address AI-generated content. Courts in other jurisdictions have found plaintiffs failed to meet the actual malice standard when suing AI companies, but each case presents unique factual circumstances.
The outcome of this case could significantly influence how technology companies approach similar disputes in the future. If Starbuck secures an advisory position at Google similar to his Meta arrangement, it could establish a precedent for activists leveraging defamation threats to gain influence over AI development. Alternatively, if Google successfully defends the suit, it might discourage future plaintiffs from pursuing AI defamation claims.
For the marketing industry, these developments raise important questions about brand safety as AI-generated content increasingly appears in search results and advertising contexts. The case highlights the complex challenges at the intersection of artificial intelligence, legal liability, and the protection of individual reputation in the digital age.