The growing battle over AI-generated misinformation took a new turn yesterday as conservative activist Robby Starbuck filed a lawsuit against Google, claiming the tech giant’s artificial intelligence systems fabricated damaging and defamatory information about him.

Filed in Delaware state court, the lawsuit alleges Google’s AI models labeled Starbuck as a “child rapist,” “serial sexual abuser” and “shooter” in responses to user queries. Starbuck claims these false statements were delivered to millions of users through Google’s AI products.

Google spokesperson Jose Castaneda responded that most of the claims stemmed from “hallucinations” in Google’s Bard large language model, issues the company worked to address in 2023. “Hallucinations are a well-known issue for all LLMs, which we disclose and work hard to minimise,” Castaneda said. “But as everyone knows, if you’re creative enough, you can prompt a chatbot to say something misleading.”

The lawsuit details several specific incidents in which Google’s AI systems allegedly fabricated information about Starbuck. In December 2023, Starbuck discovered that Google’s Bard had falsely connected him to white nationalist Richard Spencer, citing non-existent sources. More recently, in August, Google’s Gemma AI model reportedly generated false sexual assault allegations against him, again citing fictitious sources.

According to court documents, the AI systems additionally claimed Starbuck committed spousal abuse, attended the January 6 Capitol riots, and appeared in Jeffrey Epstein’s files – all allegations Starbuck vehemently denies.

Starbuck, known for his opposition to diversity, equity and inclusion initiatives, is seeking at least $15 million in damages. “No one — regardless of political beliefs — should ever experience this,” Starbuck said in a statement about the lawsuit. “Now is the time for all of us to demand transparent, unbiased AI that cannot be weaponized to harm people.”

This isn’t Starbuck’s first legal challenge to a tech company over AI-generated content. He brought similar claims against Meta Platforms in April and reached a settlement in August; as part of that resolution, Starbuck became an advisor to Meta on AI issues.

The lawsuit highlights growing concern among public figures and privacy advocates about AI systems generating and spreading misinformation. As these systems become more capable and more widely deployed, their capacity to damage reputations at scale has come into sharper focus.

Starbuck’s complaint specifically argues that the false accusations could lead to increased threats against his life, citing the recent assassination of fellow conservative activist Charlie Kirk.

The case represents one of the first major legal challenges to AI companies over fabricated content about individuals, potentially setting important precedents for how courts will handle liability for AI-generated misinformation. Legal experts have noted that Section 230 of the Communications Decency Act, which typically shields platforms from liability for user-generated content, may not apply as clearly to content generated by the platforms’ own AI systems.

For Google and other AI developers, the lawsuit underscores the complex technical and ethical challenges in preventing AI hallucinations – instances where AI systems confidently generate false information. Despite significant advances in AI technology, completely eliminating such fabrications remains difficult.

Alphabet, Google’s parent company, saw little market reaction to news of the lawsuit. As of late afternoon trading in New York, the stock remained relatively flat, up just 0.06 percent.

The case comes amid broader regulatory scrutiny of AI technologies worldwide, with legislators and policymakers increasingly focused on establishing guardrails for responsible AI development and deployment.

18 Comments

  1. James V. Thompson

    This case raises important questions about the responsibility of tech companies for the outputs of their AI models. While creative prompting can be an issue, the onus is on developers to ensure their systems don’t spread harmful misinformation.

    • Agreed. Robust fact-checking and content moderation should be table stakes for any major tech firm deploying AI systems that interface with the public.

  2. The allegations in this lawsuit are concerning, but not entirely surprising given the known limitations of current AI models. I’m curious to see how Google responds and what steps they take to improve the reliability of their AI systems going forward.

  3. Oliver Z. Jones

    Misinformation and defamation from AI are serious problems that need solutions. While creative prompting can lead to issues, tech firms ultimately hold responsibility for the outputs of their AI models.

    • Absolutely. Robust fact-checking and content moderation are crucial to ensure AI systems don’t cause harm through the spread of false information.

  4. This case highlights the importance of continued research and development to address the challenges of AI-generated content. While the technology holds great promise, incidents like this underscore the need for robust safeguards and transparency measures.

  5. Elizabeth Hernandez

    While ‘hallucinations’ may be a known issue, that doesn’t excuse the real-world damage they can cause. This lawsuit highlights the urgent need for tech firms to address AI-driven misinformation more proactively.

    • Absolutely. Consumers deserve to have confidence that the information they receive from AI is truthful and reliable. More rigorous safeguards are clearly needed.

  6. This lawsuit raises valid concerns about the potential for AI-generated content to cause real harm, especially when it comes to defamatory statements. Responsible development of these technologies is crucial to protect individuals from false and damaging information.

    • Olivia L. Brown

      I agree. Tech companies need to be proactive in addressing these issues and ensuring their AI systems are robustly tested for accuracy and safety before being deployed.

  7. While AI hallucinations are a known issue, the allegations in this case seem quite serious. I’m curious to see how the courts will rule and whether it leads to tighter regulations around the use of AI-generated content, particularly in sensitive areas like reputation and character.

  8. James Martinez

    This lawsuit highlights the challenges AI systems face in generating truthful and reliable information. While ‘hallucinations’ in language models are a known issue, tech companies must take greater responsibility for the outputs their AI produces.

    • William I. Garcia

      Agreed. AI-generated content can have real-world consequences, so developers need robust systems to detect and prevent the spread of misinformation.

  9. William Jackson

    Interesting case highlighting the challenges AI models face in generating truthful, factual content. While AI can be a powerful tool, it seems more oversight is needed to prevent the spread of misinformation. I hope this lawsuit leads to improvements in AI transparency and accountability.

  10. Amelia I. Jackson

    This is a tricky situation that highlights the need for greater transparency and accountability around AI systems. I hope the lawsuit prompts a constructive dialogue on how to balance the benefits of AI with the risks of inaccurate or harmful outputs.

    • Olivia Williams

      Agreed. Striking the right balance between innovation and responsible development will be crucial as AI becomes more ubiquitous. Careful oversight and clear guidelines will be essential.

  11. This lawsuit underscores the need for greater transparency and accountability around AI-generated content. Tech companies must be held responsible for the accuracy and integrity of their AI models.

    • Well said. Developing trustworthy AI systems that avoid fabricating false claims should be a top priority for the industry.
