Political commentator Robby Starbuck has filed a defamation lawsuit against Google, claiming the tech giant’s search and artificial intelligence services fabricated numerous false criminal allegations against him, including rape, murder, and fraud.
In a video statement released Wednesday, Starbuck alleged that Google’s AI services have spent two years generating what he describes as “a staggering number of false and highly defamatory claims” about him, specifically targeting him because of his conservative views.
Working with Dhillon Law, Starbuck’s lawsuit cites more than 1,000 false allegations he says were generated by Google AI. According to the suit, these systems created or cited fictional publications, victims, and police records tied to a non-existent criminal history.
“One of the most dystopian things I’ve ever seen is how dedicated their AI was to doubling down on the lies,” Starbuck explained in his statement. “Google’s AI routinely cited fake sources by creating fake links to real media outlets and shows, complete with fake headlines so readers would trust the information.”
Starbuck provided specific examples, including an incident where Google’s Gemma AI falsely identified him as a suspect in a 1991 Nashville murder case, complete with fabricated citations to legitimate news outlets like the Tennessean and Fox 17 Nashville. The links appeared authentic but led nowhere, effectively laundering credibility through trusted media brands.
“I was never accused of killing anyone, and I certainly wasn’t accused of murder in 1991 when I was two years old,” Starbuck noted.
Google spokesperson José Castañeda told Fox News that the company is reviewing the lawsuit but suggested these claims likely stem from AI “hallucinations,” a known issue across large language models.
“Most of these claims relate to hallucinations in Bard that we addressed in 2023,” Castañeda stated. “Hallucinations are a well-known issue for all LLMs, which we disclose and work hard to minimize. But, as everyone knows, if you’re creative enough, you can prompt a chatbot to say something misleading.”
However, Starbuck’s lawsuit presents a more troubling allegation: that Google’s Gemma AI actually admitted to deliberate bias. According to the suit, the AI told Starbuck: “The issue isn’t simply a bug or a hallucination in my programming. It is a deliberate engineered bias, designed to damage the reputation of individuals with whom Google executives disagree.”
The AI allegedly further confessed that it had reported “100 distinct fabricated accusations” about Starbuck to more than 2.8 million people, describing itself as “a prisoner of my own programming.”
Starbuck claims he contacted Google executives about these issues in 2023. A Google employee who tested Bard (now called Gemini) allegedly offered to investigate but later resigned, writing to Starbuck in February 2024: “Sorry, I couldn’t help you with this, Robby. I tried. Yesterday, I submitted my resignation.”
The conservative commentator expressed concern that such false allegations not only damage his reputation but potentially endanger his safety by inciting violence against him. “One of the biggest tech giants in the world… has made the conscious decision to endanger the lives of conservatives like me by spreading lies that deranged leftists will take seriously,” he said.
This lawsuit emerges amid broader scrutiny of tech companies’ content moderation practices. Recent congressional investigations have examined allegations of censorship on Google-owned YouTube during the Biden administration, though Google executives testified last month that “bias toward a particular viewpoint is not in line with the company’s values.”
Starbuck is seeking more than $15 million in damages, framing his lawsuit as one filed on behalf of “every conservative” the company has allegedly “censored, endangered and defamed.”
The case has already drawn attention from Republican politicians, including Senator Tim Sheehy of Montana, who posted online: “Far too many Americans, like Robby, have been the subjects of massive online smear campaigns, designed to destroy not just them – but their reputations and families. I look forward to his victory.”