Conservative activist Robby Starbuck has filed a lawsuit against Google, alleging the tech giant’s artificial intelligence products fabricated serious criminal allegations against him, including sexual assault, child rape, and attempted murder.
Starbuck, who has gained prominence for his campaigns against corporate diversity, equity, and inclusion initiatives, claims Google’s AI also falsely stated his name appeared in Jeffrey Epstein’s flight logs.
“All 100% fake. All generated by Google’s AI. I have ZERO criminal record or allegations,” Starbuck wrote in a post on X (formerly Twitter) Wednesday morning when announcing the legal action.
According to Starbuck, he first discovered the issue in 2023 while using one of Google’s early AI tools. He immediately raised concerns on social media, tagging both Google and its CEO in his posts. “Imagine a future where Bard is used to decide whether you get a loan, if you’re approved for adoption,” he warned at the time, referencing Google’s AI chatbot that has since been rebranded as Gemini.
The lawsuit contends that despite his public complaints and multiple cease-and-desist letters sent by his legal team over the past two years, Google failed to address the problem. More troublingly, Starbuck alleges the company’s AI admitted to targeting him specifically because of his political views.
“Even worse — Google execs KNEW for 2 YEARS that this was happening because I told them and my lawyers sent cease and desist letters multiple times,” Starbuck stated in his announcement.
The legal filing details how Google’s AI allegedly created fabricated statements from high-profile figures including former President Donald Trump, Vice President JD Vance, and tech entrepreneur Elon Musk that purportedly condemned Starbuck.
Perhaps most concerning is Starbuck’s claim that Google’s AI tools manufactured credibility for these false allegations by generating fake links to legitimate news outlets. The lawsuit states that the AI created fictitious headlines from respected media organizations including Fox News, the Daily Wire, the Daily Beast, CNN, and MSNBC to lend authenticity to the fabricated claims.
“As a rule: AI must never harm humans. It must never defame or manipulate — no matter your politics,” Starbuck emphasized in his statement.
The lawsuit comes at a particularly sensitive time for Google’s parent company, Alphabet. Just last month, the tech giant acknowledged to the House Judiciary Committee that it had yielded to pressure from the Biden administration to censor certain content. In response to these admissions, the company stated it was reinstating YouTube accounts that had received permanent bans and modifying policies to avoid future censorship.
“YouTube values conservative voices on its platform and recognizes that these creators have extensive reach and play an important role in civic discourse,” the company wrote in its letter to the committee.
Starbuck’s case highlights growing concerns about AI hallucinations, instances in which artificial intelligence systems generate false information that appears credible. As AI tools become increasingly integrated into search engines and everyday digital services, the potential for reputational damage from fabricated content presents significant legal and ethical challenges for technology companies.
The activist is now urging congressional Republicans to scrutinize Google’s claims about working toward political neutrality in light of his experience. Starbuck’s lawsuit could set an important precedent for how technology companies are held accountable for defamatory content generated by their AI systems.
When contacted by the Wall Street Journal, a Google spokesperson stated, “We will review the complaint when we receive it.” Google did not immediately respond to a request for comment from the New York Sun.
The case raises broader questions about liability in the AI era and whether tech companies should be responsible for false information their AI systems generate, particularly when it involves serious allegations that could damage an individual’s reputation and livelihood.