In a troubling trend for the legal system, artificial intelligence tools are increasingly contaminating court filings with “hallucinations”: confidently stated but entirely fictional citations, quotations, and legal authorities. This growing problem has now reached False Claims Act litigation, with multiple recent cases highlighting the dangers of unverified AI output in legal proceedings.

As of September 19, 2025, researchers have documented 244 court opinions noting fabricated material attributed to AI use. A separate review identified 22 cases in a single one-month window between June and August 2025 in which courts found fictitious cases cited in legal documents.

False Claims Act (FCA) litigation, in which the government or private whistleblowers suing on its behalf (known as relators) pursue penalties against those who have allegedly defrauded government programs, has not been immune to this problem. Two recent high-profile cases demonstrate the specific risks within this specialized legal field.

In Smith v. Athena Construction Group, Inc., an attorney was forced to withdraw from representing a client after submitting a brief containing non-existent case law and fake quotations. The attorney had reportedly used AI tools including Grammarly and Lexis+. A co-counsel who filed the brief on the attorney’s behalf admitted to not checking citations, stating there was “no reason to know or suspect” the references were inaccurate.

Similarly, in Khourey v. Intermountain Health Care Inc., the defendants challenged the relators’ expert witness for submitting reports containing fabricated citations. During a deposition, the expert admitted to using ChatGPT for research assistance. The relators ultimately had to withdraw the challenged expert testimony and report.

These cases highlight the dual vulnerability of FCA litigation to AI hallucinations – both in attorney-prepared filings and expert witness materials. The sophisticated and technical nature of FCA cases, which often involve complex regulatory frameworks and industry-specific knowledge, may make them particularly susceptible to undetected AI errors.

Legal professionals are now being warned to implement strict verification processes. Attorneys must independently confirm every citation and legal assertion in a filing before submission, regardless of whether they personally used AI or are reviewing a colleague’s work. Failure to verify sources could violate Federal Rule of Civil Procedure 11 or the professional conduct guidance in the American Bar Association’s Formal Opinion 512, issued in July 2024.
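A first-pass screen of this kind can even be partly automated before the human review that Rule 11 requires. The sketch below is a minimal illustration rather than a compliance tool: it assumes CourtListener’s public search endpoint and its JSON “count” field (both should be confirmed against the current API documentation), and it treats a zero-hit lookup only as a flag for manual verification, never as proof that a citation is real or fabricated.

```python
# Minimal sketch: flag case citations that return no results from a public
# case-law search API. ASSUMPTION: CourtListener's REST search endpoint and
# its "count" response field; confirm both against current API docs and
# rate limits before relying on this. A zero-hit lookup is only a prompt
# for manual review, not evidence either way.
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_found(citation: str) -> bool:
    """Return True if the search API reports at least one matching opinion."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": f'"{citation}"', "type": "o"},  # "o" = case-law opinions
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

if __name__ == "__main__":
    # Hypothetical citations pulled from a draft filing.
    for cite in ["410 U.S. 113", "123 F.4th 456"]:
        flag = "found" if citation_found(cite) else "NOT FOUND -- verify by hand"
        print(f"{cite}: {flag}")
```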

Courts are responding to the challenge as well. Federal judges are increasingly adopting standing orders and local rules specifically addressing AI usage in their courtrooms. FCA litigators are advised to check for such requirements in their jurisdictions before submitting any materials.

For expert witnesses, who play a crucial role in often technically complex FCA cases, the stakes are equally high. Legal teams are now considering adding clauses to expert engagement letters that require disclosure of any AI use. Beyond the hallucination risk, an expert’s AI use raises fundamental questions about the reliability and credibility of the resulting evidence.

If experts do utilize AI tools, they must be prepared to explain the technology’s methodology, ensure results can be consistently reproduced, and independently verify that no false sources were incorporated. The core principles of evidence – transparency and verification – remain paramount despite technological advancements.

While AI technology continues to evolve rapidly and offers potential benefits for legal research and document preparation, these cases serve as a stark warning about its current limitations. For those involved in FCA litigation, where allegations of fraud are central to proceedings, the irony of introducing inadvertent “false claims” through AI hallucinations is particularly concerning.

As one legal expert noted, “AI is not going anywhere anytime soon. It has the potential to be a useful tool in the FCA litigation space, but litigants must be on the lookout for these potential pitfalls to avoid inadvertently making legal ‘false claims’ of their own.”
