“The irony.”
So wrote federal district judge Laura M. Provinzino when she rejected as unreliable an artificial intelligence expert’s report that was found to contain three non-existent, AI-generated citations. The “irony” here was supplied by the fact that the expert’s expertise lies in AI’s capacity to mislead, and that the case itself involved a First Amendment challenge to a Minnesota law forbidding the dissemination of so-called “deepfakes” with the intent to injure a political candidate or influence the outcome of an election.
So, yes … the irony.
Importantly, however, the court’s opinion in Kohls v. Ellison, No. 24-cv-3754 (D. Minn., Jan. 10, 2025), is instructive for litigators. It is a reminder that everything filed with a court – not just legal briefs and other advocacy filings – must be vetted for errors possibly caused by the use of artificial intelligence.
The substance of the expert’s opinion in the Kohls case should also interest litigators because it aligns with their near-universal belief that, in the eyes of jurors, video evidence is more trustworthy and compelling than textual information. That belief is what drives laws prohibiting deepfakes: the conviction that AI-generated, faked video images on a screen are so compelling that constitutional free speech protections must bend to allow restrictions on their dissemination.
According to the state’s experts in Kohls, research suggests that audiovisual information is more likely to be trusted than verbal messages alone. And when the information is transmitted via a deepfaked video, the deception is potentially greater than verbal deception because of the primacy of visual communication for human cognition.
In fact, some litigators believe that today’s video-besotted jurors not only prefer evidence they can see but also expect lawyers to behave the way they’ve seen actors perform on television.
According to pleadings filed in the case, the citation errors were “hallucinations” committed by ChatGPT-4o, which the expert used early in the drafting stages of his report. He admitted in a declaration filed with the court that he failed to go back and check all of the citations, as he would have done when writing an academic article. As for the substance of the report, the expert stood by it, stating that his views were unchanged and accurately expressed notwithstanding the erroneous citations to academic literature. In fact, he noted, one of the incorrect citations should have pointed to a report that he personally authored.
Errors “Shattered” Expert’s Credibility
The judge found this explanation plausible and the errors innocently made, but not excusable. That lapse “shatters his credibility with this Court,” the court wrote. The court expressed disappointment that the expert demonstrated less care with a legal filing than he gave to his academic papers. In fact, the court added, “the Court would expect greater diligence from attorneys, let alone an expert in AI misinformation at one of the country’s most renowned academic institutions.”
Turning to the larger question of artificial intelligence’s place in the justice system, the court didn’t fault the expert for using it, adding this caveat:
But when attorneys and experts abdicate their independent judgment and critical thinking skills in favor of ready-made, AI-generated answers, the quality of our legal profession and the Court’s decisional process suffer. The Court thus adds its voice to a growing chorus of courts around the country declaring the same message: verify AI-generated content in legal submissions!
Under Rule 11 of the Federal Rules of Civil Procedure, lawyers have a “personal, nondelegable responsibility” to validate the truth and legal reasonableness of all papers filed in federal civil litigation. That duty extended to the false citations in the expert’s declaration, even though the errors were unintentionally made and the attorneys who filed the declaration had no intention to mislead.
Judge Provinzino remarked, in closing, that the “inquiry reasonable under the circumstances,” as provided in Rule 11(b), might now require attorneys to ask their witnesses whether they have used AI in drafting their declarations and what they have done to verify any AI-generated content.
The news wasn’t all bad for the state’s lawyers in Kohls. Also on Jan. 10, the court denied the plaintiff’s motion for a preliminary injunction against enforcement of the Minnesota deepfake law.
The Kohls case isn’t the first to identify legal pitfalls with AI-generated information. In Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 466 (S.D.N.Y. 2023), the trial court imposed sanctions on an attorney for including fake, AI-generated legal citations in a filing. In Park v. Kim, 91 F.4th 610, 614–16 (2d Cir. 2024), the U.S. Court of Appeals for the Second Circuit referred an attorney for potential discipline for including fake, AI-generated legal citations in a filing. And in Kruse v. Karlan, 692 S.W.3d 43, 53 (Mo. Ct. App. 2024), a state appellate court dismissed an appeal because the attorney filed a brief with multiple fake, AI-generated legal citations. Courts, it seems, have zero tolerance for AI-generated errors.