By now, the story of two New York attorneys facing scrutiny for citing nonexistent cases generated by the artificial intelligence ("AI") tool ChatGPT has made national (and international) headlines. Late last month, a federal judge in the Southern District of New York sanctioned the attorneys and their firm $5,000. The court's decision (Roberto Mata v. Avianca, Inc., No. 22-cv-1461-PKC (S.D.N.Y. June 22, 2023) (ECF No. 54)) is a humbling reminder of both an attorney's responsibility to ensure the accuracy of his or her filings and the limits of certain technologies in the legal profession.
The primary focus of the court's decision was the attorneys' conduct, not the use of AI technology in general. As the court explained at the outset: "Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings." Id. at 1. The problem of citing fake cases generated by ChatGPT, the court continued, was compounded because the attorneys "continued to stand by the fake opinions after judicial orders called their existence into question." Id. Indeed, the court found that the attorneys acted in bad faith and violated their Rule 11 obligations by, among other things, continuing to advocate for the fake cases and the legal arguments premised on those cases even after being informed by their adversary and the court that the cases could not be located. Id. at 29.
The court's decision reaffirms the duties attorneys owe to the court, their adversaries, and their clients. Id. at 1-2. It also provides a real-world example of the current limits of AI in legal research. The court described the fake decisions generated by ChatGPT in detail: while they "have some traits that are superficially consistent with actual judicial decisions," the legal analysis in one of them was akin to "gibberish," its procedural history "borders on nonsensical," and the decision "abruptly ends without a conclusion." Id. at 10-11. When one of the attorneys who cited these fake cases questioned the AI tool about their legitimacy, ChatGPT maintained that the cases were all real and could be found in legal research databases like Westlaw and LexisNexis. Id. at 41-43. The AI tool was simply wrong. This unfortunate episode confirms that when it comes to legal research, analysis, and advocacy (all key aspects of an attorney's job), there is still no substitute for human involvement.
This is not to discount the widespread enthusiasm for AI in recent years. In fact, certain AI technology is already routine for many lawyers (think predictive coding in eDiscovery). ChatGPT, however, was launched less than a year ago, yet there has already been talk of it someday making lawyers "obsolete." As demonstrated above, that day has not arrived. Indeed, during the March 2023 Legalweek conference in New York, ChatGPT was reported to suffer from "hallucinations," meaning that "sometimes the technology 'predicts' facts that have no actual basis in reality." Beyond these "hallucinations," some law firms are concerned that using ChatGPT risks exposing confidential client information, and one law firm has banned in-office use of ChatGPT entirely. Given the serious concerns raised in the tool's short existence, the likely path forward for many law firms is to cautiously weigh the benefits that ChatGPT (and other AI tools) can offer against the risks of relying on such tools, and to take steps to guard against those risks.