ChatGPT may be smart enough to pass the bar exam, but lawyers should exercise caution before relying on the artificial intelligence (“AI”) platform to conduct any legal business.
On June 22, 2023, Judge P. Kevin Castel of the Southern District of New York issued a lengthy order sanctioning two attorneys for submitting a brief drafted by ChatGPT. Judge Castel reprimanded the attorneys, explaining that while “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the attorneys “abandoned their responsibilities” by submitting a brief littered with fake judicial opinions, quotes, and citations.
The dispute underlying Judge Castel’s opinion was a personal injury claim. Roberto Mata, the client of the two sanctioned lawyers, sought to hold the airline Avianca liable for an injury he sustained when a metal serving cart struck him during a 2019 flight. Avianca moved to dismiss, arguing that the statute of limitations had expired. Mr. Mata’s lawyers then filed a 10-page brief, the subject of Judge Castel’s order, arguing that the case should be allowed to proceed. When Avianca’s lawyers were unable to locate the cases cited in the brief, they brought the matter to the Court’s attention. As Judge Castel noted in his order, it was at this point that Mr. Mata’s lawyers made a grave error. Rather than withdrawing their brief, they “doubled down” on their lies “and did not begin to dribble out the truth” until much later.
At the sanctions hearing, one of Mr. Mata’s lawyers explained that although he was unable to find some of the generated cases when he searched for them, he operated “under the false perception that this website [i.e., ChatGPT] could not possibly be fabricating cases on its own.” While that assumption was plainly mistaken, the AI platform did in fact weave real cases, and the names of real judges, into some of the fabricated decisions. Judge Castel’s opinion offers a detailed analysis of one such opinion, Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019), which the sanctioned lawyers produced to the Court. The Varghese decision purports to have been issued by three Eleventh Circuit judges. Although, according to Judge Castel’s opinion, the decision “shows stylistic and reasoning flaws that do not generally appear in decisions issued by the United States Court of Appeals” and contains legal analysis that is otherwise “gibberish,” it does reference some real cases. Moreover, when asked whether the case is real, the AI platform itself doubled down, insisting that the case “does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis.”
In response to the lawyers’ misconduct, Judge Castel sanctioned the attorneys under Rule 11, imposing a $5,000 fine on them jointly and severally. To ensure that the penalty was “sufficient but not more than necessary to advance the goals of specific and general deterrence,” Judge Castel declined to mandate an apology: a “compelled apology,” he wrote, “is not a sincere apology.”
As AI technology becomes more pervasive, Judge Castel’s opinion, and the conduct of the two sanctioned lawyers, should caution those within the profession to take heed when relying on any technology, let alone AI. While this scandal may have been among the first, it likely will not be the last. Indeed, one federal district judge in Texas, Judge Brantley Starr, recently issued an order warning lawyers against using any artificial intelligence—including ChatGPT, Harvey.AI or Google Bard—to draft legal briefs. Judge Starr explained, in part: “while attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth).” For this reason, among others, Judge Starr now requires all attorneys and pro se litigants appearing before him to file a certificate attesting either that no portion of their briefing used “generative artificial intelligence” or that any language drafted by such technology was checked for accuracy by a human.
While automated artificial intelligence may one day outsmart humans, the technology is not there yet. And because the technology is, as Judge Starr warns, “unbound by any sense of duty, honor, or justice,” lawyers should take heed before relying on it in any significant way. Quite possibly, the human touch may never be replaceable.