Use of ChatGPT in Federal Litigation Holds Lessons for Lawyers and Non-Lawyers Everywhere

Seyfarth Shaw LLP

You may have seen recent press reports about lawyers who submitted papers to the federal district court for the Southern District of New York containing citations to cases and decisions that, as it turned out, were wholly made up; they did not exist.  The lawyers had used the generative artificial intelligence (AI) program ChatGPT to perform their legal research for the court submission, not realizing that ChatGPT had fabricated the citations and decisions.  The case should serve as a cautionary tale for anyone seeking to use AI in connection with legal research, legal questions, or other legal issues, even outside the litigation context.

In Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y.), the plaintiff brought tort claims against an airline for injuries allegedly sustained when one of its employees hit him with a metal serving cart.  The airline moved to dismiss the case. The plaintiff's lawyer filed an opposition to that motion that cited several purported court decisions in support of its arguments. On reply, the airline asserted that a number of the decisions cited by the plaintiff's attorney could not be found and appeared not to exist, while two others were cited incorrectly and, more importantly, did not say what plaintiff's counsel claimed. The Court directed plaintiff's counsel to submit an affidavit attaching the problematic decisions identified by the airline.

Plaintiff's lawyer filed the affidavit as directed, stating that he could not locate one of the decisions but claiming to attach the others, with the caveat that certain of the decisions "may not be inclusive of the entire opinions but only what is made available by online database [sic]." (Id. at Dkt. No. 29.) Many of the decisions annexed to the affidavit, however, were not in the format of decisions published by courts on their dockets or by legal research databases such as Westlaw and LexisNexis. (Id.)

In response, the Court stated that "[s]ix of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations" (Id. at Dkt. No. 31), using a non-existent decision purportedly from the Eleventh Circuit Court of Appeals as an illustrative example.  The Court stated that it had contacted the Clerk of the Eleventh Circuit and was told that "there has been no such case before the Eleventh Circuit" and that the docket number shown in the plaintiff's submission belonged to a different case. (Id.) The Court noted that "five [other] decisions submitted by plaintiff's counsel . . . appear to be fake as well." The Court scheduled a hearing for June 8, 2023, and ordered plaintiff's counsel to show cause why he should not be sanctioned for citing "fake" cases. (Id.)

At that point, plaintiff's counsel revealed what had happened. (Id. at Dkt. No. 32.) The lawyer who had originally submitted the papers citing the non-existent cases filed an affidavit stating that another lawyer at his firm had handled the research, which the first lawyer "had no reason to doubt." The second lawyer, who conducted the research, submitted his own affidavit explaining that he had performed the legal research using ChatGPT, which "provided its legal source and assured the reliability of its content." He explained that he had never used ChatGPT for legal research before and "was unaware of the possibility that its content could be false." He stated that the fault was his, rather than the first lawyer's, and that he "had no intent to deceive this Court or the defendant." The second lawyer annexed screenshots of his chats with ChatGPT, in which he asked whether the cases cited were real. ChatGPT responded "[y]es," that one of the cases "is a real case," and provided the case citation. ChatGPT even reported in the screenshots that the cases could be found on Westlaw and LexisNexis. (Id.)

This incident offers a number of important lessons. Some are age-old: double-check your work and the work of others, and own up to mistakes immediately. Others, however, are specific to AI and apply to lawyers and non-lawyers alike.

This case demonstrates that although ChatGPT and similar programs can produce fluent responses that appear legitimate, the information they provide can be inaccurate or wholly fabricated. Here, the AI software invented non-existent court decisions, even using the correct citation format and stating that the cases could be found in commercial legal research databases. Similar issues can arise in non-litigation contexts as well.  For example, a transactional lawyer drafting a contract, or a trusts and estates lawyer drafting a will, could ask AI software for common, court-approved contract or will language that, in fact, has never been used and has never been upheld by any court. A real estate lawyer could attempt to use AI software to identify the title insurance endorsements available in a particular state, only to receive a list of inapplicable or non-existent endorsements. Non-lawyers hoping to set up a limited liability company or similar business structure without hiring a lawyer could be led astray by AI software as to the steps involved or the forms that must be completed and filed. The list goes on.

The case also underscores the need to take care in how questions to AI software are phrased. Here, one of the questions the lawyer asked was simply "Are the other cases you provided fake?" (Id.) Asking more specific questions could give users the information needed to verify the responses against independent sources, but even the most artful prompt cannot change the fact that the AI's response may be inaccurate. That said, there are many potential benefits to using AI in legal work if it is used correctly and cautiously. Among other things, AI can assist in sifting through voluminous data and drafting portions of legal documents.  But human supervision and review remain critical.

ChatGPT frequently warns users who ask legal questions that they should consult a lawyer, and it does so for good reason. AI software is a powerful and potentially revolutionary tool, but it has not yet reached the point where it can be relied upon for legal questions, whether in litigation, transactional work, or other legal contexts. Anyone who uses AI software, lawyer or non-lawyer, should do so with an understanding of its limitations and should not rely solely on its output.  Any output generated by AI software should be double-checked and verified through independent sources. When used correctly, however, AI has the potential to assist lawyers and non-lawyers alike.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Seyfarth Shaw LLP
