In May 2023, Steven Schwartz of Levidow, Levidow & Oberman admitted that he had used a generative AI (GAI) platform that produced six non-existent court decisions, which he then cited while representing a plaintiff in a personal injury case against Avianca Airlines. He has since expressed regret and thrown himself on the mercy of the court.
U.S. District Judge P. Kevin Castel of the Southern District of New York later fined Schwartz and the firm for acting in bad faith. Judge Castel wrote in his opinion that “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance … but existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
In a concurrent development related to the New York case, U.S. District Judge Brantley Starr of the Northern District of Texas issued a Mandatory Certification Regarding Generative Artificial Intelligence to rein in the growing use of AI technology in legal proceedings.
In the certification, which requires lawyers to sign an AI pledge, Judge Starr directed that “All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being.”
These developments raise significant legal and ethical questions about the growing use of GAI platforms in the legal industry, particularly around the duty of competence, the duty of confidentiality, and responsibilities regarding non-lawyer assistance.