Artificial intelligence (AI) advocacy in the Federal Court

Smart & Biggar

In April 2023, a curious law professor and lawyer submitted his “Introductory Civil Procedure” exam to ChatGPT, an AI chatbot, to see how it would score. The results were surprising: ChatGPT outperformed over 70% of his second-year law students and even provided the professor with new insights into civil procedure.1 Similar examples demonstrating the impressive abilities of ChatGPT and other AI tools are becoming commonplace.

Despite these advances, the legal profession is unlikely to be replaced by AI. Historical context shows that technological advancements over the past 40 years have not stymied the profession—in fact, the number of lawyers in Canada has increased by 400% during that time.

The challenge for lawyers is not replacement, but rather the appropriate integration of AI into their legal practice. For litigators, AI offers opportunities to streamline laborious and costly court procedures, to the benefit of the client. However, there are risks in entrusting AI to undertake previously manual work. This article explores how AI tools are impacting litigation, focusing on the Federal Court's guidelines for AI use by parties and the Law Society of Ontario’s stance on generative AI.

Party use of AI in Federal Court proceedings

On May 7, 2024,2 the Federal Court issued a “Notice to the Parties and the Profession – the Use of Artificial Intelligence in Court Proceedings”. The notice requires all parties, whether represented or self-represented, to disclose whether AI was used in preparing documents submitted to the Court. Certified tribunal records and expert reports are excepted; however, expert reports should disclose any use of AI in the summary of methodology required under the Code of Conduct for Expert Witnesses.

Parties must include a Declaration indicating AI's role in specific paragraphs of their documents, such as:

“Artificial intelligence (AI) was used to generate content in this document at paragraphs 20-30.”

A document is considered AI-generated if the AI's contribution resembles that of an author, typically where content is produced from prompts or information supplied to the AI tool. No Declaration is needed if AI merely suggests changes to, or critiques, content created by humans, who then implement those suggestions manually.

Counsel appointed after documents are prepared must assess if AI was used to prepare those documents. Indicators of AI authorship include, among other things, unusual tone, complex sentence structures, or vague statements.

The Federal Court assures practitioners that including a Declaration will not give rise to any adverse inference; the Court remains neutral in this regard.

The Federal Court notice also expresses certain concerns about AI, including “hallucinations,” “deepfakes,” potential fabrication of legal authorities, and inherent biases in AI systems.

Among the potentially negative outputs of AI, hallucinations appear to be the most prevalent in litigation. The term “hallucination” in the context of AI refers to the generation of false or fabricated information by AI systems in response to prompts or requests. This phenomenon includes the creation of inaccurate facts, misleading citations or other erroneous content that AI might produce without factual basis.

To date, there have been several notable instances where such “hallucinations” have impacted legal proceedings. For example, in the case of Mata v Avianca3 in the United States, two attorneys included fictitious citations generated by ChatGPT in their court submissions and were consequently reprimanded.

Similarly, counsel in the Canadian case of Zhang v Chen4 were held personally liable for costs incurred in identifying and addressing false citations produced by AI. The court's decision underscored the importance of verifying the authenticity of citations and other factual content before submitting documents to ensure the integrity of legal proceedings.

To mitigate these types of concerns, the Federal Court notice advises:

  • offering traditional services if clients are not familiar with AI or prefer not to use AI
  • exercising caution with AI-generated legal references and analyses in documents
  • verifying that AI-generated documents align with legal standards.

Law Society of Ontario on practitioner use of AI

Litigators and other practitioners must also be vigilant when using AI to manage client documents and internal legal work products.

On April 25, 2024, the Law Society of Ontario published a “White Paper on Generative AI,”5 highlighting key concerns in these areas, particularly surrounding confidentiality and privilege. Litigators must exercise caution with confidential or privileged information when using AI tools, as some providers may use this data for training or storage.

A cited example involves Samsung, where an employee's pasting of sensitive code into ChatGPT raised confidentiality concerns. ChatGPT retains user inputs by default, though users can opt out of this practice. The same danger arises in litigation: a litigator who, for example, submits a confidential factum to ChatGPT to check for errors is effectively disclosing the confidential information contained in that document.

To safeguard client confidentiality, the Law Society of Ontario therefore recommends the following:

  • reviewing AI terms of use and understanding how inputs are managed
  • avoiding input of confidential or privileged information into AI systems lacking adequate security
  • redacting sensitive information and obtaining client consent if confidentiality cannot be ensured.

The White Paper also raises potential concerns around the anonymization of information and how evolving technology might affect the anonymized status of data over time.6 Client consent should always be obtained if any residual risks or concerns remain.

Conclusion

In all likelihood, AI will not replace lawyers. Rather, AI will become a valuable tool in legal practice, allowing litigators to streamline laborious tasks.

However, when employing AI, lawyers must remain vigilant: verifying results, being transparent with the Court and clients about AI use, and ensuring the protection of confidential and privileged information. While AI presents exciting possibilities, it is crucial to navigate its risks and limitations carefully.

References


  1. Joshua J.A. Henderson, Rise of the robots: ChatGPT comes for lawyers, 42 Adv J No 3, 34–35 at para 2.

  2. https://www.fct-cf.gc.ca/Content/assets/pdf/base/FC-Updated-AI-Notice-EN.pdf

  3. Mata v Avianca: https://casetext.com/case/mata-v-avianca-inc-2

  4. Zhang v Chen: https://www.bccourts.ca/jdb-txt/sc/24/02/2024BCSC0285cor1.htm

  5. https://lawsocietyontario.azureedge.net/media/lso/media/about/convocation/2024/convocation-april-2024-futures-committee-report.pdf

  6. Martin-Bariteau, Scassa, Artificial Intelligence and the Law in Canada, Chapter 5 AI and Data Protection Law, 2. Existing and Emerging Privacy Issues, 2.1. Personal Information, 1st Ed.



© Smart & Biggar

