September 13th, 2023
1:00 PM - 2:00 PM ET
In an age of digital transformation, the legal industry is increasingly thinking about using AI and Large Language Models (LLMs) like GPT for document review, legal research, and even writing legal briefs. Yet, in our discussions, legal professionals regularly express concern about LLM security. Are we risking a waiver of attorney-client or work-product privileges by sending our data to OpenAI? What if that data includes confidential client information?
If these questions resonate with you, you cannot afford to miss our upcoming webinar: "Are Large Language Models Like GPT Secure? A Look at the Technology and the Law."
We’ll delve into the key issues that every legal professional should consider:
- Can large language models learn from and share the information I send?
- Does a commercial license provide reasonable protection for my communications?
- Will OpenAI or Microsoft review the information I send such that it might waive attorney-client privilege?
- How do large language model providers like Microsoft ensure data security and confidentiality?
- What is the law governing these questions and how will it be applied?
Our experts will unpack these questions and help you better understand how these new LLMs work, how commercial providers offer a “reasonable expectation of privacy” for your communications, and what you should expect from your LLM vendor to protect against waiver of privilege.
Join us as we tackle the elephant in the room: Are LLMs like GPT secure, and are we risking confidentiality and privilege when we send client data to these AI platforms for analysis?
Speakers:
John Tredennick, CEO and Founder of Merlin Search Technologies
Dr. William Webber, Chief Data Scientist of Merlin Search Technologies
Mary Mack, CEO and Chief Legal Technologist at EDRM
Professor William Hamilton, Senior Legal Skills Professor at the University of Florida Levin College of Law