The Case Of The Imaginary Yacht: Is ChatGPT The Future Of Legal Research?

Dunlap Bennett & Ludwig PLLC

I recently asked a robot for legal advice. I had heard about OpenAI’s ChatGPT program, famous for its ability to rap in the style of well-known musical artists (hello, copyright lawsuits), write college essays (teachers, GPTZero is your new hero), and even write computer code. Yes, a computer program that writes computer programs. Now that’s meta. (But not Meta®; sorry, Facebook.)

As a lawyer with a dash of programming experience, I was curious to see ChatGPT flex its legal research muscles. So, I asked ChatGPT this question: “Has the Trademark Trial and Appeal Board [TTAB] ever addressed whether large or expensive goods that are under construction meet the requirement of ‘use’ in commerce under Section 1(a) of the Lanham Act?”[1]

In a fraction of a second, ChatGPT—which has memorized over 386 million pages of text and processes this enormous pile of data using 175 billion parameters[2]—produced an intelligent-sounding four-paragraph answer to my obscure question, complete with citations. Amazing! Or is it? Here is a verbatim snippet of ChatGPT’s response:

In the case of In re Sones, 590 F.3d 1282 (Fed. Cir. 2009), the TTAB considered whether the applicant’s use of a mark on a partially completed yacht was sufficient to establish trademark use in commerce.

This case sounds marvelously relevant to my question. But there are a few problems.

While In re Sones is a real case, the citation ChatGPT provided points to a decision of the U.S. Court of Appeals for the Federal Circuit, not the TTAB. More concerning, however, is that there was no yacht, sailboat, sloop, schooner, or any other sailing vessel in the Sones case. On the contrary, the goods at issue in Sones were bracelets emblazoned with the mark “ONE NATION UNDER GOD.”

Undeterred, I used the feedback feature of ChatGPT to “train” the program that its answer was wrong. Next, I tried asking the question again. “Certainly!” ChatGPT replied with scripted enthusiasm, and directed me to the case of In re T.V. Today Network Ltd., 116 USPQ2d 1289, 1291 (TTAB 2015). The problem? This case is not real. It does not exist anywhere except in the imagination of ChatGPT.

Why did ChatGPT lie to me? The answer lies in understanding how ChatGPT works. ChatGPT uses a software innovation called a “transformer.” Transformers were invented at Google in 2017 to improve translation software.[3] Early translation software translated sentences one word at a time. Not surprisingly, this method did not work well when translating between languages, say, English and French, with vastly different grammar and sentence structure. Enter the transformer. Transformers can be “trained” to “read” vast quantities of written material in the target languages and use this knowledge to translate sentences by looking at the context, not just individual words.
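For the curious, the "looking at context" step can be sketched in a few lines of code. The toy example below is not OpenAI's implementation; it is a simplified, from-scratch illustration of the scaled dot-product attention mechanism described in the Vaswani et al. paper cited in note 3, using made-up two-dimensional "word" vectors. Real models use vectors with thousands of dimensions and many stacked attention layers.

```python
import math

def softmax(xs):
    # Turn raw relevance scores into weights that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention: each query word scores every other
    word for relevance, and its output is a relevance-weighted blend of
    the value vectors -- i.e., a context-aware representation."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)
        blended = [sum(w * v[i] for w, v in zip(weights, values))
                   for i in range(len(values[0]))]
        outputs.append(blended)
    return outputs

# Three toy 2-dimensional "word" vectors (arbitrary numbers for illustration).
words = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
contextualized = attention(words, words, words)
```

Each word comes out of `attention` as a weighted mix of all the words around it, which is why a transformer "sees" whole sentences rather than isolated tokens.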

When you ask ChatGPT a question, the software encodes your question as a series of contextual clues. The software then uses these clues to scour its enormous library of data for information that seems relevant to your question and attempts to craft the answer you want. The upshot of this contextual approach is that Artificial Intelligence (AI) software can “learn” from the context and “create” new text that answers our questions with a human touch. The downside is that AI can use context to “hallucinate” (yes, that is a technical term[4]) believable data that is simply false. For instance, ChatGPT’s bogus In re T.V. Today citation above looks realistic in part because the reporter series matches the fictional date of the case (2015). ChatGPT probably figured this out—in fractions of a second—after noting that hundreds of human-written citations to 2015 TTAB decisions also cite the second series of the United States Patents Quarterly (USPQ2d).

ChatGPT’s ability to scan a stupendous scope of data at lightning speed is superhuman. And yet, as TV presenter John Oliver observed, AI software such as ChatGPT is sometimes “stupid in ways that we can’t always predict.”[5] So, considering both the awesome potential and the occasional idiocy of ChatGPT, is ChatGPT the future of legal research? The answer, emphatically, is yes.

The legal profession, perhaps more than any other, is ideal for AI-enhanced research. Lawyers have been writing like robots for centuries. Our rigid adherence to wording and citation conventions makes it especially easy for a program like ChatGPT to understand legal writing. Moreover, ChatGPT is a “semi-supervised” AI that has already shown great promise when humans fine-tune it for legal tasks. For example, the latest version of the software that powers ChatGPT passed the bar exam in the 90th percentile, a vast improvement over its predecessor, which placed dismally in the 10th percentile.[6]

Advances in AI technology such as ChatGPT promise to enormously enhance the speed and thoroughness of legal research in the future. Yet like any AI trained on human-written text, ChatGPT has the potential to rehash the best and the worst of humanity. Legal research is a high-stakes proposition, and the consequences of erroneous research can devastate clients. That said, it is highly probable that ChatGPT or its offspring will soon be able to generate mostly correct legal answers, research memos, contracts, and more, in seconds. This unprecedented ability will give lawyers who keep up with the technology an immense advantage.

The use of AI-enhanced legal research will likely become imperative in the future for lawyers to offer their services competitively and ethically. With this great power, however, will come the vital responsibility for legal professionals to understand how to interact effectively with AI and carefully supervise its shortcomings. The lawyer of the future must therefore be wary not only of relying on irrelevant or overruled authorities, but also imaginary yachts.

[1] For those interested, one correct answer to this question is: Yes. In the Board’s precedential decision in The Clorox Company v. Hermilo Tamez Salazar, 108 USPQ2d 1083 (TTAB 2013), the Board observed in dictum that even large or expensive goods must be complete and offered for sale to satisfy the “use in commerce” requirement.

[2] ChatGPT: Everything you need to know about OpenAI's GPT-4 tool, BBC Science Focus (Mar. 16, 2023), https://www.sciencefocus.com/future-technology/gpt-3/.

[3] Ashish Vaswani et al., Attention Is All You Need, Advances in Neural Information Processing Systems 30 (June 12, 2017), https://arxiv.org/abs/1706.03762.

[4] AI Has a Hallucination Problem That’s Proving Tough to Fix, Wired (Mar. 9, 2018), https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix/.

[5] Matthew Leaney, Opinion: John Oliver is wrong to worry about ChatGPT. AI can help us solve complex problems., Yahoo! News (Mar. 17, 2023), https://news.yahoo.com/john-oliver-wrong-worry-chatgpt-090016442.html.

[6] Daniel Martin Katz, GPT-4 Passes the Bar Exam, Illinois Tech (Mar. 15, 2023), https://www.iit.edu/news/gpt-4-passes-bar-exam#:~:text=GPT%2D4%20scored%20a%2075,it%20in%20the%2010th%20percentile.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Dunlap Bennett & Ludwig PLLC | Attorney Advertising
