Know What You’re Getting with AI Assistance: Your ChatGPT Isn’t “Hallucinating,” It’s Bullshitting

Holland & Hart - Your Trial Message

For a little over a year, the world has been abuzz with the experience of accessible artificial intelligence, and overflowing with speculation on the many ways it will change how we live and work. For law, a field that specializes in the use of language, there has been an inevitable focus on the ways large language models and machine learning will change the practice of law. Assisting with research, plowing through electronic discovery, generating contracts, managing client communications, drafting motions, assessing potential jurors, outlining opening statements: the techno-optimists among us have suggested that all of those legal tasks and more will be streamlined by AI, making legal assistance more affordable and accessible.

But there is just one little problem: AI can be wrong, and not just a little bit wrong. It creates realistic-sounding legal arguments and conversations, but it often does so by making things up. One lawyer learned that the hard way when he submitted a ChatGPT-written brief, only to have it discovered that the brief contained, in the judge’s words, “bogus judicial decisions, with bogus quotes, and bogus internal citations.”

In the current discussion, the falsehoods that AI generates are referred to as “hallucinations,” as if the machine is concerned with reality but simply and occasionally misperceives it. The term also suggests that the inaccuracies are just a bug that can be fixed as the technology matures. For example, some have suggested that AI might do its own fact-checking once it is better integrated with correct and accurate databases. Unfortunately, a clear understanding of what these machine learning models are actually doing suggests that isn’t the case: AI will not fix its errors because AI is indifferent to the truth.

An interesting new article from three University of Glasgow researchers (Hicks, Humphries & Slater, 2024) carries the provocative title “ChatGPT Is Bullshit,” but it is a thoughtful critique. The authors mean “bullshit” in the academic sense (yes, there is an academic sense of “bullshit”): not necessarily conscious lying, but a lack of concern for truth, an indifference to it. There is even a formal scale measuring a person’s “receptivity to bullshit” that I have written about in the past as a tool for understanding jurors. To the Glasgow authors, this academic label is better than “hallucinations” because “the models are in an important way indifferent to the truth of their outputs.” In this post, I’ll discuss the implications this has for the use of AI in the law.

AI: What Is Really Going On Under the Hood?

The popular applications of AI (ChatGPT, Google Bard, Meta’s LLaMA, and others appearing by the day) all rely on machine learning built around what are called “large language models.” But that is a concept most people barely understand. We tend to think of computer programs as a set of conditional commands for turning inputs into outputs. That is not what current LLM-based AI is doing. The Glasgow article contains the most useful description I’ve seen to date of how it actually works. Hicks, Humphries, and Slater note that an LLM is designed to generate natural, human-sounding communication, and it succeeds at that, producing answers that are often, but not always, correct. It does so not by having any concept or referent for “truth,” but by relying on likelihood associations, what the authors describe as “probability functions for a word to appear in a text given its context and the text that has come before it.” So, in a way, it is a more sophisticated version of the predictive text that appears on your phone when you’re typing an email. Correct answers occur when the algorithm leads the system to that combination of words, and incorrect answers occur the same way.
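
To make the predictive-text analogy concrete, here is a minimal, hypothetical sketch in Python of a toy next-word predictor built from nothing but word-following counts in a tiny made-up corpus. Real large language models use neural networks trained on enormous datasets and much longer contexts, but the sketch illustrates the same underlying idea: the output is whatever continuation is statistically likely, with no representation of whether it is true.

```python
# A toy next-word predictor: it counts which word follows which in a small
# corpus, then always suggests the most frequently seen continuation.
# This is a deliberately crude stand-in for the "probability of the next
# word given the text so far" idea described above; the objective is
# likelihood, not truth.
from collections import Counter, defaultdict

corpus = (
    "the court held that the motion was denied "
    "the court held that the claim was barred "
    "the court found that the motion was untimely"
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after this word."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

# Generate fluent-sounding text one likely word at a time.
word, output = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```

Run it and you get fluent, legal-sounding output (“the court held that the court held that…”) produced purely by frequency, which is exactly the point: fluency comes from likelihood, not from any check against reality.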

The authors note, “LLMs are simply not designed to accurately represent the way the world is, but rather to give the impression that this is what they’re doing.” This lack of concern with the truth, and the errors that result from it, are not a bug but a feature of the design. The answers are produced by literally mindless statistical association. Many have noted that conversations with ChatGPT can feel quite “real,” as if you are interacting with a thinking entity, but that is truthiness rather than truth, to use comedian Stephen Colbert’s word for the feeling that something is true independent of its actual truth-value. The authors conclude: “Calling their mistakes ‘hallucinations’ isn’t harmless; it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived.”

All of us, but particularly those in the legal field, need to seriously grapple with the implication that AI is not at all concerned with truth, nor even able to accommodate a concept of truth.

What Is the Problem With Treating Inaccuracy as a Bug to Be Fixed?

The connotation of calling AI errors “hallucinations” is that the machine really wants to represent reality and tries to, but just makes occasional mistakes in doing so. It is a metaphor, but metaphors can be important in guiding public understanding, including that of the legal field and policymakers. The problem is that AI is not “trying to be right.” When it delivers correct answers, as it often does, it is simply because the language of those right answers carries a high likelihood in context. What is lacking is the ability to guarantee, or even reliably check, correctness.

This suggests that the problem won’t work itself out. It may even get worse. Right now, for example, large language models are harvesting the internet and learning primarily from human-created texts. Humans often, though not always, have a concern for accuracy. But what will happen when the models increasingly learn from other AI-generated text online? As AI moves toward becoming both the dominant consumer and producer of internet content, a self-reinforcing cycle (a “circle of bullshit,” to use the academic term) becomes likely, one that compounds the problem rather than alleviating it.
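
To illustrate the worry, and only as a toy statistical analogy (a hypothetical sketch, not a description of how any real system is trained), consider what happens when each “generation” learns only from data produced by the previous generation’s fit. With no fresh, anchored data coming in, small sampling errors have nothing to correct them and accumulate rather than cancel out.

```python
# Toy analogy for models that learn from their own output: fit a simple
# distribution to data, sample synthetic data from that fit, refit on the
# synthetic data, and repeat. With no new human-generated data to anchor
# each step, sampling error accumulates like a random walk and the fit
# gradually wanders away from the original distribution.
# This illustrates the feedback-loop concern only; it is not a model of
# any actual LLM training pipeline.
import random
import statistics

random.seed(1)

# "Generation 0" learns from data drawn from the true source (mean 0, stdev 1).
data = [random.gauss(0, 1) for _ in range(50)]

for generation in range(12):
    mean = statistics.fmean(data)
    stdev = statistics.stdev(data)
    print(f"generation {generation:2d}: mean={mean:+.2f}  stdev={stdev:.2f}")
    # The next generation sees only samples from the previous generation's fit.
    data = [random.gauss(mean, stdev) for _ in range(50)]
```

Each generation looks plausible on its own; the drift only becomes visible when you compare the later fits to where the data started.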

The Law: What Role, If Any, for AI?

The attitude from many seems to be, “You can use AI, but you’ve got to be careful to fact-check it.” Some will even naively ask ChatGPT to fact-check itself, or to “return only true answers.” One problem, as some of the lawyers testing AI have learned, is that AI will not only lie, it will also lie about its lying. So it remains to be seen whether practical and meaningful ways of checking AI output exist now or will emerge.

But, to anthropomorphize for a bit, think of AI as your young associate: You know you can tolerate a few mistakes early on. But would you tolerate an associate who not only made many errors, but who also had no desire to be correct? Would you keep an associate who would not just make up facts, but also make up convincing fact-checks on those facts? I suspect you wouldn’t.

That suggests that, for now, AI is best considered a tool that is good for creating human-sounding communication, but not particularly good at upholding the law’s concern for accuracy and truth. The academic progenitor of the “bullshit” classification, Harry Frankfurt (2002), warns, “indifference to the truth is extremely dangerous… by the mindlessly frivolous attitude that accepts the proliferation of bullshit as innocuous, an indispensable human treasure is squandered.” The Law, with its focus on accuracy, is part of that indispensable human treasure.

The Glasgow authors’ article is worth a read for anyone seeking to understand AI better, and it offers a more foundational critique than the “useful, but a bit buggy” attitude many casually take toward this issue. I suspect that in many fields, and in society at large, we are at a crossroads, and AI could play a strong role in plunging us into a true “post-truth” age, with profound implications. Current and future plans to merge AI with the search engines most people use to research everything could quickly make things much worse. Lawyers in particular should aim for a clearer understanding of what AI is actually doing, and apply substantial limits on how it is used in the field of law. AI may end up having useful applications, particularly where we can use its power to mimic human dialogue. But where facts matter most, calling bullshit may simply be more accurate.

Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 38.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Holland & Hart - Your Trial Message | Attorney Advertising
