Navigating the Risks and Benefits of Artificial Intelligence in the Legal Industry

Furia Rubel Communications, Inc.

In this episode of On Record PR, Gina Rubel goes on record with Éva Kerecsen, Chief Legal Counsel at NNG LLC, to talk about navigating the rapid evolution of artificial intelligence in the legal industry. Éva is a highly experienced legal professional with a passion for ensuring transparent and compliant operations in the dynamic world of technology. At NNG LLC, a prominent company in the automotive software industry, Éva has been managing and coordinating the entire legal activity of the organization for almost 10 years. She oversees approximately 800 legal issues per year, spanning copyright, e-commerce, IT law, employment law, commercial law, and data protection. Additionally, she has played a crucial role in the expansion of NNG’s global presence, coordinating legal activities and advising NNG subsidiaries worldwide.

Prior to her current role, she held the position of Legal Advisor at NNG LLC, where she provided invaluable guidance on various legal matters, including employment law, IP law, and marketing and communication-related issues. During this time, Éva was actively involved in contract negotiation, lease agreements, and the development of the company’s trademark portfolio.

Drawing from an impressive educational background, Éva holds a Bachelor of Law (LLB) from Pázmány Péter University, Faculty of Law, and a Master’s degree in Information Technology Law (LLM) from the University of Pécs. She has also expanded her knowledge through specialized courses, including an Introduction to US Law program at The George Washington University Law School and Law and Psychology at Orac Academy, as well as ongoing studies in Data Protection Law at Eötvös Loránd University, Faculty of Law.

With a keen focus on bridging the gap between law and technology, Éva is a sought-after expert in the legal field, leveraging her expertise to address the intersection of legal issues and technological advancements.

Gina Rubel: Hello, Éva, and thank you for being on our show today. Let’s set the stage. We met in the beautiful city of Budapest. Let’s talk about where we met and why I’m so excited to have you here today to discuss this wonderful topic.

Éva Kerecsen: Yes, that’s right. We met in Budapest at a World Services Group (WSG) conference organized for lawyers. WSG is a professional network for lawyers, and I was talking about artificial intelligence at that conference. I am a Budapest-based lawyer, so I can bring a bit of the European perspective to this very famous podcast and I am glad to share my thoughts on it.

Gina Rubel: I’m so happy to have you here. In fact, this is our 125th podcast and you are our special guest because this topic is so very important, especially because of how it affects the global community, and corporations and law firms in particular. With you being in-house counsel, it’s important for our listeners to understand that corporations are well aware of and are paying mindful attention to these topics.

What is your experience and perspective of generative AI in general and the rapid development of ChatGPT in particular?

When I was talking about artificial intelligence at this conference, what I experienced was a very lively, vivid discussion. The whole room became so active, and we had a lively and also lovely discussion about this topic with the lawyers. I was speaking on this topic because I have a very close relationship with artificial intelligence from several angles, which I would split into two main categories. First, I deal with AI in a professional advising capacity: I am not just legal counsel at an automotive software development company called NNG; I also have my own legal practice where I advise clients doing business in the technology field, so I field a lot of questions from these companies.

In my work, there are three categories of concerns I encounter.

  1. Everyone is concerned about whether the use of ChatGPT and other AI tools is safe enough.
  2. Trade secret and confidentiality questions and issues.
  3. Copyright and intellectual property (IP). Those are material questions, for example, how we can use what we create with artificial intelligence tools.

On the other hand, I also see the benefits of these tools from the administrative perspective, because as the head of an in-house team, I regularly evaluate opportunities to adopt legal tech tools.

I see three main areas of benefit.

  1. The ability to review a document more efficiently and ensure compliance with corporate standards.
  2. The filing system. AI can offer a very efficient filing system and can help a lot if we have to do due diligence, filter wisely, and respond to specific questions.
  3. It also helps if I have to prepare a summary for a board meeting, for example, about a whole contractual portfolio, and give a specific answer about the overall liability cap. The tool does this in minutes.

With the growing reliance on generative AI technology like ChatGPT, what are the potential ethical challenges and human rights considerations for lawyers?

There are many ethical considerations regarding the use of AI. Many of them are global in scope; I won’t touch on those here but will rather focus on the ones most relevant to legal work.

First, I would mention AI hallucination, especially in large language models like ChatGPT. That is when a model makes up information or content that doesn’t align with reality or factual accuracy. It stems from the basic principle of these large language models: their primary task is to predict the next token, the next word in a string of words. That is why they can hallucinate.

Gina Rubel: For our listeners, AI hallucination happens when AI predicts what the next word will be. It’s based on algorithms, essentially mathematics. Sometimes, in making these predictions, it makes up information, much the way we would hallucinate as human beings. Éva, have you been following the ChatGPT case out of New York?

Éva Kerecsen: Yes, I wanted to mention that case, in which a New York lawyer filed an injury claim drafted with ChatGPT, and the filing cited precedents that did not exist.

Gina Rubel: The cases were made up. I am a former litigator and I run a public relations and crisis communications company in the US, but we have clients globally. Why I find this so fascinating is that we did an analysis, using one of our AI tools, of the volume of negative press coverage that case received. As of two days ago, a month after the case happened, it had generated over 250 headlines and 700 million views. People are seeing this negative press on the internet because the lawyer used ChatGPT in a way it was never meant to be used.

Could you provide us with an overview of the EU’s recent adoption of the AI Act and tell us why that’s so significant?

In Europe, the regulatory bodies are taking steps to change the approach and protect human values and European citizens. They have switched from a soft-law approach to these regulatory steps. Recently, the European Parliament adopted the EU Artificial Intelligence Act, which will apply not only to companies based in Europe but, similar to the GDPR (General Data Protection Regulation), also to companies that are, for example, in the US but provide AI services to European citizens. The Act uses a risk-based approach and categorizes AI tools into four different categories.

In the first category are AI tools that pose unacceptable risks; these are going to be totally banned in Europe. For example, the social credit system that has been used in China since 2014 will be absolutely banned in Europe. Under social scoring, you as a citizen may receive positive or negative scores. If you comment something positive on social media, you get positive points. If you donate, you get positive points. However, if you do not pay back your bank loan, you get negative points. These scores then decide whether you are entitled to travel, for example, or which university your child can attend. In China, basic human rights depend on social scoring, and that is absolutely going to be banned in Europe to protect human rights and human values. I absolutely agree with this approach. The regulatory body has to protect citizens.

Gina Rubel: As somebody who reads up on generative AI and AI tools every day, I did not know about that. That is mind-blowing that you can socially score or give someone a social credit in China. I’m glad the EU has decided to ban that. What’s the next level?

Éva Kerecsen: On the second level are AI tools that are considered high risk, and there is a sector-specific approach. An annex of the AI Act lists the sectors and areas where, if an AI tool is used, the user and developer have to comply with specific requirements. For example, the management and operation of critical infrastructure, such as transportation or healthcare, fall into this category. If there is a driver-assistance system, for example, that is based on an AI algorithm, then the developer and the car manufacturer have to comply with the new regulation and with all the measures prescribed for high-risk AI systems. Here both the developer and the users have to conduct an ex-ante impact assessment and register the AI system.

Gina Rubel: Since you’re in automotive technology, do you think you fall into that category?

Éva Kerecsen: Yes. Some products may fall into this category. That is going to mean a huge amount of administrative and evaluation work.

Gina Rubel: At the WSG conference, you were talking about automated vehicles. That would make sense how an automated vehicle would fall into that highly risky category and have more regulations. What’s the third category?

Éva Kerecsen: Now we are getting back to ChatGPT, because chatbots and all systems that interact with humans fall into this category. For the third category, the main rule is transparency. Those who develop and use tools like ChatGPT have to disclose that content was generated by an AI tool and also have to publish summaries of the copyrighted data that were used for training.

Gina Rubel: How do they get that data? That’s a whole other conversation, isn’t it?

Éva Kerecsen: Yes, it is not easy. That requirement applies rather to those who make and develop the algorithm.

Gina Rubel: That makes sense. If we have unacceptable and high risk, what’s the title of this next category?

Éva Kerecsen: That is the limited-risk category. The fourth is the low- and minimal-risk category, for which there are no obligations.

When you have other countries that have banned the use of ChatGPT, such as Italy, how does that play into the EU rules?

ChatGPT was temporarily banned in Italy, but after several discussions between OpenAI and the Italian authorities, the ban was lifted. It was imposed mainly because of privacy concerns, and those are valid questions. On top of the AI Act, companies in Europe also have to comply with the GDPR, and that is going to be quite a complex situation. For example, if I use ChatGPT and put some of my client’s data into it, I not only have to comply with the transparency requirement but also need a legal basis for handling the data. I have to engage the provider as a data processor and put the proper documentation in place before entering the data. On top of that, I also have to comply with the recommendations of the Bar Association, so that’s also important.

How has the adoption of AI, such as ChatGPT technology, impacted the daily work and productivity of lawyers?

It is a great tool. You can research, use it to summarize something, or have it help write an email or a memo on a certain topic. In our everyday lives, it can help a lot. On the other hand, I also see that the tool is available to clients. It is quite interesting that clients are now doing the research work themselves and come prepared. The knowledge is not just for lawyers anymore, and clients are better prepared for a discussion. I also see that they sometimes reach a very wrong conclusion. I am not quite sure of the reason; it is either ChatGPT’s hallucination or a lack of legal structure in their thinking. You may have the blocks and wood and everything, but if you are missing some structural elements of the house design, the house will collapse.

Gina Rubel: I love that you use the house analogy. In my book, Everyday PR, I talk about the analogy of a house and how it has different elements. You need an engineer, you need an architect. There are different layers of information. I didn’t think about generative AI that way, but you’re right. We’re pulling from information that’s out there already and you’re assuming it’s accurate, which it may not be.

It certainly doesn’t do away with the importance of human evaluation, but one of the things a lot of people like to say is that lawyers’ jobs are going to go away, and that’s so not true because everything is fact-specific.

Éva Kerecsen: Absolutely. Some people may prepare based on ChatGPT and research, but ChatGPT will never win a case for you; it will never negotiate the contract in a way that you would like.

Gina Rubel: It’s a starting point. That’s all it is.

Éva Kerecsen: It is a helping tool, and I think it is a great tool to facilitate our everyday work, but it will never understand human reasons, humanity, and the feelings that play into why a case is being litigated, for example. Emotions are so important.

Gina Rubel: The way I think of it is cooking a meal: you look in your icebox to see what ingredients you have available. If you put those ingredients into Google, you’re going to get a whole bunch of different recipes to look at. If you put them into ChatGPT, it’s going to create a recipe, but it’s not necessarily going to give you all the details you need, and you need to take the time to look through it and ask how this part is going to work with that one. However, it’s much faster. If you go back even further in technology, you don’t even have to go to the cookbooks on your shelf. It saves you time, but you still have to check the result to make sure it’s going to work.

In your general counsel role at NNG, are you asking your panel lawyers how they are using generative AI tools? If they are doing training, what are you asking them?

I trust them, so I know they are using these tools in the right way, but what is important is to give them advice that helps them understand generative AI. That means everyone has to test it: its capabilities, its functionality, and the potential legal implications. Even when you know the answer, it is worth trying the tool to see the answer that comes from ChatGPT. I see a lot of misunderstanding in that area, and the output is not correct every time. That connects to my second point: we always have to acknowledge the limitations of ChatGPT and do the validation. We should always check the output and verify that the information is correct.

Gina Rubel: ChatGPT was never made to be a case research tool.

Éva Kerecsen: It is a general tool, and it was absolutely not built for legal professionals.

What are some specific ways that lawyers are leveraging large language models in the legal practice?

They can use it as a tool for rewriting emails, memos, and summaries, and also for some research, but only to a certain extent. For example, it can be a good tool if you are preparing for a conference: it is like having someone to talk to about that specific question, and you may get good ideas or directions you wouldn’t necessarily think of. After consulting ChatGPT, you can have some good inputs for your speech.

Gina Rubel: I agree. I just used it myself. I had to write two proposals for speaking engagements and they were very specifically tailored to the venue, their theme, and their audience. The more specific your prompt, the more specific the outcome. It probably saved me three hours of work. I did not use the initial output. I used some of it, but it gave me the ideas. As a lawyer and a communicator, it’s a lot easier to start with something and edit it than it is to start with a blank page.

Éva Kerecsen: I completely agree with that, and with the approach itself. Don’t use it like the New York lawyer used it, but as a good input for further work. That’s absolutely great for that.

Gina Rubel: We also have to remember that it’s only as good as the data it has, whose training data only goes up to September 2021.

How can lawyers strike a balance between using generative AI tools like ChatGPT and maintaining the critical human element in legal work, such as client relationships and empathy?

That gets back to how important the human factors are in a legal case. In each legal case, I see the main element as what the client wants to achieve. There are human feelings involved; sometimes it’s anger, sometimes they just want to win the case, or in certain cases to take revenge. It is so important for lawyers to understand what is behind a court case or a dispute.

Gina Rubel: And really what the client needs. So, you might decide this case is a high-risk case for us, we need to settle it. There’s a strategy there that a machine is not going to give you.

Éva Kerecsen: Yes. I also see that human skills are going to have more value for lawyers. Improve your speaking skills, your emotional skills, and be more focused on humans and human values and feelings, and you can become a better lawyer and have some distinctive character.

Gina Rubel: It will be interesting to see when a company comes out with a large language model that’s built on emotional intelligence.

Éva Kerecsen: This is something ChatGPT aims at as well. Sometimes it can answer questions in quite an emotional way, but it is not the same. The same is true for creative work. We grant copyright for a specific creative work, and the law values the human interaction, the human work, and the humanity in it. The goosebumps I feel on my arms when I listen to a good Queen song, for example, you will never feel the same from a generative AI song.

How can law firms and corporations address concerns about bias, fairness, and potential misinformation when using generative AI in a legal context?

It is so important to have specific policies and to follow the recommendations of the bar associations, which relate, for example, to attorney-client privilege rules, confidentiality rules, and privacy rules. It was so good to see the examples you gave of these policies and of how a tool like ChatGPT can be used in a company.

Gina Rubel: That’s been a big platform for our company. We are one of the first agencies like ours, a legal marketing agency, to adopt a generative AI use policy internally. We’ve adopted it with our staff and with all of our consultants.

I think there’s a lot of opportunity. Unfortunately, law firms tend to be reluctant to adopt new policies because they all have to have a say, especially the bigger the firm. But it’s no different than when we were doing training around using email in the workplace or using social media. There are policies; companies put policies in place for those things to protect themselves and their employees. We have been advocating very strongly in this space for that reason. Unfortunately, we’re seeing law firms that are banning the use of large language models and other tools rather than figuring out how to adopt them safely. That’s short-sighted.

Éva Kerecsen: For example, Samsung banned it after some of its source code was entered into ChatGPT, and the company concluded that confidentiality and trade secrets were being infringed by employees using ChatGPT in their work. They temporarily banned its use. I don’t think that is the right approach, but every company has to regulate how AI tools can be used in its employees’ work.

How can lawyers stay informed about the latest developments and best practices regarding the use of AI tools in the legal profession?

I think it is so important to be actively engaged in legal tech communities. There are very good platforms on LinkedIn where lawyers can stay informed on the recent development of the legal tech AI tools, the latest development in technology, and how they can leverage it. They can attend conferences and webinars or just listen to podcasts like this one.

Gina Rubel: At our agency, Furia Rubel, we recently launched a generative AI resource center. If there’s ever anything you think should be in there, please feel free to send it to us. It’s going to be updated daily because the field is changing daily. We want to help people find a place where information is readily available, whether it’s a conference someone is promoting or another resource on the topic for lawyers.

If you could tell our listeners any one thing about why you’re excited about these tools, what would it be?

I am excited about these tools because I feel their enormous potential and how they facilitate my everyday life. I am so happy to be able to use them, although I also see the dangers and risks of using them.

Gina Rubel: That’s what we need to be mindful of. I want to just give one more shoutout to World Services Group, who invited me and my colleagues to come out and speak at the WSG European Regional Meeting 2023 because that’s where we got to meet. What a gift that was. For anyone who’s never been to Budapest, what an incredible city you live in. If anybody’s coming out to visit, perhaps they’ll reach out to you on LinkedIn and let you know.

Éva Kerecsen: I would be so glad to host anyone in Budapest.

Gina Rubel: It is not only a beautiful city, but everyone was so friendly and kind. I think I walked about 10 miles every day just to see as much as I possibly could. I do hope that I get out to see you again and that we will share lots of good information, and perhaps as the field changes, we’ll have to have another discussion.

Éva Kerecsen: I would be so glad to have this discussion again. Thank you for inviting me.

Gina Rubel: Congratulations to our producers, Jennifer Simpson Carr and Matt Henderson. This is our 125th episode and without them it would not happen.

Éva Kerecsen

E-mail: eva.kerecsen@nng.com

Website: www.nng.com

LinkedIn: https://www.linkedin.com/in/eva-kerecsen/

Learn about GDPR: https://gdpr-info.eu/

Learn about the EU AI Act: https://artificialintelligenceact.eu/

 
