Assessing and Governing AI: Our Answers to Your Questions


With our webinars, there are always plenty of good questions and not enough time to answer them all satisfactorily. That was especially true in our recent webinar, When AI Meets PI: Assessing and Governing AI from a Privacy Perspective.  

Our audience asked some terrific questions, and while we ran out of time to address them during the webinar, we are committed to providing the answers. With AI, there’s no such thing as too much information, and there is a lot of confusion and uncertainty. The more you know, the better equipped you are to use AI in a responsible and privacy-forward way. 

We asked our Head of Privacy, Rachael Ormiston, to answer our AI webinar questions, plus some of the more interesting questions we’ve gotten about AI and privacy lately. Here are her answers.

From the Webinar 

There are several AI frameworks we are hearing about right now. Which are gaining adoption with mature organizations? Which one best focuses on AI compliance so we can incorporate it into our long-term privacy plan?

During the webinar, our audience brought up three frameworks they commonly see: 

  • The NIST AI Risk Management Framework 
  • ISO 23894, which addresses AI risk management 
  • ISO 42001, which is the AI management system standard 

All are gaining traction as companies try to establish programs to support AI governance. In my view, the two gaining the most momentum are the NIST and ISO 42001 standards.

The NIST AI RMF is a voluntary framework for managing AI-related risk; its generative AI profile (NIST AI 600-1) was developed in part to fulfill the October 30, 2023, Executive Order on AI. The framework focuses on the ethical and responsible development of AI assets and practices, and there is a companion playbook to support it. One advantage of using this framework: if you are already using the NIST Privacy Framework or NIST Cybersecurity Framework in your program, the familiar focus and structure allow crosswalks to other elements of your operations.

The ISO 42001 framework helps companies fulfill their AI obligations by integrating AI governance into existing processes and programs. It focuses on the responsible deployment of AI. As with other ISO standards, ISO 42001 requires independent assessment by a third-party auditor, which makes it a useful way to demonstrate that your efforts have been validated by an external party.

If I want to assess the privacy risk associated with AI use, what are the most critical questions to ask? 

Before your organization starts using AI, you want to make sure you can quantify the associated privacy risks. As with other applications and data sources, the assessment begins with understanding what the AI will accomplish:

  • What data does it use? 
  • What is the purpose of processing that data?  
  • How will that data be used? 
  • Will that data be shared with a third party?   

But there are other AI-centric factors that we didn’t have to consider before: 

  • Can you extract personal data if someone wants to exercise their privacy rights? 
  • Will that data be shared with an AI developer whose model might be trained on the information you share? 
  • Is the AI making decisions? If so, what is the impact, and is there human oversight?  

Osano offers an AI Assessment template that can be a great starting point for privacy teams. 

As Emily, Chris, and Scott mentioned in the webinar, AI unlearning is still in the academic stage. How do we address the right to be forgotten when personal information lives in a large language model (LLM)? 

Once data is in an LLM, it can be hard to remove. Therefore, it’s very important to assess: 

  • What is the harm if you cannot remove the data? 
  • If you can remove the data, how can you verify that the removal was successful? 

For some types of LLMs, you simply cannot eliminate the risk of violating a privacy right. In those cases, you cannot provide personal data, and you will need to set clear guidelines regarding AI usage.

When are the EU AI Act’s provisions coming into force?

The EU AI Act has a staggered schedule for when specific provisions come into effect. Obligations for general-purpose AI models, for example, apply beginning in August 2025. If you want a thorough rundown of dates and details, we highly recommend the excellent chart created by the Future of Privacy Forum.

It seems that the tech giants' standard terms often allow use of customer AI inputs and even outputs for training or fine-tuning their AI models. What is your comfort level with using AI meeting transcription given the prevalence of such provisions?  

In our own experience as a company, we have found that the terms and conditions around AI vary by tech company and pricing plan. At Osano, we are very cautious about any data we contribute becoming part of an AI model while we’re still learning about its usage. As a result, when we use AI, it is only with approved vendors who do not use our data for training. Because that typically requires an enterprise plan, it costs us more, but it is an investment we take seriously to ensure we manage data responsibly.

Other AI Questions We’ve Heard 

Here are some AI questions we’ve heard from privacy pros, gleaned at events, and fielded from inquiring minds in recent months.

How much of AI is just hype? I have heard that there is going to be an “AI Winter,” and that AI investment is slowing down. Is this something I need to consider regarding AI strategy? 

I think we are starting to see the initial AI hype dissipate. However, privacy pros should still take AI seriously and pay close attention to assessing and governing how it shows up in their organizations.

GenAI has become more tangible over the past two years, but AI as a technology is nothing new. We've recently seen some significant strides forward in how we can and should use AI. As companies continue to embrace AI and find uses for it, privacy pros should be working quickly and diligently to establish appropriate guardrails for responsible usage.  

In my career, I’ve seen many innovations seem to plateau or slow down, only to rapidly gain momentum again later. I think AI will go through the same ebbs and flows.

I’m a lawyer. I’m not technical, and I’ve never worked on AI compliance or governance. What recommendations do you have for me? 

AI can feel daunting, even to engineers. But you do not have to be an AI expert to red-flag AI issues. As we heard from Emily and Chris in the webinar, privacy pros are well positioned to support AI governance because of the skills they already have. We know of privacy pros who have completed AI workshops, such as those offered by the IAPP, while others have spent quality time with their engineers.

With AI, there is also plenty of opportunity to experiment, either at home with ChatGPT or by watching simulations online. That might not be the right approach for everyone, but I think the key is not to feel intimidated and to experiment as you feel comfortable. We’re all learning!

The EU AI Act is harmonized and simple compared with US frameworks. How do you stay on top of it all as you build tech? Wouldn’t a model like the one in the EU be easier? 

In many ways, yes. But remember, there are many flavors of AI, which means different rules are sometimes necessary. That said, it would be great to have some degree of simplicity. Thinking back to Scott’s coffee analogy from the webinar, we might need a few different types of coffee on the menu, but we don’t want every single variation of decaf, nut milk, iced, whipped, etc. With AI, I do worry that we may end up with an unnecessarily long menu unless we see some uniformity in regulation.

Do you think we need AI nutrition labels, similar to what has been discussed for privacy? 

Yes, I think we are headed in that direction. We know that in some states, you must disclose how GenAI is being used, and in other states, there are requirements for specific impact assessments. I think it is valuable for organizations to be proactive and share more about their AI usage, particularly in an era when trust is easily lost. I’d love to see this become the market standard.

I don’t often work with our engineering team. As we start to build AI into products and operations, how should I approach building that relationship? 

I find that Taylor Swift memes help. But I understand that not all engineers are Swifties.  

In all seriousness, the best way to build a strong relationship is to communicate well and often and to simplify your explanations and requests. Make it as easy as possible for them to work with you. Checklists are great, especially when you can integrate them with a ticketing system like Jira or Azure DevOps; a rough sketch of what that can look like follows. Also, begin to involve the engineering team in your privacy impact assessments now to build muscle memory for when they will regularly need to weigh in on AI.
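As an illustration, here is a minimal sketch of that kind of integration: a short Python script that files a privacy-review checklist as a Jira ticket through the Jira Cloud REST API. The site URL, project key, credentials, and checklist items below are placeholders for this example, not a prescribed setup; adapt them to your own assessment questions and instance.

```python
# Minimal sketch: file an AI privacy-review checklist as a Jira ticket.
# Assumes Jira Cloud. The site URL, project key, credentials, and
# checklist items are illustrative placeholders.
import os

import requests

JIRA_BASE_URL = "https://your-org.atlassian.net"  # placeholder site
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])

# Example checklist items, drawn from the assessment questions above.
CHECKLIST = [
    "What personal data does this AI feature use?",
    "What is the purpose of processing that data?",
    "Will the data be shared with a third party or model developer?",
    "Can personal data be extracted to honor a privacy rights request?",
    "Is there human oversight of any decisions the AI makes?",
]


def create_privacy_review_ticket(feature_name: str) -> str:
    """Create a Jira task containing the checklist and return its key."""
    description = "AI privacy review checklist:\n" + "\n".join(
        f"* {item}" for item in CHECKLIST
    )
    payload = {
        "fields": {
            "project": {"key": "PRIV"},  # placeholder project key
            "summary": f"Privacy impact assessment: {feature_name}",
            "description": description,
            "issuetype": {"name": "Task"},
        }
    }
    resp = requests.post(
        f"{JIRA_BASE_URL}/rest/api/2/issue", json=payload, auth=AUTH, timeout=30
    )
    resp.raise_for_status()
    return resp.json()["key"]


if __name__ == "__main__":
    print(create_privacy_review_ticket("AI meeting transcription"))
```

The benefit of scripting this, rather than pasting the checklist by hand, is consistency: engineers see the same questions every time a new AI feature goes through review.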

Want to Learn More About Assessing and Governing AI? 

Our webinar, When AI Meets PI: Assessing and Governing AI from a Privacy Perspective, contains much more useful information about how to ensure that AI is being used responsibly and with privacy in mind, including: 

  • Things privacy pros need to know about the EU AI Act and other pending and proposed AI legislation 
  • Why privacy pros are well-suited to contribute to AI governance 
  • How to get started with assessing AI applications and products to discover and protect personal information 

This recording (and others) is available in our Resources section.

Written by: Osano
