Optimizing AI: Strategies for Intelligent Management

Morgan Lewis
Contact

Morgan Lewis

Evaluating what intelligent management of artificial intelligence (AI) means for an organization requires analysis of various intersecting factors. First, stakeholders need to determine what counts as AI. That determination will help reveal what policies should be created—as well as how to shape their goals, purpose, and scope. While risk mitigation in the world of AI is paramount, to fully maximize AI’s potential, organizations must embrace the reality that constant technological change will demand flexible implementation, vigilant monitoring, and a commitment to regular updating.

WHAT COUNTS?

AI simulates human intelligence to conduct tasks in an automated fashion. This will mean different things for different organizations because the technology implicates almost any tool that provides assistance in generating work product. This can include off-the-shelf or completely in-house models. Generative AI tools or functions built into other applications can also qualify. For example, when collaborative meeting tools create an auto-generated meeting summary, that means using and interacting with AI tools. As technology evolves and we adopt new programs into our business lives, the scope of what qualifies as AI will expand and, unfortunately, might be murky at times.

This fluidity makes it especially important to keep open lines of communication within the enterprise to understand what tools people are using and how they should be covered under the AI use guidance. What individuals can and cannot do with respect to AI must be clearly defined. Providing examples of use cases is a good way to guide standardization. In governing usage, leadership would do well to talk about the policy in detail with employees. Typical questions may include the following:

  • Are personal devices allowed?
  • Are public-facing AI tools in play?
  • Do AI tools impact other policies?
  • Exactly what type of data is being employed?
  • Will the legal department have access?
  • What about the confidentiality of privileged information?
  • What data is prohibited from being shared within the AI tool?
  • Will people be told that they need to opt out of data sharing, or are there instances where the environment might be closed?
  • What rules are at stake?

Standardizing enforcement is also important. How will appeals be handled? How will examiners assess the historical record?

GAUGING RISK

To analyze risk effectively, organizations should consider risk from a high level before diving in to mitigate threats raised by potentially problematic AI tool usage or management. What is the organization's tolerance for risk based on the business model and department function? For example, allowances for employee control might be stricter for the department that posts on social media than for one completely focused on inner-facing work. Similarly, transportation, banking, and healthcare companies will likely have a lower tolerance for error compared to some other sectors. In all cases, human review and independent verification should be obtained before results are used, but the processes for validation might differ. For example, digital controls can be very robust and powerful, but they can also require layers of testing such that they may not be an appropriate primary gatekeeper for all departments. Remember, manual tools such as clearly identifying who has access to the tool and providing prompt training can be extremely valuable.

Red teaming (stress testing for potential hacks) is an effective way to identify risks and their entry points. This exercise is particularly useful for off-the-shelf or third-party models because their construction is often opaque. Note that documentation of all analyses and testing is vital, particularly if issues arise with the validity of using the model’s results. This testing may lead to an assessment of the improvements that must be made and determining how to implement them. Keep in mind that it is critical to ensure that any risk assessment documentation is tightly controlled to avoid creating a roadmap for potential hackers.

Confidentiality and data privacy add additional complexities to adopting AI use. The EU General Data Protection Regulation and similar data privacy laws have required companies to think about data protection and privacy in a completely different way (and not just in Europe). Testing should assess how a party can implement data privacy requests, such as the right to erasure, within the model and what that means for retraining the model and validating the adjusted model’s output.

One of the most commonly known risks of AI use are hallucinations, which are misleading or incorrect results generated by an AI model. The safety risks of hallucinations are not to be underestimated, and it is important to remember that not all incorrect output will be easily identifiable. Misalignment, which happens when the model correctly draws from the dataset but produces an inaccurate output, can be difficult to spot and have significant business impacts. This is why both routine model testing and human results validation are key to effective implementation of AI.

Organizations should also consider name, image, and likeness (NIL) issues. For example, images or voices of athletes and singers generated with the AI tool could create copyright and trademark infringement.

THE LONG VIEW

While it is essential to have an AI policy in writing, the policy is only as good as its implementation and its actual effect in practice. Defining the types of data that the AI tool has access to is crucial. An enterprise should not simply point its AI tool at its data lake. Doing so could create a multitude of issues, such as an issue of privilege waiver or including conflicting historical company data. Some key questions to ask when developing a model are as follows:

  • Who is going to evaluate the model?
  • What will be the criteria for assessment?
  • Which individuals can help understand how the organization will really be using this tool?
  • What data loss protection tools are in place, and what side effects might they have?

All this reinforces the need for real-world testing before anything is rolled out. Cybersecurity, engineering, information technology, compliance, and legal are all examples of key stakeholders that should have a hand in the development and monitoring processes. Diversity in perspectives and knowledge will improve the effectiveness of model development, training, and monitoring.

Once an organization has rolled out AI use, questions may arise when facing potential litigation involving AI: What is relevant to that litigation? Is it inputs? Outputs? Prompts? The code itself? A combination? If, say, there is a claim of infringement based on outputs, the inputs might be able to reveal the entire process, possibly including information that would negate infringement claims. And never forget that the underlying dataset may be constantly shifting based on how data is fed into the model and maintained, which could be an issue when trying to preserve data.

Apart from retention and preservation, organizations also need to consider data disposition and deletion. This is very important because data hygiene is key, particularly in those locations with data privacy statutes. However, even in those jurisdictions that are less demanding, enterprises must be ready in case of a data breach—knowing in advance what is in the model that could have been targeted. Must consumers or third parties be informed? Has data been stored no longer than necessary? Might the model still have that information within it, even if it had been deleted from the underlying data sources? If a third-party tool is being used, what are the rights and the ability to go in and delete? If people used personal devices to access AI tools, has there been rigorous monitoring?

Deletion of data and data disposition will affect how models operate, and the monitoring done after deletion or any AI tool updates will be as critical as it was in the development phase. Modular lines of code can be valuable here because they can be plugged into various models to ensure consistent risk assessment. Also, there might not be a single, company-wide AI model. Hallucinations, context, relevance, accuracy, and adequacy are all based on the data itself. And it is usually wise to create a log of all activities to see where the errors or misalignments arise.

INFORMATION GOVERNANCE TAKEAWAYS

As discussed, an AI policy is only as effective as its implementation and updates. Attention to the user is vital because such policies will not be effective unless they are user friendly. To aid end users, provide user education and training. Don't rush a rollout. Be careful and intentional throughout implementation, which can be a lengthy phase. Institute a process for assessing the constant evolution of both technological change and relevant regulations. And remember to keep a close eye on the use of third-party tools.

Intelligent AI management entails many moving parts. It requires patience, collective development from diverse functions of the enterprise, and persistent oversight based on experienced guidance.

[View source.]

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Morgan Lewis

Written by:

Morgan Lewis
Contact
more
less

PUBLISH YOUR CONTENT ON JD SUPRA NOW

  • Increased visibility
  • Actionable analytics
  • Ongoing guidance

Morgan Lewis on:

Reporters on Deadline

"My best business intelligence, in one easy email…"

Your first step to building a free, personalized, morning email brief covering pertinent authors and topics on JD Supra:
*By using the service, you signify your acceptance of JD Supra's Privacy Policy.
Custom Email Digest
- hide
- hide