The Three C's To Consider Before Deploying AI Technologies: Contracts, Compliance and Culture

Clark Hill PLC

Before a business deploys an Artificial Intelligence/Machine Learning (AI/ML) or automated technology, there are three critical considerations: Contracts, Compliance, and Culture. This article is the second in a series and addresses the second C: Compliance.

In 2019, member countries of the Organization for Economic Cooperation and Development (OECD) adopted the OECD AI Principles. The OECD and other bodies have correctly recognized that not all AI/ML or automated systems are created equal in terms of risks and benefits, or in terms of context and use case. The OECD accordingly supplemented the AI Principles with a Framework for the Classification of AI Systems, which helps differentiate AI systems based on the impact they can have on people’s lives. The classification framework evaluates AI systems across a number of factors, including data inputs (data collection, provenance, and structure), AI model structure, and data outputs. The framework then links the classification to policy implications grounded in the AI Principles, such as fairness, transparency, safety, and accountability.

Since then, numerous other organizations have issued voluntary frameworks for the deployment of AI/ML technologies. Recent examples include the NIST Artificial Intelligence Risk Management Framework, the International Organization for Standardization (ISO)’s artificial intelligence standards work under ISO/IEC JTC 1/SC 42, the Institute of Electrical and Electronics Engineers (IEEE)’s Standard for AI Model Management (IEEE 2941-2021), and, more recently, the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights. Added to the mix of recommended frameworks is a range of pending or passed legislation directed at regulating the use of AI/ML systems, including the EU’s draft Artificial Intelligence Act; the federal (and, as of this writing, stalled) Algorithmic Accountability Act and the algorithm impact assessment provisions of the American Data Privacy and Protection Act (ADPPA); and state or city measures such as the anticipated automated decision-making rulemaking from the California Privacy Protection Agency, the sectoral Colorado Division of Insurance (DOI)’s draft Algorithm and Predictive Model Governance Regulation, and New York City’s enacted Local Law 144 on the use of automated employment decision tools.

Together, these myriad frameworks, laws, and proposals may overwhelm a business that wishes to thoughtfully deploy AI/ML tools. But a closer examination reveals that these frameworks and proposals all include some iteration of the following component parts of a comprehensive AI/ML compliance program, critical to an organization’s “trustworthy” use of AI.

Risk Assessment/Classification

The first component of an AI/ML compliance program is some form of Risk Assessment or Risk Classification for the AI/ML use case.

The risk assessment should identify the potential risks of the AI/ML tool, and the risks evaluated should include not only risks to the organization (e.g., reputational harm, business interruption), but also risks to the general public or to an identified stakeholder class, as well as ethical concerns. Each risk should be rated for severity (low, medium, high), and a probability may be assigned (highly likely, unlikely). The goal of the risk assessment process is both to identify unfair results or disproportionate impacts that may result from the use of the AI/ML tool and to ensure the use of the tool is proportional to the business need.
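To make this concrete, the following is a minimal sketch of how a governance team might record and score a single use-case risk. The tiers, weights, and escalation threshold are hypothetical illustrations, not drawn from any particular framework or regulation.

```python
# Illustrative only: scoring one AI/ML use-case risk.
# Severity/likelihood tiers and the threshold are hypothetical.
SEVERITY = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"unlikely": 1, "possible": 2, "highly likely": 3}

def risk_score(severity: str, likelihood: str) -> int:
    """Simple severity-times-likelihood score on a 1-9 scale."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]

# Example: disparate-impact risk of a (hypothetical) resume-screening tool
score = risk_score("high", "possible")
print("Escalate for mitigation" if score >= 6 else "Document and monitor")
```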

Data privacy and individual/consumer rights should be considered during the risk assessment process, including a close look at data inputs, quality, and provenance, and an evaluation of whether aggregated, de-identified, or synthetic data can be used in lieu of personal information.

The risk assessment process is also an opportunity to look closely at ways to improve the use of the tool, including evaluating the tool’s effectiveness and reliability for the specified use case.

Under some state laws, such as the California Consumer Privacy Act, risk assessments are required for certain processing activities, and must be made available upon request to state regulators for inspection/review.  

AI Governance & Written Policies

AI governance is the ability to direct, manage and monitor the AI activities of an organization. For most organizations, AI governance will include internal, enforceable policies and procedures; external notices and rights protocols; and the development of a cross-functional compliance team to be responsible for the same.

  • Internal Policies: Inventory, Auditing, Reporting, and Oversight

For most organizations, it will make sense to adopt an internal AI Governance Policy that sets forth standards for pre-deployment evaluation (design, testing), use parameters and metrics, and post-deployment model validation for all AI/ML tools. This AI Governance Policy should also include measures for continuous monitoring for fairness, quality, and technical drift or creep, and should set forth the organization’s strategy for AI oversight.
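In practice, monitoring for technical drift is often operationalized as a statistical comparison between the data a model was validated on and the data it currently sees in production. The following is a minimal sketch, assuming a Python environment with SciPy available; the threshold and example values are illustrative, not prescribed by any regulation.

```python
# Illustrative sketch: flag distribution drift between a model's
# validation baseline and live production inputs. The threshold
# is hypothetical and should be set by the governance team.
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # below this, treat the feature as drifted

def check_feature_drift(baseline: list[float], live: list[float]) -> bool:
    """Two-sample Kolmogorov-Smirnov test: True if the live
    distribution differs significantly from the baseline."""
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < DRIFT_P_VALUE

# Example: compare a (hypothetical) credit-score input month over month
baseline_scores = [680, 702, 655, 710, 690, 675, 720, 660]
live_scores = [540, 560, 575, 530, 555, 590, 545, 565]
if check_feature_drift(baseline_scores, live_scores):
    print("Drift detected: escalate per the AI Governance Policy")
```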

For high-adoption businesses, the creation of an inventory of in-use AI models (comparable in concept to a data map or data inventory for data privacy compliance) will be important and is currently required under the Colorado DOI proposed regulations. The inventory should include a detailed description of all AI/ML tools in use, the purpose and associated problems their use is intended to solve, potential risks identified, and implemented safeguards. The inventory can also be used to track data inputs, outputs, limitations, and overall model performance. It is the foundational resource that allows the organization and its governance team to understand and communicate the scope of the organization’s AI/ML use. The inventory should be reviewed for accuracy and updated at least annually.
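By way of illustration only, one inventory record might capture fields like the following. The schema and every value are hypothetical; neither the Colorado DOI draft nor any other regulation prescribes a particular format.

```python
# Illustrative only: one record in an AI/ML model inventory.
# Field names and values are hypothetical examples.
from dataclasses import dataclass

@dataclass
class ModelInventoryRecord:
    name: str                    # internal name of the AI/ML tool
    vendor: str                  # developer/licensor of the tool
    purpose: str                 # business problem the tool addresses
    data_inputs: list[str]       # categories of data consumed
    outputs: str                 # what the tool produces or decides
    risks_identified: list[str]  # findings from the risk assessment
    safeguards: list[str]        # mitigations actually implemented
    last_reviewed: str           # reviewed/updated at least annually

record = ModelInventoryRecord(
    name="resume-screener-v2",
    vendor="ExampleHRTech (hypothetical)",
    purpose="Rank inbound resumes for recruiter review",
    data_inputs=["resume text", "job description"],
    outputs="Ranked candidate list with fit scores",
    risks_identified=["potential disparate impact on protected classes"],
    safeguards=["annual bias audit", "human review of all rankings"],
    last_reviewed="2023-06-01",
)
```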

The development of a cross-functional AI team that reports to, or has the involvement of, senior leadership is an emerging requirement of some regulations (for example, the Colorado DOI’s draft Algorithm and Predictive Model Governance Regulation). An AI governance team should oversee business units’ use of AI/ML tools, evaluate and improve business outcomes, and raise overall transparency within the organization. Such a team should include not only legal and compliance functions but also data scientists, developers, engineers, and marketing or HR representatives. The team will also be responsible for identifying emerging or existing regulatory requirements and implementing best practices prior to use. AI governance is not a one-size-fits-all exercise; it should be customized to reflect the size, adoption level, and risk appetite of the organization.

For AI tools used by staff within an organization (think ChatGPT or a permitted alternative), a business should deploy an Acceptable Use Policy that clearly identifies acceptable AI/ML uses and prohibited use cases, and the AI governance team (or another designated department) should monitor for compliance, as well as for data loss and other employee-centric risks.
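As one toy illustration of the data-loss monitoring piece, a compliance team might screen outbound prompts for obviously sensitive patterns before they reach an external AI service. The patterns below are hypothetical examples; production data loss prevention tooling is far more sophisticated than this sketch.

```python
# Toy sketch of prompt screening under an Acceptable Use Policy.
# Patterns are illustrative examples, not a complete DLP rule set.
import re

BLOCKED_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Internal-only marker": re.compile(r"(?i)\bconfidential\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in a prompt."""
    return [name for name, rx in BLOCKED_PATTERNS.items() if rx.search(prompt)]

hits = screen_prompt("Summarize this CONFIDENTIAL memo for me...")
if hits:
    print(f"Blocked under Acceptable Use Policy: {', '.join(hits)}")
```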

  • External Policies: Notice, Disclosures, and Opt-Out Requirements 

Where are your business’s data inputs coming from? If external consumer data or regulated personal information is used by an AI/ML tool or for automated decision making, updates to Privacy Policies, Employee Handbooks, and other just-in-time notices and disclosures may be required. Existing laws, such as the California Consumer Privacy Act and New York City’s Local Law 144, require that businesses provide notice to consumers whose information is processed by an automated or AI/ML tool for certain types of decision making. Additional obligations, such as obtaining opt-in consent or publishing the results of bias audits for these tools, may also apply.

Further, an essential component of the ethical use of AI and automated tools, and a right guaranteed under Article 22 of the European Union’s General Data Protection Regulation (GDPR) (“Automated individual decision-making”), is the right of an individual both to opt out of the use of their information for automated AI/ML processing and to obtain human intervention in that decision making. Automated decision-making regulations under the CCPA are expected to take a similar approach, requiring opt-outs and human oversight for certain AI/ML and automated use cases. Thus, any AI/ML compliance strategy will include a protocol for obtaining consent and addressing consumer/individual opt-outs and complaints.

Employee Training & AI Proficiency

Ultimately, personnel at all levels of the organization may be involved in the deployment of AI/ML tools and automated technologies. This can include customer service professionals leveraging intelligent chatbots, human resources professionals evaluating resumes sorted through the use of an HR Tech AI/ML tool, medical professionals evaluating health care recommendations of AI/ML technologies, or banking professionals evaluating fraud detection alerts or lending/financing offers based on automated analyses.

Training of relevant or responsible personnel can take the mystique out of AI/ML and provide them with the strategies needed to implement the organization’s directives when it comes to AI adoption. Further, such training is required under certain emerging regulatory regimes.

On the whole, training and AI proficiency efforts by an organization should work to enhance the technical AI capacity of an organization, allowing it to further modernize and innovate.

AI Incident Response Plan & Auditing Your AI/ML For Vulnerabilities

AI/ML technologies can be subject to security vulnerabilities and intrusions, and can cause privacy harms and other real-world impacts on people. An “AI event” can be anything from a publicly posted consumer complaint alleging discrimination or disparate impact, to the failure of an AI tool relied upon for critical business functions, to a regulatory investigation into a particular use case.

Just as businesses maintain and test a cyber Incident Response Plan (IRP) to detect, mitigate, and respond to cyber threats, so too should they have a plan in place to respond to unintended consequences of AI usage. Both NIST’s AI guidance and the Colorado DOI’s proposed regulations call for the development of an AI Incident Response Plan detailing how the organization will respond if significant risks develop during the deployment of AI/ML technology.

An AI Incident Response Plan will include, at a minimum: a checklist for assessing the current use of the AI/ML tool; a process for evaluating whether suspension or termination of the tool is necessary during the response; thresholds for when events should be escalated to the legal department, executive team, or board of directors; an evaluation of available insurance coverage (Tech E&O, cyber, EPL, or other) along with notification requirements to third parties, individuals, or regulators; and a communication strategy for internal and external communications concerning the event.
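As an illustration of the escalation-threshold element, a plan might map event severity to recipients and immediate actions along the following lines. The tiers, recipients, and actions are hypothetical examples only; each organization will set its own.

```python
# Illustrative escalation matrix for an AI Incident Response Plan.
# Severity tiers, recipients, and actions are hypothetical examples.
ESCALATION_MATRIX = {
    "low":      {"notify": ["ai_governance_team"],               "suspend_tool": False},
    "medium":   {"notify": ["ai_governance_team", "legal"],      "suspend_tool": False},
    "high":     {"notify": ["legal", "executive_team"],          "suspend_tool": True},
    "critical": {"notify": ["legal", "executive_team", "board"], "suspend_tool": True},
}

def escalate(severity: str) -> None:
    """Print the response steps for an AI event of the given severity."""
    plan = ESCALATION_MATRIX[severity]
    if plan["suspend_tool"]:
        print("Suspend the AI/ML tool pending review")
    print("Notify:", ", ".join(plan["notify"]))

escalate("high")
```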

Importantly, an AI IRP will include measures to establish and preserve attorney-client privilege, including the retention of outside counsel. And, just as cyber IRPs are tested at least annually and updated to reflect the current threat landscape, so too should the AI IRP be tested and updated to incorporate current AI/ML vulnerabilities and threats.

Vendor Management and Tech E&O Insurance

Last, and as discussed in our first article on the Three C’s to Deploying AI/ML Technologies, the licensing of AI/ML tools requires certain vendor management and third-party contracting strategies, as well as consideration of Tech E&O insurance.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Clark Hill PLC
