NAIC adopts principles for trustworthy artificial intelligence in insurance that support the avoidance of proxy discrimination against protected classes

Eversheds Sutherland (US) LLP

On August 14, 2020, the National Association of Insurance Commissioners (NAIC) adopted a set of principles to guide the work of insurers and other entities, including data providers, that play an active role in the lifecycle of artificial intelligence (AI) systems used in insurance. The principles set out expectations that AI systems and AI actors be fair and ethical, avoid proxy discrimination against protected classes, be accountable, comply with the law, be transparent, and produce secure, safe and robust outputs. The guidelines are intended to assist regulators and NAIC committees in determining the appropriate level of regulatory oversight for insurance-specific AI applications.

The NAIC AI Working Group began work on the AI principles in June 2019, using the Organisation for Economic Co-operation and Development's (OECD) AI principles as a model. The OECD's AI principles have been adopted by 42 countries, including the United States. Significant tensions emerged during debate at meetings of the NAIC's AI Working Group and of the Innovation and Technology Task Force over the meaning of the term "proxy discrimination" and the practicality of avoiding proxy discrimination against protected classes in a risk-based insurance system. Nevertheless, NAIC leadership recognized the importance of addressing proxy discrimination in AI, and the concept of avoiding "proxy discrimination against protected classes" remained in the AI principles that the full NAIC membership adopted during the August 2020 National Meeting.

AI is a powerful, dynamic tool that can be used in insurance to identify hidden correlations and relationships among factors in massive data sets. AI-based applications currently underwrite and price insurance policies, assist in claims processing and fraud avoidance, and identify populations of consumers for targeted advertising, among other uses. AI is a strong growth area for the industry, but its "deep learning" capabilities, absent accountability and transparency, could lead, and some allege already have led, to unfair outcomes. If an algorithm used in AI is biased, insurance regulators stress that the algorithm must not be used; AI actors must anticipate this possibility and not, in the words of one regulator, "cut the machine loose" and claim they were unaware of the issue.

The NAIC AI principles are intended to be an aspirational document that acts as guidance and "do not carry the weight of law or impose any legal liability." They nonetheless constitute a statement of regulatory policy and "should be used to assist regulators and NAIC committees addressing insurance-specific AI applications," and on that basis are intended to serve as guiding principles both for new regulations and for the enforcement of existing ones. At the same time, the principles reflect regulators' recognition that regulation should not stifle technological innovation, stating that they should be "interpreted and applied in a manner that accommodates the nature and pace of change in the use of AI by the insurance industry [to] promote innovation, while protecting the consumer."

In adopting these principles, the NAIC joins the EU, other governments and big tech companies (IBM, Google and Microsoft, among others) in recognizing the potential benefits and harms of AI, such as biased outcomes. To deal with the power and potential of AI, the NAIC principles call on AI actors to implement mechanisms and safeguards to foresee potential adverse consequences and address these risks comprehensively. The NAIC principles recognize the need to improve public confidence in AI by providing stakeholders with a way to inquire about, review and seek recourse for AI-driven insurance decisions. The principles also call on AI actors to ensure a reasonable level of traceability of AI data sets, processes and decisions made during the lifecycle of an AI system. To that end, the principles provide that AI actors should enable transparent analyses of their AI systems, consistent with best practices and legal requirements. Finally, the principles provide that AI actors should apply a systemic and continuous risk management approach to their AI systems and outputs in the areas of privacy, data security and unfair discrimination.

Many regulators noted that their duty is to ensure that the industry innovates responsibly, with AI systems that are well thought out and free of unintended consequences. The New York State insurance regulator remarked on the synergies between the AI principles and New York's Insurance Circular Letter No. 1 (2019), which makes insurers responsible for the models, algorithms and data designed or used by third-party vendors.

Insurance regulators and industry representatives recognize that adopting these AI principles is only the beginning. The far more difficult task will be putting these principles into practice and operationalizing them in law and/or regulation as necessary. 



© Eversheds Sutherland (US) LLP | Attorney Advertising
