The National Association of Insurance Commissioners has Entered the AI Chat

Fox Rothschild LLP

In short, insurers that use AI should realize that existing insurance laws apply to AI and that regulators will be looking for a lot of paperwork to evidence compliance with those laws.

What exactly do you need to know?

Purpose of the NAIC’s model bulletin:

  • Set forth the department’s expectations as to how insurers will govern the development, acquisition and use of AI technologies and systems, by or on behalf of the insurer, to make or support decisions that impact consumers
  • Advise insurers of the information and documentation that the department may request during an investigation or examination of any insurer regarding its use of such technologies and systems

Key points:

Existing insurance laws apply to AI

  • Insurers that use AI Systems to support decisions that impact consumers must do so in a manner that complies with, and is designed to assure that the decisions made using those systems meet, the requirements of all applicable federal and state laws, including laws governing unfair trade practices and unfair claims settlement practices.
  • Those laws require, at a minimum, that decisions made by insurers not be arbitrary, capricious or unfairly discriminatory. Compliance with those standards is required regardless of the tools and methods insurers use to make those decisions.
  • The Principles of Artificial Intelligence that the NAIC adopted in 2020 are an appropriate source of guidance for insurers as they develop and use AI systems.
  • An insurer is responsible for assuring that rates, rating rules and rating plans developed using AI techniques and predictive models that rely on big data (BD) and machine learning (ML) do not result in excessive, inadequate or unfairly discriminatory insurance rates with respect to all forms of casualty insurance (including fidelity, surety and guaranty bond) and all forms of property insurance (including fire, marine and inland marine insurance), as well as any combination of the foregoing.

AIS Program

  • Insurers must develop, implement and maintain a written program (an “AIS Program”) for the use of AI Systems that is designed to assure that decisions impacting consumers made or supported by AI Systems are accurate and do not violate unfair trade practice laws or other applicable legal standards.
  • The existence of an AIS Program, including documentation related to the insurer’s adherence to the standards, processes and procedures set forth in the AIS Program, will facilitate both compliance with the existing laws and the department’s investigations and actions.
  • The AIS Program that an insurer adopts and implements should be reflective of, and commensurate with, the insurer’s assessment of the risk posed by its use of an AI System, considering the nature of the decisions being made, informed or supported using the AI System; the nature and the degree of potential harm to consumers from errors or unfair bias resulting from the use of the AI System; the extent to which humans are “in-the-loop;” and the extent and scope of the insurer’s use or reliance on data, models and AI Systems from third parties.

AIS Program Guidelines

  • Should be designed to mitigate the risk that the insurer’s use of AI Systems to make or support decisions that impact consumers will result in decisions that are arbitrary or capricious, unfairly discriminatory, or that otherwise violate unfair trade practice laws.
  • Should be adopted by the board of directors or an appropriate committee of the board.
  • Should be tailored to and proportionate with the insurer’s use of and reliance on AI and AI Systems.
  • May be independent of, or part of, the insurer’s existing enterprise risk management (ERM) program.
  • Should address governance, risk management controls and internal audit functions.
  • Should address the use of AI Systems across the insurance product life cycle.
  • Should address all of the AI Systems used by or on behalf of the insurer to make decisions that impact consumers, whether developed by the insurer or a third party and whether used by the insurer or by an authorized agent or representative of the insurer.
  • With respect to predictive models specifically, should address the insurer’s processes and procedures for designing, developing, verifying, deploying, using and monitoring predictive models, including a description of the methods used to detect and address errors or unfair discrimination in the insurance practices that result from use of the predictive model (an illustrative sketch of one such method follows this list).
  • Should address the insurer’s standards for the acquisition of, use of, or reliance on AI Systems developed or deployed by a third party, including the due diligence performed and the specific terms included in contracts with those third parties.
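
The bulletin does not prescribe how an insurer should detect unfair discrimination; that is left to the processes documented in the AIS Program. Purely as an illustration, and not something drawn from the bulletin or from any state’s law, one common starting point is an adverse impact ratio that compares favorable-outcome rates across demographic groups. Everything in the sketch below, including the adverse_impact_ratios helper, the sample data and the 0.80 review threshold, is a hypothetical assumption.

```python
from collections import defaultdict

def adverse_impact_ratios(decisions, reference_group):
    """Compare each group's favorable-outcome rate to a reference group's rate.

    decisions: iterable of (group_label, favorable) pairs, where favorable
        is True when the consumer received the favorable outcome
        (e.g., the application was approved).
    reference_group: the group label used as the comparison baseline.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable count, total count]
    for group, favorable in decisions:
        counts[group][0] += int(bool(favorable))
        counts[group][1] += 1

    rates = {group: fav / total for group, (fav, total) in counts.items()}
    reference_rate = rates[reference_group]
    # Ratio of each group's favorable rate to the reference group's rate;
    # values well below 1.0 suggest the model's outcomes warrant review.
    return {group: rate / reference_rate for group, rate in rates.items()}


if __name__ == "__main__":
    # Hypothetical decision log: (group label, favorable outcome?)
    sample = [
        ("A", True), ("A", True), ("A", False), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", True),
    ]
    for group, ratio in sorted(adverse_impact_ratios(sample, "A").items()):
        # The 0.80 threshold is a common rule of thumb, not a legal standard.
        status = "review" if ratio < 0.80 else "ok"
        print(f"group {group}: adverse impact ratio {ratio:.2f} ({status})")
```

In practice, the groups, outcomes and thresholds would come from the insurer’s own data, actuarial standards and applicable law, and a flagged ratio would trigger the review and remediation steps documented in the AIS Program.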

Things the department can ask for in an investigation:

1) Evidence re: AIS Program:

  • The written AIS Program or any decision by the insurer not to develop and adopt a written AIS Program.
  • Information and documentation relating to or evidencing the adoption of the AIS Program.
  • The scope of the insurer’s AIS Program, including any AI Systems and technologies not included in or addressed by the AIS Program.
  • How the AIS Program is tailored to and proportionate with the insurer’s use and reliance on AI Systems.
  • The policies, procedures, guidance, training materials and other information relating to the adoption, implementation, maintenance, monitoring and oversight of the insurer’s AIS Program, including: (1) Processes and procedures for the development of AI Systems; (2) Processes and procedures related to the management and oversight of algorithms and predictive models; (3) Protection of non-public information, including protection against unauthorized access to the algorithms or models themselves.

2) Information and documentation relating to the insurer’s pre-acquisition/pre-use due diligence, monitoring, oversight and auditing of AI Systems developed or deployed by a third party.

3) Information and documentation relating to or evidencing the insurer’s implementation of and compliance with its AIS Program:

  • Documentation relating to or evidencing the formation and ongoing operation of the insurer’s coordinating bodies for the development, use and oversight of AI Systems, including documentation identifying key personnel and their roles, responsibilities and qualifications.
  • Documentation relating to the management and oversight of algorithms, predictive models and AI Systems, including: (1) Documentation of compliance with all applicable AIS Program policies; (2) Information about the data used in the development and oversight of the specific model or AI System (including the data source, provenance, data lineage, quality, integrity, bias analysis and minimization, suitability and updating); (3) Information related to the techniques, measurements, thresholds, benchmarking and similar controls adopted by the insurer; (4) Validation, testing and auditing.

4) Third-Party AI Systems:

  • Due diligence conducted on third parties and their data, models or AI Systems.
  • Contracts with third-party AI System providers.
  • Audits and confirmation processes performed with respect to third-party compliance with contractual and, where applicable, regulatory obligations.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Fox Rothschild LLP | Attorney Advertising
