Various regulators within the National Association of Insurance Commissioners and some individual states’ regulators continue to address insurers’ use of artificial intelligence (AI), machine learning (ML), the use and protection of consumer data, and related issues.
NAIC
- The Innovation, Cybersecurity, and Technology (H) Committee is drafting a principles-based model bulletin on AI/ML governance. Regarding the draft model, Commissioner Kathleen Birrane emphasized that there was strong agreement among the regulators to:
- Avoid directly regulating vendors of consumers’ data, AI, and ML. Rather, the regulators will instruct insurers on their responsibility over vendors. The bulletin may also include suggestions on the specific provisions that an insurer should include in its vendor contracts, including audit rights, the ability to understand the vendor’s AI/ML governance, and an obligation to make the vendor available to the insurer’s regulator.
- Scale regulatory expectations in proportion to the extent of an insurer’s use of AI and ML.
The model bulletin will contain four sections: introduction, definitions, regulatory standards, and regulatory oversight/examination. The regulatory standards section will focus on documented governance that includes risk management. The regulatory oversight section will set the expectation on what companies will need to be prepared to produce when examined. The bulletin is expected to be exposed over the summer and discussed at the NAIC Summer National Meeting.
- Workstream #1 of the Big Data and Artificial Intelligence (H) Working Group is working with 14 states to measure life insurers’ use of AI/ML techniques. Rather than covering every operational area, this survey focuses on three — pricing and underwriting, marketing, and risk management. However, it does ask insurers to list other areas in which they use AI/ML. Other notable features of the survey include:
- For each type of AI/ML used, the survey requests the level of deployment, the name of the model, the ML techniques used, whether it was developed internally or externally, the level of influence (i.e., the model makes the decision without human intervention, the model suggests an answer, or the model supports a human decision), the types of data used, and whether there is a model governance framework in place.
- The governance section seeks to understand an insurer’s awareness of specific risk areas tied to the NAIC’s artificial intelligence principles.
- The FAQs explain that if a vendor contract does not allow insurance regulators to review the information from the vendor, that contract might be void for public policy reasons. In any event, the information used by an insurer is subject to the regulatory authority of the participating states.
- Workstream #2 of the Big Data and Artificial Intelligence (H) Working Group seeks to give regulators the proper tools to ask insurance companies about models and data via its draft model and data regulatory questions for regulators. At the group’s March 22 meeting, Commissioner Doug Ommen recognized the need for changes based on the comments received from interested parties on the draft questions. The comments on the draft questions addressed the following subjects:
- The scope of the questions and the need for a limited, principles-based approach to encourage innovation without overregulation, including redundant regulations.
- Compliance costs for smaller companies and vendors.
- The impact of making insurers responsible for third-party vendors.
- The need to safeguard vendors’ proprietary information.
- Methods for testing data and models.
- Based on the comments received, Ommen stated that a revised draft should be prepared by the end of May.
- The Accelerated Underwriting (A) Working Group issued a draft guidance document as another tool to assist regulators when reviewing accelerated underwriting programs used by life insurers. This tool provides sample questions and areas for review by regulators when preparing inquiries to insurance companies. The draft draws on various sources, including the NAIC’s artificial intelligence principles, in framing questions that facilitate regulators’ assessment of whether the accelerated underwriting programs are fair, transparent, and secure, as required by existing law. The group explained:
Making sure that the use of accelerated underwriting is fair to consumers is important because its use impacts both the availability and affordability of life insurance to consumers. Ensuring that insurers use accelerated underwriting in a transparent manner is important because consumers should understand what personal data is being accessed by insurers and how that data is being used. Lastly, insurers accessing sensitive consumer data have a duty to secure that data to protect consumers from the harm of unauthorized disclosure.
Colorado Division of Insurance
- Before its February 7 stakeholder meeting, the Colorado Division of Insurance issued its draft proposed “Algorithm and Predictive Model Governance Regulation” and solicited informal comments on the draft. The informal comments are now available on the division’s website. The division has not taken any further action, so the draft regulation appears to remain under development.
- On April 6, 2023, the division held its first stakeholder meeting on private passenger auto insurance. The meeting was introductory, with more substantive sessions to come. Specifically, the division reiterated the goal of the legislation and introduced the issues raised by insurers’ use of external data and AI in the context of private passenger auto insurance. The division then opened the floor to questions from interested parties. Interested parties provided feedback on the legislation, with the main concerns focusing on the risk of moving too quickly and producing an underdeveloped final regulation.
Other States
- New York
In January 2019, the New York State Department of Financial Services issued Circular Letter No. 1 to advise insurers authorized to write life insurance in New York of their statutory obligations regarding the use of external consumer data and information sources in underwriting for life insurance. The letter also requires insurers to (1) determine that the external tools or data sources do not collect or use prohibited criteria and (2) establish that the underwriting or rating guidelines are not unfairly discriminatory. The letter goes on to warn that an insurer “may not simply rely on a vendor’s claim of non-discrimination or the proprietary nature of a third-party process as a justification for a failure to independently determine compliance with anti-discrimination laws. The burden remains with the insurer at all times.”
- Connecticut
The Connecticut Insurance Department issued a notice on April 20, 2022, on the usage of big data and avoidance of discriminatory practices. The notice sought to remind insurers of the expectation that they will comply with anti-discrimination laws in the use of technology and big data. The department discussed its authority to require insurers and third-party data vendors, model developers, and bureaus to provide the department with access to data used to build models or algorithms included in any rate, form, and underwriting filings. The department emphasized the importance of data accuracy, context, completeness, consistency, timeliness, relevancy, and other critical factors of responsible and secure data governance.
- California
On June 30, 2022, the California Department of Insurance released its bulletin titled “Allegations of Racial Bias and Unfair Discrimination in Marketing, Rating, Underwriting, and Claims Practices by the Insurance Industry.” It reminded insurers of their “obligation to market and issue insurance, charge premiums, investigate suspected fraud, and pay insurance claims in a manner that treats all similarly-situated persons alike.” The department posited that “conscious and unconscious bias or discrimination … can and often does result from the use of artificial intelligence, as well as other forms of ‘Big Data’ (i.e., extremely large data sets analyzed to reveal patterns and trends)” and warned that the use of algorithms and models must have a sufficient actuarial nexus to the risk of loss. It further noted that even when the “models and data may suggest an actuarial nexus to risk of loss, unless a specific law expressly states otherwise, discrimination against protected classes of individuals is categorically and unconditionally prohibited.”