| State | Type | Bill Number | Description |
|---|---|---|---|
| Arizona | Payor-Focused | HB 2175 | Prohibits the use of AI to deny a claim or prior authorization for medical necessity, experimental status, or any other reason that involves the use of medical judgment. |
| Connecticut | Payor-Focused | HB 5587 | Prohibits health insurers from using AI as the primary method to deny health insurance claims. |
| Connecticut | Payor-Focused | SB 447 | Prohibits a health carrier from using AI in the evaluation and determination of patient care. |
| Connecticut | Payor-Focused | SB 817 / HB 5590 | Prohibits a health insurer from using AI to automatically downcode or deny a health insurance claim without peer review. |
| Florida | Payor-Focused | SB 794 | Requires that an insurer's decision to deny a claim be made by a qualified human professional and that an AI model not serve as the sole basis for determining whether to adjust or deny a claim. |
| Illinois | Provider-Focused | SB 2259 | Requires a health facility, clinic, physician's office, or office of a group practice that uses generative AI for patient communications to include (1) a disclaimer that the communication was created by generative AI and (2) clear instructions describing how a patient may contact a human healthcare provider, employee, or other appropriate person. |
| Illinois | Payor-Focused | SB 1425 | Prohibits an insurer from issuing a denial, or reducing or terminating an insurance plan, based solely on the use of an AI system, and requires disclosure of an insurer's use of AI. |
| Indiana | Provider-Focused | HB 1620 | Requires healthcare providers to disclose the use of AI technology when AI is used to (1) make or inform decisions involving the healthcare of an individual or (2) generate patient communications. |
| Indiana | Payor-Focused | HB 1620 | Requires insurers to disclose the use of AI technology when AI is used to (1) make or inform decisions involving coverage or (2) generate communications to insureds regarding coverage. |
| Maryland | Payor-Focused | HB 820 | Prohibits a health insurance carrier from using AI tools to deny, delay, or modify health services. |
| Massachusetts | Payor-Focused | S 46 | Requires carriers or utilization review organizations that use AI algorithms or tools for utilization review or utilization management to implement certain safeguards and provide disclosures related to their use. The bill also requires that determinations of medical necessity be made only by a licensed healthcare professional. |
| Massachusetts | Payor-Focused | H 1210 | Requires carriers to disclose whether AI algorithms or automated decision tools will be used in the claims review process. |
| Massachusetts | General | H 94 | Requires developers and deployers of "high-risk AI systems" (including any entity using AI systems to make decisions affecting consumers in the state) to implement certain safeguards and provide disclosures to protect consumers against algorithmic discrimination and to mitigate risks related to the use of AI systems. Massachusetts defines "high-risk AI systems" to include systems that materially influence decisions with significant legal, financial, or personal implications for healthcare services. |
| Massachusetts | General | H 1210 | Grants patients and residents of health facilities the right to be informed when the information they receive is generated by AI, as well as the ability to contact a human healthcare provider if the information was not previously reviewed and approved by a provider. |
| Nebraska | General | LB 642 | Requires developers and deployers of "high-risk AI systems" (including any person doing business in Nebraska that uses a "high-risk AI system") to implement certain safeguards and provide disclosures to protect consumers from the known risks of algorithmic discrimination. Nebraska defines "high-risk AI systems" to include systems that have a material legal or similarly significant effect on the provision or denial of healthcare services without human review or intervention. |
| New Mexico | General | HB 60 | Requires developers and deployers of "high-risk AI systems" (including any person who uses AI systems) to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination by implementing certain safeguards and making disclosures about the use of AI. New Mexico defines "high-risk AI systems" to include systems that make, or are a substantial factor in making, a decision with a material legal or similarly significant effect on the provision or denial of healthcare services. |
| New York | Payor-Focused | A3991 | Requires healthcare service plans that use AI algorithms or tools for utilization review or utilization management to implement certain safeguards and provide disclosures related to their use. It also requires that determinations of medical necessity be made only by a licensed healthcare professional. |
| New York | Payor-Focused | A3993 | Prohibits insurers from using clinical algorithms in their decision-making that discriminate on the basis of race, color, national origin, sex, age, or disability. |
| New York | Payor-Focused | A1456 | Requires health insurers to notify enrollees about the use, or lack of use, of AI-based algorithms in the utilization review process. |
| New York | General | A3356 | Requires developers and operators of "high-risk advanced AI systems" to obtain a license from the state. "High-risk advanced AI systems" include those that manage, control, or significantly influence healthcare or healthcare-related systems, including but not limited to diagnosis, treatment plans, pharmaceutical recommendations, or the storage of patient records. |
| Oklahoma | Provider-Focused | HB 1915 | Requires hospitals, physician practices, and other healthcare facilities responsible for implementing AI devices for patient care to establish a quality assurance program and an AI governance group for the safe, effective, and compliant use of AI devices in patient care. |
| Rhode Island | Payor-Focused | H 5172 / SB 13 | Requires health insurers to disclose the use of AI to manage claims and coverage, including the use of AI to issue adverse determinations to enrollees, and requires that any adverse determinations be reviewed by a healthcare professional. |
| Tennessee | Payor-Focused | HB 1382 | Requires health insurance issuers that use AI for utilization review or utilization management to implement safeguards related to equitable use, compliance, and disclosure. It also requires that determinations of medical necessity be made only by a licensed healthcare professional. |
| Texas | Provider-Focused | SB 1411 | Prohibits a physician or healthcare provider from using AI-based algorithms, when providing a medical or healthcare service, to discriminate on the basis of race, color, national origin, gender, age, vaccination status, or disability. |
| Texas | Payor-Focused | SB 815 | Prohibits a health benefits utilization reviewer from using automated decision systems, including AI systems, to make adverse determinations. |
| Texas | Payor-Focused | SB 1411 | Prohibits a health benefit plan issuer from using AI-based algorithms in the issuer's decision-making to discriminate on the basis of race, color, national origin, gender, age, vaccination status, or disability. |
| Texas | Payor-Focused | SB 1822 | Requires issuers of health insurance policies to disclose to enrollees and to any physician or healthcare provider whether the issuer or the issuer's utilization agent uses AI-based algorithms in conducting utilization reviews. |
| Texas | General | HB 1709 | Requires developers and deployers of "high-risk AI systems" (including any person doing business in the state that puts into effect or commercializes a "high-risk AI system") to implement certain safeguards and provide disclosures to protect consumers against algorithmic discrimination and to mitigate risks related to the use of AI systems. Texas defines "high-risk AI systems" to include systems that are a substantial factor in decisions with a material legal or similarly significant effect on a consumer's access to, the cost of, or the terms or conditions of a healthcare service or treatment. |