Fighting the Robots: Texas Attorney General Settles “First-of-its-Kind” Investigation of Healthcare AI Company

Lathrop GPM
In what it describes as a “First-of-its-Kind Healthcare Generative AI Investigation”, the Texas Attorney General (AGO) recently reached a settlement agreement with an artificial intelligence (AI) healthcare technology company. The company at issue, Pieces Technology, Inc. (Pieces), developed, marketed and sold products and services, including generative AI technology, for use by hospitals and other health care providers.

The technology was marketed as a tool that “summarizes, charts and drafts clinical notes for your doctors and nurses in the [Electronic Health Record] – so they don’t have to”. As described in this alert, the AGO alleged that certain claims made by Pieces about its AI violated state laws prohibiting deceptive trade practices.  The settlement suggests that regulators are becoming increasingly proactive in their scrutiny of this world-changing technology.

AI in Healthcare: It’s Already Here

Technology bearing the hallmarks of AI has long had a home in healthcare. Providers, for instance, have for many years used clinical decision support tools to assist in making treatment choices. The Centers for Medicare and Medicaid Services (CMS) has acknowledged the value of AI. CMS recently clarified, for example, that Medicare Advantage Organizations (MAOs) are permitted to deploy AI in making coverage determinations on behalf of beneficiaries, though the MAO remains responsible for ensuring that the “artificial intelligence complies with all applicable rules for how coverage determinations” are made. This sounds simple but may be hard to operationalize in practice. It means, for instance, that AI that bases medical necessity decisions on an algorithm drawing on a larger data set, rather than on the individual patient’s medical history (e.g., diagnosis, conditions and functional status), physician recommendations and notes, may not comply with Medicare Advantage requirements for medical necessity determinations. Meanwhile, Medicare is already paying for the use of AI software in some situations; for example, five of seven Medicare Administrative Contractors have now approved payment for a type of AI-enabled, CT-based heart disease test.

Falling to Pieces

Pieces made its services available to several Texas hospitals. According to the AGO, the hospitals provided their patients’ data to Pieces in real time so that its generative AI could summarize the patients’ conditions for use by physicians and other medical staff in treating the patients. The AGO alleged that in marketing this technology, Pieces offered a series of metrics and benchmarks that purported to show the accuracy of its AI, including details about the product’s “hallucination” rate. The term “hallucination” describes the phenomenon of a generative AI product creating an output that is incorrect or misleading. Pieces advertised and marketed the accuracy of its products and services by claiming they have a “critical hallucination rate” and “severe hallucination rate” of “<.001%” and “<1 per 100,000”.
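For context, the two advertised figures express the same threshold: a rate below .001% works out to fewer than one hallucination per 100,000 outputs.

$$0.001\% \;=\; \frac{0.001}{100} \;=\; 10^{-5} \;=\; \frac{1}{100{,}000}$$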

In announcing the settlement, the AGO noted that its investigation found that “Pieces made deceptive claims about the accuracy of its healthcare AI products”. An AGO press release stated that the metrics used were “likely inaccurate and may have deceived hospitals about the accuracy and safety” of the products. The AGO alleged that the representations made about the generative AI products may have violated the State’s Deceptive Trade Practices Act because they were false, misleading or deceptive. Pieces agreed to a number of commitments as part of the Assurance of Voluntary Compliance it entered into with the AGO to resolve the matter. These include commitments to:

  • Use clear and conspicuous disclosures about the meaning or definitions of any metrics or benchmarks, including the method or process used to calculate those metrics or benchmarks, in marketing or advertising its generative AI products.
  • Not make any false, misleading or unsubstantiated representations about the AI products, including related to their accuracy, reliability or efficacy or the procedures / methods used to test or monitor the products.
  • Not misrepresent or mislead customers or users of its products or services regarding the accuracy, functionality, purpose or any other feature of the products.
  • Provide all customers with clear and conspicuous disclosures of any “known or reasonably knowable harmful uses or misuses” of the products or services. Among other things, these disclosures must include “known or reasonably knowable” risks to patients and providers (including physical or financial injury in connection with inaccurate outputs) and misuses that could increase the risk of inaccurate outputs or harm to individuals.

Pieces denies any wrongdoing or liability and contends that it has not engaged in any conduct that violates Texas law and that it accurately represented its hallucination rate. The Assurance will remain in place for five years.

Turbulent Times Ahead

It will likely be a long time before any type of national framework is in place to regulate AI in healthcare. In the meantime, states are increasingly looking to regulate in this area. Here is a snapshot of a few recent developments:

More extensive state regulatory schemes

States are approaching the regulation of AI in healthcare in a variety of ways. Several of the takeaways from the Pieces settlement, including transparency around AI and disclosures about how AI works and when it is deployed, appear in some of these approaches. Other states have followed different strategies.

Earlier this year, for example, Colorado enacted the Colorado AI Act (CAIA). Among other things, the CAIA regulates developers and deployers of “high risk” AI systems (systems involved in making “consequential” decisions, including decisions that have a material effect on the cost or provision of health care services). It imposes duties on developers and deployers to avoid algorithmic discrimination in the use of such systems, along with a variety of reporting obligations running between developers and deployers, to the Colorado Attorney General and to consumers. The CAIA will become effective on February 1, 2026.

The Artificial Intelligence Policy Act (AI Act) went into effect in Utah on May 1, 2024, and requires disclosure to consumers, in specific situations, about the use of AI. For example, physicians are required to prominently disclose the use of AI to patients in advance. The Utah law also created a new agency, the Office of Artificial Intelligence Policy, charged with regulation and oversight. This Office recently announced a new initiative to regulate the use of mental health chatbots.

A similar effort occurred in Massachusetts, where legislation introduced in 2024 would regulate the use of AI in providing mental health services. The Massachusetts bill would require mental health professionals who want to use AI in their practice to first obtain approval from the relevant state licensing board, disclose the use of the AI to patients (and obtain their informed consent) and continuously monitor the AI to ensure its safety and effectiveness. The Massachusetts Attorney General also issued an Advisory in April 2024 that makes a number of critical points about the use of AI in that state. The Advisory notes that activities like falsely advertising the quality, value or usability of AI systems, or misrepresenting the reliability, manner of performance, safety or condition of an AI system, may be considered unfair and deceptive under the Massachusetts Consumer Protection Act.

A variety of initiatives occurred in California. For example, Assembly Bill 1502 (which did not pass) would have prohibited health plans from discriminating based on race, color, national origin, sex, age or disability through the use of clinical algorithms in their decision-making. California’s governor did sign Assembly Bill 3030 into law; effective January 1, 2025, it requires health facilities, clinics, physician offices and group practice offices that use generative AI to generate patient communications to ensure that those communications include disclaimer language explaining that they were generated by AI, along with instructions on how the patient may contact a human health care provider.

In Illinois, legislation was introduced in 2024 that would require hospitals that want to use diagnostic algorithms to treat patients to ensure certain standards are met. Hospitals would need to first confirm that the diagnostic algorithm has been certified by the Illinois Department of Public Health and the Department of Innovation and Technology, has been shown to achieve diagnostic results at least as accurate as other diagnostic means, and is not the only method of diagnosis available to the patient. The bill would also require that patients be told when a diagnostic algorithm is used to diagnose them, give patients the option of being diagnosed without the diagnostic algorithm, and require their consent for use of the diagnostic algorithm.

DOJ Interest

In a speech in early 2024, the U.S. Deputy Attorney General noted that the DOJ will seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI. The most daunting federal enforcement tool is the False Claims Act (FCA), with its potential for treble damages, enormous per-claim exposure (including minimum per-claim fines of $13,946) and financial rewards to whistleblowers who file cases on behalf of the DOJ. The potential for FCA exposure where AI uses inaccurate or improper billing codes or otherwise generates incorrect claims that are billed to federal health care programs is easy to understand. Further, as the capability of AI continues to grow, it seems foreseeable that at some point a whistleblower or regulator might assert that the AI actually “performed” the service that was billed to government programs, as opposed to the provider employing the AI as a tool in their performance of the service. Depending on the circumstances, there could also be the potential for violation of state laws regulating the unlicensed practice of medicine or prohibiting the corporate practice of medicine.

Similarly, as AI evolves to act with increasing autonomy (or providers using AI gradually exercise less oversight of the AI), it is possible that the AI may start to be seen as crossing over into generating its own “orders” for health care services. This could be problematic for a variety of reasons, including Medicare payment rules mandating that diagnostic tests be “ordered by the physician who … treats [the] beneficiary for a specific medical problem and who uses the results in the management of the beneficiary’s specific medical problem”. Diagnostic tests that do not satisfy this requirement are not reasonable and necessary, which means they cannot be billed to Medicare.

Prosecutors have had success in bringing FCA cases against developers of health care technology. For example, in July 2023 the electronic health records (EHR) vendor NextGen Healthcare, Inc., agreed to pay $31 million to settle FCA allegations. During the time period at issue in that matter, health care providers could earn substantial financial support from HHS by adopting EHRs that satisfied specific federal certification standards and by demonstrating the meaningful use of the EHR in the provider’s clinical practice. DOJ’s allegations included claims that NextGen falsely obtained certification that its EHR software met clinical functionality requirements necessary for providers to receive incentive payments for demonstrating the meaningful use of EHRs.

In a similar vein, in 2021 DOJ intervened in an FCA case filed against an integrated health system involving allegations that it submitted improper diagnosis codes for its Medicare Advantage enrollees in order to receive higher reimbursement. Medicare Advantage plans are paid a per-person amount to cover the needs of enrolled beneficiaries, and these amounts can be increased based on the beneficiaries’ risk scores. More severe diagnoses generally lead to higher risk scores, which result in larger risk-adjusted payments from CMS to the plan. The defendants allegedly pressured physicians to create addendums to medical records after patient encounters occurred in order to capture risk-adjusting diagnoses that patients did not actually have and/or that were not actually considered or addressed during the encounter. DOJ’s complaint alleges that natural language processing, which involved the use of sophisticated algorithms that purported to better read the natural language of medical records to identify potential undocumented diagnoses, was one of the tools used to identify these potential claims and thus capture higher reimbursement. This involved, for example, applying natural language processing to identify patients with evidence of aortic atherosclerosis and informing the relevant coding department that the patients “have been pre-screened and are being sent to you to consider capturing the diagnosis”. This matter remains in litigation.
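To put the alleged incentive in simplified terms (the base rate and risk scores below are hypothetical, and CMS’s actual risk-adjustment methodology involves additional factors), the monthly capitated payment scales roughly with the enrollee’s risk score, so each additional risk-adjusting diagnosis that raises the score raises the payment:

$$\text{payment} \;\approx\; \text{base rate} \times \text{risk score}, \qquad \$800 \times 1.0 = \$800 \quad \text{vs.} \quad \$800 \times 1.3 = \$1{,}040$$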


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Lathrop GPM

