Artificial Intelligence in Health Care: Key Considerations for Oncology

Foley & Lardner LLP

Artificial intelligence (AI) has the power to revolutionize health care. In oncology, there are now opportunities to apply AI to support diagnostics, predictive analytics, and administrative functions.

This hot topic was discussed in sessions on Digital Innovation (including a cancer-specific AI use case) and Advanced Clinical Pathways at the September 2024 Cancer Care Business Exchange. While the business case became clear through these lively sessions, AI technology is advancing far faster than the thoughtful legislative and regulatory framework this critical technology requires.

Hospitals, providers, payors, and third-party vendors should take practical steps when selecting and implementing any new AI solution. Below are a few key considerations to bear in mind when delving into this critical area:

Ensure Compliance with All Applicable Privacy and Security Standards

AI, and often the success of an AI solution, depends on data. Nearly all AI solutions require enormous volumes of data, which means integrating and ingesting data from many sources, including electronic medical record systems, provider platforms, and patient portals. Understanding how the AI solution will be used and supported, and by whom, is critical to ensuring that health care organizations comply with federal and state privacy and security laws.[1] Vendors that create and support AI solutions for health care providers must ensure that their solutions meet the baseline privacy and security best practices that health care providers will expect.

Health care providers must understand at a granular level how the AI solution operates in order to assess the regulatory protections required. What type of data will the vendor require to operate and support the AI solution? If patient information will be shared with the vendor, the provider will likely need, at a minimum, a business associate agreement in place with the vendor. Health care providers should also understand what type of patient notice, if any, is required. Some state laws require that individuals be put on notice if their personally identifiable information will be used by an AI solution for purposes of providing a recommendation or feedback. Providers should also watch for signs that their vendors will seek to use health data for their own purposes, such as training their AI solution or deploying it for the provider's competitors. Any use of data for a vendor's own purposes (not related to administration of its customer contract) would need to be specifically permitted in the business associate agreement, and any such data would need to be de-identified in accordance with HIPAA and with the health care provider's permission.
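For illustration only, the sketch below strips a handful of direct identifiers from a patient record before it would be shared with a vendor. This is a minimal sketch under stated assumptions, not a compliant de-identification pipeline: HIPAA's Safe Harbor method requires removal of all 18 categories of identifiers (or expert determination), and the field names here are hypothetical.

```python
# Illustrative sketch only: removes a few HIPAA Safe Harbor-style direct
# identifiers from a patient record before sharing with a vendor. Real
# de-identification must address all 18 Safe Harbor identifier categories
# (or use expert determination) and be permitted under the business
# associate agreement. Field names are hypothetical.
from copy import deepcopy

DIRECT_IDENTIFIERS = {
    "name", "address", "phone", "email", "ssn",
    "medical_record_number", "health_plan_id",
}

def deidentify(record: dict) -> dict:
    """Return a copy of `record` with direct identifiers removed and
    dates reduced to year only, per the Safe Harbor date rule."""
    clean = deepcopy(record)
    for field in DIRECT_IDENTIFIERS:
        clean.pop(field, None)
    # Safe Harbor generally permits only the year of dates tied to a patient.
    for date_field in ("birth_date", "admission_date", "discharge_date"):
        if date_field in clean:
            clean[date_field] = clean[date_field][:4]  # keep "YYYY" only
    return clean

record = {
    "name": "Jane Doe",
    "ssn": "000-00-0000",
    "birth_date": "1960-04-17",
    "diagnosis": "C50.911",
}
print(deidentify(record))  # {'birth_date': '1960', 'diagnosis': 'C50.911'}
```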

Actively Monitor Legislative and Regulatory Activity in the Area as Oversight is Likely to Increase

With AI moving at a breakneck pace, governmental regulation has been slow; however, we can expect to see new laws and official guidance with increasing frequency. A number of states have created task forces and councils to study and monitor the use of AI. In March 2024, Utah passed the Artificial Intelligence Policy Act, which established the first disclosure requirements for the use of generative AI in regulated occupations, effective May 1, 2024. And in May 2024, Colorado passed the "Consumer Protections for Interactions with Artificial Intelligence" bill (effective February 1, 2026), the first state law regulating "high-risk artificial intelligence systems."[2] For more information, please see Foley's blog post, Colorado Passes New AI Law to Protect Consumer Interactions.

AI-Driven Medical Devices May Require Approvals Before They Can Be Implemented

Providers should confirm whether a proposed AI-driven solution requires regulatory approval (e.g., as a medical device) and, if not, what regulatory guidance supports the vendor's marketing of the product, along with appropriate representations and warranties backing those statements. While AI can be trained to learn from mistakes and evolve to improve performance, existing regulatory models are designed to evaluate the static framework in which the AI operates. As a result, many medical devices may lack appropriate regulatory approval, or could become subject to an approval requirement in the future. The Food and Drug Administration (FDA) is exploring ways to more fully evaluate AI solutions (as of August 2024, it had authorized 950 AI/ML-enabled medical devices), but it may ultimately look to Congress for guidance in the form of new legislation. In the meantime, providers should vet any proposed AI-driven medical device with counsel before adoption.

Take Steps to Protect Yourself Through Diligence and Under Contract

Imagine the worst-case scenario: your AI vendor suffers a cybersecurity attack, or a diagnosis is missed because of a problem with an AI product and a provider's overreliance on the AI solution. Providers should carefully implement policies and procedures outlining the organization's approved uses of AI, diligence their vendors and the AI solution thoroughly, have contracts reviewed to build in appropriate protections, and ensure insurance and other liability protections are in place. Appropriate diligence areas include whether the vendor has appropriately registered to protect its intellectual property; whether the vendor vets its employees and agents to guard against health care fraud and other high-risk areas; what insurance coverage (including cyber coverage) the vendor has in place; and what resources are available if a claim is brought. Contracts should proactively allocate responsibility accordingly. Many vendors offer up boilerplate contracts; these should be carefully reviewed to ensure they include the following legal terms (a simple review-tracking sketch follows the list):

  • Basic performance expectations
  • Representations and warranties regarding compliance and performance
  • Confidentiality/privacy and security
  • Non-solicitation obligations or other restrictive covenants (particularly for vendors who will be on-site)
  • Indemnification provisions – look for indemnity tied to negligence, breach, willful misconduct, cyber incidents, and IP infringement
  • Liability limits – many vendors seek to cap liability at an amount equal to one to three years of fees
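As a rough illustration, the contract terms above can be tracked as a simple checklist so that gaps in a vendor's boilerplate surface before signature. The structure and item names below are our own sketch, not an industry standard.

```python
# Hedged sketch: the contract terms above represented as a review checklist,
# so missing protections in a vendor's boilerplate are visible before
# signature. The data structure itself is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class ContractTerm:
    name: str
    present: bool = False  # set True once counsel locates the term
    notes: str = ""

REQUIRED_TERMS = [
    ContractTerm("Basic performance expectations"),
    ContractTerm("Representations and warranties (compliance and performance)"),
    ContractTerm("Confidentiality/privacy and security"),
    ContractTerm("Non-solicitation / restrictive covenants"),
    ContractTerm("Indemnification (negligence, breach, willful misconduct, cyber, IP)"),
    ContractTerm("Liability limits (watch for caps near one to three years of fees)"),
]

def missing_terms(terms: list[ContractTerm]) -> list[str]:
    """Names of required terms not yet found in the draft contract."""
    return [t.name for t in terms if not t.present]

REQUIRED_TERMS[0].present = True  # example: performance expectations located
print(missing_terms(REQUIRED_TERMS))
```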

Develop and Implement Proper IT and AI Governance Procedures

A well-defined corporate governance framework can combat concerns with the ethical implementation of AI, build trust in the AI solution, and help minimize a company's liability down the road. This includes establishing policies and procedures, along with appropriate training for personnel who will access or use the AI solution, that enable the organization to vet, oversee, and monitor AI during implementation and on a go-forward basis. It is also important to continually revisit and iterate on these governance policies as the technology and its use cases develop, and to monitor compliance with them within your organization.
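As a minimal sketch of what go-forward monitoring might look like in practice, the hypothetical wrapper below logs each use of an approved AI tool so that governance staff can later audit who used it and when. The tool name, function names, and logging approach are all assumptions for illustration, not a prescribed implementation.

```python
# Minimal governance sketch: wraps calls into an AI solution so each use is
# logged for oversight and monitoring. Names are hypothetical; a real program
# would tie this to formal policies, training, and a durable audit store.
import functools
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_governance")

def audited(tool_name: str):
    """Decorator recording who invoked an approved AI tool, and when."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, user: str, **kwargs):
            audit_log.info(
                "tool=%s user=%s time=%s",
                tool_name, user, datetime.now(timezone.utc).isoformat(),
            )
            return fn(*args, **kwargs)
        return inner
    return wrap

@audited("radiology_triage_model")  # hypothetical AI solution
def run_model(study_id: str) -> str:
    return f"triage result for {study_id}"

print(run_model("study-123", user="dr.smith"))
```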

Prioritize Solutions that Support or Enhance Reimbursement

AI can help drive efficiency and innovation that, most importantly, can lead to better patient care and outcomes, but savvy oncology practices will also look for AI solutions that enhance their bottom line. Is the AI solution itself reimbursable, and/or does the vendor have a path to reimbursement? Is the product certified for compliance with the Centers for Medicare & Medicaid Services' (CMS) quality reporting programs, such as the Merit-based Incentive Payment System (MIPS) and the Promoting Interoperability programs? Certification by the Office of the National Coordinator for Health Information Technology (ONC) is recognized by CMS to confirm adherence to security, functionality, and technology requirements.[3]

However, depending on the level of integration with the AI solution, vendors may have to take steps to comply with ONC’s requirements for generative AI solutions and clinical decision support tools.

Next Steps

As AI becomes an increasing part of the operational fabric of health care, the practical steps above can help reduce compliance, privacy, and security risks.

*The authors acknowledge the contributions of special counsel Jacqueline Acosta, and Francesca Camacho, a student at the Boston University School of Law and 2024 summer associate at Foley & Lardner LLP.

[1] Many states have privacy and security standards that exceed the federal baseline and should be reviewed.

[2] SB205 (Consumer Protections for Interactions with Artificial Intelligence).

[3] ONC recently issued revised certification criteria for "decision support interventions," "patient demographics and observations," and "electronic case reporting," and updated the baseline United States Core Data for Interoperability (USCDI) standard to Version 3.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Foley & Lardner LLP

Written by:

Foley & Lardner LLP
