As the New Year begins, questions surrounding how recent election results will impact technology regulation across industries loom large. It’s hardly a bold prediction that artificial intelligence (“AI”) and the regulation thereof will remain front and center in 2025 as policymakers, businesses, professionals, and consumers grapple with its transformative potential and equally prominent challenges. However, throughout his campaign, President-Elect Trump, his nominees, and his advisors have signaled[1] that they will take a very different approach to AI regulation than what was outlined in the Biden-Harris Executive Order on AI issued on Oct. 30, 2023.[2] So what can we expect in this dynamic political environment?
On Dec. 17, 2024, the House Bipartisan Task Force on Artificial Intelligence (the “Task Force”) published its comprehensive 253-page report (the “Report”), which provides guiding principles and recommendations that will serve as a blueprint on AI issues to be considered by Congress. While the Report addresses implications of AI usage and development within various domains such as national security, education, and intellectual property, stakeholders in the highly regulated health care industry will benefit from close attention to the Report’s recommendations surrounding AI in healthcare.
AI’s Increased Role in Healthcare
As is commonly discussed in healthcare circles, AI and machine learning (“ML”) promise to revolutionize the healthcare landscape, including the way health care is provided and paid for. The Report highlights some of the most consequential developments of AI within areas such as drug and medical device development and testing, diagnosis and clinical decision-making, population health management, electronic health records (“EHRs”), and health insurance.
Within the drug and medical device development space, the Task Force seems particularly impressed by AI’s potential to “decrease the time and cost required to get a drug to market” from the current average of twelve years (not including preclinical testing) and an estimated cost of between $314 million and $2.8 billion. The increased efficiency wrought by machine and algorithmic learning is particularly important for the development of so-called “orphan drugs,” as noted in the Report,[3] and such efficiency could also lead to faster, cheaper, and more successful clinical trials. As the Task Force notes, the FDA’s Center for Devices and Radiological Health has already approved “over 800 non-generative AI/ML-enabled devices under its existing medical device authorities.”
AI also shows great promise in assisting clinicians in diagnostic decision-making, whether by prioritizing patients identified as suffering from a more advanced disease state or at greater risk of developing a pathology or by earlier detection of disease through different imaging technologies. But the Report emphasizes that the exercise of professional judgment by medical practitioners in interpreting AI-augmented images remains “essential,” a position supported by the 2024 guidance of the Federation of State Medical Boards (“FSMB”), which asserts that physicians using AI in clinical decision support are deemed to accept responsibility for their responses to AI’s recommendations.
AI is also making strides, the Report notes, in streamlining EHR data collection and assisting in administrative services, such as creating draft portal-based patient communications for clinicians to review and use, and even transforming verbal discussions with patients into draft clinical notes. Reducing such administrative duties will almost certainly decrease pressure on physicians and “can also help with physician burnout, as physicians consistently cite the administrative burden related to EHR documentation as a top contributor to burnout.” However, the Report cautions that AI-created EHR documentation should be carefully evaluated for bias and other errors to ensure that “AI systems are not inadvertently diminishing the quality of care or efficiency in healthcare.” Such an evaluation might come in the form of a process of surveillance, adverse event reporting, and self-validation modeled after the FDA’s post-market evaluation process for drugs and devices.
One of the most discussed and often controversial applications for AI in the healthcare field lies in the realm of health insurance coverage decisions. Logically, AI tools can assess vast pools of patient data and project healthcare coverage needs for sick patients. However, AI-assisted decisions to deny care or coverage tend to be criticized by stakeholders citing a “lack of transparency in coverage decisions.” In one cited example, 90% of AI-based denials of elderly patients’ claims for skilled nursing or in-home care were reversed upon appeal to federal administrative law judges. The Report suggests that concerns about lack of individualized coverage decisions, especially in the Medicare Advantage arena,[4] led to an April 2023 CMS final rule requiring Medicare Advantage plans to “make medical necessity determinations ‘based on the circumstances of the specific individual … as opposed to using an algorithm or software that doesn’t account for an individual’s circumstances.’”[5]
Policy Challenges and Key Findings
Beyond simply highlighting the growing role of AI technologies in the medical field, the Report also foreshadows policy hurdles and pitfalls caused by the adoption of such technologies. Notably, the Report discusses a major challenge for effectively scaling AI systems in healthcare: achieving interoperability among disparate technology and data systems. The Report notes “[t]he lack of ubiquitous, uniform standards for medical data and algorithms impedes system interoperability and data sharing.” Although interoperability is a challenge for implementing AI in several industries, it is particularly troublesome for healthcare providers because the sensitivity of patient information often leads to highly fragmented and restricted data sets. As a practical matter, most AI systems require enormous volumes of accurate and reliable training data, so only a handful of large providers will have access to meaningful training sets in the absence of standardization. The Report recommends the implementation of “standardized testing and voluntary guidelines” to address the challenge of developing AI tools in this environment and maneuvering between diverse health technology platforms and software. As an example, the Report suggests that the Department of Commerce, “through its work developing general standards for AI risk management and evaluation, could work with HHS and relevant stakeholders to establish best practices … to facilitate the development, implementation, and use of AI technologies.”
The Task Force also addressed the limited legal and ethical guidance regarding liability standards “when AI produces incorrect diagnoses or harmful recommendations[,]” noting the complexity that comes from multiple parties becoming involved in developing, deploying, and managing AI systems and sensitive information. Although the Report observes that both the FSMB and the Office for Civil Rights have placed responsibility on healthcare providers – and not the developers – for AI-related actions, it is premature to assume that all future liability for reliance on AI-enabled tools will rest squarely or solely on the shoulders of clinicians using such tools.
In addition to considering interoperability challenges and liability allocation, the Report also raises a broader question surrounding reimbursement of medical professionals. In a world where AI tools might drastically increase efficiency, thus reducing a physician’s time spent providing services, should time-based reimbursement for physician services be reconsidered? As the Report notes, “CMS calculates reimbursements by accounting for physician time, acuity of care, and practice expense. Considering that AI tools streamline these practices and reduce time spent on services, current payment mechanisms cannot adequately reimburse these tools.” Even if AI tools lessen the time it takes to reach a diagnosis or identify a treatment plan, physicians and health systems will have to spend huge sums to purchase and integrate such tools into their workflows, so perhaps reimbursement will simply be reallocated to the health care facilities investing in the tools.
In the meantime, AI is being used in other ways to reduce healthcare spending. In January 2023, the U.S. Department of Health and Human Services announced a pilot program designed to combat Medicare billing fraud by using AI models to ferret out improper claims and identify new types of criminal billing activity. According to the Report, some estimates place healthcare fraud at “3 to 10% of total healthcare spending.” Then-acting HHS Chief Information Officer Karl Mathias stated of the AI program, “It’s still in a pilot phase but they’ve seen some success with this, and they intend to keep growing it.”[6]
Ultimately, the Report distilled its research into the following two “Key Findings”:
- AI's use in healthcare can potentially reduce administrative burdens and speed up drug development and clinical diagnosis.
- The lack of ubiquitous, uniform standards for medical data and algorithms impedes system interoperability and data sharing.
Task Force Recommendations
Based on its detailed study of the current challenges of developing and implementing AI systems in healthcare, the Task Force ultimately offered five recommendations to serve as guiding principles for future policymaking related to AI and healthcare:
- Encourage the practices needed to ensure AI in healthcare is safe, transparent, and effective.
- Maintain robust support for healthcare research related to AI.
- Create incentives and guidance to encourage risk management of AI technologies in healthcare across various deployment conditions to support AI adoption and improve privacy, enhance security, and prevent disparate health outcomes.
- Support the development of standards for liability related to AI issues.
- Support appropriate payment mechanisms without stifling innovation.
While nothing in the Report is binding, the Task Force’s bipartisan leadership declared its intention that the Report “inform future congressional policymaking.” Indeed, the Report offers valuable insight into the current sense of Congress and suggests potential action items as the nation continues to grapple with a rapidly changing AI environment. Healthcare professionals, administrators, and policymakers who thoroughly assess these findings and recommendations are likely to benefit from the Report’s efforts to identify and distill industry best practices, and they can more thoughtfully make risk-informed decisions about whether to implement new AI systems as those systems develop.
[1] See, e.g., 2024 GOP Platform: Make America Great Again!, p.9. (July 7, 2024) (committing to repeal and replace the Biden-Harris Executive Order on AI), available at https://www.documentcloud.org/documents/24795052-2024-gop-platform-july-7-final/.
[2] Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023), available at https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
[3] “Orphan drugs,” as noted by a Report-cited study, are “medicinal products intended for the diagnosis, prevention, or treatment of a rare disease for which no other satisfactory medicinal product is approved in the Community or which represent significant improvement over the existing alternatives.” Irissarry, Carla, and Thierry Burger-Helmchen. “Using Artificial Intelligence to Advance the Research and Development of Orphan Drugs.” Businesses, vol. 4, no. 3, September 2024 at 454, https://www.mdpi.com/2673-7116/4/3/28.
[4] See The Report p.211 n.57 (citing Casey Ross & Bob Herman, Denied by AI: How Medicare Advantage plans use algorithms to cut care for seniors in need, STAT (March 13, 2023), available at https://www.statnews.com/2023/03/13/medicare-advantage-plans-denial-artificial-intelligence/).
[5] Id. p.211 & n.60 (quoting Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program, Medicare Prescription Drug Benefit Program, Medicare Cost Plan Program, and Programs of All-Inclusive Care for the Elderly, 42 CFR Parts 417, 422, 423, 455, and 460 [CMS-4201-F] (April 12, 2023), available at https://www.federalregister.gov/documents/2023/04/12/2023-07115/medicare-program-contract-year-2024-policy-and-technical-changes-to-the-medicare-advantage-program).
[6] HHS CIO Mathias says tree-based AI models helping to combat Medicare Fraud, FedScoop (Jan. 18, 2023), available at https://fedscoop.com/hhs-cio-mathias-says-tree-based-ai-models-helping-to-combat-medicare-fraud/.