Though responsible and ethical use of artificial intelligence (AI) has been a hot topic for the past few years, there has not yet been significant adoption of laws or regulations aimed specifically at regulating the use of AI by healthcare providers in clinical practice. Even outside the healthcare context, AI regulation remains somewhat underdeveloped, as various authorities struggle to keep pace with technological developments, leading to a confusing regulatory patchwork. In the meantime, various organizations have weighed in with nonbinding guidance in efforts to bridge that gap as stakeholders seek to strike the right balance between promoting innovation and managing risk.
The Federation of State Medical Boards (FSMB) is the latest to do so. In April 2024, the FSMB House of Delegates issued a report titled Navigating the Responsible and Ethical Incorporation of Artificial Intelligence into Clinical Practice (the Report).[1] Such guidance is significant because the FSMB represents 71 medical and osteopathic boards across the United States, and those medical boards are the regulatory agencies with authority to regulate physicians as they use AI in clinical practice. Notably, the FSMB committee responsible for the Report included representatives from nine different medical boards. Therefore, though the Report is nonbinding, healthcare providers should note that it provides insight into how a range of medical boards are likely thinking about these issues.
The guidance itself states that it is intended to aid both physicians and state medical boards on the topic and further states that “[b]y thoughtfully addressing the opportunities and challenges posed by AI in healthcare, state medical boards can promote the safe, effective, and ethical use of AI as a tool to enhance, but generally not replace, human judgment and accountability in medical practice.” The guidance also cautions against “over-regulation and regulatory overreach” by medical boards, and promotes consistency in regulation across jurisdictions.
The Report focuses on the following areas: (1) education, (2) accountability, (3) medical records, (4) informed consent and data privacy, (5) equity and bias, and (6) governance. The following are a few key takeaways:
Education
The Report states that medical education should include an emphasis on advanced data analytics and use of AI in a clinical setting, and physicians should engage in continuing medical education programs on the topic to help them understand both the benefits and the risks.
Accountability
The Report states that “the physician is ultimately responsible for the use of AI and should be held accountable for any harms that occur.” More generally, this is an area of uncertainty and debate, including whether, or to what extent, a vendor should be responsible (e.g., under a product liability claim) when a physician uses the tool to engage in clinical decision-making. The Report notes that the extent to which the physician should be held accountable by a medical board depends on the context: risks to patients increase with tools that “more closely model the practice of medicine,” and the level of regulatory scrutiny and accountability should correspondingly increase. By way of example, the FSMB considers administrative tasks to be on the low end of the risk spectrum and clinical decision support on the high end, with chatbots and talk therapy not far behind. With respect to clinical decision support tools, the Report states that a physician should always be prepared to provide the rationale for their decision-making, whether following the AI’s recommendations or deviating from them. In short, as stated in the Report, “failure to apply human judgement to any output of AI is a violation of a physician’s professional duties.” The Report also suggests that where AI tools can be utilized to provide superior patient care, a physician’s failure to avail themselves of the benefits of such tools could contribute to a failure to meet their professional obligations and the applicable standard of care.
Medical records
The Report cautions that utilizing AI to assist with medical record documentation without proper oversight may lead to inaccurate documentation. Physicians should also ensure that appropriate security measures are in place.
Informed consent and data privacy
The Report makes clear that the duty to explain diagnosis and treatment options, risks and benefits, and reasonable alternatives as part of the informed consent process includes the use of AI. Aside from concerns regarding autonomy in the care received, there are also concerns regarding autonomy over how an individual’s data is used. The Report is unequivocal in the view that “physicians should receive a patient’s consent prior to application of a tool to a patient’s care.”
Equity and bias
The Report states that physicians have a professional responsibility to identify and eliminate biases in providing care, including bias introduced via AI, while also noting that physicians should strive to ensure that “all patients have equitable access to the benefits of AI.” Addressing bias in the development of AI (such as bias in the underlying data used to train a tool) is a significant topic well beyond the scope of this article. However, the topic of ensuring that access to AI does not exacerbate existing disparities is addressed less frequently. Given the immense potential of AI to move the needle on quality of care and patient outcomes, it is an important topic that healthcare providers should consider when deploying AI tools.
AI governance through ethical principles
The Report cautions against medical boards attempting to regulate particular applications of AI given the speed of technological advancement. Instead, the Report lists key ethical principles and recommendations for various stakeholders across the ecosystem (including FSMB). For example, the Report recommends the following:
- FSMB should develop FAQs to serve as a resource for medical boards.
- Medical boards should develop guidelines regarding physician disclosure to patients regarding use of AI. They should also review the definition of the “practice of medicine” in their jurisdiction, to ensure oversight of “those who provide healthcare, human or otherwise.”
- Hospital systems and insurers that select AI tools for clinical decision support should educate physicians on such tools, provide access to performance reports, and develop a process to regularly review such tools for efficacy.
The FSMB Report follows other notable guidance by industry organizations, including the American Medical Association’s (AMA) recent report, AMA Future of Health: The Emerging Landscape of Augmented Intelligence in Health Care.[2] Additional regulation is also anticipated at the state and federal levels in the coming years, and healthcare providers are well-advised to keep a close eye on such developments. For example, Georgia enacted a law earlier this year that prohibits any action involving clinical care “based solely on results derived from the use or application of artificial intelligence” and requires “meaningful review” of AI outputs utilized by clinicians.[3] That law also requires the Georgia Composite Medical Board to adopt rules and regulations governing AI, including establishing disciplinary standards for physicians who fail to comply with standards established for the safe and ethical use of AI in clinical practice.
Takeaways
- The Federation of State Medical Boards (FSMB) recently issued a report addressing responsible and ethical artificial intelligence (AI) use.
- The report is significant given medical boards’ oversight roles.
- Key topics addressed include physician accountability for AI use.
- Key topics addressed also include informed consent in patient care.
- Beyond FSMB and others’ guidance, additional regulatory oversight is underway.
* Amy Joseph is a Partner at Orrick, Herrington & Sutcliffe LLP, and Jeremy Sherer is a Partner at Orrick, Herrington & Sutcliffe LLP.
1 Federation of State Medical Boards, Navigating the Responsible and Ethical Incorporation of Artificial Intelligence into Clinical Practice (Adopted by FSMB House of Delegates), Executive Summary, April 2024, https://www.fsmb.org/siteassets/advocacy/policies/incorporation-of-ai-into-practice.pdf.
2 See American Medical Association, AMA Future of Health: The Emerging Landscape of Augmented Intelligence in Health Care, February 26, 2024, https://www.ama-assn.org/practice-management/digital/ama-future-health-emerging-landscape-augmented-intelligence-health-care.
3 Georgia House Bill 887, 2024, https://www.legis.ga.gov/api/legislation/document/20232024/220941.