AI On the Prize: Decoding FDA's Latest Guidance

Venable LLP

Last week, the U.S. Food and Drug Administration (FDA) issued two significant draft guidance documents concerning the use of artificial intelligence (AI) in medical devices and in drug and biological product development. While these documents fit within the agency's holistic approach to AI over the product life cycle, they provide the most detailed operational insight to date into how FDA evaluates AI-enabled technologies in areas including transparency, bias mitigation, and the use of tools such as a predetermined change control plan (PCCP) to address challenges like data drift. Although still in draft form, these guidance documents are useful for drug, biologic, and device developers, manufacturers, clinical teams, regulatory professionals, and researchers using AI in healthcare.

AI-Enabled Medical Devices

The first draft guidance, "Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations," addresses how FDA intends to regulate devices that incorporate AI, starting with the threshold issue of which devices the agency considers to be "AI-enabled devices." Most helpfully, FDA provides details on how it expects sponsors to address AI in marketing applications for AI-enabled medical devices, including 510(k) premarket submissions, De Novo classification requests, and Premarket Approval (PMA) applications. The guidance explains not only which areas of the submission should address AI, but also why the information should be included and what factors are relevant to FDA's evaluation of the technology. Key recommendations include addressing:

  • Device Description: Sponsors should include detailed statements describing how AI is used to achieve the device's intended purpose
  • User Interface and Labeling: Submissions should include descriptions of the user interface and the labeling information to be provided
  • Risk Assessment: Risk management should take into account all users throughout the Total Product Lifecycle (TPLC), as well as the risk of misunderstood, misused, or unavailable information
  • Data Management: Submissions should specify the data used to train the AI model to ensure generalizability to the intended use population. The Agency suggests using unbiased and representative data for model training, considering source bias, over-representation, and other confounding factors
  • Model Validation: Users must be able to understand and interact with the device so that it performs as intended

FDA also suggests that proactive performance monitoring could be an appropriate risk control measure for data drift. For performance validation, manufacturers should use representative validation datasets and ensure masking of device outputs from both clinical reference standards and model developers to avoid bias in study design. Helpfully, FDA clarifies that an AI-enabled device can be considered substantially equivalent to a non-AI-enabled device, provided it does not introduce different questions of safety and effectiveness. Finally, FDA recommends what should be included in public submission summaries to more transparently describe the purpose and function of AI within the device, including a recommended (but not required) "model card" template.

AI in Drug and Biological Product Development

The second draft guidance, "Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products," outlines the Agency's approach to the use of AI in the development of drugs and biological products. This guidance document is limited to the use of AI models to inform the agency's evaluation of safety, efficacy, and quality of the product. It does not offer any guidance on the use of AI-enabled drug development tools. FDA introduces a risk-based framework for assessing the credibility of AI models, structured around a seven-step process centered on the Context of Use (COU).

FDA urges submitters to craft detailed credibility assessment plans to demonstrate AI suitability, mitigate delays, reduce risks, and ensure regulatory compliance. The plans should include performance criteria tailored to the AI model's COU and testing on independent data from different trials or healthcare systems. The guidance also suggests how sponsors might evaluate the ultimate adequacy of the AI model, including potential outcomes if the model does not prove to be sufficiently established.

For Further Consideration

Though highly detailed, the guidance documents leave several issues open. For example, a lingering question is when a follow-up 510(k) is necessary for a previously cleared AI-enabled device that has adapted to new data inputs. Also, while an AI model's suitability for its COU depends on a credibility assessment report, FDA provides relatively limited clarity on the criteria for determining appropriateness for drugs and biologics. Finally, both guidance documents leave some ambiguity around transparency in AI training data and post-market performance monitoring. A recent Congressional Research Service report emphasized the need for structured, risk-based oversight to balance innovation with patient safety.

Next Steps

For both guidance documents, FDA is requesting public comment by April 7, 2025. In addition, the Agency will hold a webinar on February 18, 2025, to discuss the AI-enabled device draft guidance.

Written by: Venable LLP
