New FDA Guidance on AI and Medical Products

Troutman Pepper

On March 15, 2024, the U.S. Food and Drug Administration (FDA) published a paper titled "Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together." The paper describes the FDA's strategy for addressing the use of artificial intelligence (AI) in medical products across the Center for Biologics Evaluation and Research (CBER), the Center for Drug Evaluation and Research (CDER), the Center for Devices and Radiological Health (CDRH), and the Office of Combination Products (OCP). The paper reflects the FDA's commitment to both fostering innovation in medical technology and safeguarding patient health through regulation, standard-setting, and monitoring.

The FDA identifies four priorities for the development and use of AI across the medical product life cycle:

(1) Fostering collaboration to safeguard public health;

(2) Advancing the development of regulatory approaches that support innovation;

(3) Promoting the development of standards, guidelines, best practices, and tools for the medical product life cycle; and

(4) Supporting research related to the evaluation and monitoring of AI performance.

The FDA intends to collaborate with external stakeholders, such as developers, academic institutions, global regulators, and patient groups, to "cultivate a patient-centered regulatory approach that emphasizes collaboration and health equity." In practice, this means the FDA will solicit input from interested parties regarding their concerns about the use of AI in medical products, such as algorithmic bias, transparency, cybersecurity, and quality assurance. In addition, the FDA will promote educational programs to teach industry participants about safe and ethical AI use in medical product development and the incorporation of AI into medical products. The FDA also remains committed to working with global regulators to promote international cooperation on standards, guidelines, and best practices for AI use in the medical product industry.

CBER, CDER, CDRH, and OCP plan to promulgate policies that provide comprehensive, clear guidance on the use of AI in developing the categories of products each oversees, with the goal of ensuring regulatory predictability. These policies will address the continued monitoring of emerging issues, including in regulatory submissions, to identify both pitfalls and opportunities, allowing the FDA to adapt to the changing AI landscape and provide clarity regarding the use of AI throughout the medical product life cycle. The agency plans to build on existing initiatives, such as the Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) and the Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, to continue and expand its evaluation and regulation of AI use across the medical products industry. Industry members should expect forthcoming guidance addressing marketing submission recommendations for predetermined change control plans for AI-enabled device software functions, life cycle management considerations and premarket submission recommendations for AI-enabled device software functions, and the use of AI to support regulatory decision-making for drugs and biological products.

The FDA further intends to build on the existing Good Machine Learning Practice for Medical Device Development: Guiding Principles (developed in conjunction with Health Canada and the UK's Medicines and Healthcare products Regulatory Agency in 2021) to help ensure safety and efficacy across AI-enabled medical products. The FDA's current goals include refining and developing ethical and safety considerations related to the use of AI in the medical product life cycle, identifying and promoting best practices that support "long-term safety and real-world performance monitoring" of AI-enabled products, exploring best practices for training and testing data sets for AI models, and developing a robust quality assurance framework for AI-enabled tools.

Finally, the agency emphasized the importance of supporting continued research into AI's impact on medical product safety and effectiveness. To this end, the FDA plans to support demonstration projects that investigate where bias can be introduced into the AI development life cycle and how it can be addressed; projects focused on addressing health inequities associated with the use of AI in medical product development, including ensuring data representativeness; and projects that support ongoing monitoring of AI tools within medical product development to ensure both compliance with relevant standards and continued reliability.

The FDA's comprehensive approach to the use of AI in medical products shows that the agency is committed not only to upholding standards meant to protect patient safety, but also to encouraging the continued development and use of innovative technologies in medical products. Industry members, health care providers, and patients alike can expect a future where medical technology is integrated into patient care in a way that is mindful of the potential for bias and is proactive about addressing safety and efficacy concerns while encouraging continued innovation.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Troutman Pepper
