Regulatory Landscape for AI-enabled MedTech in APAC

Ropes & Gray LLP

Regulation of artificial intelligence (AI) in Asia Pacific remains nascent; for now, AI is governed mostly by existing regulatory frameworks designed for other technologies and products. It is a work in progress.

AI techniques – machine learning (ML), deep learning, and natural language processing – are transformative and ever more widely used. However, they also present challenges and concerns, including bias and discrimination, fake content and misinformation, privacy and security risks, ethical issues, and unintended consequences.

So, how are regulators responding? What follows is a snapshot of the progress being made in Asia Pacific, Europe and North America, drawing on an APAC Med webinar given on 25 March and a roundtable organized by Vivli in Tokyo on 16 April.

Asia Pacific

Across Asia Pacific, there is currently no overarching law in effect to govern AI, but this will change in the coming months.

  • China: The National People’s Congress has urged the State Council to draft an overarching statute. The current regulatory system consists mainly of administrative regulations and standards. 
  • Japan: AI is regulated at the individual sector level. In the healthcare and life sciences industries, existing laws regulate AI/ML-enabled tools, including the 2023 Next-Generation Medical Infrastructure Act, which facilitates the use of AI in the research and development of AI-enabled medical diagnostic tools.
  • Australia: The government’s intention is to adopt a principles-based or list-based approach to define “high-risk AI”, similar to the EU (see below).
  • Singapore: Frameworks are in place to guide AI deployment and promote the responsible use of AI, including key ethical and governance principles, and standardized tests to validate adoption of those principles. Its National Artificial Intelligence Strategy 2.0 maps out Singapore’s commitment to building a trusted and responsible AI ecosystem, while its Artificial Intelligence in Healthcare Guidelines aim to improve understanding, codify good practice and support the safe growth of AI in healthcare.
  • South Korea: The Digital Medical Products Act (January 2025) provides the basis for a regulatory framework governing digital medical devices. In addition, the Basic AI Act (December 2024) is set to take effect on 22 January 2026 and will apply to any AI activities that affect the local market, whether they originate domestically or overseas. It classifies AI based on risk.

Europe

The European Union’s AI Act (2024) lays down harmonized rules on AI and is the first overarching AI legislation. It applies to all AI systems placed on the market or put into service in the EU, categorizing them into four levels of risk, from unacceptable to minimal. Medical devices that incorporate AI/ML-enabled functions will likely be classified as high-risk.

The AI Act sets out specific obligations for high-risk AI systems, many of which overlap with existing procedures required under the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR). These center on risk assessment and mitigation, compliance, record-keeping, oversight, cybersecurity and other good practices.

The U.S.

The U.S. at present has no AI-specific law, and the Food and Drug Administration (FDA) has no distinct, established regulatory framework governing AI-enabled medical products. This means that if an AI/ML-enabled product meets the definition of a “medical device”, the FDA regulates it as “Software as a Medical Device” (SaMD) under its traditional regulatory framework for medical devices.

Because the FDA’s traditional regulatory structure was not designed for adaptive AI/ML-enabled technology, the agency has adopted a flexible approach intended to facilitate innovation while ensuring safety and effectiveness. Key considerations include applying good machine learning practices (GMLPs) during development, documenting changes, ensuring transparency, and monitoring real-world performance data to understand how a product is used and to identify issues at an early stage.

The FDA is coordinating with Health Canada and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) to develop internationally harmonized GMLPs that draw on an agreed set of guiding principles. This commitment was reaffirmed in March 2025 and highlights the shared vision of these regulators to adapt processes and standards that support innovation while ensuring patient safety in this field.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Ropes & Gray LLP
