How Can Companies Tackle Europe’s AI and Data Protection Rules?

Life sciences companies will have to grapple with unique questions in complying with the European Artificial Intelligence Act, including the scope of the law’s research exemption and the use of AI in personalized medicine and real-world evidence.

European Union lawmakers signed the AI Act, the world’s first binding regulation of AI, in June 2024. It was published in the Official Journal of the European Union on July 12, 2024, and entered into force across all 27 EU member states on August 1. The law defines AI systems and classifies them into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. Systems posing unacceptable risk are prohibited and must be phased out within six months of entry into force, while provisions pertaining to the other categories will apply 24 to 36 months after enactment.

The AI Act intersects with the EU’s General Data Protection Regulation (GDPR), which took effect in May 2018. That law requires businesses, whether based in the EU or the US, to conduct a Data Protection Impact Assessment (DPIA) when the processing of personal data is likely to result in a high risk to the rights and freedoms of individuals.

Thibaut D’hulst, counsel at Van Bael & Bellis, said many of the principles underlying the AI Act are related to the GDPR, and life sciences companies can leverage their experience with the GDPR to achieve compliance with the AI Act.

“Applying the AI Act and the GDPR together, I imagine, is scaring US companies. So, we’re considering how to apply those two in parallel for the life sciences sector,” D’hulst said.

D’hulst will be speaking on a panel at the American Conference Institute’s Life Sciences AI Summit - Europe, to be held March 25-26, 2025, in Brussels. In an interview, he discussed how businesses operating in the EU and US can meet the EU’s evolving regulatory requirements.

Complying With the AI Act

D’hulst said that to comply with the AI Act, organizations should first map out which AI systems they have and then classify them according to the law’s four risk categories. From there, they can assess their obligations under the law. One provision requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff and other persons using the systems on their behalf, so those people can make informed decisions about the systems. This AI literacy obligation applies from February 2, 2025, so organizations should adopt literacy measures by then.

The AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The Act applies primarily to providers and deployers putting AI systems and general-purpose AI (GPAI) models into service or placing them on the EU market. It applies to those located in the EU, as well as to those in a third country when the output produced by their systems is used in the EU.

The law says prohibited AI systems include those using subliminal or deceptive techniques to distort people’s behavior and impair informed decision-making. They also include systems classifying individuals or groups based on social behavior or personal characteristics, among other things.

High-risk AI systems are those that could adversely affect people’s health, safety or fundamental rights. They include medical devices covered by certain EU harmonization legislation. Providers of high-risk AI systems must complete a conformity assessment procedure before their products can be sold and used in the EU, and must comply with requirements covering testing, training data, and cybersecurity. In some cases, deployers of such systems will have to conduct a fundamental rights impact assessment to ensure the systems comply with EU law.

AI systems intended to interact with people or to generate content, such as chatbots, deepfakes and other AI-generated material, are categorized as limited risk because they pose risks of impersonation, manipulation or deception. Systems deemed minimal risk include spam filters and recommender systems.

D’hulst noted that companies in the life sciences sector already have a lot of experience dealing with ethical principles, such as good clinical practices and the rights of patients or data subjects, which can be used to comply with the AI Act.

Research Exemption and Real-World Evidence

The AI Act specifies that the regulation “does not apply to AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research.” While this exemption eases the burden on life sciences businesses, there is uncertainty about its scope.

D’hulst said clinical trials and drug discovery probably fall under the research exemption, while research for commercial purposes may be subject to the law. It is unclear to what extent AI systems used to assist with research tasks will be considered research tools. And there are questions about the use of AI in personalized medicine. For example, D’hulst noted that radiologists are using AI to detect patterns in imaging data. Since these analyses are not for research purposes, organizations that use these tools will need to comply with the AI Act, he said.

There is also uncertainty about the use of AI systems for real-world evidence. This data is not collected specifically for research purposes but is used for research as a secondary purpose. “In many cases it will probably qualify as research, but there will also be a lot of commercial use of real-world evidence where you will have to take the AI Act into account,” D’hulst stated.

Risk Assessment, Explainability and Accountability

Dipak Kalra, president of The European Institute for Innovation through Health Data (i-HD), noted that other aspects of regulatory compliance fall into a gray zone and need to be explored further. For example, the law requires developers of high-risk AI systems to perform a risk assessment. Kalra, who will also be speaking at the summit, said some developers may not have expertise on what a good risk assessment looks like or how to perform it.

While a DPIA assesses the protection of private information about individuals, the risk assessment for AI extends to the possible harms an AI solution might cause, Kalra said. “You might develop an AI algorithm having perfectly used data but not correctly governed its algorithmic learning process, and therefore it starts to introduce inappropriate advice,” he said.

The regulation also requires developers to explain their AI systems, so users know how a decision using AI was made. Kalra said how a developer explains its system to a regulatory authority approving its product differs from the explanation it would give to a healthcare organization, clinician or patient. “How can we equip developers to have explainable solutions to different stakeholder groups?” he asked.

Businesses will also have to adopt good practices and measures to ensure accountability. For example, Kalra said, when plugging an AI solution into a local healthcare system, how do you make sure the solution connects to patients’ electronic health records, collects the right data from them, and delivers a copy of its advice into a storage environment so there is a record of it?

“It wouldn’t be acceptable in clinical practice to have a specialist look at your patient” and not have a record of their recommendation, Kalra said. “AI is like a specialist. It needs to have a traceable record of what advice it’s given, and that needs to be stored in the patient’s record so if anything goes wrong, there’s evidence of why the wrong course of action was taken.”

Data Protection Impact Assessments

Businesses are still figuring out the complexities of the GDPR, which imposes obligations on organizations that target or collect data related to people in the EU. The law requires organizations to produce a DPIA for each data processing activity that is likely to result in a high risk to individuals’ rights and freedoms. The processing of health data qualifies as high risk, D’hulst said, so most life sciences companies should be familiar with the DPIA process.

A DPIA assesses the impact an organization’s technology will have on the right to the protection of personal data and, to some extent, on other fundamental rights of data subjects, he said. D’hulst noted, however, that DPIAs are often treated as a box-ticking compliance exercise.

“We see very often that organizations are just patting themselves on the back saying they’ve addressed all the risks,” he said. “But I think it should be an opportunity to answer the question: ‘What could possibly go wrong?’” He said DPIAs give companies the chance to identify risks beyond the standard elements in every clinical trial, such as the use of AI tools to improve the quality of data.

“For me, it’s a matter of the more you know, the better you’re going to be prepared, and that means less things can actually go wrong, so you’re protecting yourself against fines but also breaches of trust,” he stated.

He recommended that a DPIA be carried out throughout the development of a process rather than after it is completed, so that companies do not have to go back later and add a mechanism to rectify an issue.

Assessments can also help companies improve their products and their communications with patients or users. For example, D’hulst said, a business could find that certain data flows create a higher risk of breach and then limit those flows. Or a company could include a feature in an app or platform to ask data subjects about their experience and whether they feel in control of their data. In doing an assessment, “we consider security, as well as transparency and data accuracy,” he said.

C5 will be holding its “European AI Life Science Summit” on March 25-26, 2025, in Brussels. For more information, and to register, please visit: https://www.c5-online.com/eu-life-sciences-ai/

Written by:

American Conference Institute (ACI)