As recently described in The New England Journal of Medicine, the liability risks associated with using artificial intelligence (AI) in a health care setting are substantial and have caused consternation among sector participants. To illustrate that point:
“Some attorneys counsel health care organizations with dire warnings about liability and dauntingly long lists of legal concerns. Unfortunately, liability concern can lead to overly conservative decisions, including reluctance to try new things.”
“… in most states, plaintiffs alleging that complex products were defectively designed must show that there is a reasonable alternative design that would be safer, but it is difficult to apply that concept to AI. … Plaintiffs can suggest better training data or validation processes but may struggle to prove that these would have changed the patterns enough to eliminate the ‘defect.’”
Accordingly, the article’s key recommendations are (1) to conduct diligence by assessing the risks of each AI tool individually and (2) for buyers to use their current bargaining power to negotiate for tools with lower (or easier-to-manage) risks.
Creating Risk Frameworks
Building on these considerations, we would guide health care providers to implement a comprehensive framework that maps each type of AI tool to its specific risks and identifies how those risks should be managed. Key factors that such a framework could include are outlined in the table below:
As both AI tools and the litigation landscape mature, it will become easier to build a robust risk management process. In the meantime, thinking through these kinds of considerations can help both developers and buyers of AI tools manage novel risks while realizing the benefits these tools offer for improving patient care.