BYU Law Review


The medical device industry and new technology start-ups have dramatically increased investment in artificial intelligence (AI) applications, including diagnostic tools and AI-enabled devices. These technologies have been positioned to reduce rising health care costs while simultaneously improving health outcomes. Technologies like AI-enabled surgical robots, AI-enabled insulin pumps, and cancer-detection applications hold tremendous promise, yet without appropriate oversight they are likely to pose major safety risks. While preventative safety measures may reduce risk to patients using these technologies, effective regulatory-tort regimes also permit recovery when preventative solutions prove insufficient.

The Food and Drug Administration (FDA), the administrative agency responsible for overseeing the safety and efficacy of medical devices, has not effectively addressed AI system safety issues in its clearance processes. If the FDA cannot reasonably reduce the risk of injury from AI-enabled medical devices, injured patients should be able to rely on ex post recovery options, as in products liability cases. However, the Medical Device Amendments (MDA) of 1976 introduced an express preemption clause that the U.S. Supreme Court has interpreted to nearly foreclose liability claims, based almost entirely on the comprehensiveness of FDA clearance review processes. At its inception, MDA preemption aimed to balance consumer interests in safe medical devices with efficient, consistent regulation that promotes innovation and reduces costs.

Although preemption remains an important mechanism for balancing injury risks against device availability, the introduction of AI software dramatically changes the risk profile of medical devices. Because the AI algorithms powering these machines are inherently opaque and changeable, it is nearly impossible to predict every safety hazard a faulty AI system might pose to patients. This Article identifies key preemption issues for AI machines as they affect ex ante and ex post regulatory-tort allocation, including actual FDA review for parallel claims, bifurcation of software and device reviews, and dynamics of the technology itself that may enable plaintiffs to avoid preemption. The Author then recommends an alternative conception of the regulatory-tort allocation for AI machines that would create a more comprehensive and complementary safety and compensatory model.


© 2021 Brigham Young University Law Review