Legal Issues Raised by Medical AI: An Introductory Exploration

9 Min Read By: Carla L. Reyes, Victoria R. Nelson

IN BRIEF

  • This article provides foundational details regarding the technical aspects of AI and examines some of the core legal issues in the context of a specific medical AI device: automated insulin pumps.
  • By exploring the issues through a detailed use case, both the creativity and the care required in this nuanced and emerging practice area become apparent.
  • The article concludes by offering key lessons at the intersection of law and medical AI.

Introduction

Your pacemaker uses machine learning algorithms to detect irregularities in your breathing and make related predictions about the function of your heart.[1] Although this allows for more precise treatment of your condition, it may take the privacy and security concerns from your smart watch, a mere wearable, and literally implant them into your heart.[2] Surgeons use smart scalpels;[3] dermatologists use AI-assisted research and data-mining tools to assist with difficult diagnoses;[4] radiologists use deep-learning algorithms to read diagnostic imagery with greater precision than humans can;[5] and precision AI detects breast cancer, with further applications in cardiology, pathology, and ophthalmology.[6] These are only some examples of the ever-increasing availability and use of wearable and implantable medical AI.[7] Each such use of medical AI offers the potential benefit of greater patient well-being through earlier detection and more effective treatment of disease, but as with all technology, the benefits come with trade-offs.

Some of these trade-offs come in the form of legal uncertainty. Indeed, increasing use of medical AI raises a number of legal questions. For example, who is liable if your heart is hacked and damage results?[8] Does available insurance adequately cover the risks?[9] Can patients be expected to understand enough about how a device functions to fully comprehend the scope of potential downstream risk?[10] This article offers a brief introduction to these issues and points out areas that require careful attention by legal scholars and practitioners alike.

A (Very) Brief Introduction to AI

Many misunderstand AI, at least in part because of the lack of a generally agreed-upon definition.[11] Speaking in the most general terms, experts describe AI as “a set of techniques aimed at approximating some aspect of human or animal cognition using machines.”[12] Although many view AI as a broad term referring to a large set of information sciences, each with its own growing domain of research and application,[13] advances in computer processing speed and the growth of big data have prompted increased interest in a subdiscipline of AI generally referred to as machine learning.[14] Interest in machine learning is so widespread that popular discussion often uses the term “AI” to refer to one or more types of machine learning.[15] Because machine learning is typically used to make predictions, it often forms some element of medical AI technologies.[16] As a result, the core issues that exist at the intersection of law and AI also apply in the medical AI context.[17] Complicating those already complex issues, medical AI, because it deals in large amounts of health data, also raises novel issues at the intersection of privacy law, cybersecurity obligations, and consumer protection.
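
To make the machine-learning idea concrete, the following minimal sketch shows a model inferring a pattern from labeled examples and then predicting a label for an unseen input. It is purely illustrative: the library choice (scikit-learn), the features, and the data are invented for this example and drawn from no medical device.

```python
# Minimal illustration of supervised machine learning: the program is not
# given explicit rules; it infers a decision boundary from labeled examples
# and uses that learned pattern to make predictions on new data.
# The features and labels below are invented for illustration only.
from sklearn.linear_model import LogisticRegression

X_train = [[0.2, 0.1], [0.4, 0.3], [0.8, 0.9], [0.9, 0.7]]  # example inputs
y_train = [0, 0, 1, 1]                                      # example labels

model = LogisticRegression()
model.fit(X_train, y_train)          # "learn" a pattern from the examples

print(model.predict([[0.85, 0.8]]))  # predict the label of an unseen input
```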

Legal Issues in Medical AI: Automated Insulin Pumps

To explore the legal issues raised by medical AI, consider a specific use case. Medical professionals increasingly use AI to help treat chronic illnesses such as type 1 diabetes. Type 1 diabetes is an autoimmune disease that usually strikes children around the age of 12, and medical professionals treat it with insulin. Insulin can be administered through daily injections or through an insulin pump. Insulin pumps continually infuse insulin through a small catheter placed under the skin, which is changed out every two to three days.[18] The difficulty in treating type 1 diabetes lies in regulating blood sugar through this insulin infusion. Almost any external factor, such as food intake, water intake, exercise, or temperature, as well as internal factors such as cortisol output, thyroid function, and other illnesses, can cause blood glucose readings to fluctuate wildly throughout any given day.[19] These fluctuations reach their extremes in growing children and in patients in the midst of puberty because of the natural hormone swings that occur during that time.[20] To better control these blood sugar fluctuations, insulin pump manufacturers like Medtronic have begun to employ algorithmic and AI technology in their latest generation of insulin pumps.[21]

Medtronic’s 670G insulin pump uses data from a corresponding continuous glucose monitor (CGM) worn by the patient to continually adjust insulin infusion.[22] The data flow supplied by the CGM allows the machine learning algorithm embedded in the insulin pump to automatically deliver less or more insulin as the patient’s blood glucose trend rises or falls.[23] This technology represents a significant step forward in the treatment of type 1 diabetes, and many view it as the next step forward for researchers working to create an “artificial pancreas,” an external device that would regulate blood sugars autonomously, without numerous interventions from the patient.[24]
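
At a very high level, the feedback loop just described can be sketched in a few lines of code. The sketch below is a deliberately oversimplified, hypothetical illustration of a CGM-driven controller; the target value, coefficients, and function names are invented and bear no relation to Medtronic’s proprietary algorithm.

```python
# Hypothetical, highly simplified sketch of a closed-loop insulin controller.
# This is NOT the Medtronic 670G algorithm; every number and rule here is
# invented purely to illustrate the CGM-to-pump feedback loop in the text.

TARGET_MG_DL = 120  # illustrative glucose target, in mg/dL

def adjust_basal_rate(cgm_reading: float, trend: float, base_rate: float) -> float:
    """Return an adjusted insulin infusion rate (units/hour).

    cgm_reading: latest glucose value from the CGM, in mg/dL.
    trend: recent rate of change, in mg/dL per minute (positive = rising).
    base_rate: the patient's programmed baseline basal rate.
    """
    error = cgm_reading - TARGET_MG_DL
    # Proportional response: infuse more insulin when glucose is above
    # target and rising, less when it is below target or falling.
    adjustment = 0.01 * error + 0.5 * trend
    return max(0.0, base_rate + adjustment)  # never a negative infusion rate

# Example: glucose at 180 mg/dL, rising 1 mg/dL per minute, baseline 1.0 u/hr.
print(adjust_basal_rate(180, 1.0, 1.0))  # -> a rate above the baseline
```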

Although this new insulin-regulating technology represents a significant step forward for patients and doctors, it highlights some of the key issues in the use of medical AI more broadly. The 670G pump uses a “human in the loop” type of AI,[25] which relies on machine learning but defers to humans for essential decisions.[26] Although this type of system can limit liability for the pump’s creator, it can impose a higher burden on patients, who must interact with the pump repeatedly throughout the day and night.[27] Part of the difficulty in using a human-in-the-loop machine learning algorithm to treat chronic medical conditions relates to the “long tail problem”:[28] a system may never get “smart” enough to become truly autonomous in some contexts because of the large number of variables that cannot be anticipated.[29] Wearable technology such as the 670G hybrid closed-loop insulin pump involves a vast number of variables, internal and external to the body, that greatly affect blood glucose values and limit the level of autonomy achievable in this treatment context.[30] The human-in-the-loop pattern can be pictured as a simple decision gate, as in the sketch below.
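
In the sketch, routine adjustments run autonomously while consequential actions wait for patient approval. The action names and approval rule are hypothetical, offered only to illustrate why such a design repeatedly interrupts the patient.

```python
# Hypothetical sketch of the human-in-the-loop pattern: routine adjustments
# are automated, but consequential actions require patient confirmation.
# The action names and the confirmation rule are invented for illustration.

AUTONOMOUS_ACTIONS = {"micro_adjust_basal", "log_reading"}
HUMAN_REQUIRED_ACTIONS = {"deliver_meal_bolus", "deliver_correction_bolus"}

def execute(action: str, confirm_with_patient) -> bool:
    """Run routine actions automatically; defer essential ones to the human.

    confirm_with_patient: a callable that prompts the patient and returns
    True only if the patient approves (e.g., entering carbs and accepting).
    """
    if action in AUTONOMOUS_ACTIONS:
        return True                          # the machine acts on its own
    if action in HUMAN_REQUIRED_ACTIONS:
        return confirm_with_patient(action)  # the human must approve
    raise ValueError(f"Unknown action: {action}")

# Routine adjustment proceeds without a prompt; every bolus interrupts the
# patient, which is exactly the day-and-night burden described above.
print(execute("micro_adjust_basal", lambda a: False))  # True, no prompt
print(execute("deliver_meal_bolus", lambda a: True))   # prompts the patient
```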

Another set of issues raised by medical AI involves cybersecurity and data privacy.[31] In the case of insulin pumps, many users are concerned about the capture of their data and personal medical information by both insulin pump manufacturers and hackers.[32] This concern is especially important given the rise of CGMs, which connect to a patient’s phone and computer automatically.[33] Although this connection can help patients examine their blood glucose trends, it also makes sensitive medical data available to hackers, who could manipulate readings and cause significant harm to the patient.[34] As the use of CGMs continues to rise, not only among type 1 diabetics but also among type 2 diabetics, cybersecurity will only become a greater concern.[35] Notably, CGMs and the 670G pump are examples of broader industry trends in which wearable medical technologies use similar product approaches, triggering similar concerns.
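
One generic defense against the reading-manipulation risk described above is to authenticate each CGM transmission so that altered values are rejected. The following sketch uses a keyed hash (HMAC) from the Python standard library; it illustrates the general technique and is not a description of how any actual CGM vendor secures its devices.

```python
# Illustrative defense against tampered CGM readings: authenticate each
# transmission with an HMAC so altered values are rejected. This is a
# generic sketch, not how any particular CGM product actually works.
import hmac
import hashlib

SHARED_KEY = b"device-specific-secret"  # provisioned at pairing (hypothetical)

def sign_reading(glucose_mg_dl: int, timestamp: int) -> bytes:
    """Compute an authentication tag over a reading and its timestamp."""
    message = f"{glucose_mg_dl}|{timestamp}".encode()
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify_reading(glucose_mg_dl: int, timestamp: int, tag: bytes) -> bool:
    """Accept a reading only if its tag matches; reject tampered values."""
    expected = sign_reading(glucose_mg_dl, timestamp)
    return hmac.compare_digest(expected, tag)  # constant-time comparison

# A legitimate reading verifies; a manipulated value does not.
tag = sign_reading(142, 1700000000)
print(verify_reading(142, 1700000000, tag))  # True
print(verify_reading(400, 1700000000, tag))  # False: tampered value rejected
```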

Conclusion

In some medical contexts, AI has already proven itself effective in helping patients and doctors.[36] For example, the technology unquestionably improves the diagnosis of diseases in certain contexts because diagnostic information from imaging can be gathered from a set of experts and input for evaluation by the computational device.[37] However, as the example of the 670G insulin pump shows, the use of medical AI for ongoing treatment of chronic conditions poses some difficulties. Those difficulties, including the heightened burden on patients using products that rely on a human-in-the-loop system, cybersecurity, and data privacy, are issues that attorneys guiding companies in this space should keep in mind, both to conduct adequate risk assessments and to serve patients well. If the future of medical AI is to extend beyond the diagnosis of narrow conditions,[38] the law, and the lawyers guiding clients through it as they build products, should keep these issues in mind and seek workable solutions. Ultimately, medical AI is an area to watch: patients need the ability to make informed decisions about the trade-offs between potentially improved medical care and risks to privacy, security, and available remedies if something goes wrong with the device.


[1] Medtronic, PR Logic Algorithms: Cardiac Device Features.

[2] Neta Alexander, My Pacemaker Is Tracking Me From Inside My Body, The Atlantic (Jan. 27, 2018).

[3] Nat’l Health Service, Smart knife can tell cancer cells from healthy tissue (July 18, 2013).

[4] Andre Esteva et al., Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks, 542 Nature 115, 115–18 (2017).

[5] J.G. Lee et al., Deep Learning in Medical Imaging: General Overview, 18 Korean J. Radiol. 570 (2017).

[6] Adam Conner-Simons & Rachel Gordon, Using AI to Predict Breast Cancer and Personalize Care, MIT News (May 7, 2019).

[7] Changhyun Pang, Chanseok Lee & Kahp-Yang Suh, Recent Advances in Flexible Sensors for Wearable and Implantable Devices, 130 J. App. Polym. Sci. 1429 (2013).

[8] Medtronic, supra note 1.

[9] Id.

[10] Id.

[11] Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 U.C. Davis L. Rev. 399, 403 (2017); Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J.L. & Tech. 353, 359 (2016) (“Unfortunately, there does not yet appear to be any widely accepted definition of artificial intelligence even among experts in the field, much less a useful working definition for the purposes of regulation.”).

[12] Calo, supra note 11, at 403.

[13] M. Tim Jones, Artificial Intelligence: A Systems Approach 5 (2007).

[14] Calo, supra note 11, at 403; see also Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem, 93 Wash. L. Rev. 579, 590 (2018) (“Most AI systems are trained using vast amounts of data and over time hone the ability to suss out patterns that can help humans identify anomalies or make predictions. Most AI needs lots of data exposure to automatically perform a task.”).

[15] Levendowski, supra note 14, at 590 (“When journalists, researchers, and even engineers say ‘AI,’ they tend to be talking about machine learning, a field that blends mathematics, statistics, and computer science to create computer programs with the ability to improve through experience automatically.”). There are several types of machine learning, the details of which are beyond the scope of this short article. For more information, see Stuart J. Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 650 (2d ed. 2009).

[16] A. Michael Froomkin, Ian Kerr & Joelle Pineau, When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning, 61 Ariz. L. Rev. 33, 39–48 (2019).

[17] See generally Harry Surden, Artificial Intelligence and Law: An Overview, 35 Ga. St. U. L. Rev. 1305 (2019) (describing machine learning and expert systems as the two preeminent forms of AI in use today and offering an overview of the current associated legal issues).

[18] See Mayo Clinic, Type 1 diabetes.

[19] Id.

[20] Id.

[21] Id.

[22] See Medtronic, MiniMed 670G Insulin Pump System.

[23] Id.

[24] Id.

[25] Surden, supra note 17, at 1320.

[26] Id.

[27] Id.

[28] Id.

[29] Id.

[30] Id.

[31] David C. Klonoff, Cybersecurity for Connected Diabetes Devices, J. Diabetes Sci. & Tech. (2015); W. Nicholson Price II, Artificial Intelligence in Health Care: Applications and Legal Issues, 14 SciTech Law. 10 (2017).

[32] Klonoff, supra note 31.

[33] Id.

[34] Id.

[35] Id.

[36] Surden, supra note 17, at 1325.

[37] Id.

[38] Id.
