Your pacemaker uses machine learning algorithms to detect irregularities in your breathing and to make related predictions about the function of your heart. Although this allows for more precise treatment of your condition, it may take the privacy and security concerns of your smart watch, a mere wearable, and literally implant them into your heart. Surgeons using smart scalpels; dermatologists using AI-assisted research and data-mining tools for difficult diagnoses; radiologists using deep-learning algorithms to read diagnostic imagery with greater precision than humans can achieve; and precision AI to detect breast cancer, along with applications in cardiology, pathology, and ophthalmology: these are only some examples of the ever-increasing availability and use of wearable and implantable medical AI. Each such use of medical AI offers the potential benefit of greater patient well-being through earlier detection and more effective treatment of disease, but, as with all technology, the benefits come with trade-offs.
Some of these trade-offs come in the form of legal uncertainty. Indeed, increasing use of medical AI raises a number of legal questions. For example, who is liable if your heart is hacked and damage results? Does available insurance adequately cover the risks? Can patients be expected to understand enough about how a device functions to fully comprehend the scope of potential downstream risk? This article offers a brief introduction to these issues and points out areas that require careful attention by legal scholars and practitioners alike.
A (Very) Brief Introduction to AI
Many misunderstand AI at least in part because of the lack of a generally agreed-upon definition. Speaking in the most general terms, experts explain AI as “a set of techniques aimed at approximating some aspect of human or animal cognition using machines.” Although many view AI as a broad term referring to a large set of information sciences, each with its own growing domain of research and application, advances in computer processing speed and the growth of big data have promoted increased interest in a subdiscipline of AI generally referred to as machine learning. Interest in machine learning is so widespread that popular discussion often uses the term “AI” to refer to one or more types of machine learning. Because machine learning is typically used to make predictions, it often makes up some element of medical AI technologies. As a result, the core issues that exist at the intersection of law and AI also apply in the medical AI context. And because medical AI deals in large amounts of health data, it raises novel issues at the intersection of privacy law, cybersecurity obligations, and consumer protection as well.
Legal Issues in Medical AI: Automated Insulin Pumps
To explore the legal issues raised by medical AI, consider a specific use case. Medical professionals increasingly use AI to help treat chronic illnesses such as type 1 diabetes. Type 1 diabetes is an autoimmune disease that most often strikes in childhood, and medical professionals treat it with insulin, administered either through daily injections or through an insulin pump. Insulin pumps continually infuse insulin through a small catheter placed under the skin, which is changed out every two to three days. The difficulty in treating type 1 diabetes lies in regulating blood sugar through this insulin infusion. External factors such as food intake, water intake, exercise, and temperature, along with internal factors such as cortisol output, thyroid function, and other illnesses, can cause blood glucose readings to fluctuate wildly throughout any given day. These fluctuations reach their extremes in growing children and in patients in the midst of puberty because of the natural hormone changes that occur during that time. To better control these blood sugar fluctuations, insulin pump manufacturers like Medtronic have begun to employ algorithmic and AI technology in their latest generation of insulin pumps.
Medtronic’s 670G insulin pump uses data from a corresponding continuous glucose monitor (CGM) worn by the patient to continually adjust insulin infusion. The data stream supplied by the CGM allows the machine learning algorithm embedded in the pump to automatically deliver more or less insulin as the patient’s blood glucose trend rises or falls. This technology represents a significant step forward in the treatment of type 1 diabetes, and many view it as the next step toward an “artificial pancreas,” an external device that would regulate blood sugar autonomously, without numerous interventions from the patient.
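The trend-following adjustment described above can be sketched in a few lines of code. The sketch below is purely illustrative: the thresholds, rates, and control rule are assumptions chosen for clarity, not Medtronic’s proprietary algorithm.

```python
# Hypothetical sketch of closed-loop basal-rate adjustment driven by
# CGM readings (mg/dL). All constants are illustrative assumptions.

TARGET_MG_DL = 120          # illustrative glucose target
SUSPEND_THRESHOLD = 70      # suspend insulin delivery below this reading

def adjust_basal_rate(cgm_readings, base_rate_units_per_hr=1.0):
    """Return an adjusted basal insulin rate from recent CGM readings.

    Uses the latest reading and its short-term trend: glucose above
    target increases delivery, low glucose suspends it.
    """
    latest = cgm_readings[-1]
    trend = latest - cgm_readings[-2] if len(cgm_readings) > 1 else 0

    if latest < SUSPEND_THRESHOLD:
        return 0.0  # low-glucose suspend: stop delivery entirely

    # Scale delivery proportionally to deviation from target,
    # nudged up or down by the direction of the trend.
    deviation = (latest - TARGET_MG_DL) / TARGET_MG_DL
    trend_factor = 0.1 if trend > 0 else -0.1 if trend < 0 else 0.0
    rate = base_rate_units_per_hr * (1 + deviation + trend_factor)
    return max(0.0, round(rate, 2))
```

Even this toy version shows why the data stream matters: a hacker who can manipulate the CGM readings fed into such a loop can directly manipulate insulin delivery.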
Although this new insulin-regulating technology represents a significant step forward for patients and doctors, it highlights some key issues in the use of medical AI more broadly. The 670G uses a “human-in-the-loop” type of AI, which relies on machine learning but defers to humans for essential decisions. Although this type of system can limit liability for the pump’s creator, it imposes a higher burden on patients, who must interact with the pump repeatedly throughout the day and night. Part of the difficulty in using a human-in-the-loop machine learning algorithm to treat chronic medical conditions relates to the “long tail problem”: a system may never get “smart” enough to be truly autonomous in some contexts because of the sheer number of variables that cannot be anticipated. Wearable technology such as the 670G hybrid closed-loop insulin pump involves a vast number of variables, internal and external to the body, that greatly affect blood glucose values and limit the level of autonomy achievable in this treatment context.
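The human-in-the-loop pattern described above can be made concrete with a brief sketch: the algorithm proposes an action, but nothing is delivered until a human approves it. The function names, correction factor, and target are hypothetical placeholders, not drawn from any actual device.

```python
# Hypothetical sketch of the human-in-the-loop pattern: the algorithm
# proposes a correction dose, and a human must confirm before delivery.
# All names and numbers here are illustrative assumptions.

def propose_correction_bolus(glucose_mg_dl, correction_factor=50,
                             target_mg_dl=120):
    """Suggest a correction bolus in insulin units, or None if not needed."""
    if glucose_mg_dl <= target_mg_dl:
        return None
    return round((glucose_mg_dl - target_mg_dl) / correction_factor, 1)

def deliver_if_confirmed(proposal, confirm):
    """Deliver only when the human in the loop approves.

    `confirm` is a callable standing in for the patient's button press;
    no insulin is delivered without an affirmative response.
    """
    if proposal is None:
        return 0.0
    return proposal if confirm(proposal) else 0.0
```

The design choice is visible in the structure: every proposal routes through a confirmation step, which limits the system’s autonomy but also shifts repeated decision-making, day and night, onto the patient.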
Another set of issues raised by medical AI involves cybersecurity and data privacy. In the case of insulin pumps, many users are concerned about the capture of their data and personal medical information by both insulin pump manufacturers and hackers. This concern is especially important given the rise of CGMs, which connect to a patient’s phone and computer automatically. Although this connection can help patients examine their blood glucose trends, it also exposes sensitive medical data to hackers, who could manipulate readings and cause significant harm to the patient. As the use of CGMs continues to rise, not only among type 1 diabetics but also among type 2 diabetics, cybersecurity will only become a greater concern. Notably, CGMs and the 670G pump exemplify broader industry trends in which wearable medical technologies use similar product approaches and thus raise similar concerns.
In some medical contexts, AI has already proven effective in helping patients and doctors. For example, the technology unquestionably improves the diagnosis of certain diseases because diagnostic knowledge from imaging can be gathered from a set of experts and fed into the computational device for evaluation. However, as the 670G insulin pump illustrates, using medical AI for the ongoing treatment of chronic conditions poses difficulties. Those difficulties, including the heightened burden on patients using products that rely on a human-in-the-loop system, cybersecurity, and data privacy, are issues that attorneys guiding companies in this space should keep in mind, both to conduct adequate risk assessments and to serve patients well. If the future of medical AI is to extend beyond the diagnosis of narrow conditions, the law, and the lawyers guiding clients through it as they build products, should address these issues and seek workable solutions. Ultimately, medical AI is an area to watch: patients need the ability to make informed decisions about the trade-offs between potentially improved medical care and risks to privacy, security, and available remedies if something goes wrong with a device.
 Nat’l Health Serv., Smart Knife Can Tell Cancer Cells from Healthy Tissue (July 18, 2013).
 Andre Esteva et al., Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks, 542 Nature 115, 115–18 (2017).
 J.G. Lee et al., Deep Learning in Medical Imaging: General Overview, 18 Korean J. Radiol. 570 (2017).
 Adam Conner-Simons & Rachel Gordon, Using AI to Predict Breast Cancer and Personalize Care, MIT News (May 7, 2019).
 Changhyun Pang, Chanseok Lee & Kahp-Yang Suh, Recent Advances in Flexible Sensors for Wearable and Implantable Devices, 130 J. App. Polym. Sci. 1429 (2013).
 Medtronic, supra note 1.
 Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 U.C. Davis L. Rev. 399, 403 (2017); Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J.L. & Tech. 353, 359 (2016) (“Unfortunately, there does not yet appear to be any widely accepted definition of artificial intelligence even among experts in the field, much less a useful working definition for the purposes of regulation.”).
 Calo, supra note 11, at 403.
 M. Tim Jones, Artificial Intelligence: A Systems Approach 5 (2007).
 Calo, supra note 11, at 403; see also Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem, 93 Wash. L. Rev. 579, 590 (2018) (“Most AI systems are trained using vast amounts of data and over time hone the ability to suss out patterns that can help humans identify anomalies or make predictions. Most AI needs lots of data exposure to automatically perform a task.”).
 Levendowski, supra note 14, at 590 (“When journalists, researchers, and even engineers say ‘AI,’ they tend to be talking about machine learning, a field that blends mathematics, statistics, and computer science to create computer programs with the ability to improve through experience automatically.”). There are several types of machine learning, the details of which are beyond the scope of this short article. For more information, see Stuart J. Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 650 (2d ed. 2009).
 A. Michael Froomkin, Ian Kerr & Joelle Pineau, When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning, 61 Ariz. L. Rev. 33, 39–48 (2019).
 See generally Harry Surden, Artificial Intelligence and Law: An Overview, 35 Ga. St. U. L. Rev. 1305 (2019) (describing machine learning and expert systems as the two preeminent forms of AI in use today and offering an overview of the current associated legal issues).
 Id. at 1320.
 David C Klonoff, Cybersecurity for Connected Diabetes Devices, J. Diabetes Sci. & Tech. (2015); W. Nicholson Price II, Artificial Intelligence in Health Care: Applications and Legal Issues, 14 SciTech Law. 10 (2017).
 Klonoff, supra note 31.
 Surden, supra note 26, at 1325.