Artificial Intelligence is becoming an increasingly large part of the healthcare sector. Along with the advances it brings, it also raises a variety of new and different legal concerns. Attorneys need a basic understanding of Artificial Intelligence and how its use impacts various legal concepts in order to counsel clients. To that end, the Health IT Task Force is pleased to provide this FAQ, offering a quick, bite-size introduction to the subject. The Health IT Task Force gratefully acknowledges Rebecca Henderson, a Solicitor with MacRoberts LLP in Scotland, for her invaluable assistance in creating this FAQ.
1. What is Artificial Intelligence?
Short answer–a tool!
Broadly, artificial intelligence (“AI”) refers to technology or systems that perform tasks and analyze facts and/or situations independently of a human. AI comes in a few slightly different “flavors,” including Machine Learning and Deep Learning:
Machine Learning is when the system uses algorithms to review data in an iterative process until it “learns” how to make a determination or prediction; this can relieve humans of tedious tasks.
Deep Learning is a type of Machine Learning wherein the system is similar to human neural networks. It is fed an enormous amount of data (often labelled images) until it “learns” by example, discovering patterns in the data; driverless cars are the most obvious example of this—the system must learn the difference between a stop sign and a pedestrian and then apply that knowledge—but there are plenty of healthcare applications, too.
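For the technically curious, the short sketch below illustrates the “learning from labelled data” idea described above: a simple classifier is trained on synthetic examples and then scored on examples it has never seen. It is purely illustrative, assumes the open-source scikit-learn library, and uses made-up data rather than anything drawn from a real clinical system.

# A minimal sketch of "machine learning": an algorithm reviews labelled
# examples iteratively until it can make predictions about new data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "patient records": 1,000 rows of numeric features with a binary
# label (e.g., "needs follow-up" vs. "routine"). Nothing here is real data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                      # the iterative "learning" step
print("accuracy on unseen examples:", model.score(X_test, y_test))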
2. How is it used in healthcare?
Healthcare professionals envision that AI technology will streamline services, help the healthcare system react more quickly so that the right people are seen at the right time in accordance with their medical needs, and optimize medical claims processing and other administrative and workflow tasks.
AI is used in healthcare to complement human decision making, not to replace it. Current examples of AI in action can be found in robotic-assisted surgery, medical claims analysis, virtual observations, chronic care management, and just about any area where an abundance of high-quality data can improve patient outcomes and industry efficiencies.
For instance, rather than a pathologist viewing thousands of images to find the few that may be problematic, the system does the initial cut, freeing the pathologist to focus on the problematic slides.
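A hedged sketch of that triage idea appears below. The score_slide function is a hypothetical stand-in for a trained image model (here it simply returns a random number), and the 0.2 threshold is arbitrary; the point is only to show how a score-then-route step frees the pathologist to focus on the higher-risk slides.

import random
from typing import List, Tuple

def score_slide(slide_id: str) -> float:
    # Placeholder for a trained image model's "suspicion" score for a slide.
    return random.random()

def triage(slide_ids: List[str], threshold: float = 0.2) -> Tuple[List[str], List[str]]:
    # Route high-scoring slides to the pathologist; defer the rest.
    for_review, routine = [], []
    for slide_id in slide_ids:
        if score_slide(slide_id) >= threshold:
            for_review.append(slide_id)    # pathologist reviews these first
        else:
            routine.append(slide_id)       # low-suspicion slides wait or are spot-checked
    return for_review, routine

flagged, deferred = triage([f"slide_{i}" for i in range(1000)])
print(len(flagged), "of 1000 slides routed to the pathologist")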
Exciting new AI-based applications have recently been introduced to screen for diabetic retinopathy as well. Another example is a chronic condition management program using Machine Learning: a recently implemented system identified diabetes patients who could benefit from additional monitoring, then analyzed the data collected from monitoring kits provided to those patients. Data is sent directly to the patient’s EHR, and care providers receive an automated alert if intervention is called for.
A study conducted in Oxford provided patients with complex respiratory needs with a tablet and probe that measured heart rate, blood oxygen levels, and other readings daily and reported them back to the clinical team at the local hospital. Over time, the AI system behind the app learned about each patient and their vital signs and came to predict when particular drops in heart rate or blood oxygen levels meant that the patient required intervention from the medical team. While the trial was running, hospital admissions among the participants dropped by 17%, because the app allowed the patient and the clinical team to schedule appointments when an individual’s health began to deteriorate, rather than relying on emergency hospital visits.
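For illustration only, the sketch below shows one very simple way such an alert could work: the system learns a baseline from a patient’s own historical readings and flags a new reading that falls well below it. The readings, thresholds, and function names are hypothetical, and a real deployment would use a far richer model than a running average.

from statistics import mean, stdev
from typing import List

def needs_intervention(history: List[float], today: float, k: float = 2.0) -> bool:
    # Flag today's blood-oxygen reading if it is more than k standard
    # deviations below the patient's own historical average.
    if len(history) < 5:                 # not enough data to learn a baseline yet
        return False
    baseline, spread = mean(history), stdev(history)
    return today < baseline - k * spread

# Hypothetical patient whose blood oxygen normally sits around 95-97%
readings = [96.0, 95.5, 96.2, 97.0, 95.8, 96.4]
print(needs_intervention(readings, today=88.0))   # True  -> alert the clinical team
print(needs_intervention(readings, today=96.1))   # False -> no action needed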
3. Any warning flags? Concerns to consider?
Trust
Many people may not feel comfortable with a machine making potentially life-and-death decisions about their healthcare. Such mistrust could impede the adoption of AI technologies in a sector where so many rely on personal interactions, and on the qualifications and experience of doctors and other healthcare professionals, to feel comfortable. Resistance to, or slow adoption of, AI technologies in healthcare may also stem in part from fallout from the increasing ubiquity of AI technology in many parts of our lives.
Direct care remains at the core of healthcare; however, AI technology can assist in the creation and management of “personalized care,” allowing patients to feel empowered and in greater control of their own wellbeing. Healthcare experts are also keen to ensure that patients and external companies understand that AI technology is neither designed nor intended to “replace” doctors, nurses, and other healthcare professionals. Medical care is built on “empathy”–a quality AI technology cannot replicate!
Further legislative governance of such technologies, and of how they are tested before being used in medical contexts, may affect the level of trust patients place in AI.
Bias
As AI technology learns from data, bias is a concern: the implicit (or even explicit) racial, gender, or other biases of the humans who code the algorithms, or of the data fed into those algorithms, can skew the results. For instance, the training data may not be representative of the population (ethnic minorities, for example, are usually under-represented in the medical studies that make up much of today’s medical data). As a result, conditions that disproportionately affect under-represented groups (e.g., sickle cell anaemia) may be under-represented in the data, and the AI system’s output may not be appropriate for a patient who belongs to one of those groups.
Improving the accuracy of health data and providing more representative data should lessen potential bias.
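One way to make that concern concrete is a simple representativeness check, sketched below, which compares the make-up of a training dataset against population benchmarks and flags badly under-represented groups. The group labels, benchmark shares, and tolerance are hypothetical placeholders, not real statistics or an accepted auditing standard.

from collections import Counter
from typing import Dict, List

def representation_gaps(training_groups: List[str],
                        population_share: Dict[str, float],
                        tolerance: float = 0.5) -> Dict[str, float]:
    # Return groups whose share of the training data is less than
    # `tolerance` times their share of the population.
    counts = Counter(training_groups)
    total = len(training_groups)
    gaps = {}
    for group, expected in population_share.items():
        observed = counts.get(group, 0) / total
        if observed < tolerance * expected:
            gaps[group] = observed
    return gaps

# Hypothetical example: group_b is 40% of the population but 10% of the data
train = ["group_a"] * 900 + ["group_b"] * 100
print(representation_gaps(train, {"group_a": 0.60, "group_b": 0.40}))
# {'group_b': 0.1}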
4. Regulatory Issues
Currently, regulations have not quite caught up with AI technology. If the AI is “wrong” and a patient is injured, who is liable? The software vendor? The doctor who used it? The hospital that paid for it? Until there is a regulatory framework to allocate risk, providers may be slow to fully embrace AI.
Part of the problem is that an AI-based system is, by definition, always learning, but regulatory approval is granted to a specific version or type of item; the AI-based device approved on Day 1 is not the same device in use on Day 2. Think of it this way: 2 + 2 will always equal 4, so the results of a device built on 2 plus 2 equaling 4 do not change, no matter how much data it looks at. But what if a device that adds 2 plus 2 on Day 1 “teaches” itself calculus by Day 5? The results of the calculus it now performs are different from the result of 2 + 2, even though it is nominally the same approved device.
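A toy sketch of that drift problem follows. The “model” below simply predicts the running average of everything it has seen, so the snapshot a regulator might review on Day 1 no longer matches the answers the deployed system gives after it keeps learning; all of the numbers are arbitrary.

class AdaptiveModel:
    # Toy model: predicts the running average of everything it has seen so far.
    def __init__(self):
        self.total, self.count = 0.0, 0

    def learn(self, value: float) -> None:
        self.total += value
        self.count += 1

    def predict(self) -> float:
        return self.total / self.count if self.count else 0.0

model = AdaptiveModel()
for reading in [4.0, 4.0]:            # "Day 1" data
    model.learn(reading)
day1_snapshot = model.predict()        # behavior a regulator might have reviewed

for reading in [10.0, 12.0, 14.0]:     # the deployed system keeps learning
    model.learn(reading)

print(day1_snapshot)                   # 4.0 -> approved behavior
print(model.predict())                 # 8.8 -> the device in use no longer matches it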
FDA approval is generally required for technology or a device that provides a diagnosis without a healthcare professional’s review; the recently approved AI-based device that detects diabetic retinopathy is, to date, the only such AI-based device to have been approved.
5. What are the data privacy and security concerns?
In the United States, HIPAA’s privacy regulations apply to protected health information, regardless of whether AI is involved. This creates an “input” problem: AI requires an enormous amount of data to “learn,” and if that data is protected under HIPAA, either patients have to consent to its use as an input, or the protected health data will have to be de-identified before being fed into the system, or the regulations will have to be amended to permit its direct use.
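The sketch below shows, in highly simplified form, what the de-identification option might look like: direct identifiers are stripped from a record before it is used as AI input. The field names are hypothetical, and real HIPAA de-identification (Safe Harbor or expert determination) requires far more than removing a handful of fields.

from typing import Dict

# Hypothetical list of direct identifiers; HIPAA's Safe Harbor list is longer.
DIRECT_IDENTIFIERS = {"name", "ssn", "address", "phone", "email", "mrn"}

def deidentify(record: Dict[str, str]) -> Dict[str, str]:
    # Return a copy of the record with direct identifiers removed.
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {"name": "Jane Doe", "mrn": "12345", "diagnosis": "E11.9", "a1c": "7.2"}
print(deidentify(patient))    # {'diagnosis': 'E11.9', 'a1c': '7.2'}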
The EU General Data Protection Regulation (“GDPR”) gives data subjects more control over their personal data and provides more protections for consumers. Health data is classed as “special category data,” and special controls apply to ensure it is protected, which raises unique concerns for AI purposes.
There are potential issues here in obtaining data for use in AI technologies and in identifying a legal basis for doing so. In certain sectors (for example, wearable devices tracking personal fitness objectives), the legal basis for obtaining this data may differ from that of a hospital (in the U.S., wearables are viewed by the FDA as “low risk,” whereas in the EU, the protected category of vital interests is more likely to apply).
The implementation of artificial intelligence in healthcare may entail transfers of data between the EU and the U.S.: for example, if the hospital is in the UK but the technology being used is based in the U.S. This creates problems both for UK hospitals and medical practitioners subject to GDPR and for the U.S. companies processing the data of EU citizens.
6. IP challenges
Intellectual property challenges include a basic question: who owns the input data? If the AI system vendor doesn’t own it, have appropriate licenses or sufficient approval or consent been obtained? Is consent even necessary if the data is anonymous? Are there times when individuals could be identified using their nominally anonymous health information?
In addition, a vendor may want to protect the complex algorithms its systems use to train AI machines to make medical decisions or perform medical tasks against disclosure to or use by competitors. Presumably, current intellectual property laws (i.e., trade secret/copyright/patent) will operate to protect AI systems just as they do to protect software and systems generally, but wrinkles may arise in creating and maintaining such a framework.