In 2019, we are surrounded by AI: personal assistants such as Siri, Alexa, and Google Home Hub; retailers that predict what we want before we do (think Amazon’s and Netflix’s recommendation sections); and cars that sense when emergency braking is required. As AI incorporates ever-larger data sets, it keeps getting smarter and more accurate, and it has become more integrated into, and trusted by, our society.
Throughout the world, healthcare systems are among the most used and relied-upon institutions, yet they are stretched in terms of resourcing, technology, and funding. For many years, researchers and technology giants have wondered how the vast amount of data that exists in the world can be harnessed for good: to help our healthcare systems cope, grow, and thrive in modern times, when we are all living longer and expecting more of them.
In this article, we explore some of the issues with AI and health care—the positive aspects and the barriers to integration and use that exist now or will in the future, as well as some of the trials already conducted in both the United States and the European Union.
What Is AI and How Does It Work?
In its broadest sense, AI is technology and/or a system that can perform tasks and analyze facts or situations independently of a human. There are many different applications of AI, including machine learning, deep learning, and robotics.
- Machine learning is when a system uses algorithms to review data in an iterative process until it “learns” how to make a determination or prediction on its own.
- Deep learning is when a system of artificial neural networks, loosely modeled on the human brain, is fed large amounts of data until it “learns” by example, discovering patterns in the data.
- Robotics is where a machine performs a task instead of a human; for example, where a machine is programmed to perform a simple (or complex) operation with precision and accuracy based on experience.
AI in the healthcare sector works by analyzing and learning from hundreds of thousands (even millions) of records, images, and/or scenarios to spot the patterns and common traits of certain medical conditions, analyze findings, and narrow the options for consideration by the medical professional.
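To make the idea concrete, here is a minimal, hypothetical sketch of that pattern-learning loop in Python. It does not represent any of the systems discussed in this article: the features, data, and labeling rule are all invented for illustration.

```python
# Minimal sketch of machine learning on synthetic patient records.
# All data and the labeling rule below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)

# 1,000 synthetic records: age, resting heart rate, blood-oxygen %.
n = 1_000
age = rng.uniform(20, 90, n)
heart_rate = rng.normal(75, 12, n)
spo2 = rng.normal(96, 2, n)
X = np.column_stack([age, heart_rate, spo2])

# Invented rule standing in for clinician-confirmed outcomes.
y = ((heart_rate > 90) & (spo2 < 94)).astype(int)

model = LogisticRegression().fit(X, y)  # the iterative "learning" step

# Score a new patient; a high score flags the record for a medical
# professional's closer review, not an automatic decision.
new_patient = np.array([[68, 102.0, 92.5]])
print(model.predict_proba(new_patient)[0, 1])
```

The point of the sketch is the division of labor described in this article: the model narrows the options, and the medical professional makes the decision.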
AI and the Healthcare Sector
The prospect of further integration of AI in the healthcare sector is an exciting and promising development with the potential to transform healthcare systems to be more proactive, use fewer resources, lessen time spent on administrative matters, and, most importantly, focus on patient care.
Many are wary of AI in the healthcare sector, however—a sector based on human decisions, skill, and compassion that many feel uncomfortable relinquishing to a machine. Doctors, nurses, and other medical professionals are all highly trained and trusted to deliver high-quality and personalized health care to those in need. Naturally, there is some hesitation about ceding control of some of these tasks to a machine.
AI advocates are quick to point out that AI in health care is designed to complement, not replace, human decision-making, experience, and care. AI is said to present the opportunity to free health professionals from administrative burdens and from hours spent developing tailored diagnoses and treatment plans, giving them more time to care for patients.
Some recent examples of AI within the healthcare sector include the following:
- Machine learning was used to identify patients with chronic heart failure or diabetes who required closer observation, and then to analyze the results of monitoring kits provided to those patients. The patients’ healthcare providers were automatically alerted when a patient required medical intervention.
- The National Institutes of Health and Global Good developed an algorithm that analyzes digital images of a woman’s cervix and can more accurately identify precancerous changes that will require medical intervention. This easy-to-use technology (which can be used with a camera phone) is an exciting development for those low-resource areas and countries where such screening is not prevalent.
- The United Kingdom’s National Health Service (NHS) provided patients who have complex respiratory needs with a tablet and probe that measured heart rate and blood oxygen levels on a daily basis. The results were logged and analyzed by the AI technology, which reported back to the clinical team at a local hospital when drops in heart rate and/or blood oxygen levels required medical intervention (a simplified sketch of this kind of threshold alerting appears after this list). During the period of this study, admissions at the local hospital for this group dropped 17 percent.
- Moorfields Eye Hospital, an NHS trust, worked with Google’s DeepMind for nine months on a trial of an algorithm developed to spot and diagnose eye conditions from scans. This was aimed at cutting down unnecessary referrals to NHS hospitals, allowing clinicians to focus on more serious and urgent cases.
- For pathologists, AI technology can review all images/slides and flag the problematic ones for closer examination, instead of the pathologist individually assessing each one.
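The remote-monitoring examples above share a simple core: compare each reading against alert criteria and notify the clinical team when a threshold is crossed. Below is a hypothetical sketch of that logic; the thresholds, readings, and patient identifiers are invented, and a real system would use clinically validated criteria rather than these illustrative numbers.

```python
# Simplified sketch of remote-monitoring alert logic. Thresholds and
# readings are invented for illustration; they are not clinical guidance.
from dataclasses import dataclass

@dataclass
class Reading:
    patient_id: str
    heart_rate: float  # beats per minute
    spo2: float        # blood-oxygen saturation, percent

# Hypothetical alert thresholds, chosen purely for illustration.
MIN_HEART_RATE = 50.0
MIN_SPO2 = 92.0

def needs_intervention(reading: Reading) -> bool:
    """Flag a daily reading for the clinical team's attention."""
    return reading.heart_rate < MIN_HEART_RATE or reading.spo2 < MIN_SPO2

daily_readings = [
    Reading("patient-001", heart_rate=72, spo2=96.5),
    Reading("patient-002", heart_rate=47, spo2=95.0),  # low heart rate
    Reading("patient-003", heart_rate=80, spo2=90.1),  # low oxygen
]

for r in daily_readings:
    if needs_intervention(r):
        print(f"Alert clinical team: {r.patient_id} requires review")
```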
In each of the examples above, the healthcare system saved time and resources by ensuring that only those patients requiring more immediate medical intervention were seen right away, while those who were stable were followed up at nonemergency appointments. Healthcare professionals were able to prioritize the most urgent cases while continuing to monitor the other, less urgent ones.
Another AI advantage is that patients are given more control over their own health care. They can monitor their own statistics and outcomes, comforted by the knowledge that medical professionals can intervene when concerning results appear. This also helps patients understand their own health and how their own bodies react to certain conditions/factors.
The Challenges
There are some concerns that have been raised around the integration of AI technologies into the healthcare system, including data protection, patient trust, biased data, and contractual, regulatory, and ownership issues.
Data Protection
In the United States, personally identifiable health information (PHI) is protected from unauthorized use and disclosure by a variety of laws and regulations, most notably the Health Insurance Portability and Accountability Act and the Health Information Technology for Economic and Clinical Health Act (together, HIPAA). Any use of PHI for AI would likely require new or different consent from the patient. Where the AI is used for the benefit of a particular patient, presumably that consent would not be difficult to obtain, but robust AI technologies require a significant amount of data to be effective, and because the use of that data is not necessarily for the benefit of any particular patient contributing data, consent for use of PHI in AI may not be so readily provided. A revision of HIPAA or applicable state laws to permit the disclosure of PHI for the purpose of AI technologies may be required, as well as additional protections to ensure that once the PHI goes into the “soup pot” of AI datasets, it cannot be individually identified again.
In the European Union, the General Data Protection Regulation (GDPR) came into force in 2018 and brought about significant changes in data protection regulation across the EU and beyond due to its enhanced territorial scope. One of the themes of the GDPR is that data subjects are given more control over their personal data. There are a few separate issues relating to data protection and AI technology when considering AI and healthcare applications:
- Legal basis. There will be different legal bases applicable to different organizations; for example, healthcare providers generally rely on vital interests (as the processing condition) and therefore may rely on the research and statistics exemption to repurpose data collected for use in AI technology. Other organizations, such as those offering wearable technologies like fitness trackers or heart-rate monitors used by individuals (as opposed to patients), will generally rely on consent to process such data and therefore may not be able to repurpose it as easily, for example, for the development of AI technology.
- Data subject rights. Under the GDPR, data subjects have enhanced rights; therefore, organizations must carefully consider the legal basis on which they are relying for processing. For example, when relying on consent, a data subject can withdraw that consent at any time, and the organization must stop processing and notify any third party with which the data was shared to stop its processing. Although data subjects have the right to request deletion of their personal data, from a practical perspective, how can data be deleted if it has become part of the algorithm?
- Data transfers. A scenario likely to occur with AI technology development is the transfer of data between the United States and European Union and indeed across the globe. In a data transfer scenario, both parties (i.e., the transferor and the transferee) have an obligation to ensure that such transfers are protected and that the data is transferred using adequate measures as stipulated in the GDPR.
- Data security. It has been noted that medical data is now three times more valuable than credit-card details in illegal markets; therefore, organizations handling and sharing health data for any purpose must ensure that the data is protected from unlawful loss, access, or disclosure to avoid causing substantial distress to data subjects. When such personal data has been anonymized, security is less of a concern from a data protection/privacy perspective because data protection legislation does not apply to data from which an individual can no longer be identified; however, anonymized data may still pose a commercial risk due to its value.
Patient Trust
Many individuals may not initially feel comfortable knowing that technology has made a potentially life-or-death decision about their health care and/or treatment plan. The healthcare sector is grounded in trust and personal care and compassion for those in need, and some see AI as removing that personal element in favor of a machine-led, batch-process type of system where the individual and his or her needs may not be at the core of the decisions and care provided.
It could be argued, however, that AI and increased technology use within healthcare systems could actually improve personalized health care and give individuals more control over their own health by involving them in the process and allowing them to monitor their health remotely.
Many patients may also be uncomfortable with the lack of formal qualification and/or testing of AI technologies and machines, in contrast with healthcare professionals such as doctors and surgeons, who often study and train for many years to acquire knowledge and skills in a particular area, which instils trust in patients.
Additionally, some patients may be unwilling to accept AI technology as part of the healthcare system, given that the rapid growth of AI technologies in our daily lives has, in recent years, brought about some general mistrust.
How can the healthcare sector alleviate such concerns? All new technology experiences some bumps on the path to acceptance; distrust or suspicion of new technologies and fear of error take time and education to overcome. Focusing on the patient and individual care, along with reassuring patients that AI technology will not replace doctors, may alleviate the fears of concerned patients. In addition, education about how AI technology will allow doctors to create more personalized treatment plans and focus more on patient interaction and care will go a long way toward acceptance of AI technology in the healthcare sector. Medical care at its core is about empathy and care for patients, which cannot be replaced by AI technology; but AI technology can do some of the “heavy lifting,” freeing up time and resources for those who provide that empathy and care so that the healthcare system can focus on patients.
In terms of trust of the AI technology itself, there may need to be more legislative governance and/or accepted standards of testing for such technologies to reassure patients that the AI technology produces accurate results and has been thoroughly vetted as suitable to make decisions about health, diagnosis, and treatment.
There may also be a generational gap in that older patients are generally more wary of AI technologies and their infiltration into our daily lives. However, younger individuals—those who have been brought up in the age of social media, wearable devices, and other technologies—are generally more willing to accept and embrace AI than their parents and grandparents.
Overall, in time the benefits of AI to the healthcare experience (i.e., personalized care, diagnosis, and treatment; saved time and resources; and a more effective and cost-efficient healthcare system) will overcome patient mistrust in the technology.
Bias/Accuracy of Data
One concern voiced by medical professionals is that the data available (from healthcare providers) to train AI technologies is not always accurate and often contains biases that may feed through into the technology. The result may be AI technology that is not representative of the population and therefore does not always make the correct decisions for every individual.
For example, in the United Kingdom especially, clinical trials (where most medical data is generated for research purposes) are dominated by white, middle-aged males; therefore, much of the data associated with medical trials is skewed in the same way. Ethnic minority populations, older people, and females are traditionally under-represented in medical trials; therefore, there may be implicit (or sometimes even explicit) bias in the data provided to an AI technology machine from which to learn. In other words, will the results provided by an AI technology trained primarily on middle-aged, white males apply to individuals who are not middle-aged, white males? How will patients and providers know? However, there is also an argument that deliberately skewing the data the opposite way (e.g., by ensuring trials are reflective of all ethnic groups) could impact the effectiveness of a study where a condition predominantly affects one group (e.g., sickle cell anaemia, which is most commonly found in those of African, Caribbean, Middle Eastern, and Asian origin).
Put another way, the output from an AI technology skewed in favor of one or more characteristics (e.g., the middle-aged male) may lead to inaccurate outputs and/or inappropriate treatment plans. In addition, some medical conditions are associated with certain groups more than others; therefore, AI technology may not reflect the conditions and medical needs of one group where the data used to train it reflected another.
If AI technology is to reach its full potential in the healthcare system, care must be taken with the data used to train AI technologies to ensure that it is reflective of and includes a cross-section of the population, and is therefore fair and unbiased, so that its output is as accurate as possible.
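One concrete, if simplified, safeguard is to audit the training data’s demographic composition before training. The sketch below compares each group’s share of a hypothetical dataset with its share of the population served; the group names and all figures are invented for illustration.

```python
# Illustrative audit of training-data representation. Group names and
# all figures are invented; a real audit would use recorded demographics.
training_counts = {"group_a": 7_200, "group_b": 1_100, "group_c": 700}
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

total = sum(training_counts.values())
for group, count in training_counts.items():
    data_share = count / total
    # A ratio below 1 means the group is under-represented in the data
    # relative to the population and should be investigated.
    ratio = data_share / population_share[group]
    print(f"{group}: {data_share:.1%} of data vs "
          f"{population_share[group]:.0%} of population (ratio {ratio:.2f})")
```

As the sickle-cell example above illustrates, a low ratio is a prompt for investigation rather than an automatic defect: the right composition depends on the condition being studied.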
Another issue, particularly with the NHS, is that hospitals are still very much reliant on paper-based records, although there has been for many years a push toward greater digitalization of healthcare records (which not only aids healthcare data-sharing for medical care purposes, but also assists in “feeding” such data to AI technology from which to learn). Nevertheless, legacy systems and a general lack of investment in technology have meant that moving toward any substantive ability to facilitate data sharing has been a slow process. The format of such records also differs by area, data may not always be correctly labelled, and records are sometimes not kept as up-to-date as they should be. This lack of standardization creates gaps in information and could mean that the data from which the AI technology is learning does not give the full picture of any one individual’s health/symptoms.
This raises bigger questions about health care, especially in the United Kingdom: Is it possible to move forward with AI technology when the healthcare system is still not modernized enough to have easily accessible digital records? Although the United States is further along in its adoption of electronic healthcare records and the digital data they contain, the implicit bias concern is equally strong in both countries. In addition, the lack of standardization of electronic data—both within the United States and between the United States and the European Union—makes ensuring a robust data input especially difficult. Although appropriate governmental regulations may address this, the market itself must figure out how to make it technically and financially viable.
Contractual and Regulatory Issues
There are a variety of potentially complex contractual issues that must be addressed among developers and the various stakeholders in the healthcare system before AI technology is rolled out, particularly regarding the allocation of liability. Where a doctor fails to diagnose correctly, prescribes the wrong dose of medication, or otherwise acts negligently, the patient has a claim for malpractice, negligence, and/or personal injury against the doctor/healthcare provider and the hospital or healthcare system in which the doctor/provider worked. However, who is liable where an AI technology failed to spot a cancerous tumour on a scan it analyzed?
There may be a lot of finger-pointing in this case. The doctors would argue that they were not liable because they (presumably) utilized the AI technology correctly, and that the fault lies with the hospital/healthcare system that required its use and/or the vendor/developer of the technology itself. The hospital/healthcare system might argue that it is not liable because the third-party technology vendor developed the technology and trained the doctors in its use. The developer might argue that it is not responsible because AI technology is constantly “learning,” and only from the data it is given. It is important that the contracts between the developer and the healthcare system, and between the healthcare system and its physicians, are clear on the allocation of liability in the event that a patient is harmed in relation to the use of AI technology.
Another issue that must be addressed by contract is the warranties (if any) provided by the developer to the healthcare provider. How likely is a developer to warrant that the AI technology is accurate? If such a warranty is unlikely, how can the healthcare provider understand the technology’s limitations? Is the training provided on the AI technology warranted to convey that information?
Regulatory issues also abound. In the United States, AI-enabled technologies may or may not be regulated as medical devices. Current regulations are unclear on this issue, but generally, in both the European Union and the United States, devices and technology used in the context of medical advice/health care require approval. The problem with current regulatory approval processes is that approval is granted only to one specific version of a product and/or device, but AI technology and/or devices are constantly learning; if each iteration is a new “version,” then any approved version would be out of date almost immediately (and that is without getting into “custom-made devices” within the medical device sector). Requiring regulatory approval for each version/iteration of the AI technology would be nonsensical. A new regulatory scheme tailored to the reality of AI technology (and other new and emerging technologies) is needed in both the United States and the European Union.
Intellectual Property Ownership
Intellectual property and ownership issues regarding AI technologies include the following questions:
Who owns the data? For purposes of developing robust AI technologies, provided the bias issues discussed above are positively addressed, the more data, the better. Therefore, although AI developers/manufacturers could solicit the data from each data subject (i.e., the patient) directly, the more practical route is to acquire vast amounts of data from the healthcare provider. But who owns that data, and can the healthcare provider disclose/use it this way?
In the United States, the patient generally does not own his or her medical information. The health record is generally owned by the provider that keeps the record (as a normal business record); HIPAA protects the privacy of the information for the benefit of the individual, but ownership of that information is not addressed in any federal law or the laws of 49 states (New Hampshire is the only state where the individual owns his or her information as a matter of statute). This structure applies only to the data fed into the AI technology. Who owns the output? Most likely, the developer or manufacturer will assert ownership of the results because it owns the algorithms that create the AI. What about results that are personal to an individual, such as a diagnosis or treatment plan? Isn’t that part of the health record owned by the provider?
In the United Kingdom, the person who developed the diagnosis and/or treatment plan owns the copyright in that plan (as the author of such plan); however, the personal information would still be owned by the patient (data subject) because it is personal to him or her. If the AI developer “owns” an individual’s diagnosis or treatment plan, can the developer sell or disclose it, or incorporate that information into other products, or use it for some other purpose? Currently, developers and users of AI technology are contracting around these issues, but that means that ownership, use, and disclosure are different across contracts as a result of individual leverage and market forces and, of course, such contracts leave out the patients entirely (unless the patient is providing the data to the AI developer directly).
In the European Union, there is a distinction between “ownership” and “control” over personal data. Data subjects (i.e., an individual) always retain ownership of their personal data (i.e., a company cannot own such information), but do not always have control over their personal data (e.g., a healthcare provider does not need permission to use one’s personal data because it was collected for the provider’s own purposes and control). Under the GDPR, data subjects are given enhanced rights over their own personal data; however, there are circumstances where a party who controls such personal data does not need to comply with the data subject’s requests and can continue to process the personal information (e.g., for medical treatment).
Where data is shared for the purposes of developing and/or testing AI technology, the key consideration should be transparency: Is the patient fully informed? Is there an appropriate legal basis? Without transparency, processing may be unlawful, and the patient could prevent it.
Who owns the algorithm? The algorithm is likely owned by the company that developed it for use in the device and/or AI technology; however, there are questions around whether someone can own something that is essentially a “self-learning” machine. Is the algorithm something tangible that can be explained? Or is only the initial algorithm tangible, after which the AI learns to improve it and the company no longer has control over the decision-making process?
Who owns the device/product/finished AI machine? This will depend on what the device or product is. Where the product is the technology, i.e., the algorithm, the healthcare provider may wish to own this to control more of the output. However, it is likely that the developer/manufacturer would want to claim ownership, especially where such use is novel in the sector.
UK/EU Thoughts
The United Kingdom has been investigating the role of AI in health care over the last few years, and in September 2018, the government published a code of conduct for data-driven health and care technology. The code sets out 10 key principles (some of which relate to data protection and existing NHS codes of practice):
- Define the user—who is the product for, and what problem are you solving?
- Define the value proposition—why has it been developed?
- Be fair, transparent, and accountable about what data are used—use privacy-by-design principles and data protection impact assessments.
- Use data that are proportionate to the identified user need—use the minimum personal data required to achieve the purposes.
- Make use of open standards—build in current standards.
- Be transparent about the limitations of the data and understand the quality of the data.
- Make security integral to the design—have appropriate levels of security to safeguard data.
- Define the commercial strategy—agree on commercial terms that create a beneficial partnership between the commercial organization and the healthcare provider.
- Show evidence of effectiveness for the intended use.
- Show the type of algorithm being developed or deployed, the evidence base for using that algorithm, how performance will be monitored on an ongoing basis, and how performance will be validated—show the learning method you are building.
The United Kingdom’s willingness to prioritize such a code of conduct makes clear that AI technology is seen as a means of advancing its healthcare system. It remains to be seen whether the code will be successful and ensure best practices among organizations working together to develop such technologies in the future. As of the date of this article, the government has more pressing priorities, and cooperation with the European Union in this area may be delayed.
The matter has also been discussed at an EU level, and in April 2018, the European Commission published its Communication on enabling the digital transformation of health and care in the Digital Single Market; empowering citizens and building a healthier society (the Communication). The Communication outlines the need for major reforms in the healthcare sector and how developing new and innovative ways of working (through the use of technology and digital platforms) could assist in transforming health care into a modern, innovative, and sustainable sector.
In December 2018, the European Economic and Social Committee (EESC) released its opinion on the Communication (the Opinion), which largely supports the Communication and the Commission’s roadmap for transformation of the healthcare sector, and outlined some observations of which to take note when implementing such a vision of transformation.
The Communication focuses on three key areas:
- Citizens’ secure access to and sharing of health data. The Commission highlighted that many data subjects would like to have better access to their health data and have more control/choice over with whom it is shared; however, there is limited electronic access to health records. Often, records are in paper form and scattered among different healthcare providers, i.e., not available electronically in one central location.
- Better data to promote research, disease prevention, and personalized health care. Personalized health care is an emerging approach that focuses on using data to better understand individual characteristics so that care can be provided when necessary. The use of data has increased the healthcare sector’s ability to monitor, identify, and predict healthcare conditions, which also means providers are better equipped to diagnose and treat such conditions.
- Digital tools for citizen empowerment and for person-centered care. The Commission recognizes that to cope with the ever-increasing demand on healthcare services, health care must move away from treatments and toward health promotion and prevention, which will involve a move away from disease and toward well-being, as well as a move away from fragmented service provisions toward a community-based care model.
Conclusion
There has been significant recognition at national and supranational government levels that AI technology has a role to play in the development of health care; however, many obstacles remain before AI technologies are fully accepted into those healthcare systems. Such issues will require thoughtful and careful consideration by technology developers, healthcare providers, and healthcare professionals to develop a consistent approach to the issues identified as barriers to the full integration of AI technologies into the healthcare systems.
As noted, in the United Kingdom, the government has seen these potential issues arising in discussions about health care and AI and recognizes the potential benefits to the NHS of adopting such technologies. The government has therefore published a code of conduct to ensure that healthcare organizations and those developing AI technologies work together and uphold best practices when dealing with patient data.
In the European Union, the Commission has been considering more effective ways to encourage AI technology in the healthcare sector; it has identified some barriers to adoption and set out proposals to remedy them.
In the United States, AI technology is becoming more accepted by patients and providers, but regulations are lagging behind innovation and acceptance, which may be dangerous to patients. In addition, the uncertainty around liability, ownership, etc. may be dampening progress in the United States, not to mention the uncertainty around whether AI technologies (and automation in general) will create jobs or eliminate them.
We hope that, moving forward, AI technology companies and the healthcare sector find a way to partner successfully, utilizing patient data in a safe and secure manner while training AI technology/machines to provide healthcare assistance in the future, and ensuring that our healthcare systems move with the times and cope with mounting pressure on staff, time, and resources to the benefit of all.
The authors thank the Health IT Task Force of the Cyberspace Committee for support and assistance with the article.