Examining Technology Bias: Do Algorithms Introduce Ethical & Legal Challenges?

By: Diane Holt, Carla L. Reyes, James Q. Walker, Michael Simon

An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behavior. 

A.M. Turing (1950), Computing Machinery and Intelligence. Mind 59: 433-460.

Computer scientists have been experimenting with artificial intelligence for decades. In 1950, Professor Alan Turing predicted that by the year 2000, a computer would be able to play his Imitation Game well enough that an average human interrogator, after five minutes of questioning, would have no more than a 70% chance of telling the machine from a person. Even at that early date, Professor Turing recognized that the main roadblocks to AI were storage capacity and speed. Nearly 70 years later, both have increased enormously, along with the availability of large data sets that permit a broad range of experimentation. And, as he predicted, we have indeed constructed machines that can play and win the Imitation Game—at least until they trip up and sound like the bots they are.

While there are many definitions of artificial intelligence, a distinguishing feature is the type of instructions humans provide to the machine. When we type numbers and functions into a calculator, we are providing step-by-step instructions, and we know precisely what was done to obtain each output. We can even double-check the calculator ourselves. When a machine “learns,” it takes actions with data that go beyond merely calculating or following explicit instructions. For example, one important task that computers perform is grouping or clustering words, numbers, documents, images, or other objects in very large data sets. This clustering effort is somewhat similar to playing numerous simultaneous games of Sesame Street’s “One of These Things (Is Not Like the Others).” Once these data points are clustered, we can make much more powerful inferences about the data than would be possible if we had to examine, chart, or graph individual data points. This technology allows us to say, for example, that certain contract provisions are like other contract provisions by looking at similarities in their words. It allows us to teach a computer about previously diagnosed CT scans in order to use those inferences to detect illnesses in new CT scans.
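
To make the clustering idea concrete, the following short Python sketch groups a handful of contract provisions by word similarity. It is purely illustrative: the sample provisions are invented, and the choice of the open-source scikit-learn library (TF-IDF weighting plus k-means clustering) is our own assumption, not a description of any particular commercial tool.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical contract provisions: two about termination, two about confidentiality.
provisions = [
    "Either party may terminate this agreement upon thirty days written notice.",
    "This agreement may be terminated by either party with 30 days prior notice.",
    "The receiving party shall keep all confidential information strictly confidential.",
    "Confidential information disclosed hereunder shall not be shared with third parties.",
]

# Convert each provision into a vector of word weights (TF-IDF).
vectors = TfidfVectorizer(stop_words="english").fit_transform(provisions)

# Group the vectors into two clusters based on word similarity.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, provision in zip(labels, provisions):
    print(label, provision)

The point is not the particular library but the division of labor: the human supplies examples and a target number of groups, and the machine decides for itself which provisions belong together.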

Machine learning, neural networks, and other types of artificial intelligence undertake such complex computational tasks that we often must, in turn, do substantial work to evaluate the results. This is one of the many potential problems Professor Turing anticipated in 1950. So once we have given up on understanding “quite what is going on inside,” how can we evaluate whether the computer did what we wanted? This is the new problem presented by the burgeoning use of advanced technology, both in the practice of law and in the products and services produced by clients of legal service providers: how do we examine advanced technology for compliance with legal rules? What standards must lawyers meet when using or advising on advanced technology?

Ethical Framework for Lawyer Use of Machine Learning Technology

A lawyer has a duty under Rule 1.1 of the ABA Model Rules to provide “competent representation to a client,” which means that the lawyer must demonstrate the requisite knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. The ABA and many states have recognized that a lawyer’s duty of competence extends to the lawyer’s substantive knowledge of the areas of law pertinent to the representation and the tools used to provide legal services to the client. The lawyer has a duty of technological competence to the extent that technology is used to represent the client. The lawyer can fulfill this duty if the lawyer possesses the requisite technological knowledge personally, acquires the knowledge, or associates with one or more persons who possess the technological knowledge. See New York County Ethics Op. 749 (2017); see also ABA Commission on Ethics 20/20 Report (“in order to keep abreast of changes in law practice in a digital age, lawyers necessarily need to understand basic features of relevant technology”).

In addition, lawyers must understand the benefits and risks associated with technology. ABA Model Rule 1.1, Cmt. 8. Lawyers have an affirmative duty (1) to be proficient in the technology they use in the representation of a client; and (2) to consider technology that may improve the professional services the lawyer provides to his or her clients. With respect to the first duty, lawyers must have sufficient proficiency with the technology they use in their practice to ensure that they are using the technology effectively to serve their clients’ interests, and they must supervise any nonlawyers who assist them in the use of this technology to ensure that they are acting in a manner consistent with the lawyer’s professional obligations. Id.; see also ABA Model Rule 5.3; see, e.g., In re Seroquel Products Liability Litig., 244 F.R.D. 648 (M.D. Fla. 2007) (“Ultimate responsibility for ensuring the preservation, collection, processing, and production of electronically stored information rests with the party and its counsel, not with the nonparty consultant or vendor.”). With respect to the second duty, lawyers have an ethical responsibility to consider whether the client may be better served if assisted by emerging technology, including tools that rely on machine learning.

Lawyers should be aware of machine learning bias in their AI tools as part of their exercise of technological competence. AI tools based on machine learning rely on training data and assumptions that shape the algorithm’s decision-making. Incomplete inputs, inadequate training, and incorrect programming – in addition to the inferences the machine itself draws from the initial inputs – can create biases that render the tool inaccurate and ineffective for the client’s purposes. In turn, the lawyer’s use of an inaccurate and ineffective tool could cause the lawyer to fail to fulfill his or her duty of competence. Indeed, where the AI tool produces results that are materially inaccurate or discriminatory, the lawyer risks not only violating the duty of competence under Rule 1.1, but also unwittingly engaging in conduct that violates Rule 8.4(d) (engaging in conduct that is prejudicial to the administration of justice) or Rule 8.4(g) (unlawfully discriminating in the practice of law).

Examples of Algorithmic Bias

With artificial intelligence, we are no longer programming algorithms ourselves. Instead, we are asking a machine to make inferences and draw conclusions for us. Generally, these processes require large data sets to “train” the computer. What happens when we use a data set that contains biases? What happens when we use a data set for a new purpose? What happens when we identify correlations that reinforce existing societal norms we are actually trying to change? In these instances, we may inadvertently teach the computer to replicate existing deficiencies — or we may introduce new biases into the system. From this point of view, system design and testing need to uncover the problems that the use of new technology may introduce.
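
A purely illustrative Python sketch (built on synthetic data, not drawn from any real system) shows how this can happen. If past decisions were biased against one group, a model trained on those decisions will reproduce the disparity even for applicants who are otherwise identical:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical historical data: a qualification score and a group label (0 or 1).
group = rng.integers(0, 2, n)
score = rng.normal(0, 1, n)   # scores are distributed identically in both groups

# Assume past human decisions were biased: group 1 needed a higher score to be approved.
approved = (score > np.where(group == 1, 0.5, -0.5)).astype(int)

# Train a simple model on the biased historical labels.
model = LogisticRegression().fit(np.column_stack([score, group]), approved)

# For two applicants with the same score, the predicted approval probability differs by group.
same_score_applicants = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_score_applicants)[:, 1])

Nothing in the code is malicious; the model simply learned the pattern it was shown. That is precisely why system design and testing must look for the biases a training set may carry.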

We have seen instances of algorithmic bias arise in many places, including in the racially disparate risk classifications produced by software used by judges in criminal cases to evaluate recidivism risks, in the ads presented to different racial and gender groups, and in so-called “differential” pricing that sometimes offers better prices to certain people. Even when we do not see potential evidence of discrimination based upon protected categories, we are jarred by events such as the recent revelation that a “glitch” in the software supporting Wells Fargo’s mortgage modification efforts improperly denied relief to hundreds of families and cost more than 400 of them their homes.

Moving Toward Algorithmic Rules and Standards

As a result of the growing awareness of the possibility of bias in the algorithms guiding AI, we are now seeing efforts to provide guidance to deal with the problem. The most important of these comes from Article 22 of the EU’s General Data Protection Regulation (GDPR), which gives data subjects the right to object to the results of automated decision-making, to opt out of such systems, and to demand an explanation of how the algorithms work. In fact, despite the opposition of some privacy experts, the influential European Data Protection Board (formerly known as the Article 29 Working Party) has interpreted Article 22 as barring any automated decision-making that lacks a human review element. New York City, in legislation passed last year, has established a task force to examine the issue.

Meanwhile, industry groups, government entities, and international organizations have articulated standards that may generate some consensus around audit standards and further legislation. The Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) group’s principles are an excellent and brief example of the developments in this area. Its five principles – responsibility, explainability, accuracy, auditability, and fairness – together with the related social impact statement, provide a responsible structure for designing algorithmic systems.
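
Auditability, in particular, lends itself to concrete checks. As one hedged illustration (the outcome data below are invented, and the “four-fifths” comparison is borrowed from U.S. employment-discrimination practice rather than mandated by any of these frameworks), a basic audit might compare favorable-outcome rates across groups:

from collections import defaultdict

def selection_rates(decisions, groups):
    """Share of favorable outcomes (1s) for each group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        favorable[group] += decision
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical audit sample: 1 = favorable outcome, paired with a group label.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates)
print("Disparate-impact ratio:", round(ratio, 2))   # ratios well below 0.8 warrant scrutiny

A check this simple will not settle whether a system is fair, but it shows the kind of repeatable, reviewable measurement that principles such as auditability and fairness contemplate.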
