Law Bots: How AI Is Reshaping the Legal Profession

By: Matthew Stepka

Artificial Intelligence (AI) is disrupting almost every industry and profession, some faster and more profoundly than others. Unlike the industrial revolution that automated physical labor and replaced muscles with hydraulic pistons and diesel engines, the AI-powered revolution is automating mental tasks. While it may be merely optimizing some blue-collar jobs, AI is bringing about a more fundamental change to many white-collar roles previously thought safe from automation. Some of these professions are being completely transformed by the superhuman capabilities of AI to do things that were not possible before, augmenting — and to some degree replacing — their human colleagues in offices.

In this way, AI is having a profound effect on the practice of law. Though AI is more likely to aid than replace attorneys in the near term, it is already being used to review contracts, find relevant documents in the discovery process, and conduct legal research. More recently, AI has begun to be used to help draft contracts, predict legal outcomes, and even recommend judicial decisions about sentencing or bail.

The potential benefits of AI in the law are real. It can increase attorney productivity and avoid costly mistakes. In some cases, it can also grease the wheels of justice by speeding up research and decision-making. However, AI is not yet ready to replace human judgment in the legal profession. The risk of bias embedded in the data that fuels AI, and the difficulty of explaining the rationale behind AI-derived decisions in terms humans can understand (i.e., explainability), must be overcome before the technology is used in some legal contexts.

Superhuman Lawyers

Attorneys are already using AI, especially machine learning (ML), to review contracts more quickly and consistently, spotting issues and errors that human reviewers may have missed. Startups like Lawgeex provide a service that can review contracts faster, and in some cases more accurately, than humans.

For some time, algorithms have been used in discovery — the legal process of identifying and obtaining the relevant documents from an opponent in a lawsuit. Now, ML is also being used in this effort. One of the challenges of requesting and locating all the relevant documents is to think of all the different ways a topic may be described or referenced. At the same time, some documents are protected from scrutiny, and counsel (or the judge) may seek to limit the scope of the search so as not to overburden the producing party. ML threads this needle using supervised and unsupervised learning. Companies like CS Disco, which went public recently, provide AI-powered discovery services to law firms across the US.
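
To make the idea concrete, here is a minimal sketch in Python (using scikit-learn) of the supervised side of this approach, often called predictive coding or technology-assisted review: an attorney labels a small seed set of documents, a model learns from those labels, and the remaining documents are ranked for prioritized review. The documents and labels below are hypothetical, and commercial platforms such as CS Disco use far more sophisticated pipelines.

```python
# Toy predictive-coding sketch: learn from attorney-labeled seed documents,
# then rank unreviewed documents by predicted responsiveness.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set reviewed by a human attorney (1 = responsive).
seed_docs = [
    "Email re: pricing agreement with Acme, attaching signed term sheet",
    "Lunch menu for the office holiday party",
    "Draft amendment to the Acme supply contract, redlined",
    "IT notice: scheduled server maintenance this weekend",
]
seed_labels = [1, 0, 1, 0]

# Unreviewed documents pulled from the producing party's archive.
unreviewed_docs = [
    "Acme term sheet v3 with revised pricing schedule",
    "Reminder: submit your parking validation forms",
]

# Turn text into term-frequency features and fit a simple classifier.
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Rank unreviewed documents so attorneys see likely-responsive ones first.
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

In practice, attorneys review and correct the model's rankings in iterative rounds, and the model is retrained as labels accumulate.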

Another area where AI is already used extensively in the practice of law is legal research. Practicing attorneys may not even be aware they are using AI here, since it has been seamlessly woven into many research services. One such service is Westlaw Edge, launched by Thomson Reuters more than three years ago. The keyword or Boolean search approach that was the hallmark of the service for decades has been augmented by semantic search, meaning the machine learning algorithms try to understand the meaning of the words rather than simply match them to keywords. Another AI-powered feature of Westlaw Edge is Quick Check, which analyzes a draft argument to surface further insights or relevant authority that may have been missed. Quick Check can even detect when a cited case has been indirectly overturned.
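
The difference between keyword and semantic search can be illustrated with a short sketch. The example below uses the open sentence-transformers library and a small public embedding model; the headnotes and query are invented, and Westlaw Edge's actual implementation is proprietary.

```python
# Toy contrast between keyword matching and semantic (embedding-based) search
# over invented case headnotes. Requires the sentence-transformers package.
from sentence_transformers import SentenceTransformer, util

headnotes = [
    "Employer held vicariously liable for torts committed by an employee "
    "acting within the scope of employment.",
    "Motion to compel arbitration denied for lack of mutual assent.",
]
query = "Is a company responsible for harm caused by its workers on the job?"

# Keyword overlap: the query shares almost no literal terms with the
# relevant headnote, so a naive keyword match scores poorly.
for h in headnotes:
    overlap = set(query.lower().split()) & set(h.lower().split())
    print(f"keyword overlap: {len(overlap):2d}  {h[:55]}")

# Semantic search: embeddings capture meaning, so "company responsible for
# harm caused by its workers" lands near "employer vicariously liable".
model = SentenceTransformer("all-MiniLM-L6-v2")  # small public model
scores = util.cos_sim(model.encode(query), model.encode(headnotes))[0]
for score, h in sorted(zip(scores.tolist(), headnotes), reverse=True):
    print(f"semantic score: {score:.2f}  {h[:55]}")
```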

Automated Legal Scholars

AI can generate content as well as analyze it. Unlike the AI used to power self-driving cars, where mistakes can have fatal consequences, generative AI does not have to be perfect every time. In fact, the unexpected and unusual artifacts associated with AI-created works are part of what makes them interesting. AI approaches the creative process in a fundamentally different way than humans, so the path taken or the end result can sometimes be surprising. This aspect of AI is called “emergent behavior.” Emergent behavior may lead to new strategies for winning games, the discovery of new drugs, or simply novel ways of expressing ideas. In the case of written content, human authors are still needed to manage the creative process, selecting which of the many AI-generated phrases or versions to use.

Much of this is possible due to new algorithms and enormous AI models. GPT-3, created by OpenAI, is one such model. GPT-3 is a generative language model that predicts the next token in a sequence of text. It is a transformer, meaning it takes sequences of data in context, like a sentence, and focuses attention on the most relevant portions to extend the work in a way that seems natural, expected and harmonious. What makes GPT-3 unusual is that it is a pre-trained model, and it is huge: roughly 175 billion parameters, trained on a corpus of roughly half a trillion tokens of text.
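
GPT-3 itself is available only through OpenAI's API, but the underlying idea — repeatedly predicting a likely next token to extend a prompt — can be sketched with a much smaller open model such as GPT-2 via the Hugging Face transformers library. The prompt below is an invented contract opening, purely for illustration.

```python
# Next-token generation with a small open GPT-style model (GPT-2).
# GPT-3 is accessed through OpenAI's API, but the core idea is the same:
# given a prompt, repeatedly predict a likely next token and append it.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Invented prompt, purely for illustration.
prompt = "This Non-Disclosure Agreement is entered into by and between "
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```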

This approach has already been used in creative writing and journalism, and there are now many generative text tools in that space, some built on GPT-3. With a short prompt, an AI writer can create a story, article or report — but don’t expect perfection. Sometimes the AI tool wanders into random topics or ideas, and since AI lacks human experience, it may introduce factual inaccuracies or strange references.

For AI to draft legal contracts, for example, it will need to be trained to be a competent lawyer. This requires that the creator of the AI collect data on how various versions of contract language have actually performed and annotate it, a process called “labeling.” The labeled data is then used to train the AI to generate a good contract. However, the legal performance of a contract is often context-specific, and it varies by jurisdiction and with an ever-changing body of law. Moreover, most contracts are never seen in a courtroom, so their provisions remain untested and private to the parties. Generative AI systems trained on contracts therefore risk amplifying bad legal work as much as good. For these reasons, it is unclear how AI contract writers can get much better any time soon. AI tools simply lack the domain expertise and precision of language to be left to work independently. While these tools may be useful for drafting language, human professionals are still needed to review the output before it is used.

Judge-Bots

Another novel use of AI is predicting legal outcomes. Accurately assessing the likelihood of a successful outcome can be very valuable: it helps an attorney decide whether to take a case on contingency, how much to invest in experts, or whether to advise a client to settle. Companies such as Lex Machina use machine learning and predictive analytics to draw insights about individual judges and lawyers, as well as the legal case itself, in order to predict behaviors and outcomes.
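
At its simplest, outcome prediction is a classification problem over features of a case. The sketch below trains a toy logistic regression on invented historical cases; the features, data and model are hypothetical and do not reflect how Lex Machina or any other vendor actually works.

```python
# Toy outcome predictor: logistic regression over invented case features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per past case:
# [judge's historical plaintiff win rate, years to trial, 1 if patent case]
X_train = np.array([
    [0.62, 1.9, 1],
    [0.35, 2.4, 0],
    [0.58, 1.1, 1],
    [0.30, 0.7, 0],
])
y_train = np.array([1, 0, 1, 0])  # 1 = plaintiff prevailed

model = LogisticRegression().fit(X_train, y_train)

# Estimate the chance a new case succeeds, to inform contingency-fee,
# expert-spend and settlement decisions.
new_case = np.array([[0.55, 1.5, 1]])
print(f"Estimated win probability: {model.predict_proba(new_case)[0, 1]:.2f}")
```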

A more concerning use of AI is in advising judges on bail and sentencing decisions. One such application is Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). COMPAS and similar AI tools are used by criminal judges in many states to assess the recidivism risk of defendants or convicted persons when deciding on pre-trial detention, sentencing or early release. There is much debate about the fairness and accuracy of these systems. According to a ProPublica study, such assessment tools appeared biased against Black defendants, who were far more likely than white defendants to be incorrectly flagged as likely to reoffend.[1] Equivant, the company that developed COMPAS, sought to refute the ProPublica analysis and rejected its conclusions about racial bias.[2]

Regardless, using AI in this context may reflect, or even amplify, the inherent bias in the data of the criminal justice system. The data used to train the ML models is based on actual arrests and conviction rates that may be slanted against some populations. Thus it may enshrine past injustices, or worse, falsely cloak them in the vestment of computer-generated objectivity.
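
One way to see the issue is to audit a risk tool's error rates by group, which is essentially what the ProPublica analysis did: it compared how often people who did not reoffend were nonetheless flagged as high risk. The records below are invented and only illustrate the calculation.

```python
# Minimal fairness audit: compare false positive rates across groups,
# the disparity at the heart of the ProPublica analysis of COMPAS.
# The records below are invented and only illustrate the calculation.
import pandas as pd

records = pd.DataFrame({
    "group":             ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged_high_risk": [1,   1,   0,   0,   1,   0,   0,   0],
    "reoffended":        [1,   0,   0,   0,   1,   0,   0,   1],
})

# False positive rate: share of people who did NOT reoffend but were
# nonetheless flagged as high risk, computed separately for each group.
did_not_reoffend = records[records["reoffended"] == 0]
fpr_by_group = did_not_reoffend.groupby("group")["flagged_high_risk"].mean()
print(fpr_by_group)
# A large gap between groups means the tool's mistakes fall more heavily
# on one population, even if overall accuracy looks similar.
```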

Is AI Ready to Practice Law?

AI raises a host of questions in the context of the legal profession:

  • Will the failure to use AI in some aspects of the law (like discovery) ever amount to malpractice? For example, if not using AI is shown to slow the discovery process or to result in incomplete disclosures, a professional obligation to use AI in discovery may arise. 
  • Should criminal defendants have a right to access AI tools if helpful to their case?
  • Do attorneys need to disclose their use of AI in a case? If so, do they need to disclose the training data or other inputs used to configure the ML models?
  • Does the lack of effective transparency of ML models make them inappropriate for some applications in the law?
  • How can we ensure there is no embedded bias, reflecting sexist or racist sentiments?

As a way to make the process of law faster and less prone to errors and omissions, AI is a welcome tool in the cause of justice. AI may offer a more efficient way to resolve civil cases while increasing predictability without creating a moral hazard.

Where it becomes more problematic is when AI is used to replace human judgment, especially in the criminal law context. AI is not ready for this for a number of reasons. For one, bias in the training data may be amplified and further institutionalized by the resulting ML models. We may be able to overcome this problem; indeed, the process of driving bias out of our training data may lead us to recognize and correct some of the inherent racism and sexism of our legal system.

However, there is also the due process problem of the lack of transparency and explainability in AI. One cannot cross-examine a deep learning artificial neural network… at least not yet! AI is a mirror to humanity, revealing some of our inherent flaws. The process of unwinding the reasons an AI makes a recommendation may lead us to better understand the reality and limitations of the explanations and rationalizations humans offer for their own decisions.

But more importantly, the idea of allowing algorithms to make liberty-depriving decisions may simply be unconscionable. It is not inconceivable that machine learning algorithms will begin to predict when a person is likely to commit a future crime with high confidence, like the science fiction movie Minority Report. Another compelling reason to limit the use of AI in the criminal context may be that judges, lawyers and society as a whole could grow to have too much trust in these algorithms. Even if humans retain ultimate decision authority, it is not uncommon for them to become overly reliant on technology-based recommendations, a phenomenon called automation bias. With AI, this trust may be especially misplaced since the actual capabilities of the technology may not be as “intelligent” as they seem.

Note: This article is based on a lecture given by Matthew Stepka at UC Berkeley School of Law in November 2021. The article will be cross-posted on his blog “Making Sense of AI.”


Sources and further reading:

AI Will Transform The Field Of Law

Artificial Intelligence: Robots Replacing Lawyers | by Acorn Money | Leislat.io

Courts and Artificial Intelligence

Erasing the Bias Against Using Artificial Intelligence to Predict Future Criminality: Algorithms are Color Blind and Never Tire

How AI Is Changing Contracts

How long before machines can write your contracts? | Contractbook Blog 

How Predictive Coding Makes E-Discovery More Efficient

Legal AI Software | Above the Law Non-Event

Predicting Outcomes of Legal Cases based on Legal Factors using Classifiers

Racist Data? Human Bias is Infecting AI Development | by John Murray

The Dawn of Fully Automated Contract Drafting: Machine Learning Breathes New Life Into a Decades-Old Promise 

The evolution of legal research | Legal Blog

The Possible Implications of GPT-3 to the Business of Law 

The Rise of Artificial Intelligence in the Legal Field

Can the criminal justice system’s artificial intelligence ever be truly fair?

What is Automation Bias? – Databricks 


[1] Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

[2] Available at: https://www.equivant.com/response-to-propublica-demonstrating-accuracy-equity-and-predictive-parity. ProPublica responded to Equivant’s critique with both a more general defense of its approach and conclusions and a technical response to Equivant’s methodological criticisms, and it later wrote about research by other parties that suggested it is not “possible to create a formula that is equally predictive for all races without disparities in who suffers the harm of incorrect predictions.”
