Ethical Implications of the Use of Legal Technologies by Innovative M&A Lawyers, Including Special Considerations for the Use of AI in M&A Transactions

Project Chair and Lead Author: Matthew R. Kittay, Fox Rothschild LLP
Subcommittee Chair and Co-Author: Daniel Rosenberg, Charles Russell Speechlys LLP
Key Contributors: Haley Altman, LEGA; Anne McNulty, CARET; David Wang, Cooley LLP
Peer Reviewer: David Albin, Finn Dixon & Herling LLP
Committee Chair: Michael G. O’Bryan, Morrison & Foerster LLP

“We’re often and in fact almost always way behind the curve on what is actually happening in the market. As a result, we’re backing into the regulation of the market by observing what is actually happening in the market.” – David Wang, Chief Innovation Officer, Wilson Sonsini Goodrich & Rosati

Goal. The goal of this guidance as originally published in November 2021 was to review the ethical implications of the use of legal technologies by M&A lawyers. In 2024, the team that authored the original publication updated this guidance to include special considerations for the use of AI in M&A transactions. While the group that developed this guidance understands that negotiating changes to contracts with many popular service providers is impractical in most scenarios, we believe that there are safe, productive and client-focused steps that can and should be taken by all attorneys to improve their workflows and their clients’ legal product. Assuming that most readers will accept this general premise, this guidance focuses on how to effectively counsel clients and provides items for action and consideration by attorneys, for example when clients (or lawyers on the other side of a transaction) ask to use a particular technology on a transaction.

Although the examples given in this guidance refer to M&A, much of it has wider application, including the concise list of key issues set out in Appendix A.

Key questions addressed include:

  • What are the ethical and other legal issues for lawyers to consider when engaging these technologies, with special considerations for AI technologies?
  • What are the ethical and practical considerations regarding “automation” and the “unauthorized practice of law”, with special considerations for AI technologies?
  • Where is data that lawyers upload onto technology platforms hosted, and what are the data sovereignty implications, with special considerations for AI technologies?
  • What rights (IP and other) do the technology platforms take over the data that lawyers upload, with focus on AI technologies?
  • What level of security/confidentiality should lawyers require from technologies that we use, with additional considerations for AI technology?
  • How can lawyers effectively evaluate legal technologies, with practical advice related to AI technology?

1.0 Framework.

The key ethical frameworks that underlie, and may encourage or even require, the use of technology in M&A practice are set out below. The American Bar Association’s Model Rules of Professional Conduct (the “Model Rules”), case law, and statutes help define the lawyer’s professional responsibility for utilizing technology in the practice of law, as well as the risks that must be addressed when certain technology is leveraged in the practice.

1.1 ABA Model Rules of Professional Conduct[1]. The specific Model Rules which govern or implicate requirements to use technology include: Rule 1.1 (duty of technological competence); Rule 1.5 (obligation not to collect unreasonable fees); and Rule 1.6 (duty of confidentiality).

1.1.1 Model Rule 1.1 – Duty of Technological Competence (Comment 8):

“To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.”

The profession has increasingly recognized a two-fold duty with respect to the use of technology: first, to assess a technology and determine whether it improves the services and benefits provided to a client; and second, to understand the technology well enough to ensure that its use does not jeopardize the confidentiality of client information.

1.1.2 Model Rule 1.5(a) – Obligation not to collect unreasonable fees:

“A lawyer shall not make an agreement for, charge, or collect an unreasonable fee or an unreasonable amount for expenses…”

For example, if a client needs exactly the same agreement duplicated, except with altered party names, dates, and contact information, a lawyer must consider what fees are reasonable to collect for the work.

1.1.3 Model Rule 1.6 – Confidentiality of Information:

(a) A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation or the disclosure is permitted by paragraph (b).

(b) A lawyer may reveal information relating to the representation of a client to the extent the lawyer reasonably believes necessary [as listed[2]];

(c) A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.

When applying this Model Rule, the information may require additional security measures, and potentially could prohibit the use of technology depending on criteria including: the sensitivity of the information; the likelihood of disclosure if additional safeguards are not employed; the cost of employing additional safeguards; the difficulty of implementing the safeguards; and the extent to which the safeguards adversely affect the lawyer’s ability to represent clients (e.g., by making a device or important piece of software excessively difficult to use).

Furthermore, when considering Model Rule 1.6, attorneys should consider obligations of confidentiality with respect to client data specific to the platform in question, taking into consideration, for example:

  • that technology platforms take different intellectual property rights over the data uploaded;
  • Formal Opinion 477R, which addresses securing communication of protected client information and related ethical considerations[3];

and, beyond ethical obligations, the following with respect to the data:

  • contractual obligations, regulatory and compliance obligations, IP rights, training, diligence of the vendors and client expectations and other business considerations.

Practically speaking, this means attorneys should, at a minimum, know where the data is; know that the client data is protected; know that they retain ownership of it; and maintain the ability to remove it from systems in a secure manner. By way of example, using a cloud service could violate non-disclosure agreements and potentially result in heavy fines and a loss of trust among clients, as discussed immediately below in Section 1.2. [4]

Additionally, as AI continues to be integrated in M&A practice, Model Rules regarding supervision are implicated.

1.1.4 Model Rule 5.1 (Responsibilities of a Partner or Supervisory Lawyer) and Model Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance). Pursuant to these rules, lawyers are required to oversee both lawyers and nonlawyers who help them provide legal services, to ensure that their conduct complies with the ABA’s Rules of Professional Conduct (“RPC”). To the point, a language change to Rule 5.3 in 2012 ensures that the rule covers nonlawyer “assistance” rather than “assistants.” The effect of this change was to expand the ethical obligation to non-human assistance, including work generated by technology (such as legal AI) that is used in the provision of legal services. [5] In short, a lawyer must supervise an AI legal assistant just as they would any other legal assistant.


1.1.5 ABA AI-Specific Resolutions and Guidance. The ABA also periodically passes resolutions providing additional guidance to lawyers and other professionals, and in the past few years it has passed three resolutions related to AI. All three are included for completeness, although Resolutions 112 and 604 are more relevant for this guidance, and the ABA’s first formal ethics opinion on generative AI is noted at the end of the list: [6]

    • ABA Resolution 112: Urges courts and lawyers to address the emerging ethical and legal issues related to the usage of AI in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI. (passed in August 2019)
    • ABA Resolution 700: Urges governments to refrain from using pretrial risk assessment tools unless data supporting risk assessment is transparent, publicly disclosed, and validated to demonstrate the absence of bias (passed in February 2022). Per the official Executive Summary, the resolution “advances the need to align court decisions on pretrial release from jail with the presumption of innocence by refraining from the use of risk assessment tools and pretrial release evaluations where data demonstrates continued conscious or unconscious racial and economic bias.”
    • ABA Resolution 604: Urges organizations that design, develop, deploy, and use AI systems and capabilities to follow several guidelines (passed in February 2023). Key aspects of Resolution 604 include:
      • Human Oversight and Control: Developers of AI should ensure their products, services, systems, and capabilities are subject to human authority, oversight, and control.
      • Accountability for AI Consequences: Organizations should be accountable for consequences related to their use of AI, including any legally recognizable harm or injury caused by their AI systems, unless they have taken reasonable steps to prevent such harm.
      • Transparency and Traceability of AI: Key decisions made regarding the design, risks, data sets, procedures, and outcomes underlying AI systems should be documented to ensure the transparency and traceability of AI systems.
      • Prevention of Discrimination and Bias: This includes efforts by various organizations and governmental bodies to ensure AI complies with anti-discrimination and privacy laws.
      • Legal Responsibility and AI: Legal responsibility for actions should not be shifted to computers or algorithms but should remain with responsible individuals and legal entities.
      • Guidance for Legal Professionals: Legal professionals should stay informed about AI-related issues, as understanding and addressing these issues is seen as part of their responsibility as lawyers.
    • ABA Formal Opinion 512[7]: In July 2024, the American Bar Association Standing Committee on Ethics and Professional Responsibility released its first formal opinion addressing the growing use of generative artificial intelligence (GAI) in the practice of law, pointing out that the Model Rules related to competence, informed consent, confidentiality and fees principally apply.

1.2 Laws and Regulations. In addition to the ethical obligations imposed by the Model Rules, there are several key legislative acts and case law decisions which lawyers need to consider.

1.2.1 Stored Communications Act (SCA). The Stored Communications Act (SCA), 18 U.S.C. §§ 2701 et seq., governs the disclosure of electronic communications stored with technology providers. Passed in 1986 as part of the Electronic Communications Privacy Act (ECPA), the SCA remains relevant to address issues regarding the privacy and disclosure of emails and other electronic communications.

Because the SCA is a privacy statute, diverse circumstances can give rise to SCA issues:

(a) Direct liability. The SCA limits the ability of certain technology providers to disclose information. It also limits third parties’ ability to access electronic communications without sufficient authorization.

(b) Civil subpoena limitations. Because of the SCA’s restrictions on disclosure, technology providers and litigants often invoke the SCA when seeking to quash civil subpoenas to technology providers for electronic communications.

(c) Government investigations. The SCA provides a detailed framework governing law enforcement requests for electronic communications. SCA issues often arise in motions to suppress and related criminal litigation. For example, a growing number of courts have found that the SCA is unconstitutional to the extent that it allows the government to obtain emails from an internet service provider without a warrant, in violation of the Fourth Amendment. See United States v. Warshak, 631 F.3d 266 (6th Cir. 2010).

1.2.2 Microsoft Case. Microsoft stored customer data in one of its data centers in Ireland, and U.S. prosecutors sought to compel Microsoft, under an SCA warrant, to produce data held on those Irish servers. The Second Circuit held that the SCA warrant could not reach data stored abroad, and the Supreme Court ultimately dismissed the case as moot after Congress passed the CLOUD Act, discussed below. See United States v. Microsoft Corp., 584 U.S. ___, 138 S. Ct. 1186 (2018).

1.2.3 The CLOUD Act. The Clarifying Lawful Overseas Use of Data Act (CLOUD Act) was passed in March 2018 in response to the Microsoft Case and clarified related data sovereignty issues, confirming that a company can determine data residency by designating, as part of its contracts and company policies, where information must be stored or reside. This legislation added to the complexity of data sovereignty laws (the laws to which a company’s data is subject) for multinational companies that store data in different regions, as it can conflict with U.S., U.K., EU (e.g., the GDPR), and Chinese data storage regulations.
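By way of illustration only, the following is a minimal sketch of the kind of technical designation contemplated above: pinning stored data to a single, contractually agreed region. It assumes AWS S3 via the boto3 library; the bucket name and region are hypothetical placeholders, and region pinning addresses only where the data sits, not whether it may be reached by a CLOUD Act request.

```python
# Illustrative sketch: designate a data-residency region for stored materials.
# Assumes AWS S3 via boto3; bucket name and region are hypothetical.
import boto3

RESIDENCY_REGION = "eu-west-1"  # region designated by contract/policy (assumed)

s3 = boto3.client("s3", region_name=RESIDENCY_REGION)

# Creating the bucket with an explicit LocationConstraint ensures that objects
# written to it are stored in the designated region.
s3.create_bucket(
    Bucket="example-deal-room-data",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": RESIDENCY_REGION},
)
```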

1.2.4 Consumer Data Protections. There are of course also consumer protection laws and regulations protecting data and determining ownership. These regulations limit disclosure of information and protect people’s data. The General Data Protection Regulation (GDPR) illustrates the tension between pushing forward with the automation of legal processes and protecting against legal ethics and malpractice concerns. The GDPR’s purpose is to return control of personal data to the individual through uniform regulation of data processing and export.

1.2.5 Global Problems for Global Law Firms. For law firms with offices in different regions and with different carriers, each office may be subject to different data storage rules. This requires that law firms consider data sovereignty rules in connection with their cloud services providers and the related data licenses for global entities.

1.3 U.S. State Bar Association Guidance on AI

1.3.1 To date, several states have published guidance on ethics and AI for lawyers, including the California Bar and the Florida Bar discussed below. While these publications are not M&A specific, they apply to all lawyers, including transactional attorneys, and need to be considered.

1.3.2 The California Bar’s “Recommendations from Committee on Professional Responsibility and Conduct on Regulation of Use of Generative AI by Licensees” was adopted on November 16, 2023[8]. It is described as an “interim step to provide guidance on this evolving technology while further rules and regulations are considered,” according to the professional conduct committee that drafted the guidance, and includes guidance which:

i Calls for lawyers to consider disclosing use of generative AI to their clients.

ii Advises lawyers not to charge hourly fees for time saved by using the tech tools.

iii Urges lawyers to ensure that humans are scrutinizing AI-generated outputs for inaccuracy and bias.

iv Includes a call to work with state lawmakers and the California Supreme Court to reexamine the definition of unauthorized practice of law in light of generative AI.

v Highlights another danger: The technology has the potential to help close the access to justice gap, but “it could also create harm if self-represented individuals are relying on generative AI outputs that provide false information,” the professional conduct committee warned.

1.3.3 On January 19, 2024, the Florida Bar released Ethics Opinion 24-1 regarding the use of generative artificial intelligence in the practice of law[9]. Opinion 24-1 provides both positive and cautionary statements regarding emerging AI technologies. The Florida Bar’s guidance affirms that a lawyer may ethically utilize generative AI technologies, but only to the extent that the lawyer can reasonably guarantee compliance with ethical obligations, and it focuses on topics including confidentiality, oversight, fees and costs, and advertising:

i Considerations to protect confidentiality include:

      • Obtain the affected client’s informed consent if the utilization would involve the disclosure of any confidential information.
      • Sufficiently understand the technology to satisfy ethical obligations, including whether the program is “self-learning.”
      • Ensure that the provider has an obligation to preserve the confidentiality and security of information, that the obligation is enforceable, and that the provider will notify you in the event of a breach or service of process requiring the production of client information.
      • Investigate the provider’s reputation, security measures, and policies, including any limitations on the provider’s liability.
      • Determine whether the provider retains information you submit before and after the discontinuation of services or asserts proprietary rights to the information.
      • Only submit information “necessary to complete work for a particular client” and no information about other clients.
      • Take reasonable precautions to avoid the inadvertent disclosure of confidential information.
      • Not attempt to access information previously provided to the generative AI by other lawyers.

ii Generative AI oversight should:

      • Ensure that your law firm has policies to reasonably assure that the conduct of the AI is compatible with your professional obligations.
      • Review the work product of a generative AI.
      • Verify the accuracy and sufficiency of all research performed by generative AI.
      • Carefully consider what functions may ethically be delegated to generative AI (e.g., nothing that could constitute the practice of law).
      • Take steps to ensure that a lawyer-client relationship is not created without your knowledge when individuals engage with AI.

iii Legal Fees and Costs considerations include:

      • Inform a client, preferably in writing, of your intent to charge a client the actual cost of using generative AI.
      • Ensure that the charges are reasonable and are not duplicative.

iv And finally, regarding lawyer advertising:

      • Be careful when using a generative AI chatbot for advertising and intake purposes, to avoid providing misleading information.
      • Inform prospective clients that they are communicating with an AI program and not with a lawyer or law firm employee.
      • Consider including screening questions that limit the chatbot’s communications if a person is already represented by another lawyer.
      • You may advertise your use of generative AI, but you cannot claim that your generative AI is superior to those used by other lawyers or law firms unless your claims are objectively verifiable.

1.3.4 Additionally, New Jersey[10], Michigan[11], and Pennsylvania[12] have also recently published guidance on ethical implications for the use of AI.

2.0 Taxonomy of Data.

For any particular technology, lawyers need to take a step back and consider several issues, including: what the technology is actually trying to accomplish; what the business goal of the technology is; what your goal in the representation is; and how those things interact. The professional responsibilities and consequences implicated will differ depending on the technology and the type of interaction.

2.1 Automation.

There is no practice too complex to be at least partially automated; it is a matter of cost. Technology can solve many of the inefficiencies involved in drafting documents, and the costs of doing so are decreasing over time. Drafting a complex, well-functioning and technically coherent merger agreement, for example, may be very hard and perhaps beyond the limits of technology even in theory. But automation does not have to “fully” automate everything about a process before the technology fundamentally disrupts the status quo. If even 50% of a merger agreement becomes automatable, it will change how these agreements are drafted and how merger work is priced.

For example, AI can assist with the drafting of a merger agreement in many ways:

    • An AI tool could analyze all historical merger agreements in a firm’s document management system and learn from their content (including changes in content depending on the lawyers who drafted them, the nature of the deal, the size of the deal and other characteristics) – this learning can then be used to quickly generate language (either individual provisions or whole contracts) for new transactions.
    • An AI tool could learn from an individual lawyer’s changes and comments over time, and suggest similar changes in new contracts.
    • An AI tool could flag inconsistencies in drafting by conducting advanced proofreading, such as identifying incorrect use of defined terms or missed references.
    • During negotiation, an AI tool could compare opposing counsel’s markups (or their first draft) against a law firm’s playbook and identify departures from the law firm’s preferred positions.

While none of these examples constitute complete automation of drafting, each can provide material value and, when coupled with an experienced lawyer reviewing the AI’s output, significantly decrease drafting time. Careful review is particularly important in order to identify any instances where AI has “hallucinated” language that is inaccurate or nonsensical. Therefore, law firms considering the purchase of any AI tools should carefully evaluate both the accuracy of the underlying AI and the workflow tools provided by the system to facilitate the required human review. A sketch of the defined-term proofreading example follows.
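To make the “advanced proofreading” example above concrete, the following is a minimal sketch, not any vendor’s product, of a defined-term consistency check: it flags capitalized terms that are used but never defined in quotes, and quoted definitions that never reappear. Real tools are far more sophisticated (handling sentence-initial capitals, cross-references, and schedules).

```python
# Minimal sketch of a defined-term consistency check (illustrative only).
import re

def check_defined_terms(text: str) -> dict:
    # Terms introduced in quotes, e.g. (the "Purchase Price").
    defined = set(re.findall(r'"([A-Z][A-Za-z0-9 -]{2,40})"', text))
    # Capitalized multi-word phrases used in the body, e.g. Closing Date.
    used = set(re.findall(r"\b([A-Z][a-z]+(?: [A-Z][a-z]+)+)\b", text))
    return {
        "used_but_never_defined": sorted(used - defined),
        "defined_but_apparently_unused": sorted(
            t for t in defined if text.count(t) <= 1  # appears only in its definition
        ),
    }

sample = (
    'This agreement (the "Agreement") is made by Buyer and Seller. '
    "Under this Agreement, Buyer shall pay the Purchase Price on the Closing Date."
)
print(check_defined_terms(sample))
# {'used_but_never_defined': ['Closing Date', 'Purchase Price'],
#  'defined_but_apparently_unused': []}
```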

2.2 Ethical Issues Arising from Structured Data.

2.2.1 Process elements, workflow management, and due diligence software all create deal process efficiencies but also have ethical implications. Often, a lawyer can invite collaborators, which can involve confidentiality breaches as well as eliminate attorney-client privilege. Closing automation tools require the data to be structured to automate the closing process, which requires the software to store facts about the specific transaction in order to close the deal. Likewise, transaction technology for populating contracts must process data about how a document is assembled and then incorporate rules into its system. Initial data storage, active management during the deal, data retention and, ultimately, data destruction all need to be considered.

2.2.2 Examples of Implications on “Reasonable” Fees and the “Unauthorized Practice of Law” – Automated Cap Tables and NDAs. The inputs, outputs and procedures used to maintain a cap table in a transaction map closely onto what computer programs and programmers do with data. There is now working software that manages cap tables for private companies, public companies, and the individuals at those companies. A CEO or HR manager of a startup can access information directly, live, at any time, and handle transactions on the platform themselves if they choose. Rules go into the system and processes then operate on them: data inputs in a digitalized transaction automatically populate form documents, check whether the company is complying with limitations such as the available shares in the plan, generate consents directly, and then automatically update the cap table (a simplified sketch of such a plan-limit check appears below). Other software, for example, undertakes automatic reviews of an NDA: the non-lawyer client or lawyer uploads the NDA, and the software will mark up the document, spot the issues, produce an issues list by comparing the document against the company’s playbook, and recommend edits to strengthen the client’s position.
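The plan-limit check referenced above can be sketched in a few lines. This is illustrative only and not any cap-table vendor’s logic; the plan figures are invented.

```python
# Illustrative sketch of an automated equity-plan compliance check.
from dataclasses import dataclass

@dataclass
class EquityPlan:
    name: str
    authorized_shares: int
    issued_shares: int

    @property
    def available_shares(self) -> int:
        return self.authorized_shares - self.issued_shares

def validate_grant(plan: EquityPlan, grant_shares: int) -> None:
    # Halt the automated workflow (consent generation, cap table update)
    # if the grant would exceed the plan's remaining share pool.
    if grant_shares > plan.available_shares:
        raise ValueError(
            f"Grant of {grant_shares:,} shares exceeds the "
            f"{plan.available_shares:,} shares remaining under {plan.name}."
        )

plan = EquityPlan("2024 Stock Plan", authorized_shares=1_000_000, issued_shares=950_000)
validate_grant(plan, 40_000)    # within the pool: workflow proceeds
# validate_grant(plan, 60_000)  # would raise ValueError and stop the workflow
```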

2.2.3 No lawyer is involved in either the cap table system or the NDA review, and these technologies are deployed hundreds of times a day all over the country. There may be a one-time licensing fee or monthly contract for the service, no matter how many times it is utilized. How much can a law firm charge? If it is more than a minimal amount per issuance, is the firm’s fee reasonable and consistent with Model Rule 1.5(a)? And furthermore, are the software developers, or the individuals and companies that license the software, engaged in the unauthorized practice of law? In reality, clients will likely always want their attorney to scrutinize and augment the output to ensure accurate and excellent legal work, but these questions should still be considered.

3.0 Ownership of Data, IP Rights and Client Rights.

3.1 Types of Data. When evaluating ownership issues, there are three types of “data” to consider, and the critical and harder questions relate to Mixed Data:

3.1.1 User-Created Data. For example, a photographer is clearly the owner of a picture they take, and that ownership is protected by copyright laws. In the legal-services context, the attorney work product (the documents themselves, any work done on those documents, comments, tags, and any records generated on the basis of that work) is User-Created Data.

3.1.2 Servicer-Created Data. Data created before being uploaded into the cloud carries clear ownership and intellectual property claims by its creator (or by someone working on a paid basis for a business or organization), and is either licensed or sold to the end-user.

3.1.3 Mixed Data. “Gray areas” that result, for example, from data that is modified or processed. In these cases, data created within the cloud could come with strings attached. It is incumbent on the end-user to properly claim and protect this data and intellectual property, which is difficult because legal processes have not kept pace with the technology.

3.2 Laws and Lawyers Protecting Data Rights.

3.2.1 Laws and Regulations. There are of course laws and regulations protecting data and determining data ownership. These regulations limit disclosure of information and protect people’s data (supra, Section 1.2). Relying on laws and regulations alone, however, is not sufficient for an attorney to discharge their ethical obligations.

3.2.2 Contractual Protection. Underlying ownership needs to be clarified in license agreements: where the data must be located, the privacy that must be maintained, and how the data can be used. Key concerns include:

(a) protection of the confidential data, particularly if it pertains to client confidential information, and

(b) controlling what technology providers do when they receive a lawyer’s data, including what happens to pieces of information they need to collect and store to provide the contracted service.

To protect confidentiality, privilege and work product, lawyers need to own the derivative works that the technology produces, and therefore usage terms and conditions need to be reviewed very carefully.

3.3 Artificial Intelligence (“AI”) Tools.

3.3.1 AI tools are key digital assets for lawyers. But most software, and the software that is most easily accessible, is built for consumers, not lawyers. These tools are typically free and produce mixed data. Foreign language translation tools, where a machine and a human may do the translation together to teach the software to be more accurate over time, have presented specific concerns. The resulting “derivative works” often serve meaningful, even beneficial, purposes: the vendor may want to analyze and use the customer data to provide tailored services to the customer; process and aggregate the customer data for commercial exploitation by creating new products and services; use the processed data to enhance its internal operations, products or services; or license the data to third parties. “Free” tools, however, may collect and use data in ways the end-users did not contemplate when they used the software. [13]

3.3.2 In addition, lawyers often need to review large volumes of contracts (and other documents) in the context of transactions or regulatory reviews, or for the purpose of producing market intelligence or deal term studies. AI-assisted contract review software can facilitate these processes. When using this kind of software, there are two possibilities: the system can find what the lawyers need it to find out of the box (either using more traditional AI models or, potentially, newer generative AI technologies), or the lawyers will need to embed their own knowledge into the platform by “teaching” it to find the information they need.

Lawyers can teach AI systems to find custom information by identifying examples of that information in a document set that is representative of the types of documents they will need to review in practice. The software studies those examples, identifies the pattern, and produces a “model,” which is then used to find that information in new documents imported into the software.

The process for using AI-assisted contract review software is generally straightforward: upload the contracts for review into the software, which then automatically extracts information from those contracts (via either pre-built models or custom models built by the lawyer’s organization) for the lawyer to review. If more junior lawyers are doing the initial review, they can flag problematic provisions for second-level review. A simplified sketch of the “teaching” step follows.
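As a simplified sketch of the “teaching” step described above, the following trains a small text classifier on lawyer-labeled example clauses and uses it to flag similar provisions in new documents. It uses scikit-learn rather than any commercial contract-review platform, and the clauses and labels are invented.

```python
# Illustrative sketch: "teach" a model to find change-of-control provisions
# from labeled examples, then flag similar clauses for human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = [
    ("This Agreement may be terminated upon a change of control of either party.", 1),
    ("Neither party may assign this Agreement in connection with a merger.", 1),
    ("All notices shall be in writing and delivered to the addresses below.", 0),
    ("This Agreement is governed by the laws of the State of Delaware.", 0),
]
texts, labels = zip(*examples)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)  # the "teaching" step: learn from labeled examples

new_clause = "Prior written consent is required for any merger or change of control."
if model.predict([new_clause])[0] == 1:
    print("Flagged for second-level review:", new_clause)
```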

3.3.3 In considering the implications of using this kind of software, both rights to the uploaded documents and rights to the custom models must be considered. While in-house lawyers may be comfortable giving software providers copies of, or rights to, their documents where contractually permissible, law firm lawyers providing services to their clients likely would not be (at least not without their clients’ consent). Any software provider that serves professional services organizations would face an uphill battle if it attempted to take ownership of, or rights to, data that typically comes from its customers’ customers.

It is also important to consider how software license agreements deal with any intellectual property created when lawyers embed their own knowledge into the software by building custom models. Custom models may represent the knowledge of expertly trained lawyers, and those lawyers’ organizations may want to control any use and/or sharing of that knowledge. While the code underlying the model may be retained by the software provider, it is important to confirm that the rights to use and share custom-built models match the firm’s expectations. Furthermore, consider whether a firm that creates a custom model while completing transaction A for Client A has the right to use that model to complete transaction B for Client B. There may be sensitive information in the custom model that should not leak into the work for Client B, and/or the permission granted by Client A for the use of AI on transaction A might be too narrow to allow that learning to be reused for other transactions, especially for other clients.

To assist in review and negotiation of license agreements, please see attached Appendix A – Issues for Lawyers to Consider in Legal Technology Agreements.

3.3.4 AI’s Impact on Confidentiality and Non-Disclosure Obligations for M&A counterparties and their legal counsel.

Considering the recently published guidance from the California and Florida bars discussed above, there are myriad issues for lawyers to evaluate throughout the M&A process. Beginning with the NDA that starts the M&A process, confidentiality obligations may prevent the receiving party and its counsel from submitting due diligence materials to an AI program. Lawyers need to consider adding disclosures and permissions around the potential use of AI in evaluating NDA-covered “confidential” materials. A counterparty (and by extension, its legal counsel) may well breach confidentiality obligations by submitting a target’s contracts to an “open” AI program that learns from and retains information about the materials it digests. And while it is reasonably clear that putting your own client’s materials into an AI platform like ChatGPT raises concerns about protecting your own client’s data, lawyers need to equally consider derivative issues, such as limitations on putting the counterparty’s materials into an AI platform.

Similarly, at the end of the process, counsel needs to consider the client’s obligation and the law firm’s obligation (and ability) to return or destroy materials, as often agreed in confidentiality provisions. We are aware of no method on publicly available AI platforms to claw back and destroy materials that have been put into an AI program. Language such as “to the extent technically feasible” seems thin to rely on, and the typical carveout allowing a single digital copy for records retention almost certainly does not apply. Narrower solutions, especially those specifically targeted at the legal profession, can provide control over how, where and for how long data is stored by default, as well as allow for outright deletion of data where necessary, but it is important to diligence each solution and its contractual framework separately to ensure this is the case.

To be safe, counsel should not treat the absence of a clear prohibition from the target as license to use an AI program on its materials. Rather, explicit permission to use AI platforms should be obtained. While it may initially be hard to imagine a counterparty granting such permission (in one LexisNexis survey, 66% of corporate clients expected law firms to use cutting-edge technology, including generative AI tools, while only 38% approved of law firms using generative AI tools in their legal matters[14]), the time and cost efficiencies of reducing diligence from a weeks-long, human-intensive manual process to an AI-assisted one will incentivize the parties to mutually consent.

And while some of the concerns mentioned apply equally to uploading client data to the cloud, such as through cloud-based SaaS applications, AI both amplifies those issues and presents novel ones. Rather than merely being a repository of information in the cloud, AI synthesizes information and presents new positions as a result, with varying degrees of accuracy. Additionally, tracing and verifying accuracy may become more difficult over time, as AI becomes more mainstream and more often relied on, with its results fed into documents that then feed back into the models that train the AI. Taken to an extreme, poorly checked AI has the ability to change the state of the law or the state of the market in M&A. For example, if a law review article filled with hypotheticals trains AI models to suggest “anti-sandbagging” as a “market position,” more lawyers might start to include that provision in their documents and point to it as “increasingly becoming market” (even though it is almost never included in deals). Over time, anti-sandbagging could become more prevalent; unchecked, AI would in fact change the state of the market.

3.3.5 Taking AI In-House: Law Firms React to the Emerging Issues.

In response to the ethical and legal issues considered above that result from AI use in their firms, a growing number of law firms have built their own generative AI-powered chatbots to experiment with and to assist their attorneys internally. [15]

4.0 Conclusion.

There is no “one size fits all” solution to solve for the ethics issues presented when lawyers engage technology. This guidance, however, captures the issues and serves as a framework for evaluating these issues as they continue to develop. By focusing on these issues, law firms and their attorneys can continue to work with their clients and the legal industry, not just in compliance with their ethical obligations, but also as thought leaders at the intersection of law and technology.


APPENDIX A

Issues for Lawyers to Consider in Legal Technology Agreements

Legal technology agreements are not always abundantly clear, but consider addressing the following issues:

  1. Three types of data – original, derived and usage data
  2. How this data can be used, other than for the benefit of the system
  3. What “access rights” non-lawyers have
  4. What can the software provider aggregate and extrapolate from the data, and specifically, in instances where a law firm is uploading data into an AI system, confirmation that either the data will only be used to train models for the law firm’s use or the data will not be used for training at all
  5. How data is being delivered between the parties
  6. Where the data is being stored, to inform compliance with data sovereignty and data residency requirements
  7. Specify how data can be stored for each of your regions, and then under the global framework
  8. How does the user access the data across different regions without inadvertently pulling data from one location into another, in light of privacy policies and other protocols?
  9. How does the information get into the system?
  10. Storage requirements
  11. Data retention requirements
  12. Removal requirements and controls
  13. Control of data the lawyer inputs
  14. Control of new data and right to remove (complicated by cloud technology from different providers), and as implicated by GDPR
  15. Specific provisions regarding how data can be used, what derivative works can be created, and what sort of aggregated, de-identified data can be leveraged under the contract
  16. If the agreement is silent, assume this information can be used in different ways
  17. “Derivative Works” provision, critical because part of the benefit of the solution is to provide the lawyer a derivative work, such as a fully compiled PDF version of the document with its appropriate signature pages; this is difficult because the vendor wants to make sure that the lawyer can do everything needed or promised by the technology
  18. Clarify no other uses of the data
  19. Add specific permissions around client confidential information
  20. Data residency requirements that tell the lawyer exactly where the data will be and cannot be shifted between regions
  21. Specify that all “Customer Data” (or “Company Content”) is owned by the customer and define customer data; any exceptions must be clearly spelled out.
  22. Confirmation whether there are exceptions to the otherwise applicable rules around data storage and access for the purposes of abuse monitoring or similar (e.g. to guard against hate speech or terrorism).

[2] (1) to prevent reasonably certain death or substantial bodily harm; (2) to prevent the client from committing a crime or fraud that is reasonably certain to result in substantial injury to the financial interests or property of another and in furtherance of which the client has used or is using the lawyer’s services; (3) to prevent, mitigate or rectify substantial injury to the financial interests or property of another that is reasonably certain to result or has resulted from the client’s commission of a crime or fraud in furtherance of which the client has used the lawyer’s services; (4) to secure legal advice about the lawyer’s compliance with these Rules; (5) to establish a claim or defense on behalf of the lawyer in a controversy between the lawyer and the client, to establish a defense to a criminal charge or civil claim against the lawyer based upon conduct in which the client was involved, or to respond to allegations in any proceeding concerning the lawyer’s representation of the client;  (6) to comply with other law or a court order; or (7) to detect and resolve conflicts of interest arising from the lawyer’s change of employment or from changes in the composition or ownership of a firm, but only if the revealed information would not compromise the attorney-client privilege or otherwise prejudice the client. 

[3] ABA Formal Opinion 477R: Securing Communication of Protected Client Information.

[4] Note, however, various potential benefits from technology: lower fees for clients; increased client retention; more accurately priced projects and the ability to show the breakdown of such fees; recruitment-associates want technology efficiencies, and they may prefer to perform tasks offsite and/or through automated systems instead of manually.

[8] https://aboutblaw.com/bbpZ. Last accessed March 10, 2024.

[13] See, e.g., https://www.theguardian.com/technology/2019/jul/26/apple-contractors-regularly-hear-confidential-details-on-siri-recordings.
