Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies

Imagine receiving a layoff notice because an AI evaluation tool predicted a higher risk of future underperformance due to your age. Or picture repeatedly having job applications rejected, only to find out the cause was an AI tool screening out candidates with a disability. These are just a few examples of real-world AI bias in the realm of hiring and employment, a growing issue that has already resulted in several notable lawsuits. How can companies effectively take advantage of AI in their employment practices while minimizing legal risks? This article discusses employment laws applicable to AI discrimination and provides practical strategies for companies to prevent potential government investigations, lawsuits, fines, class actions, or reputational damage.

A. AI Bias

A recent IBM article defines AI bias as “AI systems that produce biased results that reflect and perpetuate human biases within a society, including historical and current social inequality.”[1] Two major technical factors contribute to AI bias:

  1. Training Data: AI systems learn their decision-making from training data; when those data overrepresent or underrepresent certain groups, the resulting system can produce biased results (the short sketch after this list illustrates the dynamic). A typical example is a facial recognition algorithm trained on data that overrepresent white people, which may produce less accurate results for, and thus racial bias against, people of color. Mislabeled data, or data that reflect existing inequalities, can compound these issues: an AI recruiting tool trained on a dataset in which some applicant qualifications were incorrectly labeled could reject qualified candidates whose résumés the tool failed to interpret accurately.
  2. Programming Errors: AI bias may also arise from coding mistakes, wherein a developer inadvertently or consciously overweights certain factors in algorithmic decision-making due to their own biases. In one example discussed in the IBM piece, “indicators like income or vocabulary might be used by the algorithm to unintentionally discriminate against people of a certain race or gender.”
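
For readers who want a concrete feel for the training-data problem, the following minimal Python sketch (ours, not the IBM article’s) trains a simple classifier on synthetic applicant data in which one group is heavily underrepresented; every name and number is invented for illustration.

    # Minimal illustration (invented data): a model trained on data that
    # underrepresents group B is noticeably less accurate for group B.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_applicants(n, group):
        # The true "qualified" signal is the same for both groups, but group
        # B's feature distribution is shifted, so a model fit mostly to
        # group A generalizes poorly to group B.
        shift = 0.0 if group == "A" else 1.5
        X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
        y = ((X[:, 0] - shift) + (X[:, 1] - shift) > 0).astype(int)
        return X, y

    # Training data overrepresents group A: 950 vs. 50 applicants.
    XA, yA = make_applicants(950, "A")
    XB, yB = make_applicants(50, "B")
    model = LogisticRegression().fit(np.vstack([XA, XB]), np.concatenate([yA, yB]))

    # Evaluated on balanced test sets, accuracy for group B lags group A.
    for group in ("A", "B"):
        X_test, y_test = make_applicants(1000, group)
        print(f"Group {group} accuracy: {model.score(X_test, y_test):.2f}")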

B. AI Employment Discrimination

Companies have increasingly used AI tools to screen and analyze résumés and cover letters; scour online platforms and social media networks for potential candidates; and analyze job applicants’ speech and facial expressions in interviews.[2] In addition, companies are using AI to onboard employees, write performance reviews, and monitor employee activities and performance.[3] AI bias can occur in any of the above use cases, throughout every stage of the employment relationship—from hiring to firing and everything in between—and can result in discrimination lawsuits.

In one notable example, the Equal Employment Opportunity Commission (“EEOC”) settled its first AI hiring discrimination lawsuit in August 2023.[4] In Equal Employment Opportunity Commission v. iTutorGroup, Inc.,[5] the EEOC sued three companies providing tutoring services under the “iTutorGroup” brand name (“iTutorGroup”), alleging that iTutorGroup violated the Age Discrimination in Employment Act of 1967 (“ADEA”) because the AI hiring program it used “automatically reject[ed] female applicants age 55 or older and male applicants age 60 or older,” screening out over 200 applicants because of their age.[6] iTutorGroup subsequently entered into a consent decree with the EEOC, under which it agreed to pay $365,000 to the group of automatically rejected applicants, adopt antidiscrimination policies, and conduct training to ensure compliance with equal employment opportunity laws.

The ongoing Mobley v. Workday, Inc.[7] litigation, one of the first major class-action lawsuits in the United States alleging discrimination through algorithmic bias in applicant screening tools, presents another warning. The plaintiff, an African-American man over the age of forty with a disability, claims that Workday provides companies with algorithm-based applicant screening software that unlawfully discriminates against job applicants based on the protected characteristics of race, age, and disability, in violation of Title VII of the Civil Rights Act of 1964, the Civil Rights Act of 1866,[8] the ADEA, and the ADA Amendments Act of 2008 (“ADAAA”). On January 19, 2024, the court granted Workday’s motion to dismiss the case, with leave for the plaintiff to amend the complaint.[9] On February 21, 2024, the plaintiff filed an amended complaint outlining further details to support his claims.[10]

Acting with the kind of foresight that can prevent such lawsuits, Amazon took proactive measures in 2018, discontinuing an AI hiring algorithm after finding that it discriminated against women applying for technical jobs: trained on a dataset composed mostly of men’s résumés, the tool preferred applicants who used words more commonly found on men’s résumés, such as “executed” or “captured,” among other issues.[11]

These cases, along with Amazon’s decision to scrap its biased AI hiring tool, highlight the growing concern about algorithmic bias in recruitment. Given this evolving landscape, employers must carefully examine all applicable federal, state, and local laws, as well as EEOC guidelines, to ensure fair and unbiased hiring practices.

C. Governing Law

1. Federal Law

There is currently no federal law specifically targeting the use of AI in the employment context. However, most employers’ use of AI tools in their employment practices would be subject to federal laws prohibiting employment discrimination based on race, color, ethnicity, sex (including gender, sexual orientation, and gender identity), age, national origin, religion, disability, pregnancy, military service, and genetic information.

Below is a list of the primary federal laws a company must consider when evaluating AI-based employment evaluation tools. The most frequently litigated is Title VII, which applies to private employers with fifteen or more employees.

  1. Title VII of the Civil Rights Act of 1964 (“Title VII”)[12]: prohibits employment discrimination based on race, color, religion, sex (including gender, pregnancy, sexual orientation, and gender identity), or national origin.
  2. Section 1981 of the Civil Rights Act of 1866[13]: prohibits discrimination based on race, color, and ethnicity.
  3. The Equal Pay Act[14]: prohibits sex-based wage discrimination.
  4. The Age Discrimination in Employment Act[15]: prohibits discrimination based on age (forty and over).
  5. The Immigration Reform and Control Act[16]: prohibits discrimination based on citizenship and national origin.
  6. Title I and Title V of the Americans with Disabilities Act (“ADA”)[17] (including amendments by the Civil Rights Act of 1991 and the ADAAA): prohibit employment discrimination against qualified individuals with a disability, including those regarded as having a disability.
  7. The Pregnant Workers Fairness Act[18]: prohibits discrimination against job applicants or employees because of their need for a pregnancy-related accommodation.
  8. The Uniformed Services Employment and Reemployment Rights Act[19]: prohibits discrimination against past and current members of the uniformed services, as well as applicants to the uniformed services.
  9. The Genetic Information Nondiscrimination Act[20]: prohibits discrimination in employment and health insurance based on genetic information.

2. State and Local Law

To address concerns over the use of AI in employment, states and local governments have become more proactive. Three notable examples of legislation that have been enacted, discussed below, demonstrate the growing trend among policymakers to regulate AI usage in employment practices, underscoring the increasing importance placed on ensuring fairness and accountability in AI-driven decision-making.

i. Illinois

In 2020, Illinois adopted the Artificial Intelligence Video Interview Act (820 ILCS 42/1), which imposes several requirements on employers if they conduct video interviews and use AI analysis of such videos in their evaluation process. These requirements include (i) notifying applicants of the AI’s role, (ii) providing applicants with an explanation of the AI process and the types of characteristics used to evaluate applicants, (iii) obtaining the applicants’ consent for such AI use, (iv) sharing videos only with those equipped with the expertise or technology to evaluate the applicant’s fitness for a position, and (v) destroying videos within thirty days of a request by the applicant.

ii. Maryland

While not explicitly targeting AI, Maryland’s 2020 facial recognition technology law prohibits an employer from using certain facial recognition services—many of which use AI processes—during job interviews unless the applicant consents.

iii. New York City

New York City began enforcing its law on Automated Employment Decision Tools (“AEDT Law”) on July 5, 2023. Under this law, passed in 2021, employers and employment agencies are prohibited from using an automated employment decision tool (“AEDT”), which includes AI, to assess candidates for hiring or promotion in New York City unless an independent auditor completes a bias audit of the AEDT before its use and candidates who are New York City residents receive notice that the employer or employment agency uses an AEDT. A bias audit must include “calculations of selection or scoring rates and the impact ratio across sex categories, race/ethnicity categories, and intersectional categories.”[21] Offenders face penalties of $375 to $1,500 per violation.
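
To make the audit arithmetic concrete, the minimal Python sketch below computes selection rates and impact ratios in the manner the city’s guidance describes, using invented applicant records limited to sex categories; a real bias audit must be performed by an independent auditor on actual data and must also cover race/ethnicity and intersectional categories.

    # Hypothetical sketch of selection-rate and impact-ratio arithmetic.
    # All counts are invented; this is not a substitute for a bias audit.
    from collections import defaultdict

    # (category, selected?) records for applicants scored by a hypothetical AEDT
    applicants = [
        ("male", True), ("male", True), ("male", False), ("male", True),
        ("female", True), ("female", False), ("female", False), ("female", True),
    ]

    totals, selected = defaultdict(int), defaultdict(int)
    for category, was_selected in applicants:
        totals[category] += 1
        selected[category] += was_selected

    rates = {c: selected[c] / totals[c] for c in totals}
    best = max(rates.values())  # selection rate of the most-selected category

    for category, rate in rates.items():
        # Impact ratio: a category's selection rate divided by the rate of
        # the most-selected category.
        print(f"{category}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")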

3. EEOC Guidance

The EEOC enforces federal laws prohibiting discrimination in hiring, firing, promotions, training, wages, benefits, and harassment. Employers with at least fifteen employees, labor unions, and employment agencies are subject to EEOC review. The EEOC has the authority to investigate discrimination charges against employers and, if necessary, file a lawsuit. Therefore, even though EEOC guidance is not legally binding, it proves valuable for companies seeking to avoid potential investigations or lawsuits when using AI tools.

i. EEOC 2022 Guidance on the ADA and AI

In May 2022, the EEOC issued technical guidance addressing how the ADA applies to the use of AI to assess job applicants and employees.[22] The guidance outlines several common ways that utilizing AI tools can violate the ADA, including, for example, relying on an algorithmic decision-making tool that intentionally or unintentionally excludes an individual with a disability, failing to provide necessary “reasonable accommodation,” or violating the ADA’s restrictions on disability-related inquiries and medical examinations.

Employers can implement practices recommended by the EEOC to effectively handle the risk associated with utilizing AI tools, such as the following:

  1. Disclose in advance the factors to be measured with the AI tool, such as knowledge, skill, ability, education, experience, quality, or trait, as well as how testing will be conducted and what will be required.
  2. Ask employees and job applicants whether they require a reasonable accommodation to use the tool. If the disability is not apparent, the employer may request medical documentation when a reasonable accommodation is requested.
  3. Once the claimed disability is confirmed, provide a reasonable accommodation, including an alternative testing format.
  4. “Examples of reasonable accommodations may include specialized equipment, alternative tests or testing formats, permission to work in a quiet setting, and exceptions to workplace policies.”[23]

ii. EEOC 2023 Guidance on Title VII and AI

In May 2023, the EEOC issued new technical guidance on how to measure adverse impact when AI tools are used for employment selection, titled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.”[24]

Under this guidance, if the selection rate of individuals of a particular race, color, religion, sex, or national origin, or a “particular combination of such characteristics” (e.g., a combination of race and sex), is less than 80 percent of the rate of the non-protected group, then the selection process could be found to have a disparate impact in violation of Title VII, unless the employer can show that such use is “job related and consistent with business necessity” under Title VII.
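
A quick worked example may help; the short Python sketch below applies the 80 percent comparison described above to invented selection numbers and is illustrative only.

    # Hedged illustration of the "four-fifths" comparison (invented numbers):
    # a protected group's selection rate below 80% of the comparison group's
    # rate may indicate disparate impact under the guidance.
    def selection_rate(selected: int, applicants: int) -> float:
        return selected / applicants

    comparison_rate = selection_rate(48, 80)  # 60% selected
    protected_rate = selection_rate(12, 40)   # 30% selected

    ratio = protected_rate / comparison_rate
    print(f"Ratio of selection rates: {ratio:.2f}")  # 0.50
    if ratio < 0.8:
        print("Below the 80% threshold: possible disparate impact.")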

If the AI tool is found to have an adverse impact under Title VII, the employer can take measures to reduce the impact or select a different tool. Failure to adopt a less discriminatory algorithm that was considered during the design process may subject the employer to liability.

Under both EEOC guidance documents discussed here, an employer will be held liable for the actions or inactions of an outside vendor who designs or administers an algorithmic decision-making tool on its behalf and cannot rely on the vendor’s assessment of the tool’s disparate impact.

D. Legal Strategies

Considering the applicable laws and EEOC guidance, it would be prudent for a company to consider the following strategies to reduce the risk of AI bias in employment decisions:

  1. Prior to signing a contract with a vendor who designs or implements an AI-based employment tool, as part of the vendor due diligence process, a company’s legal team should work closely with its IT and HR teams to review and evaluate the vendor’s tools, including reviewing assessment reports and historical selection rates, based on the applicable laws and EEOC guidelines.

    In addition, any employers who are subject to New York City’s AEDT Law should have an independent auditor conduct a bias audit before utilizing the AI tool.

  2. To incentivize a vendor to deliver a high-quality, legally compliant AI tool while mitigating risks, carefully negotiate and draft the indemnity, warranty, liability cap carveouts, and other risk allocation provisions of the contract with the vendor. These provisions should obligate the vendor to bear liability for any issues arising from the use of the AI tool in employment contexts caused by the vendor’s fault.

  3. Prepare detailed internal documents clearly explaining the AI tool’s operation and selection criteria, based on the review described in item 1 above, to protect the company in case of government investigations or lawsuits.[25]

  4. The legal team should work closely with HR and the IT team to conduct bias audits on a regular basis.

  5. If an audit reveals that the tool has a disparate impact at any point, the company should consider working with the vendor to implement bias-mitigating techniques, such as modifying the AI algorithms, adding training data for underrepresented groups, or selecting a different tool, unless legal counsel determines that the use of the tool is “job related and consistent with business necessity.”

  6. Provide advance notice to candidates or employees who will be impacted by AI tools in accordance with applicable laws and EEOC guidance.

  7. Educate HR and IT teams regarding AI discrimination.

  8. Keep track of legal developments in this area, especially if your company has offices nationwide.

Faced with the looming threats of EEOC enforcement actions, class action lawsuits, and legislative uncertainty, employers may understandably feel apprehensive about charting a course that includes using AI in hiring or HR. However, consulting with attorneys to understand legal requirements and potential risks associated with AI employment bias—along with adopting proactive measures outlined in this article, staying informed about legal developments, and fostering collaboration across legal, HR, and IT teams—can help organizations effectively mitigate risks and confidently navigate the intricate landscape of AI employment bias.


  1. IBM Data and AI Team, “Shedding light on AI bias with real world examples,” IBM, October 16, 2023.

  2. Keith MacKenzie, “How is AI used in human resources? 7 ways it helps HR,” Workable Technology, December 2023.

  3. Aaron Mok, “10 ways artificial intelligence is changing the workplace, from writing performance reviews to making the 4-day workweek possible,” Business Insider, July 27, 2023.

  4. Annelise Gilbert, “EEOC Settles First-of-Its-Kind AI Bias in Hiring Lawsuit (1),” Bloomberg Law, August 10, 2023.

  5. Equal Employment Opportunity Commission v. iTutorGroup, Inc., No. 1:22-cv-2565-PKC-PK (E.D.N.Y. filed May 5, 2022) (Aug. 9, 2023, joint notice of settlement and request for approval and execution of consent decree).

  6. “iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit,” U.S. Equal Employment Opportunity Commission, September 11, 2023.

  7. 3:23-cv-00770-RFL (N.D. Cal. filed Feb. 1, 2023).

  8. 42 U.S.C. § 1981.

  9. Joseph O’Keefe, Evandro Gigante, and Hannah Morris, “Judge Grants Workday, Inc.’s Motion to Dismiss in Groundbreaking AI Class Action Lawsuit Mobley v. Workday,” Law and the Workplace (blog), Proskauer, January 24, 2024.

  10. Daniel Wiessner, “Workday accused of facilitating widespread bias in novel AI lawsuit,” Reuters, February 21, 2024.

  11. Rachel Goodman, “Why Amazon’s Automated Hiring Tool Discriminated Against Women,” American Civil Liberties Union, October 12, 2018.

  12. 42 U.S.C. § 2000e.

  13. 42 U.S.C. § 1981.

  14. 29 U.S.C. § 206(d).

  15. 29 U.S.C. §§ 621–634.

  16. Pub. L. 99-603, 100 Stat. 3359 (1986), codified as amended in scattered sections of Title 8 of the United States Code.

  17. 42 U.S.C. §§ 12101–12113.

  18. 42 U.S.C. §§ 2000gg–2000gg-6.

  19. 38 U.S.C. § 4311.

  20. 42 U.S.C. § 2000ff.

  21. “Automated Employment Decision Tools: Frequently Asked Questions,” NYC Department of Consumer and Worker Protection, June 6, 2023.

  22. “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” U.S. Equal Employment Opportunity Commission, May 12, 2022.

  23. Id.

  24. “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” U.S. Equal Employment Opportunity Commission, May 18, 2023.

  25. See Lena Kempe, “AI Risk Mitigation and Legal Strategies Series No. 5: Explainable AI,” LK Law Firm, January 11, 2024.

True Lender and Rate Exportation: Reviewing the Major 2023 Legislation

In recent years, several state legislatures have enacted consumer credit laws designed to regulate FinTech companies operating through partnerships with depository institutions, or more generally to limit the interest rates charged by those depository institutions. 2023 was no exception, and this growing trend should be watched closely by state-chartered depository institutions and financial services companies.

Section 521 of the Depository Institutions Deregulation and Monetary Control Act (“DIDMCA”) authorizes FDIC-insured state-chartered banks to use both the most favored lender authority and the federal rate exportation authority enjoyed by national banks under 12 U.S.C. § 85 by preempting state law. DIDMCA Section 521 allows states to “opt out” of the federal preemption; if a state does not opt out by statute, constitutional amendment, or referendum, then the state’s interest rate limitations are preempted by federal law. Iowa and Puerto Rico have already opted out, with Colorado joining them in July 2024 and opt-out legislation recently introduced in the District of Columbia. Meanwhile, many other states are increasing their scrutiny and adopting laws that impose regulations on bank–FinTech partnerships specifically, rather than on all state-chartered depository institutions.

On June 5, 2023, Colorado House Bill 23-1229 was signed, clarifying that consumer loans made in Colorado are excluded from the provisions of DIDMCA Section 521. HB 23-1229 amends the Colorado Uniform Consumer Credit Code to change the terms and finance charges that a lender may impose in consumer credit transactions. This amendment requires that out-of-state banks follow Colorado’s interest rate and fee restrictions when lending to Colorado residents in Colorado.

The Colorado opt-out will impact state-chartered banks issuing loans to Colorado residents, including those with programs run with FinTech companies. The opt-out follows a long history of enforcement activity, notably the Avant-Marlette settlement, which set forth the state’s expectations for FinTech-bank programs. In that case, Colorado’s Consumer Credit Administrator directly challenged the loans made by an out-of-state bank in partnership with multiple FinTech companies. The state argued that the federal interest rate preemption could not apply because the bank was not the true lender of the loans and the partner company could not stand in the bank’s shoes for loans sold by the originating bank. The case settled in an agreement that put into place certain operational requirements for the programs, but it did not apply Colorado’s usury limits to the originating bank given that, at the time, Colorado had not opted out of Section 521.

State legislatures have also sought to enact laws regulating partnerships between depository institutions and FinTech companies. On June 29, 2023, Connecticut enacted legislation to join states that had previously adopted such laws, including Maine, New Mexico, and Illinois. These laws codify a predominant economic interest test and create other tests seeking to determine when the FinTech—not the originating depository institution—should be viewed as the “true lender,” in which case the depository institution’s federal rate preemption should be disregarded. The laws also impose varying interest rate limits and rate calculation methodologies.

Similar legislation in Minnesota, Minn. S.F. 2744, was enacted on May 24, 2023, and became effective January 1, 2024. The law caps the annual percentage rate (“APR”) on consumer small loans and consumer short-term loans at a 50 percent all-in APR. A consumer small loan is defined as a consumer-purpose unsecured loan for an amount equal to or less than $350 that must be repaid in a single installment. A consumer short-term loan is a loan with a principal amount, or an advance on a credit limit, of $1,300 or less that requires a minimum payment of more than 25 percent of the principal balance or credit advance within sixty days. Minn. S.F. 2744 provides that, if the all-in APR exceeds 36 percent, the lender must perform an ability-to-pay analysis, reviewing evidence of the borrower’s net income, major financial obligations, and living expenses. Like statutes in other states, it also implements the predominant economic interest test and other true lender tests.
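
For illustration only, the short Python sketch below translates the numeric thresholds summarized above into simple rule logic; the field names and simplified structure are our assumptions, the statute’s actual definitions contain additional nuances, and nothing here is legal advice.

    # Hypothetical rule-logic sketch of Minn. S.F. 2744's thresholds as
    # summarized in the text above. Field names are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Loan:
        principal: float                       # principal or credit advance, in dollars
        single_installment: bool               # repaid in a single installment?
        min_payment_pct_within_60_days: float  # minimum payment as % of principal
        all_in_apr: float                      # all-in APR, in percent

    def classify(loan: Loan) -> str:
        if loan.principal <= 350 and loan.single_installment:
            return "consumer small loan"
        if loan.principal <= 1300 and loan.min_payment_pct_within_60_days > 25:
            return "consumer short-term loan"
        return "other"

    def check(loan: Loan) -> list[str]:
        issues = []
        if classify(loan) == "other":
            return issues  # outside the two covered categories
        if loan.all_in_apr > 50:
            issues.append("exceeds the 50% all-in APR cap")
        if loan.all_in_apr > 36:
            issues.append("ability-to-pay analysis required")
        return issues

    print(check(Loan(principal=300, single_installment=True,
                     min_payment_pct_within_60_days=100, all_in_apr=42)))
    # ['ability-to-pay analysis required']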

The tests utilized in these laws vary by state, but they often provide that a nonbank entity should be viewed as the lender if any of the following are true:

  1. the person holds, acquires, or maintains, directly or indirectly, the predominant economic interest in the consumer credit product at issue;
  2. the person markets, brokers, arranges, or facilitates the consumer credit product and holds the right, requirement, or right of first refusal to purchase the consumer credit products; or
  3. the totality of the circumstances indicates that such person is the lender and the transaction is structured to evade the applicable state law requirements.

The laws typically establish several factors to consider for the totality of the circumstances test. They may vary from state to state but typically include a review of:

  1. indemnifying, insuring, or protecting an exempt person for any costs or risks related to the consumer credit product;
  2. predominantly designing, controlling, or operating the lending program; or
  3. purporting to act as an agent, service provider, or in another capacity for an exempt person (typically any depository institution) in the state while acting directly as a lender in another state.

To date, there has been little public enforcement activity regarding these laws, making it hard for commentators to assess their impact on partnerships between banks and non-banks. State legislative trends indicate that more states will consider legislation regulating such programs or opting out of DIDMCA, leading to a more fragmented landscape in the United States compared to the consistency seen in other countries. DIDMCA opt-outs raise interesting questions about how an opt-out will actually impact banks located out of state and whether it will actually reach such loans. Similarly, the interplay of federal rate exportation authority with laws seeking to curtail that exportation absent a DIDMCA opt-out raises interesting enforceability questions that may lead to future litigation should the trend continue.

***

This article is related to a CLE program titled “True Lender and Rate Exportation: Analyzing the Impact of State Laws Restricting Bank Originated Loans” that was presented during the ABA Business Law Section’s 2023 Fall Meeting. To learn more about this topic, listen to a recording of the program, free for members.

 

Summary: Updating Disclosure Schedules: Market Trends

Last updated on March 1, 2025.

This is a summary of the Hotshot course “Updating Disclosure Schedules: Market Trends,” in which ABA M&A Committee members John F. Clifford from McMillan LLP and Ann Beth Stebbins from Skadden, Arps, Slate, Meagher & Flom LLP & Affiliates discuss market trends for disclosure schedules updates provisions, drawing on data from the ABA M&A Committee’s Private Target Deal Points Study. View the course here.


Updating Disclosure Schedules: Market Trends

  • The 2023 ABA M&A Committee’s Private Target Deal Points Study looked at how often parties allow updates to a seller’s disclosure schedules between signing and closing.
    • The study found that in 2022 and the first quarter of 2023:
      • Updates were expressly permitted or required in 14% of deals;
      • Updates were expressly prohibited in 5% of deals; and
      • The remaining 81% of deals were silent on the point.
  • Over the years, the share of deals allowing updates has consistently remained below one-third:
    • 14% in 2022 to 2023;
    • 24% in 2020 to 2021;
    • 31% in 2018 to 2019; and
    • 28% in 2016 to 2017.
  • Of the deals that permitted or required updates in the latest study, there was a slight decrease in those allowing updates for information arising both pre- and post-signing, from 62% in the 2021 study to 60% in the 2023 study.
  • The buyer had a right to close and seek indemnification for updated matters in 67% of the deals that permitted or required updates.
    • This marks a significant decrease from the last study, where it was 90%.
  • The buyer’s right to terminate the agreement was not affected by updated disclosure in 80% of the deals in the 2023 study.
    • In 20% of the deals, the buyer could terminate because of the disclosure, but only within a specific time period.

The rest of the video includes interviews with ABA M&A Committee members John F. Clifford from McMillan LLP and Ann Beth Stebbins from Skadden, Arps, Slate, Meagher & Flom LLP & Affiliates.

Download a copy of this summary here.

Summary: Updating Disclosure Schedules: Sample Provisions

This is a summary of the Hotshot course “Updating Disclosure Schedules: Sample Provisions,” a look at two disclosure schedules updates provisions. View the course here.


Negotiating a Disclosure Schedule Updates Provision

  • When negotiating a disclosure schedules updates provision, parties typically focus on:
    • Whether the seller is obligated or merely permitted to make updates;
    • The scope of permitted updates; and
    • How the updates affect other rights and obligations of the parties.
Sample Seller-Friendly Disclosure Schedules Update Provisions

During the Pre-Closing Period, Seller shall have the right (but not the obligation) to update the Disclosure Schedules to the extent information contained therein or any representation or warranty of Seller becomes untrue, incomplete or inaccurate after the Agreement Date due to events or circumstances after the date hereof or facts of which the Seller becomes aware after the date hereof. [Buyer shall have the right to terminate this Agreement pursuant to Section [_] within five (5) days after receipt of such update if the updated portion or portions of the Disclosure Schedules disclose any facts and circumstances that would cause a failure of the Closing Condition set forth in Section [_]; provided, however, that if (a) Buyer is not entitled to, or does not timely exercise, such right to terminate this Agreement, or (b) Buyer consummates the Closing,] Buyer shall, in any such case, be deemed to have accepted such updated Disclosure Schedules, any such update shall be deemed to have amended the Disclosure Schedules, to have qualified the relevant representations and warranties contained in Article [_], and to have cured any breach of any representation or warranty that otherwise might have existed hereunder by reason of such event or circumstance. Nothing in this Agreement, including this Section [_], shall be interpreted or construed to imply that Seller is making any representation or warranty as of any date other than as otherwise set forth herein.

[Emphasis added.]

  • This provision says that the seller has the right, not the obligation, to update the disclosure schedules. This is good for the seller because when updates are required:
    • An inadvertent failure to disclose new facts could result in an indemnification claim for breach of the seller’s covenant to update the disclosure schedules.
    • Or the buyer could claim that the closing conditions weren’t satisfied because the seller didn’t comply with its obligation to perform under the covenant.
  • The next part of the first sentence allows any updates needed to complete or correct any information in the disclosure schedules or reps that becomes untrue, incomplete, or inaccurate because of “events or circumstances” or “facts of which the Seller becomes aware”—in each case after the date of the agreement.
    • This sets up a broad scope for updates, including anything that happens or is learned after signing.
    • The only way this provision could be more seller-friendly is if the seller were also allowed to include information known or that should have been known prior to signing.
  • Most of the rest of the provision covers the impact of the disclosure schedule updates on the buyer’s rights, and it’s also beneficial to the seller because the buyer’s only recourse in this version of the provision is to terminate the agreement.
    • If the buyer completes the acquisition, it’s deemed to have accepted the new disclosure and can’t then bring an indemnity claim relating to the new facts.
Sample Buyer-Friendly Disclosure Schedules Update Provisions

From time to time prior to the Closing, Seller shall promptly supplement or amend the Disclosure Schedules hereto with respect to any matter arising after the date hereof, which, if existing, occurring or known at the date of this Agreement, would or should have been required to be set forth or described in the Disclosure Schedules (each a “Schedules Supplement”). Any disclosure in any such Schedules Supplement shall not be deemed to have cured any inaccuracy in or breach of any representation or warranty contained in this Agreement, including for purposes of the indemnification or termination rights contained in this Agreement or of determining whether or not the conditions set forth in Section [_] have been satisfied.

[Emphasis added.]

  • In this example, the seller is obligated to promptly update the disclosure schedules when it becomes aware of new facts that would have been required to be disclosed if they had arisen prior to signing.
    • This ensures that the buyer has complete information at closing.
    • Most sellers agree to this formulation because it’s a pretty convincing argument that the buyer has a right to know all new facts or events that could impact the business prior to closing.
  • The provision goes on to limit updates to matters that arise after signing that would or should have been disclosed if they had occurred prior to signing.
    • This is different from the seller-friendly version because here the seller isn’t allowed to update the disclosure schedules with facts that arose before signing.
    • Parties often agree to limit the scope of updates to new things that happen post-signing.
      • Drafting the provision this way removes any incentive for a seller to wait to disclose material information until after signing, when the buyer could be obligated to close the deal.
  • Finally, in this example updates don’t affect the buyer’s rights under the agreement.
    • So the buyer has the option not to close if the closing conditions aren’t satisfied.
    • If the acquisition does close, the buyer can still bring an indemnification claim based on the disclosure as it stood at signing.

The rest of the video includes interviews with ABA M&A Committee members John F. Clifford from McMillan LLP and Ann Beth Stebbins from Skadden, Arps, Slate, Meagher & Flom LLP & Affiliates.

Download a copy of this summary here.

Summary: Updating Disclosure Schedules

This is a summary of the Hotshot course “Updating Disclosure Schedules,” an introduction to disclosure schedules updates provisions, including why parties include a right or obligation to update disclosure schedules, the scope of permitted updates, and the updates’ effect on other rights and obligations of the parties under the acquisition agreement. View the course here.


Why Update Disclosure Schedules?

  • The disclosure schedules to an M&A agreement, together with the reps and warranties they modify, provide a snapshot of the seller and the target as of the signing date.
  • If the deal doesn’t close at signing, it’s possible that the disclosure schedules and the reps and warranties could be inaccurate or incomplete when the parties are ready to close.
    • This could be due to:
      • New facts discovered between signing and closing that the parties weren’t aware of before signing; or
      • New developments, such as the target getting sued by a customer after the agreement is signed.
    • Parties sometimes deal with this possibility by allowing, or even requiring, updates to the disclosure schedules.
  • When parties agree to allow or require updates, they add a disclosure schedules updates provision in the interim covenants section of the agreement.
  • Updating disclosure schedules is good for sellers, because the more accurate their disclosure is at closing, the lower the risk of a post-closing indemnification claim.
  • And, in principle, buyers also like disclosure schedules to be as accurate as possible before closing so that they can negotiate changes to the deal or even walk away if new and adverse facts are disclosed.
  • But parties don’t often include the right to update the disclosure schedules because they can’t agree on:
    • How the updated disclosure affects the rest of the agreement; or
    • The scope of the permitted or required updates.
  • When updates are not allowed, the parties are often taking the position that they’d rather not speculate on outcomes that aren’t certain when the agreement is signed. Instead they agree to deal with any issues as they arise.

Scope of Updates and Impact on the Acquisition Agreement

  • Several areas of the acquisition agreement can be affected when parties allow updates to the disclosure schedules.
    • The first is the closing conditions.
      • An update to the disclosure schedules is essentially an amendment to the seller’s reps and warranties.
      • Most M&A agreements include a condition that the seller’s reps and warranties have to be true and correct or true and correct in all material respects as of the closing. So if something happens after the deal is signed that would make the seller’s reps and warranties incorrect at closing, the buyer doesn’t have to close.
      • But if disclosure schedule updates are allowed and the seller makes updates to reflect the new development, the buyer could be required to close even if the new disclosure materially amends the reps the seller made at signing.
    • This dynamic leads parties to think carefully about another aspect of the agreement, the buyer’s termination rights. For example:
      • Should the buyer have the right to terminate the acquisition agreement based on the new disclosure, especially when the buyer can no longer rely on the closing conditions to get out of the deal?
      • What if the new disclosure is minor and doesn’t materially change the deal?
    • A third issue the parties think about is the seller’s liability for a breach of the reps and warranties as they existed at signing. For example:
      • Does an update to the disclosure schedules cure that breach and relieve the seller from its indemnification obligations for any resulting damages?
Scope of Updates
  • If the parties are able to agree on those issues, they’ll include a provision that typically lays out:
    • Whether the seller is required to update the schedules or if updates are simply permitted;
    • The scope of updates that can be made; and
    • How an update affects the rest of the agreement, like the closing conditions, termination rights, and indemnification provisions.
  • Defining the parameters for updates can be tricky. The parties consider:
    • The type of rep or warranty;
    • When the new information arose; and
    • The materiality of the new disclosure.
Type of Rep or Warranty
  • Buyers might be more willing to allow updates to affirmative, rather than negative, disclosures.
    • For example, both parties will want any new material contracts to be disclosed as part of the seller’s rep regarding material contracts.
      • The buyer would expect this kind of update, since the seller agrees to continue operating the business in the ordinary course between signing and closing.
    • But the buyer may be less willing to allow an update to a negative rep or warranty, like the “no liabilities” or “no litigation” reps.
      • In those cases, the underlying facts are more likely to have a negative impact on the value of the business.
      • And these types of updates usually relate to matters outside the ordinary course, so allowing them could expose the buyer to an unpredictable amount of additional risk.
When the New Info Arose
  • The parties also may limit new disclosure based on when the underlying facts arose.
  • A seller has a pretty compelling case for updating the disclosure schedules to include things that happen after signing.
    • But should they also be able to add facts that were known or that should have been known before signing?
    • What if those facts weren’t disclosed at signing because of an honest mistake or because the seller was genuinely unaware?
  • On the other hand, if the seller is allowed to update the disclosure schedules with information that arose prior to signing, what’s preventing them from withholding material information at signing that would otherwise affect the deal?
Materiality
  • The materiality of new information may also affect whether or not the seller can include it in a disclosure schedules update.
  • Buyers are typically willing to allow updates relating to facts that arise in the ordinary course of business and that don’t affect the economics of the deal.
    • But they often want to prohibit updates for new circumstances that are financially or operationally material to the business.
    • Depending on how the buyer’s closing conditions and termination rights are drafted, the buyer could be forced to close despite the new material disclosure.
Rep and Warranty Insurance
  • One other thing to consider is that if a deal has rep and warranty insurance, the policy is typically issued when the acquisition agreement is signed.
    • The coverage will not extend to newly disclosed facts unless the insurer expressly agrees to an extension of the policy.
    • So, if updates to the disclosure schedules are permitted or required, there may be a gap in the insurance coverage.

The rest of the video includes interviews with ABA M&A Committee members John F. Clifford from McMillan LLP and Ann Beth Stebbins from Skadden, Arps, Slate, Meagher & Flom LLP & Affiliates.

Download a copy of this summary here.

The Duty of Supervision in the Age of Generative AI: Urgent Mandates for a Public Company’s Board of Directors and Its Executive and Legal Team

This article is related to a Showcase CLE program titled “AI Is Coming for You: The Practical and Ethical Implications of How Artificial Intelligence Is Changing the Practice of Law” that took place at the American Bar Association Business Law Section’s 2024 Spring Meeting. All Showcase CLE programs were recorded live and will be available for on-demand credit, free for Business Law Section members.

“This article highlights for busy board members and C-suite executives the dangers of not paying attention to Generative AI. The risk to publicly held companies from non-supervised implementation of Generative AI is significant. The authors make a solid case best practices are warranted to protect the corporation and the decision-makers.”—Kyung S. Lee, Shannon & Lee LLP, program co-chair

“Although at first glance this thoughtful article seems only tangentially related to the ethical use of Generative AI by lawyers, it actually provides an excellent framework for tackling the question of where, when, and how to use Generative AI capabilities inside the law firm or law department. Like their clients, a law firm or law department needs to consider many of the same issues. Does a potential use create a risk of data exposure? Could potential biases contained in underlying training data create biased outputs from the proposed application? How likely are “hallucinations,” and what damage can they cause? Suggested solutions for public company boards also apply to legal organizations. Education, bringing in experts, and creating systems and teams to vet uses all play their role in making sure legal teams use Generative AI responsibly. The article provides a useful roadmap to protecting legal organizations from the risks of Generative AI deployment.”—Warren Agin, program panelist


Introduction

Artificial intelligence is capturing the imagination of many in the business world, and one real-world message is unmistakable:

Any director or executive officer (CEO, CFO, CLO/GC, CTO, and others) of a publicly held company who ignores the risks, and fails to capitalize on the benefits, of Generative AI does so at his or her individual peril because of the risk of personal liability for failing to properly manage a prudent GenAI strategy.

Generative artificial intelligence, or GenAI,[1] is a technological marvel that is quickly transforming our lives and revolutionizing the way we communicate, learn, and make personal and professional decisions. Due to GenAI-powered technology and smart devices, all industries—ranging from the healthcare, transportation, energy, legal, and financial services industries to the education, technology, and entertainment industries—are experiencing almost exponential improvements. The use cases for GenAI seem boundless, balancing the opportunity to improve society with the risks that make one worry about the devastation that can be caused by GenAI if it operates without meaningful regulation or guardrails. Nowhere is the risk more fraught than in a specific type of highly regulated organization that is accountable to a myriad of stakeholders: U.S. publicly held companies.

Insofar as publicly held companies can be both (i) consumers of GenAI technology and (ii) developers and suppliers of GenAI technology, there are countless use cases, scenarios, and applications for a publicly held company. Common ways in which GenAI is used include data analysis and insights, customer services and support, financial analysis and fraud detection, automation and quality control in production and operation management, and marketing and sales.

Even though the specific applications of GenAI within a publicly held company depend on that company’s industry, goals, and challenges, every board of directors and in-house legal team managing a publicly held company must be keenly attuned to the corporate and securities litigation risks posed by GenAI. Indeed, as GenAI technologies become increasingly important for corporate success, board oversight of GenAI risks and risk mitigation is vital, extending beyond traditional corporate governance. Any publicly held company that does not establish policies and procedures regarding its GenAI use is setting itself up for potential litigation by stockholders as well as vendors, customers, regulatory agencies, and other third parties.

This article focuses on the principle that GenAI policies and procedures at a publicly held company must come from its board of directors, which, in conjunction with the executive team, must take a proactive and informed approach to navigate the opportunities and risks associated with GenAI, consistent with the board’s fiduciary duties.

Legal Background: The Duty of Supervision

Corporate governance principles require directors to manage corporations consistent with their fiduciary duty to act in the best interest of shareholders. The board’s fiduciary duty comprises three specific obligations: the duty of care,[2] the duty of loyalty,[3] and the more recently established derivative of the duty of care, the duty of supervision or oversight.[4]

The duty of supervision stems from the Caremark case, where the Delaware Court of Chancery expressed the view that the board has “a duty to attempt in good faith to assure that a corporate information and reporting system, which the board concludes is adequate, exists, and that failure to do so under some circumstances may, in theory at least, render a director liable for losses caused by non-compliance with applicable legal standards.”[5] The Caremark court later explained that liability for a “lack of good faith” depends on whether there was “a sustained or systematic failure of the board to exercise oversight — such as an utter failure to attempt to assure a reasonable information and reporting system exist . . . .”[6] In Stone v. Ritter, the Delaware Supreme Court explicitly approved the Caremark duty of oversight standard, holding that director oversight liability is conditioned upon: “(a) the directors utterly failed to implement any reporting or information system or controls; or (b) having implemented such a system or controls, [the directors] consciously failed to monitor or oversee its operations thus disabling themselves from being informed of risks or problems requiring their attention.”[7]

Thus, the first prong of the duty of supervision requires the board of directors to assure itself “that the corporation’s information and reporting system is in concept and design adequate to assure the board that appropriate information will come to its attention in a timely manner as a matter of ordinary operations.”[8] If the board meets the standard in the first prong, the board can still violate the duty of supervision if it shows a “lack of good faith as evidenced by sustained or systematic failure of a director to exercise reasonable oversight.”[9]

The principles in Caremark were clarified further in a stockholder derivative suit against The Boeing Company. In that case, the Delaware Court of Chancery established an enhanced duty of supervision where the nature of a corporation’s business presents unique or extraordinary risk. In Boeing, the Court permitted a Caremark claim to proceed against Boeing’s board of directors amidst a former director’s acknowledgement of the board’s subpar oversight of safety measures. The Court found that safety was a “mission-critical” issue for an aircraft company, and material deficiencies in oversight systems in such a vital area justified enhanced scrutiny of the board’s oversight.[10]

The Caremark duty of supervision was extended beyond the board level to executive management last year in shareholder litigation against McDonald’s Corporation.[11] In McDonald’s, the Delaware Court of Chancery adopted the reasoning of Caremark when extending the duty of oversight to the management team because executive officers function as agents who report to the board, with an obligation to “identify red flags, report upward, and address the [red flags] if they fall within the officer’s area of responsibility.”[12]

Application of the Duty of Supervision in the Era of GenAI

Each new technology entering the corporate world stimulates a new round of corporate governance questions about whether and how the fiduciary duty of directors and executive officers of publicly held companies is transformed due to new business operations and the risks appurtenant to them. GenAI is no different. The nature of GenAI calls for immediate attention from the board of directors and the legal team at publicly held companies.

With the specters of privacy violations, AI “hallucinations” (where an AI model creates incorrect or misleading results), “deepfakes,” bias, lack of transparency, and difficulties in evaluating a “black box” decision-making process, many things can go wrong with the use of GenAI. Each of those things that can go wrong exposes a publicly held company to material risk. At this stage in the evolution of AI, there are certain categories of corporate, regulatory, and securities law risks that are most dangerous for public companies. Publicly held companies need to be especially mindful of public disclosures around AI usage; the impact of AI on their operations, competitive environment, and financial results; and whether AI strategy and usage is likely to have a material effect on overall financial performance and why.

Given the enormous benefits, opportunities, and risks emerging in the era of GenAI, the principles articulated in the Caremark line of cases are instructive for a board of directors and executive management of publicly held companies. Without question, the board of every publicly held company must implement reporting, information systems, and controls that govern the organization’s use of GenAI technology. The macro-implications of GenAI compel this conclusion, and the section below suggests specific practical takeaways and best practices.

When implementing GenAI-related systems and controls, the board and management team must contextualize the corporation’s use of AI so that the systems and controls align with the corporation’s business operations, financial goals, and shareholder interests. Publicly held companies that develop and sell GenAI products have different considerations and obligations than do companies that only use GenAI in their operations. When implementing these systems and controls, publicly held companies must be mindful of the fact that the duty of supervision equally applies to executive officers as well as to boards under the McDonald’s case. As the “conscience” of the organization, the legal team advising a publicly held company must consider day-to-day compliance tactics and measures in addition to adopting systems and controls at the board level that comply with the overarching principles of the duty of supervision.

Practical Takeaways and Best Practices

The following items are integral components of any publicly held company’s AI plan:

  1. Baseline technological GenAI knowledge. Every board member and executive team member must have and maintain a working understanding of what GenAI is, its different iterations and how each works, and how the organization uses and benefits from GenAI.
  2. Ongoing GenAI education. As GenAI technology or the organization’s use of it changes, board members and the executive team should continue to keep themselves informed on issues of significance or risk to the company through regularly scheduled updates.
  3. Institutionalization of GenAI risk oversight. Publicly held companies should build a team of stakeholders from across the entire organization for GenAI oversight. That team must include individuals from business, legal, and technology departments—both high-level executives and operational experts—responsible for evaluating and mitigating GenAI-related risks.
  4. Inclusion of AI experts in board composition. Publicly held companies must modify the composition of their boards to include members with expertise in AI, technology, and data science. The goal is to have well-rounded perspectives on AI-related matters. To meet the legal demands of GenAI supervision, boards should consider recruiting members with legal expertise in technology, data privacy, and AI regulations, as well as board members who are expert at identifying new technology risks.
  5. AI committee. A publicly held company should establish an AI committee charged with additional oversight of GenAI risks and opportunities.
  6. Adoption of written policies. The board and executive team must create a written framework for making policies and materiality determinations regarding public disclosure in the context of GenAI usage, reporting GenAI incidents with advice of counsel, and setting standards for professionals who oversee GenAI systems and controls.
  7. Understanding of GenAI legal and regulatory compliance. The board and executive team must understand and stay apprised of AI-related legislation and regulations and oversee policies, systems, and controls to ensure that GenAI use complies with new legal requirements.
  8. Ethical GenAI governance. The board and executive team should address ethical standards for GenAI usage, development, and deployment, including issues such as bias, transparency, and accountability.
  9. SEC disclosure. Public companies must understand how Securities and Exchange Commission requirements affect GenAI and incorporate those requirements into their disclosure protocols. Boards must stay informed about regional and global variations in GenAI regulations and adapt corporate policies to ensure compliance with securities regulations and avoid legal pitfalls.
  10. Performance monitoring. The board and the executive team should implement mechanisms to monitor the performance of any GenAI controls and to assess the impact on key performance indicators, as well as regularly review and adapt the company’s GenAI strategies based on other performance metrics.
  11. Collaboration with legal counsel. Close collaboration between boards and legal counsel is essential to minimize GenAI risk. Legal experts should be integral to the decision-making process, providing guidance on compliance, risk management, and the development of legal strategies pertaining to GenAI.

Conclusion

Artificial intelligence, including GenAI, has the power to drive substantial change in our daily lives and in the ways that companies conduct business. With that power comes an emerging and significant risk that publicly held companies and their board members and executives—ever the target of shareholder litigation—must take seriously by implementing robust AI-focused policies, procedures, and risk-management initiatives.


  1. Although earlier generations of artificial intelligence (and technology generally) can afford great benefits and pose material risks, this article focuses on Generative Artificial Intelligence, or GenAI, because of the unique challenges GenAI poses due to machine learning capabilities, training data biases and challenges, privacy issues, and the “black box” nature of the technology.

  2. Smith v. Van Gorkom, 488 A.2d 858, 872 (Del. 1985).

  3. Cede & Co. v. Technicolor, Inc., 634 A.2d 345, 361 (Del. 1993).

  4. In re Caremark Int’l Inc. Deriv. Litig., 698 A.2d 959, 970 (Del. Ch. 1996).

  5. Id. at 971.

  6. Id. (emphasis added). The second prong in Caremark often is characterized as “consciously disregarding ‘red flags.’”

  7. Stone v. Ritter, 911 A.2d 362, 370 (Del. 2006).

  8. Caremark, 698 A.2d at 970.

  9. Id. at 971.

  10. In re The Boeing Co. Derivative Litig., No. 2019-0907-MTZ, 2021 WL 4059934 (Del. Ch. Sept. 7, 2021).

  11. In re McDonald’s Corp. S’holder Derivative Litig., 289 A.3d 343 (Del. Ch. 2023) (“Although the duty of oversight applies equally to officers, its context-driven application will differ. Some officers, like the CEO, have a company-wide remit. Other officers have particular areas of responsibility, and the officer’s duty to make a good faith effort to establish an information system only applies within that area.”).

  12. Id. at 366.

 

 

Has a New Day Dawned? The Corporate Transparency Act and Amended ABA Model Rule 1.16

This article is related to a Showcase CLE program titled “Has a New Day Dawned? Practical Advice on the Legal Ethics and Regulatory Compliance Obligations of the Corporate Transparency Act and Amended ABA Model Rule 1.16” that took place at the American Bar Association Business Law Section’s 2024 Spring Meeting. All Showcase CLE programs were recorded live and will be available for on-demand credit, free for Business Law Section members.


The enactment of the Corporate Transparency Act (“CTA”) and the adoption of ABA Model Rule of Professional Conduct 1.16 have brought back into focus a lawyer’s ethical obligations of confidentiality under ABA Model Rule 1.6, the attorney-client privilege, and the ABA’s stated position that lawyers are not gatekeepers. Both the CTA and ABA Model Rule 1.16 were enacted or adopted to address “illicit finance,” especially money laundering and terrorist financing, though—as will be explained—Rule 1.16 applies more broadly. Both involve an element of due diligence by the lawyer: one—the CTA—involves an entity client’s obligation to identify (and report to a new federal database named the Beneficial Ownership Secure System, or “BOSS”) the beneficial owners of most entities, and the other—Rule 1.16, as amended—involves a lawyer’s ethical duties to inquire and assess, both at the outset of a representation and at unspecified times during a representation, whether the prospective client or client intends to use (or is using) the lawyer’s services to perpetrate a crime or fraud. Both the CTA and Rule 1.16 are certainly applicable to the business lawyer, though Rule 1.16 is not so limited; in fact, the Rule may well apply to a litigator who falls victim to an online scammer.

The Corporate Transparency Act

Enacted in 2021, the CTA became effective as of January 1, 2024 (though nonexempt entities in existence as of the effective date need not comply with the CTA’s reporting requirements until December 31, 2024). The focus of the CTA, according to the U.S. Department of the Treasury’s Financial Crimes Enforcement Network (“FinCEN”), is to “make it harder for bad actors to hide or benefit from their ill-gotten gains through shell companies and other opaque ownership structures.”[1] To accomplish this, information about “beneficial ownership” of nonexempt entities must be reported to the FinCEN BOSS database that will be accessible to federal and state law enforcement agencies. “Reporting companies” formed after January 1, 2024, have ninety days within which to input their information to BOSS, and those entities formed on or after January 1, 2025, will have thirty days after formation to file the same detailed report.

There are twenty-three categories of exempted entities that need not report beneficial ownership under the CTA. Those exemptions range from banks and credit unions to accounting firms and large operating companies. However, if the entity is a nonexempt corporation, limited liability company, or other entity created through filing a document with a secretary of state, the entity is a reporting company for which personal information about its beneficial owners must be input into the BOSS database. A beneficial owner is any individual who, directly or indirectly, either exercises “substantial control” over the reporting company or owns or controls at least 25 percent of its ownership interests.

Many, if not most, clients will feel the required information is intrusive because it includes the beneficial owner’s date of birth, residential address, and a copy of a valid driver’s license or passport. Further, reporting companies must file an updated report when there is a change to a beneficial owner’s information (e.g., a move to a new residence, a name change resulting from a marriage or divorce, a new passport number). There is a limited workaround to the beneficial owner’s obligation to report that personal information to the reporting company, known as a “FinCEN Identifier,” but all of these issues are complicated, and there are still questions surrounding how one is to file and who is a “company applicant.”

At the same time, failure to comply with the CTA has significant ramifications, including fines and the possibility of jail time. Several business lawyers have suggested that their business clients consider appointing a “CTA Compliance Officer” entrusted with overseeing information collection and report submission, a role whose creation may require board resolutions or amendments to the company’s formation documents.

ABA Model Rule 1.16

The amendment to ABA Model Rule 1.16 was likewise adopted with the intent of combatting money laundering and terrorist financing, though the ABA did not so limit the breadth of the Rule. The formal Revised ABA Report for Resolution 100 accompanying the proposed rule change stated:

This Resolution constitutes another piece of the ABA’s longstanding and ongoing efforts to help lawyers detect and prevent becoming involved in a client’s unlawful activities and corruption[.] . . . The proposed amendments will help lawyers avoid entanglement in criminal, fraudulent, or other unlawful behavior by a client, including tax fraud, mortgage fraud, concealment from disclosure of assets in dissolution or bankruptcy proceedings, human trafficking and other human rights violations, violations of U.S. foreign policy sanctions and export controls, and other U.S. national security violations.[2]

The Report asserted that amended Rule 1.16 would not impose new ethical obligations on lawyers with respect to conducting client due diligence; however, even if that were the case, the obligation is now part of the black-letter text of ABA Model Rule 1.16. Rule 1.16, as amended, provides that lawyers have an obligation to “inquire into and assess the facts and circumstances of each representation to determine whether the lawyer may accept or continue the representation.”[3] Furthermore, if “the client or prospective client seeks to use or persists in using the lawyer’s services to commit or further a crime or fraud, despite the lawyer’s discussion pursuant to Rules 1.2(d) and 1.4(a)(5) regarding the limitations on the lawyer assisting with the proposed conduct,” then the lawyer must decline the representation or, if the representation has already commenced, withdraw from it.[4]

Importantly, the inquiry or assessment required by Rule 1.16 is a fact-specific risk-based analysis. As new Comment 2 to Rule 1.16 explains:

Under paragraph (a)(4), the lawyer’s inquiry into and assessment of the facts and circumstances will be informed by the risk that the client or prospective client seeks to use or persists in using the lawyer’s services to commit or further a crime or fraud. This analysis means that the required level of a lawyer’s inquiry and assessment will vary for each client or prospective client, depending on the nature of the risk posed by each situation.

The depth of required due diligence remains unclear, although a list of five nonexclusive factors has been added to the Comments. These five factors provide lawyers a rough guide to what to inquire into and how deeply to do so, and they include:

  1. the identity of the client, including the beneficial owners of the client if it is an entity;
  2. the lawyer’s “experience and familiarity with the client”;
  3. the “nature of the requested legal services”;
  4. the “relevant jurisdictions involved in the representation” and, specifically, “whether a jurisdiction is considered at high risk for money laundering or terrorist financing”; and
  5. the “identities of those depositing into or receiving funds from the lawyer’s client trust account, or any other accounts in which client funds are held.”

In addition to these factors, new Comments to Rule 1.16 identify a number of documents to assist lawyers in “assessing risk,” including the Financial Action Task Force (“FATF”) Guidance for a Risk-Based Approach for Legal Professionals, the Organization for Economic Cooperation and Development (“OECD”) Due Diligence Guidance for Responsible Business Conduct, the U.S. Department of the Treasury’s Specially Designated Nationals and Blocked Persons List, and ABA publications on the topic.

At least for now, the lawyer is not obligated to disclose any information that the lawyer comes to learn as a result of the inquiry or assessment and, indeed, under ABA Model Rules 1.6 and 1.18, depending on the circumstances, the lawyer may be prohibited from making that disclosure. Finally, it remains unclear at present what “triggers” a lawyer’s obligation to undertake an inquiry or assessment after the representation has begun. Does the duty arise when the lawyer “knows,” as that term is defined by Rule 1.0(f), that the client is using the lawyer’s services to commit a crime or fraud? What does Rule 1.0(f)’s “actual knowledge of the fact in question,” which “may be inferred from circumstances,” mean in this context?

Conclusion

These new (or highlighted) obligations present practical issues for lawyers. What information may the lawyer need to collect if the lawyer is asked to assist a client in its obligation to comply with the CTA—and, whether or not so requested, what (or how) should the lawyer communicate to the client about these obligations? What possible traps for business clients and the lawyers who represent them does the CTA present, and how can the risks of those traps be mitigated? What resources are available and appropriate to conduct client due diligence? What triggers the obligation to conduct due diligence after a representation has begun? While Rule 1.16 certainly applies to the transactional lawyer, does it apply to litigators as well? And what does the future hold for the lawyer’s duty of confidentiality and the attorney-client privilege given the momentum evidenced by the CTA and ABA Model Rule 1.16? These and other issues will be addressed in the CLE panel discussion connected to this article at the American Bar Association Business Law Section’s 2024 Spring Meeting.


  1. FinCEN Beneficial Ownership Information Reporting FAQ A.2., issued Sept. 18, 2023.

  2. ABA Standing Committee on Ethics and Professional Responsibility and Standing Committee on Professional Regulation Revised Report to the House of Delegates for Resolution 100 (2023), at 1 and 2.

  3. See ABA Model Rule 1.16(a).

  4. See ABA Model Rule 1.16(a)(4).

 

 

D&O Coverage Considerations for M&A and Government Investigations

Macroeconomic factors and market indicators point to a rebound in M&A activity in 2024. An increase in deals necessarily affects many directors and officers (D&O) insurance policies in ways that could lead to future coverage disputes, as policy provisions addressing changes in control take effect and so-called tail policies are implemented. At the same time, federal and state regulators continue to put pressure on companies and their directors and officers, whether through increased regulatory requirements—like the recent amendments to Form PF adopted by the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC), impacting how large hedge fund advisers report investment exposures—or through heightened enforcement associated with developments like the SEC’s new cybersecurity disclosure rules and its growing Crypto Assets and Cyber Unit. These and other circumstances heighten the risk of both M&A and regulatory exposures that may implicate coverage under D&O and management liability policies.

Last month, a California federal judge held that a D&O liability insurer must advance subpoena-related defense costs on behalf of two former biotech directors and officers after the insurer could not provide conclusive evidence that the subpoenas alleged actual wrongdoing by the individuals after the company’s merger, as required to trigger the policy’s “Change in Control” exclusion.[1] The decision highlights the interplay of two significant D&O coverage issues—government investigations and M&A transactions—and underscores why policyholders must pay close attention to how their liability insurance policies may be affected by a merger, acquisition, asset sale, or similar deal.

Background

In 2020, KBL Merger Corp. IV (KBL) purchased primary and excess D&O policies with $5 million in limits. Later that year, KBL changed its name to 180 Life in a merger that involved the resignation of its CEO and all directors. Following the merger, the SEC opened an investigation and issued subpoenas to KBL’s former CEO and a former director. 180 Life demanded coverage for the expenses it advanced to the former director and officer in connection with the subpoenas. The primary insurer filed a declaratory action asserting that 180 Life was not an insured under the policy issued to KBL (the pre-merger entity) and, in any event, that the subpoena-related expenses were subject to the policy’s “Change in Control” exclusion.

The parties disagreed over whether the policy’s advancement clause was triggered when there were “potentially covered” claims or whether the duty to advance defense costs was limited to “actually covered” claims. After consideration of the policy language and relevant case law, the court found that the advancement clause required the insurers to advance defense costs for potentially covered claims, consistent with California law. The court then turned to whether coverage for the SEC subpoenas was barred by the application of the Change in Control exclusion, which turns on when the alleged wrongful acts are alleged to have been committed or attempted.

The Change in Control Exclusion

The Change in Control exclusion barred coverage for claims “alleging in whole or in part any Wrongful Acts committed, attempted or allegedly committed or attempted” by the individual insureds after the merger. The court first noted that an insurer that wishes to rely on an exclusion has the burden of proving, through “conclusive evidence,” that the exclusion “applies in all possible worlds.” As applied in the context of the Change in Control exclusion, the insurers needed to show that the subpoenas alleged post-merger wrongful acts by the insureds.

That showing, the court acknowledged, was difficult to make in the context of a subpoena, which merely requests documents, compared to a civil complaint containing specific allegations of wrongdoing.

The insurers emphasized that the SEC subpoenas requested documents relating to both pre-merger and post-merger time periods and urged the court to infer from those requests that the subpoenas allege both pre-merger and post-merger wrongful acts. The court declined to make that “logical leap” and ultimately concluded that the insurers failed to meet the high burden of conclusively showing that the Change in Control exclusion applied. The court recognized, however, that the insurers may be able to make this showing at a later date. If so, they will be able to recoup any advanced defense costs that turn out not to be covered.

Takeaways

Time and time again, insurers argue that government subpoenas are not covered under D&O policies because they do not allege any wrongful acts by insureds and, as a result, cannot trigger insuring agreements requiring that demands for documents be “for Wrongful Acts.” Here, where doing so supported a complete disclaimer of coverage, the insurers took the opposite position: that the subpoenas alleged wrongful acts simply because the document requests related to post-merger time periods. The court correctly applied the high burden required for insurers to deny coverage based on exclusions when it determined that the subpoenas did not allege wrongful acts—they only requested documents.

More broadly, the context in which this dispute arose—a government investigation following a merger—also highlights the importance of coordinating coverage in the context of M&A deals. This includes not only carefully examining the scope of exclusions that may defeat coverage for claims that only allege in part conduct during the excluded time period, but also reviewing liability policies covering directors and officers before and after closing to ensure continuity of coverage and avoid unexpected gaps in coverage.

All companies, whether currently contemplating a transaction or not, need to understand how their D&O policies account for changes in control. Legacy D&O policies typically cease to provide going-forward coverage in the event of a sale, and imprecise policy wording or overbroad exclusions may result in finger-pointing and coverage gaps if left unaccounted for. Working closely with insurance brokers, consultants, and outside coverage counsel can help identify these issues, mitigate deal-related risks, and minimize surprises if a D&O claim arises after closing.


  1. See AmTrust Int’l Underwriters DAC v. 180 Life Scis. Corp., No. 22-CV-03844-BLF, 2024 WL 557724 (N.D. Cal. Feb. 12, 2024).

 

The Continued Rise of Representations and Warranties Insurance: 2024 Forecast

It is no secret that merger and acquisition (“M&A”) activity has caused whiplash in recent years. In the two short years between 2021 and 2023, global deal values were cut in half—from a whopping $5 trillion to $2.5 trillion.[1] However, middle market deals have proven resilient in this challenging economic and geopolitical environment and are expected to rise in 2024.[2] This article discusses trends in representations and warranties (“R&W”) provisions in M&A transactions, including those that may spark disputes and litigation, as well as the role of R&W insurance policies in reallocating risks associated with transactions and limiting litigation expenses, particularly in middle market deals.

M&A Trends and Common Provisions

Approximately one-third of M&A deal disputes in North America arise out of an alleged breach of a seller’s R&W.[3] R&W provisions commonly include materiality and knowledge qualifiers and are frequently subject to survival periods, each of which often favors the seller by limiting the scope of disclosures and, therefore, reducing the risk of a buyer’s claim for breach.

Material Adverse Effect (“MAE”) provisions are pervasive in M&A transactions. In 2023, only 5 percent of private target M&A deals omitted an MAE clause or left the term undefined.[4] MAE clauses are frequently heavily negotiated and are intended to allow buyers to terminate a transaction should certain agreed-upon events occur.[5] Typically, MAE definitions contain forward-looking language and carveouts for particular events, such as war, changes in law, or pandemics.[6] However, MAE provisions have historically been difficult to prove and, therefore, often work to the benefit of the seller.[7]

Materiality scrape clauses, however, have seen a sharp increase in the last two decades, from being identified in approximately 15 percent of deals in 2005[8] to 82 percent of deals in 2022 (including in 64 percent of deals to determine breach).[9] Materiality scrape provisions are included in the indemnification section of a transaction document to remove materiality qualifiers for the purposes of determining breach, damages, or both, thus opening the door for buyers to successfully assert a claim for breach.

Materiality scrapes also appear in R&W insurance policies. Notably, a New York court recently found that a materiality scrape in the R&W insurance policy at issue was ambiguous and decided that the representation, for insurance purposes, required a showing of only an adverse effect rather than a material adverse effect.[10] The court reasoned that if it applied the materiality scrape as drafted in the R&W insurance policy, the scrape would remove the entire “Material Adverse Effect” phrase, creating an ambiguity, and ambiguities are typically resolved against the drafter.[11]

Knowledge qualifiers are also widespread. Generally, “knowledge” definitions are constructive.[12] But companies should carefully draft such definitions, as knowledge qualifiers may lead to ambiguity if not properly defined, requiring later determination of what constitutes knowledge and who must possess it.

The survival period of a seller’s R&W is also commonly specified. In 2023, general survival clauses were identified in 93 percent of deals that did not procure R&W insurance and in 67 percent of those that did.[13] While 67 percent is a decline from 2019 (when 79 percent of deals with R&W insurance contained a general survival of a seller’s R&W), it is an increase from 2020, 2021, and 2022, when deals with R&W insurance contained a general survival of a seller’s R&W at rates of 64 percent, 64 percent, and 50 percent, respectively.[14] Since 2018, the median survival period has been fifteen months.[15] Companies should note, however, that there are typically carveouts for certain R&W that are assigned longer survival periods, such as taxes and capitalization.[16] R&W relating to taxes and capitalization account for two of the three most common claims relating to breaches of R&W.[17] In 2022, taxes and capitalization R&W accounted for 45 percent and 9 percent of such claims, respectively.[18]

R&W Insurance Coverage Solutions

Given the above, it is not surprising that the use of R&W insurance has increased over the years, as these insurance policies respond to cover loss resulting from a breach of a representation or warranty. A party making a representation or warranty commits a breach if the representation or warranty proves to be inaccurate. Where R&W insurance is available, the non-breaching party may seek to recover its losses from the R&W insurer instead of seeking recovery from an established escrow account or directly from the seller under the transaction agreement. R&W insurance is often preferred over escrow accounts because escrow funds cannot be used by either party during the period specified in the transaction agreement; R&W insurance frees up the capital that would otherwise be tied up in escrow. Plus, if R&W insurance negates the need for an escrow account, or lessens the amount needed, the seller may receive all of the purchase price, or more of it, at closing. Some deals may still require the seller to indemnify for claims within the R&W insurance retention and fund an escrow account in that amount, but that amount is necessarily smaller than if there were no R&W insurance.

To recover under an R&W policy, the non-breaching party usually must establish a breach of a covered representation or warranty and a loss resulting from such breach. Policyholders should be aware, however, that the definition of “breach” under the policy may carve out certain representations and warranties. In other words, the policyholder should not assume that breach of a certain representation or warranty is covered just because the representation or warranty is included in the transaction agreement. Some representations and warranties may be explicitly excluded from the policy’s definition of breach or otherwise carved out. Under these circumstances, the R&W policy will not cover any losses resulting from an inaccuracy in the excluded representation or warranty. Accordingly, companies should advocate for coverage of specific representations and warranties, especially those that often lead to disputes. Compared to other kinds of policies, the terms of the R&W policy are typically more negotiable.

The negotiation and purchase of R&W policies can be an integral part of the due diligence process. While generally an important part of M&A transactions, due diligence becomes integral to obtaining an R&W insurance policy. R&W insurers will seek to mitigate the risk they acquire by ensuring that the buyer has completed an appropriate amount of due diligence. Such due diligence frequently includes an in-depth legal, tax, and accounting review, including memorandums addressing “red flags” or potential issues on the aforementioned topics. Underwriters review such information along with the corresponding data room to understand the depth and accuracy of the due diligence conducted. In fact, insurers may access the data room and request copies of any diligence reports that may impact the underwriting of the R&W policy.

Both buyers and sellers can procure R&W insurance. One key difference between buyer-side and seller-side policies is that under a buyer-side policy, the buyer makes the claim against the insurer for the losses incurred because of the seller’s breach. In contrast, under a seller-side policy, the seller pays the buyer for the seller’s breach of a covered representation, and then the seller may make a claim against the insurer for reimbursement.

While both buyers and sellers can procure R&W insurance, buyer-side policies are more common. Buyer-side policies typically offer broader coverage than seller-side policies. For example, a buyer-side policy usually covers seller fraud, while a seller-side policy will often exclude coverage for fraud. Buyer-side policies can also extend the survival period for the representations and warranties, meaning the buyer has more time to determine whether a breach occurred. For this reason, survival clauses are more prevalent in deals not involving R&W insurance. In other words, because an R&W policy has its own survival clause, the insurance may eliminate the need for a survival clause in the transaction agreement. Importantly, R&W insurance—regardless of which party purchases the policy—allows both parties to potentially avoid post-closing disputes and related expenses, including the costs of arbitration and litigation.

Takeaways

Recent M&A trends and the forecasts for the upcoming year highlight the importance of mitigating the risks and costs associated with disputes arising from transactions. As deals increase in value and frequency, companies may become more susceptible to potential losses. R&W insurance, in particular, is an important tool for mitigating losses that arise from inaccurate representations and warranties made by the seller or target company during the transaction. The R&W insurance market has continued to evolve, and like transaction agreements, insurance policies require negotiations and careful review of specific policy language, as coverage disputes often arise. As a result, companies should consult counsel with comprehensive expertise and experience in M&A deals, as well as competent coverage counsel to limit losses and maximize insurance recovery where losses occur.


  1. Brian Levy, The M&A starting bell has rung. Are you ready?, PwC (Jan. 23, 2024).

  2. Capstone Partners & IMAP Survey Finds Middle Market M&A Outperforms Broader Market Despite Global Deal Flow Decline, Capstone Partners (Jan. 30, 2024).

  3. Berkeley Research Group, M&A Disputes Report 2022: Global Economic Headwinds Impact M&A Market and Drive Disputes (2022), 23.

  4. American Bar Association Business Law Section Mergers & Acquisitions Committee, 2023 Private Target M&A Deal Points Study (US Deals) (2023).

  5. Stephen M. Kotran, Material Adverse Change Provisions: Mergers and Acquisitions, Practical Law Practice Note 9-386-4019.

  6. SRS Acquiom, 2023 M&A Deal Terms Study (2023), 29–30.

  7. Id.

  8. Daniel Avery, 2021 Trends in Private Target M&A: The ‘Materiality Scrape,’ Bloomberg Law (June 2022).

  9. SRS Acquiom, 2023 M&A Deal Terms Study (2023), 56.

  10. Novolex Holdings, LLC v. Illinois Union Ins. Co., No. 655514/2019 (N.Y. Sup. Ct. Jan. 18, 2024).

  11. Id.

  12. SRS Acquiom, 2023 M&A Deal Terms Study (2023), 32.

  13. SRS Acquiom, 2023 M&A Deal Terms: Three Trends to Watch (2023), 2.

  14. SRS Acquiom, 2023 M&A Deal Terms Study (2023), 53.

  15. Id. at 59.

  16. Id. at 60.

  17. SRS Acquiom, 2022 M&A Claims Insights Report (2022), 14.

  18. Id.

Recent Developments in Artificial Intelligence and Blockchain Cases 2024

Editor

Bradford K. Newman

Co-Chair of the ABA AI and Blockchain Subcommittee
Chair of North America Trade Secrets Practice
Baker McKenzie
600 Hansen Way
Palo Alto, CA 94304
(650) 856-5509
[email protected]

Assistant Editor

Adam Aft

Partner, IPTech
Chair, North America Technology Transactions Practice
Baker McKenzie
300 E. Randolph St., Suite 5000
Chicago, IL 60001
(312) 861-2904
[email protected]

Contributors

Bryce Bailey, Cynthia Cole, Loic Coutelier, Alex Crowley, Lothar Determann, Rachel Ehlers, Jacqueline Gerson, Sinead Kelly, Mackenzie Martin, Avi Toltzis, and Jennifer Trock



§ 8.1. Introduction


This year’s Chapter comes at the conclusion of perhaps the busiest 12 months in the history of both AI and blockchain—two innovative technologies that continue to proliferate across industries and use cases. The legal issues presented by GenAI in particular, coupled with renewed domestic regulatory and enforcement interest in both of these fields, have resulted in a year filled with big court cases, legislative proposals, and complex issues for business law practitioners and judges.

The goal of this Chapter has never been to report on any case that merely references or mentions “AI” or “blockchain.” Rather, our goal is to produce a practical guide for business law practitioners who seek to enhance their understanding of these areas and to identify clear trends that are relevant for lawyers and business court judges. This year, in the (Gen)AI arena, the focus continues to be on IP copyright battles centered on the tension between allegations of algorithmic “infringement” and defenses of “fair use.” The key players at the forefront of AI development will also continue to shape this area of law, both through their technological developments and through potential disputes (for example, Elon Musk’s recent complaint against OpenAI in California court). Additional areas of litigation and regulatory focus with regard to AI algorithms and their use center on bias, transparency, and personal privacy. Finally, as the well-publicized Avianca and Michel (criminally convicted former member of the popular group The Fugees) cases make clear, judges and State Bar regulators are placing increased focus on devising and promulgating rules for lawyers that govern the ethical use of GenAI in all aspects of practicing law.

In the blockchain space, the government continues to push its view that all crypto other than Bitcoin is a security and to pursue enforcement actions against retail exchanges and issuers. The spectacular collapse of Sam Bankman-Fried and FTX, coupled with the recent guilty plea to a federal charge by Changpeng Zhao (“CZ”), the founder of Binance, makes clear that the decentralized promise of blockchain has, for the last several years, been waylaid by highly centralized projects that are susceptible to the familiar economic shenanigans long present on Wall Street and in traditional finance. We will have to wait a bit longer, likely into 2025 and beyond, for the United States Supreme Court to decide if and to what extent securities laws apply to this technology, and whether various actions by agencies like the Securities and Exchange Commission are proper or have exceeded lawful bounds. However, when it comes to crypto, government agencies and the plaintiffs’ bar will continue to train their sights on those viewed as defrauding retail investors.

I am often asked to present CLEs that teach the law (and technology) of AI and blockchain to judges, practitioners, clients, and law students across the country. In October 2023, I was invited to testify as an AI expert before a United States Senate AI Subcommittee hearing focused on responsibly legislating AI in the employment context, and I continue to represent clients, both domestically and internationally, in all aspects of AI and blockchain matters (from governance, oversight, and compliance to litigation). People often ask me, “When will Congress pass comprehensive regulation in these areas?” My answer is that while I believe Congress will eventually act, in the meantime the de facto initial legal rules will be developed by the decisions rendered in the nationwide trial and appellate courts confronted almost daily with these cases and the complicated issues they raise. For those of us who love and are intrigued by these technologies, it is our responsibility to understand their legal evolution and, where possible, to help shape the law. Hopefully, this Chapter does its small part each year in educating and empowering business law practitioners—whether new to the field or experienced veterans—to participate in this fascinating and quickly developing ecosystem.

As in prior years, we made certain judgments as to what should be included. We omitted cases decided prior to 2023 that were reported in previous iterations of the Chapter after evaluating whether there were any significant updates to those cases with respect to AI; in most cases there were not. And because AI is the subject of a rapidly multiplying number of legislative proposals, we omitted the 2020–2022 legislative updates included in prior years’ Chapters and focus on legislative trends from 2023.

Finally, I want to thank my colleagues Adam Aft, Bryce Bailey, Cynthia Cole, Loic Coutelier, Alex Crowley, Lothar Determann, Rachel Ehlers, Jacqueline Gerson, Sinead Kelly, Mackenzie Martin, Avi Toltzis, and Jennifer Trock for their assistance in preparing this chapter.

We look forward to continuing to track the trends in AI and blockchain for the next several years.


§ 8.2. Artificial Intelligence Cases of Note


§ 8.2.1. United States Supreme Court

There were no qualifying decisions by the United States Supreme Court in 2023.

Chief Justice Roberts, however, noted AI as the latest technological frontier in the 2023 Year-End Report on the Federal Judiciary.[1] Assessing the current state and potential of AI use in the federal courts, the Chief Justice observed that “studies show a persistent public perception of a ‘human-AI fairness gap,’ reflecting the view that human adjudications, for all of their flaws, are fairer than whatever the machine spits out.”[2]

§ 8.2.2. First Circuit

There were no qualifying decisions within the First Circuit in 2023.

Pending Cases of Note

Baker v. CVS Health Corporation, 1:23-cv-11483 (D. Mass. Filed June 30, 2023). The plaintiff job applicant alleges that CVS contracts to use HireVue, an AI-based job candidate screening tool whose video interview sessions, HireVue claims, analyze an applicant’s facial expressions to identify lies and embellishments. Plaintiff filed a putative class action suit on behalf of similarly situated job applicants, claiming CVS failed to provide candidates with written notice of its use of a lie detector test, as required by Massachusetts law. In February 2024, the court denied CVS’s partial motion to dismiss.

§ 8.2.3. Second Circuit

Doe v. EviCore Healthcare MSI, LLC, No. 22-530-cv, 2023 U.S. App. LEXIS 4794 (2d Cir. Feb. 28, 2023). The Second Circuit Court of Appeals affirmed the district court’s dismissal of False Claims Act claims under Rule 9(b) because plaintiffs failed to plead fraud with sufficient particularity. Plaintiffs asserted that service provider eviCore deployed artificial intelligence systems to approve health insurance requests based on flawed criteria and without manual review and that, as a result, eviCore provided worthless services to insurance companies and caused those insurance companies to bill the government for unnecessary and fraudulently approved medical services. The court held that “the services eviCore provided were not so worthless that they were the equivalent of no performance at all.”

In re Celsius Network LLC, 655 B.R. 301 (Bankr. S.D.N.Y. 2023). A creditor in a bankruptcy dispute submitted a report from its valuation expert, Hussein Faraj, that was written by generative AI at Mr. Faraj’s direction. The court found that while the written report was inadmissible because it lacked reliability, the expert could give live testimony in a bench trial.

Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 U.S. Dist. LEXIS 108263 (S.D.N.Y. June 22, 2023). The United States District Court for the Southern District of New York sanctioned attorneys for misconduct because they included cites to non-existent cases in motions and made misleading statements to the court. Attorneys representing an individual plaintiff against an airline had used ChatGPT to research case law. ChatGPT delivered inaccurate output, including citations to cases that did not exist. When opposing counsel called the existence of the cases into question, plaintiff’s counsel went back to ChatGPT and asked for full copies of the cases. ChatGPT delivered excerpts of cases that did not exist, citing to other cases that did not exist.

Pending Cases of Note

Authors Guild, et al. v. OpenAI Inc. et al., No. 1:23-cv-8292 (S.D.N.Y. Filed Sept. 19, 2023). Plaintiff writers filed a putative class action against defendant AI developers, who created AI that can make derivative works based on, mimicking, summarizing, or paraphrasing plaintiffs’ works, without seeking permission or a license. The plaintiffs allege this conduct amounts to (1) direct infringement, (2) vicarious infringement, and (3) contributory infringement of their copyrights.

Basbanes et al. v. Microsoft Corp. et al., No. 1:24-cv-00084 (S.D.N.Y. Filed Jan. 5, 2024). Plaintiff journalists allege that the defendant AI developers use their written works to train generative AI models, constituting (1) direct infringement, (2) vicarious infringement, and (3) contributory infringement of the journalists’ copyrights.

Huckabee et al. v. Meta Platforms, Inc. et al., No. 1:23-cv-09152 (S.D.N.Y. Filed Oct. 17, 2023). Former Arkansas governor Mike Huckabee filed this copyright infringement action on behalf of a proposed class of authors against Meta, Microsoft, Bloomberg, and artificial intelligence research institute EleutherAI, claiming that the defendants trained their AI tools on data sets that included the 183,000 e-book “Books3” dataset without the plaintiffs’ permission. The complaint alleges these actions constitute (1) direct copyright infringement, (2) vicarious copyright infringement, (3) removal of copyright management information in violation of the DMCA, (4) conversion, (5) negligence, and (6) unjust enrichment. In December 2023, the claims against the Meta and Microsoft defendants were transferred to the Northern District of California to be consolidated with the Kadrey lawsuit (see below), with the claims against the Bloomberg defendants remaining in the Southern District of New York. Bloomberg filed a motion to dismiss, in response to which the plaintiffs filed an amended complaint withdrawing the indirect copyright infringement, DMCA, and state-law claims.

Sancton v. OpenAI Inc. et al., No. 1:23-cv-10211 (S.D.N.Y. Filed Nov. 21, 2023). Plaintiff authors filed a putative class action suit against OpenAI challenging ChatGPT and its underlying “large language models,” which use the copyrighted works of thousands of authors as a training dataset. Plaintiffs allege the training amounts to direct and contributory copyright infringement.

The New York Times Co. v. Microsoft Corp. et al., No. 1:23-cv-11195 (S.D.N.Y. Dec. 27, 2023). The New York Times sued Microsoft and OpenAI. The Times alleged that OpenAI created unauthorized reproductions of Times works during the training of the large language models underlying ChatGPT, reproduced verbatim excerpts of Times content in response to user prompts, misappropriated referrals, and generated hallucinations that falsely attributed statements to the Times. The complaint alleges these actions constitute: (1) copyright infringement, (2) vicarious copyright infringement, (3) contributory copyright infringement, (4) removal of copyright management information in violation of the DMCA, (5) common law unfair competition by misappropriation, and (6) trademark dilution. In February 2024, the defendants moved to dismiss parts of the direct infringement claims, as well as for full dismissal of the contributory infringement, DMCA, and unfair competition claims.

§ 8.2.4. Third Circuit

Thomson Reuters Enter. Ctr. GmbH v. Ross Intel. Inc., No. 1:20-cv-613-SB, 2023 U.S. Dist. LEXIS 170155 (D. Del. Sep. 25, 2023). Plaintiff alleged that defendant, an artificial intelligence startup, illegally infringed plaintiff’s copyrighted content by using that content to train its machine learning search tool. The court largely denied the five summary judgment motions filed by the parties, addressing (1) plaintiff’s copyright-infringement claim (granting summary judgment only on the element of “actual copying”), (2) cross-motions based on fair use (but granting plaintiff’s motion for summary judgment on defendant’s miscellaneous defenses), (3) plaintiff’s claim of tortious interference with contract (but granting partial summary judgment on two elements of tortious interference (existence of a contract and harm) with respect to plaintiff’s bot and password-sharing provisions), and (4) defendant’s claim of tortious interference (but granting defendant’s preemption defense with respect to plaintiff’s anti-competition tortious-interference claim).

Recentive Analytics, Inc. v. Fox Corp., Civil Action No. 22-1545-GBW, 2023 U.S. Dist. LEXIS 166196 (D. Del. Sep. 19, 2023). The court granted defendant’s motion to dismiss claims that defendant used patented machine-learning systems to develop enhancements for scheduling and broadcasting of local programming. The court found that the claims were directed to patent-ineligible material, as both claims were directed to abstract ideas and the machine learning involved no inventive concept.

§ 8.2.5. Fourth Circuit

There were no qualifying decisions within the Fourth Circuit in 2023.

§ 8.2.6. Fifth Circuit

Commodity Futures Trading Comm’n v. Mirror Trading Int’l Proprietary Ltd., No. 1:22-cv-635-LY, 2023 U.S. Dist. LEXIS 76759 (W.D. Tex. Apr. 24, 2023). The court ruled that the CFPB acted outside the authority granted to it by Congress when it updated its examination manual for financial institutions to broaden its authority over unfair, deceptive, or abusive acts to include discriminatory acts. For our purposes, the notable portion is the court’s discussion of the CFPB’s authority to regulate new technologies, like AI, by including discrimination. Had the CFPB been allowed to alter the manual to include discrimination, financial institutions might be more limited in how they can use new technologies, including those with algorithmic decision-making, as they would be required to provide explanations under the Equal Credit Opportunity Act.

§ 8.2.7. Sixth Circuit

Pending Cases of Note

McComb v. Best Buy Inc., No. 3:23-cv-28, 2024 U.S. Dist. LEXIS 8492, at *3 (S.D. Ohio Jan. 16, 2024). As part of its order granting leave to a pro se plaintiff to file a second amended complaint, the court required the plaintiff to file an affidavit “verifying that he has not used Artificial Intelligence (‘AI’) to prepare case filings” and prohibited all parties from using AI for the case. Penalties for use of AI in the case included sanctions, contempt, and dismissal of the case.

Bond v. Clover Health Invs., Corp., No. 3:21-cv-00096, 2023 U.S. Dist. LEXIS 24749, at *9–10 (M.D. Tenn. Feb. 14, 2023). The court granted a motion for class certification in relation to a claim that Clover Health Investments Corp. defrauded investors, in part based on false statements regarding use of Clover Health’s AI-powered software called Clover Assistant. The original case, Bond v. Clover Health Invs., Corp., 587 F. Supp. 3d 641 (M.D. Tenn. 2022), is discussed further in the 2023 version of this chapter at “Recent Developments in Artificial Intelligence 2023.”

Ruggierlo, Velardo, Burke, Reizen & Fox, P.C. v. Lancaster, No. 22-12010, 2023 U.S. Dist. LEXIS 160755, at *5 n.5 (E.D. Mich. Sep. 11, 2023). The pro se defendant cited non-existent cases in his objection to the plaintiff law firm’s claims that he failed to pay his legal bills. The court avoided speculating whether the non-existent cases came from the defendant’s “imagination, a generative artificial intelligence tool’s hallucination, both, or something else entirely.” In any event, the non-existent cases wasted time and resources and destroyed the defendant’s opportunity to state legitimate objections. The court warned that citing non-existent cases could lead to sanctions on the citing party.

In re Upstart Holdings, Inc. Sec. Litig., No. 2:22-cv-02935, 2023 U.S. Dist. LEXIS 175451, at *6, *36–*45, *73–*74 (S.D. Ohio Sep. 29, 2023). The court denied a motion to dismiss a securities fraud case relating to statements made about Upstart’s artificial intelligence-based lending platform. Some of Upstart’s statements went beyond puffery and were found to be material misstatements actionable under SEC Rule 10b-5 (prohibiting manipulative and deceptive practices), including statements containing specific but inaccurate descriptions of how the AI model underlying its platform supposedly performed better than traditional FICO-based lending models.

Concord Music Grp., Inc. et al. v. Anthropic PBC, No. 3:23-cv-01092 (M.D. Tenn. Filed Oct. 18, 2023). Several music publishing companies, led by Universal Publishing Group, sued Anthropic PBC, alleging that the artificial intelligence company infringes the plaintiffs’ copyrighted song lyrics with its Claude series of large language AI models without paying the licensing fees that other lyrics aggregators pay. The plaintiffs allege that Anthropic’s activities constitute (1) direct copyright infringement, (2) contributory infringement, (3) vicarious infringement, and (4) removal or alteration of copyright management information. In November 2023, the defendants filed 12(b)(2) and 12(b)(3) motions to dismiss, which were pending at the time of publication.

Barrows et al. v. Humana, Inc., No. 3:23-cv-00654 (W.D. Ky. Filed December 12, 2023). Class action plaintiffs allege that Humana has been using an AI system called nH Predict to wrongfully deny elderly patients care owed to them under Medicare Advantage Plans and that Humana intentionally limits its employees’ discretion to deviate from the nH Predict AI Model’s predictions by setting targets to keep stays at post-acute care facilities within 1% of those predicted by the AI model. According to the complaint, these actions amount to breach of contract, breach of the implied covenant of good faith and fair dealing, unjust enrichment, violations of North Carolina’s unfair claims settlement practices law, and insurance bad faith.

§ 8.2.8. Seventh Circuit

Dinerstein v. Google, LLC, 73 F.4th 502 (7th Cir. July 11, 2023). The University of Chicago and its medical center provided several years of anonymized patient medical records to Google for the purpose of training algorithms that could anticipate future health needs in order to improve patients’ healthcare outcomes. The plaintiff brought a number of claims including breach of contract with respect to a privacy notice, unjust enrichment, tortious interference of contract, and intrusion upon seclusion. The Seventh Circuit affirmed dismissal of the plaintiff’s claims on the basis that the plaintiff lacked standing essentially due to the plaintiff’s failure to allege any plausible, concrete, or imminent injury (i.e., merely being included in an anonymized data set itself was insufficient to establish standing).

Frier v. Hingiss, No. 23-cv-0290-bhl, 2023 U.S. Dist. LEXIS 164077 (E.D. Wisc. Sept. 15, 2023). The court identified briefing rife with errors, including hallucinated case citations, that it suspected may have been the result of AI, admonishing counsel: “To the extent the briefing was prepared using ‘artificial intelligence,’ counsel is reminded that he remains responsible for any briefing he files, regardless of the tools employed.”

Huskey v. State Farm Fire & Cas. Co., No. 22 C 7014, 2023 U.S. Dist. LEXIS 160629 (N.D. Ill. Sep. 11, 2023). Plaintiffs filed a class-action suit against Defendant, alleging Defendant’s use of machine learning to help detect fraud was biased against Black homeowners because it scrutinized certain claims more closely based on race, which resulted in Black homeowners having to clear more hurdles when they submitted claims. One of Plaintiffs’ claims survived the motion to dismiss, specifically a claim under § 3604(b) of the Fair Housing Act. The court also held that Plaintiffs had sufficiently alleged a disparate impact claim because they cited statistical evidence and connected that evidence to the algorithms being used.

§ 8.2.9. Eighth Circuit

Pending Cases of Note

Estate of Gene B. Lokken et al. v. UnitedHealth Group, Inc. et al. (D. Minn. Filed November 11, 2023). Class plaintiffs accuse UnitedHealth of deploying the AI model nH Predict to override physicians’ judgment as to medically necessary care determinations and unlawfully deny patients care owed to them under their Medicare Advantage Plans. The complaint recites claims for breach of contract, breach of the implied covenant of good faith and fair dealing, unjust enrichment, violations of Wisconsin’s unfair claims settlement practices law, and insurance bad faith. In February 2024, the defendants moved for dismissal for lack of jurisdiction. That motion is pending as of publication.

§ 8.2.10. Ninth Circuit

Andersen v. Stability AI Ltd., No. 23-cv-00201-WHO, 2023 U.S. Dist. LEXIS 194324 (N.D. Cal. Oct. 30, 2023). Putative class action on behalf of artists against Stability AI—the developer of Stable Diffusion, an image generation AI tool—as well as against Deviant Art and Midjourney, both of which developed AI products incorporating or using Stable Diffusion. The complaint alleged that because Stable Diffusion was trained on plaintiffs’ works of art to be able to produce images in the style of particular artists, it constitutes direct and indirect infringement of the plaintiffs’ copyrights. The court granted the defendants’ motions to dismiss with respect to the direct infringement claims against Deviant Art and Midjourney and all indirect infringement claims, with leave to amend.

Doe v. Github, Inc., No. 22-cv-06823-JST, 2023 U.S. Dist. LEXIS 86983 (N.D. Cal. May 11, 2023). Software developers alleged that Github, an online hosting service for open source software projects, infringed their privacy and property interests, in addition to myriad other alleged violations under the Digital Millennium Copyright Act (DMCA), the Lanham Act, and other laws, through its development and operation of Copilot and Codex. Copilot and Codex are artificial intelligence-based coding tools that employ machine learning algorithms trained on billions of lines of publicly available code, including plaintiffs’ code on Github repositories. The court dismissed the privacy and property rights claims on the basis that the plaintiffs lacked Article III standing because the allegations failed to establish that the plaintiffs had suffered injury, but it allowed a claim seeking injunctive relief in respect of potential future harms. A claim that Github had unlawfully removed copyright management information in violation of the DMCA also survived dismissal.

Newman v. Google LLC, No. CV 20-cv-04011-VC, 2022 U.S. Dist. LEXIS 238876 (N.D. Cal. Nov. 28, 2022). The court granted the defendant’s motion to dismiss the plaintiff’s claim that YouTube’s algorithm violates the promise in the Community Guidelines because it considers the plaintiffs’ individual characteristics when deciding whether to remove, restrict, or monetize content, in part because the complaint does not adequately allege that the plaintiffs have been treated differently based on those characteristics.

Newman v. Google LLC, No. CV 20-cv-04011-VC, 2023 U.S. Dist. LEXIS 144686 (N.D. Cal. Aug. 17, 2023) The court granted the defendant’s motion to dismiss the plaintiff’s claim that YouTube’s content-moderating algorithm discriminates against them based on their race (the plaintiffs are African American and Hispanic content creators) in violation of YouTube’s promise to apply its Community Guidelines (which govern what type of content is allowed on YouTube) to everyone equally—regardless of the subject or the creator’s background, political viewpoint, position, or affiliation. The court found that plaintiffs had not adequately alleged the existence of a contractual promise.

Rivera v. Amazon Web Servs., No. 2:22-cv-00269, 2023 U.S. Dist. LEXIS 129517 (W.D. Wash. Jul. 26, 2023). The District Court for the Western District of Washington denied defendant’s motion to dismiss plaintiff’s claim that defendant’s facial recognition software (using biometric data) was used by defendant’s clients without those clients properly notifying members of the public of the use.

Mobley v. Workday, Inc., No. 23-cv-00770-RFL, 2024 U.S. Dist. LEXIS 11573 (N.D. Cal. Jan. 19, 2024). Plaintiff, an African-American man over the age of 40 with anxiety and depression, applied for 80 to 100 jobs with companies that use the defendant’s applicant screening tools. Mobley alleged that the screening tools Workday offers employers discriminate on the basis of age, race, and disability. The court granted Workday’s motion to dismiss on the basis that the plaintiff had failed to exhaust his remedies with the Equal Employment Opportunity Commission as to his intentional discrimination claims and because the factual allegations of the complaint were insufficient to demonstrate that Workday is an “employment agency” under the anti-discrimination statutes at issue.

Pending Cases of Note

Jobiak, LLC v. Botmakers LLC, No. 2:23-cv-08604-DDP-MRW (C.D. Cal. Filed Oct. 12, 2023). Plaintiff AI-based recruitment platform alleges that the defendant has been “scraping” job posting data from plaintiff’s proprietary database and incorporating its contents directly into its own job listings. The complaint alleges these actions amount to: (1) copyright infringement, (2) violations of the Computer Fraud and Abuse Act, (3) violations of the California Comprehensive Computer Data Access and Fraud Act, (4) violations of the California Unfair Competition Law, and (5) removal of copyright management information under DMCA § 1202.

Kadrey et al. v. Meta Platforms, Inc., No. 3:23-cv-03417 (N.D. Cal. Filed July 7, 2023). Three authors filed a putative class action suit against Meta challenging LLaMA, a set of large language models trained in part on copyrighted books, including plaintiffs’. Unlike GPT models, LLaMA is “open source” and allows developers to create variations for free. Plaintiffs alleged, on behalf of all those similarly situated, the following causes of action: (1) direct copyright infringement, (2) vicarious copyright infringement, (3) removal of copyright management information under DMCA § 1202(b), (4) unfair competition under California law, (5) negligence, and (6) unjust enrichment. In November 2023, the court dismissed all claims with leave to amend except for the negligence claim, which was dismissed with prejudice.

T. et al. v. OpenAI LP et al., No. 3:23-cv-04557-VC (N.D. Cal. Filed Sept. 5, 2023). This class action lawsuit arises from the defendants’ allegedly unlawful and harmful conduct in developing, marketing, and operating their AI products, including ChatGPT-3.5, ChatGPT-4.0, Dall-E, and Vall-E (the “Products”), which plaintiffs claim use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge. Plaintiffs seek relief under (1) the Electronic Communications Privacy Act, (2) the Computer Fraud and Abuse Act, (3) the California Invasion of Privacy Act, (4) the California Unfair Competition Law, (5) negligence, (6) invasion of privacy, (7) intrusion upon seclusion, (8) larceny/receipt of stolen property, (9) conversion, (10) unjust enrichment, and (11) New York’s General Business Law. The defendants filed motions to dismiss, which were pending at the time of publication.

Tremblay, et al. v. OpenAI, Inc., et al., No. 3:23-cv-03223-AMO (N.D. Cal. Filed June 28, 2023). Two authors filed a class action suit against OpenAI challenging ChatGPT and its underlying large language models, GPT-3.5 and GPT-4, which use the copyrighted works of thousands of authors as a training dataset. The training dataset allows the GPT programs to produce written text. The plaintiffs allege, on behalf of similarly situated authors of copyrighted works used to train the GPT models, that such conduct constitutes: (1) direct copyright infringement, (2) vicarious copyright infringement, (3) removal of copyright management information under DMCA § 1202(b), (4) unfair competition under California law, (5) negligence, and (6) unjust enrichment.

Main Sequence, Ltd. v. Dudesy, LLC, No. 2:24-cv-00711 (C.D. Cal. Filed Jan. 25, 2024). The estate of the late comedian George Carlin filed suit against the defendant media company alleging that the defendant employed generative AI to create a script for a fake comedy special featuring Carlin and used voice-generation tools to create a “sound-alike” to perform the generated script. The complaint alleges these acts violated copyright law and Carlin’s posthumous right of publicity.

Matsko v. Tesla, Inc., No. 4:22-cv-05240 (N.D. Cal. Filed Sept. 14, 2022). Putative class plaintiffs allege that defendant Tesla’s representations concerning its automobiles’ “Autopilot,” “Enhanced Autopilot,” and “Full Self-Driving Capability” features mislead consumers about the company’s autonomous driving capabilities, in violation of the Magnuson-Moss Warranty Act, the California Unfair Competition Law, the California Consumer Legal Remedies Act, and false advertising standards, and constitute breaches of express and implied warranties. In September 2023, the court granted the defendant’s motion to dismiss with leave for the plaintiff to amend the complaint.

Faridian v. DoNotPay, Inc., No. 3:23-cv-01692 (N.D. Cal. Filed Mar. 3, 2023). Class action plaintiff brought this action against DoNotPay, which bills itself as the “world’s first robot lawyer,” alleging that the defendant violated California’s Unfair Competition Law by engaging in the unauthorized practice of law.

§ 8.2.11. Tenth Circuit

There were no qualifying decisions within the Tenth Circuit in 2023.

§ 8.2.12. Eleventh Circuit

Athos Overseas Ltd. Corp. v. YouTube, Inc., No. 21-21698-Civ, 2023 U.S. Dist. LEXIS 85462 (S.D. Fla. May 16, 2023). Plaintiff, a video producer who holds the rights to Mexican films allegedly uploaded to defendant YouTube without authorization, contended that YouTube violated the Digital Millennium Copyright Act (DMCA), and thereby abandoned its safe harbor protections, by failing to employ its advanced video detection software (Content ID) to identify infringing videos uploaded to the platform. The plaintiff argued that because YouTube has access to automated software that scans uploaded videos to identify infringing content, it has knowledge of every infringing video on its platform. The court granted YouTube’s summary judgment motion, citing Second and Ninth Circuit decisions rejecting the suggestion that an ISP can have specific knowledge of non-noticed infringements simply because it has access to video-scanning capabilities.

United States v. Grimes, No. 1:20-CR-00427-SCJ, 2023 U.S. Dist. LEXIS 40282, at *1 (N.D. Ga. Mar. 10, 2023). During his entry into the US at the Atlanta airport, defendant was flagged by facial recognition software used during preliminary entry processing, which identified him as potentially linked to child sexual exploitation material. Based on this, officers searched the defendant and found recently deleted photos of child pornography on his electronic devices. Defendant moved to suppress the evidence obtained from this search on the basis that the officers lacked reasonable suspicion to conduct the search because the Government had provided no evidence of the reliability of the facial recognition software. The court rejected the motion, reasoning in part that, absent some suggestion that a facial recognition system is unreliable, a match generated by the system is a sufficient basis for reasonable suspicion.

§ 8.2.13. D.C. Circuit

Thaler v. Perlmutter, Civil Action No. 22-1564 (BAH), 2023 U.S. Dist. LEXIS 145823 (D.D.C. Aug. 18, 2023). The court granted a motion for summary judgment filed by Perlmutter, Register of Copyrights and Director of the United States Copyright Office, finding that the Copyright Office properly denied copyright registration for a piece of visual art autonomously created by a computer algorithm running on a machine (i.e., wholly created by AI) because U.S. copyright law protects only works of human creation. The court noted that copyright has never stretched so far as to protect works generated by new forms of technology operating absent any guiding human hand; human authorship is a bedrock requirement of copyright.

United States v. Michel, No. 1:19-cr-00148 (D.D.C. Jan. 11, 2024). Michel, a former member of the Fugees, was convicted in April 2023 on 10 criminal counts, including for waging a back-channel lobbying campaign to end an investigation of Malaysian tycoon Jho Low. Michel’s new legal team is seeking a new trial based on assertions that his prior lawyer used an experimental generative AI program to draft his closing argument and failed to disclose that he had a financial stake in the company that developed it.

§ 8.2.14. Court of Appeals for the Federal Circuit

There were no qualifying decisions within the Court of Appeals for the Federal Circuit in 2023.


§ 8.3. Administrative


§ 8.3.1. Patent Trial and Appeal Board

Ex parte Iaremenko, et al., 2022 Pat. App. LEXIS 5639 (PTAB Nov. 22, 2022). The USPTO Patent Trial and Appeal Board sustained an Examiner’s decision to reject claim 1 under 35 U.S.C. § 112 for lacking adequate written description support, where the relevant claim language recited a “PLD machine learning module . . . configured to detect an anomaly in at least one . . . of the ingress traffic and the egress traffic, and to send an anomaly indication to the PLD firewall.” (Additional rejections were sustained as well, but this summary focuses on the Section 112 issue pertaining to machine learning.) The applicant argued that the specification described “how the PLD machine learning process is trained and how anomalies are defined.” But the Board nonetheless determined that the applicant failed to address the Examiner’s finding that the written description did not include the specific technique (i.e., classification, regression, dimensionality reduction, clustering) used to identify deviations in patterns relating to the ranges of various ingress and/or egress parameters. “In other words, Appellant’s argument that the machine learning module is defined by its training process does not apprise us of error in the Examiner’s finding that the training process disclosed in the Specification defines the patterns against which anomalies may be identified but fails to describe how (or by what technique) the PLD machine learning module learns such patterns in the parameters during the simulated transactions learning process.”
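
To make the Board’s distinction concrete, the following minimal sketch illustrates what naming a “specific technique” for learning traffic patterns and flagging deviations might look like in practice. It is purely hypothetical and not drawn from the application at issue: the feature names, values, and the choice of an Isolation Forest (one clustering-adjacent anomaly-detection technique among the many the Board enumerated) are all invented for illustration. The sketch is written in Python using the scikit-learn library.

    # Hypothetical illustration only: nothing here is drawn from the
    # application at issue; feature names and values are invented.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=0)

    # Simulated "learning process": ingress/egress parameters (e.g.,
    # packet rate, payload size) observed during simulated transactions.
    normal_traffic = rng.normal(loc=[1000.0, 512.0],
                                scale=[50.0, 20.0],
                                size=(500, 2))

    # The named "specific technique": an Isolation Forest, which learns
    # the normal ranges by randomly partitioning the feature space so
    # that outliers are isolated in few partitions.
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_traffic)

    # New observations: one typical sample and one anomalous burst.
    new_traffic = np.array([
        [1010.0, 515.0],   # within the learned pattern
        [5000.0, 4096.0],  # deviates sharply from the learned ranges
    ])

    # predict() returns 1 for inliers and -1 for anomalies.
    for sample, label in zip(new_traffic, detector.predict(new_traffic)):
        print(sample, "-> anomaly" if label == -1 else "-> normal")

Under the Board’s reasoning, it was disclosure at this level of specificity, identifying the technique by which the module learns the patterns against which anomalies are measured, that the specification failed to supply.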


§ 8.4. Legislation


With the wider adoption of AI tools by both consumers and businesses, as well as a growing awareness of the risks and downsides of these tools, 2023 witnessed a profusion of legislative proposals at both the federal and state levels. As the EU moved closer toward enactment of its AI Act, a comprehensive legislative package that will regulate AI across a wide range of industries and use cases, the approach in the US has been comparatively piecemeal and tentative. In the absence of a monolith like the AI Act, however, proposed legislation addressing specific aspects or risks of AI has proliferated in US legislatures.

Although it would be beyond the scope of this chapter to list each of the scores of proposed statutes and regulations exhaustively, several broad trends are noteworthy.

§ 8.4.1. Policy and Governance

Some of the most prominent legislative activity has concerned the larger policy and governance issues that have emerged with the new technology. A common feature of these proposals is the establishment of new bodies to oversee and regulate the activity of both private and public actors utilizing AI. For example, the Digital Platform Commission Act of 2023 (S.1671), introduced in the Senate in May 2023 and subsequently referred to the Committee on Commerce, Science, and Transportation, would mandate the establishment of a “Federal Digital Platform Commission” to provide comprehensive regulation of digital platforms and AI products, with the intention of protecting consumers, promoting competition, and safeguarding the public interest. The proposed five-member Commission would have the power to hold hearings, conduct investigations, levy fines, and engage in public rulemaking to establish regulations for digital platforms. Other similar federal proposals include the ASSESS AI Act of 2023/Assuring Safe, Secure, Ethical, and Stable Systems for AI Act (S.1356) (aiming to establish a cabinet-level AI Task Force to identify existing policy and legal gaps in the federal government’s AI policies) and the National AI Commission Act (H.R.4223) (seeking to create a bipartisan independent commission within the legislative branch, the “National AI Commission,” focused on AI). Similarly, some bills have sought to situate policymaking frameworks within the existing regulatory apparatus, such as the Oversee Emerging Technology Act of 2023 (S.1577), which would mandate that certain federal agencies appoint a senior official as an emerging technology lead to advise on the responsible use of emerging technologies, including AI, offer expertise on policies and practices, collaborate with interagency coordinating bodies, and contribute input for procurement policies. These themes have extended to state legislation as well; Illinois House Bill 3563, passed in August 2023, establishes a Generative AI and Natural Language Processing Task Force to investigate and report on generative artificial intelligence software and natural language processing software.

At the state level, a related theme has been oversight of the procurement and deployment of AI systems by state agencies. California Assembly Bill 302 (AB 302), enacted in October 2023, requires the Department of Technology, in coordination with other interagency bodies, to conduct, on or before September 1, 2024, a comprehensive inventory of all high-risk automated decision systems that have been proposed for use, development, or procurement by, or are being used, developed, or procured by, state agencies. Likewise, in June 2023, Connecticut passed An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy (SB 1103), which requires annual audits of AI systems used by state agencies and the establishment of policies regarding state agency use of AI systems. Texas’ House Bill 2060, enacted in June 2023, combines these characteristics, both calling for the creation of a seven-member Artificial Intelligence Advisory Council to study the development of AI and requiring state agencies to submit inventory reports of the automated decision systems they use.

§ 8.4.2. Algorithmic Accountability

Another significant legislative theme in 2023 has been the prevention of discrimination and other harms that may arise from our growing reliance on algorithms to inform, drive, and in some cases supplant human decision-making.

At the federal level, the White House led the charge by issuing an Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government in February 2023, which directs federal agencies to develop and use artificial intelligence in ways that advance equity and root out bias. The AI Accountability Act (H.R.3369), introduced in the House in May 2023 and subsequently referred to the Committee on Energy and Commerce, would direct the Assistant Secretary of Commerce for Communications and Information to conduct a comprehensive study on accountability measures for AI systems.

State legislatures largely focused on the potential introduction of algorithmic bias from the use of automated decision systems in the provision of particular services, especially financial services and healthcare, or in employment contexts. New Jersey’s Senate Bill S1402, introduced in February 2023, would prohibit financial institutions from using automated decision-making tools to discriminate against members of a protected class in making decisions regarding the extension of credit or eligibility for insurance or health care services. Under Illinois’ HB 3773, introduced in February 2023, employers that use predictive data analytics in their employment decisions would be restricted from using race (or zip code when used as a proxy for race) in those decisions. California’s AB1502, introduced in February 2023, would prohibit health care service plans from discriminating on the basis of race, color, national origin, sex, age, or disability through the use of clinical algorithms in their decision-making.

Some legislatures took aim at algorithmic bias more broadly by considering mandates not limited to particular use cases or services. These include Washington, D.C.’s Stop Discrimination by Algorithms Act (B25-0114), introduced in February 2023, which would prohibit the use of algorithmic eligibility determinations in a discriminatory manner.

§ 8.4.3. Transparency

A related area of legislative interest concerns laws that promote transparency in the use of AI. These laws often require disclosures to customers or the public of an organization’s deployment of AI and can apply in a variety of contexts. Some laws may also include provisions requiring audits or human oversight of some AI functions, especially where the AI is being used to assist processes that produce significant effects on a person’s rights or interests.

At the federal level, the AI Labeling Act of 2023 (S. 2691), introduced in July 2023, would mandate clear disclosures for all AI-generated content (including images, videos, audio, multimedia, and text) and would impose obligations on developers and licensees of generative AI systems to prevent the removal of these disclosures. Likewise, the proposed REAL Political Advertisements Act of 2023 (S.1596) would mandate the inclusion of a disclaimer in political advertisements that utilize AI to generate images or video content, seeking to increase transparency and accountability in political campaigns and advertisements that make use of AI.

State lawmakers have also sought to promote transparency in the use of AI across a variety of contexts. California’s AB 331 on automated decision tools, introduced in January 2023, would require deployers to disclose the use of automated decision tools to individuals subject to consequential decisions made with those tools. New York’s A7858, introduced in July 2023, would amend the labor law to require disclosure when an employer uses an automated employment decision tool to screen candidates. In Massachusetts, HB1974, introduced in February 2023, would require mental health care professionals who use AI to provide mental health services to inform patients of such use. Illinois’ legislature introduced the Artificial Intelligence Consent Act (HB3285) in February 2023, which would require a person using artificial intelligence to mimic or replicate another’s voice or likeness, in a manner that would otherwise deceive an average viewer, to provide a disclosure upon publication, unless the person whose voice or likeness is being mimicked consents.

§ 8.4.4. Other

As the adoption of AI has come to permeate ever-wider areas of human activity, so too has the scope of proposed AI regulation grown to encompass areas not previously associated with AI.

On the eve of a momentous general election, and with tremendous attention being paid to the fairness and security of elections, 2023 marked the arrival of AI considerations to voting laws. In one of the few such state measures actually to pass, the Arizona legislature approved Senate Bill 1565 in April 2023, which would have restricted the use of AI or learning hardware, firmware, or software in voting machines; despite its passage, Arizona’s governor vetoed the legislation. Another focus area has been the potential use of synthetic media, that is, AI-generated video, audio, or images, to mimic candidates and deceive or manipulate voters. For example, Illinois Senate Bill 1742, introduced in February 2023, would amend the election code to make it a misdemeanor for a person to create a “deepfake” video and cause the deepfake video to be published or distributed within 30 days of an election with the intent to injure a candidate or influence an election.

Some proposed laws have sought to mitigate potential harms from “deepfakes” more generally. The No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act of 2024 (No AI FRAUD Act), introduced in January 2024, would establish civil liability for those who publish unauthorized digital depictions or who distribute “a personalized cloning service.” A more focused federal proposal, the Do Not Disturb Act, also introduced in January 2024, would crack down on robocalls using digitally emulated voices and strengthen the Telephone Consumer Protection Act’s (TCPA) protections against such conduct. Notably, in February 2024, the Federal Communications Commission issued a ruling clarifying that the TCPA does indeed apply to the use of AI to make a robocall.

State lawmakers have also focused attention on the risks around “deepfakes.” Foreshadowing some of the provisions of the No AI FRAUD Act, Illinois’ Artificial Intelligence Consent Act (HB3285), discussed above, would impose disclosure requirements on those who use AI to mimic or replicate another’s voice or likeness without consent. Louisiana’s SB175, enacted in August 2023, criminalizes the creation (or possession) of deepfake videos depicting minors engaged in sexual acts.

Myriad other areas of activity have also been subject to proposed AI legislation. For instance, Illinois’ Anti-Click Gambling Data Analytics Collection Act, introduced in February 2023, would restrict online gambling platforms from collecting data from gamblers with the intention of using the data to predict their gambling behavior. A trio of New Jersey bills introduced in 2023 aimed to assuage fears over the potential loss of jobs due to the replacement of human labor with automated processes. These bills proposed measures including requiring the Department of Labor and Workforce Development to track job loss due to automation (Assembly Bill 5150), mandating tuition-free enrollment in public universities for students impacted by automation (Assembly Bill 5224), and providing tax relief for employers who hire workers affected by automation-related job loss (Assembly Bill 5451). And in New Hampshire, House Bill 1599, proposed in January 2024, seeks to affirm the right to use autonomous artificial intelligence for personal defense.


§ 8.5. Blockchain Cases of Note


SEC v. Celsius Network Ltd., et al., No. 1:23-cv-6005, filed 7/13/2023. The SEC alleges that Celsius and its founder and CEO influenced the price of the CEL token and fraudulently raised billions of dollars from investors through unregistered and fraudulent offers and sales, including through its “Earn Interest Program.”

SEC v. Coinbase, No. 1:23-cv-04738, filed on 6/06/2023. The SEC charged Coinbase with operating as an unregistered broker through its Coinbase Prime and Coinbase Wallet offerings and with offering a staking program without first registering with the SEC. The SEC also alleged that Coinbase operated a trading platform allowing U.S. customers to buy, sell, and trade cryptocurrency without registering with the SEC as a broker, national securities exchange, or clearing agency.

SEC v. Justin Sun, et al., No. 1:23-cv-02433, filed on 3/03/2023. Justin Sun and three of his companies, Tron Foundation Limited, BitTorrent Foundation Ltd., and Rainberry Inc., were charged with offering and selling Tronix (TRX) and BitTorrent (BTT) without registering and with manipulating the secondary market for TRX through wash trading. Eight celebrities were also charged in connection with touting TRX and/or BTT without publicly disclosing that they were compensated for doing so.

SEC v. Avraham Eisenberg, No. 1:23-cv-00503, filed on 1/20/2023. The SEC charged Eisenberg with orchestrating an attack on Mango Markets, a cryptocurrency trading platform, through the MNGO governance token, which the SEC alleges was offered and sold as a security.

SEC v. Genesis Global Capital, LLC and Gemini Trust Company, LLC, No. 1:23-cv-00287, filed on 1/12/2023. The SEC alleges that, through the Gemini Earn cryptocurrency asset lending program, Genesis and Gemini engaged in the unregistered offer and sale of securities to U.S. retail investors.

James v. Mek Global Limited and Phoenixfin PTE Ltd d/b/a KuCoin, No. 1:20-cv-02806-GBD-RWL, filed on 3/09/2023. New York Attorney General Letitia James sued KuCoin for failing to register with the State of New York prior to allowing investors to buy and sell cryptocurrencies on its platform.

SEC v. LBRY, Inc., No. 1:21-cv-00260-PB, filed on 3/29/2021, appeal filed on 8/8/2023. The SEC alleged LBRY sold unregistered securities when it issued its own token, LBC, and received approximately $12.2 million in proceeds. Judge Barbadoro of the United States District Court for the District of New Hampshire ordered LBRY to pay a civil penalty of $111,614 and permanently enjoined LBRY from further violations of the registration provisions of the federal securities laws and from participating in unregistered offerings of crypto asset securities.

§ 8.5.1. Sam Bankman-Fried’s Conviction

In 2022, the SEC charged Sam Bankman-Fried with violating Section 17(a) of the Securities Act and Section 10(b) of the Exchange Act and Rule 10b-5 thereunder for organizing a scheme to defraud equity investors in FTX Trading Ltd., a crypto trading platform of which Bankman-Fried was CEO and co-founder.

According to the SEC, while promoting FTX as a safe crypto asset trading platform, Bankman-Fried improperly used FTX customers’ funds for his privately held crypto hedge fund, Alameda Research LLC, and gave Alameda special treatment on the FTX platform.

Bankman-Fried also failed to inform investors of the risk from FTX’s exposure to Alameda’s holdings of overvalued, illiquid assets, such as FTX-affiliated tokens.

In November 2023, Bankman-Fried was convicted in his parallel criminal trial, and his sentencing is scheduled for March 28, 2024.

§ 8.5.2. Indictment of Binance CEO Changpeng Zhao (CZ)

In June 2023, the SEC brought 13 charges against Binance Holdings Limited and its CEO, Changpeng Zhao, for violations of federal securities laws. Binance operates the largest crypto asset trading platform in the world.

The SEC alleged that Binance and Zhao conducted unregistered offers and sales of crypto assets and misled investors.

The SEC further alleged that while Binance and Zhao claimed U.S. customers were restricted from Binance.com, they secretly allowed high-value U.S. customers to continue trading on the platform, and that Zhao and Binance improperly exercised control over customers’ assets, which were then commingled and diverted, including to Zhao’s own entity, Sigma Chain.

In November 2023, in a separate criminal resolution with the Department of Justice, Binance pleaded guilty and agreed to pay over $4 billion for violations of the Bank Secrecy Act, failure to register as a money transmitting business, and violations of the International Emergency Economic Powers Act.

Zhao, a Canadian citizen, pleaded guilty to failing to maintain an effective anti-money-laundering program in violation of the Bank Secrecy Act and stepped down as Binance’s CEO.

§ 8.5.3. Recent Developments from Federal Agencies

On January 3, 2023, the FDIC, together with the Federal Reserve and the Office of the Comptroller of the Currency, issued a joint statement on crypto-asset risks to banking organizations. See Joint Statement on Crypto-Asset Risks to Banking Organizations (fdic.gov).

The joint statement highlights what the agencies perceive to be the key risks associated with crypto-assets and crypto-asset sector participants, including, for example, legal uncertainties related to custody practices, redemptions, and ownership rights, as well as inaccurate or misleading representations and disclosures by crypto-asset companies.

On January 10, 2024, SEC Chair Gary Gensler announced the Commission’s approval of the listing and trading of a number of spot bitcoin exchange-traded product (ETP) shares. See SEC.gov | Statement on the Approval of Spot Bitcoin Exchange-Traded Products.

On February 29, 2024, the House Financial Services Committee advanced a bipartisan measure to eliminate a 2022 SEC staff accounting bulletin on accounting for custodied crypto assets. See McHenry Delivers Opening Remarks at Markup of Fintech, Housing, and National Security Legislation | Financial Services Committee (house.gov).

