Sifting through the Corporate Transparency Act: Key Elements to Understand

Effective January 1, 2024, the Corporate Transparency Act (“CTA”), a new federal law, imposed a filing requirement on many business entities, particularly smaller privately held companies. Soon after, however, on March 1, 2024, a federal judge in the U.S. District Court for the Northern District of Alabama held the CTA unconstitutional as a matter of law.[1] That case will undoubtedly be appealed. In National Small Business United v. Yellen, the district court concluded that the CTA exceeded the Constitution’s limits on congressional authority—specifically characterizing the CTA as regulating incorporation, a “purely internal affair” that is (i) not clearly economic or commercial in nature and (ii) too incidental to tax administration. Consequently, the court declared the CTA unconstitutional and permanently enjoined the United States Department of the Treasury and the Financial Crimes Enforcement Network (“FinCEN”), a bureau of the Treasury, from enforcing the CTA against the plaintiffs in that case, mostly members of the National Small Business Association (“NSBA”), the plaintiff organization that appeared before the court. The effect of the decision is thus narrow and limited: the CTA remains in full effect except as to members of the NSBA and possibly reporting companies in the Northern District of Alabama. The following discussion concisely summarizes the CTA and what it entails so that businesses can be prepared.

The CTA was enacted as part of the Anti-Money Laundering Act of 2020 and applies to entities deemed to be “Reporting Companies,” discussed below. The goal of the CTA is to address concerns about illicit activity conducted through obscure U.S. business entities—including money laundering, the financing of terrorism, tax fraud, human and drug trafficking, counterfeiting, piracy, securities fraud, financial fraud, and acts of foreign corruption—by requiring Reporting Companies to provide governmental authorities with information about their beneficial owners and controlling persons. The focus of the CTA is not on larger companies but rather on smaller and medium-sized legal entities, including shell companies, that generally either:

  1. are not subject to supervision by other regulatory agencies (unlike, for example, entities regulated by the Securities and Exchange Commission or the Commodity Futures Trading Commission, or organizations that are tax-exempt under the Internal Revenue Code), or
  2. employ fewer than twenty-one full-time employees and generate less than $5 million in annual U.S. revenue.

Specifically, a “Reporting Company” is: (a) any company that is created by filing a document with a governmental agency (including a federally recognized Indian Tribe), such as a corporation, a limited liability company, or a limited partnership; or (b) a foreign-formed entity that is registered or registers to do business in the United States. Although the CTA includes twenty-three categories of exempt businesses, those businesses generally are already extensively regulated by the federal government or a state government. Sole proprietorships and general partnerships, for example, also fall outside the CTA because they are not formed by filing a document with a governmental agency.

Reporting Companies are required to file a Beneficial Ownership Information Report, or “BOIR,” with FinCEN (more information can be found at FinCEN’s beneficial ownership information webpage). The BOIR requires information about the legal entity as well as personal information about those in substantial control of the Reporting Company. Such control can derive from (a) ownership interests (the BOIR must include anyone who owns or controls at least 25 percent of the ownership interests of the entity) and/or (b) the right or ability to exercise substantial control over the legal entity through official positions or contractual, familial, or other arrangements. In addition, for a Reporting Company established or registered to do business in the U.S. in 2024 or later, the BOIR must also include information about its “Company Applicant(s),” which generally means (i) the person who directly files the pertinent document with a governmental agency and (ii), where more than one person is involved in the filing of such document, the person primarily responsible for directing or controlling that filing. If a person has reason to believe that a BOIR filed with FinCEN contains inaccurate information and voluntarily submits a report correcting the information within ninety days of the deadline for the original BOIR, then the CTA creates a safe harbor from penalty. In addition, once a Reporting Company has satisfied its CTA obligation by filing a BOIR, it must file an updated BOIR within thirty days of any change to the information about the entity or its beneficial owners, and it must file a corrected BOIR within thirty days of when it becomes aware of, or has reason to know of, any inaccuracy in the information about the entity or its beneficial owners on a filed BOIR. However, a Reporting Company is not required to file an updated or corrected BOIR if information about its Company Applicant changes. The information in a filed BOIR will not be available to the general public, but it will be available to federal and state law enforcement agencies.

Although the legal entity subject to the reporting requirement has the primary obligation to file a BOIR with FinCEN, individuals or other entities may also be found to have violated the CTA (e.g., because they caused the Reporting Company to fail to satisfy its reporting obligations). There are two ways a person can violate the CTA and be liable: (a) reporting violations (i.e., knowingly causing a Reporting Company not to timely file or update its BOIR or providing or assisting in the knowing provision of false or fraudulent information to FinCEN); or (b) disclosure-and-use violations (i.e., knowingly disclosing or using beneficial ownership information provided to FinCEN for an unauthorized purpose). For reporting violations, the CTA establishes: (i) civil penalties of up to $500 for each day a violation continues or has not been remedied; and (ii) criminal penalties of up to $10,000, imprisonment for up to two years, or both. For disclosure-and-use violations, the CTA establishes: (i) civil penalties of up to $500 for each day a violation continues or has not been remedied; and (ii) criminal penalties of up to $250,000, imprisonment for up to five years, or both. Based on the foregoing, FinCEN will determine the appropriate enforcement response for willful failure to report complete or updated beneficial ownership information to FinCEN (or failure to report at all) as required under the CTA.


  1. Nat’l Small Bus. United v. Yellen, No. 5:22-cv-1448-LCB (N.D. Ala. Mar. 1, 2024).

Big Data, Big Problems: The Legal Challenges of AI-Driven Data Analysis

Machine learning and artificial intelligence (AI) are having a moment. Some models are busy extracting information—recognizing objects and faces in video, converting speech to text, summarizing news articles and social media posts, and more. Others are making decisions—approving loans, detecting cyberattacks, recommending bail and sentencing terms, and much more. ChatGPT and other large language models are busy generating text, and their image-based counterparts are generating images. Although these models do different things, all of them ingest data, analyze the data for correlations and patterns, and use those patterns to make predictions. This article looks at some legal aspects of using this data.

Defining Machine Learning and AI

Machine learning and AI are not quite the same, but the terms are often used interchangeably. One version of the Wikipedia entry for AI defines it as “intelligence of machines or software, as opposed to the intelligence of other living beings.” Some AI systems use predefined sets of rules (mostly made by human experts) to make their decisions, while other AI systems use machine learning, in which a model is given data and told to figure out the rules for itself.

There are two basic types of machine learning. In supervised learning, the input data used for model training has labels. For instance, if you were training a model to recognize cats in images, you might give the model some images labeled as depicting cats, and some images labeled as depicting items other than cats. During training, the model uses the labeled images to learn how to distinguish a cat from a non-cat. In unsupervised learning, the training data does not have labels, and the model identifies characteristics that distinguish one type of input from another type of input. In either type of learning, training data is used to train a model, and test or validation data is used to confirm that the model does what it is supposed to do. Once trained and validated, the model can be operated using production data.
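
To make these phases concrete, the following minimal Python sketch (using scikit-learn) trains a classifier on labeled data and then checks it on a held-out validation set. The “cat vs. non-cat” features and labels are synthetic stand-ins, not real image data.

```python
# A minimal supervised-learning sketch: train on labeled data, then validate.
# Features and labels are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # feature vectors for 200 examples
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels: 1 = "cat", 0 = "non-cat"

# Hold out a validation set to confirm the model does what it is supposed to do.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)  # training on labeled data
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```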

Contracting for AI Solutions

Joe Pennell, Technology Transactions Partner at Mayer Brown, notes: “The approach to contracting for AI depends on where your client sits in the AI ecosystem. A typical AI ecosystem contains a number of parties, including talent (e.g., data scientists), tool providers, data sources, AI developers (who may assemble the other parties to deliver an integrated AI system or solution), and the end user, buyer, or licensee of the AI system or solution. The contracts between these parties will each have their own types of issues that will be driven by the unique aspects of specific AI solutions. For example, those might include the training data, training instructions, input/production data, AI output, and AI evolutions to be created during training and production use of the AI.”

Intellectual Property Considerations

In addition to, or in the absence of, clear contract provisions, intellectual property rights may also govern AI models and training data, as well as the models’ inputs and outputs. Patent, copyright, and trade secret issues can all be implicated.

Patents (at least in the United States) protect a new or improved and useful process, machine, article of manufacture, or composition of matter. However, abstract ideas (for example, math, certain methods of organizing human activity, and mental processes), laws of nature, and natural phenomena are not patent-eligible unless integrated into a practical application. Case law delineating what is patent-eligible is a moving target. Thus, a model training or testing method, or a model itself, might be patentable, but not input data (because data is not a process or machine) or output data (because only humans can be inventors—so far).

Copyright (at least in the United States) protects original works of authorship, including literary, dramatic, musical, and artistic works, such as poetry, novels, movies, songs, computer software, and architecture—but not facts, ideas, systems, or methods of operation (although copyright may protect the way in which these things are expressed). Thus, input data, depending on what it is and how it is arranged, might be copyrightable, including as alleged in a much-covered copyright lawsuit recently filed by the New York Times against OpenAI. Because only humans can be authors for copyright purposes (at least so far), protecting AI output via copyright requires that a human played a role in generating the output, and the output must be sufficiently transformed from any copyrighted input data. How much of a role? How much of a transformation? Courts are only beginning to grapple with these questions. In addition, model training/testing methods and the model itself are probably not copyrightable, because they are not original works of authorship.

Trade secrets are information that is not generally known to the public and that confers economic benefit on its holder because of that secrecy; trade secret protection applies only if the holder makes reasonable efforts to maintain the secrecy. So, a model’s architecture, training data, and training method might be protectable as trade secrets, but having to explain model output can defeat the required secrecy.

Privacy Considerations

AI training and input data can often implicate privacy issues: much of that data qualifies as some form of personal data under various federal or state laws.

US enforcement agencies—including the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission, the Federal Trade Commission (FTC), and the Civil Rights Division of the Department of Justice—have made it clear that they will use privacy as a lever to regulate AI. Seven times in the last four years, the FTC has gone so far as to effectively confiscate AI models trained on data that was obtained or maintained in violation of privacy laws. Beyond federal agencies, however, because the US currently lacks a generally applicable, non-sectoral data privacy law, much of the action to protect consumers may fall to the states. More than a dozen states have passed general data privacy laws. Some of these state laws, including the Colorado Privacy Act and, as proposed, regulations under the California Consumer Privacy Act, contain detailed requirements on privacy notifications and on obtaining consent for certain forms of what they call “automated decision-making.”

The first state civil complaint concerning data privacy has already been filed, and state attorneys general have begun bringing actions under state unfair and deceptive acts and practices (UDAP) statutes. At current count, forty-one state UDAP laws provide a private right of action. Class action attorneys have used those UDAP laws, along with state constitutional privacy claims, to bring massive actions against data brokers.

From a European perspective, perhaps the greatest risk to businesses comes from training data. If the training data is personal data (and the GDPR’s definition of personal data is significantly wider than the definitions generally found in US state laws), the GDPR applies. If the data underlying the AI has been processed in a manner that is not GDPR compliant, businesses using those data face significant risks and potential liability.

Counsel for any organization that uses AI or machine learning should be clear about what information has been collected and the basis of such collection, and they should also ensure that any required permissions have been obtained. With the enactment of the European Union’s Artificial Intelligence Act this year, the penalties for getting it wrong may be significant—and would be in addition to the penalties that might already apply under the GDPR.

AI Bias Risks

In addition to privacy issues, bias in training data can negatively impact the safety and accuracy of deployed AI solutions. Common biases found in datasets are biased labeling, over- or underrepresentation of a demographic, and data that reflects a society’s existing or past prejudices. Biased labeling occurs when a programmer labels or classifies data in a way that incorporates her own biases. Data that reflects a society’s existing or past prejudices creates a similar outcome without manual labeling because the datasets come from a society with systemic exclusion, stereotyping, and marginalization of different groups of people. Over- or underrepresentation in data occurs when the use case of the AI solution is broader or more diverse than the data on which it is trained.

To avoid liability, businesses should confirm that the training dataset of any AI they use mirrors the diversity of the intended use case. Because a particular bias in a dataset may otherwise remain unknown until model deployment, pre-deployment testing specifically for bias is crucial. Companies are well advised to implement data governance standards and bias checks at key points, including in connection with dataset collection and selection, algorithm training, pre-deployment testing, and post-deployment monitoring. Risks can be substantially mitigated if anti-bias data governance is made an integral part of creating, training, and monitoring AI and machine learning models.
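
As one illustration of a data-governance bias check of the kind described above, the following Python sketch compares a training dataset’s demographic mix against the intended use case. The group names, counts, and flagging threshold are hypothetical, not a legal standard.

```python
# A minimal sketch of one pre-deployment check: does the training data's
# demographic mix mirror the intended use case? All values are hypothetical.
from collections import Counter

training_groups = ["A"] * 800 + ["B"] * 120 + ["C"] * 80  # stand-in training records
intended_mix = {"A": 0.50, "B": 0.30, "C": 0.20}          # assumed deployment population

counts = Counter(training_groups)
total = sum(counts.values())
for group, expected in intended_mix.items():
    actual = counts[group] / total
    # Illustrative threshold: flag groups at less than half their expected share.
    status = "underrepresented" if actual < 0.5 * expected else "ok"
    print(f"group {group}: {actual:.0%} of training data vs. {expected:.0%} expected -> {status}")
```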

***

This article is based on a CLE program titled “Big Data, Big Problems: The Legal Challenges of AI-Driven Data Analysis” that took place during the ABA Business Law Section’s 2023 Fall Meeting. To learn more about this topic, listen to a recording of the program, free for members.

 

Navigating Sponsor-Led Liquidity Solutions in Today’s Private Equity Market

Recent market trends in the Canadian private equity landscape indicate a growing appetite for sponsor-led liquidity solutions amid challenging market conditions. There has been a notable increase in secondary transactions and alternative exit strategies as sponsors seek to unlock value and provide liquidity to investors. While traditional exit routes such as initial public offerings (IPOs) have become less viable due to market volatility and regulatory uncertainties, sponsor-led solutions offer greater flexibility and efficiency in achieving liquidity objectives. Secondary market transactions—including fund restructurings, tender offers, and strip sales—have emerged as preferred alternatives, enabling sponsors to optimize portfolio performance and generate returns for investors.

These trends underscore the importance of agility and innovation in navigating the evolving private equity (PE) landscape, with sponsors leveraging strategic partnerships and sophisticated financial instruments to maximize value and mitigate risks.

This article explores several potential alternatives that PE sponsors may employ to meet increasing demands to address the liquidity needs of the fund, investors, and portfolio companies.

Fund Restructuring

A fundamental aspect of PE funds is their limited lifespan. However, liquidating PE assets at the expiry of the fund’s term, usually within a ten-year time frame, may not always be an optimal strategy, especially in a challenging macroeconomic environment. In such circumstances, one sponsor-led solution would be to create a continuation fund to acquire one or more portfolio companies from the existing fund. Under this structure, sponsors can retain control over managing the fund’s assets for an extended period until these assets achieve their maximum potential.

Continuation funds typically have a shorter term than the existing fund (e.g., two to six years). Furthermore, the investors of the existing fund generally will have the following options when the continuation fund is established:

  1. selling their interest in the existing fund and receiving a pro rata share of the cash purchase price for the transfer of the assets to the continuation fund,
  2. rolling over their interest into the continuation fund, or
  3. occasionally, a combination of the previous options.

In the rollover option, investors may be allowed to roll over their interest on either a reset or a status quo basis. On a reset basis, the investor participates in the continuation fund on updated economic terms, which could involve lower management fees and higher carried interest rates. In return for the favorable economic terms under the reset basis, the sponsor would seek to lock in its carried interest earned in managing the existing fund to date. On a status quo basis, investors continue to participate in the continuation fund on substantially the same economic terms (i.e., same management fees and carried interest and no crystallization of carried interest on the transferred assets).

Numerous factors must be carefully considered when forming a continuation fund, including tax implications and structural complexities. However, a critical aspect is to address the sponsor’s conflict of interest, ensuring it complies with its fiduciary obligations to investors. To mitigate the conflict of interest risk, the sponsor can undertake measures such as seeking a fairness opinion from an independent valuation expert and providing adequate disclosure to all investors with respect to the terms of the restructuring process.

Historically, continuation funds have not been widely utilized in Canada. However, in recent years, there has been a growing trend towards their adoption, offering investors the flexibility to either withdraw from their investment in the portfolio company or remain invested by rolling into the continuation fund.

Tender Offers

Sponsors may also consider organizing a secondary sale process directly to facilitate liquidity for existing investors, allowing them to either maintain their interest in the existing fund or sell their interest to a secondary buyer.

Compared to a fund restructuring, a tender offer represents a simpler alternative, as it does not involve establishing a continuation fund, freeing the sponsor from the complications of investor negotiations and expenses associated with a continuation fund transaction. Additionally, a tender offer may prove particularly advantageous when the secondary buyer commits to subscribe for a “stapled” interest in another fund being raised by the sponsor.

Strip Sales

In a strip sale, the sponsor partially sells the fund’s portfolio company investments at a price negotiated with the secondary buyer. Buyers in these transactions typically consist of other PE funds that do not intend to acquire a controlling stake in the underlying assets. These sales offer partial liquidity in a well-performing portfolio without surrendering complete control of the underlying assets; however, the existing fund gives up a percentage of the assets’ potential appreciation.

As with continuation funds, sponsors considering strip sales should carefully review conflict of interest issues and the financing arrangements of the portfolio companies.

Preferred Equity Options

Another sponsor-led liquidity strategy is preferred equity, under which a new investor injects additional capital into the fund and, in exchange, receives priority in distributions from the assets held by the fund.

This type of mechanism is typically structured by transferring the assets to a newly established special-purpose vehicle, which issues preferred shares to the new investor. Alternatively, the sponsor may admit the new investor to the fund and issue a preferred interest to such investor.

This strategy offers the benefit of providing liquidity to existing investors while contributing extra capital to the fund. Nonetheless, these transactions may require an amendment to the fund documentation to allow the issuance of preferred equity, typically requiring a higher level of consent from the limited partners.

Net Asset Value (NAV) Loans

Fund finance has traditionally consisted of subscription facilities, in which credit facilities are secured by the uncalled capital commitments of the limited partners. However, NAV loans have recently emerged as an attractive alternative to provide liquidity for funds when market conditions render asset sales difficult.

NAV loans, generally used for later-stage funds, allow sponsors to borrow against the value of their portfolio holdings, offering them flexible and efficient access to extra capital while avoiding potential discounts associated with other secondary market deals. NAV loans appeal to sponsors aiming to accelerate distributions to investors and finance add-on investments without requiring additional capital calls.

In NAV loan arrangements, lenders typically have recourse to the fund’s portfolio investments, with the borrowing base calculated on the net asset value of the fund’s portfolio assets. However, securing NAV loans will generally require a comprehensive due diligence review of the fund, the limited partners, and the portfolio assets. In this regard, reviewing the fund’s organizational documents is crucial to ascertain the feasibility of NAV finance. If the fund’s organizational documents do not expressly contemplate NAV borrowings, sponsors must carefully interpret the relevant borrowing provisions and determine the need for amendments or investor consent.
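
As a rough illustration of the borrowing-base mechanics described above, the following Python sketch applies a hypothetical advance rate to aggregate NAV. The advance rate and asset values are assumptions for illustration; actual facilities layer on eligibility criteria, concentration limits, and valuation mechanics negotiated deal by deal.

```python
# A minimal NAV borrowing-base sketch. The 15% advance rate, asset values,
# and drawn amount are all hypothetical.
portfolio_nav = {"asset_a": 40.0, "asset_b": 25.0, "asset_c": 35.0}  # $ millions
advance_rate = 0.15            # assumed maximum loan-to-value for the facility
outstanding = 12.0             # hypothetical drawn amount, $ millions

nav = sum(portfolio_nav.values())
borrowing_base = nav * advance_rate
print(f"NAV ${nav:.0f}M x {advance_rate:.0%} = borrowing base ${borrowing_base:.1f}M")

if outstanding > borrowing_base:
    print("LTV limit exceeded: prepayment or collateral top-up may be required")
else:
    print(f"headroom: ${borrowing_base - outstanding:.1f}M")
```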

The current economic landscape continues to favor NAV loans, and sponsors should stay attuned to the evolving fund documentation and legal considerations surrounding these types of arrangements.

Risk Management and Mitigation Strategies

Effective risk management is paramount in sponsor-led liquidity transactions to safeguard investor interests and preserve value. Key risk factors include valuation uncertainties, conflicts of interest, regulatory compliance, and market volatility, all of which require proactive mitigation strategies.

Valuation Uncertainties:

Sponsors considering sponsor-led liquidity solutions must grapple with valuation uncertainties, particularly in volatile market conditions. The valuation of portfolio assets can fluctuate significantly, impacting the attractiveness and feasibility of liquidity options. To mitigate this risk, sponsors should employ robust valuation methodologies, leveraging industry best practices and engaging qualified valuation experts to ensure transparency and accuracy in the valuation process. Additionally, sponsors should conduct thorough due diligence on portfolio assets, scrutinizing financial performance, market dynamics, and potential risk factors to inform valuation assessments.

Conflicts of Interest:

Sponsor-led liquidity transactions inherently involve conflicts of interest, as sponsors seek to balance the interests of various stakeholders, including investors, portfolio companies, and themselves. To effectively manage conflicts of interest, sponsors should implement rigorous governance structures and adopt transparent communication practices throughout the transaction process. This may involve establishing independent committees or hiring third-party advisors to oversee the transaction and ensure fairness and impartiality. Moreover, sponsors should adhere to fiduciary duties and regulatory requirements, prioritizing the best interests of investors and maintaining integrity and ethical standards in decision-making.

Financial and Operational Risks:

Sponsor-led liquidity solutions entail inherent financial and operational risks, including potential disruptions to portfolio company operations, exposure to adverse market conditions, and unforeseen liabilities. Sponsors should conduct comprehensive risk assessments and scenario analyses to identify and mitigate potential risks, developing contingency plans and risk mitigation strategies to safeguard against adverse outcomes. This may involve stress-testing liquidity options under various market scenarios, assessing the impact of financial covenants and performance metrics on portfolio assets, and implementing robust risk monitoring and management frameworks to proactively address emerging risks.

Investor Relations and Transparency:

Maintaining strong investor relations and transparency is essential for fostering trust and confidence among stakeholders throughout the sponsor-led liquidity process. Sponsors should communicate openly and transparently with investors, providing timely updates and disclosures regarding transaction developments, risks, and potential outcomes. This includes facilitating meaningful dialogue and engagement with investors, addressing concerns and inquiries promptly, and soliciting feedback to inform decision-making. By prioritizing investor relations and transparency, sponsors can mitigate concerns regarding conflicts of interest and enhance investor confidence in the transaction process.

By addressing these key risk management and mitigation strategies, sponsors can navigate the complexities of sponsor-led liquidity solutions with greater confidence and resilience, effectively managing risks and maximizing value for all stakeholders involved.

Conclusion

Determining the optimal liquidity alternative for a PE fund will depend on various factors, including existing market conditions, interest rates, and the fund’s valuation. Sponsors are encouraged to carefully evaluate the different liquidity options available, considering the fund’s investment strategy and the provisions outlined in its organizational documents and portfolio-level agreements. Moreover, in structuring sponsor-led transactions, sponsors must navigate other critical considerations, including the previously described risk management and mitigation strategies, as well as skillful negotiation of the economic terms of the proposed transaction.

Looking ahead, we anticipate sustained growth in sponsor-led solutions in the United States and Canada, as the need to maximize liquidity in today’s market remains a top priority for sponsors and investors. Through strategic planning and execution, sponsors will be well positioned to achieve optimal results.

Legacy of Former BLS Chair John J. McCann Lives On

John J. McCann, former chair of the American Bar Association Business Law Section (1992–1993), passed away on March 13, 2024, but his work as a lawyer and leader of the Business Law Section (BLS) lives on. Most notably, McCann was instrumental in his leadership role in creating a new publication for the Business Law Section: Business Law Today (BLT). Debuting in 1992 as a print magazine, it evolved during the last thirty years into an electronic magazine and then a dedicated business law website, www.businesslawtoday.org.

“The work of John McCann ensured that our members would receive a steady stream of analytical articles on a wide range of business law practices,” said Lynette Hotchkiss, BLT’s current editor-in-chief. “In many ways, John was a true visionary, and he has left this amazing content resource for business lawyers, students, and academics.”

Over the course of more than forty years in business law, McCann’s legal expertise and knowledge were invaluable to both his clients and his business law colleagues. A graduate of Columbia Law School, John was admitted to the New York, New Jersey, and Florida bars and was a partner in the New York law firm of Donovan, Leisure, Newton & Irvine; in-house counsel to the Prudential Insurance Company; and in-house counsel to Orion Specialty Insurance Company.

His contributions to BLS content allowed him to be recognized as a BLS leader and led to his appointment as an officer, and then as chair of the Section for the 1992–1993 bar year.

“I clearly remember that one evening at a BLS leadership meeting at the Ritz-Carlton in Florida,” said Maury Poscover, former chair of the Business Law Section (1997–1998). “I was in the chair’s suite with Lorrie, my spouse, talking with Herb and Ruthie Wander. John walked in, and he was so excited because he had just been told that he would be nominated as secretary of the Section. The memory stands out because Lorrie and I were equally excited because our son had just told us he was engaged.”

In September 1991, McCann announced to the membership the creation of a new periodical, Business Law Today, that would have its debut during his bar year. “This 64-page magazine will be a significant member benefit,” said McCann. “Business Law Today will enable us to publish many of the excellent submissions that we are unable to publish in The Business Lawyer. It will provide committees with a vehicle for regular content on all the practice areas of business law.”

And John’s Business Law Today has now morphed into a website that features articles, videos, podcasts, and other business law resources. Truly, a significant contribution to the ABA’s Business Law Section and the legal profession.

 

Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies

Imagine receiving a layoff notice because an AI evaluation tool predicted a higher risk of future underperformance due to your age. Or picture repeatedly having job applications rejected, only to find out the cause was an AI tool screening out candidates with a disability. These are just a few examples of real-world AI bias in the realm of hiring and employment, a growing issue that has already resulted in several notable lawsuits. How can companies effectively take advantage of AI in their employment practices while minimizing legal risks? This article discusses employment laws applicable to AI discrimination and provides practical strategies for companies to prevent potential government investigations, lawsuits, fines, class actions, or reputational damage.

A. AI Bias

A recent IBM article defines AI bias as “AI systems that produce biased results that reflect and perpetuate human biases within a society, including historical and current social inequality.”[1] Two major technical factors contribute to AI bias:

  1. Training Data: AI systems develop their decision-making based on training data; when those data overrepresent or underrepresent certain groups, it can cause biased results. A typical example is a facial recognition algorithm trained on data that overrepresents white people, which may result in racial bias against people of color in the form of less accurate facial recognition results. Moreover, mislabeled data, or data that reflect existing inequalities, can compound these issues. Consider an AI recruiting tool trained on a dataset where some applicant qualifications were incorrectly labeled. This could result in the tool rejecting qualified candidates who possess the necessary skills but whose résumés were not accurately understood by the tool (see the sketch after this list).
  2. Programming Errors: AI bias may also arise from coding mistakes, wherein a developer inadvertently or consciously overweights certain factors in algorithmic decision-making due to their own biases. In one example discussed in the IBM piece, “indicators like income or vocabulary might be used by the algorithm to unintentionally discriminate against people of a certain race or gender.”
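
To make the mislabeled-data point in item 1 concrete, the Python sketch below flips 40 percent of “qualified” training labels to “rejected” on synthetic data (the features, noise rate, and group sizes are all hypothetical) and shows the model’s recall for genuinely qualified applicants dropping relative to a model trained on clean labels.

```python
# A minimal sketch, on synthetic data, of how mislabeled training data can
# hurt qualified candidates: flipping some "qualified" labels to "rejected"
# lowers the model's recall for genuinely qualified applicants.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))              # stand-in applicant features
y = (X[:, :3].sum(axis=1) > 0).astype(int)  # 1 = genuinely qualified

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

noisy = y_tr.copy()
flip = (y_tr == 1) & (rng.random(len(y_tr)) < 0.4)  # mislabel 40% of qualified rows
noisy[flip] = 0

clean_model = LogisticRegression().fit(X_tr, y_tr)
noisy_model = LogisticRegression().fit(X_tr, noisy)
for name, m in [("clean labels", clean_model), ("mislabeled", noisy_model)]:
    print(name, "recall on qualified applicants:",
          round(recall_score(y_te, m.predict(X_te)), 2))
```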

B. AI Employment Discrimination

Companies have increasingly used AI tools to screen and analyze résumés and cover letters; scour online platforms and social media networks for potential candidates; and analyze job applicants’ speech and facial expressions in interviews.[2] In addition, companies are using AI to onboard employees, write performance reviews, and monitor employee activities and performance.[3] AI bias can occur in any of the above use cases, throughout every stage of the employment relationship—from hiring to firing and everything in between—and can result in discrimination lawsuits.

In one notable example, the Equal Employment Opportunity Commission (“EEOC”) settled its first AI hiring discrimination lawsuit in August 2023.[4] In Equal Employment Opportunity Commission v. iTutorGroup, Inc.,[5] the EEOC sued three companies providing tutoring services under the “iTutorGroup” brand name (“iTutorGroup”) on the basis that iTutorGroup violated the Age Discrimination in Employment Act of 1967 (“ADEA”) because the AI hiring program it used “automatically reject[ed] female applicants age 55 or older and male applicants age 60 or older,” screening out over 200 applicants because of their age.[6] Subsequently, iTutorGroup entered into a consent decree with the EEOC, under which iTutorGroup agreed to pay $365,000 to the group of automatically rejected applicants, adopt antidiscrimination policies, and conduct training to ensure compliance with equal employment opportunity laws.

The ongoing Mobley v. Workday, Inc.[7] litigation, one of the first major class-action lawsuits in the United States alleging discrimination through algorithmic bias in applicant screening tools, presents another warning. The plaintiff, an African-American man over the age of forty with a disability, claims that Workday provides companies with algorithm-based applicant screening software that unlawfully discriminated against job applicants based on protected class characteristics of race, age, and disability and thus violated Title VII of the Civil Rights Act of 1964, the Civil Rights Act of 1866,[8] the ADEA, and the ADA Amendments Act of 2008 (“ADAAA”). On January 19, 2024, the court granted Workday’s motion to dismiss the case, with leave for the plaintiff to amend the complaint.[9] On February 21, 2024, the plaintiff filed an amended complaint outlining further details to support his claims.[10]

With the foresight to prevent the kind of lawsuits discussed above, Amazon took proactive measures in 2018, ceasing use of an AI hiring algorithm after finding that it discriminated against women applying for technical jobs. Having been trained on a dataset composed mostly of men’s résumés, the tool preferred applicants who used words more commonly used by men, such as “executed” or “captured,” among other issues.[11]

These cases, along with Amazon’s decision to scrap its biased AI hiring tool, highlight the growing concern about algorithmic bias in recruitment. Given this evolving landscape, employers must carefully examine all applicable federal, state, and local laws, as well as EEOC guidelines, to ensure fair and unbiased hiring practices.

C. Governing Law

1. Federal Law

There is currently no federal law specifically targeting the use of AI in the employment context. However, most employers’ use of AI tools in their employment practices would be subject to federal laws prohibiting employment discrimination based on race, color, ethnicity, sex (including gender, sexual orientation, and gender identity), age, national origin, religion, disability, pregnancy, military services, and genetic information.

Below is a list of primary federal laws a company must consider when evaluating AI-based employment evaluation tools. The most highly litigated one is Title VII, which applies to private employers that employ fifteen or more employees.

  1. Title VII of the Civil Rights Act of 1964 (“Title VII”)[12]: prohibits employment discrimination based on race, color, religion, sex (including gender, pregnancy, sexual orientation, and gender identity), or national origin.
  2. Section 1981 of the Civil Rights Act of 1866[13]: prohibits discrimination based on race, color, and ethnicity.
  3. The Equal Pay Act[14]: prohibits sex-based wage discrimination.
  4. The Age Discrimination in Employment Act[15]: prohibits discrimination based on age (forty and over).
  5. The Immigration Reform and Control Act[16]: prohibits discrimination based on citizenship and national origin.
  6. Title I and Title V of the Americans with Disabilities Act (“ADA”)[17] (as amended by the Civil Rights Act of 1991 and the ADAAA): prohibit employment discrimination against qualified individuals based on disability, as well as against those regarded as having a disability.
  7. The Pregnant Workers Fairness Act[18]: prohibits discrimination against job applicants or employees because of their need for a pregnancy-related accommodation.
  8. The Uniformed Services Employment and Reemployment Rights Act[19]: prohibits discrimination against past and current members of the uniformed services, as well as applicants to the uniformed services.
  9. The Genetic Information Nondiscrimination Act[20]: prohibits discrimination in employment and health insurance based on genetic information.

2. State and Local Law

To address concerns over the use of AI in employment, states and local governments have become more proactive. Three notable examples of legislation that have been enacted, discussed below, demonstrate the growing trend among policymakers to regulate AI usage in employment practices, underscoring the increasing importance placed on ensuring fairness and accountability in AI-driven decision-making.

i. Illinois

In 2020, Illinois adopted the Artificial Intelligence Video Interview Act (820 ILCS 42/1), which imposes several requirements on employers that conduct video interviews and use AI analysis of those videos in their evaluation process. These requirements include (i) notifying applicants of the AI’s role; (ii) providing applicants with an explanation of the AI process and the types of characteristics used to evaluate applicants; (iii) obtaining the applicants’ consent to such AI use; (iv) sharing videos only with those equipped with the expertise or technology to evaluate the applicant’s fitness for a position; and (v) destroying videos within thirty days of a request by the applicant.

ii. Maryland

While not explicitly targeting AI, Maryland’s 2020 facial recognition technology law prohibits an employer from using certain facial recognition services—many of which use AI processes—during job interviews unless the applicant consents.

iii. New York City

New York City began enforcing its law on Automated Employment Decision Tools (“AEDT Law”) on July 5, 2023. Under this law, passed in 2021, employers and employment agencies are prohibited from using an automated employment decision tool (“AEDT”), which includes AI, to assess candidates for hiring or promotion in New York City unless an independent auditor completes a bias audit of the AEDT before its use and candidates who are New York City residents receive notice that the employer or employment agency uses an AEDT. A bias audit must include “calculations of selection or scoring rates and the impact ratio across sex categories, race/ethnicity categories, and intersectional categories.”[21] For each violation, offenders could face penalties ranging from $375 to $1,500.

3. EEOC Guidance

The EEOC enforces federal laws prohibiting discrimination in hiring, firing, promotions, training, wages, benefits, and harassment. Employers with at least fifteen employees, labor unions, and employment agencies are subject to EEOC review. The EEOC has the authority to investigate discrimination charges against employers and, if necessary, file a lawsuit. Therefore, even though EEOC guidance is not legally binding, it proves valuable for companies seeking to avoid potential investigations or lawsuits when using AI tools.

i. EEOC 2022 Guidance on the ADA and AI

In May 2022, the EEOC issued technical guidance addressing how the ADA applies to the use of AI to assess job applicants and employees.[22] The guidance outlines several common ways that utilizing AI tools can violate the ADA, including, for example, relying on an algorithmic decision-making tool that intentionally or unintentionally excludes an individual with a disability, failing to provide necessary “reasonable accommodation,” or violating the ADA’s restrictions on disability-related inquiries and medical examinations.

Employers can implement practices recommended by the EEOC to effectively handle the risk associated with utilizing AI tools, such as the following:

  1. Disclose in advance the factors to be measured with the AI tool, such as knowledge, skill, ability, education, experience, quality, or trait, as well as how testing will be conducted and what will be required.
  2. Ask employees and job applicants whether they require a reasonable accommodation to use the tool. If the disability is not apparent, the employer may ask for medical documentation when a reasonable accommodation is requested.
  3. Once the claimed disability is confirmed, provide a reasonable accommodation, including an alternative testing format.
  4. “Examples of reasonable accommodations may include specialized equipment, alternative tests or testing formats, permission to work in a quiet setting, and exceptions to workplace policies.”[23]

ii. EEOC 2023 Guidance on Title VII and AI

In May 2023, the EEOC issued new technical guidance on how to measure adverse impact when AI tools are used for employment selection, titled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.”[24]

Under this guidance, if the selection rate of individuals of a particular race, color, religion, sex, or national origin, or a “particular combination of such characteristics” (e.g., a combination of race and sex), is less than 80 percent of the rate of the non-protected group, then the selection process could be found to have a disparate impact in violation of Title VII, unless the employer can show that such use is “job related and consistent with business necessity” under Title VII.
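
The comparison the guidance describes is straightforward to compute. Below is a minimal Python sketch: the group names and applicant counts are hypothetical, and it benchmarks each group against the highest selection rate, in the spirit of the four-fifths rule and the NYC-style impact-ratio audits discussed earlier.

```python
# A minimal four-fifths (80 percent) rule sketch. Counts are hypothetical.
selected = {"group_1": 48, "group_2": 12}    # candidates the tool advanced
applicants = {"group_1": 80, "group_2": 40}  # candidates assessed by the tool

rates = {g: selected[g] / applicants[g] for g in applicants}
benchmark = max(rates.values())              # highest group's selection rate

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```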

If the AI tool is found to have an adverse impact under Title VII, the employer can take measures to reduce the impact or select a different tool. Failure to adopt a less discriminatory algorithm that was considered during the design process may subject the employer to liability.

Under both EEOC guidance documents discussed here, an employer may be held liable for the actions or inactions of an outside vendor who designs or administers an algorithmic decision-making tool on its behalf, and the employer cannot rely on the vendor’s assessment of the tool’s disparate impact.

D. Legal Strategies

Considering the applicable laws and EEOC guidance, it would be prudent for a company to consider the following strategies to reduce risk of AI bias in employment decisions:

  1. Prior to signing a contract with a vendor who designs or implements an AI-based employment tool, as part of the vendor due diligence process, a company’s legal team should work closely with its IT and HR teams to review and evaluate the vendor’s tools, including reviewing assessment reports and historical selection rates, based on the applicable laws and EEOC guidelines.

    In addition, any employers who are subject to New York City’s AEDT Law should have an independent auditor conduct a bias audit before utilizing the AI tool.

  2. To incentivize a vendor to deliver a high-quality, legally compliant AI tool while mitigating risks, carefully negotiate and draft the indemnity, warranty, liability cap carveouts, and other risk allocation provisions of the contract with the vendor. These provisions should obligate the vendor to bear liability for any issues arising from the use of the AI tool in employment contexts caused by the vendor’s fault.

  3. Prepare detailed internal documents clearly explaining the AI tool’s operation and selection criteria, based on the review described in item 1, to protect the company in case of government investigations or lawsuits.[25]

  4. The legal team should work closely with HR and the IT team to conduct bias audits on a regular basis.

  5. If an audit reveals the tool has disparate impacts at any point, the company should consider working with the vendor to implement bias-mitigating techniques, such as modifying the AI algorithms, adding training data for underrepresented groups, or selecting a different tool, unless the legal counsel determines that the use of this tool is “job related and consistent with business necessity.”

  6. Provide advance notice to candidates or employees who will be impacted by AI tools in accordance with applicable laws and EEOC guidance.

  7. Educate HR and IT teams regarding AI discrimination.

  8. Keep track of legal developments in this area, especially if your company has offices nationwide.

Faced with the looming threats of EEOC enforcement actions, class action lawsuits, and legislative uncertainty, employers may understandably feel apprehensive about charting a course that includes using AI in hiring or HR. However, consulting with attorneys to understand legal requirements and potential risks associated with AI employment bias—along with adopting proactive measures outlined in this article, staying informed about legal developments, and fostering collaboration across legal, HR, and IT teams—can help organizations effectively mitigate risks and confidently navigate the intricate landscape of AI employment bias.


  1. IBM Data and AI Team, “Shedding light on AI bias with real world examples,” IBM, October 16, 2023.

  2. Keith MacKenzie, “How is AI used in human resources? 7 ways it helps HR,” Workable Technology, December 2023.

  3. Aaron Mok, “10 ways artificial intelligence is changing the workplace, from writing performance reviews to making the 4-day workweek possible,” Business Insider, July 27, 2023.

  4. Annelise Gilbert, “EEOC Settles First-of-Its-Kind AI Bias in Hiring Lawsuit (1),” Bloomberg Law, August 10, 2023.

  5. Equal Employment Opportunity Commission v. iTutorGroup, Inc., No. 1:22-cv-2565-PKC-PK (E.D.N.Y. filed May 5, 2022) (Aug. 9, 2023, joint notice of settlement and request for approval and execution of consent decree).

  6. “iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit,” U.S. Equal Employment Opportunity Commission, September 11, 2023.

  7. No. 3:23-cv-00770-RFL (N.D. Cal. filed Feb. 1, 2023).

  8. 42 U.S.C. § 1981.

  9. Joseph O’Keefe, Evandro Gigante, and Hannah Morris, “Judge Grants Workday, Inc.’s Motion to Dismiss in Groundbreaking AI Class Action Lawsuit Mobley v. Workday,” Law and the Workplace (blog), Proskauer, January 24, 2024.

  10. Daniel Wiessner, “Workday accused of facilitating widespread bias in novel AI lawsuit,” Reuters, February 21, 2024.

  11. Rachel Goodman, “Why Amazon’s Automated Hiring Tool Discriminated Against Women,” American Civil Liberties Union, October 12, 2018.

  12. 42 U.S.C. § 2000e.

  13. 42 U.S.C. § 1981.

  14. 29 U.S.C. § 206(d).

  15. 29 U.S.C. §§ 621–634.

  16. Pub. L. 99-603, 100 Stat. 3359 (1986), codified as amended in scattered sections of Title 8 of the United States Code.

  17. 42 U.S.C. §§ 12101–12113.

  18. 42 U.S.C. §§ 2000gg–2000gg-6.

  19. 38 U.S.C. § 4311.

  20. 42 U.S.C. § 2000ff.

  21. “Automated Employment Decision Tools: Frequently Asked Questions,” NYC Department of Consumer and Worker Protection, June 6, 2023.

  22. “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” U.S. Equal Employment Opportunity Commission, May 12, 2022.

  23. Id.

  24. “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” U.S. Equal Employment Opportunity Commission, May 18, 2023.

  25. See Lena Kempe, “AI Risk Mitigation and Legal Strategies Series No. 5: Explainable AI,” LK Law Firm, January 11, 2024.

True Lender and Rate Exportation: Reviewing the Major 2023 Legislation

In recent years, several state legislatures have enacted consumer credit laws designed to regulate FinTech companies operating through partnerships with depository institutions, or more generally to limit the interest rates charged by those depository institutions. 2023 was no exception, and this growing trend should be watched closely by state-chartered depository institutions and financial services companies.

Section 521 of the Depository Institutions Deregulation and Monetary Control Act (“DIDMCA”) authorizes FDIC-insured state-chartered banks to use both the most favored lender authority and the federal rate exportation authority enjoyed by national banks under 12 U.S.C. § 85 by preempting state law. DIDMCA Section 521 allows states to “opt out” of the federal preemption; if a state does not opt out by statute, constitutional amendment, or referendum, then the state’s interest rate limitations are preempted by federal law. Iowa and Puerto Rico have already opted out, with Colorado joining them in July 2024 and opt-out legislation recently introduced in the District of Columbia. Meanwhile, many other states are increasing their scrutiny and adopting laws that impose regulations on bank–FinTech partnerships rather than on all state-chartered depository institutions.

On June 5, 2023, Colorado House Bill 23-1229 was signed, clarifying that consumer loans made in Colorado are excluded from the provisions of DIDMCA Section 521. HB 23-1229 amends the Colorado Uniform Consumer Credit Code to change the terms and finance charges that a lender may impose in consumer credit transactions. This amendment requires that out-of-state banks follow Colorado’s interest rate and fee restrictions when lending to Colorado residents in Colorado.

The Colorado opt-out will impact state-chartered banks issuing loans to Colorado residents, including those that have programs with FinTech companies. The opt-out arises against a long backdrop of enforcement activity, notably the Avant-Marlette settlement, which set forth the state’s expectations for FinTech-bank programs. In that case, Colorado’s Consumer Credit Administrator directly challenged loans made by an out-of-state bank in partnership with multiple FinTech companies. The state argued that the federal interest rate preemption did not apply because the bank was not the true lender of the loans and the partner company could not stand in the bank’s shoes for loans sold by the originating bank. The case was settled in an agreement that put into place certain operational requirements for the programs, but it did not apply Colorado’s usury limits to the originating bank given that, at the time, Colorado had not opted out of Section 521.

State legislatures have also sought to enact laws regulating partnerships between depository institutions and FinTech companies. On June 29, 2023, Connecticut enacted legislation to join states that had previously adopted such laws, including Maine, New Mexico, and Illinois. These laws codify a predominant economic interest test and create other tests seeking to determine when the FinTech—not the originating depository institution—should be viewed as the “true lender,” in which case the depository institution’s federal rate preemption is disregarded. The laws also impose varying interest rate limits and rate calculation methodologies.

Similar legislation in Minnesota, Minn. S.F. 2744, was enacted on May 24, 2023, and became effective January 1, 2024. The law caps the annual percentage rate (“APR”) on consumer small loans and consumer short-term loans at a 50 percent all-in APR. A consumer small loan is defined as a consumer-purpose unsecured loan for an amount equal to or less than $350 that must be repaid in a single installment. A consumer short-term loan is a loan with a principal amount, or an advance on a credit limit, of $1,300 or less that requires a minimum payment of more than 25 percent of the principal balance or credit advance within sixty days. Minn. S.F. 2744 provides that, if the all-in APR exceeds 36 percent, the lender must perform an ability-to-pay analysis, reviewing evidence of the borrower’s net income, major financial obligations, and living expenses. Like statutes in other states, it also implements the predominant economic interest test and other true lender tests.
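
For orientation only, the following Python sketch encodes the Minn. S.F. 2744 thresholds described above. It is a simplification under stated assumptions; the statutory definitions contain additional elements, and the example loan is hypothetical.

```python
# A simplified sketch of the Minn. S.F. 2744 thresholds described above.
# Not legal advice; the statute's definitions contain additional elements.
def classify_loan(principal, single_installment, min_payment_pct_60d):
    if principal <= 350 and single_installment:
        return "consumer small loan"
    if principal <= 1300 and min_payment_pct_60d > 0.25:
        return "consumer short-term loan"
    return "outside these definitions"

def check_apr(all_in_apr):
    if all_in_apr > 0.50:
        return "exceeds the 50% all-in APR cap"
    if all_in_apr > 0.36:
        return "permitted, but an ability-to-pay analysis is required"
    return "permitted"

# Hypothetical loan: $1,000 principal, 30% minimum payment due within 60 days.
loan_type = classify_loan(principal=1000, single_installment=False, min_payment_pct_60d=0.30)
print(loan_type, "->", check_apr(all_in_apr=0.42))
```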

The tests utilized in these laws vary by state, but they often provide that a nonbank entity should be viewed as the lender if any of the following are true:

  1. the person holds, acquires, or maintains, directly or indirectly, the predominant economic interest in the consumer credit product at issue;
  2. the person markets, brokers, arranges, or facilitates the consumer credit product and holds the right, requirement, or right of first refusal to purchase the consumer credit products; or
  3. the totality of the circumstances indicate that such person is the lender and the transaction is structured to evade the applicable state law requirements.

The laws typically establish several factors to consider under the totality of the circumstances test. The factors may vary from state to state but generally include a review of whether the nonbank entity is:

  1. indemnifying, insuring, or protecting an exempt person for any costs or risks related to the consumer credit product;
  2. predominantly designing, controlling, or operating the lending program; or
  3. purporting to act as an agent, service provider, or in another capacity for an exempt person (typically any depository institution) in the state while acting directly as a lender in another state.

To date, there has been little public enforcement activity regarding these laws, making it hard for commentators to assess their impact on partnerships between banks and non-banks. State legislative trends indicate that more states will continue to consider legislation regulating such programs or opting out of DIDMCA, leading to a more fragmented landscape in the United States compared to the consistency seen in other countries. DIDMCA opt-outs raise interesting questions regarding how an opt-out will actually impact banks located out of state and whether it will reach such loans. Similarly, the interplay of federal rate exportation authority with laws seeking to curtail that exportation absent a DIDMCA opt-out raises interesting enforceability questions that may lead to future litigation should the trend continue.

***

This article is related to a CLE program titled “True Lender and Rate Exportation: Analyzing the Impact of State Laws Restricting Bank Originated Loans” that was presented during the ABA Business Law Section’s 2023 Fall Meeting. To learn more about this topic, listen to a recording of the program, free for members.


Summary: Updating Disclosure Schedules: Market Trends

Last updated on March 1, 2025.

This is a summary of the Hotshot course “Updating Disclosure Schedules: Market Trends,” in which ABA M&A Committee members John F. Clifford from McMillan LLP and Ann Beth Stebbins from Skadden, Arps, Slate, Meagher & Flom LLP & Affiliates discuss market trends for disclosure schedules updates provisions, drawing on data from the ABA M&A Committee’s Private Target Deal Points Study. View the course here.


Updating Disclosure Schedules: Market Trends

  • The ABA M&A Committee’s 2023 Private Target Deal Points Study looked at how often parties allow updates to a seller’s disclosure schedules between signing and closing.
    • The study found that in 2022 and the first quarter of 2023:
      • Updates were expressly permitted or required in 14% of deals;
      • Updates were expressly prohibited in 5% of deals; and
      • The remaining 81% of deals were silent on the point.
  • Over the years, the percentage of deals allowing updates has consistently been less than half:
    • 14% in 2022 to 2023;
    • 24% in 2020 to 2021;
    • 31% in 2018 to 2019; and
    • 28% in 2016 to 2017.
  • Of the deals that permitted or required updates in the latest study, there was a decrease in those allowing updates for information arising both pre- and post-signing, from 62% in the 2021 study to 60% in the 2023 study.
  • The buyer had a right to close and seek indemnification for updated matters in 67% of the deals that permitted or required updates.
    • This marks a significant decrease from the last study, where it was 90%.
  • The buyer’s right to terminate the agreement was not affected by updated disclosure in 80% of the deals in the 2023 study.
    • In 20% of the deals, the buyer could terminate because of the disclosure, but only within a specific time period.

The rest of the video includes interviews with ABA M&A Committee members John F. Clifford from McMillan LLP and Ann Beth Stebbins from Skadden, Arps, Slate, Meagher & Flom LLP & Affiliates.

Download a copy of this summary here.

Summary: Updating Disclosure Schedules: Sample Provisions

This is a summary of the Hotshot course “Updating Disclosure Schedules: Sample Provisions,” a look at two disclosure schedules updates provisions. View the course here.


Negotiating a Disclosure Schedule Updates Provision

  • When negotiating a disclosure schedules updates provision, parties typically focus on:
    • Whether the seller is obligated or merely permitted to make updates;
    • The scope of permitted updates; and
    • How the updates affect other rights and obligations of the parties.

Sample Seller-Friendly Disclosure Schedules Updates Provision

During the Pre-Closing Period, Seller shall have the right (but not the obligation) to update the Disclosure Schedules to the extent information contained therein or any representation or warranty of Seller becomes untrue, incomplete or inaccurate after the Agreement Date due to events or circumstances after the date hereof or facts of which the Seller becomes aware after the date hereof. [Buyer shall have the right to terminate this Agreement pursuant to Section [_] within five (5) days after receipt of such update if the updated portion or portions of the Disclosure Schedules disclose any facts and circumstances that would cause a failure of the Closing Condition set forth in Section [_]; provided, however, that if (a) Buyer is not entitled to, or does not timely exercise, such right to terminate this Agreement, or (b) Buyer consummates the Closing,] Buyer shall, in any such case, be deemed to have accepted such updated Disclosure Schedules, any such update shall be deemed to have amended the Disclosure Schedules, to have qualified the relevant representations and warranties contained in Article [_], and to have cured any breach of any representation or warranty that otherwise might have existed hereunder by reason of such event or circumstance. Nothing in this Agreement, including this Section [_], shall be interpreted or construed to imply that Seller is making any representation or warranty as of any date other than as otherwise set forth herein.

[Emphasis added.]

  • This provision says that the seller has the right, not the obligation, to update the disclosure schedules. This is good for the seller because when updates are required:
    • An inadvertent failure to disclose new facts could result in an indemnification claim for breach of the seller’s covenant to update the disclosure schedules.
    • Or the buyer could claim that the closing conditions weren’t satisfied because the seller didn’t comply with its obligation to perform under the covenant.
  • The next part of the first sentence allows any updates needed to complete or correct any information in the disclosure schedules or reps that becomes untrue, incomplete, or inaccurate because of “events or circumstances” or “facts of which the Seller becomes aware”—in each case after the date of the agreement.
    • This sets up a broad scope for updates, including anything that happens or is learned after signing.
    • The only way this provision could be more seller-friendly is if the seller were also allowed to include information that was known, or should have been known, prior to signing.
  • Most of the rest of the provision covers the impact of the disclosure schedule updates on the buyer’s rights, and it’s also beneficial to the seller because the buyer’s only recourse in this version of the provision is to terminate the agreement.
    • If the buyer completes the acquisition, it’s deemed to have accepted the new disclosure and can’t then bring an indemnity claim relating to the new facts.

Sample Buyer-Friendly Disclosure Schedules Updates Provision

From time to time prior to the Closing, Seller shall promptly supplement or amend the Disclosure Schedules hereto with respect to any matter arising after the date hereof, which, if existing, occurring or known at the date of this Agreement, would or should have been required to be set forth or described in the Disclosure Schedules (each a “Schedules Supplement”). Any disclosure in any such Schedules Supplement shall not be deemed to have cured any inaccuracy in or breach of any representation or warranty contained in this Agreement, including for purposes of the indemnification or termination rights contained in this Agreement or of determining whether or not the conditions set forth in Section [_] have been satisfied.

[Emphasis added.]

  • In this example, the seller is obligated to promptly update the disclosure schedules when it becomes aware of new facts that would have been required to be disclosed if they had arisen prior to signing.
    • This ensures that the buyer has complete information at closing.
    • Most sellers agree to this formulation because it’s a pretty convincing argument that the buyer has a right to know all new facts or events that could impact the business prior to closing.
  • The provision goes on to limit updates to matters that arise after signing that would or should have been disclosed if they had occurred prior to signing.
    • This is different from the seller-friendly version because here the seller isn’t allowed to update the disclosure schedules with facts that arose before signing.
    • Parties often agree to limit the scope of updates to new things that happen post-signing.
      • Drafting the provision this way removes any incentive for a seller to wait to disclose material information until after signing, when the buyer could be obligated to close the deal.
  • Finally, in this example updates don’t affect the buyer’s rights under the agreement.
    • So the buyer has the option not to close if the closing conditions aren’t satisfied.
    • If the acquisition does close, the buyer can still bring an indemnification claim based on the disclosure as it stood at signing.

The rest of the video includes interviews with ABA M&A Committee members John F. Clifford from McMillan LLP and Ann Beth Stebbins from Skadden, Arps, Slate, Meagher & Flom LLP & Affiliates.

Download a copy of this summary here.

Summary: Updating Disclosure Schedules

This is a summary of the Hotshot course “Updating Disclosure Schedules,” an introduction to disclosure schedules updates provisions, including why parties include a right or obligation to update disclosure schedules, the scope of permitted updates, and the updates’ effect on other rights and obligations of the parties under the acquisition agreement. View the course here.


Why Update Disclosure Schedules?

  • The disclosure schedules to an M&A agreement, together with the reps and warranties they modify, provide a snapshot of the seller and the target as of the signing date.
  • If the deal doesn’t sign and close simultaneously, it’s possible that the disclosure schedules and the reps and warranties could be inaccurate or incomplete when the parties are ready to close.
    • This could be due to:
      • New facts discovered between signing and closing that the parties weren’t aware of before signing; or
      • New developments, such as the target getting sued by a customer after the agreement is signed.
    • Parties sometimes deal with this possibility by allowing, or even requiring, updates to the disclosure schedules.
  • When parties agree to allow or require updates, they add a disclosure schedules updates provision in the interim covenants section of the agreement.
  • Updating disclosure schedules is good for sellers, because the more accurate their disclosure is at closing, the lower the risk of a post-closing indemnification claim.
  • And, in principle, buyers also like disclosure schedules to be as accurate as possible before closing so that they can negotiate changes to the deal or even walk away if new and adverse facts are disclosed.
  • But parties don’t often include the right to update the disclosure schedules because they can’t agree on:
    • How the updated disclosure affects the rest of the agreement; or
    • The scope of the permitted or required updates.
  • When updates are not allowed, the parties are often taking the position that they’d rather not speculate on outcomes that aren’t certain when the agreement is signed. Instead they agree to deal with any issues as they arise.

Scope of Updates and Impact on the Acquisition Agreement

  • Several areas of the acquisition agreement can be affected when parties allow updates to the disclosure schedules.
    • The first is the closing conditions.
      • An update to the disclosure schedules is essentially an amendment to the seller’s reps and warranties.
      • Most M&A agreements include a condition that the seller’s reps and warranties have to be true and correct or true and correct in all material respects as of the closing. So if something happens after the deal is signed that would make the seller’s reps and warranties incorrect at closing, the buyer doesn’t have to close.
      • But if disclosure schedule updates are allowed and the seller makes updates to reflect the new development, the buyer could be required to close even if the new disclosure materially amends the reps the seller made at signing.
    • This dynamic leads parties to think carefully about another aspect of the agreement, the buyer’s termination rights. For example:
      • Should the buyer have the right to terminate the acquisition agreement based on the new disclosure, especially when the buyer can no longer rely on the closing conditions to get out of the deal?
      • What if the new disclosure is minor and doesn’t materially change the deal?
    • A third issue the parties think about is the seller’s liability for a breach of the reps and warranties as they existed at signing. For example:
      • Does an update to the disclosure schedules cure that breach and relieve the seller from its indemnification obligations for any resulting damages?

Scope of Updates
  • If the parties are able to agree on those issues, they’ll include a provision that typically lays out:
    • Whether the seller is required to update the schedules or if updates are simply permitted;
    • The scope of updates that can be made; and
    • How an update affects the rest of the agreement, like the closing conditions, termination rights, and indemnification provisions.
  • Defining the parameters for updates can be tricky. The parties consider:
    • The type of rep or warranty;
    • When the new information arose; and
    • The materiality of the new disclosure.

Type of Rep or Warranty
  • Buyers might be more willing to allow updates to affirmative, rather than negative, disclosures.
    • For example, both parties will want any new material contracts to be disclosed as part of the seller’s rep regarding material contracts.
      • The buyer would expect this kind of update, since the seller agrees to continue operating the business in the ordinary course between signing and closing.
    • But the buyer may be less willing to allow an update to a negative rep or warranty, like the “no liabilities” or “no litigation” reps.
      • In those cases, the underlying facts are more likely to have a negative impact on the value of the business.
      • And these types of updates usually relate to matters outside the ordinary course, so allowing them could expose the buyer to an unpredictable amount of additional risk.

When the New Info Arose
  • The parties also may limit new disclosure based on when the underlying facts arose.
  • A seller has a pretty compelling case for updating the disclosure schedules to include things that happen after signing.
    • But should they also be able to add facts that were known or that should have been known before signing?
    • What if those facts weren’t disclosed at signing because of an honest mistake or because the seller was genuinely unaware?
  • On the other hand, if the seller is allowed to update the disclosure schedules with information that arose prior to signing, what’s preventing them from withholding material information at signing that would otherwise affect the deal?

Materiality
  • The materiality of new information may also affect whether or not the seller can include it in a disclosure schedules update.
  • Buyers are typically willing to allow updates relating to facts that arise in the ordinary course of business and that don’t affect the economics of the deal.
    • But they often want to prohibit updates for new circumstances that are financially or operationally material to the business.
    • Depending on how the buyer’s closing conditions and termination rights are drafted, the buyer could be forced to close despite the new material disclosure.

Rep and Warranty Insurance
  • One other thing to consider is that if a deal has rep and warranty insurance, the policy is typically issued when the acquisition agreement is signed.
    • The coverage will not extend to newly disclosed facts unless the insurer expressly agrees to an extension of the policy.
    • So, if updates to the disclosure schedules are permitted or required, there may be a gap in the insurance coverage.

The rest of the video includes interviews with ABA M&A Committee members John F. Clifford from McMillan LLP and Ann Beth Stebbins from Skadden, Arps, Slate, Meagher & Flom LLP & Affiliates.

Download a copy of this summary here.

The Duty of Supervision in the Age of Generative AI: Urgent Mandates for a Public Company’s Board of Directors and Its Executive and Legal Team

This article is related to a Showcase CLE program titled “AI Is Coming for You: The Practical and Ethical Implications of How Artificial Intelligence Is Changing the Practice of Law” that took place at the American Bar Association Business Law Section’s 2024 Spring Meeting. All Showcase CLE programs were recorded live and will be available for on-demand credit, free for Business Law Section members.

“This article highlights for busy board members and C-suite executives the dangers of not paying attention to Generative AI. The risk to publicly held companies from non-supervised implementation of Generative AI is significant. The authors make a solid case best practices are warranted to protect the corporation and the decision-makers.”—Kyung S. Lee, Shannon & Lee LLP, program co-chair

“Although at first glance this thoughtful article seems only tangentially related to the ethical use of Generative AI by lawyers, it actually provides an excellent framework for tackling the question of where, when, and how to use Generative AI capabilities inside the law firm or law department. Like their clients, a law firm or law department needs to consider many of the same issues. Does a potential use create a risk of data exposure? Could potential biases contained in underlying training data create biased outputs from the proposed application? How likely are ‘hallucinations,’ and what damage can they cause? Suggested solutions for public company boards also apply to legal organizations. Education, bringing in experts, and creating systems and teams to vet uses all play their role in making sure legal teams use Generative AI responsibly. The article provides a useful roadmap to protecting legal organizations from the risks of Generative AI deployment.”—Warren Agin, program panelist


Introduction

Artificial intelligence is capturing the imagination of many in the business world, and one real-world message is unmistakable:

Any director or executive officer (CEO, CFO, CLO/GC, CTO, and others) of a publicly held company who ignores the risks, and fails to capitalize on the benefits, of Generative AI does so at his or her individual peril because of the risk of personal liability for failing to properly manage a prudent GenAI strategy.

Generative artificial intelligence, or GenAI,[1] is a technological marvel that is quickly transforming our lives and revolutionizing the way we communicate, learn, and make personal and professional decisions. Due to GenAI-powered technology and smart devices, all industries—ranging from the healthcare, transportation, energy, legal, and financial services industries to the education, technology, and entertainment industries—are experiencing almost exponential improvements. The use cases for GenAI seem boundless, balancing the opportunity to improve society against the risk of real devastation if GenAI operates without meaningful regulation or guardrails. Nowhere are the stakes higher than in a specific type of highly regulated organization that is accountable to a myriad of stakeholders: U.S. publicly held companies.

Insofar as publicly held companies can be both (i) consumers of GenAI technology and (ii) developers and suppliers of GenAI technology, there are countless use cases, scenarios, and applications for a publicly held company. Common ways in which GenAI is used include data analysis and insights, customer service and support, financial analysis and fraud detection, automation and quality control in production and operations management, and marketing and sales.

Even though the specific applications of GenAI within a publicly held company depend on that company’s industry, goals, and challenges, every board of directors and in-house legal team managing a publicly held company must be keenly attuned to the corporate and securities litigation risks posed by GenAI. Indeed, as GenAI technologies become increasingly important for corporate success, board oversight of GenAI risks and risk mitigation is vital, extending beyond traditional corporate governance. Any publicly held company that does not establish policies and procedures regarding its GenAI use is setting itself up for potential litigation by stockholders as well as vendors, customers, regulatory agencies, and other third parties.

This article focuses on the principle that GenAI policies and procedures at a publicly held company must come from its board of directors, which, in conjunction with the executive team, must take a proactive and informed approach to navigate the opportunities and risks associated with GenAI, consistent with the board’s fiduciary duties.

Legal Background: The Duty of Supervision

Corporate governance principles require directors to manage corporations consistent with their fiduciary duty to act in the best interests of shareholders. The board’s fiduciary duty comprises three specific obligations: the duty of care,[2] the duty of loyalty,[3] and the more recently established derivative of the duty of care, the duty of supervision or oversight.[4]

The duty of supervision stems from the Caremark case, where the Delaware Court of Chancery expressed the view that the board has “a duty to attempt in good faith to assure that a corporate information and reporting system, which the board concludes is adequate, exists, and that failure to do so under some circumstances may, in theory at least, render a director liable for losses caused by non-compliance with applicable legal standards.”[5] The Caremark court later explained that liability for a “lack of good faith” depends on whether there was “a sustained or systematic failure of the board to exercise oversight — such as an utter failure to attempt to assure a reasonable information and reporting system exists . . . .”[6] In Stone v. Ritter, the Delaware Supreme Court explicitly approved the Caremark duty of oversight standard, holding that director oversight liability is conditioned upon: “(a) the directors utterly failed to implement any reporting or information system or controls; or (b) having implemented such a system or controls, [the directors] consciously failed to monitor or oversee its operations thus disabling themselves from being informed of risks or problems requiring their attention.”[7]

Thus, the first prong of the duty of supervision requires the board of directors to assure itself “that the corporation’s information and reporting system is in concept and design adequate to assure the board that appropriate information will come to its attention in a timely manner as a matter of ordinary operations.”[8] Even if the board meets the standard in the first prong, it can still violate the duty of supervision through a “lack of good faith as evidenced by sustained or systematic failure of a director to exercise reasonable oversight.”[9]

The principles in Caremark were clarified further in a stockholder derivative suit against The Boeing Company. In that now-classic case, the Delaware Court of Chancery recognized an enhanced duty of supervision where the nature of a corporation’s business presents unique or extraordinary risk. The Court permitted a Caremark claim to proceed against Boeing’s board of directors, relying in part on a former director’s acknowledgment of the board’s subpar oversight of safety measures. Safety, the Court found, was a “mission-critical” issue for an aircraft company, and material deficiencies in the oversight of such a vital area justified enhanced scrutiny of the board’s conduct.[10]

The Caremark duty of supervision was extended beyond the board level to executive management in 2023 in shareholder derivative litigation against McDonald’s Corporation.[11] In McDonald’s, the Delaware Court of Chancery adopted the reasoning of Caremark when extending the duty of oversight to the management team because executive officers function as agents who report to the board, with an obligation to “identify red flags, report upward, and address the [red flags] if they fall within the officer’s area of responsibility.”[12]

Application of the Duty of Supervision in the Era of GenAI

Each new technology entering the corporate world stimulates a new round of corporate governance questions about whether and how the fiduciary duties of directors and executive officers of publicly held companies are transformed by new business operations and the risks appurtenant to them. GenAI is no different. The nature of GenAI calls for immediate attention from the board of directors and the legal team at publicly held companies.

With the specters of privacy violations, AI “hallucinations” (where an AI model produces incorrect or misleading results), “deepfakes,” bias, lack of transparency, and difficulties in evaluating a “black box” decision-making process, many things can go wrong with the use of GenAI. Each of these failure modes exposes a publicly held company to material risk. At this stage in the evolution of AI, certain categories of corporate, regulatory, and securities law risks are most dangerous for public companies. Publicly held companies need to be especially mindful of public disclosures around AI usage; the impact of AI on their operations, competitive environment, and financial results; and whether AI strategy and usage is likely to have a material effect on overall financial performance and why.

Given the enormous benefits, opportunities, and risks emerging in the era of GenAI, the principles articulated in the Caremark line of cases are instructive for a board of directors and executive management of publicly held companies. Without question, the board of every publicly held company must implement reporting and information systems and controls that govern the organization’s use of GenAI technology. The macro-implications of GenAI compel this conclusion, and the section below suggests specific practical takeaways and best practices.

When implementing GenAI-related systems and controls, the board and management team must contextualize the corporation’s use of AI so that the systems and controls align with the corporation’s business operations, financial goals, and shareholder interests. Publicly held companies that develop and sell GenAI products have different considerations and obligations than companies that only use GenAI in their operations. They must also be mindful that, under the McDonald’s case, the duty of supervision applies equally to executive officers and to boards. As the “conscience” of the organization, the legal team advising a publicly held company must consider day-to-day compliance tactics and measures in addition to adopting systems and controls at the board level that comply with the overarching principles of the duty of supervision.

Practical Takeaways and Best Practices

The following items are integral components of any publicly held company’s AI plan:

  1. Baseline technological GenAI knowledge. Every board member and executive team member must have and maintain a working understanding of what GenAI is, its different iterations and how each works, and how the organization uses and benefits from GenAI.
  2. Ongoing GenAI education. As GenAI technology or the organization’s use of it changes, board members and the executive team should continue to keep themselves informed on issues of significance or risk to the company through regularly scheduled updates.
  3. Institutionalization of GenAI risk oversight. Publicly held companies should build a team of stakeholders from across the entire organization for GenAI oversight. That team must include individuals from business, legal, and technology departments—both high-level executives and operational experts—responsible for evaluating and mitigating GenAI-related risks.
  4. Inclusion of AI experts in board composition. Publicly held companies must modify the composition of their boards to include members with expertise in AI, technology, and data science. The goal is to have well-rounded perspectives on AI-related matters. To meet the legal demands of GenAI supervision, boards should consider recruiting members with legal expertise in technology, data privacy, and AI regulations, as well as board members adept at identifying new technology risks.
  5. AI committee. A publicly held company should establish an AI committee charged with additional oversight of GenAI risks and opportunities.
  6. Adoption of written policies. The board and executive team must create a written framework for making policies and materiality determinations regarding public disclosure in the context of GenAI usage, reporting GenAI incidents with advice of counsel, and setting standards for professionals who oversee GenAI systems and controls.
  7. Understanding of GenAI legal and regulatory compliance. The board and executive team must understand and stay apprised of AI-related legislation and regulations and oversee policies, systems, and controls to ensure that GenAI use complies with new legal requirements.
  8. Ethical GenAI governance. The board and executive team should address ethical standards for GenAI usage, development, and deployment, including issues such as bias, transparency, and accountability.
  9. SEC disclosure. Public companies must understand how Securities and Exchange Commission requirements affect GenAI and incorporate those requirements into their disclosure protocols. Boards must stay informed about regional and global variations in GenAI regulations and adapt corporate policies to ensure compliance with securities regulations and avoid legal pitfalls.
  10. Performance monitoring. The board and the executive team should implement mechanisms to monitor the performance of any GenAI controls and to assess the impact on key performance indicators, as well as regularly review and adapt the company’s GenAI strategies based on other performance metrics.
  11. Collaboration with legal counsel. Close collaboration between boards and legal counsel is essential to minimize GenAI risk. Legal experts should be integral to the decision-making process, providing guidance on compliance, risk management, and the development of legal strategies pertaining to GenAI.

Conclusion

Artificial intelligence, including GenAI, has the power to drive substantial change in our daily lives and in the ways that companies conduct business. With that power comes an emerging and significant risk that publicly held companies and their board members and executives—ever the target of shareholder litigation—must take seriously by implementing robust AI-focused policies, procedures, and risk-management initiatives.


  1. Although earlier generations of artificial intelligence (and technology generally) can afford great benefits and pose material risks, this article focuses on Generative Artificial Intelligence, or GenAI, because of the unique challenges GenAI poses due to machine learning capabilities, training data biases and challenges, privacy issues, and the “black box” nature of the technology.

  2. Smith v. Van Gorkom, 488 A.2d 858, 872 (Del. 1985).

  3. Cede & Co. v. Technicolor, Inc., 634 A.2d 345, 361 (Del. 1993).

  4. In re Caremark Int’l Inc. Deriv. Litig., 698 A.2d 959, 970 (Del. Ch. 1996).

  5. Id. at 971.

  6. Id. (emphasis added). The second prong in Caremark often is characterized as “consciously disregarding ‘red flags.’”

  7. Stone v. Ritter, 911 A.2d 362, 370 (Del. 2006).

  8. Caremark, 698 A.2d at 970.

  9. Id. at 971.

  10. In re The Boeing Co. Derivative Litig., No. 2019-0907-MTZ, 2021 WL 4059934 (Del. Ch. Sept. 7, 2021).

  11. In re McDonald’s Corp. S’holder Derivative Litig., 289 A.3d 343 (Del. Ch. 2023) (“Although the duty of oversight applies equally to officers, its context-driven application will differ. Some officers, like the CEO, have a company-wide remit. Other officers have particular areas of responsibility, and the officer’s duty to make a good faith effort to establish an information system only applies within that area.”).

  12. Id. at 366.