This article is related to a Showcase CLE program titled “AI Is Coming for You: The Practical and Ethical Implications of How Artificial Intelligence Is Changing the Practice of Law” that took place at the American Bar Association Business Law Section’s 2024 Spring Meeting. All Showcase CLE programs were recorded live and will be available for on-demand credit, free for Business Law Section members.
“This article highlights for busy board members and C-suite executives the dangers of not paying attention to Generative AI. The risk to publicly held companies from non-supervised implementation of Generative AI is significant. The authors make a solid case that best practices are warranted to protect the corporation and the decision-makers.”—Kyung S. Lee, Shannon & Lee LLP, program co-chair
“Although at first glance this thoughtful article seems only tangentially related to the ethical use of Generative AI by lawyers, it actually provides an excellent framework for tackling the question of where, when, and how to use Generative AI capabilities inside the law firm or law department. Like their clients, a law firm or law department needs to consider many of the same issues. Does a potential use create a risk of data exposure? Could potential biases contained in underlying training data create biased outputs from the proposed application? How likely are ‘hallucinations,’ and what damage can they cause? Suggested solutions for public company boards also apply to legal organizations. Education, bringing in experts, and creating systems and teams to vet uses all play their role in making sure legal teams use Generative AI responsibly. The article provides a useful roadmap to protecting legal organizations from the risks of Generative AI deployment.”—Warren Agin, program panelist
Introduction
Artificial intelligence is capturing the imagination of many in the business world, and one real-world message is unmistakable:
Any director or executive officer (CEO, CFO, CLO/GC, CTO, and others) of a publicly held company who ignores the risks, and fails to capitalize on the benefits, of Generative AI does so at his or her peril: failing to properly oversee a prudent GenAI strategy carries a real risk of personal liability.
Generative artificial intelligence, or GenAI,[1] is a technological marvel that is quickly transforming our lives and revolutionizing the way we communicate, learn, and make personal and professional decisions. Due to GenAI-powered technology and smart devices, all industries—from healthcare, transportation, energy, legal, and financial services to education, technology, and entertainment—are experiencing near-exponential improvements. The use cases for GenAI seem boundless, balancing the opportunity to improve society with the risks that make one worry about the devastation that can be caused by GenAI if it operates without meaningful regulation or guardrails. Nowhere is the risk more fraught than in a specific type of highly regulated organization that is accountable to a myriad of stakeholders: U.S. publicly held companies.
Insofar as publicly held companies can be both (i) consumers of GenAI technology and (ii) developers and suppliers of GenAI technology, there are countless use cases, scenarios, and applications for a publicly held company. Common ways in which GenAI is used include data analysis and insights, customer services and support, financial analysis and fraud detection, automation and quality control in production and operation management, and marketing and sales.
Even though the specific applications of GenAI within a publicly held company depend on that company’s industry, goals, and challenges, every board of directors and in-house legal team managing a publicly held company must be keenly attuned to the corporate and securities litigation risks posed by GenAI. Indeed, as GenAI technologies become increasingly important for corporate success, board oversight of GenAI risks and risk mitigation is vital, extending beyond traditional corporate governance. Any publicly held company that does not establish policies and procedures regarding its GenAI use is setting itself up for potential litigation by stockholders as well as vendors, customers, regulatory agencies, and other third parties.
This article focuses on the principle that GenAI policies and procedures at a publicly held company must come from its board of directors, which, in conjunction with the executive team, must take a proactive and informed approach to navigate the opportunities and risks associated with GenAI, consistent with the board’s fiduciary duties.
Legal Background: The Duty of Supervision
Corporate governance principles require directors to manage corporations consistent with their fiduciary duty to act in the best interest of shareholders. The board’s fiduciary duty comprises three specific obligations: the duty of care,[2] the duty of loyalty,[3] and the more recently established derivative of the duty of care, the duty of supervision or oversight.[4]
The duty of supervision stems from the Caremark case, where the Delaware Court of Chancery expressed the view that the board has “a duty to attempt in good faith to assure that a corporate information and reporting system, which the board concludes is adequate, exists, and that failure to do so under some circumstances may, in theory at least, render a director liable for losses caused by non-compliance with applicable legal standards.”[5] The Caremark court later explained that liability for a “lack of good faith” depends on whether there was “a sustained or systematic failure of the board to exercise oversight — such as an utter failure to attempt to assure a reasonable information and reporting system exist . . . .”[6] In Stone v. Ritter, the Delaware Supreme Court explicitly approved the Caremark duty of oversight standard, holding that director oversight liability is conditioned upon: “(a) the directors utterly failed to implement any reporting or information system or controls; or (b) having implemented such a system or controls, [the directors] consciously failed to monitor or oversee its operations thus disabling themselves from being informed of risks or problems requiring their attention.”[7]
Thus, the first prong of the duty of supervision requires the board of directors to assure itself “that the corporation’s information and reporting system is in concept and design adequate to assure the board that appropriate information will come to its attention in a timely manner as a matter of ordinary operations.”[8] Even if the board meets the standard in the first prong, the board can still violate the duty of supervision if it shows a “lack of good faith as evidenced by sustained or systematic failure of a director to exercise reasonable oversight.”[9]
The principles in Caremark were clarified further in a stockholder derivative suit involving The Boeing Company. In that now-classic case, the Delaware Court of Chancery established an enhanced duty of supervision where the nature of a corporation’s business presents unique or extraordinary risk. In Boeing, the Court permitted a Caremark claim to proceed against Boeing’s board of directors, relying in part on a former director’s acknowledgement of the board’s subpar oversight of safety measures. The Court found that safety was a “mission-critical” issue for an aircraft company, and that material deficiencies in oversight systems in such a vital area justified enhanced scrutiny of the board’s oversight.[10]
The Caremark duty of supervision was extended beyond the board level to executive management in 2023 in stockholder derivative litigation involving McDonald’s Corporation.[11] In McDonald’s, the Delaware Court of Chancery adopted the reasoning of Caremark when extending the duty of oversight to the management team because executive officers function as agents who report to the board, with an obligation to “identify red flags, report upward, and address the [red flags] if they fall within the officer’s area of responsibility.”[12]
Application of the Duty of Supervision in the Era of GenAI
Each new technology entering the corporate world stimulates a new round of corporate governance questions about whether and how the fiduciary duty of directors and executive officers of publicly held companies is transformed due to new business operations and the risks appurtenant to them. GenAI is no different. The nature of GenAI calls for immediate attention from the board of directors and the legal team at publicly held companies.
With the specters of privacy violations, AI “hallucinations” (where an AI model creates incorrect or misleading results), “deepfakes,” bias, lack of transparency, and difficulties in evaluating a “black box” decision-making process, many things can go wrong with the use of GenAI. Each of those things that can go wrong exposes a publicly held company to material risk. At this stage in the evolution of AI, there are certain categories of corporate, regulatory, and securities law risks that are most dangerous for public companies. Publicly held companies need to be especially mindful of public disclosures around AI usage; the impact of AI on their operations, competitive environment, and financial results; and whether AI strategy and usage is likely to have a material effect on overall financial performance and why.
Given the enormous benefits, opportunities, and risks emerging in the era of GenAI, the principles articulated in the Caremark line of cases are instructive for a board of directors and executive management of publicly held companies. Without question, the board of every publicly held company must implement reporting, information systems, and controls that govern the organization’s use of GenAI technology. The macro-implications of GenAI compel this conclusion, and the section below suggests specific practical takeaways and best practices.
When implementing GenAI-related systems and controls, the board and management team must contextualize the corporation’s use of AI so that the systems and controls align with the corporation’s business operations, financial goals, and shareholder interests. Publicly held companies that develop and sell GenAI products have different considerations and obligations than do companies that only use GenAI in their operations. When implementing these systems and controls, publicly held companies must be mindful that, under the McDonald’s case, the duty of supervision applies to executive officers as well as to boards. As the “conscience” of the organization, the legal team advising a publicly held company must consider day-to-day compliance tactics and measures in addition to adopting systems and controls at the board level that comply with the overarching principles of the duty of supervision.
Practical Takeaways and Best Practices
The following items are integral components of any publicly held company’s AI plan:
- Baseline technological GenAI knowledge. Every board member and executive team member must have and maintain a working understanding of what GenAI is, its different iterations and how each works, and how the organization uses and benefits from GenAI.
- Ongoing GenAI education. As GenAI technology or the organization’s use of it changes, board members and the executive team should continue to keep themselves informed on issues of significance or risk to the company through regularly scheduled updates.
- Institutionalization of GenAI risk oversight. Publicly held companies should build a team of stakeholders from across the entire organization for GenAI oversight. That team must include individuals from business, legal, and technology departments—both high-level executives and operational experts—responsible for evaluating and mitigating GenAI-related risks.
- Inclusion of AI experts in board composition. Publicly held companies must modify the composition of their boards to include members with expertise in AI, technology, and data science. The goal is to have well-rounded perspectives on AI-related matters. To meet the legal demands of GenAI supervision, boards should consider recruiting members with legal expertise in technology, data privacy, and AI regulations, as well as board members who are expert at identifying new technology risks.
- AI committee. A publicly held company should establish an AI committee charged with additional oversight of GenAI risks and opportunities.
- Adoption of written policies. The board and executive team must create a written framework for making policies and materiality determinations regarding public disclosure in the context of GenAI usage, reporting GenAI incidents with advice of counsel, and setting standards for professionals who oversee GenAI systems and controls.
- Understanding of GenAI legal and regulatory compliance. The board and executive team must understand and stay apprised of AI-related legislation and regulations and oversee policies, systems, and controls to ensure that GenAI use complies with new legal requirements.
- Ethical GenAI governance. The board and executive team should address ethical standards for GenAI usage, development, and deployment, including issues such as bias, transparency, and accountability.
- SEC disclosure. Public companies must understand how Securities and Exchange Commission requirements affect GenAI and incorporate those requirements into their disclosure protocols. Boards must stay informed about regional and global variations in GenAI regulations and adapt corporate policies to ensure compliance with securities regulations and avoid legal pitfalls.
- Performance monitoring. The board and the executive team should implement mechanisms to monitor the performance of any GenAI controls and to assess the impact on key performance indicators, as well as regularly review and adapt the company’s GenAI strategies based on other performance metrics.
- Collaboration with legal counsel. Close collaboration between boards and legal counsel is essential to minimize GenAI risk. Legal experts should be integral to the decision-making process, providing guidance on compliance, risk management, and the development of legal strategies pertaining to GenAI.
Conclusion
Artificial intelligence, including GenAI, has the power to drive substantial change in our daily lives and in the ways that companies conduct business. With that power comes an emerging and significant risk that publicly held companies and their board members and executives—ever the target of shareholder litigation—must take seriously by implementing robust AI-focused policies, procedures, and risk-management initiatives.
Although earlier generations of artificial intelligence (and technology generally) can afford great benefits and pose material risks, this article focuses on Generative Artificial Intelligence, or GenAI, because of the unique challenges GenAI poses due to machine learning capabilities, training data biases and challenges, privacy issues, and the “black box” nature of the technology. ↑
Smith v. Van Gorkom, 488 A.2d 858, 872 (Del. 1985). ↑
Cede & Co. v. Technicolor, Inc., 634 A.2d 345, 361 (Del. 1993). ↑
In re Caremark Int’l Inc. Deriv. Litig., 698 A.2d 959, 970 (Del. Ch. 1996). ↑
Id. at 971. ↑
Id. (emphasis added). The second prong in Caremark often is characterized as “consciously disregarding ‘red flags.’” ↑
Stone v. Ritter, 911 A.2d 362, 370 (Del. 2006). ↑
Caremark, 698 A.2d at 970. ↑
Id. at 971. ↑
In re The Boeing Co. Derivative Litig., No. 2019-0907-MTZ, 2021 WL 4059934 (Del. Ch. Sept. 7, 2021). ↑
In re McDonald’s Corp. S’holder Derivative Litig., 289 A.3d 343 (Del. Ch. 2023) (“Although the duty of oversight applies equally to officers, its context-driven application will differ. Some officers, like the CEO, have a company-wide remit. Other officers have particular areas of responsibility, and the officer’s duty to make a good faith effort to establish an information system only applies within that area.”). ↑
Id. at 366. ↑