There has been a notable uptick in clinical research sites being bought out by private equity and venture capital companies. This trend signifies the growing recognition of the value these sites hold, not just in terms of their operational capabilities, but also through the critical data they generate. However, for both buyers and sellers, there are significant considerations to keep in mind to navigate these transactions successfully and ethically.
Before purchasing clinical trial sites, a private equity or venture capital company (collectively, “Buyer”) must have a clear thesis as to why such acquisitions make sense. Various reasons have included wanting to:
acquire the data associated with clinical trial participants;
dominate a clinical research market for a specific disease state;
dominate a specific clinical research geographical area; or
vertically integrate into the clinical research space.
Small clinical trial sites are typically structured to ensure that a physician is providing services to the clinical trial site—i.e., the physician serves as a contractor to the clinical trial site and is providing medical services as part of that contract. This, however, can raise significant issues during an acquisition. This article discusses several crucial considerations for Buyers and one option for addressing them.
Privacy Considerations
The Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule requires compliance with “national standards to protect individuals’ medical records and other individually identifiable health information” (“protected health information” or PHI). HIPAA applies to a variety of stakeholders who conduct certain healthcare transactions electronically. State laws also have a meaningful impact on data collection and privacy in this space. California investors alone may need to deal with the California Consumer Privacy Act of 2018, the California Privacy Rights Act of 2020 amending it, and the state’s Confidentiality of Medical Information Act. Without a federal law unifying and simplifying requirements, this hodgepodge of privacy requirements has been, and continues to be, a challenge for Buyers and must be appropriately reviewed to minimize susceptibility to seemingly unlimited fines and penalties.
Compliance with privacy requirements can be pivotal for a deal. Buyers hoping to acquire data associated with clinical trial participants have approached sites to obtain access to subject data through licensing or sale, thereby enabling data brokers to maximize their data troves. However, such goals have often been stymied by the lack of appropriate preexisting consent and the impracticality of contacting past clinical trial subjects at scale to obtain it.
Regulatory Due Diligence
The quality of clinical research and the integrity of the data produced hinge on robust quality programs and oversight mechanisms. For Buyers, assessing the effectiveness of these programs at the target site is crucial. This assessment includes evaluating the site’s adherence to Good Clinical Practice guidelines and whether a competent quality assurance team is present. An effective quality assurance program often includes defined goals; a clear set of standard operating procedures that are routinely updated, and on which staff are routinely trained; and regular audits.
These audits should be conducted by both internal stakeholders and external consultants to minimize bias. Externally, clinical trial sponsors and clinical research organizations will routinely conduct such audits, which are intended both to improve the functioning of an individual site and to ensure that, if the US Food and Drug Administration (FDA) ever audits the site, the records are appropriately maintained. The FDA may nevertheless find problems at the clinical trial site, and such findings, even when addressed, have proved devastating for multiple clinical trial sites. It is therefore important for a potential investor to identify potential audit findings and evaluate their implications for valuation.
Preventing the Corporate Practice of Medicine
One of the foremost considerations in the context of a clinical trial site acquisition is preventing the corporate practice of medicine. This doctrine, which varies by state, generally prohibits corporations or non-physicians from practicing medicine or employing physicians to provide professional medical services. It is intended to ensure that medical decisions are made by qualified medical professionals rather than by corporate entities driven by profit motives or individuals who may not adequately appreciate the medical decision-making process.
Some argue that research is exempt from corporate practice of medicine rules. Nevertheless, this conclusion is generally deemed to be premature and may need to be evaluated on a case-by-case basis. By way of example, while the definition of the practice of medicine varies from state to state, the implementing regulations of the Texas Medical Practice Act specifically define “Actively engaged in the practice of medicine” as including “clinical medical research” and “the practice of clinical investigative medicine.”
Some states, such as Michigan, will not allow physicians to be employed by non-physicians and allow physicians to form only professional corporations, professional associations, or professional limited liability companies owned exclusively by physicians. Accordingly, in such states, a clinical trial site engaged in the practice of medicine cannot be owned by a Buyer who is not a physician. On the other hand, several states, including Arizona, allow non-physicians to own a portion of a professional corporation that practices medicine, but they limit this to a 49 percent interest or other noncontrolling interest. In certain states, this also means that only the physician’s office can bill for medical services. In other states, however, no such requirements are imposed on clinical trial sites. This variability can have a dramatic impact on the value of a clinical trial site and on the appropriate structure of the relationship between a physician and a clinical trial site. It is therefore important to conduct a state-by-state analysis of the definition of the practice of medicine, its application, and its implications for the corporate practice of medicine doctrine as applied to your target research site.
Ownership Considerations
When a Buyer purchases a clinical trial site, it hopes not only to own the site but also to prevent the physician performing the research from starting a competing clinical trial site next door.
As discussed above, depending on the state, Buyers may not be able to actually own the doctor’s office or the research site that performs medical services—since that could violate state law.
When the Buyer can neither purchase the physician’s office due to state law, nor prevent the physician from starting a competitor next door, it can be unclear what the Buyer is actually buying.
A Structural Solution
There is, however, a simple, time-tested way to address many of the privacy, regulatory, and corporate practice of medicine problems described above: creating a management service organization (MSO) to handle the nonmedical aspects of the clinical research site. Such a structure enables physicians to maintain control over medical decisions at their medical office, while the MSO can be owned by the Buyer and will provide the physician’s office with services related to clinical research. Such services may include regulatory assistance, sales and marketing, training, and more.
In such a situation, PHI provided to a doctor’s office is subject to HIPAA. However, a HIPAA waiver can be obtained from the patient to permit sharing information with the MSO. This structure has the further advantage of permitting the same PHI to be shared with pharmaceutical companies or medical device companies (collectively, “Sponsors”), which are likewise not “covered entities” as defined by HIPAA and are therefore not subject to HIPAA regulations in the context of research. This is especially important because most Sponsors refuse to sign a HIPAA “Business Associate Agreement.” The signing of the HIPAA waiver reduces the risk of privacy-related liability.
In the event a Buyer has a holding company that holds multiple MSOs, this MSO structure minimizes the impact of problems at one MSO on the holding company and related companies. For example, if a single MSO is affected by an FDA-related or privacy-related regulatory concern, the Buyer may choose to disband or disavow that individual MSO and continue to operate its remaining MSOs without their being tainted by the regulatory finding.
Conclusion
For clinical research sites, partnering with Buyers can provide much-needed resources and support, but it also requires careful planning and due diligence to ensure that the partnership is aligned with the mission and values of both sides. Investors and sites preparing for a sale and purchase must understand the nuances of complying with corporate practice of medicine doctrines, ensuring proper patient consent for data use, and evaluating the strength of quality assurance programs to ensure a smooth acquisition process.
In response to the misuse of generative artificial intelligence (“GAI”) in court filings, courts nationwide have promulgated standing orders and local rules on how parties should use GAI in the courtroom. This article will summarize those local rules and standing orders and identify common issues in cases where attorneys’ misuse of GAI resulted in potential sanctions. Of the approaches courts have taken thus far, the local rule set forth by the United States District Court for the Eastern District of Texas presents one notable model for courts considering promulgating a rule on the use of GAI, because it provides guidance on the use of GAI in court filings while remaining able to adapt to GAI’s rapid advancements.
An Overview of GAI
In a nutshell, GAI refers to machine learning algorithms that are “trained on data to recognize patterns and generate new content based on the ‘rules and patterns’ they have learned.”[1] There are many different GAI programs that serve many different purposes. For example, ChatGPT is a GAI that can generate pages of material and has infamously been responsible for generating court filings that included fake cases. By contrast, Grammarly and Microsoft Copilot are GAI tools that help improve the clarity of writing. Moreover, Westlaw and LexisNexis have developed GAI to help with case research, which could streamline attorney work products and save money for law firms and clients.
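To make the quoted idea of learned “rules and patterns” concrete, consider the following deliberately toy sketch, with made-up training text; it is a simplified analogy, not how any particular product works. The model tallies which word tends to follow which in its training data, then generates new text by sampling from those tallies. Nothing in it “knows” whether its output is true, which is one way to understand how fabricated citations can emerge.

```python
# Toy illustration of generative pattern-learning: learn which word tends to
# follow which, then generate new text by sampling from those learned patterns.
# Real GAI systems use vastly larger neural networks, but the gist is similar.
import random
from collections import defaultdict

training_text = (
    "the court granted the motion . the court denied the motion . "
    "the party filed the motion ."
).split()

# "Training": record, for each word, the words observed to follow it.
follows = defaultdict(list)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word].append(next_word)

# "Generation": repeatedly sample a plausible next word.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))  # e.g., "the court granted the motion . the party filed"
```

The output is fluent-sounding recombination of training patterns, with no built-in check for accuracy.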
Legal Standard
Current case law surrounding GAI has invoked Rule 11 and Rule 8 of the Federal Rules of Civil Procedure. Rule 11 provides that any document filed with the court must be signed by at least one attorney of record who certifies that “after an inquiry reasonable under the circumstances . . . the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law.”[2] Rule 11 also requires certification that any document filed with the court does not “needlessly increase the cost of litigation . . . [and] the factual contentions have evidentiary support or . . . will likely have evidentiary support after a reasonable opportunity for further investigation or discovery.”[3] Rule 8 provides that “a pleading that states a claim for relief must contain . . . a short and plain statement of the claim showing that the pleader is entitled to relief.”[4] Although Rule 26 has not been implicated yet, discovery requests, responses, and objections could be drafted using GAI. Similar to Rule 11, Rule 26(g) requires that at least one attorney of record sign discovery requests and responses and certify that after a reasonable inquiry all filings are warranted by existing law, are nonfrivolous, and do not needlessly increase the cost of litigation.
A violation of Rule 8 can lead to a dismissal of the complaint, while violations of Rule 11 and Rule 26 can result in a range of sanctions. If a court decides to issue sanctions sua sponte, it should only do so “upon a finding of subjective bad faith.”[5] When parties sign and file their affirmations and make no inquiries as to the accuracy of their assertions, it supports a finding of subjective bad faith.[6] When parties use GAI to file documents that include fake cases, that inherently supports a finding of subjective bad faith because it demonstrates a lack of inquiry, which is sufficient to impose sanctions sua sponte. Therefore, courts already possess the power to sanction parties that misuse GAI and do not need to promulgate additional filing requirements.
Local Rules and Standing Orders Relating to the Use of GAI
Courts across the country have varied on how to address the use of GAI in court filings. Court rules on the topic have ranged from guidance implementing no additional requirements to a complete prohibition on GAI. However, most courts have promulgated a rule on GAI that requires some form of disclosure and certification when a party uses GAI.
A. Disclosure and Certification When GAI Is Used to Draft Filings
Courts that require disclosure when GAI is used to draft portions of a filing vary in their requirements. Some courts only require a verification that the contents of the filing are accurate, while others require a separate certification in addition to the filing. For example, in 2023 the United States Bankruptcy Court for the Western District of Oklahoma promulgated a general order requiring that any document drafted by GAI be accompanied by a certification that
(1) identif[ies] the program used and the specific portions of text for which [GAI] was utilized; (2) certif[ies] the document was checked for accuracy using print reporters, traditional legal databases, or other reliable means; and (3) certif[ies] the use of such program has not resulted in the disclosure of any confidential information to any unauthorized party.[7]
B. Disclosure and Certification When GAI Is Used to Prepare a Filing
Some courts require disclosure and certification when parties use GAI in any capacity to prepare filings with the court. However, these courts do not distinguish between GAI that can generate work products and other forms of GAI that help clarify writing or facilitate legal research. For example, Judge Palk of the United States District Court for the Western District of Oklahoma created a standing order that illustrates this issue: it requires parties that used GAI to draft or prepare a court filing to disclose “that [G]AI was used and the specific [G]AI tool that was used. The unrepresented party or attorney must further certify in the document that the person has checked the accuracy of any portion of the document drafted by [G]AI, including all citations and legal authority.”[8] This suggests that, to comply with the standing order, parties must disclose and certify every filing for which they used legal search engines that incorporate GAI to streamline search results or proofreading software such as Grammarly or Microsoft Word.
Some courts, mainly in Texas, take this a step further to require a certification regarding GAI regardless of whether it was used; they require that parties certify either that they did not use GAI to draft or prepare a filing or, if they did, that the parties will check “any language drafted by [GAI] . . . for accuracy, using print reporters or traditional legal databases, by a human being.”[9] Overly broad disclosure and certification requirements can be cumbersome and difficult to enforce, and they may create confusion among individuals trying to file.
C. Prohibitions on the Use of GAI
A minority of courts prohibit parties from using GAI to draft documents that are filed with the court.[10] Some judges prohibit the use of GAI in any capacity. Although these rules typically create a carve-out allowing parties to use search engines that incorporate GAI, they do not create the same carve-out for proofreading software that utilizes GAI for clarity of writing.[11] For example, Judge Newman of the United States District Court for the Southern District of Ohio stipulates that “[n]o attorney for a party, or a pro se party, may use Artificial Intelligence (‘AI’) in the preparation of any filing submitted to the Court.” This magnifies the problem, discussed above, of failing to distinguish between different forms of GAI: as more proofreading software incorporates GAI to assist with clarity of writing, this standing order will become increasingly arduous to comply with. Further, it would be impossible to consistently determine whether a party has used GAI to assist with clarity of writing, making such a standing order so far-reaching as to be effectively unenforceable. As a result, these courts will likely have to change their local rules and standing orders in the near future.
D. Rules That Provide Guidance and Do Not Impose Additional Requirements
A handful of courts have addressed the use of GAI through guidance rather than imposing an additional filing requirement. For example, the United States District Court for the Eastern District of Texas promulgated a local rule stating that if a party uses GAI to prepare or draft a court filing, Federal Rule of Civil Procedure 11 still applies. The local rule also reminds parties who use GAI to review the generated content for accuracy in order to avoid sanctions.[12] This approach achieves a court’s goal of addressing the use of GAI while remaining able to adapt to the inevitable widespread adoption of GAI.
Common Issues That Arise in GAI Sanctions Jurisprudence
The main issue that courts sanctioning litigants for misuse of GAI have encountered is the “hallucination” of cases when parties use GAI to generate work products. The United States District Court for the Southern District of New York addressed this issue in the infamous case Mata v. Avianca, in which an attorney used ChatGPT to draft an Affirmation in Opposition that cited mostly fake cases.[13] Since then, citing fake cases has been the main reason parties have been sanctioned for using GAI.[14] In Kruse v. Karlen, the GAI not only hallucinated cases but also provided erroneous information about state statutes.[15]
Courts have also dismissed pleadings generated with GAI because they violated Federal Rule of Civil Procedure 8(a). In Whaley v. Experian Information Solutions, Inc., a pro se litigant filed a 144-page complaint alleging a violation of the Fair Credit Reporting Act and used GAI to generate a portion of it.[16] The complaint was verbose and confusing, and it lacked accurate citations. Therefore, the court dismissed the complaint without prejudice because it violated Rule 8(a).[17]
The United States Bankruptcy Court for the Southern District of New York has also addressed the use of GAI in an expert witness report. In In re Celsius Network LLC, an expert witness generated a 172-page report using GAI in seventy-two hours. He admitted that a “comprehensive human-authored report would have taken over 1,000 hours to complete.”[18] The report “contained numerous errors, ranging from duplicated paragraphs to mistakes in its description of the trading window selected for evaluation . . . [and] contain[ed] almost no citations to facts or data underlying the majority of the methods, facts, and opinions set forth therein.”[19] As a result, Judge Glenn excluded the report from the record.[20]
Although Rule 26 has not been at issue in cases noted thus far, GAI could easily be used in discovery requests, responses, and objections. Some courts have anticipated this possibility in their standing orders and stated that Rule 26 sanctions apply in addition to Rule 11 sanctions.
Conclusion
Although courts should rightfully be concerned about the widespread use of GAI, they already have the tools to address any issue that may arise without promulgating an additional rule. If parties use fictitious sources, they inherently violate the certification requirement under Rule 11 and Rule 26. The Fifth Circuit acknowledged this on June 11, 2024, and decided not to promulgate a rule on GAI because, as Law360 summarized it, “court rules already require attorneys to check filings for accuracy, and using AI doesn’t excuse lawyers from ‘sanctionable offenses.’”[21] Imposing additional certification requirements or prohibitions is likely unnecessary and could burden parties and courts. Nevertheless, considering the changing landscape of GAI, a local rule similar to the one promulgated by the United States District Court for the Eastern District of Texas may be useful to inform litigants that the use of GAI is permitted and to serve as a reminder to check all sources for accuracy or else be subject to Rule 11 and Rule 26 sanctions.
Order re: Pleadings Using Generative A.I., General Order 23-01, Bankr. W.D. Okla. (2023). See also General Order on the Use of Unverified Sources, General Order 23-1, D. Haw. (2023) (requiring parties that used GAI to generate any filing with the court to disclose that they relied on an unverified source and confirm the language generated was not fictitious); Pleadings Using Generative Artificial Intelligence, General Order 2023-03, Bankr. N.D. Tex. (2023) (requiring parties to check for accuracy any portion of a document drafted by GAI through “print reporters, traditional legal databases, or other reliable means”); Blumenfeld Jr., J., Standing Order for Civil Cases, C.D. Cal. (last updated Mar. 1, 2024) (requiring a party that uses GAI to generate a portion of a filing to attach a separate document disclosing the use and certifying the accuracy of its content; Magistrate Judge Oliver of the same district also adopted this standing order); Vaden, J., [Standing] Order on Artificial Intelligence, Ct. Int’l Trade (2023) (requires that any submission that contains text drafted with GAI assistance be accompanied by (1) disclosure of what program was used and portions of the text that were so drafted and (2) a certification that the use of the program did not result in a breach of confidentiality to a third party).
Starr, J., Mandatory Certification Regarding Generative Artificial Intelligence [Standing Order], N.D. Tex. (last visited Aug. 8, 2024). Judge Kacsmaryk of the same district, Judge Olvera of the United States District Court for the Southern District of Texas, and Judge Crews of the United States District Court for the District of Colorado have also adopted versions of this standing order.
See Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024) (an attorney cited nonexistent cases, and the judge referred her to the court’s Grievance Panel). See also United States v. Cohen, No. 18-CR-602 (JMF), 2024 WL 1193604, at *2 (S.D.N.Y. Mar. 20, 2024) (Michael Cohen’s lawyer cited three nonexistent cases generated by Google Bard); Ex parte Lee, 673 S.W.3d 755, 756 (Tex. App. 2023) (an attorney cited five sources in an appeal from an order of judgment; three were nonexistent, and the two published cases did not correspond to the reporter the cases were cited with); Will of Samuel, 206 N.Y.S.3d 888, 891, 896 (N.Y. Sur. 2024) (although counsel did not admit to using GAI, the court suspected use of GAI because five out of the six cases he cited were fake and ordered a hearing to determine the issue).
Kruse v. Karlen, ED 111172, 2024 WL 559497, at *3 (Mo. Ct. App. Feb. 13, 2024).
For many years, financial institutions have utilized service providers, including third-party vendors and nonbank entities that partner with banks, for a multitude of purposes. The use of service providers has not historically been a controversial issue, and financial institutions have always had an obligation to manage those relationships in a manner consistent with safety and soundness standards. Given this background, what should we do differently when evaluating so-called bank partnership programs that have received more scrutiny, particularly in the FinTech context? The answer: closely monitor state legislation, because rapidly evolving state law has created a patchwork of legal and regulatory issues for these programs, similar to but more complicated than prior waves of legislation regulating mortgage brokers, loan servicers, and debt collectors.
In June 2023, the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC) issued guidance on managing risks associated with third-party relationships (Guidance). This Guidance replaces and rescinds prior guidance and frequently asked questions that date back to 2008. The Guidance acknowledges the long-standing use of service providers—“[b]anking organizations routinely rely on third parties for a range of products, services, and other activities”—and the benefit of such relationships: “The use of third parties can offer banking organizations significant benefits, such as quicker and more efficient access to technologies, human capital, delivery channels, products, services, and markets.” However, it notes that the use of a third party does not diminish or negate the financial institution’s responsibility to ensure its activities are run in a safe and sound manner and comply with applicable laws and regulations. In other words, a financial institution cannot avoid liability by delegating certain responsibilities to its service provider.
The Guidance emphasizes the need for an appropriate risk assessment of service provider relationships, as well as tailoring the compliance management system and oversight to be commensurate with the risk presented by the service provider. For financial institutions that wish to partner with a nonfinancial institution in a “bank partner” model, this Guidance provides a good framework on how to develop policies and procedures to ensure safe and sound banking practices.
At a glance, this should be the end of the story—create solid risk management practices and appropriately manage your relationships. However, state licensing regimes and the interplay of federal and state law create complex issues, particularly when analyzing a consumer lending bank partner program. Both financial institutions and their nonbank partners must be cognizant of the rapidly changing landscape at the state level. States have threatened, and currently are attempting, to opt out of the Depository Institutions Deregulation and Monetary Control Act (DIDMCA). The purpose of DIDMCA was to place national and state banks on a level playing field. Other state legislation has created “predominant economic interest” and other so-called “true lender” tests to determine whether the financial institution is in fact the lender of record, or whether the loans should be treated as if the nondepository partner were the lender.
As a result, while the general premise of a bank partnership is old news, the current wave of legislation brings both an old concept (state licensing and supervision) and a new concept (substantively regulating the terms of credit extended by financial institutions through legislation purportedly applicable only to the nondepository entity) to regulating such partnerships. The complexity and sheer volume of state laws aimed at exercising authority over financial services products provided by financial institutions mean that both financial institutions and their partners must be diligent when crafting their relationship and monitoring ongoing legislative changes. Up-front consideration should be given to developing the program, assigning responsibilities, building comprehensive compliance management systems, and ensuring ongoing diligence.
Imagine shopping for Christmas gifts online without knowing that AI is tracking your facial expressions and eye movements in real time and steering you toward more expensive items by prioritizing the display of similar high-priced items. Now picture a job candidate whose quiet demeanor is misinterpreted by an AI recruiter, resulting in the denial of his dream job. Emotional AI, a subset of AI that “measures, understands, simulates, and reacts to human emotions,”[1] is rapidly spreading. Used by at least 25 percent of Fortune 500 companies as of 2019,[2] with a market size projected to reach $13.8 billion by 2032,[3] this technology is turning our emotions into data points.
This article examines the data privacy, manipulation, and bias risks of Emotional AI, analyzes relevant United States (“US”) and European Union (“EU”) legal frameworks, and proposes compliance strategies for companies.
Emotional AI, if not operated and supervised properly, can cause severe harm to individuals and subject companies to substantial legal risks. It collects and processes highly sensitive personal data related to an individual’s intimate emotions and has the potential to manipulate and influence consumer decision-making processes. Additionally, Emotional AI may introduce or perpetuate bias. Consequently, the misuse of Emotional AI may result in violations of applicable EU or US laws, exposing companies to potential government fines, investigations, and class action lawsuits.
1. Emotional AI Defined
Emotional AI techniques can include analyzing vocal intonations to recognize stress or anger and processing facial images to capture subtle micro-expressions.[4] As this technology develops, it has the potential to revolutionize how we interact with technology by introducing more relatable and emotionally responsive ways of doing so.[5] Already, Emotional AI personalizes experiences across different industries: call center agents tune into customer emotions, instructors personalize learning, healthcare chatbots offer support, and ads are edited for emotional impact. In trucking, AI detects drowsiness for driver safety, while in games, it adapts the experience to the player.[6]
2. Data Privacy Concerns
Emotional AI relies on vast amounts of personal data to infer emotions (output data), raising privacy concerns. It may use the following input data:
Textual data: social media posts and emojis.
Visual data: images and videos, including facial expressions, body language, and eye movements.
Audio data: voice recordings, including tone, pitch, and pace.
Physiological data: biometric data (e.g., heart rate) and brain activity via wearables.
Because emotions are among the most intimate aspects of a person’s life, people are naturally more worried about the privacy of data revealing their emotions than about other kinds of personal data. Imagine a loan officer using AI-based emotional analysis to collect and analyze loan applicants’ gestures and voices at interviews. Applicants may be concerned about how their data will be used, how they can control such uses, and the potential consequences of a data breach.
A. Legal Framework
The input and output data of Emotional AI (“Emotional Data”), if directly identifiable, relating to, or reasonably linked to an individual, fall under the broad definition of “Personal Data” and are thus protected under various US state data privacy laws and the European Union’s General Data Protection Regulation (“GDPR”),[8] which serves as the baseline for data privacy laws in EU countries.[9] For example, gestures and body movements, voice recordings, and physiological responses—all of which can be processed by Emotional AI—can be directly linked to specific individuals and therefore constitute Personal Data. Comprehensive data privacy laws in many jurisdictions require the disclosure of data collection, processing, sharing, and storage practices to consumers.[10] They grant consumers the rights to access, correct, and delete Personal Data; require security measures to protect Personal Data from unauthorized access, use, and disclosure; and stipulate that data controllers may only collect and process Personal Data for specified and legitimate purposes.[11] Additionally, some laws require minimizing the Personal Data used, limiting the duration of data storage, and collecting no more Personal Data than is necessary to achieve the stated purposes of processing.[12]
Furthermore, if the Personal Data have the potential to reveal certain characteristics such as race or ethnicity, political opinions, religious or philosophical beliefs, genetic data, biometric data (for identification purposes), health data, or sex life and sexual orientation, they will be considered sensitive Personal Data (“SPD”). For instance, Emotional AI systems that analyze voice tone, word choice, or physiological signals to infer emotional states could potentially reveal information about an individual’s political opinions, mental health status, or religious beliefs—which is SPD—such as by analyzing a person’s speech patterns and stress levels during discussions on certain topics. Both the GDPR and several US state privacy laws provide strong protections for SPD. The GDPR requires organizations to obtain a data subject’s explicit consent to process SPD with certain exceptions.[13] It also mandates a data protection impact assessment when automated decision-making with profiling significantly impacts individuals or involves processing large amounts of sensitive data.[14] Similarly, several US state laws require a controller to perform a data protection assessment[15] and obtain valid opt-in consent.[16] California grants consumers the right to limit the use and disclosure of their SPD to what is necessary to deliver the services or goods.[17] The processing of SPD may also be subject to other laws, such as laws on genetic data,[18] biometric data,[19] and personal health data.[20] Depending on the context in which Emotional AI is utilized, certain sector-specific privacy laws may apply, such as the Gramm-Leach-Bliley Act (“GLBA”) for financial information, the Health Insurance Portability and Accountability Act (“HIPAA”) for health information, and the Children’s Online Privacy Protection Act (“COPPA”) for children’s information.
Emotional AI relies heavily on biometric data, such as facial expressions, voice tones, and heart rate. One of the most comprehensive and most litigated biometric privacy laws is Illinois’s Biometric Information Privacy Act (“BIPA”). Under the BIPA, “Biometric information” includes any information based on biometric identifiers that identify a specific person.[21] “Biometric identifiers” include “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.”[22] The BIPA imposes the following key requirements on private entities that collect, use, and store Illinois residents’ biometric identifiers and information (a simplified consent-record sketch follows this list):
Develop and make accessible to the public a written policy that outlines the schedules for retaining biometric data and procedures for its permanent destruction.
Safeguard biometric data with a level of care that meets industry standards or is equivalent to the protection afforded to other sensitive data.
Inform individuals about the specific purposes for which their biometric data is being collected, stored, or used, and the duration for which it will be retained.
Secure informed written consent from individuals before collecting or disclosing biometric data.
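As a purely illustrative aid, not a compliance tool or legal advice, the following minimal sketch uses hypothetical field names to show the kinds of statutory elements a BIPA-style informed-consent record might capture before biometric collection; the electronic-signature field reflects the 2024 BIPA amendments discussed later in this feature.

```python
# Hypothetical sketch of a BIPA-style consent record capturing the statute's
# disclosure and written-release elements. Field names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class BiometricConsentRecord:
    subject_name: str
    identifier_type: str           # e.g., "fingerprint", "scan of face geometry"
    purpose: str                   # specific purpose disclosed to the individual
    retention_until: datetime      # disclosed retention period
    written_release_obtained: bool
    electronic_signature: Optional[str] = None  # electronic signatures now qualify
    disclosed_at: datetime = field(default_factory=datetime.now)

consent = BiometricConsentRecord(
    subject_name="Jane Doe",
    identifier_type="fingerprint",
    purpose="employee timekeeping",
    retention_until=datetime.now() + timedelta(days=365),
    written_release_obtained=True,
    electronic_signature="esig-token-123",  # hypothetical signature token
)
# BIPA requires informed written consent *before* collection or disclosure.
assert consent.written_release_obtained, "collect biometric data only after consent"
```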
The adoption of biometric privacy laws is a growing trend across the country. Several states and cities, including Texas, Washington, New York City, and Portland, have also passed biometric privacy laws.
Current data privacy laws help address the data privacy concerns related to Emotional AI. However, Emotional AI presents unique challenges in complying with data minimization requirements. AI systems often rely on collecting and analyzing extensive datasets to draw accurate conclusions. For example, Emotional AI might use heart rate to assess emotions, but a person’s heart rate can be influenced by factors beyond emotions, such as room temperature or physical exertion.[23] Data minimization mandates collecting only relevant physiological data, yet AI systems might need to capture a wide range of data to account for potential external influences and improve the accuracy of emotional state inferences. This creates a situation in which data beyond the core emotional-state indicators is collected, and what data is “necessary” may be contested.
In addition, Emotional AI development may encounter difficulties in defining the intended purposes for data processing due to the inherently unpredictable nature of algorithmic learning and subsequent data utilization. In other words, the AI might discover unforeseen connections within a dataset, potentially leading to its use for purposes that were not defined and conveyed to consumers. For example, a customer service application could use Emotional AI to analyze customer voices during calls to identify frustrated or angry customers for priority handling. Over time, the AI could identify a correlation between specific speech patterns and a higher likelihood of customers canceling the service, a purpose not included in the privacy policy.
B. Legal Strategies
To effectively comply with the complex array of data privacy laws and overcome the unique challenges presented by Emotional AI, organizations developing and using Emotional AI should consider adopting the following key strategies:
Develop a comprehensive privacy notice that clearly outlines the types of Emotional Data collected, the purposes for processing that data, how the data will be processed, and the duration for which the data will be stored.
To address data minimization concerns, plan in advance the scope of Emotional Data necessary for and relevant to developing a successful Emotional AI; adopt anonymization or aggregation techniques whenever possible to remove personal data components (see the sketch following this list); and enforce appropriate data retention policies and schedules.
To tackle the issue of purpose specification, regularly review data practices to assess whether Emotional Data in AI is used for the same or compatible purposes as stated in relevant privacy notices. If the new processing is incompatible with the original purpose, update the privacy notices to reflect the new processing purpose, and de-identify the Emotional Data, obtain new consent, or identify another legal basis for the processing.
If the Emotional Data collected can be considered sensitive Personal Data, implement an opt-in consent mechanism and conduct a privacy risk assessment.
Implement robust data security measures to protect Emotional Data from unauthorized access, use, disclosure, or alteration.
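To illustrate the anonymization and aggregation point above, here is a minimal sketch assuming a hypothetical record format: it pseudonymizes direct identifiers with a salted one-way hash and reduces row-level Emotional Data to group-level statistics. A real program would need far more, such as documented retention schedules, key management, and re-identification risk analysis.

```python
# Minimal sketch of two de-identification techniques for Emotional Data:
# pseudonymization (replacing direct identifiers with irreversible tokens)
# and aggregation (retaining only group-level statistics).
import hashlib
import statistics

records = [  # hypothetical raw Emotional Data tied to individuals
    {"user_id": "alice@example.com", "heart_rate": 88, "inferred_emotion": "stress"},
    {"user_id": "bob@example.com", "heart_rate": 72, "inferred_emotion": "calm"},
    {"user_id": "carol@example.com", "heart_rate": 91, "inferred_emotion": "stress"},
]

SECRET_SALT = b"rotate-and-store-separately"  # placeholder; manage via a key vault

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash token."""
    return hashlib.sha256(SECRET_SALT + user_id.encode()).hexdigest()[:16]

pseudonymized = [{**r, "user_id": pseudonymize(r["user_id"])} for r in records]

# Aggregation: keep only group-level statistics and discard row-level data.
aggregate = {
    "mean_heart_rate": statistics.mean(r["heart_rate"] for r in records),
    "stress_share": sum(r["inferred_emotion"] == "stress" for r in records) / len(records),
}
print(pseudonymized[0]["user_id"], aggregate)
```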
3. Risks of Emotion Manipulation
Emotional AI carries significant risks of being used for manipulation. In three experiments, AI has been shown to learn from participants’ responses to identify vulnerabilities in their decision-making and guide them toward desired actions.[24] Imagine an online social media platform using Emotional AI to detect and strengthen gamblers’ addictions in order to promote ads for its casino clients.
A. Legal Framework
I. EU Law
The EU recently enacted the Artificial Intelligence Act (the “EU AI Act”), addressing Emotional AI abuse by prohibiting two key categories of AI systems:[25]
AI systems that use subliminal methods or manipulative tactics to significantly alter behavior, hindering informed choices and causing or likely causing significant harm.
Emotion recognition AI in educational and workplace settings except for healthcare or safety needs.
If an Emotional AI system is not prohibited under the EU AI Act, such as when it does not cause significant harm, it is deemed a “high-risk AI system,” subjecting its providers and deployers to various requirements, including:
Providers must ensure transparency for deployers by providing clear information about the AI system, including its capabilities, limitations, and intended use cases. They must also implement data governance, promptly address any violation of the EU AI Act and notify relevant parties, implement risk and quality management systems, perform conformity assessments to demonstrate that the AI system meets the requirements of the EU AI Act, and establish human oversight mechanisms.
Deployers must inform consumers of significant decisions, conduct impact assessments, report incidents, ensure human oversight, maintain data quality, and monitor systems.[26]
II. US Law
There is no specific US law that addresses Emotional AI. However, Section 5 of the Federal Trade Commission (“FTC”) Act prohibits unfair or deceptive acts or practices.[27] FTC attorney Michael Atleson stated in a 2023 consumer alert that the agency is targeting deceptive practices in AI tools, particularly chatbots designed to manipulate users’ beliefs and emotions.[28] Within the FTC’s focus on AI tools, one concern is the possibility of companies’ exploiting “automation bias,” where people tend to trust AI outputs perceived as neutral or impartial. Another area of concern is anthropomorphism, where individuals may find themselves trusting chatbots more when the bots are designed to use personal pronouns and emojis or otherwise resemble a human. The FTC is particularly vigilant about AI steering people unfairly or deceptively into harmful decisions in critical areas such as finance, health, education, housing, and employment. It assesses whether AI-driven practices might mislead consumers into actions contrary to their intended goals and thus constitute deceptive or unfair behavior under the FTC Act. Importantly, these practices can be deemed unlawful even if not all consumers are harmed or if the affected group does not fall under protected classes in antidiscrimination laws. Companies must ensure transparency about the use of AI for targeted ads or commercial purposes and inform users if they are interacting with a machine or whether commercial interests are influencing AI responses. The FTC warns against cutting AI ethics staff and emphasizes the importance of risk assessment, staff training, and ongoing monitoring.[29]
B. Legal Strategies
To avoid regulatory scrutiny and potential claims of emotional manipulation, companies developing or deploying Emotional AI should consider adopting the following strategies:
Ensure transparency by clearly informing users when they are interacting with an Emotional AI and explaining in a privacy policy how the AI analyzes user data to infer emotion and how output data is used, including any potential commercial influences on AI responses.
Refrain from using subliminal messaging or manipulative tactics to influence user behavior. Conduct ongoing monitoring and periodic risk assessments to identify and address emotional manipulation risks.
If operating in the EU, evaluate the Emotional AI’s potential for causing significant harm and determine if it falls under the “prohibited” or “high-risk” category. For high-risk AI systems, comply with the applicable obligations under the EU AI Act.
Train staff on best practices for developing and deploying Emotional AI.
4. Risks of AI Bias
Emotional AI may have biased results, particularly if the training data lacks diversity. For instance, a system trained on images of people of only one ethnicity may not recognize facial expressions of another ethnicity, and cultural differences in gestures and vocal expressions may be misinterpreted by an AI system without diverse training data.[30] For example, an Emotional AI trained on mental health patients from only one ethnic group may misinterpret the emotions of, and thereby overlook important symptoms in, patients from other groups, resulting in misdiagnosis.
A. Legal Framework
I. EU Law
The EU AI Act addresses bias by imposing stringent requirements on high-risk AI providers and deployers, with a particular emphasis on the provider’s obligation to implement data governance to detect and reduce biases in datasets.[31] The GDPR provides an additional layer of protection against AI bias. Under the GDPR, decision-making based solely on automated processing (including profiling), such as AI, is prohibited unless necessary for a contract, authorized by law, or done with explicit consent.[32] Data subjects affected by such decisions have the right to receive clear communication regarding the decision, seek human intervention, express their viewpoint, comprehend the rationale behind the decision, and contest it if necessary.[33] Data controllers are required to adopt measures to ensure fairness, such as using statistical or mathematical methods that avoid discrimination during profiling, implementing technical and organizational measures to correct inaccuracies in personal data and minimize errors, and employing methods to prevent discrimination based on SPD.[34] Automated decision-making and profiling based on SPD are only permissible if the data controller has a legal basis to do so under the GDPR.[35]
II. US Law
There is no specific federal law addressing AI bias in the US. However, existing antidiscrimination laws apply to AI. Notably, the FTC has taken action related to AI bias under the unfairness prong of Section 5 of the FTC Act. In December 2023, the FTC settled a lawsuit with Rite Aid over the alleged discriminatory use of facial recognition technology, setting a new standard for algorithmic fairness programs. This standard includes consumer notification and contesting options, as well as rigorous bias testing and risk assessment protocols for algorithms.[36] This case also establishes a precedent for other regulators with fairness authority, such as insurance commissioners, state attorneys general, and the Consumer Financial Protection Bureau, to use such authority for enforcement against AI bias.
On the state level, in May 2024, Colorado enacted the Artificial Intelligence Act, the first comprehensive state law targeting AI discrimination, which applies to developers and deployers of high-risk AI systems doing business in Colorado.[37] This may extend to out-of-state businesses serving consumers in Colorado.[38] Emotional AI that significantly influences decisions with material effects in areas such as employment, finance, healthcare, and insurance is considered high-risk AI under the Act. Developers of such systems are required to provide a statement on the system’s uses; summaries of training data; information on the system’s purpose, benefits, and limitations; documentation describing evaluation, data governance, and risk mitigation measures, as well as intended outputs; and usage guidelines.[39] Developers must also publicly disclose types of high-risk AI systems they have developed or modified and risk management approaches, and they must report potential discrimination issues to the attorney general and deployers within ninety days.[40] Deployers must inform consumers of significant decisions, summarize deployed systems and discrimination risk management on their websites, explain negative decisions with correction or appeal options, conduct impact assessments, report instances of discrimination to authorities, and develop a risk management program based on established frameworks.[41]
In addition, most state data privacy laws stipulate that a data controller shall not process personal data in violation of state or federal laws that prohibit unlawful discrimination against consumers.[42] The use of Emotional AI in the employment context also subjects companies to various federal and state laws.[43]
B. Legal Strategies
To comply with antidiscrimination laws and address bias risks of Emotional AI, companies developing or deploying Emotional AI should consider adopting the following strategies:
Establish a robust data governance program to ensure diversity and quality of training data for Emotional AI systems, including regularly monitoring and auditing the training data.
Develop a risk management program based on established risk frameworks, such as the AI Risk Management Framework released by the National Institute of Standards and Technology.[44]
Conduct routine AI risk assessments and bias testing to identify and mitigate potential biases in Emotional AI systems, particularly those used in high-risk areas such as employment, finance, healthcare, and insurance (a simple bias-testing sketch follows this list).
Publicly disclose details about Emotional AI systems on the company website, including data practices, types of systems developed or deployed, and risk management approaches.
Inform consumers of significant decisions made by Emotional AI systems. Establish mechanisms to allow consumers to contest decisions and appeal unfavorable outcomes, notify consumers of their rights, and provide clear explanations for decisions made by Emotional AI systems.
In employment contexts, comply with federal and state laws, Equal Employment Opportunity Commission guidance, and Colorado’s and the EU’s AI Acts.[45]
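By way of illustration only, the following sketch, built on hypothetical evaluation data, shows one of the simplest bias tests referenced above: comparing an Emotional AI classifier’s accuracy across demographic groups. Production audits would apply established frameworks, such as the NIST AI Risk Management Framework, and many additional metrics.

```python
# Simple bias test: compare an emotion classifier's accuracy across groups.
# Evaluation data below is hypothetical; real audits need representative samples.
from collections import defaultdict

# (group, true_label, predicted_label)
evaluations = [
    ("group_a", "stress", "stress"), ("group_a", "calm", "calm"),
    ("group_a", "calm", "stress"),   ("group_b", "stress", "calm"),
    ("group_b", "stress", "calm"),   ("group_b", "calm", "calm"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, truth, predicted in evaluations:
    totals[group] += 1
    hits[group] += truth == predicted

for group in totals:
    accuracy = hits[group] / totals[group]
    print(f"{group}: accuracy {accuracy:.0%}")

# A large accuracy gap between groups is a red flag warranting investigation
# of training-data diversity before deployment.
```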
5. Conclusion
The rapid growth of Emotional AI presents a complex challenge to legislators. The EU’s strict regulations on AI and data privacy more effectively safeguard consumers’ interests. However, will this approach hinder AI innovation? Conversely, the reliance of the United States on a patchwork of state and sector laws, along with federal government agencies’ guidance and enforcement, creates more room for AI development. Will this strategy leave consumer protections weak and impose burdensome compliance requirements? Should the United States consider federal legislation that balances innovation with consumer protections? This is an important conversation. In the meantime, companies must continue to pay close attention to Emotional AI’s legal risks across a varied legal landscape.
Meredith Somers, “Emotion AI, Explained,” MIT Sloan School of Management, March 8, 2019.
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) 1.
Currently, twenty US states have passed data privacy laws: California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Texas, Florida, Montana, Oregon, Delaware, New Hampshire, New Jersey, Kentucky, Nebraska, Maryland, Minnesota, and Rhode Island.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
This article is related to a Showcase CLE program that took place at the American Bar Association Business Law Section’s 2024 Fall Meeting. All Showcase CLE programs were recorded live and will be available for on-demand credit, free for Business Law Section members.
The rapid advancement of artificial intelligence (“AI”) technologies, particularly generative AI (“GenAI”), presents both opportunities and challenges for the legal profession. While AI offers significant benefits to legal practice, it does not diminish the core ethical obligations of lawyers. In fact, it heightens the need for accountability, critical thinking, and professional judgment. The legal profession stands at a critical juncture, tasked with harnessing the power of AI while steadfastly maintaining the ethical standards that underpin the administration of justice. By embracing a thoughtful, accountable approach to AI integration, lawyers can enhance their practices while continuing to fulfill their paramount duty to clients and the legal system.
This panel will present a perspective that is neither pessimistic nor optimistic; our goal is not to declare that the glass is either half-empty or half-full. Instead, we will present practical guidance regarding what you need to know to effectively and ethically integrate AI into the practice of law.
The fulcrum for our discussion will be the American Bar Association’s recent Formal Opinion 512, issued on July 29, 2024 (“Opinion 512”). Opinion 512 is practically grounded in the present capabilities of GenAI. To that end, it focuses upon three core issues:
lawyers remain fully accountable for all work product, regardless of how it is generated;
the existing rules of professional conduct are sufficient to govern AI use in legal practice; and
Our presentation will delve into specific ABA Model Rules of Professional Conduct and their implications for AI use, generally following the order in which they are discussed in Opinion 512.
Rule 1.1 (Competence) requires lawyers to maintain technological competence. This necessitates a “trust but verify” approach to GenAI outputs that never compromises accountability. Competency with GenAI also means that lawyers need to understand its capabilities and limitations, not in some abstract technical way, but in ways sufficient to comprehend how it could impact their duties as lawyers. To that end, we will discuss how GenAI is not actually intelligent, but instead is simply “applied statistics”; how to leverage the power that this miracle of math provides; and, perhaps most importantly, how to avoid being deceived by AI creators into thinking that an AI tool is somehow a thinking, feeling person just like you.
Rule 1.6 (Confidentiality) mandates vigilance in protecting client information when using AI tools. Lawyers using GenAI need to understand whether the GenAI systems that they are using are “self-learning” and will thus send information—including confidential client information—as feedback to the system’s main database. Because the vast majority of such systems are self-learning, a healthy skepticism toward disclosing any client information to GenAI is critical.
Rule 1.4 (Communication) may require client consultation about AI use in their matters, particularly when confidentiality concerns arise.
Rules 3.1, 3.3(a)(1), and 8.4(c) (Meritorious Claims, Candor to the Tribunal, and Misconduct) prohibit the use of AI-generated false or frivolous claims. This once again implicates our first core issue: As the lawyer, you are the one who is accountable, and “I trusted the AI (but forgot to verify)” is not going to be acceptable.
Rules 5.1 and 5.3 (Supervision of Lawyers and Nonlawyers) may one day raise complex questions of how human-level AI must be properly supervised. But for now, the New York Bar Association’s guidance provides the best set of guidelines (leveraging ABA Resolution 122 from 2019) to avoid letting a GenAI tool supplant the lawyer as the final decision-maker.
Rule 1.5 (Fees) presents challenges in balancing efficiency gains from AI with ethical billing practices.
Rule 5.5 (Unauthorized Practice of Law) necessitates vigilance to ensure AI tools do not cross into providing legal advice or exercising legal judgment without appropriate lawyer oversight.
Finally, we will look to the future, beyond the present-focused Opinion 512. As AI capabilities expand, we must all remain vigilant as lawyers in upholding our ethical duties, which are fundamentally rooted in human knowledge, judgment, and accountability. Until AI can credibly match such human qualities, it cannot, and should not, be able to claim such ethical responsibilities as, inter alia, attorney-client privilege.
The Illinois Biometric Information Privacy Act (“BIPA”) became effective in 2008. Alleged violations under BIPA have resulted in numerous lawsuits and defendants’ (businesses’) liability for substantial damages.[1] On May 16, 2024, the Illinois State Legislature passed Senate Bill 2979 (SB 2979) to amend BIPA, and sent the bill to Illinois Governor J.B. Pritzker. On August 2, 2024, the governor signed the legislation into law effective immediately. The amendments limit BIPA damages and provide for electronic consent. Key changes include:
A private entity that collects or discloses a person’s biometric data without consent can only be found liable for one BIPA violation per person regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient. New 740 ILCS 14/20(b) and (c) modify the 2008 740 ILCS 14/20 text[2] “A prevailing party may recover for each violation . . . ,” which was interpreted by the courts as a “per scan” damages calculation.
Written consent for collection of biometric information under BIPA now includes electronic signatures. 740 ILCS 14/10 (Definitions) as amended adds a new definition, “electronic signature,” and includes it as part of the definition of “written release.”
These BIPA amendments underscore the need for businesses to review their contracts with vendors providing biometric devices. In particular, businesses should consider requiring in these contracts, among other things, detailed functional specifications, as well as vendor warranties and indemnifications, concerning the biometric device’s ability to capture, record, and preserve the electronic signatures of users whose biometric data the devices capture, consistent with the amended written consent provisions of BIPA.
It is important to note that these BIPA amendments do not eliminate all liabilities for violations under BIPA. Hypothetically, a business with a large number of employees or customers could still potentially be liable for substantial damages. For example, if a business was found to have intentionally or recklessly violated BIPA and was subject to liquidated damages of $5,000 or actual damages, and it has 1,000 employees or customers for whom it collected biometric data, then damages could be $5,000,000 (=$5,000 x 1,000) plus reasonable attorneys’ fees and costs. Of course, this is hypothetical and would be subject to the facts and the applicable law, but you can do the math and see that even with these BIPA amendments, BIPA violations can result in substantial damages.
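To make the arithmetic concrete, the back-of-the-envelope calculation below contrasts the amended per-person accrual rule with the pre-amendment “per scan” reading adopted in Cothron (discussed next). It is purely illustrative: the function name and inputs are hypothetical, liquidated damages under BIPA are discretionary, and actual exposure depends on the facts and applicable law.

```python
# Illustrative only: BIPA damages are discretionary and fact-dependent.

def bipa_exposure(people: int, scans_per_person: int = 1,
                  per_violation: int = 5_000,
                  per_scan_accrual: bool = False) -> int:
    """Rough ceiling on liquidated damages.

    per_scan_accrual=False mirrors amended 740 ILCS 14/20(b)-(c)
    (one violation per person for the same data and same recipient);
    True mirrors the pre-amendment Cothron "per scan" reading.
    """
    violations = people * (scans_per_person if per_scan_accrual else 1)
    return violations * per_violation

# The article's hypothetical: 1,000 employees at $5,000 each.
print(bipa_exposure(people=1_000))                       # 5,000,000
# The same workforce under the old per-scan reading, assuming
# (hypothetically) 500 scans per person over the period:
print(bipa_exposure(1_000, 500, per_scan_accrual=True))  # 2,500,000,000
```

The gap between those two figures is precisely the tension between deterrence and “astronomical” awards that the amendments, and the opinions discussed below, attempt to resolve.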
In Cothron v. White Castle System, Inc.,[3] the Supreme Court of Illinois, citing to one of its earlier decisions,[4] recognized the potential for significant damages awards under BIPA:
This court explained that the legislature intended to subject private entities who fail to follow the statute’s requirements to substantial potential liability. The purpose in doing so was to give private entities “the strongest possible incentive to conform to the law and prevent problems before they occur.” As the Seventh Circuit noted,[5] private entities would have “little incentive to course correct and comply if subsequent violations carry no legal consequences.”[6]
The Supreme Court noted in Cothron: “It also appears that the General Assembly chose to make damages discretionary rather than mandatory under the Act.”[7] However, the Supreme Court held “that the plain language of section 15(b) and 15(d) shows that a claim accrues under the Act with every scan or transmission of biometric identifiers or biometric information without prior informed consent.”[8]
In a separate opinion upon denial of rehearing in Cothron, Justice David K. Overstreet[9] in a dissent stated:
Although the majority recognized that it “appear[ed]” that these awards would be discretionary, such that lower courts may award damages lower than the astronomical amounts permitted by its construction of the Act, the court did not provide lower courts with any standards to apply in making this determination. This court should clarify, under both Illinois and federal constitutional principles, that statutory damages awards must be no larger than necessary to serve the Act’s remedial purposes and should explain how lower courts should make that determination. Without any guidance regarding the standard for setting damages, defendants, in class actions especially, remain unable to assess their realistic potential exposure.[10]
In the Cothron decision, the Court found that the BIPA statutory language clearly supported plaintiff’s position.[11] Still, the Court stated:
Ultimately, however, we continue to believe that policy-based concerns about potentially excessive damage awards under the Act are best addressed by the legislature. See McDonald[12] . . . (observing that violations of the Act have the potential for “substantial consequences” and large damage awards but concluding that “whether a different balance should be struck *** is a question more appropriately addressed to the legislature”). We respectfully suggest that the legislature review these policy concerns and make clear its intent regarding the assessment of damages under the Act.[13] (emphasis added)
SB 2979 was the result of the Illinois legislature taking up the Court’s invitation to revisit the assessment of damages under BIPA.
The bottom line is that the courts and the legislature will have to continue addressing the tension between the 2008 Illinois legislative findings[14] underlying BIPA and potentially excessive BIPA damages awards. That analysis should account for evolving artificial intelligence (“AI”) software, which offers humanity many benefits but also poses risks, including through AI’s use of biometric data (and its ability to copy that data). Hypothetically, consider AI software supplied with an individual’s biometric data compromised in a cybersecurity event coupled with a BIPA violation: the individual could suffer financial harm (e.g., where the biometric data allows unauthorized access to the individual’s financial accounts) or health harm (e.g., where it allows unauthorized access to the individual’s medical records and permits changes to the individual’s recorded allergies or medications, which, in an emergency, could be life-threatening). The full ramifications of biometric technology and AI are not fully known. Legislators and the courts will need to weigh the opportunities and risks these, and other, technologies present to society, and strive for a judicial and legislative balance that maximizes their beneficial opportunities while containing, mitigating, or removing their risks.
This article was updated on September 4, 2024, after its original publication on June 17, 2024.
Many BIPA defendants paid these damages pursuant to a settlement agreement.
SB 2979 relabeled 740 ILCS 14/20 to make the original text subpart (a) and add new subparts (b) and (c).
Cothron, 216 N.E.3d at 929 (citations omitted). 740 ILCS 14/20 as adopted in 2008 actually concludes with text supportive of the discretion afforded courts regarding damages: “A prevailing party may recover for each violation: . . . (4) other relief, including an injunction, as the State or federal court may deem appropriate” (emphasis added).
740 ILCS 14/5 (Legislative findings; intent) includes, without limitation: “(c) Biometrics are unlike other unique identifiers that are used to access finances or other sensitive information. For example, social security numbers, when compromised, can be changed. Biometrics, however, are biologically unique to the individual; therefore, once compromised, the individual has no recourse, is at heightened risk for identity theft, and is likely to withdraw from biometric-facilitated transactions. . . . (f) The full ramifications of biometric technology are not fully known.”
This article is related to a Showcase CLE program that took place at the American Bar Association Business Law Section’s 2024 Fall Meeting. All Showcase CLE programs were recorded live and will be available for on-demand credit, free for Business Law Section members.
Much of the Supreme Court’s docket affects businesses in some respect, but some cases address business issues head-on. Over the past two terms, the Court has decided several cases that dealt directly with business issues or will have a heavy impact on businesses.
Some of the cases grew out of events of national significance. The chapter 11 proceeding of Purdue Pharma was perhaps the largest. In that case, Harrington v. Purdue Pharma, the Court was called on to decide whether a proposed chapter 11 plan resolving the bankruptcy could be confirmed if it required nondebtor claimants to release nondebtors who were financing the plan. The Court said no: nondebtors can’t be forced against their will to release other nondebtors. In a separate case, Truck Insurance v. Kaiser Gypsum, the Court gave broad standing to those with an interest in a plan to appear and object. One function of the bankruptcy court is to provide a forum where those affected by a party’s insolvency can be heard, and this decision buttresses that function.
As intellectual property continues its important role in the American economy, the Court continues to decide a steady stream of IP cases. Andy Warhol Foundation v. Goldsmith grappled with the scope of “fair use” of copyrighted works and held that Andy Warhol’s use of the plaintiff’s photograph of Prince was not a fair use. It remains to be seen how much of fair use survives beyond truly transformative noncommercial uses. In Warner Chappell Music v. Nealy, the Court permitted copyright plaintiffs to recover damages incurred before the limitations period. Jack Daniel’s Properties v. VIP Products held that a parody is not immune from claims for trademark infringement or dilution. That case involved a dog toy designed to look like a Jack Daniel’s bottle, complete with humorous text. But the parodic humor did not insulate the product from claims under the Lanham Act. And Vidal v. Elster held that the Patent and Trademark Office did not violate the First Amendment by rejecting registration of “Trump Too Small” as a trademark; the Lanham Act’s bar on registering the name of a living person as a trademark was not unconstitutional. The plaintiff still had the right to use “Trump Too Small” as a slogan, but he couldn’t register it.
Securities law issues also were addressed. Slack Technologies v. Pirani held that, in a direct listing, only holders of securities sold under a registration statement could assert claims under § 11 of the Securities Act of 1933. In a separate case, Macquarie Infrastructure v. MOAB Partners, the Court held that securities fraud claims under § 10(b) of the Securities Exchange Act of 1934 and associated Rule 10b-5 cannot be premised on pure omissions. Instead, some statement had to be misleading for a plaintiff to be able to sue.
Employment issues also featured on the Court’s docket. Groff v. DeJoy clarified that an employer can defeat a religious discrimination claim under Title VII by showing that a “reasonable accommodation” would impose a substantial cost; a mere de minimis cost is not enough. On the other hand, a Title VII plaintiff challenging a transfer need show only some harm even if not “significant,” under Muldrow v. City of St. Louis. And a plaintiff who seeks whistleblower protection under the Sarbanes-Oxley Act need prove only that his or her protected activity was a contributing factor to the adverse job action, with no need to prove retaliatory intent, per Murray v. UBS Securities.
Another perennial business topic for the Court is arbitration. Smith v. Spizzirri held that when a court holds a dispute is arbitrable, the case is not dismissed but stayed. Coinbase v. Bielski held that when a court holds a dispute is not arbitrable, the case does not proceed to discovery while an appeal is pending. Instead, the case in the lower court is stayed pending decision of the appeal. Coinbase v. Suski is an object lesson for drafters of contracts. When there is more than one arguably governing dispute resolution provision—one calling for arbitration and another for litigation—it is for a court rather than an arbitrator to decide which one governs, because the issue is whether there was an agreement to arbitrate at all.
The Commerce Clause came into play in interesting ways. National Pork Producers v. Ross held that California did not violate the dormant Commerce Clause by requiring that any pork sold in California be raised in specified humane conditions, even though almost all pork is raised outside California. Mallory v. Norfolk Southern upheld against a due process challenge a Pennsylvania statute under which a corporation that registers to do business in the state must consent to personal jurisdiction in the state for all purposes (but whether this passes Commerce Clause muster was left for another day).
Property rights also made an appearance. In Sheetz v. El Dorado County, the Court held that the Takings Clause can be violated by legislatively imposed fees and conditions that are not linked to the impact of a particular project. As a result, the owner of a newly built prefabricated home could challenge, as a Fifth Amendment taking, the county’s imposition of various legislatively mandated charges in connection with the construction of his home.
Numerous other cases, including especially those concerning administrative law and Title VI, are likely to have substantial impact on business as well. The long-term impact of the Court’s recent decisions will become apparent in the marketplace and in follow-up litigation in the Court in coming years.
On April 11, the US Department of the Treasury announced a Notice of Proposed Rulemaking (NPRM) amending the regulations that govern the operations of the Committee on Foreign Investment in the United States (CFIUS, or the Committee). CFIUS is the US government body that reviews potential national security concerns resulting from foreign investments in and acquisitions of US businesses and certain real estate.
Intended to demonstrate the Committee’s “focus on monitoring, compliance, and enforcement,” the NPRM proposes to increase penalty amounts, expand CFIUS’s authority to request information, and tighten the time frame parties have to respond to mitigation agreement terms. (CFIUS sometimes conditions approval of a transaction on the parties accepting terms to mitigate perceived national security risk.)
The public comment period closed on May 15, and the proposed changes will not take effect until a final rule is issued. Regardless of the specifics of the final rule, the result will be a more robust CFIUS. US sellers, and foreign buyers and investors, need to plan accordingly.
Background on CFIUS
CFIUS is an interagency committee with the authority to review transactions involving foreign investment in the United States and in certain US real estate (covered transactions). Chaired by the Secretary of the Treasury, CFIUS includes representatives from the Departments of Commerce, Defense, Energy, Homeland Security, Justice, and State; the Office of the US Trade Representative; and the Office of Science & Technology Policy. Several White House offices also participate in the Committee.
CFIUS reviews the national security implications of covered transactions and has the authority to impose conditions on transactions to mitigate associated national security risks. Most submissions to CFIUS are made on a voluntary basis; however, certain circumstances require parties to submit a mandatory declaration. The Committee may also investigate non-notified transactions, which remain subject to potential CFIUS review indefinitely.
Transactions notified to the Committee can take two forms: a formal notice subject to a review period of forty-five days, or a shorter-form declaration subject to a thirty-day review period (though at the end of this period, CFIUS may request submission of a formal notice). At the conclusion of the forty-five-day review period, CFIUS may initiate a further investigation of another forty-five days. In rare cases, CFIUS may recommend that the president block a transaction. Parties must respond to requests for information from the Committee at any point in this process.
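For planning purposes, those periods reduce to simple date arithmetic. The sketch below is a hypothetical illustration only; it assumes the periods run in calendar days from CFIUS’s acceptance of the filing, and the regulations, not this sketch, govern how deadlines are actually computed.

```python
from datetime import date, timedelta

def cfius_clock(accepted: date, declaration: bool = False) -> dict:
    # Declarations get a thirty-day assessment; formal notices get a
    # forty-five-day review, extendable by a forty-five-day investigation.
    if declaration:
        return {"assessment_ends": accepted + timedelta(days=30)}
    review_end = accepted + timedelta(days=45)
    return {
        "initial_review_ends": review_end,
        "investigation_ends_if_opened": review_end + timedelta(days=45),
    }

print(cfius_clock(date(2025, 1, 6)))
# {'initial_review_ends': datetime.date(2025, 2, 20),
#  'investigation_ends_if_opened': datetime.date(2025, 4, 6)}
```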
Increased Penalties
As described in the NPRM, CFIUS determined that the current maximum penalty for violations of CFIUS regulations—$250,000 or the value of the transaction (whichever is greater)—does not sufficiently deter certain violations. Given that the median value of covered transactions reviewed by CFIUS was $170 million in recent years and that the definition of “transaction” within the regulations can lead to substantial undervaluation of transactions, the US government understandably believes penalties should be increased.
In particular, the NPRM would increase maximum monetary penalties as follows:
For violations related to submitting a declaration or notice with a material misstatement or omission, or making a false certification: from $250,000 to $5 million per violation.
For violations related to failure to comply with the mandatory declarations regulations: from $250,000 to $5 million or the value of the transaction, whichever is greater. Note that these requirements do not apply to covered real estate transactions.
For violations of a material provision of a mitigation agreement, a material condition, or an order: from $250,000 to the greatest of (i) $5 million, (ii) the value of the transaction, or (iii) the value of the violating party’s interest in the US business (or real estate) at the time of the transaction or violation. Because the value of the interest in the US business at the time of the transaction or violation may be greater than the value of the transaction itself, option (iii) provides enhanced deterrence for mitigation-related violations.
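In other words, the proposed cap for mitigation-related violations is a simple “greatest of” computation. The sketch below uses hypothetical names and figures; CFIUS retains discretion to impose far less than the maximum.

```python
def max_mitigation_penalty(transaction_value: int,
                           interest_value: int) -> int:
    """Proposed cap: the greatest of $5 million, the transaction value,
    or the value of the violating party's interest in the US business
    (or real estate) at the time of the transaction or violation.
    Illustrative only; actual penalties are discretionary."""
    return max(5_000_000, transaction_value, interest_value)

# E.g., a deal at the recent $170 million median where the party's
# interest is later worth $200 million (hypothetical figures):
print(max_mitigation_penalty(170_000_000, 200_000_000))  # 200000000
```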
The NPRM also proposes that penalties may be imposed against a party that makes a material misstatement or omission to the Committee outside of a declaration or notice. Most notably, this would cover requests by CFIUS for information pertaining to non-notified transactions.
The maximum penalty will not be imposed in every case, and CFIUS maintains discretion to determine an appropriate penalty in accordance with the CFIUS Enforcement and Penalty Guidelines. Relatedly, the NPRM extends the deadlines from fifteen days to twenty days both for a party to submit a petition for reconsideration of a penalty and for the Committee to issue a final penalty determination.
Expansion of Authority to Request Information
The NPRM also proposes to expand CFIUS’s authority to collect relevant information, including from nonparties (those not directly party to a transaction), to enforce its regulations.
First, the NPRM would grant CFIUS broader authority to investigate non-notified transactions. CFIUS is currently able to request information to determine whether a non-notified transaction is subject to CFIUS jurisdiction (i.e., “covered”). Under the proposed changes, CFIUS would be able to request information not only from parties to the transaction but also from nonparties to determine “whether [the non-notified] transaction may raise national security considerations . . . [or] meets the criteria for a mandatory declaration.” Parties would be required to respond to such requests.
Second, the NPRM would require parties to respond when the Committee requests information to: (1) “monitor compliance with or enforce the terms of a mitigation agreement, order, or condition” and (2) determine whether a material misstatement or omission was made by a transaction party. The regulations currently do not require parties to respond to such requests, though in practice they are rarely ignored.
Finally, the NPRM relaxes the standard under which CFIUS may exercise its subpoena authority to compel information from parties.
CFIUS has increasingly prioritized the review of non-notified transactions. In September 2023, Assistant Secretary of the Treasury for Investment Security Paul Rosen called CFIUS’s “non-notified work . . . one of [its] most important functions.” According to CFIUS’s latest annual report (the Annual Report), the Committee continues to hire dedicated staff and implement training for this purpose. As stated in the NPRM, expanding information gathering on non-notified transactions will promote “efficiency in connection with filings for transactions that may present an extant risk” by “allow[ing] the Committee to prioritize transactions that parties were required to submit . . . or that, in its view, otherwise warrant formal review.”
Tightening Mitigation Negotiation Timelines
Where CFIUS identifies a national security concern in connection with a transaction, it can propose mitigation measures to address those concerns in exchange for allowing a transaction to proceed. Currently, there is no specified timeline for parties to respond to mitigation agreement terms proposed by CFIUS. The Department of the Treasury believes that this “can sometimes result in a protracted process where parties may take longer than is reasonable to respond to the Committee’s proposed terms.”
To address this concern, the NPRM would require a “substantive response” to any proposed mitigation agreement terms within three business days, absent an extension. (The NPRM does not detail how CFIUS will decide whether to grant an extension.) A “substantive response” is expected to consist of an acceptance, a counterproposal, or a “detailed statement of reasons” as to why the parties cannot comply with the terms.
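Three business days is easy to compute but easy to miss. The sketch below counts the proposed response window, skipping weekends; it is illustrative only and, for simplicity, ignores federal holidays and any extension CFIUS might grant.

```python
from datetime import date, timedelta

def response_deadline(received: date, business_days: int = 3) -> date:
    """Count forward the proposed three business days, skipping weekends."""
    d = received
    while business_days:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 through Friday=4
            business_days -= 1
    return d

# Terms received on a Friday are due the following Wednesday:
print(response_deadline(date(2025, 1, 10)))  # 2025-01-15
```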
Given the complexity and inherently international nature of a transaction subject to a mitigation agreement, three business days is a very short turnaround. Yet parties to a proposed mitigation agreement must understand the proposed terms and their impact on the business, including their ability to comply with the terms going forward. Most fundamentally, the parties need to understand the extent to which proposed mitigation terms change the underlying deal. Underlying agreements typically permit the buyer or investor to halt the transaction if CFIUS approval requires material changes to the terms of the transaction.
In addition, failure to appropriately shape and implement mitigation terms can lead to violations of the mitigation agreement itself.
The three-day timeline was the most frequently cited concern among the public comments to the NPRM. Several commenters recommended that the Committee instead impose an abbreviated timeline on a case-by-case basis. Commenters also proposed an extended general deadline of five business days.
While it remains to be seen how the final rule will address the three-day timeline, CFIUS has clearly signaled its focus on compliance with and enforcement of mitigation agreements. Parties must therefore have a well-defined strategy as to what remediation measures will be palatable. In this regard, the Annual Report’s description of past mitigation measures and conditions is a helpful tool for considering potential measures.
New Tool for Countering China?
Some believe the changes proposed in the NPRM are primarily meant to create additional tools to counter China. In a September 2022 Executive Order, the “first-ever presidential directive” providing factors for CFIUS to consider in its reviews, the White House targeted areas seen as priorities of Chinese industrial development, including supply chains in the microelectronics, artificial intelligence (AI), quantum computing, and agricultural spaces. The Annual Report also notes that “economic, industrial, and cyber espionage by foreign actors like China . . . continues to represent a significant threat to US prosperity, security, and competitive advantage.” The changes proposed by the NPRM come as ByteDance’s ownership of TikTok is subject to increased congressional and regulatory scrutiny and as the Treasury issues new rules to limit US outbound investment into specific Chinese sectors.
Going Forward
The NPRM will establish a more muscular CFIUS, with further beefing up likely. There continue to be calls to broaden the scope of CFIUS jurisdiction. For example, the US-China Economic and Security Review Commission recently recommended in its 2023 Report to Congress that Congress pass legislation that would treat foreign research contracts with universities as covered transactions subject to CFIUS review. Further, bipartisan concern over foreign investment in US agricultural land led to the March 2024 inclusion of the Department of Agriculture as a case-by-case member of the Committee for certain agriculture-related transactions. Proposed legislation also seeks to require that “detailed and timely . . . transaction data relevant to foreign investments in agricultural land” be provided to the Committee to ensure proper review of such transactions by CFIUS.
In an environment in which the scope of the Committee’s review and enforcement efforts is expanding, nearly any transaction involving a foreign investor or acquirer should be reviewed for CFIUS implications. And in a transaction involving legitimate national security issues, the parties should proactively consider potential mitigation measures in light of a truncated timeline for reviewing and responding to proposed measures.
The web is rife with articles explaining the importance of protecting a business’s trademarks. These articles usually (and correctly) point out that if someone is potentially infringing your business’s trademark, it’s important to send a cease and desist letter or, if necessary, file a lawsuit: if others start to use your mark (or something like it) and you don’t protect it, you can eventually lose trademark protection.
However, sometimes it might be better not to start the legal ball rolling. I say this even though I’m a litigator and, yes, one of the ways I earn my living is by helping businesses sue for trademark infringement. Why? Well, a few recent cases highlight the importance of taking a step back and thinking things through before sending that cease and desist letter or filing a lawsuit.
Trademark Suits Should Not Be Used for Purposes Other than Addressing Trademark Infringement
One example is Trader Joe’s case against its employee union, Trader Joe’s United. The union, in its efforts to raise money for organizing locations throughout the supermarket chain, sells mugs, T-shirts, and other merchandise branded with its Trader Joe’s United logo. Trader Joe’s claimed trademark infringement and sued.
The district court granted the union’s motion to dismiss (see pages 3 and 4 of the linked order to compare Trader Joe’s marks and what the union used), writing that it felt “compelled to put legal formalisms to one side and point out the obvious. This action is undoubtedly related to an existing labor dispute, and it strains credulity to believe that the present lawsuit—which itself comes dangerously close to the line of Rule 11—would have been filed absent the ongoing organizing efforts that Trader Joe’s employees have mounted (successfully) in multiple locations across the country.” In other words, the court was saying that the real reason Trader Joe’s sued was to try to shut down the union. As the court noted in a subsequent decision, given the “extensive and ongoing legal battles over the Union’s organizing efforts at multiple stores, Trader Joe’s claim that it was genuinely concerned about the dilution of its brand resulting from [the Union’s] mugs and buttons cannot be taken seriously.” The court went on to hold that no reasonable consumer would think that the union’s merchandise originated with Trader Joe’s—the central inquiry in a trademark infringement case. The court also awarded the union its legal fees, noting in its decision that the case stood out “in terms of its lack of substantive merit.”
Think about What Happens If (When) a Cease and Desist Letter Becomes Public
Famed restaurateur David Chang and his company, Momofuku, also recently lost a trademark battle that they probably wish they hadn’t started. On the bright side for Chang and Momofuku, there was no lawsuit, and they weren’t unceremoniously kicked out of court like Trader Joe’s. However, they did have to issue an apology after sending cease and desist letters to several other businesses owned by Asian Americans demanding that they cease and desist using the term “chile crunch” or “chili crunch.” (For those of you who mostly stick to milder foods, Momofuku Chili Crunch is a packaged “spicy-crunchy chili oil that adds a flash of heat and texture to your favorite dishes.”) Momofuku owns the trademark rights to the first spelling and claims common law rights to the second; it applied to register a trademark for “chili crunch” with an i around the same time it sent out the cease and desist letters.
There was significant pushback on these letters from the recipients, who posted them to social media and shared them with mainstream media outlets, highlighting how Chang and Momofuku were trying to assert rights over a generic term frequently used in Asian and Asian American gastronomic offerings. The companies felt that Chang and Momofuku were trying to use their status and financial resources to unjustifiably attack other Asian-American-owned companies.
Don’t Threaten a Trademark Lawsuit If You Don’t Own the Trademark
And then, there’s the case of the Los Angeles Police Foundation (LAPF), a private group affiliated with the Los Angeles Police Department. It sent a cease and desist letter to a company selling T-shirts emblazoned with the words “Fuck the LAPD” on top of the Los Angeles Lakers logo.
In its letter, the LAPF asserted it is “one of two exclusive holders of intellectual property rights pertaining to trademarks, copyrights and other licensed indicia for (a) the Los Angeles Police Department Badge; (b) the Los Angeles Police Department Uniform; (c) the LAPD motto ‘To Protect and Serve’; and (d) the word ‘LAPD’ as an acronym/abbreviation for the Los Angeles Police Department.”
There are a lot of whiffs here for the LAPF. Strike one: Government agencies can’t get trademark protection for their names. Strike two: The LAPF isn’t the LAPD, so it has no basis for claiming infringement on something that doesn’t belong to it. Strike three (it’s a big one): Obviously, the logo on these shirts belongs to the Lakers, not the LAPF. Strike four: There’s an argument that the shirt is meant as a parody and/or political commentary and, therefore, protected under the First Amendment.
Worth noting here is the T-shirt manufacturer’s carefully crafted response to the LAPF after receiving the cease and desist letter: “LOL, no.” That was the entirety of the response. Points for clarity, concision, and all-around humor.
What does this all mean? Well, if you send a cease and desist letter or file suit to protect a trademark you don’t actually have (the LAPF), or if you’re trying to accomplish a goal that is not related to actually protecting your trademark (Trader Joe’s), you’re just going to be embarrassed. And while the Momofuku matter is more nuanced, it’s fair to say that many companies use the term “chili crunch” (or the similar “chili crisp”), making Momofuku’s efforts to trademark it seem like the work of a bully.
The lesson here: Legal claims don’t exist in a vacuum. Examine the validity of your claims, but also think about the potential negative publicity and damage to your reputation before firing off cease and desist letters haphazardly or filing suit. Because even if you win in court, sometimes public opinion is the final judge. And no business wants to upset that judge.
On June 28, 2024, in a maximalist decision that went further than even the most ardent opponents of Chevron deference thought possible, the Supreme Court finally and emphatically overruled Chevron deference, the watershed rule that governed the level of deference afforded to administrative agency interpretation of ambiguous statutes for nearly forty years.
The Court’s decision will have an immediate and lasting impact on executive agency interpretations of ambiguous federal statutes, as well as potentially on hundreds, if not thousands, of prior decisions decided on Chevron deference grounds—and the future of the administrative state in America.
An Emphatic Rejection of Judicial Deference to Agency Interpretation
Chevron deference, established in 1984, required courts to defer to “permissible” agency interpretations of the statutes those agencies administer, even when a reviewing court reads the statute differently. This principle of deference to administrative agencies was a cornerstone of administrative law for nearly four decades and one that Chevron opponents had looked to overturn for years.
Enter Loper Bright Enterprises v. Raimondo and Relentless, Inc. v. Department of Commerce, a pair of cases that sought to overturn Chevron deference once and for all. As the Court’s questions at oral argument made clear, Chevron deference was on borrowed time. Even so, the majority opinion in Loper Bright and Relentless, Inc. represents an emphatic rejection of the agency deference ushered in by Chevron and its progeny.
Chief Justice Roberts’s majority opinion focused not only on the history of statutory interpretation in the United States but also on the creation of the Administrative Procedure Act (APA) and on what the majority viewed as the unworkability of Chevron deference in its current form. The Chief Justice first noted that Article III has always been interpreted to vest in the courts the power to say what a law means. Even so, he observed that courts have always understood that some deference was afforded to the Executive Branch’s interpretation of statutes. But, according to the Chief Justice, that deference was not unlimited. Rather, “[t]he views of the Executive Branch could inform the judgment of the Judiciary, but did not supersede it.” The majority opinion explained that this version of agency deference continued through the New Deal era, noting further that when deference was given to an agency, it was on fact-based inquiries, not questions of law.
The APA was enacted in 1946 “as a check upon administrators whose zeal might otherwise have carried them to excesses not contemplated in legislation creating their offices.” As Chief Justice Roberts noted, under the APA, courts utilize their own judgment in deciding questions of law, notwithstanding an agency’s interpretation of the particular law. In the majority’s view, the APA “makes clear that agency interpretations of statutes—like agency interpretations of the Constitution—are not entitled to deference. The APA’s history and the contemporaneous views of various respected commentators underscore the plain meaning of its text.” This reasoning, according to the majority, supports de novo review (i.e., review with no deference given) of the meaning of an ambiguous provision in a statute.
Despite this, the Court did note that some degree of agency deference may still be appropriate in certain circumstances. As the Chief Justice explained:
Courts exercising independent judgment in determining the meaning of statutory provisions, consistent with the APA, may—as they have from the start—seek aid from the interpretations of those responsible for implementing particular statutes. And when the best reading of a statute is that it delegates discretionary authority to an agency, the role of the reviewing court under the APA is, as always, to independently interpret the statute and effectuate the will of Congress subject to constitutional limits. The court fulfills that role by recognizing constitutional delegations, fixing the boundaries of the delegated authority, and ensuring the agency has engaged in “ ‘reasoned decision making’ ” within those boundaries.
According to the majority, Chevron cannot be reconciled with the text and framework of the APA because it requires a court to “ignore, not follow” the reading of the text the court would have reached if it exercised its own independent judgment as the APA (and Article III) require. The Court further rejected the claim that statutory ambiguities are implicitly delegated to agencies as Chevron presupposes.
Not only did the majority find that Chevron contradicts the mandates of the APA, but it also rejected the government’s (and dissents’) arguments in support of the continued viability of Chevron deference. For instance, the majority disagreed that agency experts are better suited to decide and interpret tough and complicated statutory questions. According to Chief Justice Roberts, “agencies have no special competence in resolving statutory ambiguities. Courts do,” and “even when an ambiguity happens to implicate a technical matter, it does not follow that Congress has taken the power to authoritatively interpret the statute from the courts and given it to the agency.” The Court further rejected the claim that such interpretations should be made by policymakers as opposed to unelected judges, noting that “[r]esolution of statutory ambiguities involves legal interpretation, and that task does not suddenly become policymaking just because a court has an ‘agency to fall back on.’ ”
What about Consistency?
What about the consistency that adherents claim comes with applying Chevron deference? According to the majority, it provides no such consistency at all. Rather, because Chevron deference is so indeterminate and sweeping, the Court has had to consistently amend and revise the test, “transforming the original two-step into a dizzying breakdance.” The Court was also not persuaded that its decision would have any impact on the more than 18,000 lower court cases decided on Chevron deference grounds. According to the majority, a party seeking to challenge one of those rulings must establish a “special justification” to do so, and the end of Chevron deference does not constitute such a justification.
Finally, the majority rejected the argument that stare decisis warranted saving Chevron from the chopping block, stating that Chevron is “unworkable”; that the Court, in the majority’s view, has not meaningfully relied on Chevron in recent years; and that the decision has been chipped away at over the years, calling into question its continued validity and lower courts’ reliance on it.
A Fiery Dissent
Justice Kagan pulled no punches in her dissent and took the majority to task for, in her opinion, giving “itself exclusive power over every open issue—no matter how expertise-driven or policy-laden—involving the meaning of regulatory law.” As Justice Kagan explained:
Its justification comes down, in the end, to this: Courts must have more say over regulation—over the provision of health care, the protection of the environment, the safety of consumer products, the efficacy of transportation systems, and so on. A longstanding precedent at the crux of administrative governance thus falls victim to a bald assertion of judicial authority. The majority disdains restraint, and grasps for power.
Justice Kagan also emphatically disagreed with both the majority’s rationale and its disregard, in her opinion, for what comes next with the end of Chevron deference. For instance, she disagreed with the majority that section 706 of the APA mandated a court to utilize a de novo standard when deciding an agency’s interpretation of an ambiguous statute. The dissent also vehemently disagreed with the majority’s contention that courts are in a better position to resolve statutory ambiguities than the so-called agency experts.
In addition, the dissent took the majority to task for not adhering to stare decisis, claiming that Chevron was entitled to a particularly strong form of reliance because (1) Congress has had opportunities to overrule it in the past but has declined to do so; and (2) the Court has continued to rely on Chevron deference in thousands of decisions, as have lower courts. And what about the justification that the Court had not relied on Chevron lately? According to Justice Kagan, that was all by design:
This Court has “avoided deferring under Chevron since 2016” (ante, at 32) because it has been preparing to overrule Chevron since around that time. That kind of self-help on the way to reversing precedent has become almost routine at this Court. Stop applying a decision where one should; “throw some gratuitous criticisms into a couple of opinions”; issue a few separate writings “question[ing the decision’s] premises” (ante, at 30); give the whole process a few years . . . and voila!—you have a justification for overruling the decision.
Justice Kagan likewise found little comfort in the majority’s attempt to insulate prior Chevron-based decisions from being collaterally attacked, noting that finding a “special justification” to warrant overturning such precedent is a low burden to meet.
What Comes Next?
The decision is expected to impact a wide range of regulatory environments, from environmental protections and healthcare to maritime, securities, tax, and financial regulations, and a litany of other federally regulated areas. Federal agencies will now face closer scrutiny and potentially more frequent legal challenges when interpreting ambiguous statutes. Moreover, federal district and circuit courts do not always agree, and this will result in inconsistent application of regulations throughout the country. This, in turn, will result in more issues needing to be resolved by the Supreme Court.
Perhaps unsurprisingly, the Court did not replace Chevron deference with another test for courts to apply when confronted with an ambiguous statute and an agency’s interpretation of the same. Rather, it appears that when faced with ambiguity in a statute, pursuant to the APA, courts will utilize the normal tools of statutory interpretation to decide what the ambiguity means, and that no deference will ordinarily be given to an administrative agency’s interpretation of the ambiguity.
Notably, the majority did find that in some circumstances (like when Congress expressly authorizes it) deference may be appropriate to an administrative agency. Regardless, it is likely that the end of Chevron deference will turbocharge forum shopping. Plaintiffs hostile to an agency’s particular statutory interpretation or final rule will most likely seek out sympathetic courts, whereas those seeking to uphold an agency’s decision will look for courts traditionally more deferential to the Executive Branch.
And what about those 18,000-plus cases previously decided on Chevron deference grounds? While there certainly may be defenses the government can raise to a belated challenge (e.g., laches, statute of limitations), the dissent’s worry that a requirement of a “special justification” to overturn such precedent amounts to no justification at all is well-founded. Indeed, a court hostile to a particular agency or its interpretation can easily come up with a rationale it labels as a “special justification” to overturn an old Chevron-based decision, should it choose to do so. And as Solicitor General Elizabeth B. Prelogar stated at oral argument, litigants almost assuredly “will come out of the woodwork” to challenge Chevron-based decisions.
Further, Loper Bright and Relentless, Inc., at least on paper, represent a seismic shift in power in Washington. Under Chevron, the Executive Branch’s interpretation of statutory ambiguities was given heightened deference. Now that interpretive power belongs almost exclusively to the judicial branch, which must decide, in the words of Justice Kagan, hyper-technical questions like “[w]hen does an alpha amino acid polymer qualify as such a ‘protein’ ” under the Public Health Service Act, or “[h]ow much noise is consistent with ‘the natural quiet’ ” that the Department of the Interior must regulate from aircraft flying over the Grand Canyon?
Finally, while this decision represents an emphatic rejection of agency deference, the majority did concede that agency deference is appropriate in certain circumstances. Indeed, Chief Justice Roberts made clear that Skidmore deference (in which courts grant a modicum of deference to an agency’s statutory interpretation “ ‘to the extent it rests on factual premises within [the agency’s] expertise’ . . . which may give an Executive Branch interpretation particular ‘power to persuade’ ”) remains alive and well. Moreover, the Court’s opinion makes clear that Congress is free to delegate authority to the Executive Branch to interpret the meaning of certain statutes. It remains to be seen how often courts will utilize Skidmore deference moving forward when confronted with agency interpretation of ambiguous statutes.
Regardless, Loper Bright and Relentless, Inc. mark a tectonic shift in administrative law and could reshape the landscape of American governance for years to come. Federal agencies will need to adapt to new judicial scrutiny, legislators may face increased pressure to craft more precise laws, and courts will brace for a heavier caseload as they take on a more prominent role in statutory interpretation.