Reflections on Citizenship’s Obligations

I. The Bill of Obligations

A. Introduction

We are challenged today to consider what it means to be a true citizen of our country: to consider our Rights as American citizens in the context of the Bill of Rights and the Constitution, and our Obligations and Responsibilities to the country and to each other in the context of what it takes to make our Constitution work.

We hear, read, and say a great deal about our rights as citizens and say, read, and hear much less about our obligations as citizens.

B. The Ten Habits of Good Citizens

In his book The Bill of Obligations: The Ten Habits of Good Citizens, author Richard Haass opens with a discussion of rights and their limits, as well as the importance of addressing citizens’ duties and responsibilities, whose performance is necessary to maintain, preserve, and enforce those rights. Haass then articulates the ten habits he believes are essential to restoring good citizenship and averting threats to the future of our country. Those ten habits are:

  1. Be Informed. Haass notes that an effective, functional democracy requires an informed citizenry, and he asks what an informed citizenry is and what it takes to be an informed citizen. He highlights an awareness of civics and an understanding of the fundamentals of how the country functions from a governance and governing perspective, and he notes the importance of understanding the world in which the country operates, including both domestic and international challenges. Being informed, he comments, is essential to casting one’s vote for particular candidates and to understanding what those candidates bring to the table in terms of experience, judgment, ethics, and trustworthiness, as well as how they would govern and represent the country. He notes the importance of exercising judgment about people’s opinions and observes that the uninformed may be manipulated by misinformation and by the intentions of the various people seeking to be part of the governance of the country. Haass points to the plethora of newspapers, magazines, television, podcasts, websites, social media, and other tools used to transmit information and opinions, and he notes that an informed person must be able to choose sources that genuinely inform and to avoid those aimed at misleading or misinforming. Being informed is essential to holding people accountable for their actions and pronouncements, and accountability is essential to a functioning democracy in which the people vote on who is best to lead the country. Being informed also allows a citizen to influence the views of others who are less informed, and it remains essential to voting in a democracy.
  2. Get Involved. With respect to getting involved, Haass asserts that a democracy depends on the participation of its citizens and, again, that those citizens must be well informed and capable of exercising their responsibility to act, as well as their responsibility to hold others (particularly elected officials) accountable for their actions. He notes the importance of individual action in shaping how society performs and observes how particular individuals, by simple acts, have created changes in policy.
  3. Stay Open to Compromise. Haass notes that while compromise can be viewed as a sign of weakness, it is essential to the functioning of a democracy, allowing multiple interests to be addressed and recognized. In many ways, compromise was at the heart of drafting the Constitution and of many other achievements of our society, from civil rights to other matters of American governmental and societal interest. Haass comments that while compromise is valuable, it is also important to be able to describe and understand the give-and-take of the process, as well as the ultimate choices made and the matters considered in reaching a resolution. Through compromise, he says, the country can move beyond stalemate to action.
  4. Remain Civil. Haass notes the importance of civility as a means of overcoming cynicism and bridging the disagreements that arise in a democracy by exercising respect for the various sides. In discussions involving multiple sides and issues to be resolved, he notes, civility is sometimes an essential quality in reaching resolution.
  5. Reject Violence. Haass highlights the differences between democratic governments and authoritarian systems. He notes that rejecting violence is, in almost any situation, preferable to using it to achieve any kind of political end; in a democracy, violence is the antithesis of achieving results that reflect the best interests of the groups involved.
  6. Value Norms. Haass notes the importance of observing and respecting the norms, mores, and social conventions that form the fabric of a society: laws cannot be made to cover every situation, nor is a society based purely on laws and their observance to the exclusion of these other elements essential to its viability.
  7. Promote the Common Good. Haass notes that the “common good” is critical to the functioning and existence of a democracy. Citizens must look past their own self-interest to the best collective interests of the country itself, including through civil and political service.
  8. Respect Government Service. While Haass notes that it is very American to be suspicious of government and governmental authority, it is also critical to respect those who provide government service as a social good. Such people often accept less pay and are motivated by a sense of public service, and their work should be respected, not denigrated.
  9. Support the Teaching of Civics. Haass and others have noted the current critical failure in the teaching and knowledge of civics, which has produced a lack of understanding of, and commitment to, the structure and critical elements of our democracy. Interestingly, those seeking to immigrate and become citizens of the United States are required to review materials including civics information pertaining to U.S. history and government as part of the naturalization process. (See Section II.) Recently, the American Bar Association’s Task Force for American Democracy and deans of American law schools have stressed the critical importance, to our society and its founding principles, of improved civics education and of the primacy of the Rule of Law.
  10. Put the Country First. Interestingly, Haass notes that governing oneself, personal character, integrity, and tolerance towards others are essential to the functioning of our democracy. He comments in particular on several situations in which the interests of the country were not put first but were sacrificed to political objectives, and on the damage to the interests and principles of the country as a whole that resulted from such actions.

II. The Citizenship Pledge for Newly Naturalized Citizens

A. Introduction

When persons seek to become U.S. citizens, they must obtain an application from U.S. Citizenship and Immigration Services (USCIS), and they must review materials containing civics information pertaining to U.S. history and government in preparation for an interview with a USCIS officer, who may ask up to ten questions from a list of one hundred questions in the materials. Applicants who answer at least six questions correctly and are approved for citizenship are invited to participate in a naturalization ceremony, where they must raise their right hands and swear an Oath of Allegiance. As a new citizen, each person taking the Oath of Allegiance agrees to take on and perform the duties and responsibilities sworn to in the Oath.

B. The Seven Undertakings of Newly Admitted American Citizens

  1. In order to become a citizen, each person renounces all allegiance, loyalty and fidelity to any foreign ruler, state, or sovereignty of whom or which such person had once been a subject or citizen. In other words, the applicant renounces all loyalty, citizenship, duties, or obligations to any non-U.S. leader, state, or country.
  2. In order to become a citizen, each person promises to bear true faith and allegiance to the Constitution and the laws of the United States, essentially a commitment to the rule of law as an essential element of U.S. citizenship.
  3. In order to become a citizen, each person commits to support and defend the U.S. Constitution and the laws of the United States of America against all enemies, foreign and domestic.
  4. In order to become a citizen, each person promises to bear arms on behalf of the United States when required by law.
  5. In order to become a citizen, each person agrees to perform noncombatant service in the Armed Forces of the United States when required by U.S. law.
  6. In order to become a citizen, each person agrees to perform work of national importance under civilian direction when required by law.
  7. Further, each person taking the Oath swears that they undertake the duties and responsibilities in the Oath “freely, without any mental reservation or purpose of evasion,” concluding with the words “so help me God.”

“Reflections on Citizenship’s Obligations” by John H. Stout, co-chair of the American Bar Association Business Law Section’s Rule of Law Working Group, is part of a series on the rule of law and its importance for business lawyers created by the Rule of Law Working Group. Read more articles in the series.

Not Your Parents’ Consumer Arbitration

Every year, the American Arbitration Association-International Centre for Dispute Resolution (AAA-ICDR) administers thousands of consumer arbitration matters, and, over time, those have grown in variety and sophistication. Disputes involving solar power, sales of electric vehicles, data privacy, emerging technologies like cryptocurrency, and the gig economy now make up a significant portion of filings.

Each area raises new issues, different legal principles may apply, and new types of parties are involved. Arguments over the specific technical aspects of the products have become more common; these disputes can delve into topics ranging from battery performance to SIM swap fraud to the inner workings of web-based advertising. Warranty and terms of use agreements are longer and more sophisticated than ever, and they include arbitration provisions that have been crafted with great care. Historically, consumer cases were considered “simple” matters, but these are not your parents’ consumer cases.

The AAA-ICDR has adapted to these changes by:

  • embracing virtual hearings,
  • addressing emerging technology disputes, and
  • upholding procedural safeguards to help ensure fair and enforceable outcomes.

As the quantity of cases and the stakes in individual cases grow, a fair process has become even more critical.

Consumer claims may arise out of an agreement that applies to thousands or millions of consumers. Although some consumer arbitration clauses contain opt-out provisions, consumers generally do not have much choice about how their dispute will be resolved. When a consumer contract calls for arbitration to be administered by the AAA-ICDR, we require a baseline level of due process protections as set out in the Consumer Due Process Protocol and the Consumer Arbitration Rules. For example, the Protocol and Rules call for:

  • a fundamentally fair process,
  • at a reasonable cost (not more than $225 to the consumer),
  • in a reasonably convenient location,
  • before a neutral arbitrator who can allow discovery necessary for a fundamentally fair process and can award all remedies that could be available in court.

These procedural safeguards cannot be replaced with a simpler process; otherwise, the AAA-ICDR will not administer the case, and any decision of the arbitrator could be subject to vacatur. This is possibly the worst outcome for an arbitration process, because the parties who have spent the time, money, and energy going through the process usually must go through it again to resolve their case. Balancing the considerations of fairness and speed helps ensure that the outcome will withstand further challenges and that parties can move on from the dispute. In 2023, the median time from filing to award for consumer cases administered by the AAA-ICDR was 9.6 months, while the median time from filing to trial in US district courts was 35.6 months.[1]

Filing to Award: Consumer arbitration cases filed with the AAA that proceeded to hearing and award in 2023 did so much more quickly compared to US district courts. Quickest time to award: 3.3 months; median time to award: 9.6 months; median time to trial in US district court: 35.6 months. Source: “2023 Consumer Arbitration Statistics,” AAA-ICDR.

Expert, diverse arbitrators are needed to decide consumer disputes.

It is more important than ever to have arbitrators experienced in consumer law deciding consumer arbitrations. The nuances of consumer claims require arbitrators who are familiar with modern technologies and how people interact with them, as well as the law governing those interactions. Arbitrators who educate themselves about and interact with these technologies are likely to serve the parties and the process better than those who rely on an assistant to “work the computer.” A roster of arbitrators should also reflect the diversity of the population served to help ensure fair outcomes. Thirty-nine percent of those on the AAA-ICDR’s consumer roster are women and/or people of color, with 39 percent of appointments going to panelists from that group.[2]

Transparency about consumer arbitration cases and outcomes is important.

The AAA-ICDR provides information about our cases in several forms. Each quarter, we update our Consumer and Employment Arbitration Statistics Report,[3] which shows case data for all consumer matters closed within the last five years. This report does not show the consumer party’s name but contains information about the opposing party, arbitrator, and case outcome, as well as other important data. We also maintain a quarterly report on arbitrator demographic data.[4] Both of these reports are free and available to the public.

In addition to these reports, the AAA-ICDR anonymizes and provides for publication of consumer awards. These awards are available via legal research sites and contain the identity of the arbitrator as well as the text of the award.

Virtual hearings are here to stay and have clear benefits for consumer disputes.

The COVID-19 pandemic shifted much of our lives online, and consumer arbitration was not exempt. Virtual hearings have become the norm in our cases, even as the world has returned to in-person activities.

The popularity of virtual hearings seems to indicate that most parties have grown comfortable presenting their cases via those platforms and that arbitrators are comfortable hearing cases in that manner. Virtual hearings also improve access to justice, allowing parties to attend from a comfortable and familiar location without paying for travel costs, with electronic management of evidence, and with less disruption to their work and family obligations.

Technology will continue to enhance the dispute resolution process for consumers.

Online Dispute Resolution (ODR) could be the future of resolving consumer disputes. Online Dispute Resolution can be used as a step in a dispute resolution program and incorporates both binding and nonbinding processes. Various successful ODR tools already resolve over sixty million cases per year and could serve as the model for new platforms to be implemented on a broader scale.[5]

Artificial intelligence is already impacting dispute resolution, with new tools and services appearing frequently. Parties can build a clause with the AAA-ICDR’s ClauseBuilderAI, edit drafts and analyze documents with GenAI tools like ChatGPT or Claude, and make use of AI transcription services for their hearings. While some AI tools are priced for mid-sized or large law firms, the capabilities of more widely available platforms continue to increase while the cost remains relatively low for an individual subscription. Free platforms also continue to advance, potentially increasing access to justice.

Consumer arbitration has evolved significantly, reflecting the complexities and technological advancements of modern disputes. The AAA-ICDR’s adaptation to these changes, through the embrace of virtual hearings, addressing of emerging technology disputes, and upholding of procedural safeguards, helps to ensure a fair and efficient process. With the growing importance of an expert and diverse group of arbitrators, transparency in case outcomes, and the integration of technology such as ODR and AI tools, the future of consumer arbitration is poised to offer even greater accessibility and effectiveness. These advancements not only uphold the fundamental principles of fairness and due process but also enhance the efficiency and adaptability of the arbitration system, ultimately benefiting all parties involved.


  1. “2023 Consumer Arbitration Statistics,” AAA-ICDR, accessed September 18, 2024.

  2. Id.

  3. “2024 Q1 Consumer and Employment Arbitration Statistics Report,” AAA-ICDR, accessed September 18, 2024. Note: The link to this source begins a spreadsheet download.

  4. “2024 Q1 Arbitrator Demographic Data,” AAA-ICDR, accessed September 18, 2024.

  5. “About Us,” ODR.com, accessed September 18, 2024.

Telehealth Mergers: Key Regulatory and Compliance Considerations

Introduction

There has been growing interest in the acquisition and sale of telehealth providers. While the COVID-19 pandemic may have laid the foundation, a recent wave of success for online compounding pharmacies producing weight loss drugs has heightened interest in telehealth companies. Such telehealth companies are focusing on “diseases du jour,” some of which have drawn negative attention from lawmakers over potentially misleading advertising and prescribing practices. Given the recent interest by the Criminal Division of the Department of Justice in compliance-related due diligence, private equity and venture capital companies interested in telehealth companies must ensure that they perform adequate due diligence prior to purchasing a company, maintain an adequate and appropriate compliance plan, and, if appropriate, voluntarily self-disclose any misconduct they become aware of to avoid criminal charges.

The potential sale of a telehealth company involves the review of a variety of legal and regulatory considerations relating to privacy, practice of medicine, and marketing. This article briefly discusses each of these in the context of state and Drug Enforcement Administration requirements, Federal Trade Commission expectations, and Food and Drug Administration recommendations. Finally, we review recent telemedicine-related enforcement actions by the Department of Justice.

State Requirements

Telehealth laws can vary from state to state. States may differ on licensing requirements, supervision requirements, distance limitations, the type of technology that must be used, the types of services that are allowed, minimum staffing requirements, recordkeeping requirements, inspections, and more.

It is also important to note that telehealth is a broad category that can involve different types of providers. For example, some companies that provide medications might be using telehealth not only for prescribers to evaluate and prescribe medicines but also for pharmacists to actually dispense the drugs from a variety of locations (telepharmacy). Therefore, review of state licenses for each type of provider and telehealth service provided is key.

Privacy Laws

As one would expect, the storage and sharing of data raise important privacy and security considerations that must be addressed while developing a compliant telehealth program. At the federal level, telehealth companies must comply with the Health Insurance Portability and Accountability Act (HIPAA) and its implementing regulations. There are also a variety of relevant state-level privacy requirements, including some specific to health privacy, such as California’s Confidentiality of Medical Information Act or the Washington My Health My Data Act. It is important to note that while some state privacy laws apply generally to medical information privacy, others target specific disease states, such as Pennsylvania’s Confidentiality of HIV-Related Information Act (also known as Act 148). Act 148 generally prohibits the sharing of an individual’s HIV-related information without written permission.

Practice of Medicine

The practice of medicine is also regulated on a state-by-state basis, and telehealth companies must ensure that they comply with each state’s varying requirements. One particular concern for telehealth companies operating in multiple states is whether their physicians need to be licensed in each of those states. For example, Florida requires out-of-state telehealth providers to register.

Disease-Specific Considerations

While telehealth providers must follow general rules related to telemedicine, certain states have disease-specific considerations, such as additional requirements imposed on providers treating obesity. For example, the Florida Commercial Weight-Loss Practices Act has specific body mass index requirements, additional informed consent rules, and specific follow-up care requirements. Alternatively, Virginia requires a physical examination to prescribe controlled substances for weight reduction or control, which limits the extent and scope of telehealth services.

Federal Requirements

Telehealth has recently been in the news because of companies making unvalidated claims and inappropriate sale of controlled substances. For example, as discussed below, the Drug Enforcement Administration (DEA), along with the Department of Justice, has targeted certain telehealth companies due to their noncompliant sale of controlled substances. To avoid future noncompliance, in 2023 the DEA and the Department of Health and Human Services proposed rules for prescribing controlled medications using telehealth options.

Additional telemedicine flexibilities regarding prescription of controlled medications were put in place during the COVID-19 pandemic but are currently set to expire at the end of 2024. There has been significant discussion about extending many of these flexibilities, and the DEA has been intently listening. For example, it arranged a public listening session on September 12 and 13, 2023, in addition to receiving over 38,000 comments. Despite the uncertainty, telehealth companies have continued with the prescription of controlled substances, which has brought on additional scrutiny from the Department of Justice.

Advertising Requirements

Federal Trade Commission (FTC)

The FTC works to ensure that all marketing materials are truthful and not misleading, a mandate that is broadly interpreted and enforced. In the context of telehealth, this could include the regulation of advertising and marketing; the use of endorsements, influencers, and reviews; online advertising and marketing; and the making of health claims. Companies should also pay attention to guidance specific to particular health claims. In fact, given the number of companies now focusing on weight loss, the FTC has even put out a reference guide on making weight loss claims. It is worth noting that the Commission can penalize a noncompliant company as much as $50,120 per violation.

Food and Drug Administration (FDA)

While the FTC does have jurisdiction over the promotion of pharmaceutical products, the FDA primarily regulates drug manufacturers to ensure that their products are neither adulterated nor misbranded and are safe and efficacious. Accordingly, if promotion is handled appropriately, the FDA should not be involved in claims made by telepharmacy companies. However, there is a growing interest in having the FDA, primarily through the Office of Prescription Drug Promotion, regulate unsupported claims related to prescription drugs.

Telepharmacy and telemedicine companies selling compounded drugs and making drug-like claims for compounded products generally believe that they are safe from FDA scrutiny since the FDA primarily targets manufacturers of pharmaceuticals. However, it is important to note that the FDA has exerted jurisdiction over healthcare providers who are making claims about products for unapproved uses.

Enforcement

Companies and individuals using deceptive marketing in connection with telehealth are subject not only to regulatory oversight but also to civil and criminal penalties.

In July 2022 the Department of Justice (DOJ) announced criminal charges against thirty-six defendants, including a telemedicine company executive and clinical laboratory executives, for more than $1.2 billion in alleged fraudulent schemes involving telemedicine. In one of the cases, an operator of several clinical laboratories “was charged in connection with a scheme to pay over $16 million in kickbacks to marketers who, in turn, paid kickbacks to telemedicine companies and call centers in exchange for doctors’ orders.”

In 2023, Joelson Viveros faced criminal charges for allegedly investing in and assisting with a kickback scheme involving a network of pharmacies that operated a call center. At this call center, telemarketers persuaded Medicare beneficiaries to accept prescriptions for expensive medications that they neither needed nor wanted. Viveros allegedly obtained signed prescriptions by paying kickbacks to two telemedicine companies.

In 2023, executives and owners of DMERx, an internet-based platform for doctors’ orders, were also indicted. The indictments named the CEO of the company that operated the platform before a corporate acquisition, as well as the CEO and vice president of the company that operated it afterward. The defendants were allegedly paid for connecting pharmacies, durable medical equipment suppliers, and marketers to telemedicine companies that “would accept illegal kickbacks and bribes in exchange for orders that were transmitted using the DMERx platform.” Allegedly, the prescriptions were not medically necessary and were based on a brief telephone call with the alleged patient or no interaction at all.

In another case, David Antonio Becerril, a medical doctor, was indicted in connection with a scheme in which he allegedly signed more than 2,800 fraudulent orders for genetic tests and medical equipment for patients he was not treating and had never spoken to. The indictment alleged that telemarketers obtained beneficiary information and prepared fraudulent orders that Becerril signed after an average of less than forty seconds of review, and that few or no orders were denied. In some cases, braces were approved for patients whose relevant limbs had already been amputated.

In May 2024, the DOJ charged a Long Island woman in connection with allegedly selling misbranded and adulterated weight loss drugs, including Ozempic. The defendant allegedly obtained the weight loss drugs from Central and South America and then posted dozens of videos advertising and selling the drugs.

Conclusion

As described above, investors in telehealth companies are exposed to significant risks ranging from lack of appropriate registration of providers, to privacy concerns, to inappropriate promotion. As previously discussed, the DOJ continues to target healthcare fraud and requires compliance oversight prior to and after an M&A transaction. Failure to have adequate compliance programs exposes acquirers to significant liability not only from the FDA, but also the DOJ, FTC, DEA, state regulators, and more.

Acquisition of Clinical Research Sites: Key Considerations

There has been a notable uptick in clinical research sites being bought out by private equity and venture capital companies. This trend signifies the growing recognition of the value these sites hold, not just in terms of their operational capabilities, but also through the critical data they generate. However, for both buyers and sellers, there are significant considerations to keep in mind to navigate these transactions successfully and ethically.

Before purchasing clinical trial sites, a private equity or venture capital company (collectively, “Buyer”) must have a clear thesis as to why such acquisitions make sense. Various reasons have included wanting to:

  1. acquire the data associated with clinical trial participants;
  2. dominate a clinical research market for a specific disease state;
  3. dominate a specific clinical research geographical area; or
  4. vertically integrate into the clinical research space.

Small clinical trial sites are typically structured to ensure that a physician is providing services to the clinical trial site—i.e., the physician serves as a contractor to the clinical trial site and is providing medical services as part of that contract. This, however, can raise significant issues during an acquisition. This article discusses several crucial considerations for Buyers and one option for addressing them.

Privacy Considerations

The Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule requires compliance with “national standards to protect individuals’ medical records and other individually identifiable health information” (“protected health information” or PHI). HIPAA applies to a variety of stakeholders who conduct certain healthcare transactions electronically. State laws also have a meaningful impact on data collection and privacy in this space. California investors alone may need to deal with the California Consumer Privacy Act of 2018, the California Privacy Rights Act of 2020 amending it, and the state’s Confidentiality of Medical Information Act. Without a federal law unifying and simplifying requirements, this hodgepodge of privacy requirements has been, and continues to be, a challenge for Buyers and must be appropriately reviewed to minimize susceptibility to seemingly unlimited fines and penalties.

Compliance with privacy requirements can be pivotal for a deal. Buyers hoping to acquire data associated with clinical trial participants have approached sites to obtain access to subject data through licensing or a sale, thereby enabling data brokers to maximize their data troves. However, the lack of appropriate, preexisting consent has often stymied such goals, given the inability to contact past clinical trial subjects at scale.

Regulatory Due Diligence

The quality of clinical research and the integrity of the data produced hinge on robust quality programs and oversight mechanisms. For Buyers, assessing the effectiveness of these programs at the target site is crucial. This assessment includes evaluating the site’s adherence to Good Clinical Practice guidelines and whether a competent quality assurance team is present. An effective quality assurance program often includes defined goals, a clear list of standard operating procedures that are routinely updated and on which staff are routinely trained, and audits.

These audits should be conducted by both internal stakeholders and external consultants to minimize bias. Externally, clinical trial sponsors and clinical research organizations routinely conduct such audits, which are intended both to improve the functioning of an individual site and to ensure that if the US Food and Drug Administration (FDA) ever audits the site, the records are appropriately maintained. However, the FDA may nevertheless find problems at the clinical trial site. Such findings, even if addressed, have proven devastating for multiple clinical trial sites. It is therefore important for a potential investor to identify potential audit findings and evaluate their implications for valuation.

Preventing the Corporate Practice of Medicine

One of the foremost considerations in the context of clinical trial site acquisition is the consideration and prevention of the corporate practice of medicine. This doctrine, which varies by state, generally prohibits corporations or non-physicians from practicing medicine or employing physicians to provide professional medical services. It is intended to ensure that medical decisions are made by qualified medical professionals rather than corporate entities driven by profit motives, or individuals who may not adequately appreciate the medical decision-making process.

Some argue that research is exempt from corporate practice of medicine rules. Nevertheless, this conclusion is generally deemed to be premature and may need to be evaluated on a case-by-case basis. By way of example, while the definition of the practice of medicine varies from state to state, the implementing regulations of the Texas Medical Practice Act specifically define “Actively engaged in the practice of medicine” as including “clinical medical research” and “the practice of clinical investigative medicine.”

Some states, such as Michigan, will not allow physicians to be employed by non-physicians and only allow physicians to form professional corporations, professional associations, or professional limited liability companies that are owned exclusively by physicians. Accordingly, in such states, a clinical trial site engaged in the practice of medicine cannot be owned by a Buyer who is not a physician. On the other hand, several states, including Arizona, allow non-physicians to own a portion of a professional corporation that practices medicine, but they limit this to a 49 percent interest or other noncontrolling interest. In certain states, this also means that only the physician’s office can bill for medical services. In other states, however, no such requirements are imposed on clinical trial sites. This variability can have a dramatic impact on the value of a clinical trial site and on the appropriate structure of the relationship between a physician and a clinical trial site. It is therefore important to conduct a state-by-state analysis of the definition of the practice of medicine, its application, and its implications for the corporate practice of medicine as applied to a target research site.

Ownership Considerations

When a Buyer purchases a clinical trial site, they hope not only to own the site but also to prevent the physician performing the research from starting a competing clinical trial site next door.

As discussed above, depending on the state, Buyers may not be able to actually own the doctor’s office or the research site that performs medical services—since that could violate state law.

The Federal Trade Commission (FTC) has sought to ban noncompetes, and a Texas district court recently struck down the FTC’s new final rule banning them. However, that ruling conflicts with a decision from a Pennsylvania district court, which determined that the plaintiff failed to show it would be irreparably harmed by the noncompete rule and that the FTC had the authority to issue it. Nevertheless, some states, like Pennsylvania, have independently banned noncompetes for healthcare practitioners. Buyers therefore have limited ways to prevent physician flight or to prevent physicians from starting a competitor next door.

When the Buyer can neither purchase the physician’s office due to state law, nor prevent the physician from starting a competitor next door, it can be unclear what the Buyer is actually buying.

A Structural Solution

There is, however, a simple, time-tested way to address many of the privacy, regulatory, and corporate practice of medicine problems described above: creating a management service organization (MSO) to handle the nonmedical aspects of the clinical research site. Such a structure enables physicians to maintain control over medical decisions at their medical office, while the MSO can be owned by the Buyer and will provide the physician’s office with services related to clinical research. Such services may include regulatory assistance, sales and marketing, training, and more.

In such a situation, PHI provided to a doctor’s office is subject to HIPAA. However, a HIPAA waiver would be obtained from the patient to enable sharing information with the MSO. This would have the further advantage of allowing the same PHI to be shared with pharmaceutical companies or medical device companies (collectively, “Sponsors”), which are also not “covered entities” as defined by HIPAA and are therefore not subject to HIPAA regulations in the context of research. This is especially important since most Sponsors refuse to sign a HIPAA “Business Associate Agreement.” The signing of the HIPAA waiver reduces the risk of privacy-related liability.

In the event a Buyer has a holding company holding multiple MSOs, this MSO structure minimizes the impact on the holding company and related companies. For example, if a single MSO is affected by a regulatory concern related to the FDA or to privacy, a Buyer may choose to disband or disavow that individual MSO and continue to operate its remaining MSOs without all of them being tainted by the regulatory finding.

Conclusion

For clinical research sites, partnering with Buyers can provide much-needed resources and support, but it also requires careful planning and due diligence to ensure that the partnership is aligned with the mission and values of both sides. Investors and sites preparing for sale and purchase must understand the nuances of complying with corporate practice of medicine doctrines, ensuring proper patient consent for data use, and evaluating the strength and quality of programs to ensure a smooth acquisition process.

Common Issues That Arise in AI Sanction Jurisprudence and How the Federal Judiciary Has Responded to Prevent Them

In response to the misuse of generative artificial intelligence (“GAI”) in court filings, courts nationwide have promulgated standing orders and local rules on how parties should use GAI in the courtroom. This article will summarize those local rules and standing orders and identify common issues in cases where attorneys’ misuse of GAI resulted in potential sanctions. Of the approaches courts have taken thus far, the local rule set forth by the United States District Court for the Eastern District of Texas presents one notable model for courts considering promulgating a rule on the use of GAI, because it provides guidance on the use of GAI in court filings while remaining able to adapt to GAI’s rapid advancements.

An Overview of GAI

In a nutshell, GAI refers to machine learning algorithms that are “trained on data to recognize patterns and generate new content based on the ‘rules and patterns’ they have learned.”[1] There are many different GAI programs serving many different purposes. For example, ChatGPT is a GAI tool that can generate pages of material and has infamously been responsible for generating court filings that included fake cases. By contrast, Grammarly and Microsoft Copilot are GAI tools that help with clarity of writing. Moreover, Westlaw and LexisNexis have developed GAI tools to help with case research, which could streamline attorney work product and save money for law firms and clients.

Legal Standard

Current case law surrounding GAI has invoked Rule 11 and Rule 8 of the Federal Rules of Civil Procedure. Rule 11 provides that any document filed with the court must be signed by at least one attorney of record who certifies that “after an inquiry reasonable under the circumstances . . . the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law.”[2] Rule 11 also requires certification that any document filed with the court does not “needlessly increase the cost of litigation . . . [and] the factual contentions have evidentiary support or . . . will likely have evidentiary support after a reasonable opportunity for further investigation or discovery.”[3] Rule 8 provides that “a pleading that states a claim for relief must contain . . . a short and plain statement of the claim showing that the pleader is entitled to relief.”[4] Although Rule 26 has not been implicated yet, discovery requests, responses, and objections could be drafted using GAI. Similar to Rule 11, Rule 26(g) requires that at least one attorney of record sign discovery requests and responses and certify that after a reasonable inquiry all filings are warranted by existing law, are nonfrivolous, and do not needlessly increase the cost of litigation.

A violation of Rule 8 can lead to dismissal of the complaint, while violations of Rule 11 and Rule 26 can result in a range of sanctions. If a court decides to issue sanctions sua sponte, it should do so only “upon a finding of subjective bad faith.”[5] When parties sign and file their affirmations and make no inquiries as to the accuracy of their assertions, it supports a finding of subjective bad faith.[6] When parties use GAI to file documents that include fake cases, that inherently supports a finding of subjective bad faith because it demonstrates a lack of inquiry, which is sufficient to support sanctions sua sponte. Therefore, courts already possess the power to sanction parties that misuse GAI and do not need to promulgate additional filing requirements.

Local Rules and Standing Orders Relating to the Use of GAI

Courts across the country have varied on how to address the use of GAI in court filings. Court rules on the topic have ranged from guidance implementing no additional requirements to a complete prohibition on GAI. However, most courts have promulgated a rule on GAI that requires some form of disclosure and certification when a party uses GAI.

A. Disclosure and Certification When GAI Is Used to Draft Filings

Courts that require disclosure when GAI is used to draft portions of a filing vary in their requirements. Some courts only require a verification that the contents of the filing are accurate, while others require a separate certification in addition to the filing. For example, in 2023 the United States Bankruptcy Court for the Western District of Oklahoma promulgated a general order requiring that any document drafted by GAI be accompanied by a certification that

(1) identif[ies] the program used and the specific portions of text for which [GAI] was utilized; (2) certif[ies] the document was checked for accuracy using print reporters, traditional legal databases, or other reliable means; and (3) certif[ies] the use of such program has not resulted in the disclosure of any confidential information to any unauthorized party.[7]

B. Disclosure and Certification When GAI Is Used to Prepare a Filing

Some courts require disclosure and certification when parties use GAI in any capacity to prepare filings with the court. However, these courts do not distinguish between GAI that can generate work products and other forms of GAI that can help clarify writing or facilitate legal research. For example, Judge Palk of the United States District Court for the Western District of Oklahoma created a standing order that is representative of this issue and requires parties that used GAI to draft or prepare a court filing to disclose “that [G]AI was used and the specific [G]AI tool that was used. The unrepresented party or attorney must further certify in the document that the person has checked the accuracy of any portion of the document drafted by [G]AI, including all citations and legal authority.”[8] This suggests that to comply with the standing order, parties must disclose and certify every filing where they used legal search engines that incorporate GAI to help streamline search results or proofreading software such as Grammarly or Microsoft Word.

Some courts, mainly in Texas, take this a step further to require a certification regarding GAI regardless of whether it was used; they require that parties certify either that they did not use GAI to draft or prepare a filing or, if they did, that the parties will check “any language drafted by [GAI] . . . for accuracy, using print reporters or traditional legal databases, by a human being.”[9] Overly broad disclosure and certification requirements can be cumbersome and difficult to enforce, and they may create confusion among individuals trying to file.

C. Prohibitions on the Use of GAI

A minority of courts prohibit parties from using GAI to draft documents that are filed with the court.[10] Some judges prohibit the use of GAI in any capacity. Although these rules typically create a carve-out allowing parties to use search engines that incorporate GAI, they do not create the same carve-out for proofreading software that utilizes GAI for clarity of writing.[11] For example, Judge Newman of the United States District Court for the Southern District of Ohio stipulates that “[n]o attorney for a party, or a pro se party, may use Artificial Intelligence (‘AI’) in the preparation of any filing submitted to the Court.” This magnifies the problem, discussed above, of not distinguishing between different forms of GAI: as more proofreading software incorporates GAI to assist with clarity of writing, this standing order will become increasingly arduous to comply with. Further, it would be impossible to consistently determine whether a party has used GAI to assist with clarity of writing, which makes such a standing order so far-reaching as to be effectively unenforceable. As a result, these courts will likely have to change their local rules and standing orders in the near future.

D. Rules That Provide Guidance and Do Not Impose Additional Requirements

A handful of courts have addressed the use of GAI through guidance rather than imposing an additional filing requirement. For example, the United States District Court for the Eastern District of Texas promulgated a local rule stating that if a party uses GAI to prepare or draft a court filing, Federal Rule of Civil Procedure 11 still applies. The local rule also reminds parties who use GAI to review the generated content for accuracy in order to avoid sanctions.[12] This approach can achieve a court’s goal of addressing the use of GAI while remaining adaptable to the inevitable widespread adoption of GAI.

Common Issues That Arise in GAI Sanctions Jurisprudence

The main issue encountered by courts that have sanctioned litigants for misuse of GAI is the “hallucination” of cases when parties use GAI to generate work product. The United States District Court for the Southern District of New York addressed this issue in the infamous case Mata v. Avianca, in which an attorney used ChatGPT to draft an Affirmation in Opposition that cited mostly fake cases.[13] Since then, citing fake cases has been the main reason parties have been sanctioned for using GAI.[14] In Kruse v. Karlen, in addition to hallucinating cases, the GAI also provided erroneous information about state statutes.[15]

Courts have also dismissed pleadings generated with GAI because they violated Federal Rule of Civil Procedure 8(a). In Whaley v. Experian Information Solutions, Inc., a pro se litigant filed a 144-page complaint alleging a violation of the Fair Credit Reporting Act and used GAI to generate a portion of it.[16] The complaint was verbose and confusing, and it lacked accurate citations. Therefore, the court dismissed the complaint without prejudice because it violated Rule 8(a).[17]

The United States Bankruptcy Court for the Southern District of New York has also addressed the use of GAI in an expert witness report. In In re Celsius Network LLC, an expert witness generated a 172-page report using GAI in seventy-two hours. He admitted that a “comprehensive human-authored report would have taken over 1,000 hours to complete.”[18] The report “contained numerous errors, ranging from duplicated paragraphs to mistakes in its description of the trading window selected for evaluation . . . [and] contain[ed] almost no citations to facts or data underlying the majority of the methods, facts, and opinions set forth therein.”[19] As a result, Judge Glenn excluded the report from the record.[20]

Although Rule 26 has not been at issue in cases noted thus far, GAI could easily be used in discovery requests, responses, and objections. Some courts have anticipated this possibility in their standing orders and stated that Rule 26 sanctions apply in addition to Rule 11 sanctions.

Conclusion

Although courts should rightfully be concerned about the widespread use of GAI, they already have the tools to address any issue that may arise without promulgating an additional rule. If parties use fictitious sources, they inherently violate the certification requirement under Rule 11 and Rule 26. The Fifth Circuit acknowledged this on June 11, 2024, and decided not to promulgate a rule on GAI because, as Law360 summarized it, “court rules already require attorneys to check filings for accuracy, and using AI doesn’t excuse lawyers from ‘sanctionable offenses.’”[21] Imposing additional certification requirements or prohibitions is likely unnecessary and could burden parties and courts. Nevertheless, considering the changing landscape of GAI, a local rule similar to the one promulgated by the United States District Court for the Eastern District of Texas may be useful to inform litigants that the use of GAI is permitted and to serve as a reminder to check all sources for accuracy or else be subject to Rule 11 and Rule 26 sanctions.


  1. Bernard Marr, What Is Generative AI: A Super-Simple Explanation Anyone Can Understand, Forbes (Sept. 26, 2023, 6:01 p.m.).

  2. Fed. R. Civ. P. 11(a), (b).

  3. Fed. R. Civ. P. 11(b)(1), (3).

  4. Fed. R. Civ. P. 8(a)(2).

  5. Mata v. Avianca, 678 F. Supp. 3d 443, 462 (S.D.N.Y. 2023) (quoting Muhammad v. Walmart Stores E., L.P., 732 F.3d 104, 108 (2d Cir. 2013)).

  6. Avianca, 678 F. Supp. 3d at 464.

  7. Order re: Pleadings Using Generative A.I., General Order 23-01, Bankr. W.D. Okla. (2023). See also General Order on the Use of Unverified Sources, General Order 23-1, D. Haw. (2023) (requiring parties that used GAI to generate any filing with the court to disclose that they relied on an unverified source and confirm the language generated was not fictitious); Pleadings Using Generative Artificial Intelligence, General Order 2023-03, Bankr. N.D. Tex. (2023) (requiring parties to check for accuracy any portion of a document drafted by GAI through “print reporters, traditional legal databases, or other reliable means”); Blumenfeld Jr., J., Standing Order for Civil Cases, C.D. Cal. (last updated Mar. 1, 2024) (requiring a party that uses GAI to generate a portion of a filing to attach a separate document disclosing the use and certifying the accuracy of its content; Magistrate Judge Oliver of the same district also adopted this standing order); Vaden, J., [Standing] Order on Artificial Intelligence, Ct. Int’l Trade (2023) (requires that any submission that contains text drafted with GAI assistance be accompanied by (1) disclosure of what program was used and portions of the text that were so drafted and (2) a certification that the use of the program did not result in a breach of confidentiality to a third party).

  8. Palk, J., Disclosure and Certification Requirements – Generative Artificial Intelligence [Standing Order], W.D. Okla. (last visited Aug. 8, 2024). Judge Robertson of the United States District Court for the Eastern District of Oklahoma has also adopted this standing order. See also Cole, Mag. J., The Use of “Artificial Intelligence” in the Preparation of Documents Filed before This Court [Standing Order], N.D. Ill. (last visited Aug. 8, 2024) (requiring parties to disclose if GAI was used in any way, including legal research, during the preparation of the filing); Baylson, J., Standing Order re: Artificial Intelligence (“AI”) in Cases Assigned to Judge Baylson, E.D. Pa. (2023) (requiring parties to disclose if GAI was used in the preparation of the filing as well as a certification that each citation is accurate; Judge Pratter of the same district also adopted this standing order).

  9. Starr, J., Mandatory Certification Regarding Generative Artificial Intelligence [Standing Order], N.D. Tex. (last visited Aug. 8, 2024). Judge Kacsmaryk of the same district, Judge Olvera of the United States District Court for the Southern District of Texas, and Judge Crews of the United States District Court for the District of Colorado have also adopted versions of this standing order.

  10. Coleman, J., Memorandum of Law Requirements [Standing Order], N.D. Ill. (last visited Aug. 8, 2024).

  11. See Boyko, J., Court’s Standing Order on the Use of Generative AI, N.D. Ohio (last visited Aug. 8, 2024). See also Newman, J., Artificial Intelligence (“AI”) Provision, Standing Order Governing Civil Cases and Standing Order Governing Criminal Cases, S.D. Ohio (2023).

  12. E.D. Tex. Local Rules CV-11(g), AT-3(m) (2023). See also Subramanian, J., Individual Practices in Civil Cases [Standing Order], S.D.N.Y. (2023); Johnston, J., Artificial Intelligence (AI) [Standing Order], N.D. Ill. (last visited Aug. 8, 2024).

  13. Avianca, 678 F. Supp. 3d at 450.

  14. See Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024) (an attorney cited nonexistent cases, and the judge referred her to the court’s Grievance Panel). See also United States v. Cohen, No. 18-CR-602 (JMF), 2024 WL 1193604, at *2 (S.D.N.Y. Mar. 20, 2024) (Michael Cohen’s lawyer cited three nonexistent cases generated by Google Bard); Ex parte Lee, 673 S.W.3d 755, 756 (Tex. App. 2023) (an attorney cited five sources in an appeal from an order of judgment; three were nonexistent, and the two published cases did not correspond to the reporter the cases were cited with); Will of Samuel, 206 N.Y.S.3d 888, 891, 896 (N.Y. Sur. 2024) (although counsel did not admit to using GAI, the court suspected use of GAI because five out of the six cases he cited were fake and ordered a hearing to determine the issue).

  15. Kruse v. Karlen, ED 111172, 2024 WL 559497, at *3 (Mo. Ct. App. Feb. 13, 2024).

  16. Whaley v. Experian Info. Sols., Inc., No. 3:22-cv-356, 2023 WL 7926455, at *2 (S.D. Ohio Nov. 16, 2023).

  17. Id.

  18. In re Celsius Network LLC, 655 B.R. 301, 308 (Bankr. S.D.N.Y. 2023).

  19. Id. at 308.

  20. Id. at 309.

  21. Sarah Martinson, 5th Circ. Won’t Adopt Rule on AI-Drafted Docs, Law360 (Jun. 11, 2024).

Bank Partnerships in an Evolving World

Financial institutions have utilized service providers such as third-party vendors and nonbank entities that partner with banks for a multitude of purposes over many years. The use of service providers has not historically been a controversial issue, and financial institutions have always had an obligation to manage relationships in a manner that is consistent with safety and soundness standards. Given this background, what should we do differently when evaluating so-called bank partnership programs that have received more scrutiny, particularly in the FinTech context? The answer: closely monitor state legislation, given how rapidly evolving state law has created a patchwork of legal and regulatory issues for these programs, similar to but more complicated than prior waves of legislation regulating mortgage brokers, loan servicers, and debt collectors.

In June 2023, the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC) issued guidance on managing risks associated with third-party relationships (Guidance). This Guidance replaces and rescinds prior guidance and frequently asked questions that date back to 2008. The Guidance acknowledges the long-standing use of service providers—“[b]anking organizations routinely rely on third parties for a range of products, services, and other activities”—and the benefit of such relationships: “The use of third parties can offer banking organizations significant benefits, such as quicker and more efficient access to technologies, human capital, delivery channels, products, services, and markets.” However, it notes that the use of a third party does not diminish or negate the financial institution’s responsibility to ensure its activities are run in a safe and sound manner and comply with applicable laws and regulations. In other words, a financial institution cannot avoid liability by delegating certain responsibilities to its service providers.

The Guidance emphasizes the need for an appropriate risk assessment of service provider relationships, as well as tailoring the compliance management system and oversight to be commensurate with the risk presented by the service provider. For financial institutions that wish to partner with a nonfinancial institution in a “bank partner” model, this Guidance provides a good framework on how to develop policies and procedures to ensure safe and sound banking practices.

At a glance, this should be the end of the story—create solid risk management practices and appropriately manage your relationships. However, state licensing regimes and the interplay of federal and state law create complex issues, particularly when analyzing a consumer lending bank partner program. Both financial institutions and their partners that are not financial institutions must be cognizant of the rapidly changing landscape on the state level. States have threatened, and currently are attempting, to opt out of the Depository Institutions Deregulation and Monetary Control Act (DIDMCA). The purpose of DIDMCA was to place national and state banks on a level playing field. Other state legislation has created “predominant economic interest” and other so-called “true lender” tests to determine whether the financial institution is in fact the lender of record, or whether the loans should be treated as if the nondepository partner were the lender.

As a result, while the general premise of a bank partnership is old news, the current wave of legislation brings both an old concept (state licensing and supervision) and a new concept (substantively regulating the terms of credit extended by financial institutions through legislation purportedly applicable only to the nondepository entity) to regulating such partnerships. The complexity and sheer volume of state laws aimed at exercising authority over financial services products being provided by financial institutions mean that both financial institutions and their partners must be diligent when crafting their relationship and monitoring ongoing legislative changes. Care should be taken up front in developing the program, assigning responsibilities, developing comprehensive compliance management systems, and ensuring ongoing diligence.

The Price of Emotion: Privacy, Manipulation, and Bias in Emotional AI

Imagine shopping for Christmas gifts online without knowing that AI is tracking your facial expressions and eye movements in real time and guiding you towards more expensive items by prioritizing the display of similar high-priced items. Now picture a job candidate whose quiet demeanor is misinterpreted by an AI recruiter, resulting in the denial of his dream job. Emotional AI, a subset of AI that “measures, understands, simulates, and reacts to human emotions,”[1] is rapidly spreading. Used by at least 25 percent of Fortune 500 companies as of 2019,[2] with the market size projected to reach $13.8 billion by 2032,[3] this technology is turning our emotions into data points.

This article examines the data privacy, manipulation, and bias risks of Emotional AI, analyzes relevant United States (“US”) and European Union (“EU”) legal frameworks, and proposes compliance strategies for companies.

Emotional AI, if not operated and supervised properly, can cause severe harm to individuals and subject companies to substantial legal risks. It collects and processes highly sensitive personal data related to an individual’s intimate emotions and has the potential to manipulate and influence consumer decision-making processes. Additionally, Emotional AI may introduce or perpetuate bias. Consequently, the misuse of Emotional AI may result in violations of applicable EU or US laws, exposing companies to potential government fines, investigations, and class action lawsuits.

1. Emotional AI Defined

Emotional AI techniques can include analyzing vocal intonations to recognize stress or anger and processing facial images to capture subtle micro-expressions.[4] As this technology develops, it has the potential to revolutionize how we interact with technology by introducing more relatable and emotionally responsive ways of doing so.[5] Already, Emotional AI personalizes experiences across different industries. Call center agents tune into customer emotions, instructors personalize learning, healthcare chatbots offer support, and ads are edited for emotional impact. AI in trucking detects drowsiness to improve driver safety, while games use it to tailor the experience to the player.[6]

2. Data Privacy Concerns

Emotional AI relies on vast amounts of personal data to infer emotions (the output data), raising privacy concerns. It may use the following categories of input data (a hypothetical data-model sketch follows the list):

  1. Textual data: social media posts and emojis.
  2. Visual data: images and videos, including facial expressions, body language, and eye movements.
  3. Audio data: voice recordings, including tone, pitch, and pace.
  4. Physiological data: biometric data (e.g., heart rate) and brain activity via wearables.
  5. Behavioral data: gestures and body movements.[7]
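
To make those categories concrete, below is a minimal, purely hypothetical sketch of how the input and output data of an Emotional AI system might be modeled. The class and field names are illustrative assumptions, not drawn from any particular product, statute, or regulation.

    # Hypothetical data model for Emotional AI data categories (Python).
    # All names are illustrative assumptions for discussion only.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class EmotionalInputData:
        subject_id: str                          # ties the record to an individual, making it Personal Data
        collected_at: datetime
        text: Optional[str] = None               # 1. textual data: posts, emojis
        facial_video_ref: Optional[str] = None   # 2. visual data: facial expressions, eye movements
        voice_audio_ref: Optional[str] = None    # 3. audio data: tone, pitch, pace
        heart_rate_bpm: Optional[float] = None   # 4. physiological data from wearables
        gestures: List[str] = field(default_factory=list)  # 5. behavioral data

    @dataclass
    class EmotionalOutputData:
        subject_id: str
        inferred_emotion: str                    # the output data, e.g., "frustrated" or "calm"
        confidence: float                        # model confidence in the inference

Modeled this way, both the raw inputs and the inferred emotion carry the same individual identifier, which is why both sides of the pipeline can constitute Personal Data.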

Because emotions are among the most intimate aspects of a person’s life, people are naturally more concerned about the privacy of data revealing their emotions than about the privacy of other kinds of personal data. Imagine a loan officer using AI-based emotional analysis to collect and analyze loan applicants’ gestures and voices at interviews. Applicants may be concerned about how their data will be used, how they can control such uses, and the potential consequences of a data breach.

A. Legal Framework

The input and output data of Emotional AI (“Emotional Data”), if they directly identify, relate to, or are reasonably linkable to an individual, fall under the broad definition of “Personal Data” and are thus protected under various US state data privacy laws and the European Union’s General Data Protection Regulation (“GDPR”),[8] which serves as the baseline for data privacy laws in EU countries.[9] For example, gestures and body movements, voice recordings, and physiological responses—all of which can be processed by Emotional AI—can be directly linked to specific individuals and therefore constitute Personal Data. Comprehensive data privacy laws in many jurisdictions require the disclosure of data collection, processing, sharing, and storage practices to consumers.[10] They grant consumers the rights to access, correct, and delete Personal Data; require security measures to protect Personal Data from unauthorized access, use, and disclosure; and stipulate that data controllers may only collect and process Personal Data for specified and legitimate purposes.[11] Additionally, some laws require minimizing the Personal Data used, limiting the duration of data storage, and collecting no more Personal Data than is necessary to achieve the stated purposes of processing.[12]

Furthermore, if the Personal Data have the potential to reveal certain characteristics such as race or ethnicity, political opinions, religious or philosophical beliefs, genetic data, biometric data (for identification purposes), health data, or sex life and sexual orientation, they will be considered sensitive Personal Data (“SPD”). For instance, Emotional AI systems that analyze voice tone, word choice, or physiological signals to infer emotional states could potentially reveal information about an individual’s political opinions, mental health status, or religious beliefs—all of which is SPD—for example, by analyzing a person’s speech patterns and stress levels during discussions of certain topics. Both the GDPR and several US state privacy laws provide strong protections for SPD. The GDPR requires organizations to obtain a data subject’s explicit consent to process SPD, with certain exceptions.[13] It also mandates a data protection impact assessment when automated decision-making with profiling significantly impacts individuals or involves processing large amounts of sensitive data.[14] Similarly, several US state laws require a controller to perform a data protection assessment[15] and obtain valid opt-in consent.[16] California grants consumers the right to limit the use and disclosure of their SPD to what is necessary to deliver the services or goods.[17] The processing of SPD may also be subject to other laws, such as laws on genetic data,[18] biometric data,[19] and personal health data.[20] Depending on the context in which Emotional AI is utilized, certain sector-specific privacy laws may apply, such as the Gramm-Leach-Bliley Act (“GLBA”) for financial information, the Health Insurance Portability and Accountability Act (“HIPAA”) for health information, and the Children’s Online Privacy Protection Act (“COPPA”) for children’s information.

Emotional AI relies heavily on biometric data, such as facial expressions, voice tones, and heart rate. One of the most comprehensive and most litigated biometric privacy laws is Illinois’s Biometric Information Privacy Act (“BIPA”). Under the BIPA, “Biometric information” includes any information based on biometric identifiers that identify a specific person.[21] “Biometric identifiers” include “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.”[22] The BIPA imposes the following key requirements on private entities that collect, use, and store Illinois residents’ biometric identifiers and information:

  1. Develop and make accessible to the public a written policy that outlines the schedules for retaining biometric data and procedures for its permanent destruction.
  2. Safeguard biometric data with a level of care that meets industry standards or is equivalent to the protection afforded to other sensitive data.
  3. Inform individuals about the specific purposes for which their biometric data is being collected, stored, or used, and the duration for which it will be retained.
  4. Secure informed written consent from individuals before collecting or disclosing biometric data.

The adoption of biometric privacy laws is a growing trend across the country. Several states and cities, including Texas, Washington, New York City, and Portland, have also passed biometric privacy laws.

Current data privacy laws help address the data privacy concerns related to Emotional AI. However, Emotional AI presents unique challenges in complying with data minimization requirements. AI systems often rely on collecting and analyzing extensive datasets to draw accurate conclusions. For example, Emotional AI might use heart rate to assess emotions. However, a person’s heart rate can be influenced by factors beyond emotions, such as room temperature or physical exertion.[23] Data minimization mandates collecting only relevant physiological data, but AI systems might need to capture a wide range of data to account for potential external influences and improve the accuracy of emotional state inferences. This creates a situation in which data beyond the core emotional-state indicators is collected, and what data is truly necessary may be contentious.

In addition, Emotional AI development may encounter difficulties in defining the intended purposes for data processing due to the inherently unpredictable nature of algorithmic learning and subsequent data utilization. In other words, the AI might discover unforeseen connections within a dataset, potentially leading to its use for purposes that were not defined and conveyed to consumers. For example, a customer service application could use Emotional AI to analyze customer voices during calls to identify frustrated or angry customers for priority handling. Over time, the AI could identify a correlation between specific speech patterns and a higher likelihood of customers canceling the service, a purpose not included in the privacy policy.

B. Legal Strategies

To effectively comply with the complex array of data privacy laws and overcome the unique challenges presented by Emotional AI, organizations developing and using Emotional AI should consider adopting the following key strategies:

  1. Develop a comprehensive privacy notice that clearly outlines the types of Emotional Data collected, the purposes for processing that data, how the data will be processed, and the duration for which the data will be stored.
  2. To address data minimization concerns, plan in advance the scope of Emotional Data necessary for and relevant to developing a successful Emotional AI, adopt anonymization or aggregation techniques whenever possible to remove personal data components, and enforce appropriate data retention policies and schedules (see the sketch after this list).
  3. To tackle the issue of purpose specification, regularly review data practices to assess whether Emotional Data in AI is used for the same or compatible purposes as stated in relevant privacy notices. If the new processing is incompatible with the original purpose, update the privacy notices to reflect the new processing purpose, and de-identify the Emotional Data, obtain new consent, or identify another legal basis for the processing.
  4. If the Emotional Data collected can be considered sensitive Personal Data, implement an opt-in consent mechanism and conduct a privacy risk assessment.
  5. Implement robust data security measures to protect Emotional Data from unauthorized access, use, disclosure, or alteration.
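
As a minimal illustration of strategies 2 and 5, the hypothetical sketch below aggregates emotion inferences so that individual identifiers are dropped, and purges records older than a stated retention period. The function names, record fields, and thirty-day retention window are assumptions made for illustration, not requirements drawn from any statute or regulation.

    # Hypothetical sketch of data minimization and retention enforcement for Emotional Data.
    # Function names, record fields, and the retention window are illustrative assumptions.
    from collections import Counter
    from datetime import datetime, timedelta

    RETENTION_DAYS = 30  # assumed window; set this from your documented retention schedule

    def aggregate_emotions(records):
        """Reduce per-person emotion inferences to anonymous aggregate counts."""
        return Counter(r["inferred_emotion"] for r in records)  # no subject identifiers retained

    def purge_expired(records, now=None):
        """Drop records held longer than the documented retention period."""
        now = now or datetime.utcnow()
        cutoff = now - timedelta(days=RETENTION_DAYS)
        return [r for r in records if r["collected_at"] >= cutoff]

    # Toy usage:
    records = [
        {"subject_id": "a1", "inferred_emotion": "frustrated", "collected_at": datetime(2024, 8, 1)},
        {"subject_id": "b2", "inferred_emotion": "calm", "collected_at": datetime(2024, 9, 1)},
    ]
    records = purge_expired(records, now=datetime(2024, 9, 15))
    print(aggregate_emotions(records))  # Counter({'calm': 1}); the stale record was purged

In practice, whether aggregated outputs are truly anonymous depends on the granularity of the aggregation and the size of the population, a judgment best made with counsel.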

3. Risks of Emotion Manipulation

Emotional AI carries significant risks of being used for manipulation. In three experiments, AI has been shown to learn from participants’ responses, identify vulnerabilities in their decision-making, and guide them toward desired actions.[24] Imagine an online social media platform using Emotional AI to detect and strengthen gamblers’ addictions to promote ads for its casino clients.

A. Legal Framework

I. EU Law

The EU recently enacted the Artificial Intelligence Act (the “EU AI Act”), which addresses Emotional AI abuse by prohibiting two key categories of AI systems:[25]

  1. AI systems that use subliminal methods or manipulative tactics to significantly alter behavior, hindering informed choices and causing or likely causing significant harm.
  2. Emotion recognition AI in educational and workplace settings except for healthcare or safety needs.

If an Emotional AI system is not prohibited under the EU AI Act, such as when it does not cause significant harm, it is deemed a “high-risk AI system,” subjecting its providers and deployers to various requirements, including:

  1. Providers must ensure transparency for deployers by providing clear information about the AI system, including its capabilities, limitations, and intended use cases. They must also implement data governance, promptly address any violation of the EU AI Act and notify relevant parties, implement risk and quality management systems, perform conformity assessments to demonstrate that the AI system meets the requirements of the EU AI Act, and establish human oversight mechanisms.
  2. Deployers must inform consumers of significant decisions, conduct impact assessments, report incidents, ensure human oversight, maintain data quality, and monitor systems.[26]

II. US Law

There is no specific US law that addresses Emotional AI. However, section 5 of the Federal Trade Commission (“FTC”) Act prohibits unfair or deceptive acts or practices.[27] FTC attorney Michael Atleson stated in a 2023 consumer alert that the agency is targeting deceptive practices in AI tools, particularly chatbots designed to manipulate users’ beliefs and emotions.[28] Within the FTC’s focus on AI tools, one concern is the possibility of companies’ exploiting “automation bias,” where people tend to trust AI outputs perceived as neutral or impartial. Another area of concern is anthropomorphism, where individuals may find themselves trusting chatbots more when such bots are designed to use personal pronouns and emojis or otherwise provide more of a semblance of a human person. The FTC is particularly vigilant about AI steering people unfairly or deceptively into harmful decisions in critical areas such as finance, health, education, housing, and employment. It assesses whether AI-driven practices might mislead consumers into actions contrary to their intended goals and thus constitute deceptive or unfair behavior under the FTC Act. Importantly, these practices can be deemed unlawful even if not all consumers are harmed or if the affected group does not fall under protected classes in antidiscrimination laws. Companies must ensure transparency about the use of AI for targeted ads or commercial purposes and inform users if they are interacting with a machine or whether commercial interests are influencing AI responses. The FTC warns against cutting AI ethics staff and emphasizes the importance of risk assessment, staff training, and ongoing monitoring.[29]

B. Legal Strategies

To avoid regulatory scrutiny and potential claims of emotional manipulation, companies developing or deploying Emotional AI should consider adopting the following strategies:

  1. Ensure transparency by clearly informing users when they are interacting with an Emotional AI and explaining in a privacy policy how the AI analyzes user data to infer emotion and how output data is used, including any potential commercial influences on AI responses.
  2. Refrain from using subliminal messaging or manipulative tactics to influence user behavior. Conduct ongoing monitoring and periodic risk assessments to identify and address emotional manipulation risks.
  3. If operating in the EU, evaluate the Emotional AI’s potential for causing significant harm and determine if it falls under the “prohibited” or “high-risk” category. For high-risk AI systems, comply with the applicable obligations under the EU AI Act.
  4. Train staff on best practices for developing and deploying Emotional AI.

4. Risks of AI Bias

Emotional AI may have biased results, particularly if the training data lacks diversity. For instance, a system trained on images of people of only one ethnicity may not recognize facial expressions of another ethnicity, and cultural differences in gestures and vocal expressions may be misinterpreted by an AI system without diverse training data.[30] As an example of the potential impact of such bias, an Emotional AI trained only on mental health patients from one ethnic group may misinterpret the emotions of patients from other groups and thereby overlook important symptoms, resulting in misdiagnosis.

A. Legal Framework

I. EU Law

The EU AI Act addresses bias by imposing stringent requirements on high-risk AI providers and deployers, with a particular emphasis on the provider’s obligation to implement data governance to detect and reduce biases in datasets.[31] The GDPR provides an additional layer of protection against AI bias. Under the GDPR, decision-making based solely on automated processing (including profiling), such as AI, is prohibited unless necessary for a contract, authorized by law, or done with explicit consent.[32] Data subjects affected by such decisions have the right to receive clear communication regarding the decision, seek human intervention, express their viewpoint, comprehend the rationale behind the decision, and contest it if necessary.[33] Data controllers are required to adopt measures to ensure fairness, such as using statistical or mathematical methods that avoid discrimination during profiling, implementing technical and organizational measures to correct inaccuracies in personal data and minimize errors, and employing methods to prevent discrimination based on SPD.[34] Automated decision-making and profiling based on SPD are only permissible if the data controller has a legal basis to do so under the GDPR.[35]

II. US Law

There is no specific federal law addressing AI bias in the US. However, existing antidiscrimination laws apply to AI. Notably, the FTC has taken action related to AI bias under the unfairness prong of Section 5 of the FTC Act. In December 2023, the FTC settled a lawsuit with Rite Aid over the alleged discriminatory use of facial recognition technology, setting a new standard for algorithmic fairness programs. This standard includes consumer notification and contesting options, as well as rigorous bias testing and risk assessment protocols for algorithms.[36] This case also establishes a precedent for other regulators with fairness authority, such as insurance commissioners, state attorneys general, and the Consumer Financial Protection Bureau, to use such authority for enforcement against AI bias.

On the state level, in May 2024, Colorado enacted the Artificial Intelligence Act, the first comprehensive state law targeting AI discrimination, which applies to developers and deployers of high-risk AI systems doing business in Colorado.[37] This may extend to out-of-state businesses serving consumers in Colorado.[38] Emotional AI that significantly influences decisions with material effects in areas such as employment, finance, healthcare, and insurance is considered high-risk AI under the Act. Developers of such systems are required to provide a statement on the system’s uses; summaries of training data; information on the system’s purpose, benefits, and limitations; documentation describing evaluation, data governance, and risk mitigation measures, as well as intended outputs; and usage guidelines.[39] Developers must also publicly disclose types of high-risk AI systems they have developed or modified and risk management approaches, and they must report potential discrimination issues to the attorney general and deployers within ninety days.[40] Deployers must inform consumers of significant decisions, summarize deployed systems and discrimination risk management on their websites, explain negative decisions with correction or appeal options, conduct impact assessments, report instances of discrimination to authorities, and develop a risk management program based on established frameworks.[41]

In addition, most state data privacy laws stipulate that a data controller shall not process personal data in violation of state or federal laws that prohibit unlawful discrimination against consumers.[42] The use of Emotional AI in the employment context also subjects companies to various federal and state laws.[43]

B. Legal Strategies

To comply with antidiscrimination laws and address bias risks of Emotional AI, companies developing or deploying Emotional AI should consider adopting the following strategies:

  1. Establish a robust data governance program to ensure diversity and quality of training data for Emotional AI systems, including regularly monitoring and auditing the training data.
  2. Develop a risk management program based on established risk frameworks, such as the AI Risk Management Framework released by the National Institute of Standards and Technology.[44]
  3. Conduct routine AI risk assessments and bias testing to identify and mitigate potential biases in Emotional AI systems, particularly those used in high-risk areas such as employment, finance, healthcare, and insurance (see the sketch after this list).
  4. Publicly disclose details about Emotional AI systems on the company website, including data practices, types of systems developed or deployed, and risk management approaches.
  5. Inform consumers of significant decisions made by Emotional AI systems. Establish mechanisms to allow consumers to contest decisions and appeal unfavorable outcomes, notify consumers of their rights, and provide clear explanations for decisions made by Emotional AI systems.
  6. In employment contexts, comply with federal and state laws, Equal Employment Opportunity Commission guidance, and Colorado’s and the EU’s AI Acts.[45]
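
As a minimal illustration of the bias testing described in item 3, the hypothetical sketch below compares an Emotional AI model's accuracy across demographic groups and flags large gaps. The group labels, the tolerance threshold, and the toy data are assumptions; a real bias audit would use established fairness metrics, statistically meaningful samples, and expert review.

    # Hypothetical bias-testing sketch: compare emotion-recognition accuracy across groups.
    # Group labels, threshold, and sample data are illustrative assumptions only.
    from collections import defaultdict

    def accuracy_by_group(samples):
        """samples: iterable of (group, predicted_emotion, true_emotion) tuples."""
        correct, total = defaultdict(int), defaultdict(int)
        for group, predicted, actual in samples:
            total[group] += 1
            correct[group] += int(predicted == actual)
        return {g: correct[g] / total[g] for g in total}

    def flag_disparities(acc, max_gap=0.10):
        """Return group pairs whose accuracy differs by more than the assumed tolerance."""
        groups = sorted(acc)
        return [(a, b, round(acc[a] - acc[b], 2))
                for i, a in enumerate(groups) for b in groups[i + 1:]
                if abs(acc[a] - acc[b]) > max_gap]

    # Toy labeled evaluation data: (group, model prediction, ground truth)
    samples = [
        ("group_a", "happy", "happy"), ("group_a", "sad", "sad"), ("group_a", "angry", "angry"),
        ("group_b", "happy", "sad"), ("group_b", "sad", "sad"), ("group_b", "angry", "happy"),
    ]
    acc = accuracy_by_group(samples)
    print(acc)                    # group_a: 1.0, group_b: about 0.33
    print(flag_disparities(acc))  # [('group_a', 'group_b', 0.67)] -> investigate data and model

A gap like the one in this toy example would prompt a closer look at the diversity of the training data and at the documentation and governance measures discussed above.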

5. Conclusion

The rapid growth of Emotional AI presents a complex challenge to legislators. The EU’s strict regulations on AI and data privacy more effectively safeguard consumers’ interests. However, will this approach hinder AI innovation? Conversely, the reliance of the United States on a patchwork of state and sector laws, along with federal government agencies’ guidance and enforcement, creates more room for AI development. Will this strategy leave consumer protections weak and impose burdensome compliance requirements? Should the United States consider federal legislation that balances innovation with consumer protections? This is an important conversation. In the meantime, companies must continue to pay close attention to Emotional AI’s legal risks across a varied legal landscape.


  1. Meredith Somers, “Emotion AI, Explained,” MIT Sloan School of Management, March 8, 2019.

  2. Id.

  3. Cision, “Emotion AI Market Size to Grow USD 13.8 Billion by 2032 at a CAGR of 22.7% | Valuates Reports,” news release, Yahoo! Finance, May 15, 2024.

  4. Somers, “Emotion AI, Explained.”

  5. Noa Yitzhak, “The Future of Emotional AI: Trends to Watch,” Emotion Logic, May 5, 2024.

  6. Neil Sahota, “Emotional AI: Cracking the Code of Human Emotions,” NeilSahota.com, September 28, 2023.

  7. “What Is Emotional AI?,” Emotional AI Lab, accessed August 27, 2024.

  8. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) 1.

  9. Currently, twenty US states have passed data privacy laws: California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Texas, Florida, Montana, Oregon, Delaware, New Hampshire, New Jersey, Kentucky, Nebraska, Maryland, Minnesota, and Rhode Island.

  10. See Cal. Civ. Code §§ 1798.100 to 1798.199.100; Va. Code Ann. §§ 59.1-575 to 59.1-584; Colo. Rev. Stat. §§ 6-1-1301 to 6-1-1313; Utah Code Ann. §§ 13-61-101 to 13-61-404.

  11. Id.

  12. Id.

  13. GDPR Article 9.

  14. GDPR Article 35(3).

  15. See Colo. Rev. Stat. § 6-1-1309(2)(c); Conn. Gen. Stat. § 42-522(2)(a)(4); Del. Code Ann. tit. 6, § 12D-108(a)(4); Ind. Code § 24-15-6-1(b)(4); Or. Rev. Stat. § 646A.586; Mont. Code § 30-14-2814; Tenn. Code Ann. § 47-18-3206(a)(4); Tex. Bus. & Com. Code § 541.105(a)(4); Va. Code Ann. § 59.1-580(A)(4) (each requiring controllers to perform data protection assessments when processing sensitive data); see also 4 Colo. Code Regs. § 904-3-8 (providing additional requirements for conducting assessments under Colorado law).

  16. See Colo. Rev. Stat. § 6-1-1308(7); Conn. Gen. Stat. § 42-520(a)(4); Del. Code Ann. tit. 6, § 12D-106(a)(4); Ind. Code § 24-15-4-1(5); Or. Rev. Stat. § 646A.578; Mont. Code § 30-14-2812; Tenn. Code Ann. § 47-18-3204(a)(6); Tex. Bus. & Com. Code § 541.101(b)(4); Va. Code Ann. § 59.1-578(A)(5) (each requiring opt-in consent).

  17. Cal. Civ. Code § 1798.121(a).

  18. See, e.g., Cal. Civil Code §§ 56.18–56.186; Ariz. Rev. Stat. § 20-448.02; Genetic Information Nondiscrimination Act of 2008, 42 U.S.C. § 2000ff.

  19. See 740 Ill. Comp. Stat. §§ 14/1–99; Wash. Rev. Code Ann. §§ 19.375.010–.900; Tex. Bus. & Com. Code § 503.001; N.Y.C. Admin. Code §§ 22-1201 to 22-1205.

  20. See Wash. Rev. Code §§ 19.373.010–.900; Nevada S.B. 370 (2023) (codified as amended at Nev. Rev. Stat. §§ 598.0977, 603A.338, 603A.400–.550).

  21. 740 Ill. Comp. Stat. Ann. 14/10.

  22. Id.

  23. American Heart Association editorial staff, “All About Heart Rate,” American Heart Association, May 13, 2024.

  24. Georgios Petropoulos, “The Dark Side of Artificial Intelligence: Manipulation of Human Behaviour,” Bruegel, February 2, 2022.

  25. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).

  26. Lena Kempe, “Colorado and EU AI Laws Raise Several Risks for Tech Businesses,” Bloomberg Law, May 30, 2024.

  27. 15 U.S.C. § 45.

  28. Michael Atleson, “The Luring Test: AI and the Engineering of Consumer Trust,” Federal Trade Commission, May 1, 2023.

  29. Id.

  30. Somers, “Emotion AI, Explained.”

  31. Kempe, “Colorado and EU AI Laws.”

  32. GDPR, Recital 71.

  33. Id.

  34. Id.

  35. Id.

  36. Alvaro M. Bedoya, “Statement of Commissioner Alvaro M. Bedoya on FTC v. Rite Aid Corporation & Rite Aid Headquarters Corporation, Commission File No. 202-3190,” Federal Trade Commission, December 19, 2023.

  37. Kempe, “Colorado and EU AI Laws.”

  38. Id.

  39. Id.

  40. Id.

  41. Id. See Cal. Civ. Code §§ 1798.100 to 1798.199.100; Va. Code Ann. §§ 59.1-575 to 59.1-584; Colo. Rev. Stat. §§ 6-1-1301 to 6-1-1313; Utah Code Ann. §§ 13-61-101 to 13-61-404; Tex. Bus. & Com. Code §§ 541.001 to 541.205; Or. Rev. Stat. §§ 646A.570 to 646A.589.

  42. See Va. Code Ann. § 59.1-578; Colo. Rev. Stat. § 6-1-1308; Conn. Gen. Stat. § 42-520; Ind. Code § 24-15-4-1.

  43. See Lena Kempe, “Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies,” Business Law Today, April 10, 2024.

  44. “AI Risk Management Framework: Generative Artificial Intelligence Profile,” National Institute of Standards and Technology, June 26, 2024.

  45. See Kempe, “Navigating the AI Employment Bias Maze.”

AI and Attorney-Client Privilege: A Brave New World for Lawyers

This article is related to a Showcase CLE program that took place at the American Bar Association Business Law Section’s 2024 Fall Meeting. All Showcase CLE programs were recorded live and will be available for on-demand credit, free for Business Law Section members.


The rapid advancement of artificial intelligence (“AI”) technologies, particularly generative AI (“GenAI”), presents both opportunities and challenges for the legal profession. While AI offers significant benefits to legal practice, it does not diminish the core ethical obligations of lawyers. In fact, it heightens the need for accountability, critical thinking, and professional judgment. The legal profession stands at a critical juncture, tasked with harnessing the power of AI while steadfastly maintaining the ethical standards that underpin the administration of justice. By embracing a thoughtful, accountable approach to AI integration, lawyers can enhance their practices while continuing to fulfill their paramount duty to clients and the legal system.

This panel will present a perspective that is neither pessimistic nor optimistic; our goal is not to declare that the glass is either half-empty or half-full. Instead, we will present practical guidance regarding what you need to know to effectively and ethically integrate AI into the practice of law.

The fulcrum for our discussion will be the American Bar Association’s recent Formal Opinion 512, issued on July 29, 2024 (“Opinion 512”). Opinion 512 is practically grounded in the present capabilities of GenAI. To that end, it focuses on three core issues:

  1. lawyers remain fully accountable for all work product, regardless of how it is generated;
  2. the existing rules of professional conduct are sufficient to govern AI use in legal practice; and
  3. AI is here and here to stay—it is not going away.

We will also explore formal guidance provided by several other bars—including California, Florida, Kentucky, New York, New Jersey, Pennsylvania, and the District of Columbia—and the varying opinions and points of focus presented by each.

Our presentation will delve into specific ABA Model Rules of Professional Conduct and their implications for AI use, generally following the order in which they are discussed in Opinion 512.

  • Rule 1.1 (Competence) requires lawyers to maintain technological competence. This necessitates a “trust but verify” approach to GenAI outputs that never compromises accountability. Competency with GenAI also means that lawyers need to understand its capabilities and limitations, not in some abstract technical way, but in ways sufficient to comprehend how it could impact their duties as lawyers. To that end, we will discuss how GenAI is not actually intelligent, but instead is simply “applied statistics”; how to leverage the power that this miracle of math provides; and, perhaps most importantly, how to avoid being deceived by AI creators into thinking that an AI tool is somehow a thinking, feeling person just like you.
  • Rule 1.6 (Confidentiality) mandates vigilance in protecting client information when using AI tools. Lawyers using GenAI need to understand whether the GenAI systems that they are using are “self-learning” and will thus send information—including confidential client information—as feedback to the system’s main database. Because the vast majority of such systems are self-learning, a healthy skepticism about disclosing any client information to GenAI is critical.
  • Rule 1.4 (Communication) may require client consultation about AI use in their matters, particularly when confidentiality concerns arise.
  • Rules 3.1, 3.3(a)(1), and 8.4(c) (Meritorious Claims, Candor to the Tribunal, and Misconduct) prohibit the use of AI-generated false or frivolous claims. This once again implicates our first core issue: As the lawyer, you are the one who is accountable, and “I trusted the AI (but forgot to verify)” is not going to be acceptable.
  • Rules 5.1 and 5.3 (Supervision of Lawyers and Nonlawyers) may one day raise complex questions of how human-level AI must be properly supervised. But for now, the New York Bar Association’s guidance provides the best set of guidelines (leveraging ABA Resolution 122 from 2019) to avoid letting a GenAI tool supplant the lawyer as the final decision-maker.
  • Rule 1.5 (Fees) presents challenges in balancing efficiency gains from AI with ethical billing practices.
  • Rule 5.5 (Unauthorized Practice of Law) necessitates vigilance to ensure AI tools do not cross into providing legal advice or exercising legal judgment without appropriate lawyer oversight.

Finally, we will look to the future, beyond the present-focused Opinion 512. As AI capabilities expand, we must all remain vigilant as lawyers in upholding our ethical duties, which are fundamentally rooted in human knowledge, judgment, and accountability. For until AI can credibly match such human qualities, it cannot—and should not—claim such ethical responsibilities as, inter alia, attorney-client privilege.

How Will the Recent Amendments to Illinois’s BIPA Affect the Use of Biometric Data?

The Illinois Biometric Information Privacy Act (“BIPA”) became effective in 2008. Alleged violations of BIPA have resulted in numerous lawsuits and in substantial damages liability for defendant businesses.[1] On May 16, 2024, the Illinois State Legislature passed Senate Bill 2979 (SB 2979) to amend BIPA, and sent the bill to Illinois Governor J.B. Pritzker. On August 2, 2024, the governor signed the legislation into law, effective immediately. The amendments limit BIPA damages and provide for electronic consent. Key changes include:

  • A private entity that collects or discloses a person’s biometric data without consent can only be found liable for one BIPA violation per person regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient. New 740 ILCS 14/20(b) and (c) modify the 2008 740 ILCS 14/20 text[2] “A prevailing party may recover for each violation . . . ,” which was interpreted by the courts as a “per scan” damages calculation.
  • Written consent for collection of biometric information under BIPA now includes electronic signatures. 740 ILCS 14/10 (Definitions) as amended adds a new definition, “electronic signature,” and includes it as part of the definition of “written release.”

These BIPA amendments underscore the need for businesses to review their contracts with vendors providing biometric devices. In particular, businesses should consider requiring in these contracts, among other things, detailed functional specifications, as well as vendor warranties and indemnifications, concerning the biometric device’s ability to capture, record, and preserve electronic signatures of users whose biometric data is captured by the devices, consistent with the written consent provisions of BIPA as amended.
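
For illustration only, the sketch below shows the kind of consent record a biometric device or vendor system might capture and preserve to evidence an electronically signed written release under the amended definitions. The field names and values are hypothetical assumptions, not statutory requirements, and any actual specification should be developed with counsel.

    # Hypothetical consent record a biometric system might preserve to evidence an
    # electronically signed written release. Field names are illustrative assumptions.
    from dataclasses import dataclass, asdict
    from datetime import datetime
    import json

    @dataclass
    class BiometricConsentRecord:
        subject_name: str
        purpose_disclosed: str           # specific purpose of collection disclosed to the individual
        retention_disclosed: str         # retention period and destruction schedule disclosed
        electronic_signature_ref: str    # reference to the captured electronic signature
        signed_at: datetime

    record = BiometricConsentRecord(
        subject_name="Jane Doe",
        purpose_disclosed="Timekeeping via fingerprint scan",
        retention_disclosed="Destroyed within 3 years of the employee's last interaction",
        electronic_signature_ref="sig-ref-0001",
        signed_at=datetime(2024, 9, 1, 9, 30),
    )
    print(json.dumps(asdict(record), default=str, indent=2))  # retain alongside the written policy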

It is important to note that these BIPA amendments do not eliminate all liabilities for violations under BIPA. Hypothetically, a business with a large number of employees or customers could still potentially be liable for substantial damages. For example, if a business were found to have intentionally or recklessly violated BIPA and were subject to liquidated damages of $5,000 or actual damages (whichever is greater), and it collected biometric data from 1,000 employees or customers, then damages could be $5,000,000 (= $5,000 x 1,000) plus reasonable attorneys’ fees and costs. Of course, this is hypothetical and would be subject to the facts and the applicable law, but you can do the math and see that even with these BIPA amendments, BIPA violations can result in substantial damages.

In Cothron v. White Castle System, Inc.,[3] the Supreme Court of Illinois, citing to one of its earlier decisions,[4] recognized the potential for significant damages awards under BIPA:

This court explained that the legislature intended to subject private entities who fail to follow the statute’s requirements to substantial potential liability. The purpose in doing so was to give private entities “the strongest possible incentive to conform to the law and prevent problems before they occur.” As the Seventh Circuit noted,[5] private entities would have “little incentive to course correct and comply if subsequent violations carry no legal consequences.”[6]

The Supreme Court noted in Cothron: “It also appears that the General Assembly chose to make damages discretionary rather than mandatory under the Act.”[7] However, the Supreme Court held “that the plain language of section 15(b) and 15(d) shows that a claim accrues under the Act with every scan or transmission of biometric identifiers or biometric information without prior informed consent.”[8]

In a separate opinion upon denial of rehearing in Cothron, Justice David K. Overstreet,[9] dissenting, stated:

Although the majority recognized that it “appear[ed]” that these awards would be discretionary, such that lower courts may award damages lower than the astronomical amounts permitted by its construction of the Act, the court did not provide lower courts with any standards to apply in making this determination. This court should clarify, under both Illinois and federal constitutional principles, that statutory damages awards must be no larger than necessary to serve the Act’s remedial purposes and should explain how lower courts should make that determination. Without any guidance regarding the standard for setting damages, defendants, in class actions especially, remain unable to assess their realistic potential exposure.[10]

In the Cothron decision, the Court found that the BIPA statutory language clearly supported plaintiff’s position.[11] Still, the Court stated:

Ultimately, however, we continue to believe that policy-based concerns about potentially excessive damage awards under the Act are best addressed by the legislature. See McDonald[12] . . . (observing that violations of the Act have the potential for “substantial consequences” and large damage awards but concluding that “whether a different balance should be struck *** is a question more appropriately addressed to the legislature”). We respectfully suggest that the legislature review these policy concerns and make clear its intent regarding the assessment of damages under the Act.[13] (emphasis added)

SB 2979 was the result of the Illinois legislature considering the Court’s invitation to amend BIPA.

The bottom line is that the courts and the legislature will continue to have to address the tension between the 2008 Illinois legislative findings[14] underlying BIPA and potentially excessive BIPA damages awards. This analysis should consider evolving artificial intelligence (“AI”) software, which has the potential to provide humanity with many benefits but also poses risks, as well as AI’s use of biometric data (and ability to copy that biometric data). Hypothetically, consider AI software provided with an individual’s compromised biometric data obtained in a cybersecurity event coupled with a BIPA violation: the individual could potentially suffer financial damages (e.g., where the biometric data allows unauthorized access to an individual’s financial accounts) or health damages (e.g., where the biometric data allows unauthorized access to an individual’s medical records and where the unauthorized access allows for changing the individual’s medical history concerning allergies or medications, which, in an emergency, could be life threatening). The full ramifications of biometric technology and AI are not fully known. Legislators and the courts will need to consider the opportunities and risks these, and other, technologies present to society, and strive to achieve a judicial and legislative balance that will maximize the beneficial opportunities of these technologies and contain, mitigate, or remove the risks.


This article was updated on September 4, 2024, after its original publication on June 17, 2024.


  1. Many BIPA defendants paid these damages pursuant to a settlement agreement.

  2. SB 2979 relabeled 740 ILCS 14/20 to make the original text subpart (a) and add new subparts (b) and (c).

  3. Cothron v. White Castle System, Inc., 2023 IL 128004, 216 N.E.3d 918 (Ill. 2023), reh’g denied (July 18, 2023).

  4. Rosenbach v. Six Flags Entm’t Corp., 2019 IL 123186, ¶¶ 36–37, 129 N.E.3d 1197 (Ill. 2019).

  5. Cothron v. White Castle System, Inc., 20 F.4th 1156, 1165 (7th Cir. 2021).

  6. Cothron, 216 N.E.3d at 928–929.

  7. Cothron, 216 N.E.3d at 929 (citations omitted). 740 ILCS 14/20 as adopted in 2008 actually concludes with text supportive of the discretion afforded courts regarding damages: “A prevailing party may recover for each violation: . . . (4) other relief, including an injunction, as the State or federal court may deem appropriate” (emphasis added).

  8. Cothron, 216 N.E.3d at 929.

  9. Justice Overstreet’s dissent upon denial of rehearing was joined by Chief Justice Mary Jane Theis and Justice Lisa Holder White.

  10. Cothron, 216 N.E.3d at 940, reh’g denied (July 18, 2023) (Overstreet, J., dissenting).

  11. Cothron, 216 N.E.3d at 928.

  12. McDonald v. Symphony Bronzeville Park, LLC, 2022 IL 126511, ¶¶ 48–49, 193 N.E.3d 1253.

  13. Cothron, 216 N.E.3d at 929.

  14. 740 ILCS 14/5 (Legislative findings; intent) includes, without limitation: “(c) Biometrics are unlike other unique identifiers that are used to access finances or other sensitive information. For example, social security numbers, when compromised, can be changed. Biometrics, however, are biologically unique to the individual; therefore, once compromised, the individual has no recourse, is at heightened risk for identity theft, and is likely to withdraw from biometric-facilitated transactions. . . . (f) The full ramifications of biometric technology are not fully known.”

© 2024 Alan S. Wernick & Aronberg Goldgehn.

Supreme Court Business Review: Significant Business Cases in the October 2022 and 2023 Terms

This article is related to a Showcase CLE program that took place at the American Bar Association Business Law Section’s 2024 Fall Meeting. All Showcase CLE programs were recorded live and will be available for on-demand credit, free for Business Law Section members.


Much of the Supreme Court’s docket affects businesses in some respect, but some cases address business issues directly. During the past two Supreme Court terms, there have been several cases that dealt directly with business issues or that will have a heavy impact on businesses.

Some of the cases dealt with events of national significance. The chapter 11 proceeding of Purdue Pharma was perhaps the largest. In that case, Harrington v. Purdue Pharma, the Court was called on to decide whether a proposed chapter 11 plan that resolved the bankruptcy could be confirmed if it required nondebtor claimants to release nondebtors who were financing the plan. The Court said no: nondebtors can’t be forced against their will to release other nondebtors. In a separate case, Truck Insurance v. Kaiser Gypsum, the Court gave broad standing to those with an interest in a plan to appear and object. One of the functions of bankruptcy court is to provide a forum where those affected by a party’s insolvency can be heard, and this decision buttresses that function.

As intellectual property continues its important role in the American economy, the Court continues to decide a steady stream of IP cases. Andy Warhol Foundation v. Goldsmith grappled with the scope of “fair use” of copyrighted works and held that Andy Warhol’s use of the plaintiff’s photograph of Prince was not a fair use. It remains to be seen how much of fair use survives beyond truly transformative noncommercial uses. In Warner Chappell Music v. Nealy, the Court permitted copyright plaintiffs with timely claims to recover damages for infringements that occurred more than three years before suit. Jack Daniel’s Properties v. VIP Products held that a parody is not immune from claims for trademark infringement or dilution. That case involved a dog toy designed to look like a Jack Daniel’s bottle, complete with humorous text. But the parodic humor did not insulate the product from claims under the Lanham Act. And Vidal v. Elster held that the Patent and Trademark Office did not violate the First Amendment by rejecting registration of “Trump Too Small” as a trademark; the Lanham Act’s bar on registering the name of a living person as a trademark without consent was not unconstitutional. The plaintiff still had the right to use “Trump Too Small” as a slogan, but he couldn’t register it.

Securities law issues also were addressed. Slack Technologies v. Pirani held that, in a direct listing, only holders of securities sold under a registration statement could assert claims under § 11 of the Securities Act of 1933. In a separate case, Macquarie Infrastructure v. MOAB Partners, the Court held that securities fraud claims under § 10(b) of the Securities Exchange Act of 1934 and associated Rule 10b-5 cannot be premised on pure omissions. Instead, some statement had to be misleading for a plaintiff to be able to sue.

Employment issues also featured on the Court’s docket. Groff v. DeJoy clarified that an employer can defeat a religious discrimination claim under Title VII by showing that a “reasonable accommodation” would impose a substantial cost; a mere de minimis cost is not enough. On the other hand, a Title VII plaintiff challenging a transfer need show only some harm even if not “significant,” under Muldrow v. City of St. Louis. And a plaintiff who seeks whistleblower protection under the Sarbanes-Oxley Act need prove only that his or her protected activity was a contributing factor to the adverse job action, with no need to prove retaliatory intent, per Murray v. UBS Securities.

Another perennial business topic for the Court is arbitration. Smith v. Spizzirri held that when a court holds a dispute is arbitrable, the case is not dismissed but stayed. Coinbase v. Bielski held that when a court holds a dispute is not arbitrable, the case does not proceed to discovery while an appeal is pending. Instead, the case in the lower court is stayed pending decision of the appeal. Coinbase v. Suski is an object lesson for drafters of contracts. When there is more than one arguably governing dispute resolution provision—one calling for arbitration and another for litigation—it is for a court rather than an arbitrator to decide which one governs, because the issue is whether there was an agreement to arbitrate at all.

The Commerce Clause came into play in interesting ways. National Pork Producers v. Ross held that California did not violate the dormant Commerce Clause by requiring that any pork sold in California be raised under specified humane conditions, even though almost all pork is raised outside California. Mallory v. Norfolk Southern upheld against a due process challenge a Pennsylvania statute under which a corporation that registers to do business in the state must consent to personal jurisdiction in the state for all purposes (but whether this passes Commerce Clause muster was left for another day).

Property rights also made an appearance. In Sheetz v. El Dorado County, the Court held that the Takings Clause can be violated by legislatively imposed fees and conditions that are not linked to the impact or conditions of a particular project. As a result, the owner of a newly built prefabricated home could challenge, as a Fifth Amendment taking, the county’s imposition of a legislatively prescribed traffic impact fee as a condition of his building permit.

Numerous other cases, especially those concerning administrative law and Title VI, are likely to have a substantial impact on business as well. The long-term impact of the Court’s recent decisions will become apparent in the marketplace and in follow-up litigation in the Court in coming years.