The Price of Emotion: Privacy, Manipulation, and Bias in Emotional AI

By: Lena Kempe

Imagine shopping for Christmas gifts online without knowing that AI is tracking your facial expressions and eye movements in real time and steering you toward more expensive items by prioritizing the display of similar high-priced products. Now picture a job candidate whose quiet demeanor is misinterpreted by an AI recruiter, costing him his dream job. Emotional AI, a subset of AI that “measures, understands, simulates, and reacts to human emotions,”[1] is rapidly spreading. Used by at least 25 percent of Fortune 500 companies as of 2019,[2] with the market size projected to reach $13.8 billion by 2032,[3] this technology is turning our emotions into data points.

This article examines the data privacy, manipulation, and bias risks of Emotional AI, analyzes relevant United States (“US”) and European Union (“EU”) legal frameworks, and proposes compliance strategies for companies.

Emotional AI, if not operated and supervised properly, can cause severe harm to individuals and subject companies to substantial legal risks. It collects and processes highly sensitive personal data related to an individual’s intimate emotions and has the potential to manipulate and influence consumer decision-making processes. Additionally, Emotional AI may introduce or perpetuate bias. Consequently, the misuse of Emotional AI may result in violations of applicable EU or US laws, exposing companies to potential government fines, investigations, and class action lawsuits.

1. Emotional AI Defined

Emotional AI techniques can include analyzing vocal intonations to recognize stress or anger and processing facial images to capture subtle micro-expressions.[4] As the technology develops, it has the potential to revolutionize how we interact with technology by making those interactions more relatable and emotionally responsive.[5] Already, Emotional AI personalizes experiences across industries: call center agents use it to tune into customer emotions, instructors use it to tailor learning, healthcare chatbots offer support, and ads are edited for emotional impact. In trucking, AI detects drowsiness to keep drivers safe, while in games, it personalizes play.[6]

2. Data Privacy Concerns

Emotional AI relies on vast amounts of personal data to infer emotions (output data), raising privacy concerns. It may use the following input data:

  1. Textual data: social media posts and emojis.
  2. Visual data: images and videos, including facial expressions, body language, and eye movements.
  3. Audio data: voice recordings, including tone, pitch, and pace.
  4. Physiological data: biometric data (e.g., heart rate) and brain activity via wearables.
  5. Behavioral data: gestures and body movements.[7]

Because emotions are among the most intimate aspects of a person’s life, people are naturally more worried about the privacy of data revealing their emotions than about other kinds of personal data. Imagine a loan officer using AI-based emotional analysis to collect and analyze loan applicants’ gestures and voices at interviews. Applicants may be concerned about how their data will be used, how they can control such uses, and the potential consequences of a data breach.

A. Legal Framework

The input and output data of Emotional AI (“Emotional Data”), if they directly identify, relate to, or can reasonably be linked to an individual, fall under the broad definition of “Personal Data” and are thus protected under various US state data privacy laws and the European Union’s General Data Protection Regulation (“GDPR”),[8] which serves as the baseline for data privacy laws in EU countries.[9] For example, gestures and body movements, voice recordings, and physiological responses—all of which can be processed by Emotional AI—can be directly linked to specific individuals and therefore constitute Personal Data. Comprehensive data privacy laws in many jurisdictions require the disclosure of data collection, processing, sharing, and storage practices to consumers.[10] They grant consumers the rights to access, correct, and delete Personal Data; require security measures to protect Personal Data from unauthorized access, use, and disclosure; and stipulate that data controllers may only collect and process Personal Data for specified and legitimate purposes.[11] Additionally, some laws require minimizing the Personal Data used, limiting the duration of data storage, and collecting no more Personal Data than is necessary to achieve the stated purposes of processing.[12]

Furthermore, if the Personal Data have the potential to reveal certain characteristics such as race or ethnicity, political opinions, religious or philosophical beliefs, genetic data, biometric data (for identification purposes), health data, or sex life and sexual orientation, they will be considered sensitive Personal Data (“SPD”). For instance, Emotional AI systems that analyze voice tone, word choice, or physiological signals to infer emotional states could potentially reveal SPD such as an individual’s political opinions, mental health status, or religious beliefs, for example by analyzing a person’s speech patterns and stress levels during discussions of certain topics. Both the GDPR and several US state privacy laws provide strong protections for SPD. The GDPR requires organizations to obtain a data subject’s explicit consent to process SPD, with certain exceptions.[13] It also mandates a data protection impact assessment when automated decision-making with profiling significantly impacts individuals or involves processing large amounts of sensitive data.[14] Similarly, several US state laws require a controller to perform a data protection assessment[15] and obtain valid opt-in consent.[16] California grants consumers the right to limit the use and disclosure of their SPD to what is necessary to deliver the services or goods.[17] The processing of SPD may also be subject to other laws, such as laws on genetic data,[18] biometric data,[19] and personal health data.[20] Depending on the context in which Emotional AI is used, certain sector-specific privacy laws may apply, such as the Gramm-Leach-Bliley Act (“GLBA”) for financial information, the Health Insurance Portability and Accountability Act (“HIPAA”) for health information, and the Children’s Online Privacy Protection Act (“COPPA”) for children’s information.

Emotional AI relies heavily on biometric data, such as facial expressions, voice tones, and heart rate. One of the most comprehensive and most litigated biometric privacy laws is Illinois’s Biometric Information Privacy Act (“BIPA”). Under the BIPA, “Biometric information” includes any information based on biometric identifiers that identify a specific person.[21] “Biometric identifiers” include “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.”[22] The BIPA imposes the following key requirements on private entities that collect, use, and store Illinois residents’ biometric identifiers and information:

  1. Develop and make accessible to the public a written policy that outlines the schedules for retaining biometric data and procedures for its permanent destruction.
  2. Safeguard biometric data with a level of care that meets industry standards or is equivalent to the protection afforded to other sensitive data.
  3. Inform individuals about the specific purposes for which their biometric data is being collected, stored, or used, and the duration for which it will be retained.
  4. Secure informed written consent from individuals before collecting or disclosing biometric data.

The adoption of biometric privacy laws is a growing trend across the country. Several states and cities, including Texas, Washington, New York City, and Portland, have also passed biometric privacy laws.

Current data privacy laws help address the data privacy concerns related to Emotional AI. However, Emotional AI presents unique challenges in complying with data minimization requirements. AI systems often rely on collecting and analyzing extensive datasets to draw accurate conclusions. For example, Emotional AI might use heart rate to assess emotions. However, a person’s heart rate can be influenced by factors beyond emotions, such as room temperature or physical exertion.[23] Data minimization mandates collecting only relevant physiological data, but AI systems might need to capture a wide range of data to account for potential external influences and improve the accuracy of emotional state inferences. As a result, data beyond the core emotional-state indicators may be collected, and what data is truly necessary can be contentious.
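
To make the data-minimization tension concrete, the hedged Python sketch below shows one way a pipeline could use contextual readings (room temperature, activity level) only to adjust a heart-rate signal and then retain nothing but the derived emotion score. All field names, formulas, and thresholds are hypothetical illustrations, not drawn from any specific product or legal standard.

```python
# Hypothetical illustration of data minimization in an Emotional AI pipeline.
# Field names and the adjustment formula are invented for this sketch.

NECESSARY_FOR_INFERENCE = {"heart_rate", "room_temperature", "activity_level"}


def minimize(sample: dict) -> dict:
    """Drop fields that are not needed for the stated processing purpose."""
    return {k: v for k, v in sample.items() if k in NECESSARY_FOR_INFERENCE}


def infer_emotion(sample: dict) -> dict:
    """Use contextual fields only to adjust the reading, then discard them."""
    adjusted = (sample["heart_rate"]
                - 0.5 * sample["activity_level"]            # toy exertion adjustment
                - 0.8 * (sample["room_temperature"] - 21))  # toy temperature adjustment
    arousal = min(max((adjusted - 60) / 40, 0.0), 1.0)      # naive 0-1 arousal proxy
    return {"emotion_score": round(arousal, 2)}


raw = {"heart_rate": 92, "room_temperature": 27, "activity_level": 30,
       "device_id": "abc-123", "gps_location": "40.7,-74.0"}
stored_record = infer_emotion(minimize(raw))
print(stored_record)  # only the derived score is retained, e.g. {'emotion_score': 0.3}
```

The point of the sketch is not the arithmetic but the structure: unnecessary identifiers are dropped before processing, and contextual confounders are used transiently rather than stored alongside the emotional inference.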

In addition, Emotional AI development may encounter difficulties in defining the intended purposes for data processing due to the inherently unpredictable nature of algorithmic learning and subsequent data utilization. In other words, the AI might discover unforeseen connections within a dataset, potentially leading to its use for purposes that were not defined and conveyed to consumers. For example, a customer service application could use Emotional AI to analyze customer voices during calls to identify frustrated or angry customers for priority handling. Over time, the AI could identify a correlation between specific speech patterns and a higher likelihood of customers canceling the service, a purpose not included in the privacy policy.

B. Legal Strategies

To effectively comply with the complex array of data privacy laws and overcome the unique challenges presented by Emotional AI, organizations developing and using Emotional AI should consider adopting the following key strategies:

  1. Develop a comprehensive privacy notice that clearly outlines the types of Emotional Data collected, the purposes for processing that data, how the data will be processed, and the duration for which the data will be stored.
  2. To address data minimization concerns, plan in advance the scope of Emotional Data necessary for and relevant to developing a successful Emotional AI, adopt anonymization or aggregation techniques whenever possible to remove personal data components (see the illustrative sketch after this list), and enforce appropriate data retention policies and schedules.
  3. To tackle the issue of purpose specification, regularly review data practices to assess whether Emotional Data in AI is used for the same or compatible purposes as stated in relevant privacy notices. If the new processing is incompatible with the original purpose, update the privacy notices to reflect the new purpose and either de-identify the Emotional Data, obtain new consent, or identify another legal basis for the processing.
  4. If the Emotional Data collected can be considered sensitive Personal Data, implement an opt-in consent mechanism and conduct a privacy risk assessment.
  5. Implement robust data security measures to protect Emotional Data from unauthorized access, use, disclosure, or alteration.
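
Strategy 2’s aggregation point can be illustrated with a short, hypothetical Python sketch: per-user Emotional Data is collapsed into group-level statistics, identifiers are dropped, and small groups are suppressed. The record structure, segment names, and group-size threshold are invented for illustration, and aggregation alone will not always amount to anonymization under the GDPR or US state laws.

```python
# Hypothetical sketch of aggregation as a data-minimization technique.
from collections import defaultdict
from statistics import mean

records = [
    {"user_id": "u1", "segment": "call_center", "emotion_score": 0.8},
    {"user_id": "u2", "segment": "call_center", "emotion_score": 0.4},
    {"user_id": "u3", "segment": "call_center", "emotion_score": 0.6},
    {"user_id": "u4", "segment": "web_chat", "emotion_score": 0.9},
]

MIN_GROUP_SIZE = 3  # suppress small groups that could re-identify individuals

groups = defaultdict(list)
for record in records:
    groups[record["segment"]].append(record["emotion_score"])  # user_id is dropped here

aggregates = {
    segment: {"n": len(scores), "avg_score": round(mean(scores), 2)}
    for segment, scores in groups.items()
    if len(scores) >= MIN_GROUP_SIZE
}
print(aggregates)  # {'call_center': {'n': 3, 'avg_score': 0.6}}; 'web_chat' is suppressed
```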

3. Risks of Emotion Manipulation

Emotional AI carries significant risks of being used for manipulation. In three experiments, AI learned from participants’ responses to identify vulnerabilities in their decision-making and guide them toward desired actions.[24] Imagine a social media platform using Emotional AI to detect and exploit gambling addictions in order to promote ads for its casino clients.

A. Legal Framework

I. EU Law

The EU recently enacted the Artificial Intelligence Act (the “EU AI Act”), addressing Emotional AI abuse by prohibiting two key categories of AI systems:[25]

  1. AI systems that use subliminal methods or manipulative tactics to significantly alter behavior, hindering informed choices and causing or likely causing significant harm.
  2. Emotion recognition AI in educational and workplace settings except for healthcare or safety needs.

If an Emotional AI system is not prohibited under the EU AI Act, such as when it does not cause significant harm, it is deemed a “high-risk AI system,” subjecting its providers and deployers to various requirements, including:

  1. Providers must ensure transparency for deployers by providing clear information about the AI system, including its capabilities, limitations, and intended use cases. They must also implement data governance, promptly address any violation of the EU AI Act and notify relevant parties, implement risk and quality management systems, perform conformity assessments to demonstrate that the AI system meets the requirements of the EU AI Act, and establish human oversight mechanisms.
  2. Deployers must inform consumers of significant decisions, conduct impact assessments, report incidents, ensure human oversight, maintain data quality, and monitor systems.[26]

II. US Law

There is no specific US law that addresses Emotional AI. However, Section 5 of the Federal Trade Commission (“FTC”) Act prohibits unfair or deceptive acts or practices.[27] FTC attorney Michael Atleson stated in a 2023 FTC blog post that the agency is targeting deceptive practices in AI tools, particularly chatbots designed to manipulate users’ beliefs and emotions.[28] Within the FTC’s focus on AI tools, one concern is the possibility that companies will exploit “automation bias,” where people tend to trust AI outputs perceived as neutral or impartial. Another area of concern is anthropomorphism, where individuals may find themselves trusting chatbots more when such bots are designed to use personal pronouns and emojis or otherwise mimic a human persona. The FTC is particularly vigilant about AI steering people unfairly or deceptively into harmful decisions in critical areas such as finance, health, education, housing, and employment. It assesses whether AI-driven practices might mislead consumers into actions contrary to their intended goals and thus constitute deceptive or unfair behavior under the FTC Act. Importantly, these practices can be deemed unlawful even if not all consumers are harmed or if the affected group does not fall under protected classes in antidiscrimination laws. Companies must be transparent about the use of AI for targeted ads or commercial purposes and inform users whether they are interacting with a machine and whether commercial interests are influencing AI responses. The FTC warns against cutting AI ethics staff and emphasizes the importance of risk assessment, staff training, and ongoing monitoring.[29]

B. Legal Strategies

To avoid regulatory scrutiny and potential claims of emotional manipulation, companies developing or deploying Emotional AI should consider adopting the following strategies:

  1. Ensure transparency by clearly informing users when they are interacting with an Emotional AI and explaining in a privacy policy how the AI analyzes user data to infer emotion and how output data is used, including any potential commercial influences on AI responses.
  2. Refrain from using subliminal messaging or manipulative tactics to influence user behavior. Conduct ongoing monitoring and periodic risk assessments to identify and address emotional manipulation risks.
  3. If operating in the EU, evaluate the Emotional AI’s potential for causing significant harm and determine if it falls under the “prohibited” or “high-risk” category. For high-risk AI systems, comply with the applicable obligations under the EU AI Act.
  4. Train staff on best practices for developing and deploying Emotional AI.

4. Risks of AI Bias

Emotional AI may have biased results, particularly if the training data lacks diversity. For instance, a system trained on images of people of only one ethnicity may not recognize facial expressions of another ethnicity, and cultural differences in gestures and vocal expressions may be misinterpreted by an AI system without diverse training data.[30] For example, an Emotional AI trained only on mental health patients from one ethnic group may misinterpret the emotions of patients from other groups, overlooking important symptoms and leading to misdiagnosis.
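
Bias of this kind is typically surfaced by comparing a system’s error rates across demographic groups, as the hedged Python sketch below illustrates. The evaluation data, group labels, and the roughly 0.8 disparity threshold (echoing the familiar “four-fifths” rule of thumb) are invented for illustration; real bias-testing programs use larger samples and richer fairness metrics.

```python
# Hypothetical per-group accuracy check for an emotion classifier.
# Data and the disparity threshold are illustrative only.
from collections import defaultdict

# (group, predicted_emotion, actual_emotion) for a labeled evaluation set
evaluations = [
    ("group_a", "calm", "calm"), ("group_a", "angry", "angry"), ("group_a", "calm", "calm"),
    ("group_b", "angry", "calm"), ("group_b", "calm", "calm"), ("group_b", "angry", "calm"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in evaluations:
    total[group] += 1
    correct[group] += int(predicted == actual)

accuracy = {group: round(correct[group] / total[group], 2) for group in total}
disparity = min(accuracy.values()) / max(accuracy.values())
print(accuracy)                                 # e.g. {'group_a': 1.0, 'group_b': 0.33}
print("disparity ratio:", round(disparity, 2))  # flag for review if well below ~0.8
```

A ratio far below one signals that the classifier performs materially worse for one group and warrants review of the training data and model before deployment in a high-risk context.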

A. Legal Framework

I. EU Law

The EU AI Act addresses bias by imposing stringent requirements on high-risk AI providers and deployers, with a particular emphasis on the provider’s obligation to implement data governance to detect and reduce biases in datasets.[31] The GDPR provides an additional layer of protection against AI bias. Under the GDPR, decision-making based solely on automated processing (including profiling), such as AI, that produces legal or similarly significant effects on an individual is prohibited unless necessary for a contract, authorized by law, or done with explicit consent.[32] Data subjects affected by such decisions have the right to receive clear communication regarding the decision, seek human intervention, express their viewpoint, comprehend the rationale behind the decision, and contest it if necessary.[33] Data controllers are required to adopt measures to ensure fairness, such as using statistical or mathematical methods that avoid discrimination during profiling, implementing technical and organizational measures to correct inaccuracies in personal data and minimize errors, and employing methods to prevent discrimination based on SPD.[34] Automated decision-making and profiling based on SPD are only permissible if the data controller has a legal basis to do so under the GDPR.[35]

II. US Law

There is no specific federal law addressing AI bias in the US. However, existing antidiscrimination laws apply to AI. Notably, the FTC has taken action related to AI bias under the unfairness prong of Section 5 of the FTC Act. In December 2023, the FTC settled a lawsuit with Rite Aid over the alleged discriminatory use of facial recognition technology, setting a new standard for algorithmic fairness programs. This standard includes consumer notification and contesting options, as well as rigorous bias testing and risk assessment protocols for algorithms.[36] This case also establishes a precedent for other regulators with fairness authority, such as insurance commissioners, state attorneys general, and the Consumer Financial Protection Bureau, to use such authority for enforcement against AI bias.

On the state level, in May 2024, Colorado enacted the Artificial Intelligence Act, the first comprehensive state law targeting AI discrimination, which applies to developers and deployers of high-risk AI systems doing business in Colorado.[37] This may extend to out-of-state businesses serving consumers in Colorado.[38] Emotional AI that significantly influences decisions with material effects in areas such as employment, finance, healthcare, and insurance is considered high-risk AI under the Act. Developers of such systems are required to provide a statement on the system’s uses; summaries of training data; information on the system’s purpose, benefits, and limitations; documentation describing evaluation, data governance, and risk mitigation measures, as well as intended outputs; and usage guidelines.[39] Developers must also publicly disclose types of high-risk AI systems they have developed or modified and risk management approaches, and they must report potential discrimination issues to the attorney general and deployers within ninety days.[40] Deployers must inform consumers of significant decisions, summarize deployed systems and discrimination risk management on their websites, explain negative decisions with correction or appeal options, conduct impact assessments, report instances of discrimination to authorities, and develop a risk management program based on established frameworks.[41]

In addition, most state data privacy laws stipulate that a data controller shall not process personal data in violation of state or federal laws that prohibit unlawful discrimination against consumers.[42] The use of Emotional AI in the employment context also subjects companies to various federal and state laws.[43]

B. Legal Strategies

To comply with antidiscrimination laws and address bias risks of Emotional AI, companies developing or deploying Emotional AI should consider adopting the following strategies:

  1. Establish a robust data governance program to ensure diversity and quality of training data for Emotional AI systems, including regularly monitoring and auditing the training data.
  2. Develop a risk management program based on established risk frameworks, such as the AI Risk Management Framework released by the National Institute of Standards and Technology.[44]
  3. Conduct routine AI risk assessments and bias testing to identify and mitigate potential biases in Emotional AI systems, particularly those used in high-risk areas such as employment, finance, healthcare, and insurance.
  4. Publicly disclose details about Emotional AI systems on the company website, including data practices, types of systems developed or deployed, and risk management approaches.
  5. Inform consumers of significant decisions made by Emotional AI systems. Establish mechanisms to allow consumers to contest decisions and appeal unfavorable outcomes, notify consumers of their rights, and provide clear explanations for decisions made by Emotional AI systems.
  6. In employment contexts, comply with federal and state laws, Equal Employment Opportunity Commission guidance, and Colorado’s and the EU’s AI Acts.[45]

5. Conclusion

The rapid growth of Emotional AI presents a complex challenge to legislators. The EU’s strict regulations on AI and data privacy more effectively safeguard consumers’ interests. However, will this approach hinder AI innovation? Conversely, the reliance of the United States on a patchwork of state and sector laws, along with federal government agencies’ guidance and enforcement, creates more room for AI development. Will this strategy leave consumer protections weak and impose burdensome compliance requirements? Should the United States consider federal legislation that balances innovation with consumer protections? This is an important conversation. In the meantime, companies must continue to pay close attention to Emotional AI’s legal risks across a varied legal landscape.


  1. Meredith Somers, “Emotion AI, Explained,” MIT Sloan School of Management, March 8, 2019.

  2. Id.

  3. Cision, “Emotion AI Market Size to Grow USD 13.8 Billion by 2032 at a CAGR of 22.7% | Valuates Reports,” news release, Yahoo! Finance, May 15, 2024.

  4. Somers, “Emotion AI, Explained.”

  5. Noa Yitzhak, “The Future of Emotional AI: Trends to Watch,” Emotion Logic, May 5, 2024.

  6. Neil Sahota, “Emotional AI: Cracking the Code of Human Emotions,” NeilSahota.com, September 28, 2023.

  7. “What Is Emotional AI?,” Emotional AI Lab, accessed August 27, 2024.

  8. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) 1.

  9. Currently, twenty US states have passed data privacy laws: California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Texas, Florida, Montana, Oregon, Delaware, New Hampshire, New Jersey, Kentucky, Nebraska, Maryland, Minnesota, and Rhode Island.

  10. See Cal. Civ. Code §§ 1798.100 to 1798.199.100; Va. Code Ann. §§ 59.1-575 to 59.1-584; Colo. Rev. Stat. §§ 6-1-1301 to 6-1-1313; Utah Code Ann. §§ 13-61-101 to 13-61-404.

  11. Id.

  12. Id.

  13. GDPR Article 9.

  14. GDPR Article 35(3).

  15. See Colo. Rev. Stat. § 6-1-1309(2)(c); Conn. Gen. Stat. § 42-522(2)(a)(4); Del. Code Ann. tit. 6, § 12D-108(a)(4); Ind. Code § 24-15-6-1(b)(4); Or. Rev. Stat. § 646A.586; Mont. Code § 30-14-2814; Tenn. Code Ann. § 47-18-3206(a)(4); Tex. Bus. & Com. Code § 541.105(a)(4); Va. Code Ann. § 59.1-580(A)(4) (each requiring controllers to perform data protection assessments when processing sensitive data); see also 4 Colo. Code Regs. § 904-3-8 (providing additional requirements for conducting assessments under Colorado law).

  16. See Colo. Rev. Stat. § 6-1-1308(7); Conn. Gen. Stat. § 42-520(a)(4); Del. Code Ann. tit. 6, § 12D-106(a)(4); Ind. Code § 24-15-4-1(5); Or. Rev. Stat. § 646A.578; Mont. Code § 30-14-2812; Tenn. Code Ann. § 47-18-3204(a)(6); Tex. Bus. & Com. Code § 541.101(b)(4); Va. Code Ann. § 59.1-578(A)(5) (each requiring opt-in consent).

  17. Cal. Civ. Code § 1798.121(a).

  18. See, e.g., Cal. Civ. Code §§ 56.18–56.186; Ariz. Rev. Stat. § 20-448.02; Genetic Information Nondiscrimination Act of 2008, 42 U.S.C. § 2000ff.

  19. See 740 Ill. Comp. Stat. §§ 14/1–99; Wash. Rev. Code Ann. §§ 19.375.010–.900; Tex. Bus. & Com. Code § 503.001; N.Y.C. Admin. Code §§ 22-1201 to 22-1205.

  20. See Wash. Rev. Code §§ 19.373.010–.900; Nevada S.B. 370 (2023) (codified as amended at Nev. Rev. Stat. §§ 598.0977, 603A.338, 603A.400–.550).

  21. 740 Ill. Comp. Stat. Ann. 14/10.

  22. Id.

  23. American Heart Association editorial staff, “All About Heart Rate,” American Heart Association, May 13, 2024.

  24. Georgios Petropoulos, “The Dark Side of Artificial Intelligence: Manipulation of Human Behaviour,” Bruegel, February 2, 2022.

  25. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).

  26. Lena Kempe, “Colorado and EU AI Laws Raise Several Risks for Tech Businesses,” Bloomberg Law, May 30, 2024.

  27. 15 U.S.C. § 45.

  28. Michael Atleson, “The Luring Test: AI and the Engineering of Consumer Trust,” Federal Trade Commission, May 1, 2023.

  29. Id.

  30. Somers, “Emotion AI, Explained.”

  31. Kempe, “Colorado and EU AI Laws.”

  32. GDPR, Recital 71.

  33. Id.

  34. Id.

  35. Id.

  36. Alvaro M. Bedoya, “Statement of Commissioner Alvaro M. Bedoya on FTC v. Rite Aid Corporation & Rite Aid Headquarters Corporation, Commission File No. 202-3190,” Federal Trade Commission, December 19, 2023.

  37. Kempe, “Colorado and EU AI Laws.”

  38. Id.

  39. Id.

  40. Id.

  41. Id. See Cal. Civ. Code §§ 1798.100 to 1798.199.100; Va. Code Ann. §§ 59.1-575 to 59.1-584; Colo. Rev. Stat. §§ 6-1-1301 to 6-1-1313; Utah Code Ann. §§ 13-61-101 to 13-61-404; Tex. Bus. & Com. Code §§ 541.001 to 541.205; Or. Rev. Stat. §§ 646A.570 to 646A.589.

  42. See Va. Code Ann. § 59.1-578; Colo. Rev. Stat. § 6-1-1308; Conn. Gen. Stat. § 42-520; Ind. Code § 24-15-4-1.

  43. See Lena Kempe, “Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies,” Business Law Today, April 10, 2024.

  44. “AI Risk Management Framework: Generative Artificial Intelligence Profile,” National Institute of Standards and Technology, June 26, 2024.

  45. See Kempe, “Navigating the AI Employment Bias Maze.”
