Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies

By: Lena Kempe

Imagine receiving a layoff notice because an AI evaluation tool predicted a higher risk of future underperformance due to your age. Or picture repeatedly having job applications rejected, only to find out the cause was an AI tool screening out candidates with a disability. These are just a few examples of real-world AI bias in the realm of hiring and employment, a growing issue that has already resulted in several notable lawsuits. How can companies effectively take advantage of AI in their employment practices while minimizing legal risks? This article discusses employment laws applicable to AI discrimination and provides practical strategies for companies to prevent potential government investigations, lawsuits, fines, class actions, or reputational damage.

A. AI Bias

A recent IBM article describes AI bias as occurring in “AI systems that produce biased results that reflect and perpetuate human biases within a society, including historical and current social inequality.”[1] Two major technical factors contribute to AI bias:

  1. Training Data: AI systems develop their decision-making based on training data; when those data overrepresent or underrepresent certain groups, the resulting models can produce biased results. A typical example is a facial recognition algorithm trained on data that overrepresent white people, which may produce less accurate recognition results for people of color. Moreover, mislabeled data, or data that reflect existing inequalities, can compound these issues. Consider an AI recruiting tool trained on a dataset where some applicant qualifications were incorrectly labeled: the tool could reject qualified candidates who possess the necessary skills but whose résumés it did not accurately interpret. (A minimal sketch of this effect follows this list.)
  2. Programming Errors: AI bias may also arise from coding mistakes, wherein a developer inadvertently or consciously overweights certain factors in algorithmic decision-making due to their own biases. In one example discussed in the IBM piece, “indicators like income or vocabulary might be used by the algorithm to unintentionally discriminate against people of a certain race or gender.”
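
To make the training-data factor concrete, the following minimal Python sketch uses entirely hypothetical keywords and weights to show how a résumé scorer trained on a corpus dominated by one group can rank two equally qualified candidates differently based on vocabulary alone, echoing the reported behavior of the Amazon tool discussed below.

```python
# Hypothetical keyword weights "learned" from a skewed training corpus:
# words more common in the overrepresented group's resumes (e.g., "executed",
# "captured") got higher weights despite signaling nothing about qualification.
LEARNED_WEIGHTS = {
    "executed": 1.4,
    "captured": 1.3,
    "coordinated": 0.8,  # equally strong verbs, rarer in the training corpus
    "collaborated": 0.7,
}

def score_resume(text: str) -> float:
    """Sum the learned weight of each known keyword in the resume text."""
    return sum(LEARNED_WEIGHTS.get(word.strip(".,%").lower(), 0.0)
               for word in text.split())

# Two candidates describing equivalent work in different vocabulary:
candidate_a = "Executed product launch. Captured 30% market share."
candidate_b = "Coordinated product launch. Collaborated to win 30% market share."

print(score_resume(candidate_a))  # 2.7 -> ranked higher
print(score_resume(candidate_b))  # 1.5 -> ranked lower for identical work
```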

B. AI Employment Discrimination

Companies have increasingly used AI tools to screen and analyze résumés and cover letters; scour online platforms and social media networks for potential candidates; and analyze job applicants’ speech and facial expressions in interviews.[2] In addition, companies are using AI to onboard employees, write performance reviews, and monitor employee activities and performance.[3] AI bias can occur in any of the above use cases, throughout every stage of the employment relationship—from hiring to firing and everything in between—and can result in discrimination lawsuits.

In one notable example, the Equal Employment Opportunity Commission (“EEOC”) settled its first AI hiring discrimination lawsuit in August 2023.[4] In Equal Employment Opportunity Commission v. iTutorGroup, Inc.,[5] the EEOC sued three companies providing tutoring services under the “iTutorGroup” brand name (“iTutorGroup”), alleging that iTutorGroup violated the Age Discrimination in Employment Act of 1967 (“ADEA”) because its AI hiring program “automatically reject[ed] female applicants age 55 or older and male applicants age 60 or older,” screening out over 200 applicants because of their age.[6] iTutorGroup subsequently entered into a consent decree with the EEOC, under which it agreed to pay $365,000 to the group of automatically rejected applicants, adopt antidiscrimination policies, and conduct training to ensure compliance with equal employment opportunity laws.

The ongoing Mobley v. Workday, Inc.[7] litigation, one of the first major class-action lawsuits in the United States alleging discrimination through algorithmic bias in applicant screening tools, presents another warning. The plaintiff, an African-American man over the age of forty with a disability, claims that Workday provides companies with algorithm-based applicant screening software that unlawfully discriminated against job applicants based on the protected characteristics of race, age, and disability, in violation of Title VII of the Civil Rights Act of 1964, Section 1981 of the Civil Rights Act of 1866,[8] the ADEA, and the ADA Amendments Act of 2008 (“ADAAA”). On January 19, 2024, the court granted Workday’s motion to dismiss, with leave for the plaintiff to amend the complaint.[9] On February 21, 2024, the plaintiff filed an amended complaint with further details supporting his claims.[10]

Anticipating this kind of lawsuit, Amazon took proactive measures in 2018, ceasing to use an AI hiring algorithm after finding that it discriminated against women applying for technical jobs: having been trained on a dataset composed mostly of men’s résumés, the tool preferred applicants who used words more common in men’s résumés, such as “executed” or “captured,” among other issues.[11]

These cases, along with Amazon’s decision to scrap its biased AI hiring tool, highlight the growing concern about algorithmic bias in recruitment. Given this evolving landscape, employers must carefully examine all applicable federal, state, and local laws, as well as EEOC guidelines, to ensure fair and unbiased hiring practices.

C. Governing Law

1. Federal Law

There is currently no federal law specifically targeting the use of AI in the employment context. However, most employers’ use of AI tools in their employment practices is subject to federal laws prohibiting employment discrimination based on race, color, ethnicity, sex (including gender, sexual orientation, and gender identity), age, national origin, religion, disability, pregnancy, military service, and genetic information.

Below is a list of the primary federal laws a company must consider when evaluating AI-based employment evaluation tools. The most frequently litigated is Title VII, which applies to private employers with fifteen or more employees.

  1. Title VII of the Civil Rights Act of 1964 (“Title VII”)[12]: prohibits employment discrimination based on race, color, religion, sex (including gender, pregnancy, sexual orientation, and gender identity), or national origin.
  2. Section 1981 of the Civil Rights Act of 1866[13]: prohibits discrimination based on race, color, and ethnicity.
  3. The Equal Pay Act[14]: prohibits sex-based wage discrimination.
  4. The Age Discrimination in Employment Act[15]: prohibits discrimination based on age (forty and over).
  5. The Immigration Reform and Control Act[16]: prohibits discrimination based on citizenship and national origin.
  6. Title I and Title V of the Americans with Disabilities Act (“ADA”)[17] (including amendments by the Civil Rights Act of 1991 and the ADAAA): prohibit employment discrimination against qualified individuals with a disability, including those regarded as having a disability.
  7. The Pregnant Workers Fairness Act[18]: prohibits discrimination against job applicants or employees because of their need for a pregnancy-related accommodation.
  8. The Uniformed Services Employment and Reemployment Rights Act[19]: prohibits discrimination against past and current members of the uniformed services, as well as applicants to the uniformed services.
  9. The Genetic Information Nondiscrimination Act[20]: prohibits discrimination in employment and health insurance based on genetic information.

2. State and Local Law

To address concerns over the use of AI in employment, state and local governments have become more proactive. Three notable examples of enacted legislation, discussed below, demonstrate the growing trend among policymakers toward regulating AI usage in employment practices and underscore the increasing importance placed on ensuring fairness and accountability in AI-driven decision-making.

i. Illinois

Illinois’s Artificial Intelligence Video Interview Act (820 ILCS 42/1), effective in 2020, imposes several requirements on employers that conduct video interviews and use AI analysis of those videos in their evaluation process. These requirements include (i) notifying applicants of the AI’s role; (ii) providing applicants with an explanation of the AI process and the types of characteristics used to evaluate applicants; (iii) obtaining the applicants’ consent to such AI use; (iv) sharing videos only with those equipped with the expertise or technology to evaluate the applicant’s fitness for a position; and (v) destroying videos within thirty days of an applicant’s request.

ii. Maryland

While not explicitly targeting AI, Maryland’s 2020 facial recognition technology law prohibits an employer from using certain facial recognition services—many of which use AI processes—during job interviews unless the applicant consents.

iii. New York City

New York City began enforcing its law on Automated Employment Decision Tools (“AEDT Law”) on July 5, 2023. Under this law, passed in 2021, employers and employment agencies may not use an automated employment decision tool (“AEDT”), which includes AI, to assess candidates for hiring or promotion in New York City unless an independent auditor completes a bias audit of the AEDT before its use and candidates who are New York City residents receive notice that the employer or employment agency uses an AEDT. A bias audit must include “calculations of selection or scoring rates and the impact ratio across sex categories, race/ethnicity categories, and intersectional categories.”[21] Penalties range from $375 to $1,500 per violation.
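
For illustration only, the following Python sketch computes the selection rates and impact ratios such an audit reports, using hypothetical applicant counts and assuming the common method of dividing each category’s selection rate by the rate of the most selected category.

```python
# Hypothetical hiring outcomes by audit category: (selected, total applicants).
outcomes = {
    "male": (48, 120),
    "female": (30, 100),
    "hispanic_female": (9, 40),  # an intersectional category
}

selection_rates = {g: sel / total for g, (sel, total) in outcomes.items()}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest_rate  # relative to the most selected category
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f}")
```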

3. EEOC Guidance

The EEOC enforces federal laws prohibiting discrimination in hiring, firing, promotions, training, wages, benefits, and harassment. Employers with at least fifteen employees, labor unions, and employment agencies are subject to EEOC review. The EEOC has the authority to investigate discrimination charges against employers and, if necessary, file a lawsuit. Therefore, even though EEOC guidance is not legally binding, it proves valuable for companies seeking to avoid potential investigations or lawsuits when using AI tools.

i. EEOC 2022 Guidance on the ADA and AI

In May 2022, the EEOC issued technical guidance addressing how the ADA applies to the use of AI to assess job applicants and employees.[22] The guidance outlines several common ways that utilizing AI tools can violate the ADA, including, for example, relying on an algorithmic decision-making tool that intentionally or unintentionally excludes an individual with a disability, failing to provide necessary “reasonable accommodation,” or violating the ADA’s restrictions on disability-related inquiries and medical examinations.

Employers can implement practices recommended by the EEOC to effectively handle the risk associated with utilizing AI tools, such as the following:

  1. Disclose in advance the factors to be measured with the AI tool, such as knowledge, skill, ability, education, experience, quality, or trait, as well as how testing will be conducted and what will be required.
  2. Ask employees and job applicants whether they require a reasonable accommodation to use the tool. If the disability is not apparent, the employer may ask for medical documentation when a reasonable accommodation is requested.
  3. Once the claimed disability is confirmed, provide a reasonable accommodation, including an alternative testing format.
  4. “Examples of reasonable accommodations may include specialized equipment, alternative tests or testing formats, permission to work in a quiet setting, and exceptions to workplace policies.”[23]

ii. EEOC 2023 Guidance on Title VII and AI

In May 2023, the EEOC issued new technical guidance on how to measure adverse impact when AI tools are used for employment selection, titled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.”[24]

Under this guidance, if the selection rate of individuals of a particular race, color, religion, sex, or national origin, or a “particular combination of such characteristics” (e.g., a combination of race and sex), is less than 80 percent of the rate of the non-protected group, then the selection process could be found to have a disparate impact in violation of Title VII, unless the employer can show that such use is “job related and consistent with business necessity” under Title VII.
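
As a rough, hypothetical illustration of the 80 percent threshold (often called the “four-fifths rule”), the following sketch flags a selection rate that falls below 80 percent of the comparison group’s rate:

```python
def four_fifths_check(protected_rate: float, comparison_rate: float) -> bool:
    """Return True if the protected group's selection rate is at least
    80 percent of the comparison group's rate."""
    return protected_rate >= 0.8 * comparison_rate

# Hypothetical results from an AI screening tool:
# 30 of 100 female applicants advance, versus 50 of 100 male applicants.
female_rate, male_rate = 30 / 100, 50 / 100

if not four_fifths_check(female_rate, male_rate):
    # 0.30 / 0.50 = 0.60, below the 0.8 threshold
    print(f"Potential adverse impact: selection ratio {female_rate / male_rate:.2f}")
```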

If the AI tool is found to have an adverse impact under Title VII, the employer can take measures to reduce the impact or select a different tool. Failure to adopt a less discriminatory algorithm that was considered during the design process may subject the employer to liability.

Under both EEOC guidance documents discussed here, an employer may be held liable for the actions or inactions of an outside vendor that designs or administers an algorithmic decision-making tool on its behalf, and it cannot rely on the vendor’s assessment of the tool’s disparate impact.

D. Legal Strategies

Considering the applicable laws and EEOC guidance, it would be prudent for a company to consider the following strategies to reduce the risk of AI bias in employment decisions:

  1. Prior to signing a contract with a vendor who designs or implements an AI-based employment tool, a company’s legal team should, as part of the vendor due diligence process, work closely with its IT and HR teams to review and evaluate the vendor’s tools against the applicable laws and EEOC guidelines, including by reviewing assessment reports and historical selection rates.

    In addition, any employers who are subject to New York City’s AEDT Law should have an independent auditor conduct a bias audit before utilizing the AI tool.

  2. To incentivize a vendor to deliver a high-quality, legally compliant AI tool while mitigating risks, carefully negotiate and draft the indemnity, warranty, liability-cap carveout, and other risk-allocation provisions of the contract with the vendor. These provisions should obligate the vendor to bear liability for any issues arising from use of the AI tool in employment contexts that are attributable to the vendor’s fault.

  3. Prepare detailed internal documents clearly explaining the AI tool’s operation and selection criteria, based on the review described in item 1, to protect the company in case of government investigations or lawsuits.[25]

  4. The legal team should work closely with HR and the IT team to conduct bias audits on a regular basis.

  5. If an audit reveals the tool has disparate impacts at any point, the company should consider working with the vendor to implement bias-mitigating techniques, such as modifying the AI algorithms or reweighting and supplementing training data for underrepresented groups (see the sketch following this list), or selecting a different tool, unless legal counsel determines that the use of the tool is “job related and consistent with business necessity.”

  6. Provide advance notice to candidates or employees who will be impacted by AI tools in accordance with applicable laws and EEOC guidance.

  7. Educate HR and IT teams regarding AI discrimination.

  8. Keep track of legal developments in this area, especially if your company has offices nationwide.
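
To illustrate one bias-mitigating technique mentioned in item 5, the sketch below applies inverse-frequency reweighting to hypothetical training data so that each group contributes equally in aggregate during training. It is a simplified illustration, not a complete fairness intervention.

```python
from collections import Counter

# Hypothetical group labels, one per training example.
groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50

counts = Counter(groups)
n, k = len(groups), len(counts)

# Inverse-frequency weights: each group's examples sum to the same total
# weight, so underrepresented groups are not drowned out during training.
weights = {g: n / (k * c) for g, c in counts.items()}
sample_weights = [weights[g] for g in groups]

print(weights)  # approximately {'A': 0.42, 'B': 2.22, 'C': 6.67}
# Many training APIs accept per-example weights, e.g., scikit-learn
# estimators via fit(X, y, sample_weight=sample_weights).
```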

Faced with the looming threats of EEOC enforcement actions, class action lawsuits, and legislative uncertainty, employers may understandably feel apprehensive about charting a course that includes using AI in hiring or HR. However, consulting with attorneys to understand legal requirements and potential risks associated with AI employment bias—along with adopting proactive measures outlined in this article, staying informed about legal developments, and fostering collaboration across legal, HR, and IT teams—can help organizations effectively mitigate risks and confidently navigate the intricate landscape of AI employment bias.


  1. IBM Data and AI Team, “Shedding light on AI bias with real world examples,” IBM, October 16, 2023.

  2. Keith MacKenzie, “How is AI used in human resources? 7 ways it helps HR,” Workable Technology, December 2023.

  3. Aaron Mok, “10 ways artificial intelligence is changing the workplace, from writing performance reviews to making the 4-day workweek possible,” Business Insider, July 27, 2023.

  4. Annelise Gilbert, “EEOC Settles First-of-Its-Kind AI Bias in Hiring Lawsuit (1),” Bloomberg Law, August 10, 2023.

  5. Equal Employment Opportunity Commission v. iTutorGroup, Inc., No. 1:22-cv-2565-PKC-PK (E.D.N.Y. filed May 5, 2022) (Aug. 9, 2023, joint notice of settlement and request for approval and execution of consent decree).

  6. “iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit,” U.S. Equal Employment Opportunity Commission, September 11, 2023.

  7. No. 3:23-cv-00770-RFL (N.D. Cal. filed Feb. 1, 2023).

  8. 42 U.S.C. § 1981.

  9. Joseph O’Keefe, Evandro Gigante, and Hannah Morris, “Judge Grants Workday, Inc.’s Motion to Dismiss in Groundbreaking AI Class Action Lawsuit Mobley v. Workday,” Law and the Workplace (blog), Proskauer, January 24, 2024.

  10. Daniel Wiessner, “Workday accused of facilitating widespread bias in novel AI lawsuit,” Reuters, February 21, 2024.

  11. Rachel Goodman, “Why Amazon’s Automated Hiring Tool Discriminated Against Women,” American Civil Liberties Union, October 12, 2018.

  12. 42 U.S.C. § 2000e.

  13. 42 U.S.C. § 1981.

  14. 29 U.S.C. § 206(d).

  15. 29 U.S.C. §§ 621–634.

  16. Pub. L. No. 99-603, 100 Stat. 3359 (1986), codified as amended in scattered sections of Title 8 of the United States Code.

  17. 42 U.S.C. §§ 12101–12113.

  18. 42 U.S.C. §§ 2000gg–2000gg-6.

  19. 38 U.S.C. § 4311.

  20. 42 U.S.C. § 2000ff.

  21. “Automated Employment Decision Tools: Frequently Asked Questions,” NYC Department of Consumer and Worker Protection, June 6, 2023.

  22. “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” U.S. Equal Employment Opportunity Commission, May 12, 2022.

  23. Id.

  24. “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” U.S. Equal Employment Opportunity Commission, May 18, 2023.

  25. See Lena Kempe, “AI Risk Mitigation and Legal Strategies Series No. 5: Explainable AI,” LK Law Firm, January 11, 2024.
