Employment Law Red Flags in the Use of Artificial Intelligence in Hiring

By: Gary D. Friedman, Thomas McCarthy[1]

While the COVID-19 pandemic has had an effect on almost every aspect of employment, perhaps the biggest change for most employers (and the change that is most likely to have a lasting impact) is the transition of many employees to some form of remote work. Relatedly, many businesses have been forced to recruit and screen job applicants remotely, abandoning traditional in-person interviews and job assessments in favor of virtual meetings and online tools to measure, among other things, cognitive capabilities, emotional intelligence, personality traits, and skill sets. Even prior to the pandemic, many companies were beginning to migrate towards the use of artificial intelligence (“A.I.”) in screening applicants, in hopes that computers would speed up the hiring process, more accurately identify the right candidates for the position, and eliminate human bias and subjectivity in selecting candidates. Whether it was deploying machine learning to identify recruits based on the content of their online profiles, or using algorithms to sort through resumes, or even using face and voice analysis software to assess various competencies and characteristics, A.I. was touted by many companies as a hiring panacea. That drumbeat has only become louder among employers in an environment where live meetings and social interactions have become circumscribed. However, without proper vetting and analysis, these tools can actually introduce bias into the process and expose employers to liability under various federal, state, and local laws. This article explores the ways in which A.I. and machine learning are in use during the screening, interviewing, and hiring process, as well as the complicated (and expanding) legal framework in which these tools must operate, and identifies potential pitfalls for employers seeking to implement these technologies.

COVID-19 Has Accelerated the Move Towards a Work-From-Home Economy

Even prior to the COVID-19 pandemic, working from home was becoming an increasingly common practice. According to a 2012 study, the proportion of U.S. employees who primarily work from home nearly doubled from 2000 to 2010.[2] The number of employees regularly working from home grew 173% from 2005 to 2012, and, in 2016, 43% of employees reported working remotely with some frequency. This has been driven by a number of factors, including an increase in jobs that are performed mostly with computers, the improvement of remote work technology, and an increasing number of households with children in which all caregivers are working.

COVID-19 has obviously accelerated this shift to remote working exponentially. During the pandemic, more than 60% of U.S. employees reported that they were primarily working from home.[3] Even in cities and states where employers are not required to have non-essential personnel working remotely, many employers have voluntarily made the switch to prevent the potential spread of the virus within the workplace. While it is unlikely that more than one-half of the U.S. workforce will be working from home full time after the pandemic subsides, many employers anticipate allowing some form of permanent flexibility in the workweek, even in a post-COVID world. A PwC survey of employers showed that 55% anticipated that most of their workers would be working from home at least one day a week following the pandemic.[4]

The current pandemic and the accelerated move towards a flexible workweek create obvious impediments to the interviewing process, as candidates cannot always be brought in for live conversations with existing employees. During the pandemic, employers have replaced some of these live meetings with video conferencing; however, Zoom meetings can remove some of the subtleties that emerge when individuals are face-to-face. One alternative that employers are increasingly exploring is A.I.

Companies Are Increasingly Using A.I. in All Stages of Screening, Interviewing, and Hiring

The use of computer processing power in the screening and hiring process is not a new phenomenon. In fact, for several decades, employers and recruiting firms have been using simple text searches to cull through resumes submitted in response to job listings. These text searches have given way to more complex algorithms that do more than search for identified keywords. For example, Ideal, an “A.I.-powered talent screening and matching system,” has the ability to understand and compare experiences across resumes to determine which candidate’s work history more closely matches the requirements of an open position. Some services, such as LinkedIn Recruiter and ZipRecruiter, bring A.I. into the equation even earlier in the process, searching the social media and public profiles of millions of individuals to determine whether a job posting is even advertised to a particular candidate.
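
The details vary by vendor, but the core of many resume-screening tools is text similarity: represent the job posting and each resume as vectors, then rank resumes by how closely they match. The short Python sketch below (using scikit-learn’s TF-IDF vectorizer and cosine similarity) is a generic, hypothetical illustration of that idea; it is not the actual method used by Ideal or any other product named above, and the sample posting and resumes are invented.

```python
# Generic sketch of resume-to-job matching via text similarity (TF-IDF plus
# cosine similarity). Illustration of the underlying idea only; not the actual
# method of any vendor named in this article. All data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_posting = "senior accountant with GAAP reporting, audit, and Excel experience"
resumes = {
    "candidate_a": "staff accountant: monthly GAAP reporting, external audit support, Excel modeling",
    "candidate_b": "marketing coordinator: social media campaigns, copywriting, event planning",
}

# Fit one vocabulary over the posting and all resumes, then score each resume
# by its cosine similarity to the posting. Higher scores mean closer matches.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([job_posting, *resumes.values()])
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()

for name, score in zip(resumes, scores):
    print(f"{name}: similarity to posting = {score:.2f}")
```

Even this simple version hints at the legal issues discussed later in this article: whatever signal the vectors capture, job-related or not, determines which applicants get reviewed.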

Once a candidate has been identified, A.I., in the form of chatbots, can be used to automatically reach out to that individual and determine whether the person is available to start on the employer’s preferred timeline or whether the individual is open to commuting. Some companies have applicants play neuroscience-based computer games, the results of which are then analyzed to predict candidates’ cognitive and personality traits.

A.I. is also utilized in the interview process. One tech company, HireVue, started in 2004 as a video interview platform that allowed candidates to record answers to questions and upload them to a database for recruiters to later review and compare to answers from other applicants.[5] Since then, HireVue has integrated A.I. into its platform. It now uses facial and voice recognition software to analyze body language, tone, and other factors to determine whether a candidate exhibits preferred traits.

The Pros and Cons of A.I. in Hiring

The technology companies developing these A.I. tools tout their ability to help recruiters and HR departments quickly sift through mountains of applicants and more efficiently identify qualified candidates from the outset. Companies might receive thousands of applications for a single job posting, leaving HR departments little choice but to find some way to cut down the number of resumes that have to be reviewed, or alternatively to speed-read resumes trying to weed out unqualified candidates. The use of an A.I. system could ensure that every resume is at least screened. Some A.I. services can also save time by analyzing publicly available data such as social media profiles, resumes, and other text-based data submitted by the applicant, eliminating the need for additional assessments.

Proponents of this technology also argue that A.I. systems can be fairer and more thorough than human recruiters: some systems can consider upwards of 20 factors in each application in fractions of a second, and these automated systems can apply the same analysis to every applicant, whether it is the first resume reviewed for a position or the five hundredth. While human recruiters or interviewers might be affected by whether they are having a particularly busy day or whether they were sleep-deprived the night before, facial and voice recognition software analyzes every candidate the same way. A.I. also, theoretically, can be used to avoid the unconscious preferences and biases of human recruiters by stripping out information relating to, among other things, name, age, and gender, all of which can color a person’s analysis of an applicant’s qualifications.

Those who are more cautious about the use of A.I. in recruiting point out that the systems are only as good as the programmers who write the algorithm and “feed the machine.” If an A.I. tool is fed resumes of people who have previously been hired by the company, and the recruiting departments making those hiring decisions harbored subconscious biases and preferences, those biases and preferences could be inherited by the A.I. tool. This could have effects that range from the bizarre—such as the resume screening company whose algorithm determined that the factors most indicative of job performance were having the name Jared and playing high school lacrosse[6]—to the more nefarious. Amazon reportedly scrapped an internally developed recruiting tool after it discovered that the algorithm was disfavoring resumes that included the word “women’s” (for example, where a resume described the applicant’s participation on a college women’s ice hockey team) and candidates who graduated from two all-women’s colleges.[7] This occurred because the algorithm had been fed resumes from applicants who had previously been hired by Amazon, and those hires were overwhelmingly male.
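
This failure mode is straightforward to reproduce. The Python sketch below (using scikit-learn) is a deliberately tiny, hypothetical illustration rather than a reconstruction of Amazon’s actual tool: a classifier trained on historical hiring outcomes from a male-dominated pool learns a negative weight on the token “women,” even though that word says nothing about job performance.

```python
# Hypothetical illustration (not any vendor's or Amazon's actual system): a
# resume screener trained on historical hiring decisions inherits the bias in
# those decisions. Training data below is invented and deliberately tiny.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: past resumes and whether the (historically male-dominated)
# process selected them. The word "women's" appears only among rejections.
resumes = [
    "captain men's chess club, python, sql",
    "python, java, hackathon winner",
    "captain women's ice hockey team, python, sql",
    "women's coding society president, java",
]
hired = [1, 1, 0, 0]  # labels reflect past (biased) human decisions

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect learned weights: the token "women" receives a negative coefficient,
# even though nothing about it is job-related.
for token, coef in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{token:>12}: {coef:+.2f}")
```

The model is never told anything about gender; it simply learns whatever patterns separate past hires from past rejections, including patterns an employer would never consciously adopt.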

Unintentional discrimination could also seep into A.I. systems in less direct ways. An algorithm trained to prefer employees within a certain commuting distance might result in applicants from poorer areas being disadvantaged. Even as recently as 2019, top facial recognition systems were shown to misidentify black women’s faces ten times more frequently than white women’s faces.[8] This suggests that A.I. programs might have trouble accurately analyzing the facial expressions of black applicants. Differences in speech patterns and vocabulary that correlate with race or ethnicity could likewise complicate automated voice analysis. These biases are not intentionally programmed into A.I. software, but they could nonetheless result in certain groups of applicants being unfairly disadvantaged, which opens employers up to potential claims under various anti-discrimination laws.

Use of A.I. Creates Potential Risks under Existing Employment Laws

Like any other recruiting or hiring practice, the use of A.I. systems to screen and interview candidates implicates Title VII of the Civil Rights Act of 1964 (“Title VII”), a federal law that protects employees and applicants against discrimination based on certain specified characteristics such as race, color, national origin, sex, and religion, as well as the Age Discrimination in Employment Act (“ADEA”). Both Title VII and the ADEA prohibit discrimination based on disparate treatment and/or disparate impact. While a claim of disparate treatment—i.e., intentional discrimination—might seem odd when talking about use of a computer program that by its nature necessarily lacks a discriminatory motive or intent, courts have upheld claims of disparate treatment based on allegations of unconscious or implicit bias.[9] As discussed above, unconscious bias can manifest in an A.I. system because of its programming and training. Thus, a court could find that an employer faces the same liability for a program exhibiting the unconscious bias of its programmer as it would if the programmer had made the hiring decision him or herself, based on that bias.

Alternatively, an employer could face a Title VII or ADEA disparate impact claim if use of a particular A.I.-driven program or algorithm adversely impacts members of a protected class, such as the female applicants who were being disfavored by Amazon’s recruiting tool. Courts analyzing such a claim could turn to a seminal line of cases that dealt with employers’ use of standardized tests in the application and promotion process. In its opinions in Griggs v. Duke Power Company[10] and Albemarle Paper Co. v. Moody,[11] the Supreme Court established that if such tests are shown to have a disparate impact on protected groups of employees, employers must establish that the tests are job-related and represent a reasonable measure of job performance. Courts could apply the same reasoning to A.I. programs and algorithms, in which case employers may be forced to establish how the factors considered by the programs relate to the specific job requirements for the position at issue. In some cases, such as analysis of relevant experience in a resume, an employer might be able to make such a showing easily. Where facial recognition software is prioritizing candidates who made eye contact during an automated interview, job-relatedness might be more difficult to establish. In addition, even if an employer shows that the A.I. tool is considering job-related factors, applicants could still succeed on a disparate impact claim by pointing to the existence of a less discriminatory practice that could serve the same job-related business interest.

An A.I.-driven hiring practice could also implicate the Americans with Disabilities Act (“ADA”) if an algorithm discerns an applicant’s physical disability, mental health condition, or clinical diagnosis, all of which are forbidden inquiries in pre-employment candidate assessments. The ADA Amendments Act of 2008 broadened the statutory definition of “disability,” increasing the scope of individuals whom the ADA protects. Similarly, the Equal Employment Opportunity Commission (“EEOC”) has issued guidance treating the expanded list of personality disorders identified in the psychiatric literature as protected mental impairments.[12] Consequently, the ADA may protect applicants who have significant concentration or communication problems, both of which A.I. technology may identify as disqualifying characteristics for employment.

The potential for A.I. recruiting practices to violate existing employment statutes is not hypothetical. In fact, the EEOC has already investigated at least two instances of alleged A.I. bias, and has made clear that employers using A.I. hiring practices could face liability for any unintended discrimination.[13] Furthermore, in September 2018, three U.S. Senators requested that the EEOC develop guidelines for employers’ use of facial analysis technologies to ensure they do not violate anti-discrimination laws.[14] Though the EEOC has not yet responded to the Senators’ request, the Commission’s recent enforcement activities demonstrate its focus on the growing use of new technologies. For example, the EEOC, in 2017, found reasonable cause to believe an employer violated the ADEA by advertising on Facebook for a position within its company and “limiting the audience for their advertisement to younger applicants.”[15]

In addition to laws focusing on discrimination, the use of certain A.I. recruiting tools could implicate state biometric privacy laws. Illinois,[16] Texas,[17] and Washington[18] have laws regulating the collection of biometric identifiers, including scans of hands, fingers, voices, faces, irises, and retinas. The laws generally require that businesses collecting biometric identifiers specify how they safeguard, handle, store, and destroy the data they collect, and that they provide individuals with prior notice and obtain their consent, including notice of how exactly the data will be collected and used. In addition, New York, California, Washington, and Arkansas have recently amended their existing state laws to include biometric data in the definition of protected personal information. To the extent that employers use facial or voice recognition software to analyze applicants’ video interviews, they may have to develop policies to ensure that their storage and use of that data complies with applicable state laws. Moreover, the nature of an online application process means that employers may end up inadvertently collecting biometric data from individuals who reside outside of the states in which the company normally operates, which could expose the employer to additional legal requirements of which it might not be aware.

Many States Are Now Focused on Protecting Job Applicants Regarding the Use of A.I. in Hiring

While A.I. in recruiting is not regulated on a federal level, Illinois recently enacted a first-of-its-kind law called the Artificial Intelligence Video Interview Act. Effective January 1, 2020, the law imposes strict limitations on employers who use A.I. to analyze candidate video interviews.[19] Under the Act, employers must: a) notify applicants that A.I. will be used in their video interviews; b) obtain consent to use A.I. in each candidate’s evaluation; c) explain to applicants how the A.I. works and what characteristics the A.I. will track in relation to their fitness for the position; d) limit sharing of the video interview to those who have the requisite expertise to evaluate the candidate; and e) comply with an applicant’s request to destroy his or her video within 30 days.

New York City is currently considering legislation to limit the discriminatory use of A.I. technology. If passed, the new law would prohibit the sale of “automated employment decision tools” unless the tools’ developers first conducted anti-bias audits to assess the tools’ predicted compliance with the provisions of Section 8-107 of the New York City Administrative Code, which sets forth the city’s employment discrimination laws and prohibits, among other things, employment practices that disparately impact protected applicants or workers.[20] New Jersey and Washington state legislators introduced similar legislation in 2019.

Furthermore, beginning in 2018, New York, Vermont, and Alabama created task forces to study the development and use of A.I. technologies. The states directed the task forces to assess A.I. tools against benchmarks such as discriminatory impact, fairness, accountability, and transparency, and to develop best practices for A.I. usage. These efforts to examine A.I. tools in depth could foreshadow upcoming state regulation of A.I.-driven pre-employment tools.

State legislatures are not the only ones scrutinizing A.I. usage in recruiting. Members of the Senate and House introduced the Algorithmic Accountability Act (“AAA”) in April 2019.[21] The proposed AAA would be the first federal law aimed at regulating the use of algorithms by private companies, and would task the Federal Trade Commission with creating regulations that require major employers to assess their A.I. tools for accuracy, fairness, bias, discrimination, privacy, and security, and to implement timely corrections. As drafted, the AAA applies only to companies with annual revenues in excess of $50 million, companies that possess information relating to at least one million people or devices, and data brokers that buy and sell consumer data. Commentators have stated that the proposed act provides clear notice that Congress believes A.I. should be regulated and intends to step in.[22]

What Employers Should Be Aware of When Considering Using A.I. in Hiring

Just as COVID-19 has accelerated the transition of many employers to flexible work schedules, the nationwide move to more regular work-from-home arrangements is likely to accelerate the adoption of A.I. tools in the recruiting, interviewing, and hiring process. To the extent that employers are considering using such tools, either in-house or through a recruiting company, there are certain issues of which they should be cognizant:

  • Employers should know the factors being considered by the program or algorithm. In much the same way that employers carefully develop and identify non-discriminatory and non-biased factors and considerations that are important to their traditional hiring decisions, they need to be equally diligent in developing and modifying (where appropriate) the inputs that are fed into the recruiting programs and algorithms used to screen and evaluate potential candidates and applicants. Not only will this enhance the likelihood of recruiting success, but it will also give employers the opportunity to assess whether the factors are, in fact, job-related, which is a linchpin criterion under many employment laws.
  • Employers should consider auditing automated tools on a regular basis. One of the main selling points for machine learning tools is that they can adapt on their own to feedback from the person making employment decisions, theoretically leading to better results the longer they are used. The downside of this constant adaptation is that employers cannot rely on an initial analysis of whether the program is returning results that may disadvantage one group or another. Employers should consider regularly auditing the results produced by these tools to ensure that the programs are not inadvertently “learning” illegal or improper lessons from the information that is input; a simple selection-rate audit, like the one sketched after this list, is one starting point. Self-critical analysis of both the inputs and outputs is essential to minimize liability risk under the employment laws.
  • Outsourcing does not eliminate risk to employers. Not all employers have the capability of internally developing A.I. tools for recruiting—many likely contract with outside vendors to handle parts of the recruiting process, particularly the initial vetting of applicants and/or the advertising to specific potential candidates. Such an arrangement, however, does not exempt the employer from liability if the vendor is using tools that discriminate against protected groups. As with requests for salary history and background checks, employers may be held liable for violations of employment laws committed by recruiting companies. As such, employers—through appropriate contract language—should require their recruiters, or others acting on their behalf, to comply with all existing employment laws in connection with the screening and hiring of job applicants.
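
As one illustration of what a periodic audit might look like in practice, the Python sketch below computes selection rates by group and compares them using the “four-fifths” rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures. The data, group labels, and column layout are illustrative assumptions only; a ratio below 0.80 is a rough flag for further review, not a legal conclusion.

```python
# Minimal sketch of a periodic adverse-impact audit of an automated screening
# tool. Group labels and data are illustrative assumptions; the 80% threshold
# reflects the EEOC's "four-fifths" rule of thumb, a screening heuristic
# rather than a definitive legal test.
from collections import defaultdict

# Each record: (self-reported group, whether the tool advanced the applicant)
screening_log = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

advanced = defaultdict(int)
total = defaultdict(int)
for group, passed in screening_log:
    total[group] += 1
    advanced[group] += int(passed)

rates = {group: advanced[group] / total[group] for group in total}
highest = max(rates.values())

# Compare each group's selection rate to the highest-selected group's rate.
for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

In a real audit, counsel and statisticians would typically look beyond this heuristic to sample sizes, statistical significance, and the job-relatedness of the factors driving any disparity.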

[1] Summer Associates Rund Khayyat and Kate Waterman assisted in the drafting of this article.

[2] Petr J. Mateyka, Melanie Rapino, and Liana Christin Landivar, “Home-Based Workers in the United States: 2010,” U.S. Census Bureau, Current Population Reports, 2012.

[3] See https://news.gallup.com/poll/306695/workers-discovering-affinity-remote-work.aspx.

[4] See https://www.pwc.com/us/en/library/covid-19/us-remote-work-survey.html.

[5] See https://www.businessinsider.com/hirevue-ai-powered-job-interview-platform-2017-8.

[6] See https://qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased/.

[7] See https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

[8] See https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/.

[9] See, e.g., Arlington Heights v. Metropolitan Housing Dev. Corp., 429 U.S. 252, 265-266, 97 S.Ct. 555, 563-565, 50 L.Ed.2d 450 (1977); see also Kimble v. Wisconsin Dep’t of Workforce Dev., 690 F. Supp. 2d 765, 778 (E.D. Wis. 2010) (holding plaintiff established prima facie discrimination case by relying on evidence of employer’s implicit bias).

[10] 401 U.S. 424 (1971).

[11] 422 U.S. 405 (1975).

[12] Equal Employment Opportunity Commission (EEOC), Enforcement Guidance on the ADA and Psychiatric Disabilities (1997), https://www.eeoc.gov/laws/guidance/enforcement-guidance-ada-and-psychiatric-disabilities.

[13] U.S. Equal Employment Opportunity Commission, Press Release: Use of Big Data Has Implications for Equal Employment Opportunity, Panel Tells EEOC (Oct. 13, 2016), https://www.eeoc.gov/eeoc/newsroom/release/10-13-16.cfm.

[14] Senators Kamala Harris, Patty Murray, Elizabeth Warren, Letter to the U.S. Equal Employment Opportunity Commission, https://www.scribd.com/embeds/388920670/content#from_embed.

[15] See Commc’ns Workers of Am. v. T-Mobile US Inc., 5:17-CV-07232 (N.D. Cal. 2017); see also Mindy Weinstein, U.S. Equal Employment Opportunity Commission Determination Letters, available at https://www.onlineagediscrimination.com/sites/default/files/documents/eeoc-determinations.pdf.

[16] Ill. Biometric Information Privacy Act, 740 ILCS 14/1 et seq. (2008).

[17] TX Bus. & Com. Code §503.001 (2009).

[18] Wash. Rev. Code Ann. §19.375.020 (2017) (prohibiting companies from entering biometric data into a database without prior notice and consent).

[19] 820 ILL. Comp. Stat. Ann. 42/1.

[20] Int 1894-2020 (N.Y. 2020).

[21] H.R.2231, 116th Cong. (2019); S. 1108, 116th Cong. (2019).

[22] See, e.g., Tom Starner, AI Can Deliver Recruiting Rewards, but at What Legal Risk?, Human Resource Executive, Dec. 31, 2019, https://hrexecutive.com/ai-can-deliver-recruiting-rewards-but-at-what-legal-risk/.
