The Dangers of Artificial Intelligence in Employment Decisions

By Gwyn Powers

Artificial intelligence (“A.I.”) has become increasingly pervasive in our society, especially over the last decade and during the COVID-19 pandemic.[1] Companies are using A.I. and data analytics to understand their customers and optimize their supply chains.[2] For example, Frito-Lay created an e-commerce website, Snacks.com, during the pandemic and used its data “to predict store openings [and] shifts in demand due to return to work[.]”[3] Companies are not limiting their use of A.I. to measuring productivity and predicting the next chip flavor; human resources departments have used A.I. to help with resume screening since the mid-2010s.[4] One of the major concerns with using A.I. in the hiring process is the potential for discrimination resulting from implicit bias.[5]

In 2014, Amazon implemented an A.I. job recruiting program that produced a five-star ranking of job candidates based on their resumes.[6] The engineering team for the project created 500 computer models based on specific job functions, locations, and patterns drawn from resumes submitted to Amazon over a ten-year period.[7] However, the A.I. system taught itself to discount skills found on common software developer resumes, such as computer coding, and to prefer male candidates.[8] In fact, the machine lowered the ratings of resumes that included the word “women’s,” as well as those of candidates who attended all-women’s colleges.[9] This is because the resumes the algorithm learned from came primarily from male candidates, since men make up a majority of the tech industry.[10] Ultimately, Amazon ended its A.I. recruiting program a year later due to various problems, such as recommending unqualified candidates for positions.[11] As of 2018, Amazon used A.I. in its hiring process only to perform simple tasks and functions, like removing duplicate candidate applications.[12]
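To make the mechanism concrete, the short sketch below is a purely hypothetical illustration (written in Python with scikit-learn; the resumes, hiring outcomes, and the “women’s” proxy term are invented for this example and are not Amazon’s data or code). It shows how a model trained on historically skewed hiring decisions can learn to penalize a term that merely correlates with gender:

```python
# Hypothetical illustration only: a resume-scoring model trained on skewed
# historical outcomes learns a negative weight on a gender-correlated term.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented historical resumes and past hiring outcomes (1 = hired).
# Because past hires skewed toward one group, the word "women's" appears
# only on resumes that were historically rejected.
resumes = [
    "python java software engineer chess club captain",
    "java distributed systems software engineer",
    "python machine learning women's chess club captain",
    "software engineer women's coding society lead",
]
hired = [1, 1, 0, 0]  # skewed historical outcomes, not a measure of merit

vectorizer = CountVectorizer()          # bag-of-words features from resume text
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight on the proxy token "women" comes out negative for this
# toy data: the model reproduces, rather than removes, the historical bias.
idx = vectorizer.vocabulary_["women"]
print("weight on 'women':", model.coef_[0][idx])
```

The point of the sketch is not the particular algorithm; any model optimized to match past hiring decisions will absorb whatever patterns, including biased ones, those decisions contain.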

As seen with Amazon, A.I. hiring programs look for candidates who match a specific group of people, typically a company’s current employees, using qualities that the humans creating the computer models have deemed desirable.[13] While efficient, these computer models have been criticized for not removing human bias from the hiring process but “merely laundering it through software.”[14] This human bias in A.I. systems, whether intentional or not, can leave employers open to liability under Title VII of the Civil Rights Act (“Title VII”).[15] Under Title VII, an employee can file a claim against their employer based on disparate treatment and/or disparate impact. An employee can file a disparate treatment claim based on unintentional bias.[16] Since an A.I. program can carry an unintentional bias because of its programming, a court may find an employer liable for the program’s implicit bias, just as the employer would be liable for its own hiring decision based on its own implicit bias.[17] Additionally, if the A.I. program adversely impacts a member of a protected class, the employer could face a disparate impact claim.[18] Similarly, if an algorithm identifies a candidate’s physical disability or mental condition, the employer could face penalties under the Americans with Disabilities Act (“ADA”).[19]

In 2021, the Equal Employment Opportunity Commission (“EEOC”) launched an agency-wide initiative on Artificial Intelligence and Algorithmic Fairness.[20] The initiative aims to ensure that companies comply with federal civil rights laws when using A.I. in their hiring and employment decisions.[21] In May 2022, the EEOC released guidelines for employers on how to comply with the ADA when using A.I. programs in decision-making.[22] The guidelines also include a section advising employees and job applicants on what to do if they believe their employer violated their rights through algorithmic decision-making.[23]

The EEOC is not the only actor attempting to regulate a rapidly growing industry; several state and city legislatures are enacting A.I. regulatory laws. For instance, in November 2021, the New York City Council passed a bill that requires an annual bias audit of any A.I. hiring tool sold in the city.[24] The law also requires employers to notify job candidates and employees who live in New York City when an A.I. program has been used in any employment decision, and the employee or applicant can then request an alternative decision process.[25] Additionally, the new law imposes penalties on employers who do not comply with the statute.[26]

While the EEOC has not yet filed a claim against a company for violating the ADA or Title VII by using A.I. to make employment decisions, the new guidelines signal the federal government’s intent to protect citizens’ rights in a world of rapid technological growth. However, the courts will have to determine whether the biases in an A.I.’s learning system are enough to violate Title VII or the ADA.

Image Source: https://www.lomalindaca.gov/our_city/departments/administration/human_resources

[1] Jack Clark & Daniel Zhang, The 2021 AI Index: Major Growth Despite the Pandemic, Stan. Univ. Hum.-Ctr. A.I. (Mar. 3, 2021), https://hai.stanford.edu/news/2021-ai-index-major-growth-despite-pandemic.

[2] Joe McKendrick, AI Adoption Skyrocketed Over the Last 18 Months, Harv. Bus. Rev. (Sept. 27, 2021), https://hbr.org/2021/09/ai-adoption-skyrocketed-over-the-last-18-months.

[3] Id.

[4] Jeffrey Dastin, Amazon scraps secret AI recruiting tool that showed bias against women, Reuters (Oct. 10, 2018), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

[5] Rachel Goodman, Why Amazon’s Automated Hiring Tool Discriminated Against Women, Am. C.L. Union (Oct. 12, 2018), https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against.

[6] Dastin, supra note 4.

[7] Id.

[8] Id.

[9] Dastin, supra note 4.

[10] Id.

[11] Id.

[12] Id.

[13] Goodman, supra note 5.

[14] Id.

[15] Gary D. Friedman & Thomas McCarthy, Employment Law Red Flags in the Use of Artificial Intelligence in Hiring, Am. Bar Ass’n: Bus. L. Today (Oct. 1, 2020), https://www.americanbar.org/groups/business_law/publications/blt/2020/10/ai-in-hiring/.

[16] Village of Arlington Heights v. Metro. Hous. Dev. Corp., 429 U.S. 252, 265–66 (1977).

[17] Friedman & McCarthy, supra note 15.

[18] Id.

[19] Id.

[20] Press Release, EEOC, EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness (Oct. 28, 2021), https://www.eeoc.gov/newsroom/eeoc-launches-initiative-artificial-intelligence-and-algorithmic-fairness.

[21] Id.

[22] Press Release, EEOC, U.S. EEOC and U.S. Dep’t of Just. Warn Against Disability Discrimination (May 5, 2022), https://www.eeoc.gov/newsroom/us-eeoc-and-us-department-justice-warn-against-disability-discrimination.

[23] Id.

[24] N.Y.C. Admin. Code § 20-871(1)–(2).

[25] Id. § 20-871(2)(b)(1).

[26] Id. § 20-872(a).