Artificial Intelligence, Hiring Practices & EPLI
Artificial intelligence (AI) has an increasing presence in the workplace as technology continues to evolve, offering new ways to ease administrative burdens for employers. Employers now have a wide variety of data-driven decision-making tools to assist with employment decisions such as recruitment, hiring, promotions and dismissals. Despite the many benefits of AI, employers risk inadvertently violating existing federal and state civil rights laws, which could lead to employment discrimination claims.
Artificial Intelligence and the Hiring Process
According to a survey conducted by the Society for Human Resource Management (SHRM), around 79% of employers were projected to use AI software in recruitment and hiring¹. Companies may rely on various software tools to scan resumes for relevant work experience or to monitor sites like LinkedIn for potential applicants the company can target directly. Some employers use virtual assistants or chatbots to screen job candidates for their qualifications, rejecting those who do not meet set requirements. Hiring platforms like HireVue, a video interviewing software, can evaluate candidates based on their facial expressions and speech patterns to determine whether an applicant will be a good fit for the organization.
Proponents of using AI software in the hiring process say that it allows employers to process more resumes and applications faster and helps reduce the biases that can arise during human interaction. Without adequate safeguards, however, employers also face risks under federal and state employment statutes.
Title VII of the Civil Rights Act of 1964 provides federal protections for employees and applicants against discrimination based on certain characteristics, including race, religion and gender. The Age Discrimination in Employment Act (ADEA) also prohibits age discrimination. However, an employer relying on AI to pre-screen applications could inadvertently disqualify an applicant based on a protected class. For example, an AI screening tool could disqualify applicants outside of a specific geographic radius, which might inadvertently discriminate against a particular racial or ethnic group².
Potential Exposures for Employers
In response, several state and city governments have attempted to regulate the use of artificial intelligence in hiring procedures. Illinois and Maryland have enacted laws limiting the use of specific AI technologies in the workplace, and New York City began enforcement of its new legislation, Local Law 144, meant to regulate the use of AI-driven tools in hiring and promotion decisions³. The NYC law requires employers to have independent auditors annually check AI software for biases and to notify candidates that an automated system is being used to aid the hiring process⁴. While this law only applies to companies with workers in NYC, labor experts expect it to influence other states and municipalities. At least four other jurisdictions – California, New Jersey, Vermont and the District of Columbia – are also working on laws to regulate AI in hiring⁵. As laws continue to evolve, plaintiffs can be expected to find new avenues to challenge companies’ employment processes.
As part of its Artificial Intelligence and Algorithmic Fairness Initiative, the Equal Employment Opportunity Commission (EEOC) issued guidance on the use of AI in employment-related decision-making, assessing whether these procedures may have a disproportionately discriminatory effect. The EEOC addressed the potential liability employers could face when using algorithmic tools designed or administered by another entity. Because employers cannot rely on a vendor’s representations to escape disparate-impact liability, they should take measures to prevent any inadvertent discrimination against job seekers and workers⁶.
As more employers use AI to assist in their employment decisions, we expect to see more litigation that specifically addresses AI in the context of discrimination claims. Recently, the EEOC settled its first-ever discrimination lawsuit involving AI hiring software, reaching a $365,000 settlement with a tutoring company that allegedly used recruitment software that automatically rejected older applicants⁷. The parties filed a Joint Notice of Settlement in August 2023 in the U.S. District Court for the Eastern District of New York, which is pending approval by the court⁸.
In another example, a class-action lawsuit⁹ was filed in April 2023 against CVS Health Corp. in Massachusetts by an applicant who was not hired after completing an AI-assisted video interview using HireVue¹⁰. A similar class-action suit¹¹ alleging discriminatory practices on the basis of race, age and disability was filed in February 2023 in California against Workday on behalf of prospective employees who were not hired as a result of an allegedly discriminatory screening process¹². While these lawsuits are still pending, the EEOC’s recent settlement highlights the potential legal and monetary risks.
How EPLI Can Help Limit Exposure
As more AI-related employment cases are filed, employers should understand the potential impact from both a business and an insurance perspective. Defending employment-related litigation can generate significant defense costs and drain the internal resources needed to address the matter. If news of an AI-related employment lawsuit is broadcast by media outlets, it can negatively impact a company’s brand and reputation.
Employers seeking to mitigate the risks of AI-related litigation can utilize Employment Practices Liability (EPL) insurance. Leading EPL insurers are now adjusting their underwriting protocols to account for how and where companies are using AI in their business practices. As the prevalence of AI software expands, companies renewing their EPL policies should anticipate additional questions related to the use of AI. In recent discussions, several top EPL carriers noted that maintaining indemnity agreements with outside AI software vendors does not insulate companies from legal action related to the use of AI software in the employment process.
As the EPL insurance market continues to monitor court filings addressing the use of AI in hiring, employers should assess the effects of these cases and consider the scope of their insurance policies to help address such litigation. Brown & Brown’s Risk Solutions Practice offers innovative strategies to companies seeking to address this area of concern with comprehensive and tailored insurance coverage solutions.
¹ Society for Human Resource Management, Automation & AI in HR (2022), https://advocacy.shrm.org/SHRM-2022-Automation-AI-Research.pdf.
² U.S. Equal Employment Opportunity Commission, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (2023).
³ Steve Lohr, A Hiring Law Blazes a Path for A.I. Regulation, The New York Times (May 25, 2023).
⁴ Jim Paretti et al., New York City Adopts Final Regulations on Use of AI in Hiring and Promotion, Extends Enforcement Date to July 5, 2023, Littler Mendelson P.C. (April 13, 2023).
⁵ Amber M. Rogers & Michael Reed, Discrimination in the Age of Artificial Intelligence, American Bar Association (2021).
⁶ Dabney D. Ware, Using AI to Help Hire May Seem Easy — Until the Legal Challenge Comes In, The National Law Review (June 9, 2023).
⁷ Annelise Gilbert, EEOC Settles First-of-Its-Kind AI Bias in Hiring Lawsuit, Bloomberg Law (August 10, 2023).
⁸ EEOC v. iTutorGroup Inc., E.D.N.Y., No. 22-cv-02565.
⁹ Baker v. CVS Health Corporation, D. Mass., No. 1:23-cv-11483.
¹⁰ Katie Johnston, A Milton resident’s lawsuit against CVS raises questions about the use of AI lie detectors in hiring, The Boston Globe (May 21, 2023).
¹¹ Mobley v. Workday, Inc., N.D. Cal., No. 23-cv-00770.
¹² Annelise Gilbert, Workday AI Biased Against Black, Older Applicants, Suit Says, Bloomberg Law (February 22, 2023).
Adenola Adedigba
Claims Analyst
Jane K. Hahn
Senior Managing Director
Christina Wunsch
Managing Director
Michael D’Ambrise
Senior Vice President