EEOC Issues Guidance for Use of Artificial Intelligence in Employment Selections

Polsinelli

So far in 2023, artificial intelligence (AI) has been at the leading edge of the technological revolution, as the potential applications for tools like ChatGPT have drawn considerable buzz.  In April 2023, we reported on New York City’s first-in-the-nation ordinance imposing notice and audit requirements on the use of artificial intelligence tools by employers.  More recently, the Equal Employment Opportunity Commission (EEOC) issued two guidance documents addressing AI in the HR context, specifically tackling the issues of adverse impact and disability accommodations.

Employers are increasingly using AI and machine learning (ML) tools to help optimize employment decisions like hiring, promotions, and terminations.  Some examples of these tools identified in the EEOC guidance include resume scanners that identify promising candidates, employee monitoring software that rates employees based on productivity metrics, virtual assistants or chatbots that question applicants about their qualifications, video interviewing software that evaluates facial expressions and speech patterns, and testing software that provides job or cultural fit scores.  Generally, an AI/ML tool is one that relies wholly or partially on a computerized analysis of data to make employment decisions.  As with many new technologies, however, adoption can outpace legal compliance.  Employers will have to consider the implications of these tools under both new laws (like New York City’s) and longstanding laws like those administered by the EEOC.

EEOC’s first guidance document addressed the employer’s obligation to ensure that AI/ML tools used in employment selection procedures do not adversely impact protected classes under Title VII (e.g., gender, race).  An AI/ML tool that has a “substantial” disproportionate impact on a protected class may be discriminatory if it is not job-related and consistent with business necessity, or if less discriminatory alternatives are available.  An adverse impact can occur if a tool awards higher ratings to, or is more likely to select or reject, members of a certain protected class in comparison to other protected classes.  As a general rule of thumb (though not a legal safe harbor), the guidance points to EEOC’s longstanding “four-fifths rule”: a selection rate for one group that is less than 80% of the rate for the most-selected group may indicate a substantial disparity.  A few important takeaways from EEOC’s guidance on adverse impact:

  • Employers may be responsible for the effect of third-party software.  EEOC’s guidance signals the agency will look to hold employers responsible for adverse impact even if the AI/ML tool in question is third-party software the employer did not develop.  The guidance states that this responsibility can arise from either the employer’s own administration of the software or a vendor’s administration as an agent of the employer.

  • Employers rely on vendor assurances at their own risk.  Although EEOC encourages employers to “at a minimum” ask their AI/ML software vendors about steps taken to assess adverse impact, EEOC’s position is that reliance on the vendor’s assurances is not necessarily a shield from liability.  Employers still face liability “if the vendor is incorrect about its own assessment.”

  • Self-audits are advisable.  Because vendor assurances are not a safe harbor, employers are best served by periodically auditing how the AI/ML tools they use affect different groups (a simplified illustration appears below).  To conduct such an audit, employers need access to the AI/ML tool’s underlying data, which is best secured at the time the tool is implemented.
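
For illustration only, here is a minimal sketch (in Python) of the kind of selection-rate comparison such a self-audit might involve, using EEOC’s “four-fifths” rule of thumb as a screening threshold.  The data, group labels, and function name are hypothetical; a flagged ratio is a prompt for further legal and statistical analysis, not a finding of discrimination.

```python
from collections import Counter

def adverse_impact_ratios(records, threshold=0.80):
    """Compare each group's selection rate against the most-selected group.

    `records` is an iterable of (group, selected) pairs, where `selected`
    is True if the tool advanced that applicant.  The 0.80 default reflects
    EEOC's "four-fifths" rule of thumb; it is a screening heuristic, not a
    legal safe harbor.
    """
    applicants, chosen = Counter(), Counter()
    for group, selected in records:
        applicants[group] += 1
        if selected:
            chosen[group] += 1

    rates = {g: chosen[g] / applicants[g] for g in applicants}
    top_rate = max(rates.values())  # highest selection rate of any group
    return {
        g: {
            "selection_rate": round(rate, 3),
            "impact_ratio": round(rate / top_rate, 3),
            "flagged": rate / top_rate < threshold,
        }
        for g, rate in rates.items()
    }

# Hypothetical audit data: 100 applicants per group.
sample = (
    [("Group A", True)] * 48 + [("Group A", False)] * 52
    + [("Group B", True)] * 30 + [("Group B", False)] * 70
)
print(adverse_impact_ratios(sample))
# Group B's rate (0.30) is 62.5% of Group A's (0.48), so it is flagged.
```

Note that the four-fifths rule is only a screening device; EEOC cautions that smaller disparities may still be unlawful in particular circumstances.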

EEOC’s second guidance document addressed the impact of AI/ML tools on individuals with disabilities under the Americans with Disabilities Act (ADA).  The ADA guidance makes clear that this is an altogether different analysis from the Title VII adverse impact analysis described above.  Moreover, because of the individualized nature of disabilities and of the ADA reasonable accommodation analysis, validation of an AI/ML tool, or a statistical finding that the tool does not adversely impact individuals with disabilities generally, is not sufficient to ensure ADA compliance.  Instead, EEOC anticipates a more individualized process in which the employer assesses whether the limitations of a particular employee’s or applicant’s condition would cause that individual to be “screened out” or unfairly rated by the AI/ML tool.  EEOC’s guidance anticipates that employers, as a best practice, would provide relatively in-depth notice of the operation of AI/ML tools and the availability of alternative processes so that the accommodation process can occur.

AI/ML offers the potential to transform the workplace, among other business processes, by allowing employers to sort through vast quantities of data and quickly glean actionable insights.  However, EEOC and jurisdictions like New York City have identified the potential for discriminatory biases to be built into AI/ML algorithms, or for these algorithms to disadvantage individuals with disabilities.  To avoid running afoul of new laws designed to address AI/ML, as well as decades-old laws like Title VII and the ADA that nonetheless govern its use, employers should carefully review their processes for using these tools in the human resources and recruitment context.
