EEOC Issues Guidance on Artificial Intelligence and Americans with Disabilities Act Considerations

Littler

On May 12, 2022, the U.S. Equal Employment Opportunity Commission (EEOC) issued a “Technical Assistance” (TA) document addressing compliance with the Americans with Disabilities Act (ADA) and agency policy when using AI and other software to hire and assess employees.  The agency also published a short “Tips for Workers” summary of this guidance.  Neither document has the force or effect of law, nor is either binding on employers; as the accompanying press release notes, the guidance is meant to be educational, “so that people with disabilities know their rights and employers can take action to avoid discrimination.”  Nevertheless, we see several takeaways regarding the Commission’s likely expectations and areas of focus when regulating the use of such tools in hiring or assessing employees:

  • Accessibility:  Employers should account for the fact that online/interactive tools may not be easily accessed or used by those with visual, auditory, or other impairments.
  • Accommodation:  Barring undue hardship, employers should provide alternatives to these tools if an individual’s disability makes a tool more difficult to use or its assessment less accurate.
  • Accommodation, II:  Beyond providing reasonable accommodations in accessing and using these tools, employers should ensure that the tools assess individuals in light of any reasonable accommodation they would likely receive when performing the job.
  • ADA vs. Title VII:  The EEOC stresses that disability bias requires different design and testing criteria than does Title VII discrimination, such as access considerations and the potential for inadvertent disability-related inquiries or medical examinations.
  • Promising Practices:  Noting that employers are responsible for ADA-violating outcomes even when a software tool is created or used by a third-party vendor or agent, the Commission provides examples of so-called “Promising Practices” that employers can engage in to demonstrate good-faith efforts to meet ADA requirements.

Throughout, the TA document uses various illustrative examples of the tools the EEOC aims to regulate.  These range from résumé scanners and virtual assistants/chatbots to video-interviewing software and software that tests an individual’s personality, aptitude, skills, and “perceived ‘cultural fit.’”  Employers using any of these tools in recruiting, hiring, and reviewing applicants and employees (by some estimates, up to 83% of employers) should take careful note of the EEOC’s position as to where these tools may run afoul of the ADA.

The TA document focuses broadly on three themes: how the use of algorithmic decision-making may violate the ADA with respect to (1) reasonable accommodation for applicants and employees; (2) the “screening out” of individuals with disabilities; and (3) restrictions on disability-related inquiries.  Key takeaways from each are discussed below.

Reasonable Accommodation.  Foremost, the EEOC stresses that an employer using AI or other algorithmic decision-making software is required to provide reasonable accommodation to employees whose disability may make it difficult to be assessed by the tool, or whose disability may cause the tool to rate them lower than it would a non-disabled employee.  The EEOC takes the unequivocal position that where a disability might make a test more difficult to take or reduce the accuracy of an assessment, an employer must provide an alternative testing format or a more accurate assessment of the individual’s skills, unless doing so would entail “undue hardship” (defined as “significant difficulty or expense,” and generally a very high bar for an employer to meet under the ADA).  By way of example, the EEOC offers that an employer using a test that requires a keyboard or trackpad to measure employee knowledge may need to provide an accessible version of the test to an employee with limited manual dexterity (or, where an accessible version is not possible, an alternative testing format).  Similarly, in its discussion of “screening out” applicants (detailed below), the agency cites tools that analyze speech patterns or rely on facial recognition, and the potential negative effect these may have on individuals with certain disabilities.  In line with its prior example, presumably the agency would take the position that an employer must provide such individuals with alternative testing methods where it can do so without undue hardship.  Finally, the guidance makes clear that where an employer uses a third party, such as a software vendor, to administer and score pre-employment tests, the vendor’s failure to provide a reasonable accommodation required by the ADA would likely result in the employer being liable, even if the employer was unaware that the applicant reported the need for an accommodation to the vendor.

“Screening Out.”  The bulk of the EEOC’s guidance focuses on the use of AI or other algorithmic tools that act to “screen out” individuals with disabilities, i.e., where such a tool causes an individual to receive a lower score or assessment and the individual loses a job opportunity as a result.  The guidance provides several examples, including a chatbot that screens out applicants with gaps in their employment history, the use of which may violate the ADA if the employment gap was due to a disability or the need to undergo treatment (the EEOC appears to ignore that many, if not most, employment gaps are not occasioned by a disability).  Perhaps a more common scenario is contemplated in the example of video software that analyzes speech patterns and may screen out individuals with speech impediments.

The guidance also explains at some length that the typical steps an employer using AI may take to ensure its use is non-discriminatory (such as testing a tool for disparate impact on the basis of race or sex, and modifying the tool to eliminate any such impact if found) may be insufficient to eliminate discrimination on the basis of disability, insofar as “[e]ach disability is unique,” and the fact that some individuals with disabilities fare well on a test does not mean that a particular individual with a disability will not be unlawfully screened out.  It goes on to state that even where a decision-making tool has been “validated” (meaning that there is evidence that the tool accurately measures or predicts a trait or characteristic relevant to a specific job), such validation may not be sufficient with respect to individuals with disabilities.  The EEOC cites, for example, a visual “memory test” that may be an accurate measure of memory for most individuals in the workforce but may still unlawfully screen out an individual who has a good memory yet a visual impairment that reduces their ability to perform successfully on the test.
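For context, the group-level disparate impact testing the EEOC references is commonly implemented as a selection-rate comparison under the “four-fifths” rule of thumb.  The sketch below is purely illustrative, with hypothetical data and function names of our own; neither the TA document nor the EEOC prescribes any particular implementation:

    # Illustrative sketch (hypothetical data and names): a "four-fifths rule"
    # selection-rate comparison, the kind of group-level disparate impact
    # test the EEOC references.  Not a compliance tool.

    def selection_rate(selected: int, applicants: int) -> float:
        # Fraction of a group's applicants whom the tool advanced.
        return selected / applicants if applicants else 0.0

    def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
        # Compare each group's selection rate to the highest group's rate.
        # A ratio below 0.8 is the conventional rule-of-thumb flag.
        rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
        top = max(rates.values())
        return {g: r / top for g, r in rates.items()}

    # Hypothetical applicant pools screened by an algorithmic tool.
    pools = {"group_a": (48, 100), "group_b": (30, 100)}
    for group, ratio in impact_ratios(pools).items():
        status = "potential adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({status})")

As the guidance emphasizes, however, a check of this kind aggregates outcomes across groups; because “[e]ach disability is unique,” a tool can pass such a group-level test and still unlawfully screen out a particular individual with a disability.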

The guidance also raises the concern that an algorithm may screen out an individual with a disability who can perform the essential functions of the job with reasonable accommodation, because the algorithm is programmed to predict whether applicants can do the job under “typical working conditions” and does not account for the possibility that an individual with a disability might be entitled to an accommodation such that they would not be performing under “typical” working conditions.  By way of example, it offers an individual with PTSD, who might be rated poorly by a test that measures the ability to ignore workplace distractions, without regard to the fact that such an individual may be entitled to an accommodation that would mitigate the effect of their disability (such as a quiet workstation or permission to use noise-cancelling headphones).

Disability-Related Inquiries.  Finally, the TA notes that the use of AI tools may violate the ADA where software poses “disability-related inquiries,” meaning questions that are likely to elicit information about a disability (directly or indirectly).  While it is unlikely that most screening tools will include such questions (for example, asking about an applicant’s workers’ compensation history), the EEOC warns that some seemingly innocuous questions may still run afoul of the ADA’s pre-offer limitation on medical inquiries, or act to “screen out” applicants or employees unlawfully.  The guidance notes that a personality test is not making a “disability-related inquiry” simply because it asks whether an individual is “described by friends as being generally optimistic,” even if being described in such a way may be related to a mental health diagnosis.  What the EEOC giveth with one hand, however, it appears to take away with the other:  as the agency explains, even if a question about “optimism” does not itself violate the ADA, an individual with major depressive disorder who answers negatively and loses an employment opportunity because of that answer may have been unlawfully “screened out” if the negative answer is a result of the individual’s mental health diagnosis.

The EEOC likewise provides no guidance with respect to the “résumé gap” it flags as a screening issue.  As mentioned above, the guidance notes that a chatbot or similar AI tool’s disqualifying an individual because of a gap in employment history may violate the ADA if the gap is due to a disability; left unclear is whether an invitation from the chatbot to explain any gap in employment history is itself a prohibited “disability-related inquiry.”  There are many reasons an applicant may have taken time away from the workforce, such that a broad inquiry should not be seen as disability-related; that said, the EEOC declined to provide any indication of its view on the question.

Practical Application.  The EEOC provides guidance and “promising practices” to employers seeking to use algorithmic tools, whether developed in-house or provided by a third-party vendor, to minimize the risk of violating the ADA.  These include asking:

  • If the tool requires applicants or employees to engage with a user interface, is that interface accessible to persons with disabilities?
  • Are materials presented in alternative formats?
  • Has the algorithm been assessed to determine whether it disadvantages individuals with disabilities?
  • Does the tool clearly indicate that reasonable accommodations, including alternative formats, are available to persons with disabilities?
  • Are there clear instructions for requesting accommodation?
  • Does the tool explain to applicants and employees what metrics the tool measures, how they are measured, and whether any disability might lower an assessment, such that a user with a disability would know to ask for a reasonable accommodation?

The EEOC’s guidance appears to raise more questions than it answers, in an area of law that is changing rapidly and already poses compliance challenges for employers.  Indeed, in many instances, it suggests that the ADA’s accommodation requirements and prohibition on unlawful screening may render the use of AI tools vastly more complicated and legally fraught.  This comes at a time when the use of such tools is increasing exponentially.

As the EEOC continues its AI initiative, we expect that the agency will provide further guidance to employers as to its view of how artificial intelligence and algorithmic decision-making interact with federal civil rights laws.  Moreover, as the composition of the Commission is likely to shift from a Republican majority to a Democratic majority no later than the end of the year, we expect the agency to ramp up its efforts to regulate in this space.  Littler’s Workplace Policy Institute will continue to keep readers apprised of relevant developments.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.
