EEOC Has Entered the [AI] Chat

Clark Hill PLC
Contact

The U.S. Equal Employment Opportunity Commission (EEOC) has officially joined the conversation about employers’ use of artificial intelligence (AI). The EEOC published two guidance documents in support of its Artificial Intelligence and Algorithmic Fairness Initiative. In one guidance, the EEOC explained how employers can run afoul of the Americans with Disabilities Act (ADA) by using computer-based tools to make decisions about hiring, monitoring, compensation, or terms and conditions of employment. In its other guidance, the EEOC warned of possible adverse impacts arising from the use of AI in employment.

AI and the ADA

In its ADA Guidance, the EEOC identified three ways in which an employer’s use of algorithmic decision-making tools could violate the ADA.

The first risk identified was the failure to provide a reasonable accommodation to an applicant or employee who needs an accommodation in order to be fairly and accurately rated by the algorithm. For example, if an applicant has to watch a video as part of a mandatory personality test, a visually impaired applicant should be given an accommodation that does not require watching the video.

The second risk identified was the “screening out”—intentionally or unintentionally—of an individual with a disability even though the person is able to perform the job with a reasonable accommodation. For example, consider a candidate with PTSD who requires reasonable accommodations to assist him in dealing with distractions. If the personality test ranks the ability to work around distractions as an important attribute, this candidate could be unfairly screened out. If the applicant was able to perform the job with an accommodation, such as being permitted to wear noise-canceling headphones in the office, the use of the test to screen out the candidate could violate the ADA.

Third, an employer may violate the ADA if its tool makes disability-related inquiries or conducts medical examinations. In other words, employers may not use a computer-based tool prior to making a conditional offer of employment if the tool could be used to identify a candidate’s medical condition.

Best Practices for AI in Hiring

The EEOC’s guidance also includes some suggestions for how employers can avoid liability while using computer-based tools in the hiring process.

As explained in the Guidance, the provision of reasonable accommodations is critical. Staff should be trained to quickly identify and process a request for an accommodation. The EEOC also recommends that employers tell applicants in advance that reasonable accommodations are available and explain the process for requesting them. The EEOC notes that a request to retake a test in an alternative format may be a reasonable accommodation.

The EEOC also explains that the employer should have alternative means available to rate an applicant if the applicant would be unfairly disadvantaged by the use of a computer-based tool due to the applicant’s disability.

Interestingly, the EEOC encourages employers to pull the curtain back on the algorithms used to test candidates. Specifically, the EEOC suggests that employers reveal to candidates what specific traits the test is designed to assess, how the assessment is performed, and the variables or factors that could affect the rating.

Additionally, the EEOC advises employers to ask questions of the vendor before purchasing an algorithmic decision-making tool. At a minimum, the employer should have the vendor confirm that the tool does not ask applicants questions that are likely to elicit information about a disability or impairment.

Adverse-Impact Concerns

In its second AI guidance, the EEOC dealt with liability arising from tools that have a disproportionate impact on a protected group. The EEOC made clear that employers cannot safely avoid liability by relying on a vendor’s representations about the impact of its tool. The EEOC suggests that employers consider self-auditing how the tool impacts different groups. Such an analysis should occur on an ongoing basis to determine whether the use of the tool has a disproportionately large negative effect on a protected group. If an adverse impact is found to have occurred, the employer should proactively change the practice going forward.
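The EEOC’s Title VII guidance measures adverse impact by comparing selection rates across groups, with the longstanding “four-fifths rule” (a group’s selection rate below 80% of the most-favored group’s rate) serving as a common rule of thumb rather than a definitive legal test. A minimal sketch of what such a self-audit calculation might look like follows; the group labels, counts, and threshold here are purely illustrative assumptions, not real hiring data or a substitute for legal or statistical advice.

```python
# Illustrative self-audit sketch using the four-fifths rule of thumb:
# flag any group whose selection rate falls below 80% of the rate of
# the most-favored group. All figures below are hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (number_selected, number_of_applicants)}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Return {group: True} where the group's selection rate is below
    `threshold` times the highest group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Hypothetical tool outcomes for two applicant groups.
outcomes = {
    "group_a": (48, 100),  # 48% selected
    "group_b": (30, 100),  # 30% selected
}
print(adverse_impact_flags(outcomes))
# group_b's ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged.
```

Because the EEOC recommends that such analysis occur on an ongoing basis, a check like this would be re-run as new applicant data accumulates, and a flagged result would prompt further statistical and legal review rather than serving as a conclusion on its own.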

Conclusion

Like most technology, automated tools and AI in employment carry both risks and benefits. Employers will want to consider all of the potential implications before investing in AI tools for use in the workplace and remain diligent in ensuring such tools are not used in an unlawful manner or with an unlawful result. Experienced employment counsel can provide guidance now and as this area of the law continues to develop.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Clark Hill PLC

