Employers, Get Ready for AI Risks

Many employers seeking to lower hiring costs and reduce potential discrimination claims have turned to artificial intelligence (AI) to locate talent, screen applicants, administer skills-based tests and even handle certain phases of the pre-hire interview process. While automating various aspects of the hiring process can eliminate the potential for intentional discrimination, discrimination can still occur when employers use tests or selection procedures that unintentionally exclude individuals based on one or more protected characteristics. This is known as “disparate impact” discrimination. For example, if an AI tool inadvertently screens out individuals with physical or mental disabilities because of difficulties they may have with AI processes, or if it poses questions that may be more familiar to one race, sex or cultural group than another, the result could be illegal disparate impact discrimination.

Recent guidance from the Equal Employment Opportunity Commission (EEOC) confirms that rooting out AI-based discrimination is among its top strategic priorities. This guidance also confirms that when such discrimination occurs, the EEOC will hold the employer, not the AI vendor, responsible. Accordingly, employers can be liable for back pay, front pay, emotional distress, and other compensatory damages for using AI-enabled tools that result in unintentional or inadvertent discrimination.

To reduce the risks associated with using AI tools in hiring and performance management, employers should question AI vendors about the diversity and anti-bias mechanisms built into their products and make sure they understand what the AI products measure and how they measure it. Employers should not only ask AI vendors about their performance statistics but also consider auditing their own company’s AI results annually. As an added protection, employers should include an indemnification provision in any contract with an AI vendor that protects the employer in case the vendor fails to design its AI in a manner that prevents actual and/or unintended bias.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Kaufman & Canoles

Written by:

Kaufman & Canoles