Labor Department Provides Employers with New 10-Step Roadmap to Avoid AI Hiring Discrimination

Fisher Phillips

Federal workplace officials just unveiled a new website guiding employers on best practices to avoid artificial intelligence discrimination during the hiring process, including a roadmap of 10 actions you should consider taking if you want to stay compliant. While the September 24 release from the Department of Labor doesn’t carry the force of law, employers that follow these steps will not only gain a competitive advantage in hiring but will also be positioned to raise a solid defense if ever accused of AI discrimination.

Quick Background

The Partnership on Employment & Accessible Technology (PEAT), funded by the DOL’s Office of Disability Employment Policy, rolled out the AI & Inclusive Hiring Framework website earlier this week. It is the latest concrete step taken by federal officials under the Biden Administration’s 2023 Executive Order, which tasked each federal agency with tackling pressing AI-related issues.

Some Suggestions Before You Dive In

The guidance specifically encourages employers to pace themselves and take time to implement the roadmap – there is no need to roll out all the changes at once; instead, employers should treat it as a progressive effort that evolves over time. You can also skip around rather than plowing through the list in numerical order. The guidance advises you to focus on your own organization when implementing the recommendations, considering which steps would be most useful to your business right now and starting with those that are easiest to implement.

10-Step Roadmap

The DOL’s website contains a comprehensive list of considerations and action items to take into account when implementing these focus areas. This Insight provides a general overview of each area of the 10-step roadmap.

1. Identify Legal Requirements

You should first identify the employment nondiscrimination, accessibility, and privacy laws and regulations that apply to your use of artificial intelligence (AI) hiring technology. The Fisher Phillips AI, Data, and Analytics collection of insights and resources is a good place to start. Once you have a solid understanding of your obligations, you should align your risk management efforts with the current legal standards at play.

2. Establish Staff Roles

You’ll next consider the roles and responsibilities of workers in your organization relating to AI deployment. You should ensure they have the resources needed to maximize AI’s effectiveness while staying legally compliant. Training is the best place to start: train your team members so they understand the proper guardrails and the tools available to them. Your organizational leadership also must be fully engaged, invested, and ready to take responsibility for the overall direction you take with AI. This builds an inherent sense of accountability within your organization. The guidance recommends you involve a diverse and interdisciplinary team of workers while developing your plans.

3. Inventory Your Technology

The guidance recommends you collect information from your vendors about the AI hiring technology you plan to deploy. This is especially critical for any third-party AI applications. You should add this information to your system inventory – along with details about the technology’s intended use, anticipated benefits, accessibility conformance status, usability reports, risk classification, deployment status, and scope.
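To make the inventory concrete, here is a minimal sketch of what one inventory record might look like if tracked in a simple Python script. The field names follow the guidance’s list, but the tool, vendor, and all values are hypothetical.

```python
# Hypothetical sketch: a simple internal record for an AI system inventory.
# The field names mirror the details the DOL guidance suggests tracking;
# the structure itself is our illustration, not part of the framework.
from dataclasses import dataclass

@dataclass
class AIHiringToolRecord:
    vendor: str
    tool_name: str
    intended_use: str
    anticipated_benefits: list[str]
    accessibility_conformance: str  # e.g., a WCAG conformance statement
    usability_reports: list[str]    # links or file references
    risk_classification: str        # e.g., "high" / "medium" / "low"
    deployment_status: str          # e.g., "pilot" / "production" / "retired"
    scope: str                      # which roles or hiring stages it touches

# Example entry (all values illustrative):
record = AIHiringToolRecord(
    vendor="Acme HR Tech",  # hypothetical vendor
    tool_name="ResumeRanker",
    intended_use="Initial resume screening for engineering roles",
    anticipated_benefits=["faster triage", "consistent criteria"],
    accessibility_conformance="WCAG 2.1 AA statement on file",
    usability_reports=["2024-q3-usability-report.pdf"],
    risk_classification="high",
    deployment_status="pilot",
    scope="US engineering requisitions only",
)
```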

4. Work With Vendors

Working with third parties is a critical part of any AI initiative. The guidance makes clear that you must actively engage with your vendors to ensure compliance. You should first develop consistent policies and procedures to ensure your work with AI vendors and other third parties is approached in a thoughtful manner. It’s critical for you to know which questions to ask during the procurement and implementation phases, so work with your counsel to make sure you feel comfortable doing so. You will need to identify the risks raised by the deployment of third-party technology – and then develop risk controls to ensure responsible use.

5. Assess AI’s Impacts – Both Positive and Negative

Fostering a strong risk culture – the established set of norms, attitudes, and behaviors related to awareness, management, and controls of risks – is an important component of any new process, and your AI hiring initiatives are no different. The guidance recommends you start by developing policies and practices for accountability, and then conducting assessments of the positive and negative impacts of the AI hiring technology. You will need to truly examine all angles of how the technology will impact your organization. The guidance recommends you collaborate with both internal staff and external sources – experts, independent assessors, and job seekers – to gather this information.

6. Provide Accommodations

This focus area could be semi-controversial. The guidance recommends you create a process for job seekers to request accommodations if they are not comfortable being subjected to the use of AI during the hiring process. Your process should ensure that your staff can offer, respond to, and promptly fulfill accommodation requests. It should also collect feedback on the accommodations process and on the requests that were fulfilled. Note, however, that no existing federal or state law explicitly requires you to automatically accommodate applicants in this manner upon request (proposed laws from Congress and in California both failed to even make it to a vote in 2024) – but you still may need to accommodate an applicant under existing ADA or state disability discrimination law. Check out some of the ways that AI hiring tools have come under attack for alleged discrimination and failure to accommodate.

7. Use Explainable AI to Aid Your Transparency Efforts

Transparency is a critical element of AI best practices, and the DOL puts this goal front and center in one of its focus areas. The guidance recommends you collect “explainable AI” statements – documentation of the methods and tools that help people understand how and why the AI models produce their outputs – and other supporting documentation from vendors to first understand how the specific technology works. It also suggests you develop accessible plain-language notices for external users that describe what the technology does and how you intend to use it. These notices will give users more information to help them make decisions about requesting accommodations, communicating about data privacy, and getting support if needed.
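As a minimal sketch of where such a statement might come from, the example below uses permutation importance (one generic explainability technique, not one the DOL prescribes) to draft the top factors behind a hypothetical scikit-learn screening model’s outputs. All feature names and data are illustrative.

```python
# Hypothetical sketch: drafting a plain-language explanation from model
# introspection. Assumes a scikit-learn classifier; permutation importance
# is one generic explainability technique, not one prescribed by the DOL.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Stand-in data; in practice these would be real applicant features.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["years_experience", "skills_match",
                 "assessment_score", "education_level"]

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by influence and draft a notice from the top factors.
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
print("This tool's recommendations are most influenced by:")
for name, score in ranked[:3]:
    print(f"  - {name} (relative influence: {score:.2f})")
```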

8. Ensure Human Oversight

Human oversight for AI processes and applications is critical. Every company should create effective human oversight policies and procedures with clear roles and responsibilities. These policies should establish clear guidelines for how workers can use the AI technology, ensure proficiency in human oversight, and measure human accountability. And then, of course, the guidance recommends you train your workers on the importance of human oversight and the best ways to ensure that humans remain in control of the AI process.

9. Manage Incidents

The guidance recognizes that problems will inevitably arise. It recommends taking a proactive approach so you have a ready framework for dealing with any problems. Anticipate the problems now, so you are ready to act effectively later. After defining what an “incident” might look like with your specific AI tools at your organization – failures, negative impacts, accessibility problems, negative user experience feedback – the DOL recommends you implement policies and procedures for handling any incidents. The DOL offers suggestions for how to regularly collect, measure, and record such incidents, and then for managing them as they arise.

10. Monitor Your AI Regularly

Finally, the guidance recommends you regularly monitor the performance of your AI hiring technology to help you assess its reliability and evaluate compliance risks. Many AI tools are probabilistic rather than deterministic and can be prone to errors. AI systems also change over time and are subject to phenomena like model and algorithm drift, so you must continuously monitor their performance. This requires collaboration with internal staff, vendors, and independent assessors. As the guidance explains, this collaboration will not only help you assess benefits and develop user communications, but also allow you to handle the technology’s configuration, replacement, decommissioning, and phaseout as necessary.
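As one illustration of what routine monitoring might involve, the minimal sketch below applies the EEOC’s four-fifths (80%) rule of thumb to a tool’s selection rates by group. This is a common adverse-impact screen, though not one the framework itself mandates, and the numbers are illustrative.

```python
# Hypothetical sketch: a recurring check of selection rates by group using
# the EEOC's four-fifths (80%) rule of thumb for adverse impact. This is
# one illustrative monitoring metric; the framework does not mandate it.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants if applicants else 0.0

def impact_ratios(group_counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate against the highest-rate group.

    group_counts maps group name -> (number selected, number of applicants).
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in group_counts.items()}
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

# Illustrative monthly numbers from an AI screening stage:
for group, ratio in impact_ratios({"group_a": (48, 100),
                                   "group_b": (30, 100)}).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

A ratio below 0.8 for any group would not prove discrimination on its own, but it is the kind of signal that should trigger the human review and incident-handling processes described above.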

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Fisher Phillips
