Early this year, the New Jersey Office of the Attorney General and Division on Civil Rights (the “DCR”) issued new guidance (the “Guidance”) addressing how the New Jersey Law Against Discrimination (the “LAD”) applies to “automated decision-making tools.” Artificial intelligence (“AI”) is one common form of these decision-making tools that employers, housing providers, places of public accommodation, and other entities covered under the LAD have begun using to make key decisions.
While these tools carry potential benefits for regulated entities and the public, the DCR warns that they also carry potential risks. Specifically, the Guidance cautions employers and other entities covered under the LAD that if these tools are not designed and implemented responsibly, they can result in unlawful algorithmic discrimination (i.e., discrimination that results from the use of automated decision-making tools).
In light of the growing prominence of AI, the DCR clarifies that any employer that engages in algorithmic discrimination may be held liable for violating the LAD regardless of intent. Additionally, employers may be liable for algorithmic discrimination even if they use an automated decision-making tool they did not develop.
Key Takeaways from the Guidance
Automated Decision-Making Tools. The DCR defines the term “automated decision-making tools” as any technological tool, including but not limited to a software tool, system, or process, that is used to automate all or part of the human decision-making process. Automated decision-making tools include AI, machine-learning models, traditional statistical tools, and decision trees. These tools are often used to help determine who views a job advertisement, as well as whether an employee receives a raise or promotion or is demoted or terminated.
Many automated decision-making tools accomplish their tasks by using algorithms, or sets of instructions, to achieve a desired outcome. These algorithms analyze data, uncover correlations, make predictions and recommendations, and/or generate new data. In doing so, however, automated decision-making tools can create classes of individuals who will be either advantaged or disadvantaged in ways that may exclude or negatively impact them based on their protected characteristics (e.g., race, religion, age, national origin, sex, sexual orientation, gender identity, disability status, etc.).
Designing, Training, and Implementing AI and Other Tools. The DCR identifies three areas where decision-making tools can lead to discrimination: designing the tool, training the tool, and deploying the tool.
As the Guidance recognizes, the choices a developer makes when designing an automated decision-making tool can skew the tool and its outcomes, either purposefully or inadvertently. Decisions regarding the output the tool provides, the model or algorithm the tool uses, and what inputs the tool assesses can introduce bias into the tool.
According to the Guidance, automated decision-making tools must be “trained” before they are ready for real-world application. In the case of AI, training often occurs by exposing the tool to training data from which the tool learns correlations or rules. The training data may reflect the developer’s own biases, or it may reflect institutional and systemic inequities. Accordingly, the tool can become biased if the training data is skewed or unrepresentative, lacks variation, reflects historical bias, is disconnected from the context in which the tool will be deployed, is artificially generated by another automated decision-making tool, or contains errors.
When an automated decision-making tool is ultimately deployed, algorithmic discrimination may occur for a number of reasons. The tool can be used in a purposefully discriminatory manner by applying it to assess members of one protected class but not another. It can also be used to make decisions it was not designed to make, which can amplify any bias in the tool as well as systemic inequities that exist outside of it. For example, if the automated decision-making tool is designed and trained for recruiting, the employer should not then use the tool for onboarding.
In sum, the Guidance explains that bias can be introduced into automated decision-making tools if systemic inequalities based on protected characteristics are not accounted for when designing, training, and deploying the tools.
The LAD Prohibits Algorithmic Discrimination in All Forms. The Guidance provides that the LAD prohibits all forms of discrimination based on actual or perceived protected characteristics, including discrimination caused by automated decision-making tools. Within the Guidance, the DCR makes clear that employers and other covered entities are not immunized by the use of these tools, even if they have no intent to discriminate and a third party was responsible for developing the tool.
The Guidance explains that automated decision-making tools may result in disparate treatment discrimination, which is when an employer treats an applicant or employee differently because of their membership in a protected class. These tools may also cause disparate impact discrimination, which is when an employer’s actions have a disproportionately negative effect on members of a protected class.
Additionally, algorithmic discrimination may occur if an automated decision-making tool precludes or impedes the LAD’s provisions regarding reasonable accommodations for a person’s disability, religion, pregnancy, or breastfeeding status. For example, if an employer uses a tool to monitor and track the productivity of its employees and the tool is programmed to flag atypical or unsanctioned breaks but is not programmed to consider reasonable accommodations, the tool may disproportionately flag for discipline employees who are allowed additional break time to accommodate a disability or need for milk expression. The employer may violate the LAD if it accepts the recommendation from the tool to discipline these employees.
Recommendations for Employers
The Guidance makes abundantly clear that an employer cannot defend a discrimination claim on the ground that it did not create, or does not understand, the automated decision-making tool it uses. Accordingly, employers should keep the following recommended practices in mind when using such tools:
- Do Your Due Diligence. Employers must ask their vendors the right questions to ensure they understand the tool they are using, how it was developed and trained, and its real-world outcomes. Employers should also carefully consider whether it is appropriate to use an automated decision-making tool that has not been used by other companies in the past.
- Audit the Tool. After implementing the tool, an employer should periodically audit the results to ensure the tool is not disproportionately affecting members of protected classes (a minimal illustration of such a check appears after this list). The employer’s legal counsel should be involved in the auditing process, as their involvement may provide various strategic benefits, such as application of the attorney-client privilege to the conclusions and outcomes of the audit.
- Training is Key. The relevant employees should be trained to use the tool in the appropriate manner and should know whom to contact within the organization if concerns arise.
- Stay Tuned for Further Guidance. The use of automated decision-making tools and AI in the workplace is rapidly evolving, so employers should stay up-to-date with any applicable legal developments.
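
To illustrate what a basic statistical audit of this kind might look like, the sketch below computes selection rates by group and compares each group against the group with the highest rate, using the familiar “four-fifths” rule of thumb purely as an illustrative screening benchmark. The group names, data, and threshold are hypothetical; the DCR Guidance does not prescribe any particular audit methodology, and any real audit should be designed and reviewed with counsel.

```python
from collections import defaultdict

# Hypothetical audit records: (group, selected) pairs reflecting the tool's outcomes.
# In practice, these would come from the employer's own applicant or employee data.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Four-fifths (80%) rule of thumb, used here only as an illustrative screen;
# it is not a legal standard under the LAD.
THRESHOLD = 0.8

def selection_rates(records):
    """Return the selection rate (selected / total) for each group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def flag_disparities(rates, threshold=THRESHOLD):
    """Flag groups whose selection rate is below threshold x the highest group's rate."""
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()
            if rate / highest < threshold}

rates = selection_rates(records)
print("Selection rates by group:", rates)
print("Groups flagged for further review:", flag_disparities(rates))
```

A flag from a screen like this is only a starting point for further review with counsel, not a conclusion that the tool is discriminatory or that the LAD has been violated.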