In recent years, we have seen an increase in employers using artificial intelligence (AI) in the workplace to assist with decision-making and staff management across the life-cycle of the employment relationship, from recruitment through to termination. In this article, we focus primarily on the recruitment stage. We have seen AI used to recommend whether to shortlist a candidate for interview against parameters the AI has been instructed to look for. We are also aware of AI being used to give workers instructions for carrying out tasks, to monitor their performance against algorithmically generated targets, and to support biometrics, such as fingerprint or face scanning for employees entering the workplace.
Legislative framework
Legal frameworks underpinning the use of AI in employment are developing rapidly across the globe. Pending the adoption of AI-specific legislation, in most countries the field is regulated piecemeal by the relevant employment and data protection regulators.
Globally, there are big developments on the horizon. The European Union has agreed new legislation, the EU AI Act, governing how AI can be used and prohibiting uses whose risk is considered unacceptable. The use of AI in recruitment, for example, is classed as “high risk” and subject to additional restrictions. The Act, adopted by the European Parliament in March 2024, is widely expected to set a precedent for future risk-based regulatory approaches beyond Europe.
In the United States, the position varies by state and city. Maryland, for example, requires an applicant's consent before facial recognition can be used in the hiring process. New York City requires annual bias audits of automated employment decision tools used in hiring, with a summary of the results published. Other states and cities are likely to follow with laws on the use of AI and other automated decision-making tools in the hiring process.
Not all jurisdictions, however, are adopting legislation to govern the use of AI. In Hong Kong, for example, there are currently no proposals to legislate for the development and use of AI. The UK government likewise announced in its 2023 white paper that it is not proposing new legislation for now. Instead, it intends to rely on a more flexible, “pro-innovation” approach led by existing regulators, who will be guided by cross-sectoral principles including “appropriate transparency and explainability” and “fairness.”
Considerations for employers
With the rapid advance of AI, alongside the evolving legal framework, what should employers do to navigate AI in the workplace? Here are some of the core areas to consider:
- Bias: Employers should assess the potential for bias and discrimination in AI systems. This is the main employment law risk for employers, although AI-assisted decision-making can touch other areas of employment law (for example, dismissal protection in certain jurisdictions and working time requirements). Many AI platforms market themselves as reducing bias and discrimination; however, this does not absolve employers from satisfying themselves that appropriate safeguards have been adopted and from ensuring they can explain how decisions have been made. AI models can contain and magnify biases ingrained in the data they are trained on, including biases based on indirect indicators of protected characteristics. Employers should take steps to understand the types and sources of data used in training and considered by the model.
- Data Protection: A number of further risks relate to data privacy and data security. The use of an individual's personal data, both to make decisions about them specifically and as part of the input data used to train and fine-tune AI systems, is subject to an employer's obligations under data protection laws in most countries, including transparency obligations (privacy notices) and data subject rights to object to automated processing of their personal data. Although many AI tools seek to rely only on anonymised data, which is not subject to data protection laws, this is a very high bar to meet: removing names and contact details from CVs, for example, is likely to be insufficient if candidates remain identifiable from the remaining information.
- Regular Reviews: Because AI models can drift over time, periodic testing and robust review help reduce the risk of new biases emerging (including through an employer's own data sets) and of AI tools being used or deployed inappropriately. A simple illustration of the kind of selection-rate testing involved appears after this list.
- Contract Terms: Negotiating appropriate contractual terms with providers of AI tools, and with recruiting services that use them, can help mitigate some of these risks. For example, are providers required to comply with their obligations under any new legislation or regulatory guidance? Are they required to assist the employer in complying with its own obligations? Are they required to review and correct their AI models with adequate frequency (and do they give any warranties as to function or bias)?
- Rapid Legal Developments: Keeping track of the evolving legal frameworks governing AI across different jurisdictions can be challenging, but employers should monitor developments to keep their own processes up to date and to check whether suppliers of AI tools are doing the same.
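By way of illustration, the periodic bias testing referred to above often begins with a simple comparison of selection rates across demographic groups. The Python sketch below computes each group's shortlisting rate and its ratio to the most-favoured group's rate, flagging ratios below the four-fifths (0.8) convention used in US adverse-impact analysis. The group labels, sample data and threshold are hypothetical, and a formal bias audit (such as those required in New York City) would go considerably further.

```python
# Minimal sketch of a selection-rate ("impact ratio") check in the spirit of
# the four-fifths rule from US adverse-impact analysis. Group labels, sample
# data and the 0.8 threshold are illustrative assumptions, not a compliance
# standard.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, shortlisted) pairs, e.g. ("Group A", True)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [shortlisted, total]
    for group, shortlisted in records:
        counts[group][0] += int(bool(shortlisted))
        counts[group][1] += 1
    return {g: s / t for g, (s, t) in counts.items() if t > 0}

def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate."""
    best = max(rates.values())
    if best == 0:
        return {g: 0.0 for g in rates}  # no one shortlisted at all
    return {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical shortlisting outcomes from an AI screening tool.
    outcomes = ([("Group A", True)] * 40 + [("Group A", False)] * 60
                + [("Group B", True)] * 24 + [("Group B", False)] * 76)
    rates = selection_rates(outcomes)
    for group, ratio in impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rates[group]:.0%}, "
              f"impact ratio {ratio:.2f} ({flag})")
```

In practice, such checks would be run regularly against the employer's own outcome data and interpreted with legal advice: a low impact ratio signals a need for review, not proof of discrimination.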
Using AI in recruitment may offer ease and efficiency, but for employers who pride themselves on diversity, equity and inclusion, deploying it without appropriate safeguards risks undoing that good work. The safest approach for employers is to ensure that AI is used only as one element to assist with recruitment decisions, and not the determinative one.