The march to regulatory change for artificial intelligence: the commonalities between the EU and US

Eversheds Sutherland (US) LLP

This briefing draws out some commonalities between the EU and US in the march to regulatory change for AI. Our global regulatory specialists have put their heads together for this update on the EU, New York City (NYC) and Colorado. AI, of course, can involve algorithms learning from data about people - whether in the context of employment applications, insurance claims, or otherwise. All are covered below.


Reminder: What is happening in the EU?

As a quick reminder:

  • Businesses around the globe are using AI technology and benefitting from the wealth of opportunities it can bring to them and to people more broadly. On the flip side, AI can have a ‘dark side’.
  • Many jurisdictions are bringing in new laws to keep us all on the right side of the tracks. The Council of the EU approved a compromise version of the EU AI Act at the end of 2022. The European Parliament is expected to vote on it by March 2023, with a view to adopting it by the end of 2023.
  • The EU AI Act is hotly anticipated as a benchmark AI law that other jurisdictions might look to when developing their own laws (much as the GDPR has become a standard on which some other countries’ laws are based). First, like the GDPR in terms of impact, the EU AI Act will have extra-territorial scope, extending to providers and users of AI outside the EU where the output is used in the EU. Second, the Act lays down fixed penalties for certain infringements, the highest fine being EUR 30,000,000 or 6% of a company’s total worldwide annual turnover (3% in the case of an SME or start-up) for non-compliance with the prohibitions on AI practices.

Does the EU AI Act have a wider impact than the EU?

Yes. Even if an organisation is not caught by the extra-territorial reach of the EU AI Act because it is not using AI output in the EU, the themes, opportunities and concerns around AI are much the same in every country and region.

Data privacy regulators and other regulators, particularly in the financial services sector, have already produced detailed guidance on AI technology following the OECD’s Principles for Trustworthy AI. That framework is built around five values-based core principles, reflected in whole or in part in many other publications on trustworthy AI and in corporate codes of ethics, and calls for responsible stewardship of trustworthy AI.

Under these principles, AI should: (1) promote inclusive growth, sustainable development and wellbeing; (2) respect human-centered values and fairness; (3) be transparent and explainable; (4) be robust, secure and safe; and (5) be accountable. Indeed, the EU AI Act has these principles at its heart.

What is happening in the US? NYC and Colorado are at the forefront 

NYC

Of significant interest to the many global businesses with a presence in NYC is the city’s AI employment law (Local Law 144 of 2021, which regulates automated employment decision tools). It is important to keep bias out of the way AI is used to screen candidates applying for jobs and promotions. AI technology can learn unhelpful things from data sets that are out of tune with an organization’s desire to recruit the best candidates in a way that is diverse and inclusive. AI can lead things the wrong way and draw conclusions without a firm basis in fact.

This law means that any AI decision tool used to hire or promote NYC residents must be audited by an independent auditor for bias before the tool is used, and annually thereafter, and the results of that audit must be published on the company’s website. Employers must also notify candidates that they are using the AI tool and give candidates an opportunity to request an alternative selection process.
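To make the idea of a bias audit concrete, the sketch below shows one simple starting metric: the impact ratio, i.e. each group’s selection rate relative to the most-selected group’s rate. This is an illustration only, not the methodology prescribed by the NYC rules; the function name, group labels and data are hypothetical.

```python
# Illustrative sketch only: one way an auditor might compute impact ratios
# for an AI screening tool. The NYC rules and your independent auditor
# define the actual methodology.

from collections import defaultdict

def impact_ratios(outcomes):
    """outcomes: list of (group, selected) pairs, e.g. ("group_a", True)."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())  # selection rate of the most-selected group
    # Impact ratio: each group's selection rate relative to the highest rate.
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical applicant data: group_a selected 40% of the time, group_b 20%.
sample = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
       + [("group_b", True)] * 20 + [("group_b", False)] * 80
print(impact_ratios(sample))  # {'group_a': 1.0, 'group_b': 0.5}
```

A real audit would layer far more on top of this (intersectional categories, job-level breakdowns, statistical significance), but even a crude ratio like this shows why the composition of the underlying data matters.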

Quality input data helps with quality output. For instance, the training data fed into the AI system may over-represent one type of person (whether by age, race, sex, gender or otherwise). Alternatively, that training data may reflect past discrimination. It may be possible to balance it out by adding or removing data about under- or over-represented subsets of the population, as sketched below.
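As a rough illustration of that balancing idea, the sketch below naively oversamples under-represented groups until each group appears equally often in the training set. The function and field names are our own assumptions; whether rebalancing is appropriate at all in a given case is a question for technical and legal review.

```python
# Illustrative sketch only: naive random oversampling to balance group
# representation in a training set.

import random

def oversample_to_balance(records, group_key, seed=0):
    """records: list of dicts; group_key: the field naming the group."""
    rng = random.Random(seed)
    by_group = {}
    for rec in records:
        by_group.setdefault(rec[group_key], []).append(rec)
    target = max(len(rows) for rows in by_group.values())
    balanced = []
    for rows in by_group.values():
        balanced.extend(rows)
        # Duplicate randomly chosen records until this group reaches parity.
        balanced.extend(rng.choice(rows) for _ in range(target - len(rows)))
    return balanced

# Hypothetical data set: group "a" is heavily over-represented.
data = [{"group": "a", "x": i} for i in range(90)] \
     + [{"group": "b", "x": i} for i in range(10)]
balanced = oversample_to_balance(data, "group")
print(sum(1 for r in balanced if r["group"] == "b"))  # 90
```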

The law was originally intended to go into effect on January 1, 2023, but enforcement has been delayed until April 15, 2023, as rule-making around the law continues. NYC can enforce the law and issue fines between $500 and $1,500 per violation, per day. The law also provides for a private right of action by employees and candidates.

As NYC employers prepare for the day enforcement goes live, they should carefully assess whether they use AI decision tools in employment that could meet the NYC law’s definition, as its scope may be broader than one would expect. Employers must stay attuned to developments in this area, work with their trusted advisors to inventory and audit their AI tools, and involve counsel in the bias audit to preserve attorney-client privilege protection, where possible.

Colorado

In the insurance industry around the world, decisions about whether to sell insurance to a customer, and how much to charge for that coverage, have traditionally been made using underwriting algorithms (sets of rules, applied by a human or a computer) that draw on data sets built from past modelling. AI technology is now being used for this and for other insurance practices, such as marketing, fraud protection and claims handling. In Colorado, a new law, Senate Bill 21-169, is leading the way.

Insurers have increasingly used “external consumer data” - data from social media, credit scores and risk scores - to supplement or supplant traditional underwriting factors. The new law prohibits the use of such data, and of algorithms or models that use such external data, if their use results in unfair discrimination against protected classes of people (race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression). Put simply: S.B. 21-169 prohibits the Colorado insurance industry (which includes most large national insurers) from using AI technology and big data to determine insurance coverage and price, marketing targets, and claims settlements if the machine learning results in unfair discrimination against protected classes.

On February 1, 2023, the Colorado Division of Insurance released a draft of the first of several regulations that will implement S.B. 21-169. This proposal covers detailed governance and risk management requirements, as well as documentation standards, regarding the use of external consumer data, algorithms and models by life insurers and will be followed soon by a separate rule proposal covering how to test for bias. Another set of regulations will be released for property and casualty insurers. The February 1 proposal makes clear that insurers will be held accountable at the board level for all aspects of their use of external consumer data, algorithms and models. While the regulation is still in the proposal phase, it is critical for insurers to begin fully inventorying their external consumer data, algorithms and models to understand why they are using such tools and how those tools operate.
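As a starting point for that inventorying exercise, the sketch below shows a minimal record an insurer might keep for each model or external data source. The field names here are our own assumptions for illustration, not terms defined by the Colorado proposal.

```python
# Illustrative sketch only: a minimal inventory record for external consumer
# data, algorithms and models, of the kind an insurer might maintain while
# preparing for the Colorado rules.

from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    name: str                      # e.g. "life underwriting risk score v3"
    purpose: str                   # why the tool is used
    external_data_sources: list[str] = field(default_factory=list)
    owner: str = ""                # accountable business owner
    last_bias_test: str = ""       # date of most recent bias testing, if any

# Hypothetical example entry.
inventory = [
    ModelInventoryEntry(
        name="marketing propensity model",
        purpose="target life insurance offers",
        external_data_sources=["credit score", "social media signals"],
        owner="Chief Underwriting Officer",
    ),
]
for entry in inventory:
    print(entry.name, "->", entry.external_data_sources)
```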

What does this all mean?

It means that lawmakers around the globe are starting to make sure AI “behaves itself”. What is right from a human and moral perspective is now being enshrined in the laws of countries, regions and US states. There can be severe penalties for failures. Reputational damage is also a very significant risk organizations will want to avoid.

As with data privacy, ‘baking in’ compliant use of AI technology (this most valuable of tools, to be welcomed but also trained and controlled with robust guardrails) from the very start of a new project, initiative or programme is key. In the same way that data privacy became a Board-level issue when the EU’s GDPR came into force in 2018, use of AI technology is following suit. Financial services regulators, data privacy regulators, national and state legislatures - everyone, it seems, is interested in AI.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Eversheds Sutherland (US) LLP
