Cross-sector code of conduct for AI – a solution with substance?

Hogan Lovells

On 16 April 2018 the House of Lords Select Committee on Artificial Intelligence published a wide-ranging report on the status of artificial intelligence (“AI”) in the UK.

The report, entitled “AI in the UK: ready, willing and able?”, provides comprehensive coverage of critical issues relevant to the development and use of AI in the UK, such as potential bias in AI systems; the need for AI systems to be intelligible; funding, education and training in the AI sector; and risk mitigation.

The Chairman of the Select Committee, Lord Clement-Jones, presented a summary of the report at a Law Society event entitled “AI and Ethics: plotting a path to unanswered questions” hosted by Hogan Lovells International LLP on 27 April 2018.

Amongst the report’s many recommendations is a proposal that a cross-sector ethical code of conduct for organisations developing and using AI should be drawn up and promoted.

AI risks

The report recognises the significant potential of AI to contribute to economic productivity, and for the UK to be among the world leaders in the field, but finds that there are areas of uncertainty which could dissuade investment and potentially hinder uptake of AI by the general population. It identifies a number of critical risks presented by AI which would need to be mitigated in order to support the development and growth of AI systems, including:

  • the potential bias in AI systems and the need to ensure that the data used is truly reflective of diverse populations;
  • the security risks associated with the use of personal data;
  • the need for AI systems to be transparent and intelligible; and
  • the potential for AI to contribute to social inequality.

Regulation of AI

The report considers whether regulation of AI should be introduced as a mechanism to manage these (and other) risks, but concludes that blanket regulation of AI would, at this stage, be inappropriate given the rapid developments being made in AI, the risk of regulation inhibiting innovation, and the difficulty of successfully designing a one-size-fits-all solution. The report concludes that existing sector-specific regulators are at present best placed to consider the impact of AI on their sectors and any subsequent regulation that may be needed. In this respect, the report acknowledges that in some areas existing legislative frameworks may be sufficient; for example, the Data Protection Bill and the GDPR will go a long way towards addressing the concerns associated with the handling of personal data.

However, the report notes that there may be risks associated with AI which are not adequately covered by existing legislation.

Proposed solutions

One of the suggested solutions is an overarching code governing behaviours associated with the development and use of AI, presumably with the aim that major tech firms and other AI actors would sign up to the code on a voluntary basis. The report suggests that, in time, the code could provide the basis for new statutory regulation, if deemed necessary.

As a starting point, the report sets out five overarching principles that would form the basis of the code:

  1. AI should be developed for the common good and benefit of humanity.
  2. AI should operate on the principles of intelligibility and fairness.
  3. AI should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in AI.

Comment

The introduction of a cross-sector AI code is likely to be welcomed by a public that is increasingly reliant on AI in many areas of life. As highlighted by Lord Clement-Jones, an effective code of ethics may be one measure to improve public trust in AI and to equip the public to challenge its misuse. However, whether such a code has any actual impact on the behaviour of the dominant tech firms and other key actors in the AI field will depend, first, on persuading them to sign up and, second, on their ongoing compliance.

There is no point introducing a voluntary code so onerous that no one is willing to comply. At the same time, such codes must have sufficient teeth to be meaningful. The first hurdle will therefore be for critical actors to agree a set of standards, a huge challenge given the range of organisations and institutions involved in the AI space. The risk in emphasising collaboration is that the resulting code is too flimsy to have any effect, whilst a more forceful approach may alienate critical players.

Assuming a sensible set of standards is developed, a suitable body will need to be given the role of monitoring and enforcing the code, with sufficient gravitas to make its “seal of approval” worthwhile for signatories, and the power to ensure that signatories toe the line. The report suggests that the Centre for Data Ethics and Innovation could be one such body.

Exactly how monitoring and enforcement of compliance with the code might be undertaken without statutory powers of investigation, and without the threat of criminal or civil sanctions, is the next challenge, and it will be interesting to see how far such voluntary measures can go. For example, would the enforcement body have the authority and resources necessary to scrutinise non-open-source algorithms to assess whether they are producing discriminatory results, or whether the institutions using them have been sufficiently transparent about how those algorithms determine outcomes? The threat of statutory regulation may in itself be sufficient to motivate key institutions to give the code some weight, but at present it seems unlikely that the Government would act on that threat, for the reasons outlined in the report and given the uncertainties surrounding Brexit.
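
By way of illustration only (this is not a method proposed in the report), the simplest form such scrutiny of algorithmic outcomes might take is a statistical comparison of favourable-outcome rates across demographic groups. The short Python sketch below computes a “disparate impact” ratio over a set of recorded decisions; the sample data, group labels and 0.8 threshold (the common “four-fifths” rule of thumb) are all hypothetical placeholders, not drawn from the report:

```python
# Hypothetical illustration only: a minimal disparate-impact check of the
# kind an auditing body might run over a model's recorded decisions.
# The data, group labels and 0.8 threshold are placeholder assumptions.
from collections import Counter

def selection_rates(decisions):
    """Rate of favourable outcomes per group.

    `decisions` is an iterable of (group, favourable) pairs, where
    `favourable` is True if the individual received the positive outcome.
    """
    decisions = list(decisions)
    totals = Counter(group for group, _ in decisions)
    wins = Counter(group for group, favourable in decisions if favourable)
    return {group: wins[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Usage with made-up decisions: group "B" fares worse than group "A".
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = selection_rates(sample)          # {'A': 0.667, 'B': 0.333}
ratio = disparate_impact_ratio(rates)    # 0.5
print("flagged" if ratio < 0.8 else "ok")  # flagged
```

Even this trivial check presupposes access to the decision records themselves, which is precisely the kind of access an enforcement body without statutory powers may struggle to obtain.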

We will watch with interest for responses to this report from both the Government and the dominant tech firms, to see whether the proposals are sufficiently ambitious to galvanise further action.

Further materials

Hogan Lovells: Global media, technology, and communications quarterly – Spring 2018

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising
