Shaping the Future: Australia’s Approach to AI Regulation

DLA Piper

[co-author: Olivia Newbold]


There is finally some clarity around how artificial intelligence (AI) will be regulated in Australia. Following its consultations on safe and responsible AI, the Australian Government has released a proposals paper on introducing mandatory guardrails for AI in ‘high-risk’ settings (Proposals Paper), together with a Voluntary AI Safety Standard (Voluntary Standard).

There is currently no AI-specific regulation in Australia, and the Australian Government has remarked that, following its consultations, it considers the existing regulatory system unfit for purpose to respond to the distinct risks posed by AI. The Proposals Paper outlines the mandatory guardrails the Government proposes to implement for the use of AI in ‘high-risk’ settings in Australia.

The Voluntary Standard consists of voluntary guardrails designed to give businesses certainty ahead of the introduction of legislation and mandatory guardrails. Most of the voluntary guardrails align with the mandatory guardrails under consideration in the Proposals Paper; the exception is guardrail #10, which concerns stakeholder engagement in the Voluntary Standard, whereas the Proposals Paper proposes conformity assessments as mandatory guardrail #10. The voluntary guardrails apply to AI systems of any risk level, whereas the mandatory guardrails will apply only to high-risk AI systems.

The Government intends the Voluntary Standard to serve as a measure of best practice, assisting Australian organisations with the practical development and deployment of AI throughout the AI supply chain by guiding organisations to:

  • raise the level of safe and responsible AI capability across Australia
  • protect people and communities from harm
  • avoid reputational and financial risks
  • increase trust and confidence in AI systems, services and products
  • align with legal requirements and the expectations of the Australian population
  • operate more seamlessly in an international economy.

Organisations that have implemented the voluntary guardrails will be well positioned to comply with any mandatory legislation the Government subsequently introduces. The Voluntary Standard is also aligned with international standards, keeping Australian practices consistent with those of other jurisdictions and simplifying compliance for organisations that operate across several of them.

Voluntary Guardrails

The Voluntary Standard is made up of 10 voluntary guardrails, each an ongoing (rather than one-off) activity for organisations:

  1. Establish, implement and publish an accountability process, including governance, internal capability and a strategy for regulatory compliance. Accountability for the safe and responsible deployment of AI cannot be outsourced. Organisations should establish proper foundations for their use of AI, including accountability processes. This involves assigning an overall owner for the AI used by the organisation, implementing an AI strategy and other relevant policies, and providing training, as appropriate, to both individuals and the organisation more broadly.
  2. Establish and implement a risk management process to identify and mitigate risks. Organisations should take practical steps to implement a risk management process at the organisational level, alongside risk and impact assessments for individual AI systems, in accordance with the organisation’s risk appetite. Rigorous risk and impact assessments should be undertaken on an ongoing basis for each AI system.
  3. Protect AI systems, and implement data governance measures to manage data quality and provenance. Organisations should implement fit-for-purpose approaches to data governance, privacy and cybersecurity. Requirements will differ depending on the use case and risk profile of the AI, but all measures must account for the unique characteristics of AI systems, including data quality, data provenance and cyber vulnerabilities.
  4. Test AI models and systems to evaluate model performance and monitor the system once deployed. AI systems must be tested at all stages of their life cycle, including prior to deployment and on an ongoing basis thereafter, to monitor for behaviour changes or unintended consequences. Organisations should define clear acceptance criteria against which each AI system can be assessed.
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight across the life cycle. A competent person within the organisation should be accountable for each AI system and product. Human oversight ensures the organisation (or the appropriate service provider) can intervene if necessary, reducing the potential for unintended consequences.
  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content. Organisations should be transparent about their use of AI, disclosing when AI is being used and which content is AI-generated, in order to build trust with users and the community at large. The organisation should determine the most appropriate mechanism for disclosure based on the AI system, the stakeholders involved and the technology in use.
  7. Establish processes for people impacted by AI systems to challenge use or outcomes. Organisations must establish processes for stakeholders impacted by AI systems to challenge how the organisation is using AI and to contest any outputs or decisions generated by it.
  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks. Both developers and deployers of AI have transparency obligations and must consider safe and responsible AI practices in development and deployment. Organisations should know what components were used in an AI system and how it was built, and have sufficient information to understand and manage its risks.
  9. Keep and maintain records to allow third parties to assess compliance with guardrails. Organisations must create and maintain an up-to-date, organisation-wide inventory of every AI system in use. Records should demonstrate compliance with the guardrails.
  10. Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness. Engagement should occur over the life of the AI system, at both the organisational and system levels.

Looking forward

The Government is seeking public input on the Proposals Paper, including the proposed mandatory guardrails, the definition of ‘high-risk’ AI and regulatory options for mandating the guardrails. Consultation is open for four weeks, closing 4 October 2024. In the meantime, we recommend that organisations using AI familiarise themselves with the voluntary guardrails and take practical steps to implement them, in preparation for the mandatory regime.

© DLA Piper
