Security Snippets: DHS issues AI security and safety guidelines for critical infrastructure

Hogan Lovells

DHS advises safeguards both to secure AI systems and to protect critical infrastructure from AI-powered attacks.


Continuing its work under the Biden Administration’s Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the Department of Homeland Security (DHS) has published AI security and safety guidelines to mitigate cross-sector AI risks to the security of U.S. critical infrastructure.

The guidance builds on the Five Eyes intelligence agencies’ published report on AI security and contributes to DHS’s broader efforts to protect the nation’s critical infrastructure.

The guidelines are organized around three categories of system-level risks:

  • Attacks Using AI: Threat actors leverage AI to enhance and scale attacks on critical infrastructure, including automated physical attacks deployed via autonomous systems, AI-enabled cyber compromises of supply chains, and using AI for autonomous malware and other operations such as social engineering and theft of intellectual property.
  • Attacks on AI Systems: Threat actors target AI systems supporting critical infrastructure. Activities include adversarial manipulation of AI algorithms or data, evasion attacks, interruption of service attacks, and model inversion and extraction.
  • Inaccuracies in AI Design and Implementation: Oversights in the planning, design, deployment, or operation of an AI tool or system can cause unintended effects that may disrupt critical infrastructure operations. These include supply chain vulnerabilities, inconsistent system maintenance, over- or under-reliance on AI, brittleness of systems, and statistical bias.

To help critical infrastructure owners and users mitigate the AI risks above, DHS suggests several strategies aligned with the four AI Risk Management Framework functions outlined by NIST (Govern, Map, Measure, and Manage). These include:

  • Establishing an organizational culture of AI risk management through the prioritization of safety and security outcomes and radical transparency.
  • Understanding the individual AI use context and risk profile by which AI risks can be evaluated.
  • Developing systems to assess, analyze, and track AI risks through the use of repeatable methods and metrics.
  • Prioritizing and acting upon AI risks to safety and security by implementing and maintaining identified risk management controls.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising
