Joint guidance from the “Five Eyes” cybersecurity agencies provides best practices on securely deploying and operating AI systems.
New guidance from the U.S. National Security Agency’s Artificial Intelligence Security Center, CISA, the FBI, and cybersecurity agencies in Australia, New Zealand, the United Kingdom, and Canada expands on earlier efforts to safeguard AI systems and infrastructure.
The report focuses on AI systems that companies deploy but that were designed and developed by third parties. These AI systems, while offering organizations the potential for innovative solutions and services, are attractive targets for malicious actors seeking to exploit them through a variety of unique attack vectors. The guidelines emphasize the importance of fortifying the deployment environment, maintaining the security of the AI system throughout its lifecycle, and ensuring secure operations and maintenance.
Particularly noteworthy recommendations include:
- Secure the deployment environment. Prior to deployment within the existing IT infrastructure, the report suggests that companies verify that the IT environment adheres to foundational security principles, including strong governance, sound architecture, and secure configurations, such as a zero-trust architecture. The agencies also recommend paying particular attention to securing training data and model weights (an illustrative weight-integrity sketch appears after this list).
- Monitor model behavior. The report recommends logging model inputs, outputs, intermediate states, and errors to enable detection of attempts to compromise the model (see the logging sketch below).
- Protect against model inversion. The agencies suggest configuring models to return only the minimal information necessary to complete the task, in order to make model inversion attacks more difficult (see the output-minimization sketch below).
- Prepare secure deletion capabilities. The report suggests that companies consider, in advance, how they will securely and permanently delete certain sensitive information, such as cryptographic keys and training data (a secure-deletion sketch follows this list).
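To make the first recommendation concrete, the following is a minimal sketch, not drawn from the guidance itself, of one way to protect model weights at rest: pin a SHA-256 hash at model intake, restrict file permissions, and refuse to load an artifact that fails the integrity check. The file path and hash value are hypothetical placeholders.

```python
import hashlib
import os
import stat

# Hypothetical artifact path and pinned hash; both are illustrative values,
# not anything specified in the guidance itself.
WEIGHTS_PATH = "models/classifier.weights"
EXPECTED_SHA256 = "0123456789abcdef..."  # recorded when the model was accepted from the vendor


def verify_weights(path: str, expected_sha256: str) -> bool:
    """Recompute the artifact's SHA-256 and compare it to the pinned value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256


def restrict_permissions(path: str) -> None:
    """Drop group/other access so only the service account can read the weights."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)


if __name__ == "__main__":
    restrict_permissions(WEIGHTS_PATH)
    if not verify_weights(WEIGHTS_PATH, EXPECTED_SHA256):
        raise RuntimeError("Model weights failed integrity check; refusing to load.")
```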
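For the monitoring recommendation, a minimal sketch of structured logging around an inference call might look like the following; the `model` object and its `predict()` method are hypothetical stand-ins for whatever inference API a deployed system actually exposes.

```python
import json
import logging
import time
import uuid

# Standard-library logging; in practice these records would feed a SIEM or
# other monitoring pipeline rather than the console.
logger = logging.getLogger("model_monitor")
logging.basicConfig(level=logging.INFO)


def monitored_predict(model, features: dict) -> dict:
    """Log the input, output, latency, and any error for each inference call."""
    request_id = str(uuid.uuid4())
    logger.info(json.dumps({"event": "input", "request_id": request_id,
                            "features": features}, default=str))
    start = time.monotonic()
    try:
        result = model.predict(features)  # hypothetical inference call
    except Exception as exc:
        logger.error(json.dumps({"event": "error", "request_id": request_id,
                                 "error": repr(exc)}))
        raise
    latency_ms = (time.monotonic() - start) * 1000
    logger.info(json.dumps({"event": "output", "request_id": request_id,
                            "result": result, "latency_ms": round(latency_ms, 1)},
                           default=str))
    return result
```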
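For the model-inversion recommendation, returning only the top label with a coarsened confidence score, rather than the full probability distribution, is one way to minimize the information exposed per query. The class names and probabilities below are purely illustrative.

```python
def minimal_response(probabilities: dict[str, float]) -> dict:
    """Reduce a full class-probability distribution to the least detail needed."""
    label, confidence = max(probabilities.items(), key=lambda item: item[1])
    # Round the confidence coarsely so callers cannot reconstruct fine-grained scores.
    return {"label": label, "confidence": round(confidence, 1)}


# Example: the caller sees only {'label': 'approved', 'confidence': 0.8},
# not the full distribution the model computed internally.
print(minimal_response({"approved": 0.83, "review": 0.12, "denied": 0.05}))
```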
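For the secure-deletion recommendation, the sketch below overwrites a file before removing it. This is only an illustration of planning deletion in advance; on SSDs and copy-on-write filesystems, overwriting in place does not guarantee the underlying data is unrecoverable, and organizations would typically rely on cryptographic erasure or vendor-specific sanitization procedures.

```python
import os


def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents with random bytes, then remove it.

    Illustrative only: effectiveness depends on the storage medium and
    filesystem, so this should not be treated as a complete sanitization method.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)


# Hypothetical example: purging an exported key file after rotation.
# overwrite_and_delete("secrets/old_signing_key.pem")
```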
You can read the full guidance here.