On October 16, 2024, the New York State Department of Financial Services (the "DFS"), under its Cybersecurity Regulation (23 NYCRR Part 500), issued a memorandum providing guidance on the risks posed by artificial intelligence ("Guidance Memo"). The guidance is addressed to entities within the DFS's jurisdiction, including entities regulated by the New York Banking Law, the Insurance Law and the Financial Services Law ("Covered Entities"). It clarifies that the Guidance Memo does not impose additional requirements on Covered Entities but rather illustrates how the existing Cybersecurity Regulation framework should be used to assess and address the cybersecurity risks presented by AI.
The Guidance Memo emphasizes the significant impact AI has had on cybersecurity, both positive and negative. While AI has enhanced entities' ability to prevent cyberattacks, improve threat detection and bolster incident response, it has also given cybercriminals new mechanisms and opportunities to commit crimes at greater scale and speed. The Guidance Memo outlines risk-mitigation strategies that Covered Entities will need to follow and that businesses across sectors should also consider as part of their cybersecurity compliance programs.
Key Cybersecurity Risks Posed by AI
The Guidance Memo highlights two key risks presented by threat actors' use of AI:
- AI-Enabled Social Engineering. AI is increasingly being used to generate realistic audio, video and text ("deepfakes") that threat actors use to gain access to information systems containing nonpublic information ("NPI") and to persuade employees to divulge sensitive information or take unauthorized actions. As an example, the Guidance Memo cites an incident in which a finance worker was tricked into transferring $25 million to threat actors after a video call in which every participant, including the purported Chief Financial Officer, was a video deepfake.
- AI-Enhanced Cybersecurity Attacks. Because AI can quickly scan vast amounts of information, it can enable threat actors to find and exploit system vulnerabilities, accessing NPI and evading detection. The Guidance Memo also notes that "it is widely believed by cyber experts that threat actors who are not technically skilled may now, or potentially will soon, be able to launch their own attacks."
In addition, the Guidance Memo notes that an entity's own use of AI can expose it to new risks. For example, some AI products involve the storage of large quantities of NPI and sensitive data, such as biometric data, which is then vulnerable to attack from threat actors seeking to obtain that data for financial gain. Companies are especially vulnerable when their data is collected or processed by vendors and third-party service providers ("TPSPs") that use AI platforms.
Guidance for Mitigating AI-Related Threats
The Cybersecurity Regulation requires Covered Entities to implement and maintain multiple layers of overlapping controls, so that if one control fails, others will be able to prevent or mitigate a cyberattack. Below, we summarize the key obligations of Covered Entities under the Cybersecurity Regulation and highlight the new DFS guidance for addressing AI-related risks.
Risk Assessments and Risk-Based Policies, Procedures and Plans
Covered Entities must perform cybersecurity Risk Assessments at least annually, or whenever a change in the business or technology introduces new material risks. Those Risk Assessments should address AI-related risks, including the threat of deepfakes; the entity's own use of AI; and AI-powered technology used by its vendors and TPSPs. Based on their Risk Assessments, Covered Entities must maintain programs, policies and procedures that address those risks, and any update to a Risk Assessment warrants a review of those programs, policies and procedures. Covered Entities must also design and test proactive measures to investigate and mitigate cybersecurity events so they are prepared for incident response, business continuity and disaster recovery. Those measures should include preparation for possible AI-related cyberattacks.
The DFS emphasizes the "crucial role" that senior leadership plays in prioritizing cybersecurity. The Guidance Memo highlights that the Senior Governing Body (e.g., the board of directors) must have an adequate understanding of all cybersecurity-related matters, exercise authority over cybersecurity risk management, and receive and review regular reports that cover cybersecurity—including AI-related threats.
Vendor and Third-Party Service Provider Management
The DFS provides specific recommendations for Covered Entities to follow when contracting with TPSPs, especially those that will access the entity's information systems or NPI. The DFS recommends that Covered Entities:
- Create guidelines for due diligence on TPSPs
- Implement TPSP policies and procedures that specify minimum requirements for access controls and encryption at the TPSPs (see the access control recommendations below)
- Require TPSPs to provide timely notice of any cybersecurity event that impacts the Covered Entity's information systems or NPI
- Consider including additional representations and warranties in contracts to ensure security of NPI if a TPSP is using AI
Access Controls
Starting in November 2025, the Cybersecurity Regulation will require Covered Entities to implement multifactor authentication ("MFA") for all authorized users accessing information systems or NPI, whether those users are employees, customers, or TPSPs. The Regulation allows flexibility to decide which authentication factors to use, based on the entity's Risk Assessment. But the DFS Guidance Memo emphasizes that "not all forms of authentication are equally effective" and recommends considering "factors that can withstand AI-manipulated deepfakes and other AI-enhanced attacks by avoiding authentication via SMS text, voice, or video, and using forms of authentication that AI deepfakes cannot impersonate, such as digital-based certificates and physical security keys."
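To illustrate the distinction the DFS draws, the sketch below, which is our own hypothetical and not part of the Guidance Memo, shows how an authentication policy might classify factors by their resistance to AI-enabled impersonation. The factor names and tiering are illustrative assumptions only.

```python
from enum import Enum

class Factor(Enum):
    SMS_OTP = "sms_otp"            # one-time code sent via SMS text
    VOICE_CALL = "voice_call"      # voice-based verification
    VIDEO_VERIFY = "video_verify"  # live video identity check
    TOTP_APP = "totp_app"          # authenticator-app code
    DIGITAL_CERT = "digital_cert"  # certificate-based authentication
    SECURITY_KEY = "security_key"  # physical security key

# Factors the Guidance Memo identifies as susceptible to AI deepfakes.
DEEPFAKE_SUSCEPTIBLE = {Factor.SMS_OTP, Factor.VOICE_CALL, Factor.VIDEO_VERIFY}

# Factors the Guidance Memo cites as ones AI deepfakes cannot impersonate.
DEEPFAKE_RESISTANT = {Factor.DIGITAL_CERT, Factor.SECURITY_KEY}

def mfa_policy_allows(factors: set[Factor]) -> bool:
    """Illustrative policy: require at least two factors, at least one
    deepfake-resistant, and none from the susceptible set."""
    return (
        len(factors) >= 2
        and bool(factors & DEEPFAKE_RESISTANT)
        and not (factors & DEEPFAKE_SUSCEPTIBLE)
    )

print(mfa_policy_allows({Factor.SECURITY_KEY, Factor.TOTP_APP}))  # True
print(mfa_policy_allows({Factor.SMS_OTP, Factor.TOTP_APP}))       # False
```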
For biometric authentication, the Guidance Memo recommends that Covered Entities consider using technology with liveness detection or texture analysis, which can verify that the biometric input comes from a live person. Other options include combining two biometric inputs (for example, a fingerprint plus iris recognition) or pairing a biometric input with behavioral patterns such as user keystrokes and navigation habits.
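As a simplified picture of the behavioral layer mentioned above, the hypothetical sketch below scores a typing sample against an enrolled keystroke profile; a real deployment would rely on a vetted behavioral-biometrics product and would combine this signal with the primary biometric check.

```python
import statistics

def enroll(samples: list[list[float]]) -> list[tuple[float, float]]:
    """Build a keystroke profile: per-interval mean and standard deviation
    of inter-key timings (seconds) across several enrollment samples."""
    return [(statistics.mean(col), statistics.stdev(col))
            for col in zip(*samples)]

def matches_profile(profile, attempt: list[float], max_z: float = 2.5) -> bool:
    """Accept the attempt only if every inter-key interval falls within
    max_z standard deviations of the enrolled mean."""
    return all(
        abs(t - mean) <= max_z * (stdev or 1e-6)
        for (mean, stdev), t in zip(profile, attempt)
    )

# Enrollment: timings from three typing sessions of the same passphrase.
profile = enroll([[0.12, 0.30, 0.18], [0.14, 0.28, 0.20], [0.11, 0.31, 0.17]])
print(matches_profile(profile, [0.13, 0.29, 0.19]))  # True: same rhythm
print(matches_profile(profile, [0.40, 0.05, 0.55]))  # False: different typist
```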
Cybersecurity Training
Under the Cybersecurity Regulation, cybersecurity training must be provided annually to all personnel, including senior executives and senior governing body members, and should cover "risks posed by AI, procedures adopted by the organization to mitigate risks related to AI, and how to respond to AI-enhanced social engineering attacks." Training on social engineering is now a required element of that annual training. The Guidance Memo recommends deepfake simulation exercises and instruction on how to respond to unusual requests, such as requests for credentials, urgent money transfers or access to NPI.
Cybersecurity personnel should receive additional training on AI use in social engineering attacks, the use of AI in facilitating and enhancing cyberattacks, and how AI can improve cybersecurity.
If the Covered Entity plans to deploy AI internally, the relevant employees must be trained on how to design and deploy AI systems securely and how to defend them against cybersecurity attacks. Any users of those AI systems must be trained on how to avoid disclosing NPI.
Data Management
Data management can limit the amount of NPI that could be exposed in the event of a cybersecurity attack. Covered Entities are already required to have data minimization practices and to dispose of NPI that is no longer needed for business operations, including NPI used for AI purposes. Further, if an organization uses AI-powered products, it should put controls in place to secure the data those products use. It should also maintain an inventory of all such AI systems and prioritize implementing mitigation procedures for any AI systems that are business-critical.
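One way to operationalize the inventory and prioritization recommendation is a simple register of AI systems recording what data each touches and how critical it is. The sketch below is our own illustration; the field names and example entries are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    vendor: str
    handles_npi: bool          # does the system store or process NPI?
    business_critical: bool    # would an outage halt key operations?
    data_retention_days: int   # supports data-minimization/disposal practices
    mitigations: list[str] = field(default_factory=list)

inventory = [
    AISystem("claims-triage-model", "in-house", True, True, 365,
             ["encryption at rest", "access logging"]),
    AISystem("marketing-copy-assistant", "ExampleVendor", False, False, 30, []),
]

# Prioritize mitigation work: business-critical systems handling NPI first.
for system in sorted(inventory,
                     key=lambda s: (s.business_critical, s.handles_npi),
                     reverse=True):
    print(system.name, "- critical:", system.business_critical,
          "- NPI:", system.handles_npi)
```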
Monitoring
The Cybersecurity Regulation requires Covered Entities to monitor email and web traffic to block malicious content and to maintain processes that can quickly identify security vulnerabilities in information systems. The Guidance Memo recommends that a Covered Entity that uses AI-enabled products or services, or allows personnel to use AI applications such as ChatGPT, monitor for unusual query behavior that may signal public exposure of NPI.
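As one illustration of that kind of monitoring, the hypothetical sketch below screens outbound prompts for patterns resembling NPI (here, U.S. Social Security and payment card numbers) before they reach an external AI service. The patterns and blocking policy are assumptions; production environments would use a dedicated data loss prevention tool.

```python
import re

# Illustrative patterns that suggest NPI in an outbound AI prompt.
NPI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any NPI patterns found in the prompt."""
    return [name for name, pattern in NPI_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize the account for the customer with SSN 123-45-6789."
hits = screen_prompt(prompt)
if hits:
    # Block (or log and alert on) the request rather than forwarding it.
    print("Blocked: prompt appears to contain", ", ".join(hits))
```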
The Guidance Memo notes that AI can also assist with monitoring, including by reviewing security logs, analyzing behavior, detecting anomalies and predicting security threats.
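At its simplest, that kind of anomaly detection can be a statistical baseline over security logs. The sketch below is our own illustration rather than anything prescribed by the DFS: it flags days whose failed-login counts sit far above the historical mean, where production tools would use far richer models.

```python
import statistics

def flag_anomalies(daily_failures: list[int], threshold: float = 3.0) -> list[int]:
    """Return indexes of days whose failed-login count is more than
    `threshold` standard deviations above the series mean."""
    mean = statistics.mean(daily_failures)
    stdev = statistics.stdev(daily_failures) or 1e-6
    return [i for i, count in enumerate(daily_failures)
            if (count - mean) / stdev > threshold]

# Fourteen days of failed-login counts; the final day shows a spike.
failures = [12, 9, 11, 10, 13, 8, 12, 11, 9, 10, 12, 11, 10, 95]
print(flag_anomalies(failures))  # [13]
```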
Key Takeaways
Although the Guidance Memo reiterates current requirements under the DFS Cybersecurity Regulation, we highlight the following key takeaways for organizations to consider in evaluating and using AI in their business operations and as part of their cybersecurity tools:
- Risk Assessments need to be updated to address new material risks presented by AI tools and providers. If a Covered Entity has a Risk Assessment that does not address AI-related threats, then it should revise that assessment—and update related policies and procedures accordingly.
- Senior executives and senior governing body members should be closely involved in assessing AI use and ensure they receive and review reports on AI and cybersecurity compliance. In addition, organizations must provide training and craft policies relating to AI-related cybersecurity risks. Training should involve the review of specific use cases, such as the deepfake incident described in the Guidance Memo, and should be consistent with existing incident response procedures.
- Companies that are not Covered Entities under the Cybersecurity Regulation should also consider the Guidance Memo. While certain threats may be unique to the financial, banking and insurance industries, the overall need to address AI in cybersecurity risk assessments, policies, training, access controls, data management and monitoring applies across all industries.