SEC Roundtable Presents Both Risks and Opportunities of AI in the Financial Industry

DLA Piper

The Securities and Exchange Commission (SEC) recently hosted a Roundtable on Artificial Intelligence in the Financial Industry in Washington, DC (SEC AI Roundtable). The day-long event, held on March 27, 2025, addressed a range of topics concerning the use of artificial intelligence (AI) in the financial industry.

As acting SEC Chairman Mark T. Uyeda noted in his opening remarks, the financial industry has a long history of technological advancement and innovation. While Chairman Uyeda acknowledged that the use of AI brings risks, challenges, and potential regulatory gaps, he called for a “technology-neutral approach” to the regulation of AI in the financial industry. He cautioned against “an overly prescriptive approach” and encouraged innovation while protecting investors through effective, cost-efficient regulation of AI. The panels that followed Chairman Uyeda’s remarks – featuring US regulators, academics, and private-sector leaders – offered a range of perspectives on these issues.

In this alert, we provide insights from the SEC AI Roundtable, focusing on how the SEC and the financial industry may address the challenges and opportunities presented by continuing advancements in AI.

AI may pose new challenges, but the risks are not new

Chairman Uyeda and several of the panelists emphasized that, while AI introduces new complexities to the financial industry, the fundamental risks it poses are not entirely unprecedented. The SEC AI Roundtable highlighted that many of the challenges associated with AI, such as its use in fraud schemes, cyber-attacks, and market manipulation, are extensions of existing issues that the financial industry has been grappling with for years. The primary difference lies in the scale and speed at which AI can operate.

Specifically, the panelists pointed out that fraud prevention has always been a concern in the financial industry, with schemes continually evolving and tactics adapting to exploit vulnerabilities. AI technologies, particularly generative AI, may be used to enhance fraud schemes by enabling large-scale, hyper-personalized deceptive practices, such as phishing and social engineering attacks. But at their core, these emerging technologies and related schemes are an evolution of traditional fraud techniques in which the main objective remains the same – deceiving individuals and organizations to gain unauthorized access to funds or sensitive information.

Preventing market manipulation is another longstanding concern in the financial industry that may become more complicated with the widespread use of AI by market participants and other stakeholders. The panelists highlighted the risks of AI-driven trading algorithms being used for manipulative trading practices such as spoofing and layering. As with fraud targeting individuals, while the tools and methods may have evolved, the underlying intent to manipulate market conditions for financial gain remains unchanged.

While acknowledging that many of the risks are familiar, panelists noted that advancements in AI technologies pose new challenges. For example, the use of agentic AI systems – AI systems that can autonomously make multi-step decisions, with little human intervention – poses new challenges for tracing bad actors and holding them accountable because the AI system obscures the impact of human decision-making. Additionally, large language models (LLMs) employed by market participants can produce hallucinations – incorrect but convincing outputs – that could lead to unintended investor losses.

The SEC AI Roundtable signals that the SEC will likely rely on existing legal frameworks to pursue AI-related misconduct

In his opening remarks – and echoed by panelists throughout the day – Chairman Uyeda emphasized that the existing regulatory framework should be leveraged to address AI-related misconduct in the financial industry. While existing regulations may need to be adapted to the unique attributes of emerging technologies, the panelists agreed that the risks posed by AI do not require an entirely new set of regulations. This suggests that, in the short term, the SEC will likely take a careful approach to adopting new, AI-specific regulations.

The SEC AI Roundtable also highlighted potential instances of fraud that could arise from the use of emerging technologies in the financial industry, including:

  • Hyper-personalized fraud: Leveraging AI to create highly personalized phishing attacks and social engineering schemes, making it easier to deceive individuals and organizations.
  • Algorithmic manipulation: Exploiting AI-driven trading algorithms to engage in manipulative trading practices, such as spoofing and layering.
  • Deepfakes and fake news: Using AI to create deepfakes and disseminate fake news to manipulate market perceptions and investor behavior, leading to market manipulation.
  • Credential harvesting: Using AI to automate the process of harvesting credentials through phishing, social media scraping, and other means, enabling unauthorized access to accounts and sensitive information.

Addressing the potential risks and harnessing the capabilities of AI to mitigate those risks

Throughout the day, panelists underscored the importance of proactively addressing both the risks and opportunities associated with the use of AI in the financial industry. As AI technologies continue to evolve and their use expands, industry participants may consider adopting comprehensive strategies to manage the associated risks effectively. This would involve implementing robust AI governance and risk management frameworks, enhancing cybersecurity measures, fostering collaboration and information sharing among market participants and other stakeholders, and investing in AI education and training – all without stifling innovation. We describe each in more detail below:

  • Robust AI governance committees and risk management frameworks. Firms may consider establishing cross-functional committees that include representatives from risk management, compliance, legal, IT, and business units to oversee AI initiatives, set policies, monitor AI activities, and ensure compliance with regulatory requirements. They may also consider implementing continuous monitoring and testing processes – including regular reviews of AI models, data inputs, and outputs – to confirm that AI systems operate as intended and to detect and address any anomalies or biases.
  • Enhance cybersecurity measures. Firms may consider adopting enhanced authentication methods, such as multifactor authentication, to protect against unauthorized access. They may also consider utilizing AI-driven behavioral analytics to detect anomalies and potential security breaches in real time.
  • Foster collaboration and information sharing. Firms may consider engaging in industry-wide collaboration and information sharing to stay informed about emerging threats and best practices. They may also consider maintaining open lines of communication with regulators to ensure compliance and gain insights into regulatory expectations.
  • Invest in AI education and training. Firms may consider developing comprehensive training programs to educate employees about AI technologies, their potential risks, and how to use them responsibly. They may also consider ensuring that senior leaders and decision-makers have a strong understanding of AI to make informed strategic decisions.
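For firms weighing the continuous-monitoring practices described above, automated review of model outputs can start very simply – for example, flagging statistically unusual outputs for human or compliance review. The sketch below is purely illustrative (the z-score method, threshold, and example scores are our assumptions, not regulatory guidance or a practice endorsed at the SEC AI Roundtable):

```python
# Illustrative sketch: flag statistically unusual AI model outputs for review.
# The z-score approach and threshold are hypothetical examples, not guidance.
from statistics import mean, stdev

def flag_anomalies(scores, z_threshold=2.0):
    """Return indices of scores that deviate sharply from the rest of the batch."""
    if len(scores) < 2:
        return []  # not enough data to estimate spread
    mu = mean(scores)
    sigma = stdev(scores)
    if sigma == 0:
        return []  # all scores identical; nothing stands out
    return [i for i, s in enumerate(scores)
            if abs(s - mu) / sigma > z_threshold]

# Example: daily model confidence scores; the sharp drop is flagged for review.
daily_scores = [0.71, 0.69, 0.73, 0.70, 0.72, 0.05, 0.71, 0.70]
print(flag_anomalies(daily_scores))  # [5]
```

In practice, a governance committee would tune what counts as an "anomaly" to the specific model and use case; the point is only that periodic, automated checks on outputs can feed the human review processes panelists described.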

Conclusion

The SEC AI Roundtable underscored the importance of adapting the existing regulatory framework to address the unique challenges posed by the growth of AI. By implementing robust governance, enhancing cybersecurity measures, fostering collaboration, and investing in education, market participants can potentially mitigate the risks associated with AI while leveraging its capabilities to drive innovation and efficiency in the financial industry.

Beyond the financial industry, AI technologies present broader legal risks across multiple sectors. These risks include, among other things, potential data privacy violations, misrepresentations of AI capabilities, biased or inaccurate AI outputs, anti-competitive practices, and deceptive marketing claims. Various federal and state law enforcement authorities have taken note and are similarly leveraging existing statutes to prosecute the misuse of AI, including the False Claims Act, antitrust laws, and state and federal privacy and consumer protection statutes.

In light of this ever-evolving landscape, organizations may consider evaluating both the risks and opportunities of using AI to assess and mitigate potential liabilities and related enforcement actions.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© DLA Piper

