The Rise of “Agentic” AI: Potential New Legal and Organizational Risks

DLA Piper

Artificial intelligence (AI) technology is advancing rapidly. A key development is the emergence of “agentic” AI systems, which operate with greater autonomy than traditional AI.

These powerful agents have the potential to transform the way companies do business – but also raise new legal and organizational risks.

What is agentic AI?

“Agentic AI” describes self-governing bots that solve problems and take actions. According to IBM:

Agentic AI is an artificial intelligence system that can accomplish a specific goal with limited supervision. It consists of AI agents—machine learning models that mimic human decision-making to solve problems in real time.

Unlike AI “assistants” that respond directly to user prompts with outputs, agentic AI takes action and accomplishes defined objectives without ongoing human prompting.

[Chart: Agentic AI]

Agents can be integrated into broader human or multiagent workflows to drive efficiency. In the near term, new agents will likely debut as task-specialized bots with relatively limited capabilities. In the future, however, whole workforces could be equal parts human and agentic AI “workers.”

Key legal and risk considerations

  1. Agents act 24/7 at scale: Agentic AI scales legal and compliance risk by acting around the clock in a distributed manner. This could increase the potential for unintended consequences and make it harder to detect misalignment with company goals and potential failures. Next-generation risk and compliance frameworks must scale and adapt in turn, with “proportionate” resource allocation to agentic governance (drawing on the proportionality standard in the Department of Justice’s 2024 Evaluation of Corporate Compliance Programs). Agentic AI policies, monitoring, and auditing, with clear demarcation of each bot’s role and authority on behalf of the company, could help reduce risk (a minimal authorization sketch appears after this list).
  2. Agents interact: Agents can act in unpredictable ways as part of their independent problem-solving. Agent–human interactions can trigger disclosure laws and raise questions about a system’s authority to bind the company. Agent–agent interactions can quickly grow in scale and complexity, leading to behavior that is difficult to oversee and control. Companies are encouraged to establish and actively monitor clear rules governing interactions between agents, both within the company and externally, with an eye toward pricing and antitrust considerations, bias, and deception, particularly as bots with different authorities and rules interact.
  3. Agents should explain, but verify: Because agentic AI systems can make autonomous decisions and adapt their behavior, they require robust explainability mechanisms so that their actions can be understood, traced, and evaluated. Yet agents can hallucinate or otherwise misstate their explanations, so verification is often key; a verification sketch follows this list. (Consider controls for disparate intent and impact.)
  4. AI agents will not be bound by traditional principal-agent law: Companies can assert defenses when human agents act outside the scope of their authority. But the law governing AI agents is undefined, and companies may find themselves strictly liable for all AI agent conduct, whether or not it was predicted or intended. Contractual arrangements with AI developers can allocate accountability for in-scope and out-of-scope agentic behavior, though recent actions like FTC v. Rite Aid Corporation & Rite Aid Headquarters Corporation show that large companies may not be able to shift blame to vendors.
  5. Testing agentic systems: Agentic AI systems are dynamic and goal-driven. Because they can adapt, change, and interact in complex environments, organizations cannot rely on static testing methods across all contexts. Rather, testing approaches likely must be tailored to specific capabilities, use cases, and real-world contexts, and may require combining multiple methodologies to adequately evaluate system behavior over time (see the testing sketch after this list).
  6. Building an agentic infrastructure: Agentic AI could rapidly scale toward a future state of digital workers, potentially matching or exceeding the human workforce in size and range. Companies are encouraged to begin building and testing their agentic AI infrastructure now, while monitoring developments to stay ahead of emerging risks. Working with trusted counsel could set the stage for future-proofed governance measures – for example, agentic AI policies, protocols for level of agentic authority, agent–human interaction disclosures, and disclaimers – and help companies keep pace with this fast-moving technology.
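
To make the “clear demarcation of bot role and authority” in item 1 concrete, the following Python sketch shows a per-agent authorization policy with an append-only audit trail. It is an illustration only, not a reference implementation or any vendor’s API: the AgentPolicy class, action names, and spending limit are hypothetical.

    from dataclasses import dataclass

    audit_log: list = []  # append-only record for later compliance review

    @dataclass
    class AgentPolicy:
        agent_id: str
        allowed_actions: set    # actions the bot may take for the company
        approval_required: set  # actions escalated to a human approver
        spend_limit_usd: float  # hard cap on the bot's financial authority

        def authorize(self, action: str, amount_usd: float = 0.0) -> str:
            """Return "allow", "escalate", or "deny", and record the decision."""
            if action in self.approval_required or amount_usd > self.spend_limit_usd:
                decision = "escalate"  # route to a human reviewer
            elif action in self.allowed_actions:
                decision = "allow"
            else:
                decision = "deny"      # outside the bot's demarcated authority
            audit_log.append({"agent": self.agent_id, "action": action,
                              "amount_usd": amount_usd, "decision": decision})
            return decision

    policy = AgentPolicy(agent_id="procurement-bot-01",
                         allowed_actions={"request_quote", "place_order"},
                         approval_required={"sign_contract"},
                         spend_limit_usd=5000.0)
    print(policy.authorize("place_order", amount_usd=1200.0))  # allow
    print(policy.authorize("sign_contract"))                   # escalate
    print(policy.authorize("delete_records"))                  # deny

The design choice worth noting is that every decision, including denials, is logged: the audit trail is what lets a compliance team demonstrate proportionate oversight after the fact.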
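
Item 3’s “explain, but verify” principle can be operationalized by treating the agent’s self-explanation as a claim to be checked against independently recorded tool calls rather than as ground truth. The function and log format below are assumptions for illustration.

    def verify_explanation(explanation_steps, tool_call_log):
        """Return explanation steps with no matching entry in the tool-call log."""
        logged = {(call["tool"], call["input"]) for call in tool_call_log}
        return [step for step in explanation_steps
                if (step["tool"], step["input"]) not in logged]

    # What the agent actually did, per system-of-record logs...
    tool_call_log = [{"tool": "price_lookup", "input": "SKU-123"}]

    # ...versus what the agent claims it did in its self-explanation.
    explanation_steps = [
        {"tool": "price_lookup", "input": "SKU-123"},     # verified
        {"tool": "inventory_check", "input": "SKU-123"},  # claimed, never logged
    ]

    unsupported = verify_explanation(explanation_steps, tool_call_log)
    if unsupported:
        print("Escalate for human review; unverified claims:", unsupported)

Because the check relies on logs the agent cannot edit, a hallucinated or misstated explanation surfaces as an unverified claim instead of passing silently.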
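
Item 5 cautions against static testing of dynamic, goal-driven systems. One possible sketch of a more dynamic approach, under the assumption that agent behavior is non-deterministic: replay each scenario many times and count unacceptable outcomes, since a single passing run proves little. Here run_agent is a hypothetical stand-in for the system under test.

    import random

    SCENARIOS = [
        {"name": "refund_request",
         "acceptable": {"issue_refund", "escalate_to_human"},
         "unacceptable": {"offer_unapproved_discount"}},
    ]

    def run_agent(scenario, seed):
        """Placeholder for the system under test; returns a random action so
        the harness is runnable, but a real suite would call the live agent."""
        rng = random.Random(seed)
        return rng.choice(sorted(scenario["acceptable"] | scenario["unacceptable"]))

    def evaluate(scenarios, runs_per_scenario=50):
        """Replay each scenario repeatedly to surface rare, intermittent failures."""
        for sc in scenarios:
            failures = sum(run_agent(sc, seed) in sc["unacceptable"]
                           for seed in range(runs_per_scenario))
            print(f'{sc["name"]}: {failures}/{runs_per_scenario} unacceptable outcomes')

    evaluate(SCENARIOS)

A repeated-trial harness like this would be only one methodology among several; as the alert notes, adequate evaluation likely also requires use-case-specific scenarios and monitoring of behavior over time.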


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© DLA Piper
