Agentic AI: The liability gap your contracts may not cover
9 February 2026
Agentic AI is reshaping the nature of technology risk. These systems don’t just process data or generate insights – they take actions, make decisions and, increasingly, operate without human oversight. Unlike traditional generative AI, which primarily responds to prompts, agentic AI can initiate and execute tasks across connected systems. For example, a customer support AI agent might verify account status, reset passwords and send follow‑up communications to a customer automatically. An AI agent dealing with your travel plans might compare prices, scan your calendar and book your flights as part of an end‑to‑end process.
Adoption is accelerating rapidly. Analysts forecast that throughout 2026, businesses will embed AI agents deeper into operations, granting them more authority over high-stakes activities, including executing financial transactions, placing orders, managing supply chains and screening job applicants.
Yet many of these systems are still deployed under legacy technology contracts written for passive, predictable software firmly under human control. As vendors release agentic capabilities faster than contracts can evolve, and regulatory scrutiny of automated decision‑making increases, a liability gap is emerging. Businesses relying on unmodified agreements may find that risk is no longer fairly allocated when it comes to agentic AI, leaving them exposed to significant contractual, legal, reputational and operational consequences.
This briefing explores some of the key liability gaps and outlines how businesses using AI agents can manage agentic AI risk effectively.