
Jamie Beckland at APIContext suggests ways of preparing for the agentic AI revolution
As the adoption of agentic AI accelerates within enterprises, we see new pressures on Application Programming Interfaces (APIs). APIs enable communication and data transfer between software components and services. In theory, agentic AI brings many organisational efficiencies, from the ability to orchestrate dozens of parallel API calls in seconds to speeding up traditional manual processes.
Yet, in practice, agents' unpredictable behaviour, combined with their lack of a human developer's intuition, has exposed gaps in API documentation, accelerated specification drift and weakened safety guardrails. The result is a greater risk of sensitive data exposure and system instability.
Put simply, to leverage AI agents successfully, enterprises must take appropriate measures to ensure their APIs are ready to handle this latest technological shift.
When agentic AI meets enterprise APIs
Autonomous agents interact independently to achieve defined goals. In practice, this means an agent can conduct complex tasks, optimise dynamic processes, enhance customer service, and improve data analysis for better decision-making, all without direct human input. Because of this flexibility, agentic AI adoption within enterprises is surging: more than 51% of companies have already deployed AI agents, and another 35% plan to do so by 2027.
To collaborate and execute coherently, however, these agents rely on APIs to connect with data sources, applications and external services.
Though APIs are crucial to agentic AI success, most APIs were designed for deterministic, rule-based integrations rather than autonomous, adaptive systems like agentic AI. This discrepancy means that while AI agents are becoming more capable, the APIs they depend on are often not equipped to support their speed, adaptability, or complexity.
And despite how critical keeping API documentation up to date has become, a recent study found that 89% of API specifications had not been updated in the past six months, highlighting a significant ongoing challenge for the industry. Without updated specifications, AI agents risk calling deprecated endpoints, breaking workflows, and ultimately failing to deliver on their potential for seamless automation and decision-making.
Moreover, poor specifications often lead to API drift: the divergence of an API's actual behaviour from its documented specification. Drift can significantly impact AI agents because, without predictable patterns and consistent behaviour, agents may fail to function as intended. In fact, testing has shown that 75% of APIs have at least one nonconformant endpoint, highlighting how widespread drift is. At scale, it becomes unmanageable, as organisations cannot manually keep pace with every deviation.
As more AI agents are onboarded, teams must orchestrate and streamline requests to downstream APIs. A key enabler here is the Model Context Protocol (MCP), an open standard that gives AI agents a consistent interface for accessing APIs, allowing them to communicate and orchestrate requests more efficiently.
While MCP unlocks scale for AI agent developers, it also introduces new complexities and operational strain for the downstream applications those agents rely on: the same standard interface that makes APIs easier for agents to consume can increase API traffic dramatically, placing a significant burden on the organisations running them.
Making enterprise APIs AI-ready
To prepare for agentic AI, enterprises must rethink how they design, manage and secure their APIs. A good starting point is ensuring OpenAPI specifications are accurate and updated regularly. OpenAPI definitions should be treated as living contracts, updated continuously and embedded into development workflows.
An accurate and current specification is not only useful for human developers but is often the primary way AI agents learn to consume an API. Keeping it current reduces the likelihood of API drift, but it must be combined with continuous testing and monitoring of the API's runtime behaviour to prevent drift effectively.
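One simple form this runtime monitoring can take is comparing the fields a live endpoint actually returns against the properties its specification declares. The sketch below is a minimal, hypothetical illustration (the schema and response data are invented for the example); a real check would run continuously against live traffic or scheduled test calls.

```python
# Minimal drift check: compare a live response's fields against the
# properties declared in an OpenAPI-style schema fragment.

def find_drift(documented_schema: dict, actual_response: dict) -> dict:
    """Return fields that have drifted between spec and reality."""
    documented = set(documented_schema.get("properties", {}))
    actual = set(actual_response)
    return {
        "undocumented_fields": sorted(actual - documented),  # appeared without a spec update
        "missing_fields": sorted(documented - actual),       # documented but no longer returned
    }

# The documented contract says a ticket has id, status and owner...
schema = {"properties": {"id": {}, "status": {}, "owner": {}}}
# ...but the live endpoint has silently renamed a field.
response = {"id": 42, "status": "OPEN", "assignee": "sam"}

print(find_drift(schema, response))
# → {'undocumented_fields': ['assignee'], 'missing_fields': ['owner']}
```

Flagging both directions matters: undocumented fields can leak data agents were never meant to see, while missing fields silently break agent workflows built on the spec.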
Another way for organisations to strengthen and leverage API specifications is by ensuring APIs provide machine-readable hints and clear descriptions, as LLM-based agents rely on parsing text to understand how to interact with endpoints. Descriptive field names and endpoint summaries make the API more interpretable, while explicit constraints such as minimum/maximum values or regex patterns help agents avoid sending invalid inputs. For example, a description like “status must be one of [OPEN, CLOSED]” guides agents to comply automatically.
Additionally, formalising business logic in the specification or a companion rules document ensures AI agents follow defined workflows and rules, improving reliability, safety, and alignment with organisational policies.
Finally, deploying agent gateways or MCP servers provides a controlled layer between APIs and autonomous agents, ensuring inputs are validated and behaviour is monitored. This layer also serves developers integrating AI now or in the future: once deployed, it gives organisations control over how their systems are used by AI agents, stopping agentic AI from misusing or overloading an application.
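The core responsibilities of such a layer can be sketched in a few lines. The example below is a deliberately minimal, hypothetical gateway (the endpoint names and limits are invented) that rejects calls to unknown endpoints and throttles each agent's request rate over a sliding window; a production gateway would add authentication, logging and payload validation on top.

```python
from collections import defaultdict, deque

# Minimal agent-gateway sketch: validate the endpoint and throttle
# per-agent request rates before forwarding downstream.

class AgentGateway:
    def __init__(self, max_calls: int, window_seconds: float, allowed_endpoints: set):
        self.max_calls = max_calls
        self.window = window_seconds
        self.allowed = allowed_endpoints
        self.history = defaultdict(deque)  # agent_id -> recent call timestamps

    def handle(self, agent_id: str, endpoint: str, now: float) -> str:
        if endpoint not in self.allowed:
            return "rejected: unknown endpoint"      # block hallucinated endpoints
        calls = self.history[agent_id]
        while calls and now - calls[0] > self.window:
            calls.popleft()                          # drop calls outside the window
        if len(calls) >= self.max_calls:
            return "throttled: rate limit exceeded"  # protect the downstream API
        calls.append(now)
        return f"forwarded: {endpoint}"              # hand off to the real API

gw = AgentGateway(max_calls=2, window_seconds=60, allowed_endpoints={"/tickets"})
print(gw.handle("agent-1", "/tickets", now=0))   # → forwarded: /tickets
print(gw.handle("agent-1", "/tickets", now=1))   # → forwarded: /tickets
print(gw.handle("agent-1", "/tickets", now=2))   # → throttled: rate limit exceeded
print(gw.handle("agent-1", "/orders", now=3))    # → rejected: unknown endpoint
```

The sliding-window throttle is what absorbs the burst behaviour described above: an agent orchestrating dozens of parallel calls hits the limit and backs off, rather than overloading the application.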
These practical recommendations will prepare any team for more automated, machine-to-machine interaction. Agentic AI is now operating in the wild, and the systems that are easiest to work with will grow along with the entire AI ecosystem.
Jamie Beckland is Chief Product Officer at APIContext
Main image courtesy of iStockPhoto.com and PonyWang

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543