Can a standardised framework accelerate adoption of agentic AI?

Despite calls from politicians, experts, commentators and even tech leaders themselves to curb the pace of AI development in order to minimise social disruption, the technology continues to advance at breakneck speed.
New capabilities and tweaks are announced almost monthly. ChatGPT, arguably the shiniest new toy the tech industry has ever produced, was released only three years ago. Yet the self-contained chatbot and completion tool – cut off from browsers, files and external systems – that defined its early incarnation is already becoming a thing of the past.
Over the past two years, generative AI has evolved from a standalone tool into an agentic solution that can not only draft text and explain or summarise concepts, but also make multi-step plans and execute them.
Whereas single-task models require the user to prompt each task separately, later iterations can chain prompts and reach external databases – first through plug-ins and later via APIs – to draw on current, even real-time, data.
As gen AI models are becoming more tightly integrated with external resources, their agency is also growing. Today’s agentic systems can deconstruct complex objectives into workflows, decide when and how to call tools, execute those calls, and feed the results back into an ongoing reasoning loop.
This shift from reactive assistant to semi-autonomous actor marks one of the most important transitions in AI to date.
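That loop can be made concrete with a short sketch. Everything below is a simplified assumption: llm_complete and call_tool stand in for whichever model and tool interfaces a host actually uses.

```python
# Minimal sketch of an agentic reasoning loop. The helpers passed in
# (llm_complete, call_tool) are hypothetical placeholders, not a vendor API.
def run_agent(objective: str, llm_complete, call_tool, max_steps: int = 10) -> str:
    context = [f"Objective: {objective}"]
    for _ in range(max_steps):
        # Ask the model what to do next, given everything gathered so far.
        decision = llm_complete("\n".join(context) + "\nNext action?")
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        # Otherwise the model names a tool and its arguments,
        # e.g. "search_flights: LHR to JFK on 2 October".
        tool_name, _, arguments = decision.partition(":")
        result = call_tool(tool_name.strip(), arguments.strip())
        # Feed the tool output back into the ongoing reasoning loop.
        context.append(f"{decision} -> {result}")
    return "Step budget exhausted before the objective was met."
```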
Capabilities required to achieve autonomy
True autonomy, however, demands more than just tool access. Unlike the linear, question-and-answer workflows of traditional LLMs and copilots, agentic AI must maintain context across the entire execution of a multi-step task.
This is no small feat. Managing context is key to the efficient and reliable operation of LLMs. Although many of the latest models accept context windows of 128,000 tokens or more – a dramatic leap from the original ChatGPT’s 4,096 – these windows are still finite.
Uploaded documents, conversation history, intermediate results and tool outputs all count toward the token budget.
Complex, multi-step tasks quickly outgrow even the largest context windows. When that happens, agents forget prior actions or break down halfway through a task.
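As a rough illustration of the problem, the sketch below uses an invented four-characters-per-token estimate and a fixed 128,000-token budget rather than any real tokenizer.

```python
# Illustrative only: a crude token estimate (~4 characters per token) and a
# fixed 128,000-token window, to show how a multi-step task overflows it.
CONTEXT_WINDOW = 128_000

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits(history: list[str]) -> bool:
    return sum(estimate_tokens(chunk) for chunk in history) <= CONTEXT_WINDOW

history = ["(contract text) " * 30_000]                            # a large uploaded document
history += [f"step {i}: tool output " * 100 for i in range(200)]   # many intermediate results

if not fits(history):
    # Without external memory the agent must drop or summarise earlier steps,
    # losing track of prior actions mid-task.
    print("Context window exceeded: earlier steps would have to be discarded.")
```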
Advancement toward autonomous models therefore required something fundamentally new: an external mechanism for persistent context that lives outside the model itself.
Enter the Model Context Protocol. In November 2024, Anthropic released this open-source framework designed to provide a “USB-C” port for AI – an interface where LLMs can pull fresh context from files, directories or external tools in a standardised way, without developers having to write bespoke integration code for every new resource.
At its core, MCP follows a client-server architecture. The host application – typically an AI assistant or agent – acts as the system’s “brain”.
It connects to external resources via multiple MCP clients, each of which manages a secure, one-to-one connection to a server, enabling the host to tap into multiple servers simultaneously.
Servers, in turn, are often described as librarians. They don’t just store data; they tell the agent what data and tools are available and how best to use them.
The connection between agent and server starts with a “handshake”: the AI application creates a client to connect to a single server and sends an initial request, and the server responds by listing the capabilities – data sources and tools – it supports.
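Under the hood, MCP messages are JSON-RPC. Sketched below as Python dictionaries is roughly what that opening exchange looks like; the shapes follow the public MCP specification, but the values are abridged and illustrative.

```python
# Abridged, illustrative JSON-RPC payloads for the MCP "handshake".
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",          # spec version the client speaks
        "clientInfo": {"name": "example-host", "version": "0.1.0"},
        "capabilities": {},                        # what the client itself offers
    },
}

initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2025-03-26",
        "serverInfo": {"name": "docs-server", "version": "1.2.0"},
        # The server declares which classes of capability it supports; the
        # concrete lists come from follow-up calls such as "tools/list",
        # "resources/list" and "prompts/list".
        "capabilities": {"resources": {}, "tools": {}, "prompts": {}},
    },
}

list_tools_request = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
```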
Beyond resources and tools, MCP servers also expose prompts: playbooks that guide the AI agent on how to interact with what the server offers.
These standardised, reusable templates mean the AI agent’s user doesn’t need deep prompting expertise to use MCP technology.
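For example, a support-desk server might publish a reusable prompt that any client can fetch and hand to the model. The template name, arguments and wording below are invented for illustration; the request and response shapes follow MCP’s prompts/get method.

```python
# Illustrative prompts/get exchange with a hypothetical "summarise_ticket" template.
get_prompt_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "prompts/get",
    "params": {"name": "summarise_ticket", "arguments": {"ticket_id": "HELP-4182"}},
}

get_prompt_response = {
    "jsonrpc": "2.0",
    "id": 3,
    "result": {
        "description": "Summarise a support ticket for a weekly report",
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Summarise ticket HELP-4182 in three bullet points, "
                            "flagging anything that needs escalation.",
                },
            }
        ],
    },
}
```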
Once the connection is live, the client invokes whichever capabilities are needed for a given task and relays responses back to the host.
Much like a team of journalists working in parallel on different aspects of a story, individual clients operate independently, while the host aggregates their findings into a coherent whole.
The final goal of the process is to enable the host to integrate the information gathered into the LLM’s context window, so that the model can deliver the most relevant response, grounded in rich, real-time data.
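Host-side, that last step might look something like the sketch below. The tools/call payload follows the MCP method of that name, while search_flights and call_llm are hypothetical stand-ins for a server tool and the host’s own model API.

```python
# Illustrative: the host invokes a server tool, then folds the gathered results
# into the prompt it sends to the model. Tool and model names are hypothetical.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {"name": "search_flights",
               "arguments": {"route": "LHR-JFK", "date": "2025-10-02"}},
}

def ground_response(user_question: str, tool_results: list[str], call_llm) -> str:
    # Aggregate each client's findings into a single context block...
    context_block = "\n".join(f"- {result}" for result in tool_results)
    # ...and give the model a prompt grounded in that fresh data.
    prompt = (
        f"Context gathered from connected MCP servers:\n{context_block}\n\n"
        f"Question: {user_question}\nAnswer using only the context above."
    )
    return call_llm(prompt)
```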
Crucially, MCP enables context persistence. Within a session, agents can retain memory via dedicated in-memory storage. Across sessions, servers assign unique session IDs that allow clients to reload prior context – conversation history, summaries, intermediate results – when work resumes.
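Over the protocol’s HTTP transport, for instance, resuming might look roughly like this; the Mcp-Session-Id header follows recent revisions of the streamable HTTP transport, but the endpoint and exact behaviour are assumptions to be checked against the current specification.

```python
# Illustrative sketch of resuming an MCP session over HTTP; endpoint, payloads
# and the exact header behaviour should be checked against the current spec.
import requests

SERVER_URL = "https://example.com/mcp"   # hypothetical endpoint

# First visit: the server issues a session ID during initialisation.
first = requests.post(SERVER_URL, json={
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {"protocolVersion": "2025-03-26",
               "clientInfo": {"name": "example-host", "version": "0.1.0"},
               "capabilities": {}},
})
session_id = first.headers.get("Mcp-Session-Id", "")  # assume the server issued one

# Later visit: replaying the ID lets the server reload prior context
# (conversation history, summaries, intermediate results) for this client.
resumed = requests.post(
    SERVER_URL,
    json={"jsonrpc": "2.0", "id": 2, "method": "tools/list"},
    headers={"Mcp-Session-Id": session_id},
)
```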
Why MCP matters for agentic AI
This ability to dynamically supply real-time awareness of goals, history, tools and environment is what makes MCP so important for agentic AI. It eliminates the constant need for human intervention between steps and allows agents to operate with a higher degree of independence.
In a fully realised scenario, agentic AI could, for example, organise a three-day business trip end-to-end.
One agent would check calendars and budgets and break the objective into sub-tasks. Another would research flights and hotels and rank options. A third would handle bookings, while a monitoring agent would manage updates, disruptions and notifications in real time.
Currently, such seamless automation remains aspirational. Most implementations still rely on bounded autonomy – with limits on what agents can do, confidence thresholds that trigger human review and approval steps for high-impact decisions.
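In practice, bounded autonomy often amounts to a simple policy gate in the host code, along these lines (the thresholds, action names and approval hook are all invented for illustration).

```python
# Illustrative policy gate for bounded autonomy; every constant and hook here
# is hypothetical.
HIGH_IMPACT_ACTIONS = {"book_flight", "issue_payment", "cancel_reservation"}
CONFIDENCE_THRESHOLD = 0.85

def execute_with_guardrails(action: str, confidence: float, execute, request_human_approval):
    # High-impact or low-confidence steps are routed to a human reviewer first.
    if action in HIGH_IMPACT_ACTIONS or confidence < CONFIDENCE_THRESHOLD:
        if not request_human_approval(action, confidence):
            return "Action declined by reviewer."
    return execute(action)
```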
However, the fact that every major AI and cloud provider has integrated the protocol into its ecosystem and an estimated 25 per cent of Fortune 500 companies are already using it in some form suggests rapid adoption.
According to Zuplo’s Industry Report 2025, there are now over 17,000 publicly listed MCP servers, run by service providers to offer secure access to their own documentation and infrastructure.
Other technological domains – from IoT to blockchain – have shown how a fragmented landscape of competing protocols can stifle interoperability and slow deployment.
Thanks to the growing adoption of Anthropic’s MCP standard, agentic AI seems to have avoided this pitfall. Still, standardisation is not without risk. MCP servers dramatically expand what models can access, but they also introduce new attack surfaces.
Whether developers will apply hard-won security lessons from earlier digital technologies remains an open question. Will security be designed into MCP from the start or, once again, treated as an afterthought?