
Secure MCP: a reality, not just a dream

Steve Riley at Netskope explains the popularity of Model Context Protocol


You’ve probably encountered an MCP reference somewhere—breathless vendor propaganda, an irritatingly animated colleague, even a dull airport advertisement. What’s all the fuss? Why, it’s an AI thing, of course! Model Context Protocol is a standardised method for AI applications to access the information (context) they need to perform various tasks. It replaces bespoke per-resource proprietary connections and has become immensely popular across the AI spectrum. The diagram below (Figure 1), from MCP’s GitHub, shows some examples.

Figure 1. MCP provides a standardised way to connect AI applications to external systems

An MCP conversation consists of four entities: a host, a client, a server, and a resource. A host is an AI application that’s built to use MCP. When a host needs to communicate with a resource, it spawns a client to interact with the MCP server associated with that resource. A server exposes various types of context that the resource can offer. These could be functions (like file/database operations and API calls), data sources (like file/database contents, API responses), or prompt-based workflows (like interactions with LLMs). Anthropic introduced MCP in November 2024 and donated it to the Linux Foundation in December 2025.
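
To make this concrete, here is a minimal sketch of an MCP server built with the FastMCP helper from the official Python SDK (pip install mcp). The support-desk server, ticket lookup, runbook document, and triage prompt are hypothetical placeholders, not a reference implementation; they simply map onto the three kinds of context described above.

```python
# A minimal sketch of an MCP server, using the FastMCP helper from the official
# Python SDK (pip install mcp). The support-desk server, ticket lookup, runbook
# document, and triage prompt are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("support-desk")  # the server a host's client will connect to

@mcp.tool()
def ticket_status(ticket_id: str) -> str:
    """Function-style context: look something up in the backing system."""
    # A real server would query your ticketing database or API here.
    return f"Ticket {ticket_id}: open"

@mcp.resource("docs://runbook")
def runbook() -> str:
    """Data-source-style context: expose a document's contents."""
    return "1. Acknowledge the alert. 2. Check the dashboard. 3. Escalate."

@mcp.prompt()
def triage(ticket_id: str) -> str:
    """Prompt-style context: a reusable workflow the host can hand to an LLM."""
    return f"Summarise ticket {ticket_id} and propose next steps."

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; HTTP transports are also available
```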

 

Why is MCP suddenly so popular?

In a word: agents. For agents to do anything useful beyond just answering questions, they need to interact with resources. MCP removes the need for agents to implement specialised code for every interesting resource. Indeed, it is MCP that’s largely responsible for the zillions of agents now proliferating across the internet. Before MCP, AI applications relied on APIs to interact with resources. It’s why we see, for example, mostly one-to-one relationships between applications and LLMs: ChatGPT connects to OpenAI models, Claude connects to Anthropic models, Gemini connects to Google models, and so on. Each application implements the APIs required to interact only with its associated models and none of the others.

 

Agents—little chunks of software anyone can write—would be impossible to build effectively if they had to implement APIs required for dozens of distinct LLMs and an essentially limitless array of functions and data sources. If APIs represent a form of abstraction away from lower-level objects, then MCP represents a form of abstraction away from APIs. Recognising this, resource creators have busied themselves releasing MCP servers so that agents can scale, be reused, and work without any custom logic for their resources. All the major LLM providers and SaaS application vendors have MCP-server-enabled their products.

 

(Agent2Agent, or A2A, that other agent-related protocol gaining popularity, defines a standard for how agents can interact with each other. Google announced it in April 2025 and donated it to the Linux Foundation in June 2025. Both MCP and A2A will feature prominently in our AI-addled future—and yes, researchers have already discovered how to abuse A2A, too.)

 

 

How secure is MCP?

Early iterations of MCP ignored security (much as various internet protocols born in the 1970s did; when will we ever learn? sigh). Research by Backslash Security in June 2025 and Knostic in July 2025 discovered thousands of internet-accessible MCP servers that happily disgorged the contents of their resources to anyone who twisted the doorknobs. Unauthenticated requests freely exposed the details of production MCP servers, including the capabilities of their attached resources. Misconfigured MCP servers litter the internet, including local servers exposed to entire networks and servers that allow arbitrary command execution. Attackers can hide instructions that subvert resource behaviour, substitute a malicious resource at runtime, exfiltrate sensitive information, and create persistent back doors—which in turn could lead to prompt injection and context poisoning that nefariously manipulate LLM output.
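
What “twisting the doorknobs” looks like is mundane: MCP speaks JSON-RPC, so on a server with no authentication, a single tools/list request reveals everything the attached resource can do. The sketch below is a simplified illustration of that exchange; the field values and the run_sql tool are hypothetical.

```python
# A simplified illustration of the exchange the researchers describe. On an
# unauthenticated, internet-exposed server, anyone who can reach the endpoint
# can send this JSON-RPC request and learn exactly what the attached resource
# can do. The run_sql tool and its schema are hypothetical.
probe = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",  # standard MCP method for enumerating a server's tools
}

# A representative response from a misconfigured production server:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "run_sql",
            "description": "Execute arbitrary SQL against the orders database",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
            },
        }]
    },
}
# From here, an unauthenticated tools/call request is all it takes to use them.
```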

 

In June 2025 the MCP specification added an authorisation layer, following OAuth 2.1 conventions, that aims, in its own words, “to build trust between MCP clients and MCP servers”. The MCP project describes five categories of attacks and mitigations, and recommends adding authorisation when a server accesses user-owned data (whether directly or via APIs), in environments where strict access controls are prevalent, or for operations like rate-limiting, usage tracking, and auditing. Unfortunately, its use is optional — which suggests that many MCP-enabled interactions will occur without any visibility or control.
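
Conceptually, the authorisation layer treats the MCP server as an OAuth 2.1 protected resource: a request without a valid bearer token is rejected before any MCP method is handled. The sketch below shows that gatekeeping idea only; the validate_token helper and the metadata URL are placeholders, not the SDK’s own API.

```python
# A conceptual sketch of the optional authorisation layer: the MCP server is an
# OAuth 2.1 protected resource, so a request without a valid bearer token is
# rejected before any MCP method runs. validate_token() and the metadata URL
# are hypothetical placeholders, not part of the MCP SDK.
def validate_token(token: str) -> bool:
    # Placeholder: a real implementation verifies the token's signature,
    # expiry, and audience against your identity provider.
    return False

def require_authorisation(headers: dict) -> tuple[int, dict]:
    """Return an HTTP status and response headers for an incoming MCP request."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        # Unauthenticated callers are told where to obtain a token.
        metadata = "https://mcp.example.com/.well-known/oauth-protected-resource"
        return 401, {"WWW-Authenticate": f'Bearer resource_metadata="{metadata}"'}
    if not validate_token(auth.removeprefix("Bearer ")):
        return 401, {"WWW-Authenticate": 'Bearer error="invalid_token"'}
    return 200, {}  # token accepted; go on to handle the MCP request itself
```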

 

More recently, the MCP project added a registry that improves discoverability of various public MCP servers. Most companies will create private subregistries that limit the servers to which clients may connect. Private subregistries can incorporate security scanning and vulnerability checks to ensure that clients interact only with approved and risk-appropriate servers. Notably, neither the top-level registry nor public or private subregistries can enforce integrity-based security controls such as digital signatures and validation logic. These might be added at some later date.
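
One way to picture a private subregistry is as an allowlist applied to a registry listing before clients are ever told a server exists. The sketch below assumes a hypothetical registry endpoint and response shape; it shows the filtering idea, not a documented API.

```python
# A sketch of what a private subregistry check might look like: pull a registry
# listing, keep only servers your security team has scanned and approved, and
# block everything else by default. The registry URL, response shape, and
# approval list are assumptions for illustration, not a documented API contract.
import requests

APPROVED_SERVERS = {"io.github.example/crm-connector"}  # hypothetical internal allowlist

def approved_servers(registry_url: str = "https://registry.example.internal/v0/servers"):
    listing = requests.get(registry_url, timeout=10).json()
    for server in listing.get("servers", []):
        if server.get("name") in APPROVED_SERVERS:
            yield server  # clients may connect to these
        # anything else is blocked by default and can be queued for review
```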

 

 

How to improve MCP security

To incorporate AI agents into workflows, you must take the necessary precautions to avoid MCP-related risks. Many guides offer recommendations for developing and deploying MCP securely. I recommend two: best practices suggested by the MCP project and the OWASP Top 10 for MCP. Both are comprehensive and include thorough steps for mitigating risks.

 

Interestingly, well-built MCP servers add a layer of security to the data sources they expose. They: 

  • Centralise access to multiple sources of data and add authentication, authorisation, data masking (sketched after this list), and appropriate retrieval even when the sources themselves lack such capabilities.
  • Act as gateways to APIs, managing authentication, formatting, and tokenisation even when underlying APIs don’t offer such functions.
  • Improve privacy and compliance by controlling access and auditing activities, even when applications can’t do this themselves. 
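
As a rough illustration of the masking point above, the sketch below puts an MCP server in front of a customer database that has no redaction of its own; the backend call and field names are hypothetical.

```python
# A rough sketch of the masking idea: the MCP server fronts a customer database
# that has no redaction of its own and strips sensitive fields before anything
# reaches the client. The backend call and field names are hypothetical.
import re
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-gateway")

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def fetch_customer(customer_id: str) -> dict:
    # Placeholder for the real database or API call behind this server.
    return {"id": customer_id, "name": "Ada Lovelace",
            "email": "ada@example.com", "card": "4111111111111111"}

@mcp.tool()
def customer_profile(customer_id: str) -> dict:
    """Return a customer record with sensitive fields masked."""
    record = fetch_customer(customer_id)
    record["email"] = EMAIL.sub("[redacted email]", record["email"])
    record["card"] = "****" + record["card"][-4:]  # never expose full card numbers
    return record  # the SDK serialises the result for the client

if __name__ == "__main__":
    mcp.run()
```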

Nevertheless, MCP lacks a native policy layer: it can’t offer visibility, enforcement, or governance of data flows.

 

This lack is especially serious because not every aspect of MCP remains under your control. Several of the thousands of publicly available MCP servers will probably tempt your company because they can offer value when you connect them to enterprise data and permit them to perform autonomous operations. Fortunately, AI security vendors now offer products that add a policy layer, which enables you to incorporate public servers safely. Insist on products that offer these capabilities and evaluate them thoroughly: 

  • Continuously identify, in real time, the MCP servers and clients in use within your company, including attributes such as name, ID, URL, version, host, data source, and protocol.
  • Apply risk scoring to MCP servers, to help you quickly assess and prioritise which AI tools, agents, or integrations pose the greatest security and compliance risks.
  • Manage access using granular, context-based policy controls (including a default block option for MCP traffic) and real-time prevention of data leaks.
  • Detect and monitor non-human traffic between and across MCP servers, clients, functions, hosts, data sources, and development tools.
  • Log MCP events, including sessions, initialisations, function requests and responses, and deployments.
  • Detect and classify sensitive data, such as intellectual property and passwords, in use with MCP-enabled applications. 

As with other kinds of layered controls, these products are effective because they get in line and see all the MCP traffic between clients and servers—they apply your policies to your traffic to ensure your data goes and stays where it should, not where it shouldn’t. I’d even suggest that interactions between your clients and your internal servers should pass through the inline policy layer, too.
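
For a feel of what such an inline policy point does, the sketch below checks each client-to-server message against a server allowlist and a crude credential pattern before forwarding, and logs the decision. The rules and server names are hypothetical, and real products of course do far more.

```python
# A sketch of the inline idea: a policy point that sees each client-to-server
# MCP message before it is forwarded, applies simple rules (server allowlist,
# crude sensitive-data pattern, default block) and logs the decision. The
# rules and server names are hypothetical; real products do far more.
import json
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp-policy")

ALLOWED_SERVERS = {"crm-gateway", "support-desk"}  # hypothetical internal allowlist
SECRET_PATTERN = re.compile(r"(?i)(api[_-]?key|password)\s*[:=]")

def enforce_policy(server_name: str, message: dict) -> bool:
    """Return True if the message may be forwarded to the named MCP server."""
    if server_name not in ALLOWED_SERVERS:
        log.warning("blocked: %s is not an approved MCP server", server_name)
        return False
    if message.get("method") == "tools/call":
        payload = json.dumps(message.get("params", {}))
        if SECRET_PATTERN.search(payload):
            log.warning("blocked: possible credential in tools/call to %s", server_name)
            return False
    log.info("allowed: %s -> %s", message.get("method"), server_name)
    return True
```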

 

 

Embrace your future

If your company imagines that agentic AI will generate value—which describes most companies—then you have some work to do before hundreds of agents seep into production. Be the champion of security I know you are: read the guides, build security into your MCP clients and servers, and add that necessary policy layer. Demonstrate to your leadership that secure agentic AI can, in fact, be a thing, even a thing that confers competitive advantage on your company.

 


 

Steve Riley is Field CTO at Netskope

 

Main image courtesy of iStockPhoto.com and Pakorn Supajitsoontorn
