Glyn Morgan at Salt Security argues that API security is integral to AI governance because you can’t govern AI responsibly if you can’t see and secure the APIs that power it

Application Programming Interfaces (APIs) have become the unseen machinery driving every AI-enabled business process. Whether it’s a large language model (LLM) workflow, an autonomous agent, or a Model Context Protocol (MCP) tool call, each interaction runs through an API. Together, these interfaces form the organisation’s API fabric and the AI agent action layer: the point where agents trigger tools, exchange contextual information and execute tasks.
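To make that concrete, here is a minimal sketch in Python (the endpoint, token and header names are invented for illustration) of what an agent tool call looks like at the API level. Everything governance cares about is visible in the request itself: the caller’s identity, the agent involved and the data exchanged.

```python
import json
import urllib.request

# Minimal sketch: an agent "tool call" is ultimately just an API request.
# The endpoint path, token and header names here are hypothetical.
def call_tool(base_url: str, token: str, tool: str, arguments: dict) -> dict:
    """Invoke an agent tool via the organisation's API fabric."""
    request = urllib.request.Request(
        f"{base_url}/tools/{tool}",
        data=json.dumps({"arguments": arguments}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",  # identity: who is acting
            "Content-Type": "application/json",
            "X-Agent-Id": "invoice-agent-01",    # traceability: which agent
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```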
This makes APIs effectively the operational backbone of AI, and the AI agent action layer the focal point for governance. As landmark frameworks such as the EU AI Act and ISO/IEC 42001 arrive, the connection between AI governance and API governance is becoming impossible to ignore. These regulations demand traceability, robustness and security across the AI lifecycle, all of which depend on how well APIs are secured and managed.
When compliance controls are embedded directly into API design, development and deployment, governance shifts from a bottleneck to an efficiency driver. Audit trails, data-flow visibility and access management become part of the release process, rather than an afterthought. The result is faster delivery cycles with built-in compliance, which can in turn become a competitive advantage in regulated markets.
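As one illustration of what embedding compliance into the release process can mean, the sketch below shows a hypothetical CI gate (the rule and file handling are our own invention, not a prescribed control) that scans an OpenAPI specification and fails the build if any operation lacks an explicit security requirement.

```python
import json
import sys

# Illustrative release-gate check: fail the build if any OpenAPI
# operation is published without a declared security requirement.
HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

def unsecured_operations(spec: dict) -> list[str]:
    """Return 'METHOD /path' for every operation without a security block."""
    has_global_security = bool(spec.get("security"))
    findings = []
    for path, item in spec.get("paths", {}).items():
        for method, operation in item.items():
            if method not in HTTP_METHODS:
                continue  # skip non-operation keys such as "parameters"
            if not has_global_security and not operation.get("security"):
                findings.append(f"{method.upper()} {path}")
    return findings

if __name__ == "__main__":
    with open(sys.argv[1]) as handle:
        problems = unsecured_operations(json.load(handle))
    for finding in problems:
        print(f"unsecured operation: {finding}")
    sys.exit(1 if problems else 0)
```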
As AI regulation takes shape, APIs are emerging as the critical control point for risk, compliance and resilience.
When compliance becomes an API problem
The latest wave of AI regulation isn’t just about ethics or fairness; it’s also deeply technical. For instance, the requirements of the EU AI Act’s Article 15 on “accuracy, robustness and cybersecurity” for high-risk systems cannot be met without securing the APIs that those systems depend on.
Article 10, which mandates strong “data and data governance”, equally relies on API-level controls to ensure integrity and prevent data poisoning. Even the “logging and traceability” obligations in Articles 12 and 20 depend on visibility into the API layer: where data moves, who accesses it and how it’s processed.
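What that visibility can look like in code is very simple. The sketch below is a hypothetical Python shim (the endpoint, payload shape and logger name are illustrative) that wraps an API handler so every call emits a structured audit record: who called which endpoint, when, and which fields moved, without logging the values themselves.

```python
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("api.audit")

def audited(endpoint: str):
    """Wrap an API handler so every call emits a structured audit record."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(user_id: str, payload: dict):
            audit_log.info(json.dumps({
                "ts": time.time(),
                "endpoint": endpoint,        # what was called
                "user": user_id,             # who called it
                "fields": sorted(payload),   # which data moved (keys, not values)
            }))
            return handler(user_id, payload)
        return wrapper
    return decorator

# Hypothetical handler: a scoring endpoint fronting a model.
@audited("/v1/scoring")
def score_applicant(user_id: str, payload: dict) -> dict:
    return {"score": 0.42}  # stand-in for the model call

score_applicant("analyst-7", {"income": 52000, "postcode": "SW1"})
```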
Similarly, ISO/IEC 42001’s requirements for lifecycle management and risk assessment hinge on maintaining a complete, continuously updated API inventory. Simply put, AI governance without API governance just doesn’t work.
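In data terms, such an inventory can start out as modestly as the following sketch (the fields and example entries are illustrative, not a schema drawn from the standard). The point is that each endpoint carries an owner, a data classification and an AI-scope flag, so lifecycle and risk questions become simple queries rather than investigations.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative inventory record; one plausible shape, not a mandated schema.
@dataclass
class ApiRecord:
    endpoint: str
    owner: str                # accountable team, for audit questions
    data_classification: str  # e.g. "public", "personal", "model-io"
    feeds_ai_system: bool     # in scope for AI lifecycle controls?
    last_reviewed: date
    auth_scheme: str = "oauth2"

inventory: list[ApiRecord] = [
    ApiRecord("/v1/scoring", "risk-ml", "personal", True, date(2025, 6, 1)),
    ApiRecord("/v1/status", "platform", "public", False, date(2025, 6, 1)),
]

# A lifecycle-management question the inventory makes trivial to answer:
ai_scope = [r.endpoint for r in inventory if r.feeds_ai_system]
```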
This growing overlap means compliance and security teams must collaborate earlier in the development lifecycle. Cross-functional governance models that bring together data scientists, API architects and security leaders can help translate regulatory principles into actionable, auditable technical controls.
The expanding attack surface
While it’s true that APIs enable AI innovation, they also broaden the attack surface if not protected properly. The past year has seen a sharp rise in API-related security incidents, many stemming from authenticated but compromised sources. Attackers increasingly target external-facing APIs, exploiting misconfigurations, excessive permissions and broken authorisation logic: vulnerabilities that map directly to OWASP API Security Top 10 categories such as API1 (Broken Object Level Authorisation) and API8 (Security Misconfiguration).
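Broken object level authorisation is easiest to see side by side. In the sketch below (with invented users and documents), both handlers assume an already authenticated caller, but only the second checks whether that caller is entitled to the specific object requested, which is exactly the check API1 is about.

```python
# Illustrative BOLA (API1) pattern with invented document and user names.
class Forbidden(Exception):
    pass

DOCUMENTS = {"doc-1": {"owner": "alice", "body": "training-data manifest"}}

def get_document_vulnerable(user: str, doc_id: str) -> dict:
    # BOLA: the caller is authenticated, but we never check *whose* object this is.
    return DOCUMENTS[doc_id]

def get_document_fixed(user: str, doc_id: str) -> dict:
    doc = DOCUMENTS[doc_id]
    # Object-level authorisation: verify the caller may access this object.
    if doc["owner"] != user:
        raise Forbidden(f"{user} may not read {doc_id}")
    return doc

get_document_fixed("alice", "doc-1")      # OK: alice owns doc-1
# get_document_fixed("mallory", "doc-1")  # raises Forbidden
```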
This shift challenges the traditional perimeter-based security mindset. Once inside the system, compromised agents or users can move laterally through APIs, accessing sensitive data and even altering model behaviour. In the context of AI, such attacks can lead to data leakage, model theft, or manipulation, essentially turning governance failures into actual business risks.
For organisations experimenting with generative AI or multi-agent systems, this expanded risk surface can also affect intellectual property and customer trust. A single insecure endpoint could expose data as well as the very logic that powers proprietary AI models.
Governance by design, not by audit
Both ISO/IEC 42001 and the EU AI Act emphasise accountability and human oversight from the start of the AI lifecycle. Building “compliance by design” into API architecture not only prepares organisations for regulation but also strengthens operational resilience.
Those that invest early in continuous API discovery, automated policy enforcement and runtime protection will find themselves better equipped to demonstrate compliance, respond to audits and maintain trust with regulators and customers alike.
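Continuous discovery, in particular, need not be exotic: in spirit it is a diff between the endpoints observed in traffic and the endpoints declared in the inventory, as in this simplified sketch (the log format and endpoint names are invented). Anything seen on the wire but missing from the inventory is a shadow API to triage.

```python
# Sketch of continuous API discovery: diff what traffic shows against what
# the inventory declares. Log lines and endpoints are hypothetical.
DECLARED = {"/v1/scoring", "/v1/status"}

ACCESS_LOG = [
    '10.0.0.7 "POST /v1/scoring"',
    '10.0.0.9 "GET /v1/status"',
    '10.0.0.9 "POST /v1/debug/replay"',  # shadow endpoint: never declared
]

def observed_endpoints(log_lines: list[str]) -> set[str]:
    """Extract request paths from simplified gateway access-log lines."""
    return {line.split('"')[1].split()[1] for line in log_lines}

shadow_apis = observed_endpoints(ACCESS_LOG) - DECLARED
print(sorted(shadow_apis))  # ['/v1/debug/replay']
```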
A recent Gartner® report reinforces this shift, noting that Model Context Protocol (MCP) and agent-to-agent (A2A) interactions rely entirely on APIs for context and data exchange, and urging organisations to “double down on API security” through advanced monitoring, rate limiting and access controls.
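Rate limiting agent traffic can start as simply as a per-agent token bucket; the sketch below is one generic implementation (the class and the budget figures are illustrative, not taken from the report).

```python
import time

# Minimal token-bucket rate limiter: one generic way to cap agent traffic.
class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

agent_budget = TokenBucket(rate_per_sec=5, burst=10)  # per-agent budget
if not agent_budget.allow():
    raise RuntimeError("429: agent exceeded its tool-call budget")
```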
Ultimately, treating APIs as first-class citizens in AI governance frameworks allows organisations to future-proof their investments. With clear API inventories, consistent policies and traceable interactions, businesses gain compliance assurance and build a stronger foundation for innovation.
APIs as the core of responsible AI
In the age of agentic systems and autonomous workflows, securing these systems can seem daunting, even overwhelming. It helps, however, to see APIs not merely as integration tools but as the control surface for the AI action layer itself: a fabric that determines how data flows, how decisions are made and how systems interconnect.
The organisations that understand this and treat API security and governance as integral to AI governance will not only stay compliant but also build a more secure and trustworthy foundation for innovation. In other words: you can’t govern AI responsibly if you can’t see and secure the APIs that power it.
Glyn Morgan is UK&I country manager at Salt Security