
Governing agentic AI: the accountability gap

Dr Paul Dongha at NatWest Group argues that the absence of clear accountability models and auditable governance structures introduces systemic risk to organisations using AI


Agentic AI has arrived. Perplexity’s Comet browser demonstrates autonomous, multi-step execution: it takes a single natural-language request from a shopper (e.g. "Find top 3 headphones under $400..."), breaks it down, autonomously navigates multiple retailer sites (Amazon, Best Buy …), extracts and compares dynamic data (such as price, returns and shipping), and finally takes the user directly to the best product’s checkout page for purchase authorisation.
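To make the pattern concrete, below is a minimal sketch of the kind of plan–act–observe loop such an agent runs. Every function and data structure here (call_llm, fetch_listings, the Offer record) is a hypothetical placeholder for illustration, not any vendor’s actual implementation.

```python
# Minimal sketch of an agentic shopping loop: plan, act, observe, decide.
# call_llm and fetch_listings are hypothetical placeholders, not a real API.
from dataclasses import dataclass

@dataclass
class Offer:
    retailer: str
    product: str
    price: float
    returns_days: int
    shipping: str

def call_llm(prompt: str) -> str:
    """Placeholder for a language-model call that decomposes the request."""
    raise NotImplementedError

def fetch_listings(retailer: str, query: str) -> list[Offer]:
    """Placeholder for autonomous navigation and extraction on a retailer site."""
    raise NotImplementedError

def run_shopping_agent(request: str, retailers: list[str]) -> Offer:
    # 1. Plan: turn the natural-language request into a concrete search query.
    query = call_llm(f"Extract a product search query from: {request}")
    # 2. Act/observe: gather live offers from each retailer.
    offers: list[Offer] = []
    for retailer in retailers:
        offers.extend(fetch_listings(retailer, query))
    # 3. Decide: compare the offers and hand the best one back to the user,
    #    who still authorises the purchase at checkout.
    return min(offers, key=lambda o: o.price)
```

Note that the final purchase still requires the user’s authorisation; the accountability questions below arise precisely because every step before that is taken autonomously.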

 

The rapid deployment of autonomous AI agents capable of independent decision-making exposes a major gap in current legal and regulatory systems. AI improves efficiency, but giving agents too much free rein (or ‘autonomy’) without clear accountability models and auditable governance structures introduces systemic risk. For engineers and technical professionals, governance is now a core engineering challenge, not just a legal one.

 

Technical solutions that establish accountability must be prioritised now, ahead of global regulation that may impose mandatory compliance requirements.

 

 

Who is responsible when AI makes a mistake?

The main legal problem is that AI systems break the usual link between a person’s command and a system’s outcome. When complex, adaptive AI makes a decision, we can’t easily trace the error back to a single human fault.

 

The EU recommends a hybrid liability model that divides responsibility among the people who control the AI’s life cycle, not the AI itself. This model looks at two timeframes: 

  1. Ex-ante responsibility (developer’s fault): This falls on the provider (the company that built the AI). The provider is responsible for defects in the core product, such as biased training data, flawed model design, or failing to test the limits of the system.
  2. Ex-post responsibility (operator’s fault): This targets the deployer (the company using the AI, like a bank or hospital). The deployer is responsible for not supervising the system correctly, failing to update it, or using it outside its intended purpose. In high-risk cases, laws like the ones described in the EU’s AI Act may create a presumption of causality, meaning the deployer is assumed to be at fault unless they can prove otherwise. 

The technical solution: one way of assigning responsibility is to provide absolute traceability. Every AI system could include Explainable AI (XAI) capabilities that automatically create an unchangeable record (an audit log) of every decision and the data used. This is hard to achieve in practice: AI systems are complex and operate on probabilities, so full explainability remains elusive. The EU’s AI Act, for example, makes this recordkeeping mandatory for high-risk systems to ensure traceability. Engineers must build this logging capability into the foundation of their AI systems.
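As an illustration of what such decision logging might look like, here is a minimal sketch of an append-only, hash-chained audit log. The field names and the chaining scheme are illustrative assumptions, not a prescribed standard or the AI Act’s required format.

```python
# Minimal sketch of an append-only, hash-chained decision log.
# Field names and chaining scheme are illustrative, not a standard.
import hashlib
import json
import time

class DecisionAuditLog:
    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, model_id: str, inputs: dict, output, explanation: str) -> dict:
        """Append one decision; each entry hashes the previous one, so any
        later tampering breaks the chain and is detectable on audit."""
        entry = {
            "timestamp": time.time(),
            "model_id": model_id,
            "inputs": inputs,
            "output": output,
            "explanation": explanation,  # e.g. feature attributions from an XAI tool
            "prev_hash": self._last_hash,
        }
        entry_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True, default=str).encode()
        ).hexdigest()
        entry["hash"] = entry_hash
        self._entries.append(entry)
        self._last_hash = entry_hash
        return entry

    def verify(self) -> bool:
        """Recompute the chain to confirm no entry has been altered."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True, default=str).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In a real deployment the entries would be written to write-once storage rather than held in memory, so the log itself cannot be rewritten after the fact.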

 

 

Tools for oversight: governance and audit models

To make legal responsibility practical, organisations need structured tools for control and checking. The global blueprint is the US NIST AI Risk Management Framework (AI RMF) [5]: a voluntary but influential guide, released in January 2023, that defines four core functions (a code sketch of how a team might record them follows the list): 

  1. Map: Identify the AI’s risks (e.g., classifying it as "high-risk").
  2. Measure: Create metrics to check if the AI is fair, robust, and accurate (Verification and Validation).
  3. Manage: Implement controls, document the system, and define where humans must oversee decisions.
  4. Govern: Set up committees and policies to ensure the RMF steps are followed continuously. 
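
A minimal sketch of how an engineering team might encode the output of these four functions as a machine-readable governance record is shown below; the field names and risk tiers are assumptions for illustration, not a NIST-mandated schema.

```python
# Minimal sketch of a governance record covering the four AI RMF functions.
# Field names and risk tiers are illustrative assumptions, not a NIST schema.
from dataclasses import dataclass, field

@dataclass
class RMFRecord:
    system_name: str
    # Map: identified risks and the assigned tier (e.g. "high-risk").
    risk_tier: str
    identified_risks: list[str]
    # Measure: metrics with target values for fairness, robustness, accuracy.
    metrics: dict[str, float] = field(default_factory=dict)
    # Manage: controls and the points where a human must review a decision.
    controls: list[str] = field(default_factory=list)
    human_oversight_points: list[str] = field(default_factory=list)
    # Govern: who owns the system and when it was last reviewed.
    accountable_owner: str = ""
    last_review: str = ""

credit_model = RMFRecord(
    system_name="credit-decisioning-v2",
    risk_tier="high-risk",
    identified_risks=["bias against protected groups", "data drift"],
    metrics={"demographic_parity_gap": 0.02, "accuracy": 0.91},
    controls=["pre-deployment bias test", "quarterly audit"],
    human_oversight_points=["all declined applications reviewed by an analyst"],
    accountable_owner="model-risk-committee",
    last_review="2025-01-15",
)
```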

Accountability is confirmed through the external audit model. Auditors need to independently check three main areas: the Model (technical accuracy), the Data (quality and bias), and the Process (governance adherence). Because AI systems constantly learn and change, these audits must provide continuous assurance, with automated monitoring to flag major performance or fairness drift, triggering a necessary re-certification. 
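The continuous-assurance idea can be made concrete with a short monitoring sketch: periodically recompute the audited metrics on recent production data and flag the system for re-certification when any metric drifts beyond its agreed tolerance. The metric names and thresholds below are assumptions for illustration, not regulatory values.

```python
# Minimal sketch of continuous assurance: flag re-certification when a
# monitored metric drifts beyond its agreed tolerance.
# Baseline values and tolerances are illustrative assumptions.

# Values recorded at the last external audit.
BASELINE = {"accuracy": 0.91, "demographic_parity_gap": 0.02}
# Maximum allowed absolute drift per metric before re-certification.
TOLERANCE = {"accuracy": 0.03, "demographic_parity_gap": 0.02}

def check_drift(current_metrics: dict[str, float]) -> list[str]:
    """Return the names of metrics that have drifted outside tolerance."""
    breaches = []
    for name, baseline in BASELINE.items():
        drift = abs(current_metrics.get(name, baseline) - baseline)
        if drift > TOLERANCE[name]:
            breaches.append(name)
    return breaches

# Example: fairness has degraded, so the system is flagged for re-audit.
breaches = check_drift({"accuracy": 0.90, "demographic_parity_gap": 0.06})
if breaches:
    print(f"Re-certification required; drifted metrics: {breaches}")
```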

 

 

How regulations affect business planning

Global rules are quickly changing how businesses develop and use AI. Compliance is now necessary for market access. 

 

The EU AI Act: mandatory minimum standards

The EU AI Act sets a global standard by classifying AI systems by risk. "High-Risk" systems (in areas like hiring or critical infrastructure) must meet strict, mandatory requirements, including: 

  • A comprehensive Risk Management System.
  • Strict Data Governance rules for training data.
  • Detailed Technical Documentation and recordkeeping.
  • Formal Human Oversight. 

For businesses, this means investing heavily in redesigning development processes to ensure compliance from the start. Non-EU companies must comply to access the EU market. 

 

The US and UK frameworks: flexibility and principles

The US relies on the voluntary NIST framework described above, while the UK’s approach is similarly light-touch, relying on five cross-sectoral principles: Safety (including security and robustness), Transparency (including explainability), Fairness, Accountability, and Contestability. Existing regulators (in finance, health and so on) are tasked with enforcing these principles, leading to flexible but potentially fragmented guidance.

 

The key takeaway is that global firms face high compliance costs and the risk of regulatory fragmentation. The only way to win is to treat AI governance as an engineering priority today, using a framework like NIST to build an adaptable, auditable foundation.

 

 

Regulations are not barriers

The time for easy, autonomous AI without accountability is over. We need to assign responsibility by designing technical traceability into every AI system. Accountability demands the immediate adoption of Risk Management Frameworks and transparent audit processes.

 

Regulations are not barriers; they are necessary to build the public trust that the AI economy needs to grow. Businesses that proactively embed governance into their AI development will be the market leaders. Engineers must prioritise accountability in design, or face delays under the weight of future regulatory requirements. 

 


 

Dr Paul Dongha leads Responsible AI and AI Strategy at NatWest Group, ensuring that AI innovation drives value while rigorously adhering to regulations and protecting customers

 

