Dr Paul Dongha at NatWest Group argues that the absence of clear accountability models and auditable governance structures introduces systemic risk to organisations using AI

Agentic AI has arrived. Perplexity’s Comet browser demonstrates autonomous, multi-step execution: it takes a single natural-language request from a shopper (e.g. "Find top 3 headphones under $400..."), breaks it down, autonomously navigates multiple retailer sites (Amazon, Best Buy …), extracts and compares dynamic data (such as price, returns and shipping), and then decides on the best option, taking the user directly to that product’s checkout page for purchase authorisation.
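To make the pattern concrete, the sketch below shows roughly how such an agent loop could be structured. It is not Comet’s actual implementation: the retailer list, the Offer fields, the fetch_offers helper and the scoring weights are all illustrative assumptions.

```python
# Hypothetical sketch of an agentic shopping flow; NOT Perplexity Comet's
# actual code. fetch_offers() stands in for real site navigation, and the
# scoring weights are arbitrary.

from dataclasses import dataclass

@dataclass
class Offer:
    retailer: str
    product: str
    price: float          # USD
    free_returns: bool
    shipping_days: int

def fetch_offers(retailer: str, query: str, max_price: float) -> list[Offer]:
    """Stand-in for the browse/extract step (navigate the site, parse
    listings, normalise price, returns and shipping). Dummy data here."""
    return [Offer(retailer, f"{query} (example model)", min(max_price, 349.0), True, 3)]

def score(offer: Offer) -> float:
    """Toy ranking: cheaper, returnable, faster-shipping offers score higher."""
    return -offer.price + (50 if offer.free_returns else 0) - 5 * offer.shipping_days

def run_agent(query: str, max_price: float, retailers: list[str]) -> Offer:
    offers: list[Offer] = []
    for retailer in retailers:                      # one sub-task per site
        offers.extend(fetch_offers(retailer, query, max_price))
    best = max(offers, key=score)                   # compare and decide
    # Hand control back to the human for purchase authorisation.
    print(f"Proposed: {best.product} from {best.retailer} at ${best.price}")
    return best

run_agent("noise-cancelling headphones", 400.0, ["Amazon", "Best Buy"])
```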
The rapid deployment of autonomous AI agents capable of independent decision-making exposes a major gap in current legal and regulatory systems. AI improves efficiency, but giving it too much free rein (or ‘autonomy’) without clear accountability models and auditable governance structures introduces systemic risk. For engineers and technical professionals, governance is now a core engineering challenge, not just a legal one.
Technical solutions to establish accountability must be prioritised ahead of global regulation, which might impose mandatory compliance requirements.
Who is responsible when AI makes a mistake?
The main legal problem is that AI systems break the usual link between a person’s command and a system’s outcome. When complex, adaptive AI makes a decision, we can’t easily trace the error back to a single human fault.
The EU recommends a hybrid liability model that divides responsibility among the people who control the AI’s life cycle, not the AI itself. This model looks at two timeframes: before deployment, when responsibility sits with the providers and developers who design and train the system, and during operation, when it sits with the deployers who decide how the system is used.
The technical solution: One way of assigning responsibility is to provide absolute traceability. Every AI system could include Explainable AI (XAI) capabilities that automatically create an unchangeable record (an audit log) of every decision and the data used. This is hard to achieve in practice, as AI systems are complex and operate probabilistically, so full explainability remains elusive. The EU’s AI Act, for example, makes this record-keeping mandatory for high-risk systems to ensure traceability. Engineers must build this logging feature into the foundation of their AI systems.
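As a minimal sketch of what such an audit log could look like in practice, the example below chains each decision record to the hash of the previous one, so that later tampering is detectable. It assumes each decision can be summarised as a small record of inputs, output, model version and an explanation; the field names, JSON-lines format and file location are illustrative assumptions, not a standard.

```python
# Minimal sketch of a tamper-evident decision audit log. Field names,
# the JSON-lines format and the file location are illustrative assumptions.

import hashlib
import json
import time

LOG_PATH = "decision_audit.jsonl"   # hypothetical log location

def _digest(record: dict) -> str:
    """Deterministic SHA-256 hash of a record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_decision(model_version: str, inputs: dict, output: dict,
                    explanation: str, prev_hash: str) -> str:
    """Write one decision record, chained to the previous entry's hash so
    that any later edit to earlier entries breaks the chain."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,              # the data the decision used
        "output": output,              # what the system decided
        "explanation": explanation,    # XAI summary, e.g. top features
        "prev_hash": prev_hash,        # link to the previous record
    }
    record_hash = _digest(record)
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({**record, "hash": record_hash}) + "\n")
    return record_hash                 # pass into the next append_decision call

def verify_chain(path: str = LOG_PATH) -> bool:
    """Re-hash every record and check the links; False means tampering."""
    prev = "genesis"
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            stored = rec.pop("hash")
            if rec["prev_hash"] != prev or _digest(rec) != stored:
                return False
            prev = stored
    return True

# Example: log one made-up credit decision, then verify the chain.
h = append_decision("credit-model-1.3", {"income": 52000}, {"approve": False},
                    "top features: income, credit utilisation", prev_hash="genesis")
print(verify_chain())   # True while the file is untouched
```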
Tools for oversight: governance and audit models
To make legal responsibility practical, organisations need structured tools for control and checking. The US National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) is the global blueprint for this [5]. Released in January 2023, the NIST AI RMF is a voluntary but influential guide built around four core functions: Govern, Map, Measure and Manage.
Accountability is confirmed through the external audit model. Auditors need to independently check three main areas: the Model (technical accuracy), the Data (quality and bias), and the Process (governance adherence). Because AI systems constantly learn and change, these audits must provide continuous assurance, with automated monitoring to flag major performance or fairness drift, triggering a necessary re-certification.
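As a minimal sketch of that continuous-assurance idea, the check below compares live accuracy and a simple group fairness gap against the figures signed off at the last audit, flagging the model for re-certification when drift exceeds tolerance. The metric choices, thresholds and monitoring numbers are illustrative assumptions.

```python
# Illustrative continuous-assurance drift check. The baseline figures,
# thresholds and monitoring data below are made up for the sketch.

from dataclasses import dataclass

@dataclass
class Baseline:
    accuracy: float        # accuracy signed off at the last audit
    fairness_gap: float    # e.g. approval-rate gap between two groups

def positive_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def fairness_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

def needs_recertification(live_accuracy: float, live_gap: float, base: Baseline,
                          max_accuracy_drop: float = 0.03,
                          max_gap_rise: float = 0.05) -> bool:
    """Flag the model when performance or fairness drift exceeds tolerance."""
    return (base.accuracy - live_accuracy > max_accuracy_drop
            or live_gap - base.fairness_gap > max_gap_rise)

# Example run on made-up monitoring figures.
base = Baseline(accuracy=0.91, fairness_gap=0.02)
live_gap = fairness_gap([1, 0, 1, 1], [0, 0, 1, 0])   # 0.75 vs 0.25 -> 0.50
if needs_recertification(live_accuracy=0.88, live_gap=live_gap, base=base):
    print("Drift detected: flag model for re-audit and re-certification")
```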
How regulations affect business planning
Global rules are quickly changing how businesses develop and use AI. Compliance is now necessary for market access.
The EU AI Act: mandatory minimum standards
The EU AI Act sets a global standard by classifying AI systems by risk. "High-Risk" systems (in areas like hiring or critical infrastructure) must meet strict, mandatory requirements, including a risk-management system, data governance, technical documentation, record-keeping (logging), transparency towards users, human oversight, and appropriate accuracy, robustness and cybersecurity.
For businesses, this means investing heavily in redesigning development processes to ensure compliance from the start. Non-EU companies must comply to access the EU market.
The US and UK frameworks: flexibility and principles
The US relies largely on the voluntary NIST AI RMF described above rather than binding, AI-specific legislation. The UK’s approach is similarly light, relying on five cross-sectoral principles: Safety (including security and robustness), Transparency (including explainability), Fairness, Accountability, and Contestability. Existing regulators (in finance, health, etc.) are tasked with enforcing these principles, leading to flexible but potentially fragmented guidance.
The key takeaway is that global firms face high compliance costs and the risk of regulatory fragmentation. The only way to win is to treat AI governance as an engineering priority today, using a framework like NIST to build an adaptable, auditable foundation.
Regulations are not barriers
The time for easy, autonomous AI without accountability is over. We need to assign responsibility by designing technical traceability into every AI system. Accountability demands the immediate adoption of Risk Management Frameworks and transparent audit processes.
Regulations are not barriers; they are necessary to build the public trust that the AI economy needs to grow. Businesses that proactively embed governance into their AI development will be the market leaders. Engineers must prioritise accountability in design, or face delays under the weight of future regulatory requirements.
Dr Paul Dongha leads Responsible AI and AI Strategy at NatWest Group, ensuring that AI innovation drives value while rigorously adhering to regulations and protecting customers
