
When AI gets it wrong, who takes the blame?

Dr Megha Kumar at CyXcel outlines some of the intricacies of AI-related liabilities

Artificial intelligence (AI) is no longer a future concept; it is an operational reality reshaping every major industry. From manufacturing and financial services to healthcare and logistics, AI technologies are driving productivity gains, cost reductions and entirely new business models. The technology’s potential to enhance efficiency, democratise access to information and enable sustainable practices is unmatched.

 

Yet alongside these benefits, AI also introduces new categories of business risk: disruption in labour markets, exposure to algorithmic bias, data privacy breaches, intellectual property infringement and legal uncertainty.

 

In an era increasingly defined by artificial intelligence and with more and more cases of AI-related harm coming to the fore, a pressing question emerges: who bears the liability when an AI delivers a biased, discriminatory or otherwise harmful outcome or output?

Understanding AI liability

To manage AI risk effectively, businesses must understand where liability originates.

 

Since the November 2022 launch of OpenAI’s ChatGPT, generative AI technology and its use cases have evolved rapidly. Generative AI models are systems trained on large datasets of natural-language, image, video or audio content, often alongside proprietary data, that generate new outputs such as text, images, sound, software code and video. These systems are widely used for tasks such as analysing commercial data, generating marketing or sales content, and creating customer service chatbots.

 

The latest innovation, agentic AI, consists of systems trained on vast public and/or proprietary datasets that leverage their ‘reasoning’, iterative planning and learning algorithms to perform multi-step tasks autonomously or with limited human supervision. Whereas generative AI tools can generate content and research materials and make recommendations for decisions, agentic AI tools can also be programmed to execute complex tasks such as optimising supply chains or finding and booking travel tickets.

 

However, an AI model, whether generative or agentic, is only as good as the data it is trained on and the quality of its algorithms. If the dataset is compromised, then the ‘decisions’ drawn or executed by the AI tool will necessarily be of limited utility.

 

If the training data is biased or incomplete, the decisions made by the AI tool can deliver problematic outcomes. Conversely, over-inclusion of data without appropriate filtering and curation of the datasets fed into AI models carries the risk of copyright breaches, leakage of personal or sensitive information and biased decision-making.
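Purely by way of illustration (the decision records, group labels and the 0.8 rule of thumb below are hypothetical assumptions, not details of any tool discussed here), a deploying organisation might run a simple disparity check on an AI tool’s decisions, comparing approval rates across demographic groups:

```python
# Hypothetical illustration only: a simple disparate-impact check on an AI
# tool's decisions. The records, group labels and the 0.8 rule of thumb are
# assumptions for this sketch, not details of any specific tool.
from collections import defaultdict

def approval_rates(decisions):
    """Approval rate per demographic group.

    `decisions` is a list of (group, approved) pairs, where `approved`
    is True if the AI tool accepted the application.
    """
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's rate.

    Ratios well below 1.0 (a common rule of thumb is 0.8) suggest the
    tool's outcomes warrant a closer bias review.
    """
    rates = approval_rates(decisions)
    reference_rate = rates[reference_group]
    return {g: rate / reference_rate for g, rate in rates.items()}

# Toy decision records: (demographic group, application approved?)
records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
print(disparate_impact(records, reference_group="A"))
# {'A': 1.0, 'B': 0.5} -> group B is approved at half the rate of group A
```

A check like this does not prove or disprove bias on its own, but it gives operators an early warning that a tool’s outcomes deserve closer scrutiny.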

 

One major issue is the lack of transparency in AI decision-making processes, often referred to as the "black box" problem. Many vendors of the most sophisticated AI tools do not fully understand the exact chain of reasoning an AI system uses to parse the dataset and reach its final conclusion; in many cases, they only understand how the method is supposed to work in principle.
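One partial remedy, sketched below purely as an illustration (the scoring model, feature names and data are hypothetical assumptions), is to probe an opaque model from the outside: shuffle one input at a time and measure how much the scores move, revealing which factors actually drive the decisions.

```python
# Hypothetical sketch: probing an opaque scoring model from the outside by
# permuting one input at a time (a permutation-importance style check).
# The "black box" model, the features and the data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy applicant features: [income, credit_history_length, postcode_index]
X = rng.normal(size=(500, 3))

def black_box_score(features):
    # Stand-in for a vendor's opaque model; in practice this would be an API call.
    return 2.0 * features[:, 0] + 0.5 * features[:, 1]

baseline = black_box_score(X)

def permutation_sensitivity(X, n_repeats=10):
    """Average change in scores when each feature column is shuffled.

    Features that barely move the score when shuffled have little influence;
    features that move it a lot are driving the decisions.
    """
    sensitivities = []
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            deltas.append(np.mean(np.abs(black_box_score(X_perm) - baseline)))
        sensitivities.append(round(float(np.mean(deltas)), 3))
    return sensitivities

print(permutation_sensitivity(X))  # e.g. [~2.3, ~0.6, 0.0]: income dominates, postcode is inert
```

Techniques of this kind do not open the black box itself, but they give deploying organisations at least some visibility into what a system is actually responding to.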

 

Given this limited understanding of the AI decision-making process, some AI tools will generate problematic outcomes without the vendor or the party affected by the decision being aware of it. This raises the question of liability.

 

For example, as the Guardian newspaper reported in December 2024, an AI-powered tenant-screening tool called SafeRent, used by a US letting company, gave a score to an ethnic minority woman, and on the basis of that score her tenancy application was denied. The 11-page scoring report she received did not clarify why or how the AI tool reached that score, and she was given no redress options. Some 400 other Black and Hispanic applicants had had a similar experience.

 

This is only one example. There are numerous other cases where people, especially those from marginalised sections of society, have been negatively affected by algorithmic bias: women whose CVs are processed by a recruitment AI, non-white people screened by AI-powered biometric surveillance tools, or offenders whose risk of reoffending is evaluated by an AI model.

Liability lies with the operator

Under current law, AI systems do not possess legal personality. This means liability for AI-related actions falls on the human or legal entity operating the system. As with any other business tool, if an AI application causes harm, whether through inaccurate decision-making, IP infringement or cyber-security vulnerabilities, the organisation using or deploying the system is generally responsible.

 

This approach mirrors established legal principles. For example, a delivery company is responsible for an accident involving one of its vehicles, unless the fault lies in the manufacturer’s design. In the AI context, vendors and operators share responsibility depending on the nature and origin of the harm. For businesses, this makes AI governance, not just adoption, a strategic imperative.

AI governance is imperative

For executives, investors and compliance officers, the message is clear: AI governance is not optional. To harness AI responsibly and sustainably, organisations should prioritise: 

  1. Data governance: Implement rigorous data sourcing, curation and auditing processes to mitigate bias and ensure legal compliance.
  2. Transparency and explainability: Invest in tools and frameworks that allow internal and external stakeholders to understand how AI systems make decisions.
  3. Vendor accountability: Include clear liability and indemnity clauses in contracts with AI vendors, particularly regarding data usage and decision outcomes.
  4. Regulatory monitoring: Stay ahead of evolving global AI regulations and align internal practices with emerging standards.
  5. Ethical oversight: Establish cross-functional AI ethics committees to review system impacts, particularly in customer-facing and HR contexts.

 

Looking ahead

AI offers transformative potential for growth and innovation, but without proper oversight, it also presents several risks. The businesses that will thrive in the AI era are those that not only innovate but also implement strong governance, compliance and transparency frameworks.

 

The future won’t reward those who rush to adopt AI; it will reward those who do so responsibly, ethically and with accountability built into every aspect of its use.

Dr Megha Kumar is Chief Product Officer and Head of Geopolitical Risk at CyXcel

 

Main image courtesy of iStockPhoto.com and WANAN YOSSINGKUM
