Building explainable AI

Vimal Raj Sampathkumar at ManageEngine explains how explainability builds confidence in AI

For CTOs and CIOs introducing AI tools into the workplace, the major question is no longer "What can AI do?" but "How can we trust it?"

The true promise of AI can only be fully realised if teams trust its outputs. Commentators often describe AI as more akin to an employee than a software tool, and they’re right. If you don’t trust your colleagues, you’re not going to get the best out of them. Conversely, those you do trust quickly become crucial to the business.

No trust, no joy

AI models are no different. When companies can track their AI processes, fully explain the methods employed to arrive at outputs, and consistently see accurate results, trust grows. In turn, this allows companies to deploy AI into critical workflows with confidence.

However, the reality is often less rosy. In October, Deloitte issued a refund to the Australian government after its AI model fabricated references in a commissioned report. The project was worth hundreds of thousands of pounds. With stories like this making headlines, it’s unsurprising that 38% of UK adults lack trust in AI technology. That wariness manifests itself in the workplace just as much as in personal use.

The reality is that most organisations simply haven’t reached the point where AI is viewed as a trusted ally. Instead, it’s often seen as an unreliable tool: accurate at times, but inconsistent. Would you accept a ride from a self-driving taxi using an inconsistent, unreliable AI model?

A lack of traceability can undermine confidence in automation just as much as inaccurate outputs. Even if the AI delivers helpful predictions or highly accurate syntheses of vast datasets, the inability to explain how it arrived at those outputs remains a serious concern. If the system makes a mistake and no one can explain the error, the consequences can include financial and reputational damage.

Explainable AI

As a result, IT leaders are increasingly demanding greater transparency from their AI processes, not just for compliance but to ensure operational reliability. Explainable GenAI within IT workflows can reduce resistance to adoption by helping technicians understand how automation decisions are made. At the same time, it strengthens accountability and service reliability by giving companies deeper assurance that the model’s output can be relied upon.

In short, explainable AI paves the way to trust, which paves the way to business success.

So what exactly is explainable AI? Also known as white-box AI, it offers a greater level of transparency by providing clear explanations behind its decisions, rather than leaving its processes and logic hidden from view. This approach helps users understand how each system works, enabling them to assess AI models’ vulnerabilities, verify their output, and safeguard against unintentional biases.

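To make the white-box idea concrete, the short Python sketch below uses an interpretable decision-tree classifier from scikit-learn. The loan-approval scenario, feature names and data are purely illustrative assumptions, not drawn from any particular product; the point is simply that the model’s decision logic can be printed and inspected rather than taken on faith.

    # Illustrative only: a small "white-box" model whose decision logic
    # can be printed and audited. Feature names and data are made up.
    from sklearn.tree import DecisionTreeClassifier, export_text

    features = ["age", "income", "missed_payments"]
    X = [[35, 52000, 2], [22, 18000, 5], [48, 91000, 0], [30, 24000, 4]]
    y = [1, 0, 1, 0]  # 1 = approve, 0 = decline

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The full decision path is visible, so a reviewer can check it for
    # errors or unintended bias before trusting the output.
    print(export_text(model, feature_names=features))
    print(dict(zip(features, model.feature_importances_.round(2))))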

Internal and external trust

Explainable AI doesn’t just improve internal buy-in or help guarantee service levels for customers. It’s also becoming increasingly important as the regulatory framework for AI-enabled technology takes shape. Even without considering newer, AI-specific regulations, explainable AI makes adhering to regulatory standards and policy requirements significantly easier.

For example, data regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) grant individuals the right to request explanations of automated decisions that affect them. If critical workflows depend on opaque AI systems, this becomes a major obstacle. How can the explanation for why personal data was used, stored, or shared in a particular way be communicated if the AI is a closed book? By contrast, explainable AI can readily provide the information required to satisfy both the regulator and the complainant.

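One way to keep such explanations available on demand is to record the inputs and rationale alongside every automated decision. The Python sketch below assumes a simple append-only log; the field names (subject_id, model_version and so on) are hypothetical, not taken from any regulation or product.

    # Hypothetical sketch: store the rationale with each automated decision
    # so it can be retrieved if a data subject asks for an explanation.
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        subject_id: str      # pseudonymous identifier, not raw personal data
        decision: str
        model_version: str
        inputs: dict         # the data the model actually used
        explanation: dict    # e.g. per-feature contributions
        timestamp: str

    def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
        # Append one JSON line per decision; a production system would use
        # a database with access controls and retention rules.
        with open(path, "a") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    log_decision(DecisionRecord(
        subject_id="user-123",
        decision="declined",
        model_version="credit-risk-2.1",
        inputs={"income": 24000, "missed_payments": 4},
        explanation={"missed_payments": -0.6, "income": -0.2},
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))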

As companies implement AI models in day-to-day workflows, a clearer understanding of the internal processes can help ensure these systems are running reliably, ethically, and sustainably.

Investing in a trustworthy future

Building and deploying explainable AI can require a steeper investment of time and resources than implementing black-box solutions. It also requires skilled staff to interpret and act on the insights provided. Nevertheless, that investment more than pays for itself in reduced time to value, smoother collaboration with staff, stronger compliance, and better-served customers. As AI becomes embedded in workflows across industries and countries, explainability has to be a key priority for sustainable and ethical development.

Vimal Raj Sampathkumar is technical head at ManageEngine

Main image courtesy of iStockPhoto.com and Ankabala
