
Debate Agrees that Balancing Risk and Governance with Tech Skills is Essential for AI to Succeed – And Outside Support will be Vital

Sponsored by SoftServe & Google

Successful adoption of Artificial Intelligence (AI) depends as much on risk management, governance, and culture as it does on technological capability, attendees at a dinner hosted by SoftServe and Google Cloud agreed. They also stressed that without deliberate attention to these areas, and the right external support, even the most promising AI projects risk falling short of their potential.


As AI starts to move from hype and proofs of concept to real use cases, businesses are finding that this has wider ramifications across the enterprise. The diners were told that if AI is to solve real-world problems and deliver the deeper, more transformative changes that businesses expect, it needs to be accompanied by solid foundations.

 

At a Business Reporter dinner at the House of Lords, senior IT and business executives acknowledged the new challenges of embedding AI into decision-making, data governance, and organisational culture. But they also voiced more nuanced frustrations that firms face when working with AI and the imperfect data sets they own.

 

Good decisions

 

Paul Fryer, Enterprise Solutions Principal at SoftServe, said that although AI will never make every decision correctly, this should not be a reason for businesses to hesitate. Instead, he said, “You should ask: if everyone wants to be a data-informed business, how do you make sure you make good decisions?”

 

Fryer compared AI outputs to legal opinions: expert but fallible. Just as organisations accept that a lawyer’s advice may later be challenged, so might they learn to live with AI systems that offer high-quality guidance without guarantees.

 

Some suggested this will come down to weighing potential risks and harms. Firms should work to understand the potential impact of errors, they added, and build a degree of risk acceptance, or tolerance, into their AI governance frameworks.

 

Messy realities

 

The room agreed that no AI system can succeed without reliable data, but that reaching a shared understanding of “reliable” was not straightforward. Several said data governance often falters when teams cannot agree on a correct source of truth. Most accepted, however, that where alignment exists, or can be found, technical challenges become more manageable.

 

One diner pointed out that Generative AI can work around some of these data gaps. But, as another noted, while Gen AI can infer incomplete addresses or detect missing fields, a single wrongly tagged file can equally cascade errors through an entire system. To mitigate those risks, therefore, robust governance around input data will remain important for the successful delivery of critical services.

 

One of the toughest cultural challenges identified was asking users to abandon the idea of a single, perfect version of the truth. Instead, many felt that organisations must develop more nuanced, task-specific data strategies that reflect the messy realities of the data they hold.

 

Automation at scale

 

While the potential for automation is considerable, most businesses have yet to realise it at scale. Mark Westhenry, Analytics and AI Lead for Telco at Google Cloud, emphasised that using the cloud is not simply a cost play. Rather, he said, it represents a leap in automation and visibility. “New technology provides a better picture of the data you’re working with, which can unlock automation at scale,” he added.

 

Yet challenges persist. Attendees highlighted the difficulty of distinguishing whether a failure is rooted in the data or in the model itself. Large Language Models can complicate this further by producing non-deterministic results: answers that change with each interaction.

 

Regulatory environments add further layers of complexity, particularly in sectors such as financial services, where explainability is vital. AI outputs must be defensible, one said, as any lapse, such as bias in hiring processes, can escalate into a crisis.

 

Participants stressed that all of this means AI adoption cannot happen in isolation. Most agreed that outside support is needed, and that vendors and partners must step up, not only by providing robust systems but by helping clients understand and manage AI risks at scale.

 

Exponential change

 

As discussion turned to the future, a note of urgency emerged. Businesses accustomed to gradual, incremental improvement must pivot towards delivering exponential change if they are to compete. “The board and C-suite don’t want marginal gains; they want 5x improvements,” Westhenry said.

 

Workforce dynamics were another issue raised. Employees who once preserved institutional knowledge are increasingly scarce, leading companies to seek ways to embed that knowledge into processes. Many felt that AI and automation, if governed wisely, could help mitigate this erosion.

 

But, the room was told, organisations must also deal honestly with the human cost of AI-driven change. Jobs will be displaced. Businesses must decide who will prepare workers for this transition: employers, governments and regulators, or workers themselves. Open communication about the realities of AI will be important if it is to succeed, one added.

 

Be bold

 

As the evening concluded, Fryer noted that many financial services companies, despite heavy regulation, are pushing ahead with AI deployments, proving that progress is possible even under tight constraints. If they can succeed, so can others, added Westhenry, urging businesses to think bigger. With an estimated 60 percent of AI projects likely to fizzle out, he said, the ones that succeed must deliver significant, transformative impact.

 

The attendees were therefore left to consider whether the path to effective AI lies not in chasing marginal gains, but in embracing bold change – albeit underpinned by sound governance and a clear-eyed view of risk. Either way, a great deal of trial and error still lies ahead on the enterprise AI journey, and plenty of support from expert partners will be required to keep businesses moving in the right direction.


To learn more, please visit: SoftServe & Google Cloud.
