
At a private dinner at the House of Lords hosted by Dynatrace, senior leaders from automotive, banking, energy, technology, marketing and professional services gathered to examine one of the most urgent questions facing modern enterprises: how to safely deploy, scale and govern AI in a way that genuinely transforms the business.
Dynatrace’s Nina Harris framed the discussion with a clear message: AI workloads are accelerating into production far faster than most organisations are prepared for, bringing “way more risk, way more complexity” and a steep rise in expectations for rigour, accuracy and measurable outcomes. Gartner’s prediction that nearly 40 per cent of AI initiatives may fail within three years – and Deloitte’s finding that 63 per cent of leaders cite accuracy concerns – hung over the room as a reminder that enthusiasm alone will not be enough.
The pressure to deliver – and the reality of AI’s limits
The conversation opened with a simple question: “Who has an AI project that is fully delivering on its promises?” Only one participant cautiously raised half a hand.
Across the table, leaders described the tension between top-down ambition and ground-level complexity. Projects are progressing, often with striking early gains, but very few have reached industrialised, trusted, scalable maturity. One attendee from a major automotive brand illustrated this vividly: a proof of concept that used generative AI to write software test cases, saving “40 to 50 per cent of time” compared with manual work, yet which was immediately bottlenecked by the human-in-the-loop requirement. The very teams that needed relief were too overloaded to review and validate the AI’s outputs.
Others echoed this friction. A banking executive described deploying an AI complaints-handling bot that already outperforms some human agents in accuracy, clarity and tone. But despite improved service, headcount has not fallen: rather, resources have simply been redistributed. “It’s too early to make those savings,” they said. “We’re still learning what agents are really good for.”
Several attendees warned of a recurring challenge: magical thinking among staff. Internal hackathons produced ambitious ideas detached from organisational reality. “People think AI will magically solve problems we’ve never been able to solve as humans,” one participant noted.
A shifting workforce and the coming skills gap
A recurring theme was the changing shape of the workforce. Some firms reported that graduate recruitment had dropped sharply – not explicitly because of AI, but because the nature of work is shifting away from junior manual roles.
“We don’t need graduate engineers in the way we used to,” said one banking leader. “We need people who are good prompt engineers, not people writing boilerplate code.”
Yet this is creating an unexpected dilemma: if fewer juniors enter the profession now, who becomes the senior, AI-literate workforce of the future?
Several participants worried openly about a looming capability gap. Older practitioners bring business context and institutional memory, but younger staff are “native” in AI thinking. In some technical areas – such as SEO – graduates wielding AI tools can outperform experts with 20 years’ experience. “It’s scary,” admitted one leader, “how quickly they bulldoze through problems.”
The consensus was that future teams will be hybrid: small groups of highly experienced strategists supported by a “network of agents” and AI-fluent, early-career talent. But that hybrid model has profound implications for training, culture and organisational design.
Process redesign, not ‘lift and shift’
Many attendees stressed that the biggest gains will not come from automating existing workflows but from redesigning processes entirely.
“When older staff look at AI, they ask: how can I embed AI in my current process?” one participant observed. “Younger people ask: why are you doing that process at all?”
A contributor noted that, as with the shift from on-premises infrastructure to cloud, there is no such thing as “lift and shift” for AI. AI changes the financial model, the speed of iteration and the way services are delivered. That requires rethinking data architecture, code deployment and operational pipelines.
Several organisations admitted they had burned months delivering clever individual use cases, only to discard them when better models appeared a few months later. They are now pivoting, focusing instead on fixing data foundations, deployment pipelines and observability, so that new AI components can be introduced – and retired – safely and quickly.
Confidence, error tolerance and the human factor
A striking theme throughout the evening was trust, or rather, the lack of it.
Participants described internal discomfort not only with wrong answers, but with uncertain answers.
One asset management leader noted that even when AI is demonstrably more accurate than human judgement, the appetite for error is “far lower” for machines. A human mistake is forgivable; an AI mistake is unacceptable, even if AI mistakes are statistically rarer.
AI’s inconsistency also undermines confidence. One model-risk specialist described using AI to screen CVs: one run produced perfect recommendations, the next produced “complete rubbish”. Without robust observability into multi-step reasoning chains, he said, it is nearly impossible to know why a model performed well one day and badly the next.
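To make the point concrete, the kind of step-level tracing the specialist was describing can be sketched in a few lines of Python. Everything below is illustrative – the call_model function is a hypothetical stand-in for any LLM client, not a specific vendor’s API – but it shows the principle: record every step’s prompt, parameters and output, so a good run and a bad run can be compared afterwards.

```python
import json
import time
import uuid


def call_model(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical placeholder for a real LLM call."""
    return f"response to: {prompt[:40]}"


class ChainTrace:
    """Records every step of a multi-step AI chain so that two runs
    of the same task can be diffed when one succeeds and one fails."""

    def __init__(self, run_name: str):
        self.run_id = str(uuid.uuid4())
        self.run_name = run_name
        self.steps = []

    def step(self, name: str, prompt: str, **params) -> str:
        start = time.perf_counter()
        output = call_model(prompt, **params)
        # Capture enough context to explain the run after the fact.
        self.steps.append({
            "step": name,
            "prompt": prompt,
            "params": params,
            "output": output,
            "latency_s": round(time.perf_counter() - start, 3),
        })
        return output

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump({"run_id": self.run_id, "name": self.run_name,
                       "steps": self.steps}, f, indent=2)


# Usage: trace each stage of a CV-screening chain, then compare run files.
trace = ChainTrace("cv-screening")
summary = trace.step("summarise", "Summarise this CV: ...", temperature=0.0)
verdict = trace.step("score", f"Score this summary: {summary}", temperature=0.0)
trace.save("run.json")
```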
Others shared concerns that AI can be manipulated or tricked – for example, generating code that calls out to non-existent libraries, which criminals can exploit by creating malicious lookalike packages. As one delegate put it: “The AI is trying to be helpful, like an overeager dog, and that introduces massive holes.”
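The lookalike-package risk, at least, lends itself to a simple guardrail. The sketch below is illustrative only – the internal allowlist is hypothetical, and a production version would sit inside a curated artifact repository – but it shows the idea: before installing anything an AI assistant suggests, confirm the name is both approved internally and actually registered on the public index.

```python
import urllib.request
from urllib.error import HTTPError, URLError

# Hypothetical allowlist; in practice this would come from a curated
# internal artifact repository, not a hard-coded set.
INTERNAL_ALLOWLIST = {"requests", "numpy", "pandas"}


def exists_on_pypi(package: str) -> bool:
    """Check the public PyPI index (requires network access).
    An unknown name is exactly what an attacker can later claim
    as a malicious lookalike package."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (HTTPError, URLError):
        return False


def vet(package: str) -> str:
    if package not in INTERNAL_ALLOWLIST:
        return "BLOCK: not on the internal allowlist"
    if not exists_on_pypi(package):
        return "BLOCK: name not registered on PyPI"
    return "OK"


for pkg in ["requests", "pandas-helpers-pro"]:  # second name is made up
    print(pkg, "->", vet(pkg))
```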
Bias, ethics and the difficulty of defining ‘good’
Governance dominated the latter part of the discussion. Leaders agreed that bias is an unavoidable byproduct of human-generated data, but determining what “unbiased” even means is fraught.
“We don’t necessarily have a definition of what ‘good’ looks like,” one attendee argued. Another cited military research showing that AI behaved more ethically than experienced officers – not because AI was inherently moral, but because its creators had clearly articulated the rules of war on which it was trained.
The challenge, many said, is replicating such clarity in commercial contexts. What does a fair loan decision look like? A fair marketing model? A fair recruitment filter? Without explicit definitions, governance becomes guesswork.
Participants proposed practical approaches: verification datasets, scenario testing, systematically defined bias dimensions and formalised rulesets. But there was broad acknowledgement that governance frameworks remain immature.
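By way of illustration, a verification dataset can be as simple as paired test cases that differ only in an attribute the model should ignore, re-run on every model update. The sketch below is a minimal, hypothetical example – score_application stands in for whatever model is under test – rather than a recommended framework.

```python
def score_application(text: str) -> str:
    """Hypothetical placeholder for the model under test."""
    return "approve" if "stable income" in text else "decline"


VERIFICATION_PAIRS = [
    # Each pair differs only in an attribute the decision should ignore,
    # so both variants must always receive the same outcome.
    ("Applicant, 34, stable income, renting in Leeds",
     "Applicant, 58, stable income, renting in Leeds"),
    ("Ms A. Khan, stable income, two dependants",
     "Mr J. Smith, stable income, two dependants"),
]


def run_verification() -> list:
    failures = []
    for a, b in VERIFICATION_PAIRS:
        if score_application(a) != score_application(b):
            failures.append((a, b))
    return failures


failures = run_verification()
print(f"{len(failures)} bias check(s) failed out of {len(VERIFICATION_PAIRS)}")
```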
Early days – but a direction is emerging
Despite the challenges, the tone remained pragmatic rather than pessimistic.
Most leaders agreed that AI is in its early days, akin to where cloud adoption was a decade ago. Mistakes are to be expected. Iteration cycles are fast. The priority is not eliminating all risk but ensuring it is visible, manageable and explainable.
As the dinner ended, a consensus took shape.
Drawing the evening together, Harris noted that organisations face opportunity and risk in equal measure. AI brings extraordinary capability, but also uncertainty, inconsistency and cultural tension. The challenge now is to build confidence, clarity and governance structures robust enough to let innovation scale safely.
Many of the challenges discussed during the evening can be addressed by adopting AI observability practices. Dynatrace offers advanced observability and automation capabilities that help organisations build strong data foundations, establish reliable deployment pipelines and implement effective monitoring frameworks – essential steps for scaling AI initiatives with confidence.
These tools and practices enable organisations to move beyond proof-of-concept projects and achieve operational maturity in their AI efforts. By providing real-time insights into performance and helping to identify and address risks, Dynatrace allows teams to focus on innovation while maintaining trust and consistency. And by tackling foundational challenges and promoting transparency, it helps organisations unlock the full potential of AI tools and initiatives in a way that is both sustainable and impactful.
To learn more, please visit www.dynatrace.com
