
AI, and particularly Generative AI (GenAI), is beginning to reshape how financial institutions analyse risk, manage data and serve customers. At a recent Business Reporter dinner at the House of Lords, senior executives discussed how this technology is influencing the future of financial services, and what obstacles still stand in the way.
Opening the discussion, Clemens Thym, Head of Credit Solutions at S&P Global Market Intelligence, explained how S&P Global is leveraging AI across a range of use cases, from content extraction to mapping relationships between entities more efficiently. The company is even launching a GPT-style tool to support credit research, underscoring how deeply the technology could transform data-driven decision-making.
For participants, the transformation is also under way. One stated that they are now more likely to hire prompt engineers than traditional software developers, a sign that AI is redefining technical skillsets. Another shared that automation and outsourcing have replaced many junior roles in their London office, illustrating the impact on recruitment and workforce structures.
Yet the shift is not purely technological. Some European firms voiced unease about relying on too few providers for critical AI infrastructure, particularly amid concerns over political volatility and the risk of future trade restrictions.
Early adoption and practical use
When it comes to rolling out AI, attendees agreed that people remain the most important factor. Employees must understand how to use the tools and why they are being introduced. Many organisations start with a department such as marketing and communications, where AI can deliver quick productivity gains.
Others view AI as a bottom-up technology, comparable to the early days of the internet: give it to teams, see how they use it, and build formal processes around the most effective use cases. Popular early experiments include agentic AI for KYC searches and low-risk periodic reviews, always with human oversight.
Participants also highlighted AI’s growing role in surveillance of communications and transactions, helping detect unusual patterns at scale and across languages, as well as in research and analysis, where summarisation tools save analysts hours of preparatory work.
Getting the basics right – good data, strong governance and robust security – remains non-negotiable. Some attendees said they would like to have internal sandboxes to allow safe experimentation and knowledge-sharing across teams.
Challenges and ethical boundaries
For all its potential, the group acknowledged that results can be mixed. “A lot is promised but not always delivered,” one executive said, noting that some projects had been repeatedly delayed. Measuring success is difficult when algorithms are opaque: some attendees talked of the need for parallel processes, duplicating effort and cost.
Most firms have established ethical boundaries, defining what they will, and will not, allow AI to do. The pace of corporate decision-making was another source of frustration: it can take six months to approve projects, a lifetime when the technology itself evolves monthly.
Some participants expressed concern that we are at the peak of the AI hype cycle, predicting a cost correction, at least. As one noted, the true price of computation has yet to be felt; when it is, the economics of many projects may change sharply. Hallucinations – the possibility of AI tools inventing information – were also mentioned as a risk. Others raised governance concerns about non-deterministic models producing different answers to the same query. One said: “They are unpredictable, so we may need to manage them the same way we manage people.”
Additional risks included a flood of low-quality content – “AI slop” – and persistent problems with siloed data and inadequate infrastructure. Data-security questions remain for some, too. Can information be extracted from a model without proper authorisation?
Humans in the loop
Not all agreed on how far to trust AI decision-making. One participant insisted that AI should never decide whether a customer receives a mortgage. Another argued that it could, and should, handle straightforward applications, escalating only complex ones for human review. However, the consensus leaned towards human-in-the-loop systems to balance speed, fairness and accountability.
A second point of contention concerned the rise of AI-native challengers. Some executives worried that while established institutions debate regulation and ethics, start-ups will learn faster through experimentation and create an advantage through increased market share. Others were less concerned: heavy regulation will constrain those firms, and large banks have the scale to be fast followers once standards stabilise.
The regulatory landscape was seen as both safeguard and barrier. EU rules are highly prescriptive, slowing innovation and creating uncertainty. Several participants criticised the sector’s culture of over-caution. “We’ve convinced ourselves we can’t move until the regulator says it’s OK,” said one attendee.
Furthermore, regulators expect financial institutions to be both cautious and innovative, which can feel contradictory. As one guest said, “They want us to protect customers and push boundaries at the same time.”
In closing, Clemens Thym noted that the discussion frequently returned to the principle of human oversight. Despite some differences, participants shared a broadly European perspective, valuing privacy, ethical guardrails and measured progress.
There is a clear appetite to move quickly, Thym concluded, but an equally strong instinct to advance carefully rather than lead recklessly. As the hype around AI begins to settle, that balance between ambition and prudence may prove the sector’s most valuable risk-management tool.
To learn more, please visit: www.spglobal.com

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543