
FinTechTalk: Automating compliance – driving innovation while playing by the rules 

On 27 January 2025, FinTechTalk host Charles Orton-Jones was joined by Alex Tsepaev, Chief Strategy Officer, B2PRIME Group; Denitsa Rebaine, Sanctions Compliance SME, US Financial Institution; and Ty Francis MBE, Chief Advisory Officer, LRN. 


Views on news 

More than 75% of City firms now use AI, with insurers and international banks among the biggest adopters. The technology is being used to automate administrative tasks and even to support core operations, including processing insurance claims and assessing customers' creditworthiness.

But the UK has so far failed to develop any laws or regulations specifically governing firms' use of AI, with the FCA and the Bank of England maintaining that general rules are sufficient to ensure positive outcomes for consumers. That leaves businesses to work out for themselves how existing guidelines apply to AI, and MPs on the Treasury Committee worry this could put consumers and financial stability at risk.

The committee is calling for AI-focussed stress testing and FCA guidance on how existing consumer protection rules apply to AI, as well as more clarity on accountability for AI-driven decisions: does responsibility sit with data providers, model developers, cloud providers or the regulated firms themselves?

Leveraging AI in compliance 

For regulators, keeping up with an industry that moves as fast as AI is a challenge in itself. Current AI systems still need a human not only in the loop but also before the loop, to train and tune the models. The best approach is to treat AI as a cultural and behavioural issue rather than a cybersecurity one.

There are some compelling use cases for AI in compliance, thanks to its ability to monitor thousands of different reports and queries at a time. Compliance teams can gather data from training, culture surveys, code-of-conduct cases and complaints to build models that not only benchmark behaviour but also predict future trends.
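To make that concrete, here is a minimal Python sketch of the kind of benchmarking and trend-spotting described above. The data, unit names and thresholds are invented for illustration; a real model would draw on far richer signals.

```python
# Hypothetical sketch: benchmarking conduct signals per business unit and
# flagging upward trends. Data and thresholds are illustrative only.
from statistics import mean

# Monthly counts of code-of-conduct cases and complaints, per unit
signals = {
    "retail":    [3, 4, 6, 9],
    "markets":   [5, 5, 4, 4],
    "insurance": [2, 2, 3, 2],
}

def trend(series: list[int]) -> float:
    """Crude trend estimate: average month-on-month change."""
    return mean(b - a for a, b in zip(series, series[1:]))

benchmark = mean(mean(s) for s in signals.values())  # firm-wide average

for unit, series in signals.items():
    # Flag units that are both above the benchmark and trending upwards
    status = "investigate" if trend(series) > 0 and mean(series) > benchmark else "monitor"
    print(f"{unit}: avg={mean(series):.1f} trend={trend(series):+.1f} -> {status}")
```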

 

One of the main risks with AI is that people over-rely on it while the technology is still not mature enough for that level of autonomy – an issue compounded by LLMs' inherent opacity. Corporate compliance and governance frameworks exist to help employees understand their obligations and to enable them to explain to customers why an algorithm made a particular decision.

But it's equally important that employees are given context on how decisions are made. Another consideration for regulators is whether a decision made by an AI model can be challenged or overridden. Unrealistic expectations of AI risk undermining trust between the business and its employees and customers.

The panel’s advice 

  • Focus on outcomes, not activities, when designing AI training programmes for employees, so that the training helps them understand their jobs better.
  • Use governance artifacts, such as model cards, decision logs and escalation thresholds, to establish where you need oversight most (see the sketch below).
  • Tasks that can be fully automated by AI include transaction monitoring, sanctions screening and policy attestation.
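As a rough illustration of how a decision log and an escalation threshold might work together in a sanctions-screening workflow, here is a minimal Python sketch. Everything in it is hypothetical: the list, the names, the scoring method and the cut-off values are assumptions for illustration, not anything described by the panel.

```python
# Hypothetical sketch: sanctions screening with a decision log and an
# escalation threshold. All names, lists and cut-offs are illustrative.
from datetime import datetime, timezone
from difflib import SequenceMatcher

SANCTIONS_LIST = ["Ivan Petrov", "Acme Trading FZE"]  # made-up entries
AUTO_CLEAR, AUTO_BLOCK = 0.55, 0.85  # scores in between escalate to a human

decision_log = []  # governance artifact: an auditable record of every decision

def screen(payee: str) -> str:
    """Score a payee against the list, log the decision, return the outcome."""
    score = max(SequenceMatcher(None, payee.lower(), name.lower()).ratio()
                for name in SANCTIONS_LIST)
    if score >= AUTO_BLOCK:
        outcome = "block"
    elif score <= AUTO_CLEAR:
        outcome = "clear"
    else:
        outcome = "escalate"  # grey zone: keep a human in the loop
    decision_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payee": payee,
        "score": round(score, 2),
        "outcome": outcome,
    })
    return outcome

print(screen("Ivan Petrov"))  # exact match: block
print(screen("I. Petrov"))    # near match: likely escalates for review
print(screen("Jane Doe"))     # no meaningful match: clear
```

The grey zone captures the panel's wider theme: full automation for clear-cut cases, with an explicit, logged hand-off to a human wherever the model is uncertain.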