
GenAI and compliance

Gabe Hopkins at Ripjar explores some of the challenges and opportunities of generative artificial intelligence

 

Generative AI (GenAI) uses complex algorithms to create content, including text, images, videos and music, on demand. It can also be used to perform tasks and process data, making arduous jobs more manageable and saving considerable time and money. This is transformational for many industries, especially for teams looking to boost operational efficiency and drive innovation (and who isn't?).

 

Compliance as a sector has traditionally been hesitant to act when it comes to new technologies, taking longer to procure and implement them because of heightened caution about perceived risks. Many compliance teams are not yet using any AI, never mind GenAI.

 

However, this hesitancy also means that these teams are missing out on huge potential benefits, while less risk-averse industries are already enjoying the upside of implementing the technology in their systems.

 

It is therefore time for compliance teams to look for ways to leverage AI, and GenAI in particular, in safe, tested ways without introducing unnecessary risk.

 

Overcoming concerns 

GenAI is a new and rapidly developing technology, so it is natural that many compliance teams have reservations about how it can be applied safely. In particular, teams worry about sharing data that might then be used in training and become embedded within future models.

 

Most organisations are also unlikely to want to send data across the internet without very strict privacy and security measures in place.

 

When thinking about the options for running models securely or locally, teams are also likely to worry about cost, as much of the public discussion has focussed on the immense expense of training foundation models.

 

In addition to these concerns, model governance teams within organisations will worry about the black-box nature of these models, which raises the possibility of hard-to-detect biases against specific groups.

 

The good news is that there are ways to address all of these concerns: selecting models that provide the required security and privacy, and then fine-tuning them within a strong statistical framework to mitigate bias.
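
To make the privacy point concrete, here is a minimal sketch, not Ripjar's implementation, of how a team might run an open-weights generative model entirely on its own infrastructure so that no customer data ever leaves the organisation. The model name and prompt are illustrative assumptions, and the sketch assumes the Hugging Face transformers library is installed alongside a locally cached model.

```python
# Minimal sketch: running an open-weights model locally so sensitive data
# never crosses the internet. The model name below is an example only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any locally hosted open-weights model
    device_map="auto",                           # place the model on available hardware
)

prompt = (
    "Summarise the compliance-relevant facts in the following case notes, "
    "using only the information provided:\n\n"
    "<case notes would be inserted here>"
)

# Deterministic decoding keeps the output reproducible for audit purposes.
result = generator(prompt, max_new_tokens=300, do_sample=False)
print(result[0]["generated_text"])
```

Because the model runs inside the organisation's own environment, nothing in the prompt is shared with an external provider or absorbed into someone else's training data.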

 

Organisations will need to find the right resources, whether it’s data scientists or qualified vendors, to support them in that work, which may also prove challenging. 

 

Challenges for compliance teams

Despite this initial hesitancy, analysts and other compliance professionals stand to gain massively from implementing GenAI. For example, teams at regulated organisations such as banks, fintechs and large corporations are often faced with huge workloads and resource constraints.

 

The challenges are numerous. Depending on the industry, teams may be responsible for identifying a range of risks such as sanctioned individuals and entities, adjusting to new regulatory requirements, managing huge quantities of data, or a combination of all three.

 

For compliance professionals, the task of reviewing huge quantities of potential matches can be extremely monotonous and prone to error. The stakes are high: if teams make errors and miss risks, the impact on their firms can be significant, in terms of both financial penalties and reputational damage. It is not surprising that organisations struggle to hire and retain staff, contributing to a serious skills shortage among compliance professionals.

 

So what can organisations in regulated and other industries do to tackle issues of false positives and false negatives associated with modern customer and counter-party screening? It seems GenAI may hold some of the answers.

 

False positives occur when systems or teams incorrectly flag risks, while false negatives occur when genuine risks go unflagged. These errors may stem from human error and inaccurate systems, but they are hugely exacerbated by challenges such as name matching, risk identification and quantification. All of this can be mitigated with the right implementation of AI tools, including GenAI, without sacrificing accuracy.
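
As a simple illustration of how those two error types are usually quantified once alerts have been reviewed, the sketch below computes false positive and false negative counts along with precision and recall. The alert data is invented for the example.

```python
# Illustrative only: quantifying false positives and false negatives in a
# screening workflow once alerts have been reviewed and labelled.
def screening_metrics(alerts):
    """alerts: list of (flagged: bool, truly_risky: bool) pairs."""
    tp = sum(1 for flagged, risky in alerts if flagged and risky)
    fp = sum(1 for flagged, risky in alerts if flagged and not risky)  # false positives
    fn = sum(1 for flagged, risky in alerts if not flagged and risky)  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # how many flags were real risks
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # how many real risks were caught
    return {"false_positives": fp, "false_negatives": fn,
            "precision": precision, "recall": recall}

# Toy example: three alerts raised, one genuine risk missed entirely.
print(screening_metrics([(True, True), (True, False), (True, False), (False, True)]))
```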

 

Use cases of GenAI in compliance

GenAI can be implemented in a number of useful ways to improve compliance processes. The most obvious is Suspicious Activity Report (SAR) narrative commentary. Compliance analysts must write a summary of why a specific transaction or set of transactions is deemed suspicious in a SAR. Well before the arrival of ChatGPT, forward-thinking compliance teams had been using its ancestor technologies to semi-automate the writing of these narratives. It is a task that newer models excel at, particularly with human oversight.
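
A hedged sketch of what that semi-automation can look like: structured facts from an alert are placed into a fixed prompt, a model produces a first draft, and a human analyst reviews and edits the narrative before anything is filed. The field names and example facts below are hypothetical, not drawn from any real SAR workflow.

```python
# Sketch of semi-automated SAR narrative drafting. The prompt constrains the
# model to the listed facts; a human analyst reviews the draft before filing.
SAR_PROMPT = """You are assisting a compliance analyst. Using only the facts
provided, draft a concise Suspicious Activity Report narrative. Do not invent
details that are not listed.

Facts:
- Customer: {customer}
- Account opened: {opened}
- Transactions of concern: {transactions}
- Reason flagged: {reason}
"""

def build_sar_prompt(alert: dict) -> str:
    return SAR_PROMPT.format(**alert)

draft_prompt = build_sar_prompt({
    "customer": "Example Trading Ltd",
    "opened": "2021-03-01",
    "transactions": "14 cash deposits just below the reporting threshold in June",
    "reason": "structuring pattern inconsistent with stated business activity",
})

# The prompt would then be sent to a locally hosted model (as in the earlier
# sketch) and the returned draft reviewed by an analyst before submission.
print(draft_prompt)
```

Constraining the model to the listed facts, and keeping the analyst in the loop, is what makes this kind of drafting assistance safe to use.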

 

The ability to summarise data is also useful in tasks such as Politically Exposed Persons (PEP) and adverse media screening. These processes involve reviewing or researching a client to check for negative news and other risk-relevant data sources, enabling companies to identify potential problems before they become implicated or suffer reputational damage.

 

Deployed in the right way, summary technology can enable analysts to review match information more effectively and efficiently. With any AI deployment, it is essential to consider which tool is right for which activity, and the same is true here.
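
For example, a summarisation step for adverse media screening might assemble the relevant hits on a match into a single prompt for the locally hosted model shown earlier. The articles and wording here are invented for illustration.

```python
# Sketch: condensing several adverse-media hits on one screening match into a
# single analyst-facing summary. The generation step would reuse the locally
# hosted model from the earlier sketch.
articles = [
    "Regulator fines Example Trading Ltd over reporting failures (2022).",
    "Example Trading Ltd named in ongoing procurement-fraud investigation.",
]

prompt = (
    "Summarise the allegations below for a compliance analyst. "
    "State only what the sources say and note that they are allegations:\n\n"
    + "\n".join(f"- {article}" for article in articles)
)

# Sent to the model; the analyst reviews the returned summary against sources.
print(prompt)
```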

 

Combining GenAI with other machine learning and AI techniques can deliver a real step change: blending the generalised, deductive capabilities of GenAI with the highly measurable and explainable results of well-established machine learning models.

 

For instance, traditional AI can be used to create profiles, differentiating large numbers of organisations and individuals and separating out distinct identities. These techniques move past the historical hit-and-miss approach in which analysts carry out manual searches and truncate results at arbitrary numeric cut-offs. Once these profiles are available, GenAI supercharges analysts even further.
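
A rough sketch of that blend, using invented names and thresholds: a deterministic, explainable similarity score does the measurable screening work, and only strong candidate matches are passed on for GenAI-assisted (or human) review.

```python
# Illustrative blend of explainable matching with a GenAI review step: a
# deterministic name-similarity score gates which candidates are escalated.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """A simple, explainable similarity score between two names (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

customer = "Jonathon Smyth"
watchlist = ["Jonathan Smith", "John Smythe", "Maria Gonzalez"]

candidates = [(name, name_similarity(customer, name)) for name in watchlist]
for name, score in sorted(candidates, key=lambda c: c[1], reverse=True):
    if score > 0.75:  # only strong candidates go forward for GenAI-assisted review
        print(f"Escalate for review: {name} (score {score:.2f})")
```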

 

The results from the latest innovations show that GenAI-powered virtual analysts can achieve human accuracy or better across a range of different measures.

 

While concerns about accuracy will likely slow adoption, it is clear that future compliance teams will benefit heavily from these breakthroughs, which will enable significant improvements in speed, effectiveness and the ability to react to new threats and requirements. 

 


 

Gabe Hopkins is Chief Product Officer at Ripjar 

 

Main image courtesy of iStockPhoto.com and baona

 
