On 15 April 2025, FinTechTalk host Charles Orton-Jones was joined by Nisan Bangiev, Director, Fraud Risk Officer, Valley Bank; and Nishant Verma, Platform Lead - Data Integration & Realtime Services, NatWest.
Views on news
Large and small businesses, and even tax offices, may be hit with fraudulent reimbursement claims that are almost impossible to distinguish from legitimate receipts and invoices, especially now that OpenAI has released an improved image-generation model capable of photorealistic output, including text.
As the technology is honed, it is becoming increasingly difficult, or even impossible, for humans to tell fake and genuine business documents apart. Hackathons are organised to encourage fraud experts to come up with detection tools, but even the best entries achieve only 7 per cent accuracy.
Attack vectors
GenAI is improving the scalability of cyber-criminals’ operations, enabling them to blast hundreds of institutions with fraudulent documents from fake accounts, while AI can also help them replicate voices or biometrics. Fraudsters may engage with their victims and build relationships with them in virtual meetups and webinars before using deepfakes to cheat them out of their money. With heightened awareness of fraud, though, users can still catch out cyber-criminals during the social-engineering phase, at least until that, too, becomes fully AI-driven.
Fraudsters use sophisticated bots which were once simple, rule-based scripts but are now trained by AI to interact with customers and set them up. Once a victim is hooked, the case is passed to the next stage, where a human takes over. Through their fraud-detection monitoring, banks may also discover that clients are involved in human trafficking or social engineering; remittances from distant countries can be a telltale sign. Fraudsters can also open accounts with stolen IDs to funnel money. One way to combat this is digital fingerprinting, where specialist companies offer threat metrics to detect changes in user habits. However, bots can now also mimic a human filling out a form.
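To illustrate the kind of signal such fingerprinting relies on, here is a minimal Python sketch, not any vendor’s actual product, that flags form-filling sessions whose keystroke rhythm is too uniform to be human. The threshold values and function name are illustrative assumptions.

```python
import statistics

def looks_like_bot(keystroke_times: list[float],
                   min_events: int = 10,
                   min_jitter: float = 0.02) -> bool:
    """Flag sessions whose inter-keystroke timing is implausibly uniform.

    Humans type with irregular rhythm; naive bots fire input events at
    near-constant intervals. The jitter threshold here is an illustrative
    assumption, not a calibrated production value.
    """
    if len(keystroke_times) < min_events:
        return False  # too little data to judge either way
    gaps = [b - a for a, b in zip(keystroke_times, keystroke_times[1:])]
    return statistics.stdev(gaps) < min_jitter

# Example: a scripted bot "typing" one character every 50 ms exactly.
bot_session = [i * 0.05 for i in range(40)]
print(looks_like_bot(bot_session))  # True
```

Real behavioural-biometrics layers combine many such signals (typing cadence, mouse movement, device posture), which is why bots that mimic a single trait convincingly can still trip the others.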
Financial institutions must introduce a multi-layered, personalised onboarding process. They should start by establishing which sources they will use to verify the data provided at onboarding. Verification must be mobile-centric, checking geolocation, IP address and so on, and behavioural biometrics can be included in the process too. It’s key that the data onboarding generates is then fed back into the model to teach it how to detect the same type of fraud next time.
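As a rough sketch of how such layers might combine into a single decision, assuming hypothetical signal names and weights rather than any bank’s real scoring model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OnboardingSignals:
    # All field names are illustrative assumptions, not a real bank's schema.
    doc_verified: bool          # outcome of a document-verification source
    declared_country: str       # country the applicant claims to live in
    ip_country: str             # country derived from the IP address
    gps_country: Optional[str]  # mobile geolocation, if permission was given
    behaviour_score: float      # 0.0 (bot-like) to 1.0 (human-like), from a
                                # behavioural-biometrics layer

def onboarding_risk(s: OnboardingSignals) -> float:
    """Combine independent verification layers into one risk score in [0, 1].

    The weights below are illustrative, not calibrated production values.
    """
    risk = 0.0
    if not s.doc_verified:
        risk += 0.4  # document checks failed
    if s.ip_country != s.declared_country:
        risk += 0.2  # IP geolocation contradicts the declared address
    if s.gps_country is not None and s.gps_country != s.declared_country:
        risk += 0.2  # device geolocation contradicts the declared address
    risk += 0.2 * (1.0 - s.behaviour_score)  # bot-like behaviour adds risk
    return min(risk, 1.0)

# Example: verified document, but a mismatched IP and scripted-looking input.
applicant = OnboardingSignals(True, "GB", "NG", None, 0.1)
print(round(onboarding_risk(applicant), 2))  # 0.38
```

Because each layer contributes independently, a fraudster who defeats one check, such as a forged document, is still exposed by mismatched geolocation or bot-like behaviour; confirmed fraud outcomes would then be fed back as labels to retrain whatever model scores future onboardings.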
There are also information-sharing systems through which banks share data on fraud they have successfully prevented. Internet behavioural data, formerly used mostly for marketing, is now leveraged by banks to detect fraud, and vendors of fraud-prevention tools support their clients by pooling data from their many bank customers. In the US, the Aspen Institute brings together financial institutions, telcos and social media companies to find ways of stopping scams and fraud; in the UK, the FCA ensures that a similar kind of data sharing takes place in real time.
Signatures should be a thing of the past too, as they are easily forged. Anachronistically, however, the notary system that verifies identity is headed by the Archbishop of Canterbury in the UK and the Pope in Europe. Meanwhile, India has a state-of-the-art digital ID system, Aadhaar. Some banks also ask their clients’ permission to monitor their activity outside the bank’s app and to send alerts when they detect an anomaly.