Managing the problem of deep fake fraud


People have been manipulating images ever since photography was invented. Perhaps the earliest example of a fake photograph is a supposed image of a drowned man, dating back to 1840, when photography itself was in its infancy. Photographs of the Cottingley fairies, created by children in 1917, fooled many people in the UK. And Josef Stalin was well known for erasing unwanted colleagues from photographs (and from life).

But these instances are trivial compared with today’s deep fakes, which have emerged over the past few years as a major phenomenon. Deep fakes are audio or video streams that appear to show real people speaking but are in fact highly realistic counterfeits.

Driven by generative AI, deep fakes are growing at a rate of 400 per cent a year. Many uses are legitimate: film studios use the technology to “de-age” actors, as with Mark Hamill in the Star Wars spin-off The Mandalorian, and businesses are already using it to create advertising featuring celebrities or to make business pitches in multiple languages.

However, there is a growing trend for deep fakes to be used fraudulently. As far back as 2019, a British energy company was swindled out of $240,000 by a deep fake, while in 2021 reports emerged of a huge deep fake fraud in Dubai involving $35 million. In both cases an employee had received phone instructions from someone who appeared to be a senior executive.

The deep fake problem

Fake emails designed, for instance, to persuade unwary finance executives to send money to fake suppliers have been around for years. However, this type of fraud is now far more effective thanks to highly credible audio and video deep fakes.

Deep fakes can also be used to spread disinformation that damages the reputation of a business (for example, by making it seem that the CEO has made racist remarks) or of its products, leading to a loss of confidence and a potential fall in share price. They can be used to trick employees into giving away sensitive information such as passwords. And developments in technology mean that deep fakes will soon enable criminals to create fake video in real time for use in fraudulent video conference calls.

Detecting deep fakes

The obvious response to the deep fake problem is to teach people to spot them, and a number of guides have been published offering advice such as looking out for:

  • Blurring around eyes and teeth; unnatural eye or mouth movements; limited facial expressions
  • Incorrect pronunciation or odd phraseology; poor lip sync; monotone or metallic-sounding speech
  • Blurring around the edge of the face; odd-looking hair; a lack of symmetry, especially with ears, jewellery and collars
  • Odd lighting and shadows; inconsistent colours

Unfortunately, the reality is that none of these things will definitively identify a deep fake. They are clues at best, and many of them are quickly becoming outdated as the technology improves. (Try one of the tools that test your ability to spot fakes, such as the Fraunhofer fake audio test or the MIT Media Lab fake media test, to see how hard it is.)

An alternative is to fight fire with fire – to use AI-powered tools to identify deep fakes. These tools analyse video and audio for anomalies that may indicate manipulation. Intel, for example, has developed FakeCatcher, a deep fake detection tool that checks for authenticity by scrutinising facial movement and the subcutaneous blood flow visible in the video. Such tools can provide useful warnings, but they too are not perfect, even if they are perhaps better than the average human at spotting a fake.
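To make the shape of such a pipeline concrete, here is a minimal sketch of frame-by-frame screening in Python. It assumes OpenCV is installed, and the score_frame classifier is a hypothetical placeholder for a trained detection model – it is not FakeCatcher or any other named product.

```python
# Minimal sketch of an automated deep fake screening pipeline.
# Assumes opencv-python is installed. score_frame is a hypothetical
# placeholder -- a real system would load a trained detection model.
import cv2

def score_frame(face_crop) -> float:
    """Placeholder for a trained detector. A real model would return the
    probability that the crop is synthetic; 0.0 keeps the sketch runnable."""
    return 0.0

def screen_video(path: str, threshold: float = 0.5) -> bool:
    """Return True if the clip looks manipulated on average."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(path)
    scores = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of video
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            scores.append(score_frame(frame[y:y + h, x:x + w]))
    capture.release()
    # Aggregate: flag the clip only if the average per-face score is high.
    return bool(scores) and sum(scores) / len(scores) >= threshold

if __name__ == "__main__":
    print("manipulated?", screen_video("meeting_recording.mp4"))
```

The point of the sketch is the aggregation step: one odd frame proves little, so detection tools typically score many frames and report an overall confidence rather than a simple yes or no.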

The truth is that criminals are in an arms race with companies aiming to detect fakes. It’s becoming harder and harder to be confident that what you see on a screen is real.

Managing deep fakes

So, if it really is impossible to be certain whether a video is genuine or a fake, what can businesses do? After all, deep fakes aren’t going away.

First, organisations need to understand the risks and take them seriously. This isn’t science fiction or a trivial problem. And it’s going to get worse.

Second, they need to expect that they will be targeted. A report on propaganda from the Brookings Institution suggests that “militaries and security agencies should just assume that rivals are capable of generating deep fake videos of any official or leader within minutes”. Businesses should assume the same.

Third, they can and should search for deep fakes, but they must recognise that relying on automated tools or physical clues will not be sufficient. Instead, they need systems for preventing and responding to the damage deep fakes can cause.

Don’t just detect: prevent

In truth, preventing deep fakes may be impossible. But you can at least make them harder to create. One tactic is to strengthen the security of any place where media assets useful to criminals may be stored, such as social media accounts. Strong passwords, multi-factor authentication, access limitations and other cyber-hygiene techniques are all important.

In some circumstances – for example, where it is important to verify that the person at the other end of a video camera is live rather than a recording – liveness checks will be helpful.
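The challenge–response idea behind many liveness checks can be sketched in a few lines of Python. Everything here is illustrative: the observe callback stands in for the computer-vision component that would actually watch the caller, and the challenges are examples only.

```python
# Minimal sketch of a challenge-response liveness check. The premise:
# a pre-recorded or pre-generated video cannot react to an instruction
# it could not predict. The observe callback is hypothetical -- in a
# real system it would be a computer-vision component watching the feed.
import secrets
import time
from typing import Callable

CHALLENGES = ["blink twice", "turn your head to the left", "touch your right ear"]

def liveness_check(observe: Callable[[str, float], bool],
                   rounds: int = 2, timeout_s: float = 5.0) -> bool:
    """Issue random challenges; pass only if every one is performed in time."""
    for _ in range(rounds):
        challenge = secrets.choice(CHALLENGES)  # unpredictable by design
        print(f"Please {challenge} within {timeout_s:.0f} seconds.")
        start = time.monotonic()
        performed = observe(challenge, timeout_s)
        if not performed or time.monotonic() - start > timeout_s:
            return False  # no reaction, or too slow: treat as not live
    return True

if __name__ == "__main__":
    # Demo with a stub observer that always fails, so the check fails closed.
    print("live?", liveness_check(lambda challenge, timeout: False))
```

One caveat, consistent with the arms race described above: as real-time deep fakes improve, they will increasingly be able to follow such instructions, which is one more reason not to rely on any single check.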

Another technique is to use forensic watermarking or blockchain technology on your digital assets. This can make it much easier to identify whether a media asset has been tampered with. In practice this may be impossible to apply to all assets, but at the very least important assets such as recordings of the CEO introducing an annual report could be protected.
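Forensic watermarking and blockchain anchoring are specialist products, but the tamper-evidence idea underneath them can be illustrated with ordinary cryptographic hashing. The following is a minimal Python sketch, assuming assets are files on disk; the file and manifest names are made up for illustration.

```python
# Minimal tamper-evidence sketch using SHA-256 content hashes: record a
# fingerprint of each asset at publication time, then re-hash later to
# detect modification. A simple stand-in for the watermarking and
# blockchain approaches mentioned above; file names are illustrative.
import hashlib
import json
from pathlib import Path

MANIFEST = Path("asset_manifest.json")

def sha256_file(path: Path) -> str:
    """Hash in 1 MiB chunks so large videos never sit wholly in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register(paths: list[Path]) -> None:
    """Record the current fingerprint of each asset."""
    manifest = {str(p): sha256_file(p) for p in paths}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify() -> dict[str, bool]:
    """Return {asset: unchanged?} for every registered asset."""
    manifest = json.loads(MANIFEST.read_text())
    return {name: Path(name).exists() and sha256_file(Path(name)) == digest
            for name, digest in manifest.items()}

if __name__ == "__main__":
    assets = [p for p in [Path("ceo_annual_report_intro.mp4")] if p.exists()]
    register(assets)
    print(verify())
```

A hash only proves that a file you hold matches the one you originally registered; unlike a watermark, it cannot mark the copies that circulate elsewhere, which is why the two techniques complement rather than replace each other.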

Don’t just prevent: respond

Once you have done as much as possible to limit deep fake creation, you need to take active steps to manage your reputation online. Search for instances where images of your senior executives or other employees appear online – especially on social media – look out for fakes, and report them as appropriate (to law enforcement, regulators, the media and/or customers).

In addition, it’s important to have a well-rehearsed communications and response plan for any instance of a damaging deep fake emerging. As with a cyber-attack, it’s important to appear calm and well prepared.

Put appropriate controls in place. This is particularly important for finance departments. For example, it would be unwise to pay a supplier simply because the CEO phones and tells you to. Instead, you could have a process that involves phoning the caller back on a known number and establishing their identity before any payment is made. Recognising the CEO’s voice is not a sufficient reason to make that payment.
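As a minimal sketch of what such a control might look like in code – the function names, the call-back flag and the second-approver rule are illustrative assumptions, not a prescribed workflow:

```python
# Minimal sketch of a call-back payment control: a voice or video
# instruction alone can never trigger a payment. Names and steps are
# illustrative assumptions, not a prescribed workflow.
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    supplier: str
    amount: float
    requested_by: str                 # who phoned in the instruction
    callback_verified: bool = False   # identity confirmed on a known number
    approvers: set[str] = field(default_factory=set)

def approve(req: PaymentRequest, approver: str) -> None:
    if approver != req.requested_by:  # the requester cannot approve themselves
        req.approvers.add(approver)

def execute_payment(req: PaymentRequest) -> bool:
    """Pay only if the caller was verified out-of-band AND a second person
    signed off. Recognising a voice is never sufficient."""
    if not req.callback_verified:
        return False
    if len(req.approvers) < 1:
        return False
    print(f"Paying {req.supplier} {req.amount:.2f}")
    return True

if __name__ == "__main__":
    req = PaymentRequest("Acme Ltd", 240_000.0, requested_by="ceo")
    print(execute_payment(req))       # False: no call-back, no approval
    req.callback_verified = True      # finance phoned back on a known number
    approve(req, "finance_controller")
    print(execute_payment(req))       # True
```

The essential property is that the gate is procedural: it makes no attempt to judge whether the voice on the phone was real, it simply refuses to act on a voice alone.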

Of course, it’s perfectly possible that the CEO is phoning you with a genuine instruction. So culture and education are important too. An impatient CEO needs to be taught that controls are necessary. And if they bully a subordinate into overriding the controls, they must accept that it is not the subordinate’s fault when things go wrong. With the imminent arrival of real-time video-calling deep fakes, this is an essential precaution.

And finally, don’t be credulous. If something seems wrong, or too good to be true, then it probably is. JDLR (“just doesn’t look right”) should raise a red flag. Again, culture really matters here: employees must feel motivated to raise suspicions with line managers.

Deep fakes are evolving at a frightening rate. Organisations need to stay up to date, both on the latest technology to create them, and on the latest scams that criminals are using to exploit them. The best way to defend against deep fakes is to take a multi-layered approach that combines technology, strong security, employee training and cultural initiatives. Combined, of course, with an appropriate degree of cynicism.
