Click spam – how a new form of digital fraud can be prevented
20 July 2016
Berlin-based mobile analytics company Adjust analyses the mobile marketing outreach for thousands of companies and tens of thousands of mobile apps worldwide. Earlier this year, the company made a routine adjustment to the way it attributed user activity in apps to the advertising that drove it. In the process, it made a discovery that prompted a completely new product direction: fraud prevention.
Paul H Müller, co-founder and CTO
The app economy is growing at an immense pace, and marketing an app is becoming an increasingly significant market. In September 2015 alone, we received more than 16 billion clicks from our partners. Yet according to Gartner research, there are roughly 2.5 billion smartphone units worldwide. Some quick back-of-the-napkin maths shows that the advertising clicks we are sent vastly outnumber the potential smartphone users who could plausibly be clicking on ads in a given month.
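That back-of-the-napkin maths can be made concrete, using the figures quoted above:

```python
# Back-of-the-napkin check: clicks received in one month vs. the
# worldwide smartphone install base (figures quoted in the text).
clicks_per_month = 16_000_000_000   # clicks received in September 2015
smartphones = 2_500_000_000         # approximate smartphone units (Gartner)

clicks_per_device = clicks_per_month / smartphones
print(f"{clicks_per_device:.1f} clicks per smartphone per month")  # 6.4
```

More than six ad clicks per month for every smartphone on the planet, counting only the clicks sent to one analytics company, is already an implausible figure.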
In the course of our work, we recently made a routine tweak to our fingerprinting – the method we use to match an advertisement click to an app install. Before deploying the upgrade, we estimated that around 10 per cent of all fingerprint-based matches between clicks and installs – which in turn make up around 5 per cent of all app installs we see – would be rejected by the new algorithm. Given this seemingly minimal impact, we deployed the changes to our production system.
What we discovered upon deployment is that the impact was disproportionately distributed across the advertising networks that buy and sell ad space. Some networks saw click-to-install conversion rates drop from 0.X per cent to 0.0X per cent – exactly the factor by which we had improved the fingerprinting. Installs previously attributed to clicked ads turned out never to have been driven by those millions of clicks – they were in fact organic, user-generated app installs that ad networks had randomly claimed by spamming the fingerprinting algorithms.
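The size of that drop is simple to express as a ratio. The numbers below are placeholders standing in for the elided 0.X figures, chosen only to show the factor-of-ten pattern:

```python
def conversion_rate(installs: int, clicks: int) -> float:
    """Click-to-install conversion rate for one ad network."""
    return installs / clicks

# Hypothetical network claiming 1,000,000 clicks in a period.
before = conversion_rate(5_000, 1_000_000)  # 0.50% under loose matching
after = conversion_rate(500, 1_000_000)     # 0.05% after tightening
print(f"{before:.2%} -> {after:.2%}, a factor of {before / after:.0f}")
```

A legitimate source loses only the estimated ~10 per cent of matches; a source whose "conversions" were mostly coincidental fingerprint collisions loses an order of magnitude, which is what made the fraud visible.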
The most fitting name for these types of fraud campaigns is “click spam”. In terms of total user volume, these campaigns are dwarfed by legitimate traffic, but they are still large enough, and so unevenly distributed, that they can move millions of euros from an app developer’s pocket into a shady fraudster’s.
The clicks themselves are uninteresting – app developers optimise for installs, and typically pay on that basis as well. Digging deeper, ad networks or their fraudulent publishers send these clicks in the hope of claiming a share of an app’s organic installs. Those users look like quality traffic because they convert and retain exactly like organic traffic. The conclusion: advertising publishers are falsely claiming credit for organic installs on the back of spammed clicks.
We now have approaches to filter out parts of this illegitimate traffic, and are investigating more. We look at how frequently ad clicks are made, model aggregate click-to-install timespan distributions, and maintain blacklists of data centres and other servers. But it is a constant cat-and-mouse game. The more realistic goal for all parties in this ecosystem (developers, publishers, networks and attribution companies alike) is to raise the cost of fraud to a level where diverting resources to other targets becomes more profitable for the criminals we are up against.
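Two of the checks mentioned above – an implausible-frequency cap and a data-centre blacklist – can be sketched as simple heuristics. The thresholds, IP addresses and names here are illustrative assumptions, not our production rules:

```python
from collections import Counter

# Illustrative blacklist of server/data-centre IPs (RFC 5737 example ranges).
DATACENTER_BLACKLIST = {"203.0.113.9", "198.51.100.7"}

# Illustrative cap: real phones do not produce hundreds of ad clicks an hour.
MAX_CLICKS_PER_HOUR = 20

def is_suspicious(hourly_clicks: Counter, ip: str, device_id: str) -> bool:
    """Flag a click as likely spam using two simple heuristics."""
    if ip in DATACENTER_BLACKLIST:
        return True  # traffic originating from servers, not handsets
    if hourly_clicks[device_id] > MAX_CLICKS_PER_HOUR:
        return True  # implausible click frequency for a single device
    return False
```

Rules this simple are easy for fraudsters to route around, which is exactly the cat-and-mouse dynamic described above; their value is in raising the cost of each evasion.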
To find out more about Adjust’s research into mobile ad fraud, and the types of ad fraud most frequently encountered, check out adjust.com/mobile-ad-fraud-prevention/