Solution or "Solution" ?
A scenario that has played out several times over the past few years: a client approaches me when they are bleeding money to fraud but have limited experience with fraud prevention. They’ve most likely thrown some “fraud tools” at the problem, each one providing some temporary relief, but ultimately the problem has not gone away; sometimes it’s even gotten worse. I get my eyes on their fraudulent activity, see what’s going on and come up with a way to spot it. What I have is a “solution”: a quick, cheap (and maybe even automatic) way of distinguishing the “bads” from the “goods”. The only thing left to do is to block the “bads”.
So the client is begging for “low-hanging fruit”, and I have exactly that. Problem solved, no?
While the client’s concern is how to quickly stop the bleeding, I have to take - for their sake - a longer-term view of the problem. What the client doesn’t know yet is that the very minute a “solution” like this is implemented, an invisible countdown starts, and within a painfully short time (it could be hours!) the fraudsters will realize exactly how they are being detected, make simple and cheap changes on their end, and elegantly go around the newly crafted barrier. If that’s not bad enough, the changes the fraudsters implement have the potential to make their transactions significantly harder to detect. This, in turn, might cause the client to bleed much more money until they launch a sophisticated solution for the longer term. Great consultant me!
The flip side is that when you’re bleeding money (and the CEO is breathing down your neck), even a short stopgap solution is useful because, in addition to the obvious immediate savings, it affords you the time necessary to understand the issue and create a longer-lasting solution. So you’re not interested in listening to theoretical “what the fraudsters are going to do next” analysis. You want action, and you want it yesterday.
Protection for your detection
Hence the detection paradox: how to prevent fraud effectively without revealing your detection methods to the fraudsters? It is a careful balancing act. On the one hand it is tempting to prevent any fraudster activity on your platform. Examples: allowing a fraudster to log into a legitimate customer’s account seems wrong. Letting even a $10 transaction through when you know the credit card is stolen sounds illogical. On the other hand, when you take such a hard line you’re also providing a very good service for fraudsters: immediate feedback. They know - quickly and with very little effort - what works and what doesn’t. So they know very quickly what they need to change in order to succeed next time. And what’s worse, a small cheap change on their end might end up requiring very expensive improvements on your end.
Before we address this challenge, let’s make sure we’ve captured its two important facets:
How much and how quickly your detection methods are revealed to the fraudsters
How easy it is for the fraudsters to circumvent them, once revealed
What can you do?
1. Delay tactics
Passively collect as much information as you can about the user’s activity, at every point in the flow. Correlate and process the information continuously and update your risk assessment all the time. But only act when it is absolutely necessary to act.
As an example, you detect with high probability that a fraudster has logged into a legit user’s account. Instead of acting, track what they are doing and keep collecting more information. They’ve added a new bank account so they can siphon the money out? Let them. Now they’ve initiated a withdrawal? Accept it and show them the standard “Your money is on the way” page. Let them go to sleep happy, but don’t move the money. Whatever other action you take at this point (maybe none at all, just wait for them to complain) is so far removed from everything they’ve done, that they’ll have trouble figuring out what exactly tipped you off. And they will have to start again from scratch, obtain another hacked password, get a new bank account, etc. It’s costly and frustrating for them.
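As a rough sketch of the “accept the withdrawal, but don’t move the money” step - all names here (`Withdrawal`, `HOLD_THRESHOLD`, the status strings) are invented for illustration, not a real system:

```python
from dataclasses import dataclass

@dataclass
class Withdrawal:
    account_id: str
    amount: float
    risk_score: float       # continuously updated by your risk pipeline
    status: str = "pending"

# Hypothetical cutoff; in practice this comes from your model and tuning.
HOLD_THRESHOLD = 0.8

def process_withdrawal(w: Withdrawal) -> str:
    """Show every user the same success page, but silently hold
    high-risk withdrawals instead of actually moving the money."""
    if w.risk_score >= HOLD_THRESHOLD:
        w.status = "silent_hold"  # money stays put; case goes to review
    else:
        w.status = "released"     # money actually moves
    # The user-facing response is identical either way:
    return "Your money is on the way"
```

The point of the sketch is that the fraudster’s feedback (the success page) is decoupled from the decision, so nothing in the flow tells them where the hold happened.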
2. TMI (Too Much Information)
As hinted above, it is important to limit the information you share with fraudsters. When a user’s action is blocked or a transaction is declined due to a high likelihood of fraud, you have to make a careful choice about the message the user sees. False positives are very annoying and it is good practice to help your good customers recover from a decline. Still, you should remain vague about the exact reason for a decline, and refrain from mentioning any particular details about how your system has determined that the risk was high. To augment this, make it easy for good users to quickly access customer service (most fraudsters won’t). For those rare cases in which fraudsters do make the effort, train your agents not to be too chatty. I’ve seen call scripts where agents share very sensitive technical details, right out of their “admin”, with callers. Not a good idea.
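A minimal sketch of this messaging rule, assuming a hypothetical `decline_message` helper: the internal reason is recorded for your analysts, but every declined user sees the same vague text with a path to support:

```python
# Internal reasons are logged for analysts but never shown to users.
audit_log: list[str] = []

def decline_message(internal_reason: str) -> str:
    """Return the same vague message regardless of the real reason."""
    audit_log.append(internal_reason)  # internal only
    return ("We couldn't complete this request. "
            "If you think this is a mistake, our support team is happy to help.")
```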
3. Cocktails, not shots
Your system should base decisions on as many parameters as possible. Even if you’ve isolated a single data point that is highly effective at detecting fraud today, don’t use it on its own, because you’re revealing your strongest card, and it will quickly lose its potency. Mix and match.
Let’s say your fraud analysts discover a pattern: the fraudsters are often using gmx.de email addresses and it is very rare for your legitimate, mostly US-based, customers to have such an email. Instead of blocking all new account creations with this kind of email, make sure you’re also taking into account their IP address, their device links, behavioral patterns, etc. The most common way of doing this is by letting a statistical model generate a score based on many variables. (Just make sure that the model itself is not blinded by how incriminating the gmx.de emails are, to such an extent that it completely ignores everything else. Your data science geeks know how to do this). The desired effect here is to prevent the fraudster from noticing a direct link between gmx.de emails and a decline. It is achieved when just changing the email is not enough for the fraudster to predict success/failure: sometimes they'll fail with a gmail.com email and at other times they might succeed with a gmx.de email. As a result, they will not immediately stop using these emails, thus depriving you of a useful signal.
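To make the “score based on many variables” idea concrete, here is a toy logistic score. The signal names, weights, and bias are invented for illustration; a real model would be trained on your labeled data:

```python
import math

# Illustrative weights only; a real model is trained, not hand-written.
WEIGHTS = {
    "risky_email_domain": 1.2,  # e.g. gmx.de for a mostly US customer base
    "proxy_ip": 1.5,
    "new_device": 0.6,
    "behavior_anomaly": 1.0,
}
BIAS = -2.5

def fraud_score(signals: dict) -> float:
    """Logistic score over many signals, so no single one decides alone."""
    z = BIAS + sum(WEIGHTS[name] for name, present in signals.items() if present)
    return 1 / (1 + math.exp(-z))
```

With these made-up weights, a risky email on its own stays below a 0.5 threshold, while the same email combined with a proxy IP and a behavioral anomaly crosses it - exactly the ambiguity that keeps the fraudster from linking the email to the decline.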
This example also highlights the other facet of the challenge I outlined above: even a very strong “signal” is useless against an adversary who can easily stop “transmitting” it. While weather doesn’t change in response to your usage of umbrellas, fraudsters are very adept at adapting. So the question around any of your detection variables (a.k.a. features) is: how costly is it for the fraudsters to bypass it, once they know they need to? In this case using newly created or stolen gmail.com accounts is almost as cheap as using gmx.de accounts, so it’s a no-brainer for the fraudster. As a counterexample, asking users to provide a full billing address for every payment card allows you to create a strong barrier. While fraudsters have a large number of stolen cards, they all come with different addresses. So you’ll quickly detect they are using different cards that don’t seem to belong to the same person, which is a very strong fraud indicator. Additionally, cardholder address is an important data point that you can use to correlate with other geographic information on the transaction/account (most critically IP geolocation). And in case you think it’s easy for the fraudsters to just repeat the same address for the different stolen cards, remember that these cards will then fail AVS. So the barrier holds.
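The “different cards that don’t seem to belong to the same person” check can be sketched in a few lines; the three-address cutoff below is an arbitrary illustration, not a recommendation:

```python
from collections import defaultdict

def flag_card_spread(transactions):
    """Flag accounts whose payment cards carry several distinct billing
    addresses - cards that don't seem to belong to the same person."""
    addresses = defaultdict(set)
    for t in transactions:
        addresses[t["account_id"]].add(t["billing_address"])
    # Hypothetical cutoff: 3+ distinct billing addresses on one account.
    return {acct for acct, addrs in addresses.items() if len(addrs) >= 3}
```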
4. Leave the door ajar
Let some fraud in, on purpose! For example, adjust your model threshold to allow some fraud through in such a way that the losses are small and manageable. Or maybe: allow fraudsters to log into legit accounts and take certain actions that do not directly harm anyone. Of course, you’ll need to make sure that your good customers are not sustaining losses nor losing trust, by protecting them adequately, notifying them of suspicious activity, etc. If this is implemented well, the fraudsters will end up doing a lot of work that doesn’t land them any profit. In other words, you’re providing them with a low ROI. (Remember that each attempt has some cost for them, like revealing some of their stolen identity PII, proxy IPs, etc. If they reuse the same ones on their next attempt, they’re giving you important information.) Low fraudster ROI may sometimes be enough to drive them away from your platform. In addition, it allows you to continuously collect information about fraudulent activity and learn how it evolves over time. It’s also useful for training your models, keeping your fraud agents on their toes, and measuring your false positives.
5. Play with dice
Inject a little randomness into your fraud decision! In effect this is taking all of the above a step further and confusing the fraudsters slightly more.
A simple application is in setting thresholds: let’s say your fraud logic is set up to shoot all transactions over $1000 to manual review. The fraudsters will likely notice this pretty quickly because the high value transactions will be delayed. But if that threshold is randomized between, say, $700 and $1500, you can maintain the same cap on your loss and the same load in your queues while leaving the fraudsters in the dark a bit more often.
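A sketch of that randomized threshold, assuming a simple uniform draw per transaction (function names are mine, not from any particular system):

```python
import random

def review_threshold(low: float = 700.0, high: float = 1500.0) -> float:
    """Draw a fresh per-transaction review threshold instead of using a
    fixed $1000 line, so fraudsters can't probe for the exact cutoff."""
    return random.uniform(low, high)

def needs_manual_review(amount: float) -> bool:
    return amount >= review_threshold()
```

Anything above $1,500 is still always reviewed and anything below $700 never is; only the middle band becomes unpredictable, which is where the fraudsters do their probing.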
You could potentially choose to do much more confusing stuff, which I will not discuss here because the fraudsters may be listening. If you do decide to try this at home, proceed with caution. It has the potential to confuse good users and even your own people; it is no use against automated attacks in which failed attempts carry almost no cost; etc.
You’re playing against intelligent, cunning and adaptive adversaries. It is crucial that you don’t reveal your cards and maintain a poker face at all times. If your “face” is an automatic system that provides immediate feedback and is based on crude logic, it will very quickly cease to be effective. Continuously collect information and assess risk, but delay, obfuscate, and even randomize your responses, so that you waste your opponents’ time and assets, and make their life harder by confusing them.
Save the good user experience for good users.