#40 - Vendor disasters start with how you test them
Is there a more contested business relationship than the one between fraud vendors and their clients?
Somehow, when “fraud hits the fan”, it is usually the vendor that gets the flak... and the internal stakeholder that brought them in.
As it is such a strategic, long-term decision that often gets criticized, we talk a lot about how to pick the right vendors.
I’ve written in the past about how to pick a vendor that matches your needs, one that will support you along your journey, and how that support needs to be structured in the right way for you.
Today I’d like to explore a different angle of the same topic:
How to test vendors.
Or more specifically–how not to.
Let’s talk about it.
The AI Score Test Trap
Here’s the thing:
When clients test a new fraud provider, 90% of the time what they test is the AI score’s performance.
And in 90% of those cases, that’s exactly why they reach the wrong decision.
How so? Well, let’s be honest with ourselves:
We say that we want an AI fraud solution.
We say that we want AI to make decisions for us.
We think AI is a plug-n-play magic solution that would just work once integration is complete.
But in reality:
We don’t really know how to optimize AI scores
Because of that, we lose trust in AI
And as we lose trust, we tend to neglect the score and turn to rules
Admittedly, I’ve touched on this point many times in the past. Why? Because I see it daily in my consulting conversations. And even before that, as a vendor myself.
Here’s what I want to highlight:
Fintechs assess vendors assuming they’ll mainly use the AI capabilities, when in reality they end up relying on simple rules.
Yet as they go through the RFI motions, they stay focused on AI as the ROI driver, and that’s what they test solutions for.
We’ve all been there: spending days assembling a dataset, sending it out for a (hopefully) blind test, poring over the results, and deciding which vendor’s model did best.
The sad truth is that I’ve seen the same Fintechs spending years(!) with those vendors without ever using their AI score in production. Simply because they don’t know how.
And I’m here to say one simple thing: it’s ok.
It’s ok not to use AI for your fraud decisions.
It’s ok if your team doesn’t know how to best utilize it.
Believe me, you’re far from alone in that. And that certainly doesn’t mean you cannot fight fraud effectively.
In fact, you can fight fraud just as effectively with rules. Both approaches are viable, and if rules fit your team or business better, it’s ok to lean into them.
But there’s one important bit.
You want to test your vendor for its rule management capabilities, not its AI capabilities.
Testing Vendors for How You Are Going to Use Them
Let’s start with the bad news: testing rules is not so simple.
At least not if you’re expecting clear, quantifiable results.
Great rule management tools combine powerful features with a sleek user experience that enables the user to write good rules.
To a large degree, your rules can only be as effective as your team designs them to be.
And if your team’s skill has such an impact on performance, how can we determine the vendor’s impact in the process?
And without determining impact, how can we justify the cost?
Not so easy. No wonder we keep falling for the same AI score test trap. It’s just… simpler.
But here’s the good news–testing rules might not be so easy, but it’s also not impossible.
Here are a few approaches that will let you gain confidence in the vendor and establish quantifiable ROI metrics:
Testing The Vendor
The first option is putting the vendor on the hook. Instead of having them digest your dataset and train a model, have them write rules to improve overall performance.
The main advantage of this approach is how cheap it is for the client. You basically don’t need to do much beyond creating the dataset.
But I believe the disadvantages outweigh that advantage.
For starters, unless you’re expecting the vendor to manage your rules in production (which isn’t usually the case), you haven’t really tested how easy it would be for your team to utilize the product.
Secondly, it puts most vendors at a disadvantage: they are product-led organizations, not necessarily expert fraud rule writers. So results might fall short of what your own team could achieve.
Testing Offline
The second option would be to get access to an offline sandbox environment where you can upload your own data and play around with the functionalities.
This lets you test both the data features themselves and how powerful the rules you can write on the platform really are.
The advantage is that you get your team to familiarize themselves with the product and test if they can squeeze more performance from their rules.
The disadvantage is that an offline environment doesn’t let you experience rule writing as a business process, only the analytical benefits.
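Still, the analytical side is very measurable. To make it concrete, here’s a rough sketch of the kind of backtest you could run against such a sandbox (or in a notebook on your side): replay a candidate rule over labeled historical transactions and see what it would catch and what it would cost. The column names, the rule, and the thresholds below are made up for illustration; substitute your own.

```python
# A rough backtest sketch: replay a candidate rule over labeled historical
# transactions and measure catch rate vs. false positives.
# Column names, rule, and thresholds are illustrative only.
import pandas as pd

def backtest_rule(df: pd.DataFrame, rule) -> dict:
    fraud = df["is_fraud"].astype(bool)   # ground-truth labels
    flagged = rule(df)                    # transactions the rule would decline
    tp = (flagged & fraud).sum()          # fraud we would have caught
    fp = (flagged & ~fraud).sum()         # good customers we would have blocked
    return {
        "catch_rate": tp / fraud.sum() if fraud.sum() else 0.0,
        "false_positive_rate": fp / (~fraud).sum() if (~fraud).sum() else 0.0,
        "alert_volume": int(flagged.sum()),
    }

# Example candidate rule: high-value transactions from brand-new accounts.
rule = lambda df: (df["amount"] > 500) & (df["account_age_days"] < 7)

df = pd.read_csv("historical_transactions.csv")  # your labeled dataset
print(backtest_rule(df, rule))
```

Even a crude harness like this gives your team a shared, quantifiable baseline to compare one vendor’s sandbox against another, and against your current setup.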
Live Trial Period
Finally, a third option is to integrate and test the product fully in your live environment.
This helps you gauge not only the direct analytical benefits of the product, but also the benefits of utilizing the risk suite’s full capabilities in a live environment.
There are a lot of moving parts to it: alerting, backtesting, analyzing false positives, adapting to fraud pattern changes, minimizing bugs in production, researching new rules… the list goes on.
And while all of these are critical to your performance, they are also very hard to isolate and measure unless you compare apples to apples. Before and after.
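If it helps to see what “before and after” can look like in numbers, here’s a simple sketch: compare matched windows before and after go-live, normalized by volume. The metric names and the figures below are hypothetical, not a prescribed methodology.

```python
# A before/after comparison sketch for a live trial, assuming you can export
# per-window totals from your own systems. Metrics and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class WindowStats:
    fraud_losses: float   # confirmed fraud losses in the window
    blocked_good: int     # legitimate customers declined (false positives)
    total_volume: float   # total processed volume in the window

def compare(before: WindowStats, after: WindowStats) -> None:
    for name in ("fraud_losses", "blocked_good"):
        b = getattr(before, name) / before.total_volume
        a = getattr(after, name) / after.total_volume
        change = (a - b) / b * 100 if b else float("nan")
        print(f"{name} per unit volume: {b:.6f} -> {a:.6f} ({change:+.1f}%)")

# Hypothetical numbers: 8 weeks before go-live vs. 8 weeks after.
compare(
    WindowStats(fraud_losses=120_000, blocked_good=900, total_volume=40_000_000),
    WindowStats(fraud_losses=85_000, blocked_good=700, total_volume=43_000_000),
)
```

It won’t isolate each moving part, but it anchors the trial to the headline numbers your stakeholders actually care about.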
Does integrating the vendor make a potential misfit even more expensive? Perhaps.
You don’t need to jump straight into the deep end.
Test locally, get comfortable, agree with the vendor on a trial period and contract exit points, and you can minimize the risk and associated costs.
Test what you use, not what the vendor is selling
It all comes down to this: when buying a new car, you’re not taking it to the racetrack to check how fast it can run a lap.
No. You test drive it around town. Just like you would if the car was already yours.
Same thing with fraud vendors: test the capabilities you know and are comfortable using.
Different vendors might offer testing options beyond the ones I outlined above. Just pick whatever you feel would give you the confidence you need to make a data-driven decision.
And you just might find that some vendors will even offer a discount if they don’t need to train and maintain an AI model just for you…
What’s your experience with fraud vendors? What’s the best POC method you’ve seen? Hit the reply button and let me know!
In the meantime, that’s all for this week.
See you next Saturday.
P.S. If you feel like you're running out of time and need some expert advice on getting your fraud strategy on track, here's how I can help you:
Free Discovery Call - Unsure where to start or have a specific need? Schedule a 15-min call with me to assess if and how I can be of value.
Schedule a Discovery Call Now »
Consultation Call - Need expert advice on fraud? Meet with me for a 1-hour consultation call to gain the clarity you need. Guaranteed.
Book a Consultation Call Now »
Fraud Strategy Action Plan - Is your Fintech struggling with balancing fraud prevention and growth? Are you thinking about adding new fraud vendors or even offering your own fraud product? Sign up for this 2-week program to get your tailored, high-ROI fraud strategy action plan so that you know exactly what to do next.
Sign-up Now »
Enjoyed this and want to read more? Sign up to my newsletter to get fresh, practical insights weekly!