This blog post explains why, if you use automated KYC verification, you’ll need to be able to show how your technology makes its decisions. It then shows how an intelligent automation platform can make that possible.
____
For financial service providers, automating KYC verification can massively reduce the strain of staying compliant. To many, its benefits are obvious, like helping heads of KYC save on department operational costs (at a time when external economic pressures make operational efficiency business critical).
What’s not obvious, however, is the best way to actually implement an automated KYC verification process.
One of the main obstacles is ensuring explainability. The hard work of KYC comes down to this: making decisions. And, critically, every decision made in the course of that work needs to be explainable, regardless of whether it was made by a human or a machine.
Transparency in the automation of these types of decisions is increasingly being seen as a must-have. Indeed, you’ll find few business leaders who disagree—according to PwC, 84% of global CEOs believe decisions made by AI should be explainable.
So, as the market continues to flood with options for automating KYC, heads of KYC can be forgiven for having more questions than answers about how to achieve explainability without compromising on quality and efficiency.
As of February 2021, financial sector-specific regulatory guidance and expectations on explainability are still maturing. The FCA appears to be converging on the principle that any use of AI should have ‘sufficient interpretability’.
That does not mean solid regulation is absent; far from it. Guidance from UK and EU data and information regulators is well developed, and firms can be fined for getting it wrong. The ICO’s online guide “Explaining decisions made with AI” makes clear that being able to explain decision automation is not just good practice but a legal requirement under GDPR: if you use AI to make decisions that affect individuals’ circumstances, you should provide those individuals with “meaningful information about the logic involved.” The EU’s Fundamental Rights Agency (FRA) stresses the importance of explainability, too, in its recent report.
So, if you use automated KYC verification, you’ll need to be able to show how your technology is making its decisions. Now, here’s how you can.
There are better ways than machine learning to automate KYC verification
In its 2019 survey “Machine learning in UK financial services”, the Bank of England noted that its respondents had raised “lack of explainability” as one of the biggest risks to using machine learning (one of the most common ways of automating…anything) in financial services.
Similarly, people the FRA interviewed for its above-mentioned report said that “results from complex machine learning algorithms are often very difficult to understand and explain.”
It’s all true. Yes, in principle, the architect of any machine learning algorithm could explain how each decision was reached. But having them do that would be costly and time-consuming. (And that would fly in the face of why you’d want to automate KYC in the first place…)
Now, here’s an important distinction we often make at Rainbird: not all AI is machine learning.
As wonderful as it is, machine learning is not always best suited to a business context. Why? Essentially, machine learning is the application of mathematics to data. It operates at a fundamental, “data-up” level: it finds patterns in data and then imposes those patterns on humans, leaving people scrambling to work out how the machine found them. The result? Barriers to explainability.
And for the automation of KYC (or, really, the automation of anything), that’s important to consider. Because KYC is, in the end, all about your customers, and they have the right to know how decisions that affect them have been made.
Critically, not all automated KYC verification has to use machine learning. You can achieve all the same benefits of AI and automation using an intelligent decision automation platform, and you can easily ensure every decision made is explainable, because explainability is built into the automation.
Rather than operating at the “data-up” level of machine learning described above, intelligent automation takes a “human-down” approach: using a software platform, you apply human reasoning to data to make decisions. Human reasoning is replicated by representing the knowledge and logic of an expert (like that of a top KYC analyst), which can then be applied to any decision (like whether to onboard a potential customer, or how frequently to monitor an existing one). That knowledge and logic can easily be encoded as a series of facts in a map (more on that below).
Intelligent automation takes firms beyond (solely) using machine learning because it puts KYC analysts fully in control of the machines making the decisions they need made (rather than leaving those decisions up to whatever patterns machines find on their own). And it takes firms beyond conventional rules-based (RPA) automation because it is far more nuanced: intelligent automation can allow for uncertainty, ambiguity and a lack of data in the same way a KYC analyst might when making a complex judgement, rather than being tied to the linear, one-dimensional logic of if-this-then-that rules.
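To make that contrast concrete, here is a minimal sketch in plain Python. It is purely illustrative (it is not Rainbird’s platform, syntax or API, and the factors and weights are invented): a brittle if-this-then-that chain next to a weighted-evidence approach that can still reach a conclusion, with reduced confidence, when a fact is missing.

```python
# Hypothetical sketch in plain Python -- not Rainbird's platform, syntax or API.

# RPA-style logic: a brittle if-this-then-that chain. If a fact is missing
# (say, the PEP check hasn't run yet), the process simply breaks.
def rpa_style_decision(customer: dict) -> str:
    if customer["is_pep"]:                    # KeyError when "is_pep" is absent
        return "refer"
    if customer["credit_score"] < 500:
        return "decline"
    return "onboard"

# "Human-down" style: each rule contributes weighted evidence, the way an
# analyst weighs factors, so a conclusion can still be reached (at lower
# confidence) when some data is unavailable. Factors and weights are invented.
RULES = [
    # (factor, predicate, risk contribution, weight an analyst assigned)
    ("is_pep",            lambda v: v is True, 0.9, 0.5),
    ("credit_score",      lambda v: v < 500,   0.7, 0.3),
    ("high_risk_country", lambda v: v is True, 0.8, 0.2),
]

def reasoned_decision(customer: dict) -> tuple[float, float]:
    """Return (risk score, confidence), tolerating missing facts."""
    risk, evidence_weight = 0.0, 0.0
    for factor, predicate, contribution, weight in RULES:
        if factor not in customer:
            continue                          # unknown fact: skip it, lowering confidence
        evidence_weight += weight
        if predicate(customer[factor]):
            risk += contribution * weight
    confidence = evidence_weight / sum(w for *_, w in RULES)
    return risk, confidence
```

Given only `{"credit_score": 640}`, `rpa_style_decision` raises a `KeyError`, while `reasoned_decision` still returns a risk of 0.0, flagged with a confidence of just 0.3 so a human can see how much evidence was actually available.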
Here’s what I mean in action:
Video: Transform KYC with Intelligent Onboarding automation, from Rainbird AI on Vimeo.
How intelligent automation can make explaining KYC processes easy
As an example, applying intelligent automation to a typical KYC verification process could go something like this:
- Create a knowledge map in Rainbird. A knowledge map encodes the logic by which Rainbird will make decisions. Making the map is intuitive and visual, a bit like making a mind map. Note that a knowledge map is not the same as a decision tree: while a decision tree provides a set of linear if-this-then-that instructions, a knowledge map provides a holistic model of all the factors that matter to a decision, starting with concepts, relationships and rules (more on this here). That way, a decision can always be reached, even if there are gaps in the data.
- Tell the map how important different factors are to you when determining a customer’s risk level, like how much weight to give a customer’s status as a politically exposed person (PEP), or how different credit scores would affect your decision to onboard a customer.
- Let Rainbird use the logic you’ve built to reason over your data. This is where Rainbird’s reasoning engine does all the heavy lifting of making decisions.
- Receive the risk score for each customer and a suggested monitoring frequency, along with an explanation, and use this as the basis for deciding what to do next.
Crucially, because in step 1 you used sentences to create your knowledge map, you’ve modelled your automation to be easily explainable by design, as the sketch below illustrates.
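Here is another illustrative sketch in plain Python, this time of steps 3 and 4. It assumes nothing about Rainbird’s actual reasoning engine or output format; the rule sentences, point values, thresholds and country codes are all invented. The point is simply that when rules are written as sentences, the engine can hand back the exact sentences that drove each conclusion.

```python
# Hypothetical sketch -- not Rainbird's reasoning engine or output format.
# Rule sentences, points, thresholds and country codes are invented.
RULES = [
    ("The customer being a PEP adds 40 risk points",
     lambda c: c.get("is_pep") is True, 40),
    ("A credit score below 500 adds 30 risk points",
     lambda c: c.get("credit_score", 999) < 500, 30),
    ("Residence in a high-risk jurisdiction adds 30 risk points",
     lambda c: c.get("country") in {"XX", "YY"}, 30),   # placeholder codes
]

def assess(customer: dict) -> dict:
    """Return a risk score, a suggested monitoring frequency and the
    sentence-level reasons behind them (the explanation of step 4)."""
    score, reasons = 0, []
    for sentence, applies, points in RULES:
        if applies(customer):
            score += points
            reasons.append(sentence)
    monitoring = "monthly" if score >= 50 else "annually"
    return {"risk_score": score,
            "monitoring": monitoring,
            "explanation": reasons or ["No risk factors matched"]}

print(assess({"is_pep": True, "credit_score": 430, "country": "GB"}))
# {'risk_score': 70, 'monitoring': 'monthly',
#  'explanation': ['The customer being a PEP adds 40 risk points',
#                  'A credit score below 500 adds 30 risk points']}
```

Because each reason is the analyst’s own sentence rather than a model weight, the explanation that accompanies every decision is readable by a compliance officer, a regulator, or the customer themselves.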
***
To find out more about how intelligent automation enables consistent, dynamic, automated KYC verification, download the Managing continual KYC at scale eBook.