Intelligent Automation

Enabling ongoing due diligence as part of KYC with intelligent automation


This article explains how to run due diligence checks as part of a real-time, continuous monitoring programme that uses intelligent decision automation software. It highlights essential requirements and explains what ‘ongoing’ means in practice.


Know Your Customer (KYC) has become a key battleground in the fight against financial crime, with fraudsters deploying new tactics against the financial system with increasing frequency since the COVID-19 pandemic.

The financial system needs to be one step ahead of the financial criminal. This means it’s critical that your compliance function can properly implement protocols that follow new regulatory guidance—which calls for KYC monitoring to be ongoing.


This blog post covers:

  1. What’s required in initial onboarding due diligence checks 
  2. What you need to do for follow-up due diligence checks on onboarded customers 
  3. How to run ongoing due diligence as a continuous, dynamic KYC process 
  4. Essential considerations for keeping regulators happy, and key features your intelligent automation tech stack should offer 

The checks you need to run will differ depending on the type of customer you are onboarding. Both individual customer and corporate client onboarding are covered in this blog post.

1. What’s required in initial onboarding due diligence checks

A) Onboarding individual customers

When you onboard any new individual customer, you should run customer due diligence (CDD) checks. For example, when onboarding a retail banking customer, you should determine the individual’s: 

  • Identity & verification: Does the name, date of birth and address info provided match up with info on external data sources? Is the applicant indeed who they say they are?
  • PEP status: Is the individual a politically exposed person? 
  • Home country: In particular, is the individual from a sanctioned country?
  • (UK) residency info: Has the individual lived in the UK for under two years?
  • Credit score: What is the individual’s credit score? And is there any evidence of CCJs or insolvency?


B) Onboarding corporate clients

When you onboard a new corporate client, you should run client due diligence (CDD) checks. For example, if a business is setting up a new bank account with you, you should determine the business’s: 

  • Corporate structure: For instance, is it a PLC, LTD, or LLP? Can this be confirmed with external data sources, such as Companies House? Does the corporate structure make sense?
  • Geographical location: Where is it headquartered? If operating in multiple jurisdictions, where are its offices? In particular, are any of its offices located in any sanctioned countries?
  • Subsidiaries: What are the business’s subsidiaries, if any? Does it have any shell companies? For what reason(s) is it associated with any shell companies?
  • Ultimate beneficial owners (UBOs): Who owns the company? Who are the shareholders? On each UBO, you should also run the individual due diligence checks listed in (A) above: identity & verification, PEP status, home country, (UK) residency info and credit score.


For both (A) individual customers and (B) corporate clients


The outcome of the above checks should yield the following:

  1. A decision on whether to onboard or not onboard: yes or no?
  2. (If yes) A decision on the customer’s or client’s risk score: what level of risk does the customer or client pose, expressed as a percentage?
  3. (And) A decision on customer or client monitoring frequency: how often should you carry out follow-up checks?
  4. Whether or not enhanced due diligence (EDD) checks are required: do you need to run further checks to assess the suitability of the customer or client for onboarding?


Important: EDD checks can require manual intervention. But they are crucial to get right. If you determine that you need to run additional EDD checks, then you will need to seek further information on the customer, for instance details of a PEP’s relationships with family members and close associates.

If you do complete EDD checks on a customer or client, it should put you in a better position to answer questions 1 to 3.
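To make the four outcomes above concrete, here is a minimal sketch in Python of how initial CDD check results might be combined into a decision. The field names, weights and thresholds are illustrative assumptions, not a recommended risk policy:

```python
from dataclasses import dataclass

@dataclass
class CddResult:
    identity_verified: bool
    is_pep: bool
    sanctioned_country: bool
    uk_resident_under_2y: bool
    credit_issues: bool  # evidence of CCJs or insolvency

def assess(r: CddResult):
    """Return (onboard, risk_score_pct, review_months, edd_required)."""
    if not r.identity_verified or r.sanctioned_country:
        return (False, 100, None, False)  # hard fail: do not onboard
    # Illustrative additive weights for the remaining risk factors.
    score = 10  # baseline residual risk
    score += 40 if r.is_pep else 0
    score += 15 if r.uk_resident_under_2y else 0
    score += 20 if r.credit_issues else 0
    edd = r.is_pep  # e.g. PEP status triggers enhanced due diligence
    # Higher risk score -> more frequent follow-up checks.
    months = 3 if score >= 50 else 6 if score >= 25 else 12
    return (True, score, months, edd)
```

For example, a verified PEP from a non-sanctioned country would come back as onboardable but high risk, on a three-monthly review cycle, with EDD flagged.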


2. What you need to do for follow-up due diligence checks on onboarded customers

For subsequent checks, you should run due diligence checks exactly as you ran them at the initial onboarding stage. That means all of the same facts need to be rechecked on every customer or corporate client.

It’s likely that you will be using a customer risk categorisation system comprising three levels: high risk, medium risk, low risk. 

It’s likely, also, that this categorisation will determine how often you monitor your customers. 

For example, you might currently check a high-risk customer or client every three months to determine whether their level of risk has changed (and, if so, what action needs to be taken). 

However, this kind of approach can be ineffective, and even counterproductive. That’s because it risks delaying the time it takes for your KYC team to notice signals of impending criminal activity—not good when your KYC process’s underlying goal is to reduce risk.

This is where intelligent decision automation software comes in.


3. How to run ongoing due diligence as a continuous, dynamic KYC process

Regardless of the level of complexity of the checks, using intelligent decision automation software makes KYC processes more responsive to changes in risk. 

Combined with the right stack of robotic process automation (RPA), data and other in-house monitoring systems, intelligent decision automation software can boost your compliance function’s ability to run due diligence checks continuously, throughout any customer’s or corporate client’s lifecycle.

This is particularly important to do because:

  • The regulators are pushing firms in the direction of continuous, dynamic monitoring for KYC.
  • Customer and client risk status is a function of human behaviour, which is liable to change at any time.

Intelligent decision automation software enables you to be both proactive and reactive when it comes to compliance.


On the proactive side, you can build a much more accurate picture of who your customers are. This is because intelligent decision automation software can reason over data, using the models of your best KYC analysts’ knowledge. 

The software generates a percentage risk score for each customer or corporate client, which you can then monitor over time. You can also set accurate risk parameters, determined by these percentage risk scores, and schedule follow-up due diligence checks at the required frequency.


On the reactive side, you can feed into the software important information from your transaction monitoring system. 

For instance, if unusual transactions start appearing in a customer’s account, it may be an indicator of irregularity within that account. You could set any such transactions to trigger new due diligence checks. 

The goal here is to narrow the gap available to the financial criminal: the closer to real-time the monitoring, the smaller the gap. 
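As a sketch of the reactive side, an event handler from a transaction monitoring system might queue a fresh due diligence check like this. The threshold, field names and country codes are assumptions for illustration:

```python
# Illustrative reactive trigger: feed transaction-monitoring alerts into a
# due diligence queue so that re-checks run close to real time.
UNUSUAL_AMOUNT = 10_000

def on_transaction(txn: dict, recheck_queue: list) -> None:
    """Queue a fresh due diligence check when a transaction looks unusual."""
    unusual = (
        txn["amount"] >= UNUSUAL_AMOUNT
        or txn.get("counterparty_country") in {"IR", "KP"}  # example codes
    )
    if unusual:
        recheck_queue.append({
            "customer_id": txn["customer_id"],
            "reason": "unusual transaction",
            "txn_id": txn["id"],
        })

queue: list = []
on_transaction({"id": "t1", "customer_id": "c42", "amount": 25_000}, queue)
# queue now holds one pending re-check for customer c42
```

In a real deployment the queue would be whatever your orchestration layer uses (a message bus, an RPA work queue), and each queued item would kick off the same checks described in section 1.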


4. Essential considerations for keeping regulators happy and fraudsters at bay

As AI and automation solutions flood the compliance technology market, you need to make sure the intelligent decision automation solution you choose follows these key tenets:

  1. Explainability: Regulators recommend that AI be explainable. In KYC, this can mean that any customer or client decision must have a clear evidence trail that shows why each decision was reached. Such an evidence trail is not always readily available through pattern-finding automation techniques, such as machine learning.
  2. Adaptability: The rules on which your decision automation is based should be able to change in step with regulations. Further, you should be able to plug your intelligent decision automation software into any data source or technology you require. Where possible, aim for platforms that have an open API.


Essential KYC software every AML programme needs


This article explains why our model of intelligent automation for KYC is essential to adopt, and how it works.


To meet today’s challenges, every AML programme needs a robust KYC software stack that can make intelligent decisions about whether to onboard new customers and how frequently to monitor them.

Our model of intelligent automation for KYC is essential to adopt. This is because it has key benefits that you won’t find with common point-to-point technologies. Nor will you find them with analysis-based technologies, such as machine learning algorithms.

Our model will give you these key benefits:

  • You’ll be able to get a percentage score for the level of risk each customer poses (so that you’ll have a clearer picture of who you are onboarding and won’t rule out perfectly good customers). This means fewer wasted revenue opportunities.
  • You’ll be able to easily explain how automated decisions are made—and trace the important factors in making those decisions to relevant regulations. This means you’ll keep regulators like the FCA and ICO happy.
  • Your KYC processes will be more accurate and much faster than using big teams and legacy technologies. This means you can improve the operational efficiency of your department while improving work output.


The automated KYC process

Before diving into the KYC software stack that makes up our model, let’s go over the fundamental steps common to any KYC onboarding process. Here’s what you need to do when you are deciding whether or not to onboard a new customer: 

First, you need to gather (necessary) information on that customer.

Then, you need to make a judgement about what kind of risk level that customer might pose (ensuring you can be transparent about how the judgement was made).

Next, you need to decide what to do with their application.

Lastly, you need to take follow-up action(s), informed by initial information gathering as well as ongoing monitoring of information on that customer. This can be tailored to how your organisation would typically map a customer’s risk.



The underlying KYC software stack that makes these four steps possible

The stack

Rainbird has teamed up with Smart Automation to create a KYC software solution that allows KYC operations managers to achieve and automate these four steps—only with superhuman speed and accuracy (at a fraction of the cost of legacy systems). It also gives you total transparency over the decisions made about each customer.




Data (via an integration layer): Data is the raw ingredient of the entire process. In the stack, data comes from form entry information (when a customer signs up) and from external sources (such as PEP lists, sanctions lists and credit ratings).

Smart Automation: Using tools by Blue Prism (a robotic process automation (RPA) provider), Smart Automation’s role is to collect and manage data.

Rainbird: Rainbird is the decision automation engine that adds a layer of cognitive reasoning to the stack (this process is called intelligent automation).


The stack in action

Let’s now see how the KYC process works in practice.




Set up: 

Prior to starting to process applications, you’ll have set up a knowledge map in Rainbird. This means you can visually encode the knowledge of your best KYC analyst(s), in a process similar to mind mapping, so you can digitally replicate their expertise.


1. Gather information

Smart Automation collects customer data using an online customer-facing form. This information is retained in an integration layer, like a spreadsheet. Then, Smart Automation uses Blue Prism digital workers to run checks on external sources of data, such as politically exposed persons (PEP) lists, to see if there are any matches.

Smart Automation then passes all the necessary information about each customer to Rainbird via an API. 
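That handoff might look something like the sketch below. The payload structure and the endpoint in the comment are hypothetical, not Rainbird’s actual API:

```python
import json

def build_decision_payload(form_data: dict, external_checks: dict) -> str:
    """Bundle form entries and external check results into one JSON payload
    for the decision engine (all field names are illustrative)."""
    payload = {
        "customer": {
            "name": form_data["name"],
            "dob": form_data["dob"],
            "address": form_data["address"],
        },
        "checks": {
            "pep_match": external_checks["pep_match"],
            "sanctions_match": external_checks["sanctions_match"],
            "credit_score": external_checks["credit_score"],
        },
    }
    return json.dumps(payload)

# A digital worker would then POST this payload to the decision engine,
# e.g. requests.post("https://decision-engine.example/api/query", data=...)
```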


2. Make a judgement

Put simply, this is where ‘intelligent’ comes into ‘intelligent automation’. Using all the information provided, Rainbird reasons over the data for each customer in the same way your best KYC decision-maker would. Then, Rainbird makes a judgement. That judgement allows your KYC team to see for each customer: 

  • A risk score—as in, a percentage of how much risk that customer would pose if onboarded.
  • A certainty factor for that risk score—as in, how certain Rainbird is that the score it has assigned each customer accurately reflects the level of risk posed by that customer.
  • A suggested monitoring frequency for that customer (to maintain compliance with money laundering regulations across jurisdictions).


3. Decide what to do

This score and certainty level will then determine what happens next—based on rules that you encode in Rainbird and can always control or change. For instance, a customer could be onboarded…or they could be rejected…or their application could trigger enhanced due diligence (EDD) checks and be flagged to your KYC team—the key is, this decision can be set up to mimic the gold standard of your own KYC procedures. 
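The routing described here can be sketched as a small set of editable thresholds. The numbers and action names below are assumptions for illustration, not a recommended policy or Rainbird’s output format:

```python
def route_application(risk_score: float, certainty: float) -> str:
    """Map decision-engine outputs (risk %, certainty %) to a next action.
    Thresholds are illustrative and would mirror your own KYC procedures."""
    if certainty < 60:
        return "refer-to-kyc-team"  # too uncertain to automate
    if risk_score < 25:
        return "onboard"
    if risk_score < 60:
        return "edd-checks"         # flag for enhanced due diligence
    return "reject"
```

Because the thresholds live in rules you control, tightening the policy (say, lowering the rejection threshold) is an edit to the logic, not a retraining exercise.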


4. Take follow-up action

Depending on what decision is made, Smart Automation then uses Blue Prism to carry out the appropriate tasks. For instance, it can send notifications to customers or notify your KYC team that EDD checks are needed.




Ongoing monitoring

The regulators are getting tougher on AML; they need firms to do more. Especially now. That’s why we’re seeing new obligations mandated, like those requiring firms to monitor customers on an ongoing basis—not just at onboarding. In short, the case for digital transformation couldn’t be stronger. 

Once a customer is onboarded, our model of intelligent automation for KYC lets you easily make updates and upgrades to its setup (for example, if regulations are tweaked or tightened). So, you can be sure you’re applying the same fair, consistent process to every customer, at every stage of their journey. Indeed, the process would be the same as described above if you wanted to run checks on the customers you already have.



Automated KYC verification—how to make it compliant and explainable


This article explains why, if you use automated KYC verification, you’ll need to be able to show how your technology is making its decisions. It then explains how using an intelligent automation platform can make that possible.


For financial service providers, automating KYC verification can massively reduce the strain and difficulty of keeping compliant. To many, its benefits are obvious—like helping heads of KYC save on department operational costs (at a time when external economic pressures mean operational efficiency is business critical). 

What’s not obvious, however, is the best way to actually implement an automated KYC verification process.

One of the main obstacles is ensuring explainability. The hard work of KYC comes down to this: making decisions. And, critically, every decision made in the course of that work, regardless of whether it is a human or a machine making those decisions, needs to be explainable.

Transparency in the automation of these types of decisions is increasingly being seen as a must-have. Indeed, you’ll find few business leaders who disagree—according to PwC, 84% of global CEOs believe decisions made by AI should be explainable. 

So, as the automation market continues to flood with options for automating KYC, heads of KYC can be forgiven for having more questions than answers when it comes to how explainability can be possible without compromising on quality and efficiency. 

As of February 2021, financial sector-specific regulatory guidance and expectations on explainability are still maturing. The FCA’s current position appears to be that any use of AI should have ‘sufficient interpretability’. 

That does not mean there are no solid regulations yet; far from it. UK and EU data and information regulators’ guidance is very well developed. And firms could be fined for getting it wrong. The ICO’s guidance in its online guide “Explaining decisions made with AI” makes clear that being able to explain decision automation is not just good practice, but a legal requirement under GDPR. For instance, if you use AI to make decisions that affect individuals’ circumstances, you should seek to provide those individuals with “meaningful information about the logic involved.” The EU’s Fundamental Rights Agency (FRA) stresses the importance of explainability, too, in its recent report.

So, if you use automated KYC verification, you’ll need to be able to show how your technology is making its decisions. Now, here’s how you can.

There are better ways than machine learning to automate KYC verification

In its 2019 survey “Machine learning in UK financial services”, the Bank of England noted that its respondents had raised “lack of explainability” as one of the biggest risks to using machine learning (one of the most common ways of automating…anything) in financial services. 

Similarly, people the FRA interviewed for its above-mentioned report said that “results from complex machine learning algorithms are often very difficult to understand and explain.” 

It’s all true. Yes, the architect of a machine learning algorithm could, in principle, explain how each decision was reached. But having them do so would be costly and time-consuming. (And that would fly in the face of why you’d want to automate KYC in the first place.)

Now, here’s an important distinction we often make at Rainbird: not all AI is machine learning. 

As wonderful as it is, machine learning is not always best suited to a business context. Why? Essentially, machine learning is just the application of mathematics to data. It operates at the fundamental, “data-up” level: it finds patterns in data and then imposes those patterns on humans, leaving humans scrambling to work out how the machine found them. The result? Barriers to explainability. 

And for the automation of KYC (or, really, for the automation of anything), that’s important to consider. Because KYC is in the end all about your customers—and they have the right to know how decisions that affect them have been made.

Critically, not all automated KYC verification has to use machine learning. You can achieve all the same benefits of AI and automation using an intelligent decision automation platform. You can also easily ensure every decision made is explainable—because explainability is essentially built-in with the automation.

Rather than operating at the “data-up” level of machine learning described above, intelligent automation takes a “human-down” approach: using a software platform, you apply human reasoning to data to make decisions. Human reasoning can be replicated by representing the knowledge and logic of any expert (like that of a top KYC analyst) and then applying it to any decision (like deciding whether to onboard a potential customer, or how frequently to monitor them). That knowledge and logic can easily be encoded as a series of facts in a map (more on that below). 

Intelligent automation takes firms beyond (solely) using machine learning because it enables its KYC analysts to be fully in control of machines for the decisions they (KYC analysts) need to make (rather than leave those decisions up to the patterns that machines find on their own). And intelligent automation takes firms beyond normal rules-based (RPA) automation because it is far more nuanced—intelligent automation can allow for uncertainty, ambiguity and lack of data in the same way a KYC analyst might when making a complex judgement (rather than being tied to linear and unidimensional decisions of if-this-then-that logic).

Here’s what I mean in action: 

How intelligent automation can make explaining KYC processes easy

Applying intelligent automation to a typical KYC verification process could go something like this:

  1. Create a knowledge map in Rainbird. A knowledge map constructs the logic by which Rainbird will make decisions. Making the map is intuitive and visual, kind of like a mind map. Note that a knowledge map is not the same as a decision tree: while a decision tree provides a set of linear if-this-then-that instructions, a knowledge map provides a holistic model of all the factors that are important to a decision by starting with concepts, relationships and rules. That way, a decision can always be reached, even if there are gaps in the data. 
  2. Tell the map how important different factors are to you when determining a customer’s risk level—like how important it is if a customer is a PEP, or how different credit scores would affect your decision to onboard a customer.
  3. Let Rainbird use the logic you’ve made to reason over data. This is where Rainbird’s reasoning engine does all the heavy lifting of making decisions.
  4. Receive the risk score for each customer and suggested monitoring frequency, along with an explanation, and use this as a basis to decide what to do next.


Crucially, because in step 1 you would have used human-readable sentences to create your knowledge map, your automation is easily explainable by design.
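To give a toy flavour of how rule-based reasoning can tolerate gaps in data (unlike a linear decision tree), here is a heavily simplified sketch. The rules, weights and certainty calculation are illustrative only, not how Rainbird’s reasoning engine actually works:

```python
# Each rule contributes weighted evidence towards a risk conclusion.
# Unknown facts lower certainty rather than blocking the decision.
RULES = [
    ("is_pep", 0.40),            # being a PEP strongly raises risk
    ("adverse_media", 0.25),
    ("high_risk_country", 0.35),
]

def reason(facts: dict) -> tuple:
    """Return (risk, certainty), both in 0..1, even with gaps in the data."""
    risk, answered = 0.0, 0
    for fact, weight in RULES:
        if fact in facts:        # the fact is known, true or false
            answered += 1
            if facts[fact]:
                risk += weight
        # unknown facts are simply skipped, reducing certainty
    certainty = answered / len(RULES)
    return (min(risk, 1.0), certainty)
```

So `reason({"is_pep": True})` still yields a risk estimate, just with a lower certainty, mimicking an analyst making a judgement from partial information.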


To find out more about how intelligent automation enables a consistent, dynamic, and automated KYC verification, download the Managing continual KYC at scale eBook.



Rethinking KYC AML as part of digital transformation


The climate of tightening regulations, the increasing difficulty of preventing fraud, and the digitisation of operations amid COVID mean it’s time that firms rethink KYC AML as part of their wider digital transformation strategies. This article explains how they can, using intelligent automation.


For firms offering financial services, legacy KYC solutions are simply no longer good enough to meet today’s challenges. 

The cunning stratagems of fraudsters and money launderers have seen firms’ KYC processes increasingly struggle in the sinking sand of financial crime. 

All the while, regulators are asking firms to do more. Anti-money laundering (AML) regulations are constantly being strengthened and tweaked. And the fines keep coming. The first half of 2020 alone saw global regulators hand out 60% more in AML fines than in the whole of 2019, according to a survey by Duff & Phelps, published in the FT. The same survey also found that customer due diligence was the most common failure to result in sanction. 

Financial service firms clearly can’t afford to cut corners with KYC and AML. But these days, compliance is far from cheap. The 2008 crisis saw financial service firms increase their compliance headcount; American bank Citigroup, for example, upped its compliance staff from 4% of its total number of employees in 2008 to 15% in 2018. 

Nonetheless, as the global business world heads further into the economic unknown, this is a moment that calls for operational efficiency. 

It also calls for investing in compliance in new ways. 

One new way is to invest in AI and automation. And one area of AML compliance that could be particularly well set to benefit is KYC. 


The digitisation of compliance

The current downturn has no precedent. We’ve started 2021 with many financial services teams dispersed around the UK, working from their homes. Like every corner of business, the work of compliance is having to adapt. 

But there’s another reason this downturn has no precedent: never before have financial service providers been so digitally capable, underpinned by a market for digital compliance technology that has advanced significantly in the last decade. There are now far more possible solutions to compliance problems than there were in 2008.

Undoubtedly, in their efforts to make KYC faster, simpler and cheaper, many financial service providers will turn to machine learning—they’ll use a “data-up” approach, with their machines finding patterns in data, then using those patterns to make high-volumes of decisions about customers. 

However, this can lead to machines making biased decisions, algorithms that overfit to past patterns and are blind to contemporary issues, and staff lacking oversight of the total client lifecycle. It can also leave firms unable to explain automated decisions.

Using machine learning would also mean firms relying on closed databases that don’t easily allow for ongoing integrations with other platforms, a common problem financial service providers report, according to the Bank of England’s survey.

It would also mean relying on data that is likely anachronistic to current regulations, like the EU’s AML directives, which are regularly updated. This is important because, since AMLD5 came into force in 2018, firms have been required to monitor their customers continuously, not just at the onboarding stage. This puts pressure on organisations to ensure the data on which their algorithm is based will retain its utility when regulations change. 

To truly achieve operational efficiency in compliance, a new paradigm for KYC AML processes is needed. One that situates KYC AML within wider digital transformation plans, rather than in new, bolt-on machine learning approaches.


Intelligent automation can support digital transformation (in ways that machine learning can’t)

“[D]igital transformation should be guided by the broader business strategy.”

— Behnam Tabrizi et al., Harvard Business Review


For financial service providers, one of the top business strategy objectives when embarking on, or continuing with, digital transformation is to improve customer experience. 

That means any automation technology used as part of that transformation must be sustainable, in the sense that it is able to mitigate unnecessary risks (whether they be reputational, financial or otherwise) and is future-proofed as far as possible. It can’t be at risk of being made redundant by yet-to-be-defined regulatory updates, for instance, relating to explainability.

The oft-cited reputational debacle that engulfed Apple and Goldman Sachs, after it turned out that their credit card product’s algorithm discriminated against women, exemplifies the approaches to avoid. Intelligent automation could have removed the possibility of bias, dampening the reputational and financial risks.

Intelligent automation could also have gone even further, offering both firms the means of explaining to customers how decisions had been reached. And considering that the Department of Financial Services is investigating the incident, it would have helped with regulators, too. 

If and when regulatory changes do come along, that doesn’t mean a whole new algorithm. Updates to the logic can easily be made and up and running in a matter of days, hours, or even minutes. Why? Because intelligent automation is based on human logic, and that human logic can be easily represented (and codified) in a knowledge map.

Over and above machine learning, intelligent automation can more flexibly handle the issue of integrating with legacy platforms. No doubt, many firms would love the opportunity to completely redesign their tech stack, but in KYC and AML that is rarely straightforward, in much the same way that replacing a water dam is not straightforward. 

For instance, it may be that governance requirements prevent firms from removing certain integrations in fraud prevention. Crucially, intelligent automation can sit at the core of any KYC process, without replacing integrations that are still critical to AML. 


Intelligent automation not only poses far fewer long-term risks, then. It can even make up for poorly performing, clunky legacy systems without replacing them. It all starts with a rethink.



In insurance, AI and intelligent automation can improve customer loyalty


As it reached the end of the 2010s, the UK’s insurance sector faced a problem: its customer satisfaction levels were in decline. In January 2020, it had seen its lowest UKCSI customer satisfaction score since July 2015, falling by 1.4 per cent on the previous year. 

Then came COVID-19. Events were cancelled. Travel plans were ditched. And businesses were interrupted. Practically overnight, insurers faced a stupefying volume of claims. 

But tackling the claims spike led to even more displeased customers. In the few weeks that followed, insurers’ customer satisfaction levels dipped significantly, dropping a further 1 per cent.

Customer satisfaction is not just critical for insurers; in any business, getting customer satisfaction right is ultimately the pathway to customer loyalty and, by extension, increased revenue. “Loyalty leaders grow revenues roughly 2.5 times as fast as their industry peers,” writes Rob Markey in Harvard Business Review.

The insurance sector’s customers would seem decidedly harder than most to gain loyalty from. According to Accenture, as many as 41 per cent of customers are likely to switch insurer after making a claim. 

But the sector needs to look inwards before it draws any conclusions. Try as they might to digitise their product offerings, insurance firms are lagging behind other sectors like finance and retail banking. Though COVID has brought truly unprecedented activity, a new, sudden spike in claims should not have necessarily meant a new, sudden dent in customer satisfaction. 

So, why did it? 

From the customer’s perspective, insurance products available to market are often cast as a few clumsily and rigidly sized policies that ought to fit all; when it comes to actually making a claim, the process is correspondingly clumsy and rigid. 

Traditional insurance firms’ products lack personalisation and transparency, and the claims process lacks speed. And, all the while, insurtech startups continue to show the wider sector what’s possible, snapping up customers who expect more digitally-ready insurance services. 

To create the kinds of customer loyalty they need, agents and brokers need to bring about proper digital transformations at their firms. This means using AI to fine-tune their products to deliver more value to their customers. 

Of course, AI is already being used for things like tackling fraud and other back-office processes. But further adoption of AI into customer experience as part of digital transformation would give insurance firms a number of ways to make tangible improvements to their products. This would also up their chances of winning increased customer loyalty. 

Transform digitally; see results

Those insurers that are making use of AI as part of digital transformations are already seeing the benefits in their customer satisfaction ratings. In the UK, few insurers are getting this right quite like LV=, which kickstarted its digital transformation in 2017 by tailoring its services to a complex map of possible customer journeys. 

On top of this, LV= has also used machine learning to boost its ability to settle grey-area claims, meaning it can focus resources on improving the customer experience. The result: LV= has remained the top insurer in the UKCSI’s customer satisfaction rankings for two years running.

But integrating AI into services could take businesses in the sector even further than this. With AI-powered tech, handling unexpected claims spikes can be made relatively easy. The real difference is made when the way in which claims are handled feels personalised—as part of a more human-centric customer experience. 

“There was a time when insurance customers were satisfied with a timely response, a fair price and quality service,” Phil Britt writes (paraphrasing Clark Wooten). Undeniably, there’s a gap at the top of the insurance market for a more personalised customer experience. Even a leader like LV= has room to grow: imagine what further improvements to customer satisfaction it could make with a mobile app as user-friendly as those of insurtech startups like Cuvva, Zego and INSHUR.

Build deep personalisation in insurance using intelligent automation

The truth is, AI shouldn’t just be some ‘nice to have’, bolted on and deployed when necessary; customers are magnetised to immediacy and personalisation. Think of US insurtech startup Lemonade, which in 2016 set a new world record for the fastest time to pay a claim: three seconds. Perhaps unsurprisingly, Lemonade tops the Clearsurance rankings for customer satisfaction. 

AI can help solve the insurance sector’s customer satisfaction problem by tailoring the process to what end users actually want out of insurance: more transparent pricing, communication through preferred channels, and dynamic product offers that resonate with their plans. Intelligent automation can be particularly effective in helping agents know things like what to upsell (and when) based on data collected through ongoing interactions with customers. Its explainability can also ensure that insurers can keep information accessible to auditors. 


Embracing AI and automation could mean claims volume fluctuations (like the one we saw in March 2020) don’t also mean customer satisfaction fluctuations. Brokers and agents can instead focus their time on the customers most at risk of jumping ship and on building long-term customer loyalty.

Free webinar: Truly intelligent automation—beyond the limitations of RPA and machine learning
If you've tried robotic process automation (RPA) or machine learning (ML), you may know about their limitations. Access this free webinar to learn how intelligent automation can move you beyond those limitations.


COVID recovery: 5 roles intelligent automation can play


“COVID has accelerated a phenomenon that was happening anyway.” 

—James Duez, CEO Rainbird


In the economic maelstrom of COVID, many companies have increased the urgency with which they consider adoption of AI and automation. Their CTOs’ and CIOs’ once long-term boardroom agenda items are now integral components of business recovery strategies.

And urgent it is. In autumn, as Europe put swathes of land under new lockdowns, businesses across the insurance sector and financial and professional services abandoned return-to-office plans. Now in winter, the promising news of the COVID vaccines has given way to the reality of awaiting their roll-out. “Companies are realising they need to move forward in coming up with answers to the questions that COVID has brought,” says our CEO, James Duez. 

But, when we entered 2020, some companies could be understood for not rushing their AI and automation adoption. High-profile failures of AI transparency, such as Apple’s credit card algorithm’s purported discriminatory bias, have shown how AI debacles can not only cause serious reputational harm but also mean being investigated by regulators. 

As businesses head into the potentially powerful economic currents of a new year, few can afford a debacle like Apple’s. 

Fortunately, it’s easily avoidable.

Why? For Apple, the problem was not just that its algorithm appeared to show bias. It was also that the technique of AI used (machine learning) meant Apple couldn’t explain how the AI’s biased judgements had been reached. “The problem is, machine learning is not easily interpretable,” explains James. “Firstly, you can’t know for sure if it’s right. Secondly, it can’t give you a reason; it’s always a mathematical judgement.” 

Any truly effective digital strategy must take into account compliance and transparency. Which is why so many companies are turning to intelligent automation (IA). IA is based on a human-down structure (it starts with human knowledge and applies it to data), so that humans can always understand and explain what the machines are doing.

For businesses, IA can play many roles in COVID recovery, from the financial to the operational to business growth. Here are five roles.

1. Unlock thinking-time for employees

Just like the cattle-drawn ploughs that turned the soils of Ancient Egypt, IA is designed to reduce heavy lifting. Building visual models of the thought processes of highly effective employees means businesses can automate decision making, helping to make judgement-heavy work (such as assessing suspect payments or valuing stocks) 100 times faster.

This means businesses can make the workloads of those staff who are best placed to help with new solutions far more manageable, and let them think ahead. A London Business School study found that reducing employee time spent on repetitive tasks makes business innovation more likely. “If you can liberate people from time-heavy work, they can do the things that they are good at,” says James. “You can help people innovate.” 

2. Protect cash flow by focusing on operational efficiency

Businesses that focus on operational efficiency during a recession have the highest chance of thriving after it, argued Ranjay Gulati et al. following their 2010 HBR study. Gulati et al. write: “Companies that master the delicate balance between cutting costs to survive today and investing to grow tomorrow do well after a recession.” 

10 years on, things are different. “Now, the IA sector is more mature,” says James. “Today, it’s the other way round. You invest in order to cut costs.” This is critical. While we’ve seen many organisations rush to make deep cuts and protect their bottom line, investment in IA can help operational efficiency while retaining (or indeed increasing) work output.

3. Support employee mental wellbeing to boost productivity and make work quality reliable

A snap poll held in April this year by the Institute of Employment Studies found that 48 per cent of home workers are working longer hours and irregular patterns. Studies in The Lancet have already shown that COVID has had a significant impact on mental health. 

In times of crisis, you need your employees’ judgement at its best. Paradoxically, crises catalyse poor decision making. According to research indexed on NCBI, poor mental health can markedly impair a business’s productivity and profits, with work-related stress being a major contributor to poor decision making. 

IA can take the pressure off employees to perform on time-heavy tasks. This can help employers more easily reduce employee stress and therefore be better able to meet their duty of care to employee wellbeing.

4. Simplify workforce office usage to minimise risk of coronavirus transmission

IA can relieve HR and office managers of the pressure of making high-stakes judgements about simple questions, like whether an employee can go to the office. For instance, in June, we developed a risk assessment tool for Norfolk and Norwich University Hospital that manages the risk of virus exposure for a vulnerable front-line workforce, with digital, one-to-one automated reports. The tool took manual effort out of workforce management for the NHS hospital at a truly critical time.

If their employees do need to use the office, businesses can use IA to manage that risk through an accurate, user-friendly process. 

5. Compete and scale in the new economy

As much of the world heads into recession, law firms anticipate that their clients will see a rise in litigation and disputes, and auditors expect to see an increase in restructuring and insolvencies. The kinds of bursts of work output that may be required from the professional services could be intense.

IA can give businesses the tools to create new revenue streams to help manage high volumes of work while staying cost-effective for clients.  


“This isn’t the pandemic, it’s a pandemic. People now know they can work in different ways.” —James Duez

With predictions like PwC’s that AI will add $15.7tn to the world economy by 2030, there is no question that, in their path to COVID recovery, businesses will continue to fast track the adoption of AI and automation. For those that do, the real question will be whether the AI techniques on which they based their recoveries were right for the digital transformations they needed.

Introduction to the new Rainbird
This webinar runs through major upgrades to the Rainbird platform, as well as how they enhance the user experience and provide easier access to key features—without coding.


Case study: assessing COVID-19 risk for thousands within minutes


Challenge: getting a team of fewer than 20 to have the impact of thousands

Like many other NHS trusts, the occupational health team at the Norfolk and Norwich University Hospitals (NNUH) trust was significantly impacted by the novel coronavirus pandemic. 

Hilary Winch, Head of Workplace Health, Safety and Wellbeing at NNUH, explains:

“I recall very distinctly, Boris [Johnson] announcing the shielding guidance late one evening and I knew full well that our phone would be ringing the next morning… In those initial few weeks, we spoke to over 3,500 staff who were concerned about their own health or the health of those they lived with. It was just crazy… It was quite horrific actually.”

Hilary’s team is responsible for the health and safety of staff members within her NHS trust. With the advent of the coronavirus, it received its biggest-ever workload, with the pressure of needing to speak to as many people as possible, as quickly as possible. Every NHS trust needed to deliver diagnostic and therapeutic services to the public but, to do this effectively, they needed to ensure their stretched workforces weren’t further decimated by COVID-19 infections. 

Equally important was meeting their duty of care to employees’ protection and wellbeing. NNUH needed to ensure staff wouldn’t be placed in needlessly life-threatening situations.

Approach: create a digital clone of your best specialists’ know-how

Dr Robert Hardman, Consultant in Occupational Medicine at Workplace Health, Safety and Wellbeing, is a specialist who works on Hilary’s team. While the team was having manual conversations with staff members more vulnerable to COVID, Dr Hardman began talking to Rainbird about how automation and AI might help. 

In a typical occupational health assessment, Dr Hardman and his colleagues go through (broadly speaking) a few steps: 

  1. Conduct interviews with staff members (gather information)
  2. Assess each staff member’s risk, based on their personal circumstances, government policy and clinical guidance (make a judgement)
  3. Document the information gathered and judgement made, especially for future reference or subsequent assessments (record data)

Rainbird’s intelligent decision automation platform could help across all three steps, given: 

  1. It can interact with people using a conversational, chat-style interface to gather information
  2. It can hold a model of an expert’s decision-making process and connect that model to data so it can automate decisions
  3. It can automate the process of deciding which pieces of information to record, as well as where and how
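The gather/judge/record loop that those three capabilities automate can be sketched in miniature. This is a hypothetical illustration only: the field names and risk rules below are made up, and a real Rainbird model is built visually rather than coded like this.

```python
# Hypothetical sketch of the three-step assessment pipeline:
# gather information, make a judgement, record the outcome.
# Field names and scoring rules are illustrative, not NNUH's actual logic.

QUESTIONS = {
    "age": "What is your age?",
    "immunosuppressed": "Do you have an immunosuppressed condition? (yes/no)",
    "patient_facing": "Is your role patient-facing? (yes/no)",
}

def assess_risk(answers):
    """Step 2: apply a (made-up) expert model to the gathered answers."""
    score = 0
    if answers["age"] >= 60:
        score += 2
    if answers["immunosuppressed"] == "yes":
        score += 3
    if answers["patient_facing"] == "yes":
        score += 1
    return "high" if score >= 3 else "medium" if score >= 1 else "low"

def record(answers, rating, store):
    """Step 3: persist the information gathered and the judgement made."""
    store.append({"answers": answers, "rating": rating})

records = []
answers = {"age": 45, "immunosuppressed": "yes", "patient_facing": "no"}
rating = assess_risk(answers)          # step 2: make a judgement
record(answers, rating, records)       # step 3: record data
print(rating)  # -> high
```

In practice, step 1 (gathering answers) happens through the conversational chat interface rather than a hard-coded dictionary, and the model only asks the questions that are actually relevant to the decision at hand.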

According to Hilary, the interviews alone were time-consuming: 

“You would speak with a staff member probably for a good 20 minutes to half an hour, to find out their own personal health information, assess this against the knowledge we were gaining and talk through their anxieties.”

So, we built a Rainbird model of Dr Hardman’s expertise in conducting COVID-19 risk assessments, and incorporated government policy and clinical guidelines. 

Instead of the occupational health team having to manually assess every employee, staff could complete an online chat with Rainbird’s tool—which would ask for all relevant information, make a risk-level judgement and send key information to the appropriate records.

Results: superhuman speed and accuracy, with a real human touch

Increased testing speed and accuracy

In the words of Hilary, “We can get all 9,000 staff, in theory, doing their risk assessments at the same time. We couldn’t do all 9,000 staff manually at the same time.”

Because Rainbird’s tool relies on computing power, you can run as many concurrent risk assessments as needed. Rainbird’s tool takes the expertise of a specialist and applies it consistently, every single time. 

It also accounts for unique risks to Black, Asian and minority ethnic (BAME) people and other highly vulnerable groups—and did so early on (when many manual risk assessments did not).

Privacy protected

When undergoing risk assessments, staff often have to divulge sensitive health information. For this reason, it’s uncomfortable (and borderline unethical) to have line managers conducting these assessments. 

In Hilary’s words, “Somebody might have kept some of that health information very, very private… You know, if they’ve got an immunosuppressed condition… they may not want to tell their manager.”

That’s why Rainbird’s tool doesn’t store staff members’ personal information. When a consultation is complete, it automatically sends out two reports: one for the occupational health team (containing detailed health information and the rationale underlying the employee’s risk rating) and one for managers/HR (containing only the risk rating, with no health information). 

This way, managers aren’t exposed to inappropriate information and staff don’t have their privacy violated. 
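The two-report split is, in effect, a redaction step: the same assessment produces a full record and a stripped-down one. A minimal sketch of that idea follows, with entirely hypothetical field names; it illustrates the privacy pattern, not the tool's actual data model.

```python
# Hypothetical sketch of the two-report split: one detailed report for
# occupational health, one redacted report for managers/HR.
# Field names are illustrative.

def build_reports(assessment):
    occ_health_report = dict(assessment)  # full detail, incl. health info
    manager_report = {
        "employee_id": assessment["employee_id"],
        "risk_rating": assessment["risk_rating"],
        # deliberately: no health information, no rationale
    }
    return occ_health_report, manager_report

assessment = {
    "employee_id": "E123",
    "risk_rating": "high",
    "health_info": "immunosuppressed condition",
    "rationale": "condition raises vulnerability to COVID-19",
}
full, redacted = build_reports(assessment)
print(sorted(redacted))  # -> ['employee_id', 'risk_rating']
```

The design choice worth noting is that redaction happens at report-generation time, so the manager-facing output can never contain health data by accident.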

Increased capacity for human connection

Machines can perform logical analyses but can’t deliver emotional connection—the COVID-19 pandemic has required Hilary’s team to excel at both. 

Because Rainbird’s tool is doing the heavy lifting of performing COVID risk assessments, Hilary and her team are freed up to have one-on-one conversations, where a human touch is needed: 

“What it meant for us was, when we launched, we were able to focus on having those conversations—particularly with those people who were shielding… we had a one-on-one conversation with every single one of them to talk through their anxieties and explore risk mitigations methods to get them back to a safe place of work, once shielding ended.”

Reduced bias and inconsistency

In situations with heightened emotions, increased complexity and overwhelming volumes, human error abounds. This is true, no matter the organisation or individual. 

Because Rainbird’s tool applies its model of COVID-19 risk without prejudice, emotions or fatigue, it doesn’t give inconsistent or contradictory outcomes. The result is a much higher level of accuracy in risk assessments and less exposure to risk for employees.

Regular reassessments that minimise risk

Since Rainbird’s tool helps Hilary’s team complete highly accurate COVID-19 risk assessments at such a high volume, and with little administrative burden, it is now possible to run risk assessments more regularly. 

This is necessary for dealing with highly dynamic situations like the COVID-19 pandemic—new information about the virus (e.g. if certain individuals are at greater risk of death from COVID than others) is constantly emerging. As a result, government guidance, clinical measures and employee risk profiles are constantly changing. 

People’s health or personal circumstances may also change—an employee may develop a separate illness that increases their vulnerability to COVID-19, or they may become pregnant and have concerns about being in the workplace. A one-off risk assessment captures only a single moment in time. If circumstances change, the assessment has to be reviewed, and this global pandemic is no exception; undertaken only once, a risk assessment could paint a dangerously inaccurate picture.

Get access to Rainbird's COVID-19 occupational health risk assessment tool
The only way to responsibly meet NHS E&I’s requirement to risk assess all vulnerable staff is with an automated tool like ours. Get your trust access to our approved COVID-19 risk assessment tool today.


KYC is the new ​backbone of customer experience in retail banking


In recent years, several high-profile fraud cases have stunned regulators and eroded the public’s trust in banking institutions. Take the case of Danske Bank, the large Scandinavian bank at the centre of an eight-year international money laundering scandal in which some €200 billion in payments flowed, overlooked, through the non-resident portfolio of the group’s Estonian branch. This raises the question of why know your customer (KYC) processes failed to prevent such an oversight.

KYC is a precautionary measure taken by regulated firms to prevent fraudulent activities and has become an integral component of banks’ fraud prevention efforts. It is mandated and essential for confirming the identities of customers during onboarding and throughout their ‘customer lifetime’, as well as verifying their suitability and any financial crime risks they might pose.

A study commissioned by the European Parliament claims that fraud has cost the EU up to €990 billion a year in losses to gross domestic product. In response, stringent regulations to tackle tax evasion and corruption have meant that KYC’s remit has been extended to include continuous monitoring, fraud management, sanctions management and anti-money laundering.

Complying with KYC obligations has always been the responsibility of banks, where KYC capabilities have been developed reactively to meet regulatory mandates. This approach has led to cumbersome processes, fragmented not only across divisional silos but also across functional silos within divisions. Not only is this model unsustainable; it puts the burden on the most important and fragile elements: your customers and the customer experience (CX).

How poor KYC can directly impact the customer experience

KYC onboarding is the crucial first step a bank goes through when acquiring lifetime clients. It consists of multiple touchpoints involving various departments, such as operations, legal and compliance. A poor KYC experience has been found to directly affect the customer experience, with adverse effects on your bank’s bottom line. The following are some ways CX is directly impacted by inept KYC.

Repetitive and unnecessary requests for information

KYC checks can be invasive during customer onboarding, with banks asking customers for a multitude of documents and personal information (mandated by AML regulations) to build an accurate customer profile. According to a study conducted by Forrester Consulting, customers were contacted on average 10 times during the onboarding process and asked to submit anywhere between five and 100 documents.

Not only does a poor onboarding experience frustrate your customers; many people are also concerned about how their data is being used, collected and stored. A Cisco-led survey found that 43% of consumers do not believe they can adequately protect their data.

To eliminate such a problematic hurdle during KYC onboarding, challenger FinTech banks (such as Revolut and Starling) are surpassing traditional retail banks by reducing the number of touchpoints during the onboarding process.



Rising costs of manpower

Not surprisingly, McKinsey stated that “in the United States, anti–money laundering (AML) compliance staff have increased up to tenfold at major banks over the past five years or so.”

Banks are adding staff to vulnerable areas, which in turn causes disjointed KYC efforts across departments within a single institution. This can mean that customers are periodically contacted for the same information to fulfil KYC requirements multiple times—leading to duplicated efforts among banking staff and a recurring nightmare for customers.

The kinds of information deemed acceptable can also vary by country and bank, with some institutions requiring a face-to-face meeting to fulfil KYC obligations. This can be a burden on clients who operate across different countries.

Fragmented storage of customer data 

Customer information can also be stored in different systems, departments and even branches. How do we know banks’ KYC records are accurate, if customer data are stored and periodically refreshed in silos that do not seamlessly interact?

This can increase the chances of fraud. If you lack the infrastructure to form an accurate picture of your customer, to begin with, how will you know if you’re letting in a fraudster? Fraud not only causes reputational damage to banks, affecting revenue and growth prospects, but it can also have a direct impact on your customers’ lives. It can affect a victim’s credit rating or result in debt (due to stolen money).

Binary decision-making breeds inaccuracy

Banks currently using linear, rules-based KYC/AML automation systems may be doing more harm than good, as well as putting themselves and their customers at a higher risk of fraud. Due to the simple nature of the rules used, such systems have been found to generate up to 90 per cent false positives, with vulnerabilities that can be exploited through approaches like ‘smurfing’ (splitting a large transaction into many smaller ones that each escape notice).
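A toy sketch shows why a single fixed-threshold rule (the threshold value here is illustrative) is trivially evaded by smurfing:

```python
# Sketch of why a simple threshold rule is easy to evade by 'smurfing':
# splitting one large transfer into several smaller ones that each stay
# under the reporting threshold. The threshold value is illustrative.

THRESHOLD = 10_000  # e.g. a fixed reporting threshold

def flag_large(transactions):
    """Naive rule: flag any single transaction at or above the threshold."""
    return [t for t in transactions if t >= THRESHOLD]

one_big = [15_000]
smurfed = [4_000, 3_500, 4_200, 3_300]   # roughly the same total, split up

print(flag_large(one_big))   # -> [15000]  (caught)
print(flag_large(smurfed))   # -> []       (slips through unflagged)
```

Catching the smurfed pattern requires reasoning over aggregates, relationships and context across transactions, which is exactly where single-transaction rules fall down.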

Ongoing KYC as a replacement 

The latest EU Anti-Money Laundering Directive (AMLD5) has made it a requirement for KYC to be an ongoing activity. Because continuous monitoring raises the compliance cost per customer, some banks cannot accept certain types of customers, as performing ongoing KYC on them would be too expensive. This means closing the door on potential revenue.

Concurrently, ongoing KYC is likely to further inflate compliance costs. A study by Bain & Co. estimates that risk, governance and compliance costs account for 15-20% of the total “run the bank” cost base at major banks.

With compliance costs becoming so high, banks may struggle to perform day-to-day functions and customer-focussed investments may be deprioritised. But this pattern doesn’t need to be the one that plays out.

Making ongoing KYC the backbone of CX

As we move towards an era of open banking, KYC onboarding will be managed within a connected, intelligent decision automation ecosystem. Intelligent decision automation refers to machines performing “thinking tasks” that would otherwise require human intervention, while retaining the ability to explain the rationale underlying automated decisions (just as a human would). An example would be a machine deciding which customers meet multifaceted KYC requirements, then repeating this assessment as customer profiles and available data change. Such a system would be the “brain” into which all other systems plug, giving banks the unified view they want and customers the seamless experience they crave.

Intelligent automation platforms such as Rainbird sit within the artificial intelligence (AI) space and take a “human down”, not “data up”, approach to automation. That means they start with human knowledge and apply it to data (so that humans can always understand and explain what the machines are doing). All the logic, ambiguity and rationality behind a specialised human-made decision is combined with data to automate complex human decisions, at scale.

You will be able to dramatically speed up the process of customer onboarding, as intelligent automation can work across siloed divisions, while taking many factors into account and weighing them in a nuanced and efficient manner. False positives can also be reduced, therefore minimising the need for manual investigation.

Intelligent automation systems are also fully transparent, allowing banks and their customers to find out exactly how and why decisions have been made, and which data points were considered. This will instil greater trust in your customers, providing them with complete transparency into decisions or recommendations made about financial products. This can also reduce compliance costs.

The accuracy of customer data is key to ensuring efficient KYC. Where your KYC analysts may be prone to human error or customer data is outdated (due to information silos), intelligent automation software can make decisions despite uncertainty and missing data—it can even gather new information to update erroneous data. Its ability to work with uncertainty will also mitigate potential fraud, by identifying high-risk customers early in the onboarding cycle.
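Reasoning despite uncertainty and missing data is a real, well-established technique. One classic approach is combining certainty factors for independent pieces of supporting evidence (the MYCIN-style combination rule). The sketch below illustrates that general technique with made-up KYC signals; it is not a description of Rainbird's proprietary implementation.

```python
# Hedged sketch of one classic way to reason under uncertainty:
# combining certainty factors for independent pieces of supporting
# evidence (MYCIN-style). The signals and weights are illustrative.

def combine(cf_a, cf_b):
    """Combine two positive certainty factors (each in [0, 1])."""
    return cf_a + cf_b * (1 - cf_a)

# Two imperfect signals that a customer may be high-risk:
cf_sanctions_near_match = 0.6   # fuzzy name match against a sanctions list
cf_unusual_activity = 0.5       # activity inconsistent with stated profile

overall = combine(cf_sanctions_near_match, cf_unusual_activity)
print(round(overall, 2))  # -> 0.8
```

The useful property is that neither signal alone need be conclusive: two weak-to-moderate pieces of evidence reinforce each other without ever exceeding full certainty, and the contribution of each signal remains inspectable afterwards, which supports explainability.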

Download our free eBook to find out how intelligent automation can make ongoing KYC a success in your organisation.

Managing continual KYC at scale, using decision automation
How to stop siloed technologies from turning ongoing KYC into a never-ending nightmare.


New Rainbird intelligent automation: if you can click, you can use it


Ever worked with someone so good at their job, you wished you could clone them? The thought process goes something like:

“If only every fraud investigator were as good as Taylor. Every problem she encounters, she can solve quickly. Amid a sea of information, she has an eye for pinpointing what’s critical, and what’s utterly irrelevant. Her work is always 30%-40% better than her colleagues’. And she rarely makes mistakes.”

Taylor might seem powered by magic… but she’s not. She’s powered by an excellent mental model of her domain and maths that’s so complex, it comes across as intuition. What’s happening is that Taylor’s mental model, and the maths underlying it, is better than everybody else’s. It’s an abstract phenomenon with material implications. 

Rainbird is a software platform that gives you the tools to copy Taylor’s (or anyone else’s) abstract know-how and represent it visually (kind of like a mind map). That way, a computer can take that knowledge and use it to make decisions in the same domain as Taylor—except 100 times faster and 25% more accurately than her. That is what we call intelligent automation. So, you can have 2 Taylors. Or 100. Or 1000. It’s really up to you. All of them being instructed by Taylor herself.

For the new and improved Rainbird Studio, we set out to accomplish two objectives: 

  1. Make it easier for anyone (really, anyone) to replicate human expertise in machines (we refer to this as building a knowledge map)
  2. Ensure the process of doing so is pleasant


No code = instant pro

You no longer need to learn RBLang, the coding language Rainbird’s engine uses, in order to build visual representations of human knowledge. While you could always build a knowledge map in the visual modeller, being able to code gave some users an advantage. Not anymore. Users can now intuitively apply the majority of key features within the visual interface.

When you want to build representations of your knowledge in Rainbird, you do so using concepts (i.e. a kind of thing) and relationships (i.e. how that type of thing relates to other types of things). For example, “Person” might be a concept, “Destination” might be another concept and “Visits” might be a relationship between the two concepts.


Rainbird Studio Canvas


So, a “person” “visits” a “destination”—which would help Rainbird understand how people interact with destinations. Every time it then deals with a specific person (e.g. Mike), it knows that it is possible for them to visit destinations (given certain conditions) and can make decisions on that basis. You can then build many relationships and concepts (and more layers of logic and nuance between them all). We’ve moved those deeper layers of modelling right into the visual interface. 
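A knowledge map of this kind boils down to concept-relationship-concept triples that can be queried and reasoned over. The following is a deliberately minimal sketch of that idea; Rainbird models are built visually and support far richer logic (conditions, weights, inference), and the names here are illustrative.

```python
# Minimal sketch of a knowledge map as concept-relationship-concept
# triples, with a toy query. "Person" and "Destination" are concepts;
# "visits" is a relationship between them.

facts = set()

def add_fact(subject, relationship, obj):
    facts.add((subject, relationship, obj))

def query(subject, relationship):
    """Return every object related to `subject` by `relationship`."""
    return {o for (s, r, o) in facts if s == subject and r == relationship}

add_fact("Mike", "visits", "Portugal")
add_fact("Mike", "visits", "Spain")
add_fact("Sara", "visits", "Portugal")

print(sorted(query("Mike", "visits")))  # -> ['Portugal', 'Spain']
```

Because the same triples serve every question about people and destinations, one map answers many scenarios, which is the key difference from a one-off script.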


A lighter cognitive load

Cognitive overload is a thing of the past with our restructured layout. Our UI overhaul has divided major features into just three sections: 

  1. Edit: where you build models or representations of your knowledge (that look like mind maps) 
  2. Test: where you check that your model is making high-quality decisions, as expected
  3. Publish: where you unleash your model to make decisions in the real world

This way, the structure and hierarchy of Rainbird features make intuitive sense, rather than hitting you all at once. 


Neither a decision tree nor a random forest be

We are often asked, “Is Rainbird machine learning?” Sometimes, Rainbird is even confused with decision trees. It is neither—and it’s important to explain why. 

Machine learning refers to systems that can act to give a desired output without being explicitly programmed to do so. A random forest classifier is an example of a machine learning approach that can find patterns in data. Robotic process automation (RPA), by contrast, refers to programming a computer to take certain actions based on rules expressed as “if this, then do that” (typically structured as decision trees). For example, if “Mike doesn’t have COVID-19 symptoms”, then “he can visit Portugal”.
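Written out in code, that kind of if-then rule looks like the snippet below. It is exactly the linear, one-off logic RPA scripts encode; the condition and action are the illustrative example from the paragraph above.

```python
# The "if this, then do that" style of rule described above, written out.
# This is the linear, one-off logic typical of RPA scripts; the
# condition and action are illustrative.

def can_visit_portugal(person):
    # If the person has no COVID-19 symptoms, then they can visit.
    if not person["has_covid_symptoms"]:
        return True
    return False

mike = {"name": "Mike", "has_covid_symptoms": False}
print(can_visit_portugal(mike))  # -> True
```

Note that the rule answers exactly one question about exactly one scenario; adding job status, destination infection rate and personal risk would mean hand-writing a new branch for every combination.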

But these approaches have their challenges. With machine learning, it is difficult for humans to stay fully in control because algorithms find their own patterns in data—and humans don’t always know how or why machines find the patterns they do. So, in sensitive situations—such as credit decisions for minorities or women—we can’t explain why the machine decided one person should get a better credit rate than another. The process is “data-up” (i.e. machines find patterns in data, then impose these on humans), rather than “human-down” (i.e. humans create patterns for machines, which then impose these on data).

With RPA, the decision-making process is too linear and unidimensional. You can’t build holistic representations of knowledge—for example, a detailed map of concepts and relationships covering COVID-19 risk, job status and destination infection rate, such that anyone in any situation can consult the same knowledge map to find out if they can go on holiday. You can only programme one-off steps for individual scenarios. So, it’s very difficult to build decision trees that can juggle many variables at a massive scale. 



And that’s why we’ve focussed on two principles as we improve Rainbird: 

  1. Human-down structure: that is, we start with human knowledge and apply it to data (so that humans can always understand and explain what the machines are doing)
  2. Non-linear automation: that is, we focus on capturing and codifying knowledge that can apply to multiple scenarios (rather than on steps to be followed in a one-off, isolated situation)

That’s why anyone should be able to use Rainbird, and why our focus is on knowledge representation (not simply the automation of steps). If you’d like to see how the new Rainbird can support your automation agenda, just request a demo.

Introduction to the new Rainbird
This webinar runs through major upgrades to the Rainbird platform, as well as how they enhance the user experience and provide easier access to key features—without coding.


The future of fraud detection and prevention, in a post-COVID world

Fraud prevention

The global pandemic has brought the world, and businesses within it, to their knees. While communities band together to support one another, not everyone is working cohesively for the common good. 

Cybercrime has spiked since the novel coronavirus reached pandemic status earlier in 2020, with Checkphish reporting a rise from 400 attacks in February 2020 to almost a quarter of a million in May 2020.

Technology has revolutionised fraud detection departments, giving them the ability to handle the scale and sophistication of contemporary attacks, but many approaches to fraud detection still fall short of the high level of accuracy required.

But it is the organisation, not the technology, that will be held legally accountable for the decisions made. Choosing a solution that will deliver a high level of accountability and accuracy in fraud judgements is key. Let’s explore how current approaches to fraud risk management measure up.

The conventional approach

It has become common to adopt basic rules-based approaches to fraud risk management. Basic rules-based systems apply a logical program of sequential parameters. They use these parameters to identify instances of fraud and subsequently perform automated actions.

While conventional in the fight against fraud, basic rules come with some frustrating pitfalls (although these are circumvented by intelligent rules-based systems, covered below). Basic rules-based systems can be expensive and too rigid to scale instantaneously (when most needed).
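A basic rules-based screen of the kind described above amounts to a fixed, sequential list of parameters and thresholds. The sketch below is hypothetical (the thresholds, country codes and field names are invented), but it shows why such systems are rigid: every new fraud pattern needs a new hand-written rule.

```python
# A minimal sketch of a basic rules-based fraud screen: sequential
# parameters checked in order, any hit flags the transaction for review.

def flag_transaction(tx: dict) -> bool:
    if tx["amount"] > 10_000:                 # hypothetical amount threshold
        return True
    if tx["country"] in {"XX", "YY"}:         # hypothetical high-risk codes
        return True
    if tx["attempts_last_hour"] > 5:          # hypothetical velocity rule
        return True
    return False
```

A fraudster who keeps every field just under each threshold sails through, which is exactly the rigidity the surge in advanced fraud cases exposes.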

The recent surge in advanced fraud cases means basic rules-based systems will struggle to keep pace with the latest techniques used by opportunistic fraudsters. More sophisticated fraud solutions are being adopted, as illustrated in a research piece by the Association of Certified Fraud Examiners & SAS, which projected a 200% increase in the use of artificial intelligence (AI) and machine learning to detect fraud.

The adaptive approach 

Machine learning can detect sophisticated fraudulent activities across large volumes of data, ‘learning’ to adapt to new, unforeseen threats, while significantly reducing false positives.

Machine learning programs can be classified into two approaches: supervised and unsupervised.

In supervised machine learning, you present the algorithm with both fraudulent and non-fraudulent records. This ‘labelled’ data helps it build a data model accurate enough to predict whether fraud is present when assessing new, unknown cases.

However, supervised learning is unable to scale on-demand to evolving threats. It requires clean data, substantial computation time for training, and a team of data scientists to build, maintain and interpret.  
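The supervised idea can be illustrated without any ML library: below, a trivial nearest-centroid classifier stands in for a production model, learning from labelled (fraud/genuine) records and then classifying an unseen one. The features and labels are entirely invented for the sketch.

```python
# A toy supervised learner: compute the mean feature vector (centroid) per
# label from labelled records, then classify new records by nearest centroid.

def train(records):
    """records: list of (features, label) pairs, e.g. label 'fraud'/'genuine'."""
    sums, counts = {}, {}
    for features, label in records:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, features):
    # The label whose centroid is closest (squared distance) wins.
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda lab: dist(centroids[lab]))
```

The need for clean, labelled training data is visible even here: `train` can only learn patterns that the labels already describe, which is why supervised systems struggle with genuinely novel threats.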

Unsupervised machine learning, on the other hand, is useful for spotting fraudulent patterns where tagged data is not available. This model works off an unlabelled dataset, with no known output values mapped to the inputs. It creates a function that describes the structure of the data and flags anything that doesn’t fit as an anomaly. This ultimately saves time and allows the algorithm to mine at scale without the need for human input.
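In miniature, that anomaly-flagging idea looks like this: no labels are supplied, and the data's own structure (here, a simple z-score against the mean, a deliberately crude stand-in for real anomaly-detection models) determines what gets flagged. The threshold of 3 standard deviations is an arbitrary illustrative choice.

```python
# A toy unsupervised anomaly detector: values far from the dataset's own
# mean (measured in standard deviations) are flagged, with no labels needed.
from statistics import mean, pstdev

def anomalies(amounts, threshold=3.0):
    mu, sigma = mean(amounts), pstdev(amounts)
    return [a for a in amounts if sigma and abs(a - mu) / sigma > threshold]
```

Running this over fifty routine £100 transactions and one £10,000 outlier flags only the outlier, without anyone ever telling the function what fraud looks like.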

A major drawback to unsupervised machine learning is its inability to accurately explain its results, due to the input data being unlabelled and unknown. This makes it difficult to adhere to compliance measures or explain why a false positive occurred.

The intelligent approach

Intelligent automation (IA) copies (via a visual model) the knowledge of your fraud experts and applies this knowledge to thousands of cases, while instantaneously providing an audit trail for every decision. This includes its certainty level and every factor that went into making each decision, as well as data sources that were accessed. It also includes all considered variables and the quantitative impact of each, dramatically reducing false positives by omitting bias or inaccuracies in initial data collection. 

In contrast to simple rules-based fraud identification tools (like decision trees), the modelling process in intelligent automation allows its decision-making to reflect the ‘greyness’ inherent in fraud, resulting in more reliable outcomes.

It can also handle uncertainties in the dataset. When data is missing or uncertain, intelligent automation will try to find other data to help. If it can’t find supporting data, it is still able to make an inference, which will be presented with reduced certainty. This means that, unlike a fixed rules engine, intelligent automation doesn’t hit a dead end when data isn’t available, or end users are not sure of their answers.
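The behaviour described above can be sketched in a few lines. This is not Rainbird's implementation, just a hypothetical illustration of the pattern: a rule that records an audit trail for every step and, when a fact is missing, still infers an answer at reduced certainty instead of hitting a dead end. All rules, fields and certainty values are invented.

```python
# A hypothetical certainty-factor rule: missing data reduces confidence
# rather than blocking the decision, and every step is logged for audit.

def assess(tx: dict) -> dict:
    certainty, trail = 1.0, []
    device_known = tx.get("device_known")            # this fact may be absent
    if device_known is None:
        certainty *= 0.7                             # infer anyway, less certain
        trail.append("device_known missing: certainty reduced to 0.7")
        device_known = False
    risky = tx["amount"] > 5000 and not device_known
    trail.append(f"amount={tx['amount']}, device_known={device_known} -> risky={risky}")
    return {"fraud_suspected": risky, "certainty": round(certainty, 2), "audit": trail}
```

A fixed rules engine would simply fail on the missing `device_known` field; here the decision still completes, and the audit trail records both the conclusion and why its certainty was reduced.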

Not only does this system impressively manage scale, deliver a high level of transparency and retain a human-down focus towards data input, the ROI potential of this approach is also a no-brainer. Rainbird has estimated that incorporating IA decision-making into fraud prevention could save UK businesses £7 billion over five years.

What will fraud detection look like post-COVID?

Ultimately, fraud detection and prevention systems that can deliver a high level of accuracy will: 

  1. Reason like human experts do,
  2. Scale to deal with surges in threats,
  3. Provide evidence for each decision made, and
  4. Be able to handle uncertainty.

When all these elements are present, you will possess a fraud risk management system that will greatly reduce user friction (caused by false positives) and keep you compliant (thanks to complete transparency). 

Unfortunately, COVID-19 won’t be the last widespread crisis we will have to deal with. With climate change and the risk of future pandemics looming, choosing the right approach is not a matter of preference.

To learn more about how intelligent automation fights fraud, watch our on-demand webinar.

Free webinar: How to keep up with a changing fraud landscape
Increase fraud detection rates and reduce false positives. This webinar shows how to build an automated fraud system with human intelligence at its core.
