Intelligent Automation

Rethinking KYC/AML as part of digital transformation


The climate of tightening regulations, the increasing difficulty of preventing fraud, and the digitisation of operations amid COVID mean it’s time firms rethought KYC/AML as part of their wider digital transformation strategies. This article explains how they can, using intelligent automation.

____

For firms offering financial services, legacy KYC solutions are simply no longer good enough to meet today’s challenges. 

Fraudsters and money launderers grow ever more cunning, and firms’ KYC processes are increasingly struggling to keep pace with financial crime.

All the while, regulators are asking firms to do more. Anti-money laundering (AML) regulations are constantly being strengthened and tweaked. And the fines keep coming. The first half of 2020 alone saw global regulators hand out 60% more in AML fines than in the whole of 2019, according to a survey by Duff & Phelps, published in the FT. The same survey found that customer due diligence failures were the most common cause of sanctions.

Financial service firms clearly can’t afford to cut corners with KYC and AML. But these days, compliance is far from cheap. The 2008 crisis saw financial service firms increase their compliance headcount. American bank Citigroup, for example, upped its compliance staff from 4% of its total workforce in 2008 to 15% in 2018.

Nonetheless, as the global business world heads further into the economic unknown, this is a moment that calls for operational efficiency. 

It also calls for investing in compliance in new ways. 

One new way is to invest in AI and automation. And one area of AML compliance that could be particularly well set to benefit is KYC. 

 

The digitisation of compliance

The current downturn has no precedent. We’ve started 2021 with many financial services teams dispersed around the UK, working from their homes. Like every corner of business, the work of compliance is having to adapt. 

But there’s another reason this downturn has no precedent: never before have financial service providers been so digitally capable, underpinned by a market for digital compliance technology that has advanced significantly in the last decade. There are now far more possible solutions to compliance problems than there were in 2008.

Undoubtedly, in their efforts to make KYC faster, simpler and cheaper, many financial service providers will turn to machine learning. They’ll use a “data-up” approach: their machines will find patterns in data, then use those patterns to make high volumes of decisions about customers.

However, this can lead to machines making biased decisions; to algorithms that overfit to past patterns and are blind to contemporary issues; and to staff lacking oversight of the total client lifecycle. It can also leave firms unable to explain automated decisions.

Using machine learning would mean firms relying on closed databases that don’t easily allow for ongoing integrations with other platforms, a common problem among financial service providers, according to a Bank of England survey.

It would also mean relying on data that can quickly fall out of step with current regulations, such as the EU’s AML directives, which are regularly updated. This matters because, since AMLD5 came into effect in 2018, firms have been required to monitor their customers continuously, not just at the onboarding stage. That puts pressure on organisations to ensure the data on which their algorithms are based will retain its utility when regulations change.

To truly achieve operational efficiency in compliance, a new paradigm for KYC/AML processes is needed: one that situates KYC/AML within wider digital transformation plans, rather than in new, bolt-on machine learning approaches.

 

Intelligent automation can support digital transformation (in ways that machine learning can’t)

“[D]igital transformation should be guided by the broader business strategy.”

— Behnam Tabrizi et al., Harvard Business Review

 

For financial service providers, one of the top business strategy objectives when embarking on, or continuing with, digital transformation is to improve customer experience. 

That means any automation technology used as part of that transformation must be sustainable, in the sense that it can mitigate unnecessary risks (whether reputational, financial or otherwise) and is future-proofed as far as possible. It can’t be at risk of being made redundant by yet-to-be-defined regulatory updates, for instance those relating to explainability.

The oft-cited reputational debacle that engulfed Apple and Goldman Sachs, after it emerged that their credit card product’s algorithm discriminated against women, exemplifies the approach to avoid. Intelligent automation could have removed the possibility of bias, dampening the reputational and financial risks.

Intelligent automation could also have gone further, offering both firms the means of explaining to customers how decisions had been reached. With the New York Department of Financial Services investigating the incident, the perks of intelligent automation would have helped with regulators, too.

If and when regulatory changes do come along, that needn’t mean a whole new algorithm. Updates to the logic can be made and deployed in a matter of days, hours, or even minutes. Why? Because intelligent automation is based on human logic, and that human logic can be easily represented (and codified) in a knowledge map.
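The contrast with retraining a model can be sketched in a few lines. The following is purely illustrative: Rainbird models are built visually, not in code, and the rule names, thresholds and scoring below are invented. The point is that when a regulation changes, you edit a human-readable rule, and every automated decision carries its own rationale.

```python
# Illustrative sketch of human-authored KYC logic (not Rainbird's actual
# representation). Logic lives in editable rules, not in a retrained model.

RULES = [
    # (rule name, predicate over a customer record, risk points) - all invented
    ("high-risk jurisdiction", lambda c: c["country"] in {"XX", "YY"}, 40),
    ("politically exposed person", lambda c: c["is_pep"], 35),
    ("large cash deposits", lambda c: c["monthly_cash_gbp"] > 10_000, 25),
]

def assess(customer):
    """Score a customer and keep a human-readable rationale for the decision."""
    fired = [(name, points) for name, pred, points in RULES if pred(customer)]
    score = sum(points for _, points in fired)
    decision = "refer to analyst" if score >= 50 else "approve"
    return {"decision": decision, "score": score,
            "rationale": [name for name, _ in fired]}

customer = {"country": "XX", "is_pep": True, "monthly_cash_gbp": 2_000}
result = assess(customer)
# Explainable by construction: every rule that fired is named in the rationale.
print(result["decision"], result["rationale"])
```

If a regulator tightened the cash-deposit threshold, the fix would be a one-line edit to the relevant rule, rather than sourcing new training data and retraining a model.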

Over and above machine learning, intelligent automation can more flexibly handle the issue of integrating with legacy platforms. No doubt many firms would love the opportunity to completely redesign their tech stack, but that is rarely straightforward in KYC and AML, much as it’s not straightforward to replace a dam while the water is still flowing.

For instance, it may be that governance requirements prevent firms from removing certain integrations in fraud prevention. Crucially, intelligent automation can sit at the core of any KYC process, without replacing integrations that are still critical to AML. 

***

Intelligent automation not only poses far fewer long-term risks, then. It can even make up for poorly performing, clunky legacy systems without replacing them. It all starts with a rethink.


In insurance, AI and intelligent automation can improve customer loyalty


As the 2010s drew to a close, the UK’s insurance sector faced a problem: customer satisfaction was in decline. In January 2020, the sector recorded its lowest UKCSI customer satisfaction score since July 2015, down 1.4 per cent on the previous year.

Then came COVID-19. Events were cancelled. Travel plans were ditched. And businesses were interrupted. Practically overnight, insurers faced a stupefying volume of claims. 

But tackling the claims spike led to even more displeased customers. In the weeks that followed, insurers’ customer satisfaction fell further, dropping another 1 per cent.

Customer satisfaction is not just critical for insurers; in any business, getting customer satisfaction right is ultimately the pathway to customer loyalty and, by extension, increased revenue. “Loyalty leaders grow revenues roughly 2.5 times as fast as their industry peers,” writes Rob Markey in Harvard Business Review.

The insurance sector’s customers would seem decidedly harder than most to gain loyalty from. According to Accenture, as many as 41 per cent of customers are likely to switch insurer after making a claim. 

But the sector needs to look inwards before it draws any conclusions. Try as they might to digitise their product offerings, insurance firms are lagging behind other sectors like finance and retail banking. Though COVID has brought truly unprecedented activity, a new, sudden spike in claims should not have necessarily meant a new, sudden dent in customer satisfaction. 

So, why did it? 

From the customer’s perspective, insurance products often amount to a handful of clumsy, rigidly sized policies that are supposed to fit all; when it comes to actually making a claim, the process is correspondingly clumsy and rigid.

Traditional insurance firms’ products lack personalisation and transparency, and the claims process lacks speed. And, all the while, insurtech startups continue to show the wider sector what’s possible, snapping up customers who expect more digitally-ready insurance services.

To create the kinds of customer loyalty they need, agents and brokers need to bring about proper digital transformations at their firms. This means using AI to fine-tune their products to deliver more value to their customers. 

Of course, AI is already being used for things like tackling fraud and other back-office processes. But further adoption of AI into customer experience as part of digital transformation would give insurance firms a number of ways to make tangible improvements to their products. This would also up their chances of winning increased customer loyalty. 

Transform digitally; see results

Those insurers that are making use of AI as part of digital transformations are already seeing the benefits in their customer satisfaction ratings. In the UK, few insurers are getting this right quite like LV=, which kickstarted its digital transformation in 2017 by tailoring its services to a complex map of possible customer journeys. 

On top of this, LV= has also used machine learning to boost its ability to settle grey-area claims, meaning it can focus resources on improving the customer experience. The result: LV= has remained the top insurer in the UKCSI’s customer satisfaction rankings for two years running.

But integrating AI into services could take businesses in the sector even further than this. With AI-powered tech, handling unexpected claims spikes can be made relatively easy. The real difference is made when the way in which claims are handled feels personalised—as part of a more human-centric customer experience. 

“There was a time when insurance customers were satisfied with a timely response, a fair price and quality service,” Phil Britt writes (paraphrasing Clark Wooten). Undeniably, there’s a gap at the top of the insurance market for a more personalised customer experience. LV=’s table-topping UKCSI scores show what is already possible; imagine what further improvements to customer satisfaction it could make with a mobile app as user-friendly as those of insurtech startups like Cuvva, Zego and INSHUR.

Build deep personalisation in insurance using intelligent automation

The truth is, AI shouldn’t just be some ‘nice to have’, bolted on and deployed when necessary; customers are drawn to immediacy and personalisation. Think of US insurtech startup Lemonade, which in 2016 set a new world record for the fastest time to pay a claim: three seconds. Perhaps unsurprisingly, Lemonade tops the Clearsurance rankings for customer satisfaction.

AI can help solve the insurance sector’s customer satisfaction problem by tailoring the process to what end users actually want out of insurance. Like more transparent pricing, communication through preferred channels, and dynamic product offers that resonate with their plans. Intelligent automation can be particularly effective in helping agents know things like what to upsell (and when) based on data collected through ongoing interactions with customers. Its explainability can also ensure that insurers can keep information accessible to auditors. 

***

Embracing AI and automation could mean claims volume fluctuations (like the one we saw in March 2020) don’t also mean customer satisfaction fluctuations. Brokers and agents can instead focus their time on the customers most at risk of jumping ship and on building long-term customer loyalty.


COVID recovery: 5 roles intelligent automation can play


“COVID has accelerated a phenomenon that was happening anyway.” 

—James Duez, CEO Rainbird

 

In the economic maelstrom of COVID, many companies have increased the urgency with which they consider adopting AI and automation. Their CTOs’ and CIOs’ once long-term boardroom agenda items are now integral components of business recovery strategies.

And urgent it is. In autumn, as Europe put swathes of land under new lockdowns, businesses across the insurance sector and financial and professional services abandoned return-to-office plans. Now in winter, the promising news of the COVID vaccines has given way to the reality of awaiting their roll-out. “Companies are realising they need to move forward in coming up with answers to the questions that COVID has brought,” says our CEO, James Duez. 

But, when we entered 2020, some companies could be forgiven for not rushing their AI and automation adoption. High-profile failures of AI transparency, such as the purported discriminatory bias of Apple’s credit card algorithm, have shown how AI debacles can not only cause serious reputational harm but also attract regulatory investigation.

As businesses head into the potentially powerful economic currents of a new year, few can afford a debacle like Apple’s. 

Fortunately, it’s easily avoidable.

Why? For Apple, the problem was not just that its algorithm appeared to show bias. It was also that the technique of AI used (machine learning) meant Apple couldn’t explain how the AI’s biased judgements had been reached. “The problem is, machine learning is not easily interpretable,” explains James. “Firstly, you can’t know for sure if it’s right. Secondly, it can’t give you a reason; it’s always a mathematical judgement.”

Any truly effective digital strategy must take into account compliance and transparency. Which is why so many companies are turning to intelligent automation (IA). IA is based on a human-down structure: it starts with human knowledge and applies it to data, so that humans can always understand and explain what the machines are doing.

For businesses, IA can play many roles in COVID recovery, from the financial to the operational to business growth. Here are five.

1. Unlock thinking-time for employees

Just like the cattle-drawn ploughs that turned the soils of Ancient Egypt, IA is designed to reduce heavy lifting. Building visual models of the thought processes of highly effective employees means businesses can automate decision making, helping to make judgement-heavy work (such as assessing suspect payments or valuing stocks) 100 times faster.

This means businesses can make the workloads of those staff who are best placed to help with new solutions far more manageable, and let them think ahead. A London Business School study found that reducing employee time spent on repetitive tasks makes business innovation more likely. “If you can liberate people from time-heavy work, they can do the things that they are good at,” says James. “You can help people innovate.” 

2. Protect cash flow by focusing on operational efficiency

Businesses that focus on operational efficiency during a recession have the highest chance of thriving after it, argued Ranjay Gulati et al. following their 2010 HBR study: “Companies that master the delicate balance between cutting costs to survive today and investing to grow tomorrow do well after a recession.”

Ten years on, things are different. “Now, the IA sector is more mature,” says James. “Today, it’s the other way round. You invest in order to cut costs.” This is critical. While we’ve seen many organisations rush to make deep cuts to protect their bottom line, investment in IA can improve operational efficiency while retaining (or indeed increasing) work output.

3. Support employee mental wellbeing to boost productivity and make work quality reliable

A snap poll held in April 2020 by the Institute for Employment Studies found that 48 per cent of home workers were working longer hours and irregular patterns. Studies in The Lancet have already shown that COVID has had a significant impact on mental health.

In times of crisis, you need your employees’ judgement at its best. Paradoxically, crises catalyse poor decision making. Research indexed by the NCBI shows that poor mental health can markedly impair a business’ productivity and profits, with work-related stress a major contributor to poor decision making.

IA can take the pressure off employees to perform on time-heavy tasks. This can help employers more easily reduce employee stress and therefore be better able to meet their duty of care to employee wellbeing.

4. Simplify workforce office usage to minimise risk of coronavirus transmission

IA can take off HR and office managers’ hands the pressure of making high-stakes judgements about simple questions, like whether an employee can go to the office. For instance, in June, we developed a risk assessment tool for Norfolk and Norwich University Hospital that manages the risk of virus exposure for its vulnerable front-line workforce through digital, one-to-one automated reports. The tool took the manual effort out of workforce management for the NHS hospital at a truly critical time.

If their employees do need to use the office, businesses can use IA to turn the burden of risk into an accurate, user-friendly management process.

5. Compete and scale in the new economy

As much of the world heads into recession, law firms anticipate that their clients will see a rise in litigation and disputes, and auditors expect an increase in restructuring and insolvencies. The bursts of work output required of professional services firms could be intense.

IA can give businesses the tools to create new revenue streams to help manage high volumes of work while staying cost-effective for clients.  

***

“This isn’t the pandemic, it’s a pandemic. People now know they can work in different ways.” —James Duez

With predictions like PwC’s that AI will add $16trn to the world economy by 2030, there is no question that, in their path to COVID recovery, businesses will continue to fast track the adoption of AI and automation. For those that do, the real question will be whether the AI techniques on which they base their recoveries are right for the digital transformations they need.


Case study: assessing COVID-19 risk for thousands within minutes


Challenge: getting a team of fewer than 20 to have the impact of thousands

Like many other NHS trusts, the occupational health team at the Norfolk and Norwich University Hospitals (NNUH) trust was significantly impacted by the novel coronavirus pandemic. 

Hilary Winch, Head of Workplace Health, Safety and Wellbeing at NNUH, explains:

“I recall very distinctly, Boris [Johnson] announcing the shielding guidance late one evening and I knew full well that our phone would be ringing the next morning… In those initial few weeks, we spoke to over 3,500 staff who were concerned about their own health or the health of those they lived with. It was just crazy… It was quite horrific actually.”

Hilary’s team is responsible for the health and safety of staff members within her NHS trust. With the advent of the coronavirus, it faced its biggest ever workload and the pressure of needing to speak to as many people as quickly as possible. Every NHS trust needed to deliver diagnostic and therapeutic services to the public; to do this effectively, they needed to ensure their stretched workforces weren’t further decimated by COVID-19 infections.

Equally important was meeting the trust’s duty of care to employees’ protection and wellbeing. NNUH needed to ensure staff wouldn’t be placed in needlessly life-threatening situations.

Approach: create a digital clone of your best specialists’ know-how

Dr Robert Hardman, Consultant in Occupational Medicine at Workplace Health, Safety and Wellbeing, is a specialist who works on Hilary’s team. While the team was having manual conversations with staff members more vulnerable to COVID, Dr Hardman began talking to Rainbird about how automation and AI might help. 

In a typical occupational health assessment, Dr Hardman and his colleagues go through (broadly speaking) a few steps: 

  1. Conduct interviews with staff members (gather information)
  2. Assess each staff member’s risk, based on their personal circumstances, government policy and clinical guidance (make a judgement)
  3. Document the information gathered and judgment made, especially for future reference or subsequent assessments (record data)

Rainbird’s intelligent decision automation platform could help across all three steps, given: 

  1. It can interact with people using a conversational, chat-style interface to gather information
  2. It can hold a model of an expert’s decision-making process and connect that model to data so it can automate decisions
  3. It can automate the process of deciding which pieces of information to record, as well as where and how
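As a rough illustration of those three steps (gather, judge, record), here is a hypothetical sketch in Python. The questions, scoring thresholds and function names are invented for illustration and are not Rainbird’s actual model or API; in practice, step 1 runs through a conversational chat interface rather than a callable.

```python
# Hypothetical sketch of the three-step assessment flow: gather information,
# make a judgement, record the outcome. All names and rules are illustrative.

QUESTIONS = {
    "age": "How old are you?",
    "immunosuppressed": "Do you have an immunosuppressed condition? (y/n)",
    "patient_facing": "Is your role patient-facing? (y/n)",
}

def gather(answer_source):
    """Step 1: collect answers from any callable (a chat UI in practice)."""
    return {field: answer_source(prompt) for field, prompt in QUESTIONS.items()}

def judge(answers):
    """Step 2: apply codified (here, invented) clinical guidance to the answers."""
    risk = 0
    if int(answers["age"]) >= 60:
        risk += 2
    if answers["immunosuppressed"] == "y":
        risk += 3
    if answers["patient_facing"] == "y":
        risk += 1
    return "high" if risk >= 3 else "low"

def record(answers, rating, store):
    """Step 3: file the result for future reference and reassessment."""
    store.append({"answers": answers, "rating": rating})

records = []
scripted = iter(["45", "y", "n"])          # stand-in for a live chat session
answers = gather(lambda prompt: next(scripted))
record(answers, judge(answers), records)
```

Because all three steps are automated end to end, the same flow can run for one staff member or thousands concurrently, which is what makes the scale described below possible.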

According to Hilary, the interviews alone were time-consuming: 

“You would speak with a staff member probably for a good 20 minutes to half an hour, to find out their own personal health information, assess this against the knowledge we were gaining and talk through their anxieties.”

So, we built a Rainbird model of Dr Hardman’s expertise in conducting COVID-19 risk assessments, and incorporated government policy and clinical guidelines. 

Instead of the occupational health team having to manually assess every employee, staff could complete an online chat with Rainbird’s tool—which would ask for all relevant information, make a risk level judgment and send key information to the appropriate records.

Results: superhuman speed and accuracy, with a real human touch

Increased testing speed and accuracy

In the words of Hilary, “We can get all 9,000 staff, in theory, doing their risk assessments at the same time. We couldn’t do all 9,000 staff manually at the same time.”

Because Rainbird’s tool relies on computing power, you can run as many concurrent risk assessments as needed. Rainbird’s tool takes the expertise of a specialist and applies it consistently, every single time. 

It also accounts for unique risks to Black, Asian and minority ethnic (BAME) people and other highly vulnerable groups, and did so early on (when many manual risk assessments did not).

Privacy protected

When undergoing risk assessments, staff often have to divulge sensitive health information. For this reason, it’s uncomfortable, and borderline unethical, to have line managers conduct these assessments.

In Hilary’s words, “Somebody might have kept some of that health information very, very private… You know, if they’ve got an immunosuppressed condition… they may not want to tell their manager.”

For this reason, Rainbird’s tool doesn’t store staff members’ personal information. And when a consultation is complete, it automatically sends out two reports: one for the occupational health team (containing detailed health information and the rationale underlying the employee’s risk rating) and one for managers and HR (containing only the risk rating and no health information).
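The two-report split can be illustrated with a short sketch. The field names are hypothetical, not the tool’s actual schema; the point is that the manager’s report is built without ever copying the health details, so inappropriate disclosure is impossible by construction.

```python
# Illustrative sketch of the two-report split (invented field names).

def build_reports(assessment):
    occ_health_report = {            # full detail, occupational health only
        "staff_id": assessment["staff_id"],
        "risk_rating": assessment["risk_rating"],
        "health_details": assessment["health_details"],
        "rationale": assessment["rationale"],
    }
    manager_report = {               # risk rating only, no health information
        "staff_id": assessment["staff_id"],
        "risk_rating": assessment["risk_rating"],
    }
    return occ_health_report, manager_report

assessment = {
    "staff_id": "NNUH-0042",
    "risk_rating": "high",
    "health_details": ["immunosuppressed condition"],
    "rationale": ["immunosuppressed condition", "patient-facing role"],
}
occ, mgr = build_reports(assessment)
assert "health_details" not in mgr   # the manager never sees health data
```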

This way, managers aren’t exposed to inappropriate information and staff don’t have their privacy violated. 

Increased capacity for human connection

Machines can perform logical analyses but can’t deliver emotional connection—the COVID-19 pandemic has required Hilary’s team to excel at both. 

Because Rainbird’s tool is doing the heavy lifting of performing COVID risk assessments, Hilary and her team are freed up to have one-on-one conversations, where a human touch is needed: 

“What it meant for us was, when we launched, we were able to focus on having those conversations—particularly with those people who were shielding… we had a one-on-one conversation with every single one of them to talk through their anxieties and explore risk mitigations methods to get them back to a safe place of work, once shielding ended.”

Reduced bias and inconsistency

In situations with heightened emotions, increased complexity and overwhelming volumes, human error abounds. This is true, no matter the organisation or individual. 

Because Rainbird’s tool applies its model of COVID-19 risk without prejudice, emotions or fatigue, it doesn’t give inconsistent or contradictory outcomes. The result is a much higher level of accuracy in risk assessments and less exposure to risk for employees.

Regular reassessments that minimise risk

Since Rainbird’s tool helps Hilary’s team complete super-accurate COVID-19 risk assessments, at such a high volume and with little administrative burden, it is now possible to run risk assessments more regularly. 

This is necessary for dealing with highly dynamic situations like the COVID-19 pandemic—new information about the virus (e.g. if certain individuals are at greater risk of death from COVID than others) is constantly emerging. As a result, government guidance, clinical measures and employee risk profiles are constantly changing. 

People’s health or personal circumstances may also change—an employee may develop a separate illness that increases their vulnerability to COVID-19 or they may become pregnant and have concerns about being in the workplace. A one-off risk assessment provides a “one-off” outcome for that time. If things change, then a risk assessment has to be reviewed and that is no different for this global pandemic. If only undertaken once, a risk assessment could provide a dangerously inaccurate picture.


KYC is the new ​backbone of customer experience in retail banking


In recent years, several high-profile fraud cases have stunned regulators and eroded the public’s trust in banking institutions. Take the case of Danske Bank, the large Scandinavian bank at the centre of an eight-year international money laundering scheme in which €200 billion in suspicious payments flowed, unchecked, through the non-resident portfolio of the group’s Estonian branch. This raises the question of why know your customer (KYC) processes failed to prevent such an oversight.

KYC is a precautionary measure taken by regulated firms to prevent fraudulent activities and has become an integral component of banks’ fraud prevention efforts. It is mandated and essential for confirming the identities of customers during onboarding and throughout their ‘customer lifetime’, as well as verifying their suitability and any financial crime risks they might pose.

A study commissioned by the European Parliament claims that fraud has cost the EU up to €990 billion a year in losses to gross domestic product. In response, stringent regulations to tackle tax evasion and corruption have meant that KYC’s remit has been extended to include continuous monitoring, fraud management, sanctions management and anti-money laundering.

Complying with KYC obligations has always been the responsibility of banks, where KYC capabilities have been developed reactively to meet regulatory mandates. This approach has led to cumbersome processes, fragmented not only across divisional silos but also functional silos within divisions. This is not only an unsustainable model; it puts the burden on the most important and fragile elements: your customers and the customer experience (CX).

How poor KYC can directly impact the customer experience

KYC onboarding is the crucial first step a bank goes through when acquiring lifetime clients. It consists of multiple touchpoints involving various departments, such as operations, legal and compliance. A poor KYC experience has been found to directly affect the customer experience, with adverse effects on your bank’s bottom line. The following are some ways CX is directly impacted by inept KYC.

Repetitive and unnecessary requests for information

KYC checks can be invasive during customer onboarding, with banks asking customers for a multitude of documents and personal information (mandated by AML regulations) to build an accurate customer profile. According to a study conducted by Forrester Consulting, customers were contacted on average 10 times during the onboarding process and asked to submit between five and 100 documents.

Not only does a poor onboarding experience frustrate your customers, but many people are concerned about how their data is being used, collected and stored. In a Cisco-led survey, it was found that 43% of consumers do not believe they can adequately protect their data.

To eliminate such a problematic hurdle during KYC onboarding, challenger FinTech banks (such as Revolut and Starling) are surpassing traditional retail banks by reducing the number of touchpoints during the onboarding process.

Opening an account. Source: https://builtformars.co.uk/banks/opening/

 

Rising costs of manpower

Not surprisingly, McKinsey stated that “in the United States, anti–money laundering (AML) compliance staff have increased up to tenfold at major banks over the past five years or so.”

Banks are adding staff to vulnerable areas, which in turn causes disjointed KYC efforts across departments within a single institution. This can mean customers are periodically contacted for the same information to fulfil KYC requirements multiple times, leading to duplicated effort among banking staff and a recurring nightmare for customers.

The kinds of information deemed acceptable can also vary by country and bank, with some institutions requiring a face-to-face meeting to fulfil KYC obligations. This can be a burden on clients who operate across different countries.

Fragmented storage of customer data 

Customer information can also be stored in different systems, departments and even branches. How do we know banks’ KYC records are accurate, if customer data are stored and periodically refreshed in silos that do not seamlessly interact?

This can increase the chances of fraud. If you lack the infrastructure to form an accurate picture of your customer, to begin with, how will you know if you’re letting in a fraudster? Fraud not only causes reputational damage to banks, affecting revenue and growth prospects, but it can also have a direct impact on your customers’ lives. It can affect a victim’s credit rating or result in debt (due to stolen money).

Binary decision-making breeds inaccuracy

Banks currently using linear, rules-based KYC/AML automation systems may be doing more harm than good, putting themselves and their customers at higher risk of fraud. Because the rules they apply are so simple, such systems have been found to generate up to 90 percent false positives, and they carry vulnerabilities that can be exploited through techniques like 'smurfing' (structuring one large sum into many smaller transactions that each slip under a rule's threshold).
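To make the smurfing weakness concrete, here is a minimal sketch (the threshold and amounts are invented for illustration) of how a single fixed-threshold rule, the kind of linear logic described above, is evaded by structuring one large sum into several smaller deposits:

```python
# Hypothetical reporting threshold, for illustration only.
THRESHOLD = 10_000

def flags_deposit(amount: float) -> bool:
    """Basic linear rule: alert only when a single deposit exceeds the threshold."""
    return amount > THRESHOLD

single = [12_000]                # one large deposit
smurfed = [4_000, 4_000, 4_000]  # same total, structured below the threshold

print(any(flags_deposit(a) for a in single))   # True: caught
print(any(flags_deposit(a) for a in smurfed))  # False: evaded
```

The rule fires on the lone deposit but misses the structured ones entirely, which is why nuanced, multi-factor reasoning matters.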

Ongoing KYC as a replacement 

The latest EU Anti-Money Laundering Directive (AMLD5) makes KYC an ongoing requirement rather than a one-off check. As a result, some banks cannot accept certain types of customer, because performing KYC on them continually would be too expensive. That means closing the door on potential revenue.

Concurrently, ongoing KYC is likely to further inflate compliance costs. A study by Bain & Co. estimates that risk, governance and compliance costs account for 15-20% of the total “run the bank” cost base, among major banks.

With compliance costs becoming so high, banks may struggle to perform day-to-day functions and customer-focussed investments may be deprioritised. But this pattern doesn’t need to be the one that plays out.

Making ongoing KYC the backbone of CX

As we move towards an era of open banking, KYC onboarding will be managed within a connected, intelligent decision automation ecosystem. Intelligent decision automation refers to machines performing “thinking tasks”, which would otherwise require human intervention, while retaining the ability to explain the rationale underlying automated decisions (just as a human would). An example would be a machine deciding which customers meet multifaceted KYC requirements, then repeating this assessment as there are changes in customer profiles and available data. Such a system would be the “brain” into which all other systems plug, so you can get that unified view banks want and provide the seamless experience customers crave.
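As a rough illustration of the idea (all field names and cut-offs here are hypothetical, not Rainbird's API), an ongoing-KYC decision can be sketched as a function that is simply re-run whenever the customer's profile or the available data changes:

```python
# All field names and cut-offs are invented for this sketch.
def assess_kyc(profile: dict) -> str:
    """Combine several KYC factors into one decision; re-run on every profile change."""
    if not profile.get("identity_verified"):
        return "refer"                          # missing evidence: route to a human
    if profile.get("pep") or profile.get("sanctions_hit"):
        return "refer"                          # politically exposed / sanctions match
    if profile.get("risk_score", 1.0) > 0.7:
        return "decline"
    return "approve"

customer = {"identity_verified": True, "pep": False, "risk_score": 0.2}
print(assess_kyc(customer))   # approve

customer["pep"] = True        # a change in the profile triggers re-assessment
print(assess_kyc(customer))   # refer
```

Because every branch is explicit, each outcome can be traced back to the factor that produced it, which is the explainability property described above.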

Intelligent automation, such as Rainbird, sits within the artificial intelligence (AI) space and takes a “human down”, not “data up”, approach to automation. That means we start with human knowledge and apply it to data (so that humans can always understand and explain what the machines are doing). All the logic, ambiguity and rationality behind a specialised human-made decision is combined with data to automate complex human decisions, at scale.

You will be able to dramatically speed up the process of customer onboarding, as intelligent automation can work across siloed divisions, while taking many factors into account and weighing them in a nuanced and efficient manner. False positives can also be reduced, therefore minimising the need for manual investigation.

Intelligent automation systems are also fully transparent, allowing banks and their customers to find out exactly how and why decisions have been made, and which data points were considered. This will instil greater trust in your customers, providing them with complete transparency into decisions or recommendations made about financial products. This can also reduce compliance costs.

The accuracy of customer data is key to ensuring efficient KYC. Where your KYC analysts may be prone to human error or customer data is outdated (due to information silos), intelligent automation software can make decisions despite uncertainty and missing data—it can even gather new information to update erroneous data. Its ability to work with uncertainty will also mitigate potential fraud, by identifying high-risk customers early in the onboarding cycle.

Download our free eBook to find out how intelligent automation can make ongoing KYC a success in your organisation.

Managing continual KYC at scale, using decision automation
How to stop siloed technologies from turning ongoing KYC into a never-ending nightmare.


New Rainbird intelligent automation: if you can click, you can use it

Rainbird For Automation Lovers

Ever worked with someone so good at their job, you wished you could clone them? The thought process goes something like:

“If only every fraud investigator were as good as Taylor. Every problem she encounters, she can solve quickly. Amid a sea of information, she has an eye for pinpointing what’s critical, and what’s utterly irrelevant. Her work is always 30%-40% better than her colleagues’. And she rarely makes mistakes.”

Taylor might seem powered by magic… but she’s not. She’s powered by an excellent mental model of her domain and maths that’s so complex, it comes across as intuition. What’s happening is that Taylor’s mental model, and the maths underlying it, is better than everybody else’s. It’s an abstract phenomenon with material implications. 

Rainbird is a software platform that gives you the tools to copy Taylor’s (or anyone else’s) abstract know-how and represent it visually (kind of like a mind map). That way, a computer can take that knowledge and use it to make decisions in the same domain as Taylor—except 100 times faster and 25% more accurately than her. That is what we call intelligent automation. So, you can have 2 Taylors. Or 100. Or 1000. It’s really up to you. All of them being instructed by Taylor herself.

For the new and improved Rainbird Studio, we set out to accomplish two objectives: 

  1. Make it easier for anyone (really, anyone) to replicate human expertise in machines (we refer to this as building a knowledge map)
  2. Ensure the process of doing so is pleasant

 

No code = instant pro

You don’t need to learn the coding language (RBLang) that Rainbird’s engine uses to understand the visual representations of human knowledge. While you could always build a knowledge map in the visual modeller, being able to code gave some users an advantage. Not anymore. Users can now intuitively apply the majority of key features within the visual interface.

When you want to build representations of your knowledge in Rainbird, you do so using concepts (i.e. a kind of thing) and relationships (i.e. how that type of thing relates to other types of things). For example, “Person” might be a concept, “Destination” might be another concept and “Visits” might be a relationship between the two concepts.

 

Rainbird Studio Canvas

 

So, a “person” “visits” a “destination”—which would help Rainbird understand how people interact with destinations. Every time it then deals with a specific person (e.g. Mike), it knows that it is possible for them to visit destinations (given certain conditions) and can make decisions on that basis. You can then build many relationships and concepts (and more layers of logic and nuance between them all). We’ve moved those deeper layers of modelling right into the visual interface. 

 

A lighter cognitive load

Cognitive overload is a thing of the past with our restructured layout. Our UI overhaul has divided major features into just three sections: 

  1. Edit: where you build models or representations of your knowledge (that look like mind maps) 
  2. Test: where you check that your model is making high-quality decisions, as expected
  3. Publish: where you unleash your model to make decisions in the real world

This way, the structure and hierarchy of Rainbird features make intuitive sense, rather than hitting you all at once. 

 

Neither a decision tree nor a random forest be

We are often asked, “Is Rainbird machine learning?” Sometimes, Rainbird is even confused with decision trees. It is neither—and it’s important to explain why. 

Machine learning refers to systems that can act to give a desired output without being explicitly programmed to do so. A random forest classifier is an example of a machine learning approach that can find patterns in data. And robotic process automation (RPA) refers to the process of programming a computer to take certain actions based on rules expressed as “If this…”, “…then do…” (typically known as decision trees). For example, if “Mike doesn’t have COVID-19 symptoms”, then “he can visit Portugal”.
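The "If this… then do…" shape of an RPA-style rule can be shown with the article's own example (purely illustrative):

```python
def can_visit_portugal(has_covid_symptoms: bool) -> bool:
    # One hard-coded branch for one scenario; nothing generalises beyond it.
    if not has_covid_symptoms:
        return True
    return False

print(can_visit_portugal(False))  # True: Mike has no symptoms, so he can visit
```

Note that this rule knows nothing about other destinations, other people or other risk factors; each new scenario would need its own branch.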

But these approaches have their challenges. With machine learning, it is difficult for humans to stay fully in control because algorithms find their own patterns in data—and humans don’t always know how or why machines find the patterns they do. So, in sensitive situations—such as credit decisions for minorities or women—we can’t explain why the machine decided one person should get a better credit rate than another. The process is “data-up” (i.e. machines find patterns in data, then impose these on humans), rather than “human-down” (i.e. humans create patterns for machines, which then impose these on data).

With RPA, the decision-making process is too linear and one-dimensional. You can't build holistic representations of knowledge: for example, a detailed map of concepts and relationships covering COVID-19 risk, job status and destination infection rate, such that anyone in any situation could consult the same knowledge map to find out whether they can go on holiday. You can only programme one-off steps for individual scenarios, so it's very difficult to build decision trees that can juggle many variables at massive scale. 

 

 

And that’s why we’ve focussed on two principles as we improve Rainbird: 

  1. Human-down structure: that is, we start with human knowledge and apply it to data (so that humans can always understand and explain what the machines are doing)
  2. Non-linear automation: that is, we focus on capturing and codifying knowledge that can apply to multiple scenarios (rather than on steps to be followed in a one-off, isolated situation)

That’s why it’s important that anyone should be able to use Rainbird and our focus is on knowledge representation (not simply the automation of steps). If you’d like to see how the new Rainbird can support your automation agenda, just request a demo.

Introduction to the new Rainbird
This webinar runs through major upgrades to the Rainbird platform, as well as how they enhance the user experience and provide easier access to key features—without coding.


The future of fraud detection and prevention, in a post-COVID world

Fraud prevention

The global pandemic has brought the world, and businesses within it, to their knees. While communities band together to support one another, not everyone is working cohesively for the common good. 

Cybercrime has spiked since the novel coronavirus reached pandemic status earlier in 2020, with Checkphish reporting a rise from 400 attacks in February 2020 to almost a quarter of a million in May 2020.

Technology has revolutionised fraud detection departments, adding the ability to handle the scale and sophistication of contemporary attacks, but many approaches to fraud detection still fall short of the level of accuracy required.

But it is the organisation, not the technology, that will be held legally accountable for the decisions made. Choosing a solution that will deliver a high level of accountability and accuracy in fraud judgements is key. Let’s explore how current approaches to fraud risk management measure up.

The conventional approach

It has become common to adopt basic rules-based approaches to fraud risk management. Basic rules-based systems apply a logical program of sequential parameters. They use these parameters to identify instances of fraud and subsequently perform automated actions.

While a conventional approach in fighting fraud, basic rules come with some frustrating pitfalls (although these are circumvented by intelligent rules-based systems, which are covered below). Basic rules-based systems can be expensive and too rigid to scale instantaneously (when most needed). 

The recent surge in advanced fraud cases means basic rules-based systems will struggle to keep pace with the latest techniques by opportunistic fraudsters. More sophisticated fraud solutions are being adopted, as illustrated in a research piece by the Association of Certified Fraud Examiners & SAS, which projected a 200% increase in the use of artificial intelligence (AI) and machine learning to detect fraud.

The adaptive approach 

Machine learning can detect sophisticated fraudulent activities across large volumes of data, ‘learning’ to adapt to new, unforeseen threats, while significantly reducing false positives.

Machine learning programs fall into two broad approaches: supervised and unsupervised.

In supervised machine learning, you present the algorithm with both fraudulent and non-fraudulent records. This 'labelled' data helps it reach the level of accuracy needed to produce a data model, which can then predict whether fraud is present when assessing new, unseen cases.
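A toy sketch of the supervised setup, with an invented one-feature dataset and a deliberately simple "model" (the midpoint between the class means), just to show labelled data driving prediction on unseen cases:

```python
# Invented one-feature dataset: (transaction amount, label).
labelled = [
    (25, 0), (40, 0), (60, 0),   # 0 = genuine
    (900, 1), (1200, 1),         # 1 = fraudulent
]

# "Training": place the decision boundary midway between the class means.
genuine = [x for x, y in labelled if y == 0]
fraud = [x for x, y in labelled if y == 1]
boundary = (sum(genuine) / len(genuine) + sum(fraud) / len(fraud)) / 2

def predict(amount: float) -> int:
    """Score a new, unseen transaction against the fitted boundary."""
    return 1 if amount > boundary else 0

print(predict(30))    # 0: looks genuine
print(predict(1000))  # 1: looks fraudulent
```

Real supervised models use many features and far more data, but the workflow is the same: labelled history in, a fitted model out, predictions on new cases.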

However, supervised learning is unable to scale on-demand to evolving threats. It requires clean data, substantial computation time for training, and a team of data scientists to build, maintain and interpret.  

Unsupervised machine learning, on the other hand, is useful for spotting fraudulent patterns where tagged data is not available. This approach works from an unlabelled dataset, with no known output values mapped to the inputs. It builds a function that describes the structure of the data and flags anything that doesn't fit as an anomaly. This ultimately saves time and lets the algorithm mine at scale without the need for human input.
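A toy sketch of the unsupervised idea, using a simple z-score as the "structure" of the data (real systems use far richer methods, such as clustering or isolation forests); the amounts are invented:

```python
import statistics

# Invented, unlabelled amounts; one value doesn't fit the structure.
amounts = [20, 22, 25, 19, 21, 24, 23, 500]

mu = statistics.mean(amounts)
sigma = statistics.stdev(amounts)

# Flag anything more than two standard deviations from the mean as an anomaly.
anomalies = [a for a in amounts if abs(a - mu) / sigma > 2]
print(anomalies)   # [500]
```

No label ever said 500 was fraudulent; it is flagged purely because it sits outside the pattern the rest of the data describes, which is also why such systems struggle to explain *why* in business terms.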

A major drawback to unsupervised machine learning is its inability to accurately explain its results, due to the input data being unlabelled and unknown. This makes it difficult to adhere to compliance measures or explain why a false positive occurred.

The intelligent approach

Intelligent automation (IA) copies (via a visual model) the knowledge of your fraud experts and applies this knowledge to thousands of cases, while instantaneously providing an audit trail for every decision. This includes its certainty level and every factor that went into making each decision, as well as data sources that were accessed. It also includes all considered variables and the quantitative impact of each, dramatically reducing false positives by omitting bias or inaccuracies in initial data collection. 

In contrast to simple rules-based fraud identification tools (like decision trees), the modelling process in intelligent automation allows its decision-making to reflect the ‘greyness’ inherent in fraud, resulting in more reliable outcomes.

It can also handle uncertainties in the dataset. When data is missing or uncertain, intelligent automation will try to find other data to help. If it can’t find supporting data, it is still able to make an inference, which will be presented with reduced certainty. This means that, unlike a fixed rules engine, intelligent automation doesn’t hit a dead end when data isn’t available, or end users are not sure of their answers.
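One way to picture inference under uncertainty (the weights and signal names here are invented, not Rainbird's model): score with whatever evidence is present, and let the certainty of the verdict reflect how much evidence there was:

```python
# Invented weights: how much each signal contributes when it is available.
EVIDENCE_WEIGHTS = {
    "device_match": 0.4,
    "location_match": 0.3,
    "spend_pattern_match": 0.3,
}

def assess(evidence: dict):
    """Decide with whatever evidence exists; certainty reflects its coverage."""
    available = {k: v for k, v in evidence.items() if v is not None}
    certainty = sum(EVIDENCE_WEIGHTS[k] for k in available)
    matches = sum(EVIDENCE_WEIGHTS[k] for k, v in available.items() if v)
    verdict = "genuine" if matches >= certainty / 2 else "suspicious"
    return verdict, round(certainty, 2)

# Full evidence: a confident verdict.
print(assess({"device_match": True, "location_match": True,
              "spend_pattern_match": True}))   # ('genuine', 1.0)

# Missing evidence: no dead end, just a less certain verdict.
print(assess({"device_match": True, "location_match": None,
              "spend_pattern_match": None}))   # ('genuine', 0.4)
```

Unlike a fixed rules engine, the function never refuses to answer; missing signals simply lower the certainty attached to the answer.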

Not only does this approach manage scale, deliver a high level of transparency and retain a human-down focus towards data input; its ROI potential is also compelling. Rainbird has estimated that incorporating IA decision-making into fraud prevention could save UK businesses £7 billion over five years.

What will fraud detection look like post-COVID?

Ultimately, fraud detection and prevention systems that can deliver a high level of accuracy will: 

  1. Reason like human experts do
  2. Scale to deal with surges in threats
  3. Provide evidence for each decision made
  4. Be able to handle uncertainty

When all these elements are present, you will possess a fraud risk management system that will greatly reduce user friction (caused by false positives) and keep you compliant (thanks to complete transparency). 

Unfortunately, COVID-19 won’t be the last widespread crisis we will have to deal with. With climate change and the risk of future pandemics looming, choosing the right approach is not a matter of preference.

To learn more about how intelligent automation fights fraud, watch our on-demand webinar.

Free webinar: How to keep up with a changing fraud landscape
Increase fraud detection rates and reduce false positives. This webinar shows how to build an automated fraud system with human intelligence at its core.
