
Human-centric automation can save insurance from a vicious cycle of ethical challenges

Sabu Samarnath
5 min read

For all the bluster in the sector, insurance and AI are still at a crossroads. Guidelines are being drawn up and full integration is pending. A recent C-suite poll from PwC found that 80% of global insurance chiefs believe AI is already integrated into their business or will be within the next three years, yet a 2018 Capgemini survey revealed that only 2% of insurers worldwide have seen full-scale implementation, while 34% are still at the ‘ideation’ stage and 13% are at use-case testing.

In other words, the future form of AI adoption in the insurance space is still up for grabs. Perfect timing, then, for the Centre for Data Ethics and Innovation (CDEI) to publish a paper on ethical AI in insurance, incorporating the Chartered Insurance Institute (CII)’s Digital Companion to their Code of Ethics. We examine this and more in the following list of ethical challenges facing the soon-to-be AI-powered insurance industry.

1. Corrupt claims optimisation

One of the more sinister guises AI in insurance could take is the practice of calculated claims optimisation – i.e. paying out the minimum amount that a customer will accept without complaining. This could create a vicious cycle: if customers believe that insurers will not pay out the full claim, then there’s an incentive to inflate the size of the claim that is submitted.
Insurers need human oversight and full transparency to safeguard customers from such payout-minimising tactics – and to protect themselves from inflated claims.
For claims optimisation done the right way, read our take on streamlining the FNOL claims process here.

2. “Uninsurable”? Unfair

The CDEI has warned that while AI could cut fraud and better enable insurers to price risk, there is a danger that certain individuals – a new class of “uninsurables” – could be hit with unaffordably high premiums. “The price of insurance products could increase for some individuals to the point where they effectively become out of reach,” the paper claims.

Beyond the obvious ethical concerns, this could quickly become another vicious cycle in commercial terms. “Were price rises to affect a large number of people”, the report warns, “the customer bases of insurance companies could shrink to such an extent that risk pooling becomes impractical.”
A clear audit trail for each automated decision can help prevent this kind of unfair treatment. For more detail on the Rainbird in-built audit trail, visit our Platform page.
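To make the idea concrete, here is a minimal sketch – in Python, with hypothetical field names rather than anything taken from Rainbird’s platform – of the kind of record an audit trail could keep for each automated pricing decision, so a reviewer can trace exactly which inputs and rules produced a quote:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionAuditRecord:
    """One traceable record per automated decision (illustrative only)."""
    decision_id: str
    customer_ref: str
    inputs_used: dict      # the data points the rules or model actually consumed
    rules_applied: list    # identifiers of the rules or features that fired
    outcome: str           # e.g. "premium_quoted: 512.40 GBP"
    rationale: str         # plain-language explanation for reviewers and customers
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# A reviewer or regulator can later ask: which inputs drove this price, and
# would another applicant with the same risk profile have received the same quote?
record = DecisionAuditRecord(
    decision_id="Q-2024-0001",
    customer_ref="anonymised-7731",
    inputs_used={"vehicle_age": 7, "annual_mileage": 9000, "postcode_risk_band": "C"},
    rules_applied=["base_rate_band_C", "low_mileage_discount"],
    outcome="premium_quoted: 512.40 GBP",
    rationale="Band C base rate applied; 5% discount for under 10,000 annual miles.",
)
```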

3. Fraud failure

There’s no polite way to put it: insurance companies aren’t doing enough to fight fraud, and their customers are taking a collective hit.

Deloitte estimate that annual fraud-related costs add up to 10 percent of insurers’ overall claims expenditure, while premiums are soaring across the space.

Whether due to specific regional or market trends or persistent areas of vulnerability, it’s important to remember that every insurance company’s fraud profile is unique; there isn’t a one-size-fits-all solution.

A tailored approach to fraud prevention is key: this means investing in decision-making platforms that are configurable, scalable, and based on the logic of firms’ best gatekeepers of fraudulent claims.
“There’s no substitute for good old-fashioned claims and underwriting experience,” wrote one McKinsey senior partner.
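As a rough sketch of what encoding the logic of your best gatekeepers can look like in practice – the rules and thresholds below are invented for illustration, not taken from any insurer’s playbook – expert heuristics can be written as explicit, reviewable checks rather than buried in an opaque model:

```python
def screen_claim(claim: dict) -> list[str]:
    """Return the reasons, if any, a claim should be routed to a human investigator.
    The rules and thresholds are illustrative stand-ins for an expert's real heuristics."""
    flags = []
    if claim["amount"] > 3 * claim.get("policy_average_claim", claim["amount"]):
        flags.append("Claim is far above the historical average for this policy type")
    if claim["days_since_policy_start"] < 30:
        flags.append("Claim made within 30 days of policy inception")
    if claim.get("supporting_documents", 0) == 0:
        flags.append("No supporting documentation provided")
    return flags

# Each flag is a human-readable reason, so the referral itself stays explainable.
print(screen_claim({
    "amount": 18_000,
    "policy_average_claim": 4_500,
    "days_since_policy_start": 12,
    "supporting_documents": 1,
}))
```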

Go more in-depth with Rainbird COO James Loft’s piece on insurance fraud here.

4. Knowledge leakage

The nous and expertise of seasoned decision-makers in detecting and examining insurance fraud will arguably be at a premium in years to come – particularly with waves of baby-boomers exiting the industry and less traditional workforce demographics replacing them.

Workforces are becoming more transient, more fluid; the odds that employees will move on for a new experience rather than accumulate years of experience at your firm are rising all the time. Firms should be acting now not only to nurture and retain their most valuable people, but also to scale and maximise the expertise they possess.

Find out how your best people can capture their logic in a Rainbird knowledge map.

5. Dodgy data

80% of the insurance executives surveyed for Accenture’s Tech Vision 2018 reported that their organizations increasingly use data to drive automated decision-making at scale – yet a recent study estimated that 97% of business decisions are made using data that the company’s own managers consider to be of unacceptable quality.

Society as a whole has yet to fully come to terms with the collection and use of personal data to improve consumer experiences. Customers are divided on what they see as a valuable and ethical use of AI and data processing, and from an operational standpoint, any reliance on monumental volumes of data introduces a new risk that Accenture calls "data veracity".
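As a minimal sketch of the kind of veracity check that could sit in front of automated decisioning – the field names and tolerances here are assumptions for illustration – incomplete, implausible or stale records can be flagged before they feed a pricing or claims decision:

```python
from datetime import date

def is_fit_for_decisioning(record: dict, today: date | None = None) -> tuple[bool, list[str]]:
    """Basic completeness, freshness and plausibility checks (illustrative only)."""
    today = today or date.today()
    problems = []
    # Completeness: every field the decision logic relies on must be present.
    for required in ("customer_id", "annual_mileage", "last_updated"):
        if record.get(required) is None:
            problems.append(f"missing field: {required}")
    # Plausibility: reject values outside a sensible range.
    mileage = record.get("annual_mileage")
    if mileage is not None and not 0 <= mileage <= 100_000:
        problems.append("annual_mileage outside plausible range")
    # Freshness: stale data should not silently drive a new decision.
    last_updated = record.get("last_updated")
    if last_updated is not None and (today - last_updated).days > 365:
        problems.append("data more than a year old")
    return len(problems) == 0, problems

ok, problems = is_fit_for_decisioning(
    {"customer_id": "anon-42", "annual_mileage": 250_000, "last_updated": date(2022, 6, 1)}
)
# ok is False here: the mileage is implausible and the record is stale,
# so the case can be routed to a person instead of being decided automatically.
```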

Will the industry and its regulators need to establish new parameters around customer consent before insurers begin drawing on all kinds of personal data to make decisions? The use of big data could give insurers an unprecedented ability to “infer characteristics” about customers, as CII managing director Keith Richards puts it. The report highlights how one insurer draws on 1,000 data points to judge the risk of someone making a motor insurance claim – including something as minuscule as whether they drink bottled or tap water.

6. Bad press

We’ve heard about risk exposure in insurance – but arguably just as damaging is the unwelcome media exposure when malpractice occurs.
In one such example, a mystery shopping investigation by The Sun newspaper found that insurers had given higher premium quotes to motorists with the name Mohammed.

Trust has been a huge issue in recent years for the insurance industry. Shady brokers and dodgy, unfair practices, such as slyly jacked-up premiums, have damaged the relationship between insurers and consumers and dented the industry’s reputation.

The best way to rebuild trust? Transparency. If claims handlers can keep their customers more thoroughly informed about claims decisions, with more detailed accounts of the rationale that was applied, customers can rest easier – even if the decision is an unwelcome one.

To achieve this while maintaining an efficient claims process, human-centric and transparent automation is really the only option firms can take.
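As a small illustration of what a more detailed account of the rationale might look like when generated automatically – the decision structure below is a made-up example, not a description of any particular claims system – each automated outcome can be sent to the customer together with the reasons that produced it:

```python
def explain_decision(outcome: str, reasons: list[str]) -> str:
    """Turn an automated claims outcome and the reasons behind it into a
    plain-language note for the customer (illustrative sketch only)."""
    lines = [f"Decision on your claim: {outcome}", "", "How we reached this decision:"]
    lines += [f"  - {reason}" for reason in reasons]
    lines.append("")
    lines.append("If any of this information is wrong, you can ask for a human review.")
    return "\n".join(lines)

print(explain_decision(
    outcome="Approved in part (repair costs covered, hire car excluded)",
    reasons=[
        "Your policy covers accidental damage repairs up to the agreed limit.",
        "Courtesy car cover was not included in the policy selected at purchase.",
    ],
))
```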

Transform your business into a Decision Intelligence powerhouse

Explore how Rainbird can seamlessly integrate human expertise into every decision-making process. Embrace the future of Decision Intelligence powered by explainable AI.