
In the face of GDPR, AI technologies that can explain their own rationale are the way forward

Sabu Samarnath

For industry outsiders, the issue of AI explainability is finding mainstream coverage thanks to the introduction of GDPR, which will affect citizens’ rights by giving them a “right to an explanation” of decisions made by AI technology. For insiders, Robert Matthews’ take in the Sunday Times on GDPR as an obstacle to AI will be nothing new, but as the May 2018 roll-out of the new EU regulation approaches, the topic can hardly be avoided.

Not everyone knows what GDPR entails, but most will have to get on board with this “dull sounding bit of EU legislation”, as Matthews fairly accurately describes it. It will be enforced with multi-million-pound fines for companies that fail to take it seriously.

Is machine learning at risk?

The debate about the “right to an explanation” usually centres on machine learning. Some argue that machine learning isn’t really a black box – that the explanations are in there, if you know how to extract them. That is true in theory, but unless you’re a bona fide software developer, you’ll have next to no chance of extracting those explanations from reams of complex code.

Others argue that when you look at the fine print, GDPR doesn’t actually demand explanations of particular decisions, but rather an overview of how a decision was made. There is a degree of ambiguity in the wording of Article 15, the crucial part of the regulation. But in their recent paper Meaningful information and the right to explanation, academics Andrew D. Selbst and Julia Powles beg to differ: “Articles 13-15 provide rights to ‘meaningful information about the logic involved’ in automated decisions. This is a right to explanation, whether one uses the phrase or not.”

A fork in the road

Either way, the regulation is the biggest change to data protection in 20 years and represents a fork in the road for technological progress – with some suggesting that, for certain technologies, it could signal the end of the road. Pedro Domingos, a leading AI researcher and professor at the University of Washington, has taken an extreme view, tweeting earlier this year that GDPR will make deep learning “illegal”.

This may well be hyperbole, as is so much of the mainstream debate around AI, but the core truth is that if machines start making decisions for us, we should be able to see the rationale. This point can’t be emphasised enough at Rainbird Technologies: our in-built ‘evidence tree’ is one of the key features that sets our AI-powered platform apart from the crowd.

Automated decision-making, with rationale

Whenever Rainbird automates a decision for one of our clients – be it automated tax advice for an accounting firm or an outcome for a fraud case – the client has the option of viewing the evidence tree, which articulates each decision with levels of certainty and a trace of where Rainbird sourced the data. The articulation is simple to understand, both visually and linguistically.
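To make that concrete, here is a minimal sketch of what an evidence tree might look like as a data structure. It is written in Python, and the class, field names and fraud example are illustrative assumptions rather than Rainbird’s actual API; the point is simply that every conclusion carries a certainty level and a trace of its data source, and that the whole tree can be rendered as a readable rationale.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvidenceNode:
    """One node in a hypothetical evidence tree (illustrative, not Rainbird's API)."""
    conclusion: str    # the decision or intermediate inference
    certainty: float   # confidence level, 0.0 to 1.0
    source: str        # where the supporting data came from
    supports: List["EvidenceNode"] = field(default_factory=list)

    def explain(self, depth: int = 0) -> str:
        """Render the tree as an indented, human-readable rationale."""
        line = (f"{'  ' * depth}- {self.conclusion} "
                f"({self.certainty:.0%} certain, source: {self.source})")
        return "\n".join([line] + [child.explain(depth + 1) for child in self.supports])

# A simplified, hypothetical fraud-case decision with its supporting evidence.
decision = EvidenceNode(
    conclusion="Transaction flagged as likely fraud",
    certainty=0.87,
    source="inference over the evidence below",
    supports=[
        EvidenceNode("Purchase made far from the customer's usual locations", 0.92,
                     "card-network feed"),
        EvidenceNode("Amount exceeds the customer's typical spend", 0.81,
                     "account history"),
    ],
)
print(decision.explain())
```

Printing the tree yields an indented trail from the final decision down to each piece of supporting evidence – the kind of audit trail that lets a reviewer see not just what was decided, but why, and on what data.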

GDPR notwithstanding, in today’s compliance-driven world you simply cannot automate important decisions without articulating the rationale. So Robert Matthews is right when he says that GDPR is a challenge to AI. What he fails to add is that companies can rise to that challenge by using AI-powered automated decision-making technologies like Rainbird that can explain their own rationale – or, as Article 15 so cryptically puts it, provide “meaningful information about the logic involved”.

To find out more about automating decision-making with an audit trail, visit our platform overview.
