At Rainbird we’re tackling the challenge of explainable AI and transparency squarely in the context of automated decision-making, which is increasingly relied upon to make complex judgements in areas such as finance, law and healthcare.
There are many examples of AI programmes making decisions while end users are left in the dark about how the conclusions were reached, and if challenged the systems themselves cannot provide any reasoning retrospectively. In this context, explainable AI doesn’t mean a tech vendor having to give away IP. It means an end user of these automated systems being able to understand why their mortgage application was declined, why a credit card transaction was flagged as fraudulent, or why a brokerage platform executed a particular trade when the regulators call.
As a member of the All Party Parliamentary Group on AI (APPG AI) I’ve been part of formal discussions on the explainability of AI. My view is that we need to see more technologies that model automated decision-making on human expertise, rather than the elusive data-driven matrices that underpin machine learning. This kind of expert-down approach is particularly important to regulated industries, where there is a real need for technology that can provide auditable decisions. The biggest benefit of having an AI platform governed by human rules is that there are always subject matter experts who can provide much-needed clarity.
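To make the idea concrete, here is a minimal sketch of what an expert-down, auditable decision can look like in practice: human-authored rules drive the outcome, and every rule that fires is recorded so the decision can be explained after the fact. This is an illustrative toy only; the names (Rule, decide) and the example mortgage rules are assumptions for the sketch, not Rainbird’s actual model or API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Rule:
    name: str                        # identifier a subject matter expert would recognise
    rationale: str                   # human-readable reason attached to the rule
    applies: Callable[[Dict], bool]  # predicate over the applicant's facts

# Rules authored from expert knowledge rather than learned weights (illustrative only).
RULES: List[Rule] = [
    Rule("income_check",
         "Declared income is below the affordability threshold",
         lambda f: f["annual_income"] < 4 * f["loan_amount"] / f["term_years"]),
    Rule("history_check",
         "Applicant has a recent default on file",
         lambda f: f["recent_default"]),
]

def decide(facts: Dict) -> Tuple[str, List[str]]:
    """Return a decision plus the rationale for every rule that fired,
    giving an audit trail a regulator or end user can inspect."""
    reasons = [r.rationale for r in RULES if r.applies(facts)]
    outcome = "declined" if reasons else "approved"
    return outcome, reasons

outcome, reasons = decide({
    "annual_income": 28_000, "loan_amount": 180_000,
    "term_years": 25, "recent_default": False,
})
print(outcome)  # declined
print(reasons)  # ['Declared income is below the affordability threshold']
```

The point of the sketch is not the rules themselves but the audit trail: because each rule carries a rationale written by an expert, the answer to “why was this declined?” is produced by the same mechanism that produced the decision.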
Regardless of the scale and importance of the decision, having technologies that operate away from public view just fans the flames of public distrust in AI, which is detrimental to the industry at large. The surest way to clear the air — and thus clear the path for AI progress — is to stop making excuses and start making sure the algorithms are held to account.