In the current conversation around transparency and reliability in AI for regulated industries – like banking, law, and medicine – maybe a bit of Platonic advice could help.
“Do you think that those who hold some true belief, without knowledge, are much different from blind men who go the right way?” (Plato, Republic)
Before we leap into the conversation about AI, let’s get to grips with a few of Plato’s ideas about knowledge.
For Plato, genuine knowledge was valuable and difficult to obtain – that’s why his definition of it had such stringent requirements, drawing a distinction between ‘knowledge’ and ‘true belief’. Knowledge, to Plato, requires your own justification, obtained through reasoning about why something is the case and understanding it fully. A true belief, on the other hand, might come from asking a reliable source, but since you yourself lack understanding of this thing, you cannot claim to know it – it’s merely a belief that you hold.
Consider Plato’s example of a man who claims to ‘know’ the route to a small town called Larissa. In fact, this man only has a true belief about how to get to Larissa – that is, he thinks he knows the way, and it just so happens that this time he is right, but, having never walked it himself, he does not truly have knowledge of it. This raises the question: if holding a true belief about the route to Larissa is just as likely to get you there as actually knowing it, why does Plato think that knowledge is more valuable than true belief?
Well, there are many cases where having a true belief about something would not be enough to get the right outcome every time. If you know what a cat is by definition – its essence – then you will never fail to identify a cat, even when faced with a whole host of other four-legged, fluffy creatures to choose from. However, if all you have is a true belief about cats (such as that they have four legs, or a tail) then there’s a chance you might select anything from a cow to a badger as your next pet. It seems like a subtle distinction – but there’s something we can take from this.
The unique value of knowledge is that it is a complete understanding: infallible and reliable. In more everyday terms, when you know and understand something fully, you can’t be swayed. For Plato, knowledge is more valuable than mere true belief because it is ‘tied down’ and unshakeable – thanks to a process of reasoning and justifying why something is correct. Just because AI can provide us with conclusions, we can’t claim to know that these are the right decisions until we have conducted our own enquiry.
“Haven’t you noticed that all opinions without knowledge are ugly?” (Plato, Republic)
AI with algorithmic reasoning claims to augment human analysis, but if we just take its decisions at face value, how can we ever claim to know that they are justified? If an expert physicist tells you that E = mc², do you actually know this, unless you fully understand the reasoning for yourself? For Plato, at least, justification needs to involve an internal process of reasoning – not simply come from an external source – if we want to have knowledge, not just true belief.
A recent legal case in the US saw a defendant, Eric Loomis, sentenced to six years in prison after a sentencing process that drew on an artificially intelligent risk assessment tool. The algorithms in the COMPAS tool are described by Jeffrey Harmon, Northpointe’s general manager, as “proprietary”, meaning that the defendant was blocked from seeing the reasons behind the tool’s assessment of him. Harmon’s statement that “it’s not about looking at the algorithms. It’s about looking at the outcomes” highlights an important issue – why is the conclusion the most important thing? Surely it’s far more crucial to understand the reasoning that led the system to its decision: this is how we can make sure we always have our own justification too.
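To make the contrast concrete, here is a minimal sketch in Python. The scoring rules and field names are entirely hypothetical (they bear no relation to COMPAS’s actual, proprietary model); the point is only the difference between a system that hands back a bare outcome and one that also exposes the reasoning we would need to justify it.

```python
def opaque_score(record):
    """Returns only an outcome -- at best a 'true belief' we cannot justify."""
    score = 2 * record["prior_offences"] + (1 if record["age"] < 25 else 0)
    return "high risk" if score >= 3 else "low risk"


def transparent_score(record):
    """Returns the same outcome plus the reasons -- justification we can inspect."""
    reasons = []
    score = 0
    if record["prior_offences"] > 0:
        points = 2 * record["prior_offences"]
        score += points
        reasons.append(f"{record['prior_offences']} prior offence(s) add {points}")
    if record["age"] < 25:
        score += 1
        reasons.append("age under 25 adds 1")
    outcome = "high risk" if score >= 3 else "low risk"
    return outcome, reasons


record = {"prior_offences": 2, "age": 23}
outcome, reasons = transparent_score(record)
print(outcome)      # both functions agree on the outcome...
for reason in reasons:
    print("-", reason)  # ...but only this one lets us audit the reasoning
```

Both functions produce identical outcomes on identical inputs; only the second lets the person affected (or a court) walk the road themselves and check whether the justification holds up.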
There’s no knowledge without reasoning – even when you have AI
So, what can we gain by applying a little bit of Platonic philosophy to how we think about and use AI in our business and personal lives? Perhaps most important is the idea that knowledge requires understanding. You must be able to explain the reasons why something is the case. You can only come to know anything through your own experiences and reasoning, not just from the word of someone (or something) else – no matter how reliable they might seem.
Understanding the reasoning that drives our AI is key to making the most of its potential. Without this, all we can hope to gain from our AI-powered decisions and insights is a series of beliefs.
So, a bit of Platonic advice – if someone tells you the way to Larissa, make sure that you walk the road yourself, before you make your map. Let’s not fall into the trap of relying on AI that simply tells us things, leaving us in the dark about how it got there.