Don’t be a statistic: the human element that separates machine intelligence from machine learning


Last year, Apple launched a credit card. Not long afterwards, the card was exposed – by Apple co-founder Steve Wozniak himself, no less – for offering disproportionately low lending rates to women. A company that includes “Accessibility” and “Inclusion & Diversity” among its ‘Five Core Values’ had unwittingly created a tool for gender discrimination. Female customers had been reduced to unfavourable statistics by a biased algorithm.

“The road to hell is paved with good intentions.” 

Crucially, Bloomberg Businessweek, reporting on Goldman Sachs’ administering of the Apple Card, wrote that the bank did not “intentionally” discriminate against women, but that this “may be the point: the complex models that guide its lending decisions may inadvertently produce results that disadvantage certain groups.” 

The fact that this discrimination was unintended, and caused by a product Apple marketed on the values of “simplicity and transparency”, is evidence that we have a serious problem. The machine-learnt technology driving so much of today’s algorithmic decision-making often blindly derives results from statistics in ways that humans can’t understand, meaning that users, regulators and consumers are blindsided when things go wrong. Better the devil you know, as the saying goes.

Our understanding of AI is outdated 

Before we find ourselves governed by the law of unintended consequences, the time has come to remodel our understanding of AI. This decade, let’s separate the old from the new: Machine Learning (ML) from Machine Intelligence (MI).

With the hype-cycle of ‘Tech Twitter’ bundling countless technologies together under a single #AI hashtag, it’s become easy to categorise everything AI-related into the branch of ‘the new’.

But it’s more pragmatic and sensible to accept that some of the technologies beneath the ‘AI umbrella’ have been around for years, and some are doing little to accelerate the field. 

In many cases, ML is actually taking us backwards, to a time when such practices as discriminatory credit scores and mortgage “redlining” were the norm.

Part and parcel of our problem is that we’re viewing AI through a narrow-minded lens, with a focus on statistical methods.

As technology activist Cory Doctorow pointed out recently, the endlessly retargeting, predictive nature of machine learning, which places the utmost priority on legacy data, is fundamentally conservative and status-quo-driven.

Of course, this alone doesn’t mean we should avoid using it. After all, the wheel still works perfectly, which is why it doesn’t need reinventing. But there are anachronistic elements of ML that are ethically problematic and dangerous. By relying solely on statistical analysis of data to drive decision-making, we open ourselves up to unintended biases in those data, and society pays the price with the surrender of its values of fairness and equality. 
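To make the point concrete, here is a deliberately minimal sketch (all feature names and figures are invented for illustration) of how a model trained only on historical outcomes replays past bias, even when the protected attribute itself is never a feature:

```python
# Hypothetical illustration: a model trained purely on biased historical
# decisions reproduces that bias through a proxy feature. Gender never
# appears in the data, yet the discrimination survives.

# Past records: (income_band, part_time) -> credit limit granted.
# Suppose past human decisions gave part-time workers (historically
# more often women) lower limits regardless of income.
history = [
    ((3, 0), 9000), ((3, 1), 4000),
    ((2, 0), 7000), ((2, 1), 3000),
    ((1, 0), 5000), ((1, 1), 2000),
]

def predict(income_band, part_time):
    # 1-nearest-neighbour on the two features: the "model" simply
    # replays the closest historical decision, with no notion of why.
    def dist(record):
        (inc, pt), _ = record
        return abs(inc - income_band) + abs(pt - part_time)
    return min(history, key=dist)[1]

# Two applicants with identical income; only the proxy feature differs.
print(predict(3, 0))  # full-time applicant  -> 9000
print(predict(3, 1))  # part-time applicant  -> 4000
```

Nothing in the model is malicious; it is simply faithful to a prejudiced past, which is exactly the failure mode described above.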

The problem with relying on statistical methods

Multidimensional, multivariate numerical analyses on big data are largely impenetrable to humans and over-reliant on data. We don’t readily know which features in a data set these models select on when automating a decision, and we have no evidence for why an algorithm suggests a particular solution.

True, machine-learnt models are sometimes able to provide interpretable ‘explanations’ of which features in the data they are selecting on. But human judgement is still required to interpret those explanations and describe why a decision was reached.

Automating decisions that will affect customers without being able to communicate how and why those decisions were reached is unthinkable. Yet with many data-driven, statistical approaches this is precisely the situation companies continue to find themselves in, despite 70 percent of global consumers valuing transparency about data usage as the key to trusting a business.

Machine intelligence grounds our technology in human knowledge

Machine intelligence is a new leap. If we are really to enter a future of AI-driven growth that society can embrace, we must adopt technology that takes the best from both human decision-making and data, and grounds AI in human knowledge.

This is machine intelligence’s modus operandi. Machine Learning blindly derives emotionless rules about objects in our world from numbers. Machine Intelligence derives an understanding of the relationships between those objects, based on the wisdom of a human modeller. Propelled by machine intelligence built by humans, for humans, these models blow the ‘black box’ wide open and make AI understandable.

By representing knowledge using reasoning principles rooted in probability and mathematics, machine intelligence enables businesses to synthesise and replicate the way their smartest people use their logic and expertise to make decisions – without requiring elite and expensive data science skills.

In the plainest of terms: this is how your people can take what they know, connect it to data, and use it to solve large-scale problems.
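As a toy sketch of what encoding expert logic can look like (the rule set, certainty factors and field names here are all invented, and real knowledge-based systems such as Rainbird’s are far richer), note how every decision comes with a human-readable trace of the rules that produced it:

```python
# Hypothetical sketch of knowledge-based reasoning: an expert's rules
# are encoded explicitly, so every decision carries its own explanation.

RULES = [
    # (conclusion, condition over the case's facts, certainty factor)
    ("approve", lambda f: f["income"] >= 30000 and f["debt_ratio"] < 0.4, 0.9),
    ("refer",   lambda f: f["new_customer"],                              0.6),
]

def decide(facts):
    # Fire every rule whose condition holds for this case.
    fired = [(concl, cert) for concl, cond, cert in RULES if cond(facts)]
    if not fired:
        return "decline", ["no rule supported approval or referral"]
    # Take the most certain conclusion, keeping the full audit trail.
    conclusion, _ = max(fired, key=lambda r: r[1])
    trace = [f"rule '{c}' fired with certainty {cert}" for c, cert in fired]
    return conclusion, trace

decision, why = decide({"income": 45000, "debt_ratio": 0.2, "new_customer": True})
print(decision)  # approve
print(why)       # audit trail of exactly which encoded rules fired, and how strongly
```

Unlike the statistical black box, the logic here is authored by a person, so the “why” of each outcome can be read back to a customer or a regulator.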

How your smartest people can encode their own logic to solve high-value problems

Rather than simply apply potentially misleading (or illicit) historical data to new scenarios, machine intelligence enables businesses to apply complex human logic to each case, at an unprecedented scale. 

This may not be a case of “out with the old, in with the new”: there is a place for both ML and machine intelligence to work in tandem, as we will explore in my next blog post. But without the human intelligence part, we are simply replacing ourselves with cold, statistical machines, losing the ability to explain our decisions, and repeating all of our old mistakes. 

Become a truly intelligent automation and decision-making organisation
Find out how Rainbird can ensure every decision in your organisation benefits from the required expertise.
