
Implicit problem, explicit solution: Bias in the workplace, implicit bias in AI, and how Rainbird can circumvent both

Sabu Samarnath
4 min read

For years, major companies have been hurling money at diversity programs to reduce bias in the recruitment process and in the workplace. For at least as many years, commentators have claimed that these efforts don’t work: millions are spent on poorly designed training programs, or on policies that result in a “tokenism”-fuelled backlash.

A team of Princeton researchers published a salient paper in the journal Science earlier this year that outlined their findings on the issue: using the GloVe algorithm, they trained word embeddings on billions of words from the internet. Basing their analysis on the Implicit Association Test – a widely used litmus test for association bias conceived in the 1990s – they found that the system absorbed the implicit bias buried in the text. It paired certain words with obvious connotations: “rose” with “love”, for example, or “insects” with “disgust”. That’s not to say, of course, that AI is inherently biased – the onus is on us. It’s human data that contains the bias. With all of their intrinsic patterns of meaning, our words have become inseparable from their associations, whether we like it or not.
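To give a feel for how such associations can be measured, here is a minimal sketch – not the researchers’ actual code – that probes pre-trained GloVe vectors for a “pleasant” versus “unpleasant” leaning, in the spirit of the analysis described above. The gensim model name and the attribute word lists are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: probe word-embedding associations (illustrative only).
# Assumes the gensim "glove-wiki-gigaword-50" pre-trained model; the
# attribute word lists below are simplified stand-ins for the full
# word sets used in the published study.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # 50-dimensional GloVe vectors

def association(word, pleasant=("love", "joy"), unpleasant=("disgust", "hate")):
    """Average cosine similarity to 'pleasant' words minus 'unpleasant' words."""
    pos = sum(vectors.similarity(word, a) for a in pleasant) / len(pleasant)
    neg = sum(vectors.similarity(word, a) for a in unpleasant) / len(unpleasant)
    return pos - neg  # > 0 leans pleasant, < 0 leans unpleasant

for word in ("rose", "insects"):
    print(word, round(association(word), 3))
```

Run against real GloVe vectors, a probe like this tends to score “rose” towards the pleasant end and “insects” towards the unpleasant end – the same pattern of absorbed association the Princeton team documented far more rigorously.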

To put it simply: machines learn their knowledge from humans, and humans are rarely impartial.
And just as a machine can learn to defeat a human at chess, it can – and will – learn the stereotypes buried in human language. This could influence the real world in a multitude of ways: for instance, if a recruiter used an AI system to screen CVs, it might lean unfairly towards a female candidate for a nursing role based on the role’s conventional feminine associations. And these examples aren’t just hypothetical. There have been genuine occurrences, such as in 2016, when ProPublica broke a story about widely used US recidivism-prediction software that showed signs of bias against African American defendants.

Identifying bias

When it comes to identifying and eradicating bias from the decision-making process, machine learning systems are problematic. A neural net may arrive at impeccable decisions, but its creators will often have a difficult time understanding how or why. And when those decisions are less than impeccable, the issue becomes far more pronounced. Transparency in machine learning is notoriously difficult to achieve without an expert engineer on hand to sift through the data. IBM, for instance, have been trying to “open up the black box” of Watson, in the words of senior vice-president John Kelly – but Kelly readily admits that tracing back through Watson’s decision-making path for a justification would require a skilled user.

This is where Rainbird comes in. One crucial feature that sets Rainbird apart is that it doesn’t just offer a more efficient decision-making process – it explains itself too. With Rainbird, you can check whether a final decision contains unintended prejudices, because if it does, they will be visible in Rainbird’s ‘evidence tree’ – a web-like diagram that displays every step of the decision-making process. The rationale Rainbird provides for its conclusions is watertight and auditable – and if something needs to change, users can amend their data inputs and knowledge maps accordingly, rather than being left in the dark.
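To make the idea of an auditable reasoning path concrete, the sketch below shows, in deliberately simplified and hypothetical form, how each conclusion can carry the rule and the facts that produced it, so the whole chain can be replayed and inspected. It illustrates the general technique only and is not Rainbird’s implementation or API; the rules and facts are made up for the example.

```python
# Hypothetical sketch of an auditable decision trace (not Rainbird's API).
# Each conclusion records the rule that fired and the evidence behind it,
# so the full reasoning path can be printed and reviewed for bias.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    conclusion: str
    rule: str
    support: list = field(default_factory=list)  # facts or nested Evidence

    def explain(self, indent=0):
        pad = "  " * indent
        lines = [f"{pad}{self.conclusion}  (rule: {self.rule})"]
        for item in self.support:
            if isinstance(item, Evidence):
                lines.extend(item.explain(indent + 1))
            else:
                lines.append(f"{pad}  fact: {item}")
        return lines

# A made-up recruitment decision with its justification attached.
experience = Evidence("candidate meets experience threshold",
                      "years_experience >= 3",
                      ["years_experience = 5"])
decision = Evidence("shortlist candidate",
                    "shortlist if experience AND registered qualification",
                    [experience, "qualification = registered nurse"])

print("\n".join(decision.explain()))
```

Because every conclusion is tied to an explicit rule and explicit facts, a reviewer can see exactly where a prejudiced assumption would have to live – which is the property an evidence tree is designed to provide.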

The importance of “why?”

Business is not just about decisions, after all – when the stakes are high, “why” is as important as “what”. A business verdict reached by artificial intelligence but cloaked in mystery carries all sorts of implications. When jobs are at stake, for example, explanations are required. When an automated decision dictates a new investment, a business needs to know why. And how can you defend an employment discrimination lawsuit if you can’t explain your selection process in court? In fact, just last year a group of Oxford academics called for a “right to an explanation” to be ushered into EU law to defend humans against algorithmic discrimination.

Just as maths teachers ask pupils to show their working, companies should ask their AI assistants to reveal their logic. Computer science professor Kris Hammond put it succinctly when he recently told the Financial Times: “If you walk into a CEO’s office and say we need to shut down three factories and sack people, the first thing the CEO will say is: ‘Why?’ Just producing a result isn’t enough.”

Ultimately, the most effective way of removing bias from the workplace isn’t throwing money at the problem, or letting black-box systems deal with it for us. With Rainbird’s method of cognitive reasoning, humans can pinpoint the problem and deal with it logically and ethically, with the help of AI. Transparent and traceable, Rainbird can help us reveal our own bias – the first step towards getting rid of it completely.

Transform your business into a Decision Intelligence powerhouse

Explore how Rainbird can seamlessly integrate human expertise into every decision-making process. Embrace the future of Decision Intelligence powered by explainable AI.