Pseudo-AI: Why not use the real thing?
According to a Guardian piece on the West Coast tech scene last week, a large number of startups have allegedly discovered that human labour is cheaper and easier than building or implementing AI technology. Worse still, they’re employing people to do intensive manual labour, styling themselves as AI companies, and keeping it all a secret from their investors. The result is like a restored vintage car: on the outside it’s shiny, new, and flaunting the ‘AI’ badge; on the inside it’s outdated and unsustainable.
This is a strange reversal of the famous ‘AI effect’. “AI is whatever hasn’t been done yet,” as Douglas Hofstadter put it (crediting Larry Tesler), describing people’s traditional reluctance to call any technology AI, or ‘real intelligence’, as soon as it actually works. Pseudo-AI is the inverse phenomenon: companies and business people hurriedly calling things AI when, quite clearly, they are not.
This is not a brand-new phenomenon, either: in 2016, for example, Bloomberg highlighted the plight of humans spending 12 hours a day pretending to be chatbots for calendar scheduling services such as X.ai and Clara.
There are two main problems to address here:
1) The technology exists – so use it! There is AI technology on the market that can automate time-intensive, repetitive labour, reduce costs, and scale with business growth. It’s called Rainbird.
There’s no need for anyone to hire large numbers of staff to do manual work, then lie about it by passing that work off as scalable, sustainable AI.
2) Pseudo-AI is deceptive and disingenuous – everything a technology company shouldn’t be.
Most people working in tech are well aware of the elusive jargon that can cloud the industry – at times it can resemble a sea of buzzwords, contorted phraseology and confusing pseudo-idioms. But when this shadow play escalates into tech companies disguising human labour as AI, a line is crossed. The last thing we need is more companies claiming to be, or do, something they’re not.
As with so many ethical quandaries brought up by AI, it comes down to transparency. People should be able to understand how the AI technology they use or benefit from works. At the very least, they shouldn’t be misled.
A similar theme arose when Google announced Google Duplex – a startlingly realistic virtual assistant that pretends to be human, complete with bumbling ‘ums’ and ‘ers’. The problem was that people couldn’t tell whether they were talking to a person or a machine. The public outcry that followed demonstrated how much people value transparency. After the backlash, Google said its AI would identify itself to the humans it spoke to.
On the bright side, one thing you can say about pseudo-AI is that genuine, functioning AI – transforming industries and improving lives – is so exciting that companies will play with the truth just to be associated with it.
But it’s the ‘playing with the truth’ part that leaves a sour taste. One startup founder, of Woebot fame, called pseudo-AI the “‘Wizard of Oz’ design technique”. That figures: when the Wizard of Oz was revealed from behind his curtain as a fraud, the overriding feeling in cinemas in 1939 was one of disappointment.
Scalable, cost-effective AI technology is here, now. There’s no need to pretend otherwise.