Rainbird Book Review – Superintelligence, Nick Bostrom

Wednesday 28 October 2015

This post is the first of a semi-regular feature taking a brief look at the books Rainbirders are reading.

Superintelligence: Paths, Dangers, Strategies
Nick Bostrom

As the fate of gorillas now rests with humanity rather than with gorillas themselves, so the fate of humanity would rest with a superintelligent machine should we ever create one. Self-preservation would be an inevitable goal of any such machine, so how could humanity exert control over it?

Set a superintelligent machine even a seemingly mundane task, searching for prime numbers for instance, and the result could be catastrophe. Nothing would stop the machine rapidly consuming all of Earth’s resources to achieve its goal. Nor could we simply turn it off: it is, after all, more intelligent than we are, and self-preservation is key to the machine achieving its goal.

To prevent this existential catastrophe, Bostrom argues, we must solve the ‘control problem’.

Nick Bostrom is a philosopher at the University of Oxford and founder of the Future of Humanity Institute, which takes a multidisciplinary approach to some of humanity’s biggest challenges.

Many have argued that humanity faces more significant and pressing threats than superintelligence. Climate change, nuclear war and food insecurity are all very real dangers that could prevent us from ever reaching the stage at which superintelligence becomes a concern.

Indeed, the book asks for a leap of faith. Bostrom sketches a little of the history of AI and proposes some future paths to human-level artificial general intelligence, yet the current state of the art is a long way from achieving anything close to this. Bostrom tackles the gap by presenting the aggregated view of a number of leading AGI researchers: that we are very likely to achieve human-level artificial intelligence by 2090 at the latest. This, though, disregards the view of many others that we have a significant gap in our understanding to bridge before we can even attempt such a prediction.

Once you accept that we may, at some point in the future, achieve such a feat, however, the book makes a very compelling argument for how big a challenge we face. Bostrom proposes that research be guided by a strict ethical framework.

Read this book with an open mind, but don’t go running for the fall-out shelter just yet.

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” says Bostrom.

Others, however, have argued that “The current ‘AI scare’ going on feels a bit like kids playing with Legos and worrying about accidentally creating a nuclear bomb”. [https://twitter.com/ryan_p_adams/status/563384710781734913]