Existential Risks: Climate Change and Superintelligence

Nick Bostrom’s new book, Superintelligence, is a fascinating read for anyone interested in existential risks. I imagine that includes most people concerned with climate change. Marty Weitzman’s analysis and my own calculations make it clear that a substantial part of the expected impact of climate change comes from the catastrophes we would face if the world turns out to be just a bit more sensitive to greenhouse gases, or a bit more vulnerable to their effects, than we thought.

Superintelligence seems to pose a similar threat. Artificial intelligence equal to the average human might be decades or more away, but once that point is reached, a superintelligence might emerge very rapidly. Before then we have to solve the twin problems of how to control it and how to give it aims that lead to human flourishing; afterwards it will be too late. Bostrom doubts that we will manage this, so he recommends we ‘put [the quest for artificial intelligence] down gently, and quickly back out of the room’, while concluding sadly that ‘some little idiot is bound to press the ignite button just to see what happens’.

In the week of the latest climate change summit, more and more people are coming round to the view that there is a relatively simple first step, climate change taxation, that we can take to bring the problem under control. What is more, it won’t cost the Earth. In fact, if we do it right, it probably won’t cost much at all. Now is surely the time for strong, comprehensive and sustained climate change taxes in willing countries around the world. Let’s not kid ourselves that ‘the greatest market failure the world has ever seen’ is actually the greatest problem humanity is likely to face. Other challenges await us.

A full review of Bostrom’s book can be found here if I have whetted your appetite.

Please consider leaving a comment or subscribing to the RSS feed to have future posts delivered to your feed reader.
