Cassandra's conundrum is one that was on my mind four years ago when I was at Sci Foo, sitting in a discussion led by Nick Bostrom and Martin Rees on "Existential Risks and Global Catastrophes." I wrote back then that "the session was utterly pointless and I wish I had gone elsewhere." Bostrom's presentation was a rundown of risk estimates for certain catastrophe scenarios. The audience was not given so much as a hint of where these numbers came from, nor was there any attempt to discuss what might be done about these risks.
Nick Bostrom is director of the Future of Humanity Institute, which has a nice website and a research staff that, except for him and one research fellow, seems to consist of associates. Bostrom is best known for putting forward the "Simulation Hypothesis," that is, the idea that we are living in a computer simulation. It is unfortunate that the extinction risk of somebody pulling the plug on the simulation we wrongly believe to be reality has gotten mixed up with more conservative concerns like pandemics, nuclear terrorism, or nanotech weapons. The PDF with the risk assessments from 2008 is on the website too; if you have a look, you'll understand why I didn't find it particularly insightful.
In a conversation at last year's FQXi conference, the simulation hypothesis came up, mixed with Jaan Tallinn's worry that artificial intelligence, once created, might decide humans are too dumb to be kept around. What are we supposed to do to prevent The Simulator from pulling the plug, I wondered out loud, and Max Tegmark said that, above all things, we should be interesting. And there, right in that instant, all of Tegmark's papers suddenly made sense to me. Though, as Anthony Aguirre remarked, the guy made it all through the Pleistocene, so how difficult can it be?
Leaving aside the question of why it's a guy coding our earthly miseries, it is terribly easy to make fun of Tallinn's and Bostrom's existential worries. It doesn't even help that Nick Bostrom, from what I recall of his presentation, is a very serious person indeed. I doubt I would be able to talk for half an hour about the risk of human extinction without making a series of jokes. But then, Bostrom's job is being serious about it.
I guess that most people prefer not to think too much about the extinction of the human race. Yet somebody has to do it. So, despite the ridicule, we should be grateful that Bostrom is doing the job of putting numbers on the risks we know of, even if nobody wants to hear them. The above-mentioned risk assessment comes to the conclusion that the
"Overall risk of extinction prior to 2100 is 19%"
which isn't exactly going to make a good anecdote at your next dinner party.
So in 2100 we're either all dead or we're not, but then you already knew that. The only purpose I can see in putting a number on the extinction risk is to find a way to keep it down. But then the question becomes more involved than it seems at first sight: we have to ask what we want to achieve, and what the rationale for that is. For bringing down the risk will come at a price, and the mere fact that Bostrom's cassandraing isn't having much of an impact tells us that, for most of us, the price is too high to pay.
The Atlantic recently had an interview with Bostrom that touches on exactly the point I found so missing in the 2008 discussion:
"[S]uppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time, just as many people think that from a fundamental moral point of view, it doesn't matter where somebody is spatially---somebody isn't automatically worth less because you move them to the moon or to Africa or something. A human life is a human life. If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do."
That is one part of the question: how do you value, or devalue, the future. But a more important part is what you want to optimize to begin with. Bostrom's mission is apparently to maximize the number of humans who will have lived before the heat death of the universe:
"Well, you might think that an extinction occurring at the time of the heat death of the universe would be in some sense mature. There might be fundamental physical limits to how long information processing can continue in this universe of ours, and if we reached that level there would be extinction, but it would be the best possible scenario that could have been achieved. I wouldn't count that as an existential catastrophe, rather it would be a kind of success scenario. So it's not necessary to survive infinitely long, which after all might be physically impossible, in order to have successfully avoided existential risk."
I don't really know what to make of Bostrom's tendency to answer questions with "Well, you might think" rather than "I think," but apparently his idea of success is to reproduce plentifully before Game Over. But why should we live according to what Nick Bostrom might think? Maybe I would prefer blowing up the planet when we're out of oil and all dying together. Who decided that mankind has to please Nick Bostrom?
The underlying issue is intricate because we can't just count heads; we also have to take into account quality of life and the multitude of people's opinions on what constitutes a good life.
And that brings us to the question of how to measure and aggregate quality of life, and how to weigh a reduction in quality of life today against an increase in quality of life in the future, which opens a whole can of moral and political worms crawling all over the place. There is presently no good answer to this question, except of course my answer, which is that we shouldn't attempt to measure and aggregate happiness, but possibilities instead.
I therefore think that the main challenge we are facing is not quantifying existential risks, but integrating scientific insights - these and others - into our social and political systems.
But while I believe that thinking about existential risks is not our main challenge, I am very sympathetic to Bostrom's mission. I believe he is right that the rapid technological progress of the last decades poses unprecedented risks that we should take very seriously. Somebody has to be the one to say what nobody wants to hear.
If Cassandra had not been cursed and had been able to warn the Trojans, she would have spoiled her own prophecy; it was only her being cursed that enabled her to make accurate predictions. Let's hope that Bostrom is on good terms with Apollo.