Monday, March 12, 2012

Thinking the Unthinkable

Cassandra is a tragic figure in Greek mythology. A woman of extraordinary beauty and intelligence, Cassandra attracted the attention of Apollo, god of light and the sun. He granted her the gift of seeing the future, but when she didn't return his love, Apollo cursed her so that nobody would believe her prophecies. This left poor Cassandra not only with knowledge of things to come and no way to prevent them -- she warned the Trojans about the Trojan Horse to no avail -- it also made people think she was insane.

Cassandra's conundrum is one that was on my mind four years ago when I was at Sci Foo, sitting in a discussion led by Nick Bostrom and Martin Rees on "Existential Risks and Global Catastrophes." I wrote back then "the session was utterly pointless and I wish I had gone elsewhere." Bostrom's presentation was a rundown of risk estimates for certain catastrophe scenarios. The audience was not given so much as a hint where these numbers came from, nor was there any attempt to address these concerns.

Nick Bostrom is director of the Future of Humanity Institute, which has a nice website and a research staff that, except for him and a research fellow, seems to consist of associates. Bostrom is best known for putting forward the "Simulation Hypothesis," that is, the idea that we are living in a computer simulation. It is unfortunate that the extinction risk of somebody pulling the plug on the simulation that we wrongly believe is reality has gotten mixed up with more conservative concerns like pandemics, nuclear terrorism or nanotech weapons. The PDF with the risk assessments from 2008 is on the website too; if you have a look you'll understand why I didn't find it particularly insightful.

In a conversation at last year's FQXi conference the simulation hypothesis came up, mixed with Jaan Tallinn's worry that artificial intelligence, once created, might decide humans are too dumb to be kept around. What are we supposed to do to prevent The Simulator from pulling the plug, I wondered aloud, and Max Tegmark said that above all things we should be interesting. And there, right in that instant, all of Tegmark's papers suddenly made sense to me. Though, as Anthony Aguirre remarked, the guy made it all through the Pleistocene, so how difficult can it be?

Leaving aside the question why it's a guy coding our earthly miseries, it is terribly easy to make fun of Tallinn's and Bostrom's existential worries. It doesn't even help that Nick Bostrom, from what I recall of his presentation, is a very serious person indeed. I doubt I would be able to talk for half an hour about the risk of human extinction without making a series of jokes. But then, Bostrom's job is being serious about it.

I guess that most people prefer not to think too much about the extinction of the human race. Yet somebody has to do it. So, despite the ridicule, we should be grateful Bostrom is doing the job of putting numbers on the facts that we know of, even if nobody wants to hear them. The above-mentioned risk assessment comes to the conclusion that the
"Overall risk of extinction prior to 2100 is 19%"

which isn't exactly going to make a good anecdote at your next dinner party.

So in 2100, we're either all dead or we're not, but then you already knew that. The only purpose I can see in putting a number on the extinction risk is to find a way to keep it down. But then the question becomes more involved than it seems at first sight: We have to ask what we want to achieve, and what's the rationale for that? For bringing down the risk will come at a price, and the mere fact that Bostrom's cassandraing isn't having much of an impact tells us that the price is too high to pay for most of us.

The Atlantic recently had an interview with Bostrom which touches on the point that I found so missing in the 2008 discussion:
"[S]uppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time, just as many people think that from a fundamental moral point of view, it doesn't matter where somebody is spatially---somebody isn't automatically worth less because you move them to the moon or to Africa or something. A human life is a human life. If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do."

That is one part of the question: how do you value or devalue the future. But a more important part is what we want to optimize to begin with. Bostrom's mission is apparently to maximize the number of humans that will have lived before the heat death of the universe:
"Well, you might think that an extinction occurring at the time of the heat death of the universe would be in some sense mature. There might be fundamental physical limits to how long information processing can continue in this universe of ours, and if we reached that level there would be extinction, but it would be the best possible scenario that could have been achieved. I wouldn't count that as an existential catastrophe, rather it would be a kind of success scenario. So it's not necessary to survive infinitely long, which after all might be physically impossible, in order to have successfully avoided existential risk."

I don't really know what to make of Bostrom's tendency to answer questions with "Well, you might think" rather than "I think" but apparently his idea of success is to reproduce plentifully before Game Over. But why should we live according to what Nick Bostrom might think? Maybe I would prefer blowing up the planet when we're out of oil and all dying together. Who decided that Nick Bostrom must be pleased about mankind?

The underlying issue is intricate because we can't just count heads, we also have to take into account quality of life and the multitude of people's opinions of what constitutes good life.

And that brings us to the question how to measure and aggregate quality of life, and how to weigh a reduction in quality of life today against an increase in quality of life in the future, which opens a whole can of moral and political worms crawling all over the place. There is presently no good answer to this question, except of course my answer, which is that we shouldn't attempt to measure and aggregate happiness, but instead possibilities.

I therefore think that the main challenge we are facing is not to quantify existential risks, but how to integrate scientific insights - these and others - into our social and political systems.

But while I believe that thinking about existential risks is not our main challenge I am very sympathetic to Bostrom's mission. I believe he is right in that the rapid technological progress that we have seen in the last decades poses unprecedented risks that we should take very seriously. Somebody has to be the one to say what nobody wants to hear.

If Cassandra had not been cursed and been able to warn the Trojans, she would have spoiled her own prophecy; it was only her being cursed that enabled her to make good predictions. Let's hope that Bostrom is on good grounds with Apollo.

25 comments:

Arun said...

The underlying issue is intricate because we can't just count heads, we also have to take into account quality of life and the multitude of people's opinions of what constitutes good life.

Hi Bee,

Lots of food for thought here. I think the key question is whether it is legitimate to want a quality of life that is not sustainable (by the environment, resource base, etc.).

Sustainable things do not pose a problem.

We would clearly agree that no matter how we decide to aggregate and weigh people's aspirations, opinions that require a physical impossibility (e.g., something that violates the second law of thermodynamics) can be safely ignored.

Unsustainable things, while physically feasible, are limited in time range, when either resources run out or the damage to the environment starts turning lethal. Of course, what is unsustainable is, in part, dependent on the state of technology. It is here that balancing current humans versus future humans becomes problematic, because if current humans do unsustainable things, future humans not only cannot have these things, but may be further restricted.

-Arun

Arun said...

So imagine, say, a forest that can regenerate at 2% per year. Allowing for things like forest fires and insect infestations, cutting, say, about 1.5% of the forest per year can be sustained indefinitely, because regeneration covers the amount we cut.

Is it a legitimate aspiration for someone to want a quality of life that requires cutting 10% of the forest per year?
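Arun's numbers can be checked with a two-line growth model. Here is a minimal sketch: the 2% regeneration rate and the two cut rates come from the comment above, while the 100-year horizon and the multiplicative model are illustrative assumptions of mine.

```python
# A minimal sketch of Arun's forest arithmetic: the 2% regeneration rate
# and the two cut rates come from the comment above; the 100-year horizon
# and the multiplicative model are illustrative assumptions.

def forest_stock(cut_rate, regen_rate=0.02, years=100, stock=1.0):
    """Forest stock (as a fraction of the initial stock) after `years`."""
    for _ in range(years):
        stock *= (1 + regen_rate)  # regrowth on the remaining stock
        stock *= (1 - cut_rate)    # harvest a fraction of what's there
    return stock

print(forest_stock(0.015))  # net ~ +0.5%/year: the forest slowly grows
print(forest_stock(0.10))   # net ~ -8%/year: essentially gone in a century
```

With a 1.5% cut the stock compounds at roughly +0.5% per year and holds up indefinitely; with a 10% cut it shrinks by roughly 8% per year and is practically gone within a century.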

Robert L. Oldershaw said...

Cassandra (from beyond the grave):

"The celebrity physicists, one by one are going insane, e.g., Nielsen, Tegmark, Kaku, Kane, Susskind, and a large number of non-celebrity theoretical physicists. Science is in peril!"

;)

Len Ornstein said...

Arun:

with respect to your "So imagine, say, a forest, that can regenerate at 2% per year."

See;

"Replacing coal with wood: sustainable, eco-neutral, conservation harvest of natural tree-fall in old-growth forests"

http://www.springerlink.com/openurl.asp?genre=article&id=doi:10.1007/s10584-009-9625-z

and

"Irrigated afforestation of the Sahara and Australian Outback to end global warming."

http://www.springerlink.com/openurl.asp?genre=article&id=doi:10.1007/s10584-009-9626-y

Bee said...

Dear Arun,

Yes, I would agree, sustainability seems a rational requirement. However, two problems with this:

1) It is possible to temporarily live unsustainably, and this might be justified in some circumstances, but under which? We presently don't live sustainably, and this brings up the question how, and how quickly, we have to work towards sustainability, and who pays for that?

2) There is still the problem how to integrate such a requirement into our social and political systems.

Whichever way I turn it, I end up thinking that the first thing we should work on are our global decision-making processes. Look at all the work that went into the question of sustainability etc., and how little of it bears any relevance for our economic and political systems: because it is possible to just ignore it. There is something going very, very wrong here, and that's that we have no institutionalized penalty for unsustainable living. Consider the same issue on the scale of a human rather than on the scale of human society: If you live unsustainably (say, you're not drinking enough) you'll get very clear and impossible-to-ignore warning signs. We have evolved these warning signs so we don't just forget to drink and drop dead one day. There's no similar system on a global scale. (And note that it is possible to not live sustainably for some period if it seems necessary.) Best,

B.

A. Mikovic said...

My problem with the statement "the risk of human extinction by 2100 is 19%" is that it cannot be checked, and probabilities of this type are meaningless. Since the meaning of a probability is the frequency of a measurement, checking such a statement would entail finding an ensemble of 100 identical Earths and waiting 90 years or less in each case, in order to detect an extinction in 19 exemplars. The same problem appears in quantum cosmology, where a similar statement about the creation of the Universe is also meaningless, since one cannot use the frequency interpretation of probability.
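For illustration, here is what the frequency interpretation would demand, as a toy Monte Carlo over a fictitious ensemble of Earths. The constant annual hazard below is reverse-engineered from the quoted 19% figure purely for the sake of the example; nothing here reflects how Bostrom's number was actually obtained.

```python
import random

# Toy illustration of what the frequency interpretation would demand:
# an ensemble of identical Earths, each with a constant annual hazard.
# The hazard is reverse-engineered from the quoted 19% figure for the
# sake of the example; it is not how Bostrom's number was obtained.

YEARS = 88  # 2012 to 2100
HAZARD = 1 - (1 - 0.19) ** (1 / YEARS)  # annual risk implying 19% over 88 years

def extinct_by_2100(rng):
    """One simulated 'Earth': True if extinction strikes within YEARS years."""
    return any(rng.random() < HAZARD for _ in range(YEARS))

rng = random.Random(42)
trials = 100_000
frequency = sum(extinct_by_2100(rng) for _ in range(trials)) / trials
print(frequency)  # lands near 0.19 -- but only because we built it in
```

The simulated frequency comes out near 0.19 only because the hazard was chosen to produce it, which is exactly the objection: absent an ensemble of real Earths, the number cannot be checked against anything.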

Bee said...

Hi A. Mikovic,

Yes, that's right. That's why I wrote the only point I can see in putting forward these numbers is as a quantification for how well we're doing, with the aim of keeping the numbers down. Best,

B.

Phil Warnell said...

Hi Bee,

My least favourite probability-based predictions are those used to forecast the weather, such as "there is a 50% chance of rain," as I'm not certain why it's cast in the context of a warning, as if it were something undesirable. My way of looking at all this is that it's hard enough to deal with the present without being overly concerned with the future; or in other words, the best we can do is to address the present seriously and let the future bother with itself.

”Time present and time past
Are both perhaps present in time future,
And time future contained in time past.
If all time is eternally present
All time is unredeemable.
What might have been is an abstraction
Remaining a perpetual possibility
Only in a world of speculation.
What might have been and what has been
Point to one end, which is always present.
Footfalls echo in the memory
Down the passage which we did not take
Towards the door we never opened
Into the rose-garden. My words echo
Thus, in your mind.

But to what purpose
Disturbing the dust on a bowl of rose-leaves
I do not know.”


T.S. Eliot, “Burnt Norton” (1936)


Best,

Phil

Arun said...

The meaning of probability here (of human extinction) has the same meaning as the odds in a horse race.

Uncle Al said...

"most people prefer not to think too much about the extinction of the human race. Yet somebody has to do it." Thinking or extincting? 1) Quality matters, 2) reproduction always wins. We have abandoned (1) for (2). Classical Greek rational democracy, risk and culling, is "unfair." Scattered gifts, profuse punishment, and ignorance stabilize feudalism. The only crime is insubordination.

http://blogs.nature.com/news/2012/03/sustainable-fishing-targets-could-put-15-million-out-of-work.html

Official Truth has fish reproduction proportional to X^2: given X (adult) fish, it is (X/2)^2. Predation upon released eggs during spawning (1-a), and upon tiny fish after hatching (1-b), matter: (1-b){[(X/2)(1-a)](X/2)}^2. Young fish are inert. The largest fish output the most sperm and eggs, thence baby fish. The largest fish are preferentially caught. Fisheries are already dead.

The ability of citizens to produce value then compassionately confiscated is not linear with effort. It shrinks with imposed increased losses, then catastrophically collapses. Nobody is responsible for the result.

A. Mikovic said...

The probability of an extinction is not the same as the probability in a horse race, since a horse has a certain probability due to its results in the previous races, while in the case of the Earth there is only one race, which is in progress.

Bee said...

Ah, no, there's a race every day and so far we've won all of them. The difficulty with comparing the probability of extinction with odds in a horse race is that it's pointless betting anything else than we'll win tomorrow too, it's a lose-lose situation.

Giotis said...

'Dead' de Sitter, definitely my favourite space.

No observers no problems. Boltzmann Brains don't count.

CapitalistImperialistPig said...
This comment has been removed by the author.
CapitalistImperialistPig said...

Cassandra by ABBA

Arun said...

The probability of an extinction is not the same as the probability in a horse race, since a horse has a certain probability due to its results in the previous races, while in the case of the Earth there is only one race, which is in progress.

Well, even horses running their first race get odds. Probabilities of unrepeatable events have a meaning.

The difficulty with comparing the probability of extinction with odds in a horse race is that it's pointless betting anything else than we'll win tomorrow too, it's a lose-lose situation.

Well, life insurance deals with the mortality of the individual. It may be no less meaningful to deal with the mortality of a species.

Second, if we could muster the political will to do something, then a measure of our risk is useful: it can tell us whether our actions are reducing the risk or not.

Arun said...

If you don't like thinking about human extinction, think e.g., - what is the probability of tigers going extinct?

Quote: "A research project designed to model the effects of tiger poaching in Russia and India by John S. Kenney of Maine's Department of Inland Fisheries and Wildlife has determined via computer modeling that even a small increase in poaching drastically increases the threat of the endangered tigers' extinction.

To make the model, the scientists used data collected for over 20 years on the survival rates and behavior of tigers in Nepal's Royal Chitwan National Park. In addition, they estimated that every normal-sized tiger group worldwide loses 5 to 10 of its 120 or so members to poaching each year. They then used the model to predict effects of different poaching patterns.

The model predicts that if poachers killed 10 of the animals in a tiger group every year for three years, the group would have less than a 20 percent chance of extinction in the 75 years after poaching stopped. Destroying 15 tigers a year for 3 years, however, bumps the probability of extinction up to 50 percent. If poachers kill 15 tigers in a group each year for six years, or 10 animals for nine years, this will destroy the group.

If poaching continues at its current rate, researchers have predicted that many if not all the tiger clans will be wiped out in the near future.

Tiger populations can appear stable yet fail to withstand an unexpected disaster, such as bad weather, disease or reproductive problems. Add to this the devastating losses the populations suffer due to poaching, and one can see that the challenges the endangered tiger faces will be extremely difficult to overcome in order to survive."

http://www.tigersincrisis.com/trade_tigers.htm

Are you going to tell me that this above is meaningless because tiger extinction is a one-time event?
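The kind of model the quote describes, a stochastic birth-death process run many times to estimate an extinction probability, can be sketched as follows. All rates and group sizes below are invented for illustration; this is not Kenney's actual model.

```python
import random

# A generic stochastic birth-death sketch of the kind of model the tiger
# quote describes: a group of ~120 animals with random births and deaths,
# plus a fixed number poached per year, run many times to estimate an
# extinction probability. All rates and sizes are invented for
# illustration -- this is NOT Kenney et al.'s actual model.

def group_survives(poached_per_year, poaching_years, rng,
                   start=120, birth=0.10, death=0.095, horizon=75):
    """Simulate one tiger group; True if it outlives poaching + horizon."""
    pop = start
    for year in range(poaching_years + horizon):
        births = sum(rng.random() < birth for _ in range(pop))
        deaths = sum(rng.random() < death for _ in range(pop))
        pop += births - deaths
        if year < poaching_years:
            pop -= poached_per_year
        if pop <= 0:
            return False
    return True

def extinction_prob(poached, years, trials=200, seed=1):
    """Fraction of simulated groups that die out."""
    rng = random.Random(seed)
    died = sum(not group_survives(poached, years, rng) for _ in range(trials))
    return died / trials

print(extinction_prob(10, 3))  # light poaching: most groups recover
print(extinction_prob(15, 6))  # heavy poaching: many more groups die out
```

Even this toy version reproduces the qualitative finding: heavier or longer poaching pushes the group low enough that random fluctuations finish it off, raising the fraction of simulated groups that go extinct.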

CapitalistImperialistPig said...

It's possible that the robots will keep a few humans around for sentimental purposes - if they happen to be sentimental. Actually, it's possible that they already are doing that, and most (all?) of what we perceive to be other people are already simulations designed to provide us with a quasi-natural environment. Or perhaps we are all now just simulations.

Mud said...

"I therefore think that the main challenge we are facing is not to quantify existential risks, but how to integrate scientific insights - these and others - into our social and political systems."

In accordance with this topic, you are speaking of risk management, no? Risk management for whom?

Bee said...

Dear Arun,

Yes, insurances deal with mortality of individuals, but that only makes sense because the beneficiaries are not the deceased themselves. I'll be happy to insure you and everybody else on the planet against the extinction of the human race for as little as $10 per month. Best,

B.

Bee said...

Hi Mud,

I am not only talking about risk management, though that's part of it. I simply mean we can talk all day long about the risk of extinction, in the end it won't matter because nobody will pay attention. There's no mechanism by which any scientific consideration (including the inevitable uncertainties) is integrated into our political system. There's just this vague notion of "informing the policy makers." For whom? That's a problem which exists on all scales from the communities to the global scale. And it is, imo, the major problem that we better solve really quickly. Best,

B.

Eric said...

Hi Bee,
What really bothers me about thinking the unthinkable is the wasteful use of intellectual resources. I wish people would focus on things occurring in the future that do not require a whole host of contingent assumptions for any given proposed final result. The old saying "if my mother had wheels she'd be a trolley" sort of sums up the worth of these discussions.

That isn't to say that thinking about the future is bad. It isn't. But we should be aware when we are engaging in mental masturbation that has no chance of accomplishing a useful goal.

Eric said...

But we should be aware when we are engaging in mental masturbation that has no chance of accomplishing a useful goal.

After reading that last sentence I realized I was being too pessimistic. If we suddenly hear paroxysms of ecstasy in the blogosphere coming from Martin Rees and Nick Bostrom, we will know that their mental masturbation has indeed accomplished a useful goal.

Mud said...

Well, I guess you could say that the climate debate, global warming so to speak, and the policies established from it are a useful method to learn from. If I recall correctly, although I can't remember the source, there was a study on global policy making with diplomats from each country playing the role of policy makers, which yielded no agreement on any new policies. That leaves only emergency response scenarios at the forefront of handling any risks of continuing our earthly way of life; discontinuities are not only inevitable, they can also be sacrificial.