Have you ever put away a bag of chips because you’ve heard they aren’t healthy? That makes sense. Have you ever put away a bag of chips because you want to increase your chances of having more children so we can populate the entire galaxy in a billion years? That makes… That makes you a longtermist. Longtermism is a philosophy currently popular among rich people like Elon Musk, Peter Thiel, and Jaan Tallinn. What do they believe and how crazy is it? That’s what we’ll talk about today.
The first time I heard of longtermism I thought it was about terms of agreement that get longer and longer. But no. Longtermism is the philosophical idea that the long-term future of humanity is way more important than the present and that those alive today, so you, presumably, should make sacrifices for the good of all the generations to come.
Longtermism has its roots in the effective altruism movement, whose followers try to be smart about donating money so that it has the biggest impact, for example by telling everyone how smart they are about donating money. Longtermists are concerned with what our future will look like in a billion years or more. Their goal is to make sure that we don’t go extinct. So stop being selfish, put away that junk food and make babies.
The key argument of longtermists is that our planet will remain habitable for a few billion years, which means that most people who’ll ever be alive are yet to be born.
Here’s a visual illustration of this. Each grain of sand in this hourglass represents 10 million people. The red grains are those who lived in the past, about 110 billion. The green ones are those alive today, that’s about 8 billion more. But that is just a tiny part of all the lives that are yet to come.
A conservative estimate is to assume that our planet will be populated by at least a billion people for at least a billion years, so that’s a billion billion human life years. With a life span of 100 years, that’d be about 10 to the 16 human lives. If we go on to populate the galaxy or maybe even other galaxies, this number explodes into billions and billions and billions.
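To make that arithmetic explicit, here’s a minimal back-of-the-envelope sketch in Python, using only the numbers just quoted; the variable names are mine, purely for illustration.

```python
# Back-of-the-envelope count of future human lives, using the numbers above.
people_alive_at_any_time = 1e9   # at least a billion people
habitable_years = 1e9            # for at least a billion years
life_span_years = 100            # assumed life span

human_life_years = people_alive_at_any_time * habitable_years  # 1e18 life-years
future_lives = human_life_years / life_span_years              # about 1e16 lives

print(f"{future_lives:.0e}")  # -> 1e+16
```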
Unless. We go extinct. Therefore, the first and foremost priority of longtermists is to minimize “existential risks.” This includes events that could lead to human extinction, like an asteroid hitting our planet, a nuclear world war, or stuffing the trash so tightly into the bin that it collapses to a black hole. Unlike effective altruists, longtermists don’t really care about famines or floods because those won’t lead to extinction.
One person who has been pushing longtermism is the philosopher Nick Bostrom. Yes, that’s the same Bostrom who believes we live in a computer simulation because his maths told him so. The first time I heard him give a talk was in 2008 and he was discussing the existential risk that the programmer might pull the plug on that simulation we supposedly live in. In 2009 he wrote a paper arguing:
“A non-existential disaster causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback: a giant massacre for man, a small misstep for mankind.”
Yeah, the breakdown of global civilization is exactly what I would call a small misstep. But Bostrom wasn’t done. By 2013 he was calculating the value of human lives: “We find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives [in the present]. One might consequently argue that even the tiniest reduction of existential risk has an expected value greater than that of the definite provision of any ‘ordinary’ good, such as the direct benefit of saving 1 billion lives.”
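To see the style of reasoning behind such numbers, here’s a hedged sketch of the generic expected-value arithmetic. The inputs are placeholders I chose so that the ratio matches the quote; they are not Bostrom’s own figures.

```python
# Generic expected-value arithmetic of the kind used in such arguments.
# All inputs are illustrative placeholders, not Bostrom's own estimates.
assumed_future_lives = 1e40           # hypothetical galaxy-scale future population
risk_reduction = 1e-9 * 1e-9 * 1e-2   # "one billionth of one billionth of one percentage point"

expected_lives_saved = risk_reduction * assumed_future_lives   # 1e20
lives_saved_today = 1e9                                        # "saving 1 billion lives"

# -> 1e11, i.e. "a hundred billion times as much"
print(expected_lives_saved / lives_saved_today)
```

The point is that once you plug in a large enough number for future lives, any tiny reduction in extinction risk outweighs any present-day good.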
Hey, maths doesn’t lie, so I guess that means it’s okay to sacrifice a billion people or so. Unless possibly you’re one of them. Which Bostrom probably isn’t particularly worried about, because he is now director of the Future of Humanity Institute in Oxford, where he makes a living from multiplying powers of ten. But I don’t want to be unfair. Bostrom’s magnificent paper also has a figure to support his argument that I don’t want to withhold from you, here we go, I hope that explains it all.
By the way, this nice graphic we saw earlier comes from Our World in Data, which is also located in Oxford. Certainly complete coincidence. Another person who has been promoting longtermism is William MacAskill. He is a professor of philosophy at, guess what, the University of Oxford. MacAskill recently published a book titled “What We Owe The Future”.
I didn’t read the book because if the future thinks I owe it, I’ll wait until it sends an invoice. But I did read a paper that MacAskill wrote in 2019 with his colleague Hilary Greaves, titled “The case for strong longtermism”. Hilary Greaves is a philosopher and director of the Global Priorities Institute, which is located in, surprise, Oxford. In their paper they discuss a version of longtermism in which decision makers choose “the option whose effects on the very long-run future are best,” while ignoring the short term. In their own words:
“The idea is then that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects.”
So in the next 100 years, anything goes so long as we don’t go extinct. Interestingly enough, the above passage was later removed from their paper and can no longer be found in the 2021 version.
In case you think this is an exclusively Oxford endeavour, the Americans have a similar think tank in Cambridge, Massachusetts, called The Future of Life Institute. It’s supported, among others, by the billionaires Peter Thiel, Elon Musk, and Jaan Tallinn, who have expressed their sympathy for longtermist thinking. Musk for example recently commented that MacAskill’s book “is a close match for [his] philosophy”. So in a nutshell, longtermists say that the conditions we currently live in don’t play a big role and a few million deaths are acceptable, so long as we don’t go extinct.
Not everyone is a fan of longtermism. I can’t think of a reason why. I mean, the last time a self-declared intellectual elite said it’s okay to sacrifice some million people for the greater good, the only thing that happened was a world war, just a “small misstep for mankind.”
But some people have criticized longtermists. For example, the Australian philosopher Peter Singer. He is one of the founders of the effective altruism movement, and he isn’t pleased that his followers are flocking over to longtermism. In a 2015 book titled The Most Good You Can Do, he writes:
“To refer to donating to help the global poor or reduce animal suffering as a “feel-good project” on which resources are “frittered away” is harsh language. It no doubt reflects Bostrom’s frustration that existential risk reduction is not receiving the attention it should have, on the basis of its expected utility. Using such language is nevertheless likely to be counterproductive. We need to encourage more people to be effective altruists, and causes like helping the global poor are more likely to draw people toward thinking and acting as effective altruists than the cause of reducing existential risk.”
Basically, Singer wants Bostrom and his like to shut up because he’s afraid people will just use longtermism as an excuse to stop donating to Africa, without any benefit to existential risk reduction. And that might well be true, but it’s not a particularly convincing argument if the people you’re dealing with have a net worth of several hundred billion dollars. Or if their “expected utility” of “existential risk reduction” is that their institute gets more money.
Singer’s second argument is that it’s kind of tragic if people die. He writes that longtermism “overlooks what is really so tragic about premature death: that it cuts short the lives of specific living persons whose plans and goals are thwarted.”
No shit. But then he goes on to make an important point: “just how bad the extinction of intelligent life on our planet would be depends crucially on how we value lives that have not yet begun and perhaps never will begin.” Yes, indeed, the entire argument for longtermism depends crucially on how much value you put on future lives. I’ll say more about this in a minute, but first let’s look at some other criticism.
The cognitive scientist Steven Pinker, after reading MacAskill’s What We Owe The Future, shared a similar reaction on Twitter, in which he complained about: “Certainty about matters on which we’re ignorant, since the future is a garden of exponentially forking paths; stipulating correct answers to unsolvable philosophical conundrums [and] blithe confidence in tech advances played out in the imagination that may never happen.”
The media also doesn’t take kindly to longtermism. Some, like Singer, complain that longtermism draws followers away from the effective altruism movement. Others argue that the technocratic vision of longtermists is anti-democratic. For example, Time Magazine wrote that Elon Musk has “sold the fantasy that faith in the combined power of technology and the market could change the world without needing a role for the government”.
Christine Emba, in an opinion piece for the Washington Post, argued that “the turn to longtermism appears to be a projection of a hubris common to those in tech and finance, based on an unwarranted confidence in its adherents’ ability to predict the future and shape it to their liking” and that “longtermism seems tailor-made to allow tech, finance and philosophy elites to indulge their anti-humanistic tendencies while patting themselves on the back for their intelligence and superior IQs. The future becomes a clean slate onto which longtermists can project their moral certitude and pursue their techno-utopian fantasies, while flattering themselves that they are still “doing good.””
Okay, so now that we have seen what either side says, what are we to make of this?
The logic of longtermism hinges on the question of how much a life in the future is worth compared to a life today, while factoring in the uncertainty of that estimate. Two elements go into this evaluation. One is an uncertainty estimate for the future projection. The second is a moral value: how much future lives matter to you compared to present ones. This moral value is not something you can calculate. That’s why longtermism is a philosophical stance, not a scientific one. Longtermists try to sweep this under the rug by blinding the reader with numbers that look kind of sciencey.
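As a toy illustration of how much the conclusion hinges on that moral value, here’s a minimal sketch with invented numbers; both the probability estimate and the moral weights are assumptions, not anyone’s published figures.

```python
# Toy illustration: the "value" of the far future depends on two inputs
# you have to supply yourself. All numbers are invented for illustration.
def expected_moral_value(future_lives, probability_estimate, moral_weight):
    # probability_estimate: how likely you think that future is to be realized
    # moral_weight: how much one future life counts relative to one present life
    return future_lives * probability_estimate * moral_weight

future_lives = 1e16    # the "conservative" estimate from earlier
present_lives = 8e9    # people alive today

# Same uncertainty estimate, different moral weights, opposite conclusions.
for weight in (1.0, 1e-10):
    value = expected_moral_value(future_lives, probability_estimate=1e-3, moral_weight=weight)
    print(f"weight={weight:g}: value={value:g}, outweighs the present: {value > present_lives}")
```

The uncertainty estimate is, at least in principle, a question of science; the moral weight is not, and that is where the actual disagreement sits.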
To see how difficult these arguments are, it’s useful to look at a thought experiment known as Pascal’s mugging. Imagine you’re in a dark alley. A stranger steps in front of you and says “Excuse me, I’ve forgotten my knife but I’m a mugger, so please give me your wallet.” Do you give him your money? Probably not.
But then he offers to pay back double the money in your wallet next month. Do you give him your money? Hell no, he’s almost certainly lying. But what if he offers a hundred times more? Or a million times? Going by economic logic, at some point the risk of losing your money because he might be lying becomes worth taking, since you can’t be sure he’s lying. Say you consider the chances of him being honest to be 1 in 10,000. If he offered to return 100 thousand times the amount of money in your wallet, the expected return would be larger than the expected loss.
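Here’s that expected-value arithmetic spelled out, using the numbers from the example above; the wallet amount is arbitrary.

```python
# Expected-value arithmetic for Pascal's mugging, with the numbers from above.
wallet = 100.0                 # arbitrary amount in your wallet
p_honest = 1 / 10_000          # your estimate that the mugger keeps his word
promised_multiple = 100_000    # he offers to return 100,000 times your wallet

expected_gain = p_honest * promised_multiple * wallet   # 10 times the wallet
expected_loss = (1 - p_honest) * wallet                 # roughly the wallet

print(expected_gain > expected_loss)  # True: by this logic you should hand it over
```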
But most people wouldn’t use that logic. They wouldn’t give the guy their money no matter what he promises. If you disagree: I have a friend who is a prince in Nigeria. If you send him 100 dollars, he’ll send back a billion. Just leave your email in the comments and we’ll get in touch.
The point of this thought experiment is that there’s a second logical way to react to the mugger. Rather than calculating the expected wins and losses, you note that if you agree to his terms at any value, then anyone can use the same strategy to take literally everything from you. Because so long as your risk assessment is finite, there’s always some promise that’ll make the deal worth the risk. In that case you’d lose all your money and property, and quite possibly also your life, just because someone made a promise that’s big enough. This doesn’t make any sense, so it’s reasonable to refuse to give money to the mugger. I’m sure you’re glad to hear it.
What’s the relation to longtermism? In both cases the problem is how to assign a probability to unlikely future events. For Pascal’s mugger, that’s the unlikely event that the mugger will actually do what he promised. For longtermism, the unlikely events are the existential threats. In both cases our intuitive reaction is to entirely disregard them, because if we didn’t, the logical conclusion seems to be that we’d have to spend as much as we can on these unlikely events about which we know the least. And this is basically why longtermists think people who are currently alive are expendable.
However, when you’re arguing about the value of human lives you are inevitably making a moral argument that can’t be derived from logic alone. There’s nothing irrational about saying you don’t care about starving children in Africa. There’s also nothing irrational about saying you don’t care about people who may or may not live on Mars in a billion years. It’s a question of what your moral values are.
Personally, I think it’s good to have long-term strategies. Not just for the next 10 or 20 years, but also for the next 10 thousand or 10 billion years. So I really appreciate the longtermists’ focus on the prevention of existential risks. However, I also think they underestimate just how much technological progress depends on the reliability and sustainability of our current political, economic, and ecological systems. Progress needs ideas, and ideas come from brains, which have to be fed both with food and with knowledge. So you know what, I would say: grab a bag of chips and watch a few more videos.