Saturday, July 30, 2022

Is the brain a computer?

If you like my content, you may also like my new newsletter, which you can sign up for here (bottom of the page). It's a weekly summary of the most interesting science news I came across in the past week. It's completely free and you can unsubscribe at any time.


[What follows is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


My grandmother was a computer, and I don’t mean there was a keypad on her chest. My grandmother calculated orbits of stars, with logarithmic tables and a slide rule. But in which sense are brains similar to the devices we currently call computers, and in which sense not? What’s the difference between what they can do? And is Roger Penrose right in saying that Gödel’s theorem tells us human thought can’t just be computation? That’s what we’ll talk about today.

If you have five apples and I give you two, how many apples do you have in total? Seven. That’s right. You just did a computation. Does that mean your brain is a computer? Well, that depends on what you mean by “computer” but it does mean that I have two fewer apples than I did before. Which I am starting to regret. Because I could really go for an apple right now. Could you give me one of them back?

So whether your brain is a computer depends on what you mean by “computer”. A first attempt at answering the question may be to say a computer is something that does a computation, and a computation, according to Google is “the action of mathematical calculation”. So in that sense the human brain is a computer.

But if you ask Google what a computer is, it says it’s “an electronic device for storing and processing data, typically in binary form, according to instructions given to it in a variable program”. The definition on Wikipedia is pretty much the same and I think this indeed captures what most of us mean by “computer”. It’s those things we carry around to brush up selfies, but that can also be used for, well, calculations.

Let’s look at this definition again in more detail. It’s an electronic device. It stores and processes data. The data are typically in binary form. And you can give it instructions in a variable program. Now the second and last points, storing and processing data and taking instructions, also apply to the human brain. This leaves the two properties that make a computer different from the human brain: it’s an electronic device and it typically uses binary data. So let’s look at these two.

That an electronic computer is “digital” just means that it works with discrete data, so data whose values are separated by steps, commonly in binary. Neurons in the brain, by contrast, behave very differently. Here’s a picture of a nerve ending. In orange and blue you see the parts of the synapse that release molecules called “neurotransmitters”. Neurotransmitters encode different signals, and neurons respond to those signals gradually and in many different ways. So a neuron is not like a binary switch that’s either on or off.

But maybe this isn’t a very important difference. For one thing, you can simulate a gradual response to input on a binary computer just by giving weights to variables. Indeed, there’s an entire branch of mathematics for reasoning with such inputs. It’s called fuzzy logic, and it’s the best logic to pet of all the logics. Trust me, I’m a physicist.

Neural networks, which are used for artificial intelligence, use a similar idea by giving weights to the nodes and sometimes also to the links of the network. Of course these algorithms still run on a physical basis that is ultimately discrete and binary. It’s just that on that binary basis you can mimic the gradual behavior of neurons very well. This already shows that saying a computer is digital whereas neurons aren’t may not be all that relevant.
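To make that concrete, here’s a minimal sketch (my addition, not from the video) of a single weighted “neuron” running on a digital computer: the output varies smoothly between 0 and 1, even though every number underneath is stored in bits.

```python
import math

def graded_neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed by a sigmoid: the response
    # varies smoothly between 0 and 1, unlike a binary switch
    # that is either on or off.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# The same input at different strengths produces a gradual response.
for strength in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(strength, graded_neuron([strength], [2.0], -2.0))
```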

Another reason this isn’t a particularly strong distinction is that digital computers aren’t the only computers that exist. Besides digital computers there are analog computers, which work with continuous data, often in electric, mechanical, or even hydraulic form. An example is the slide rule that my grandma used. But you can also use currents, voltages, and resistors to multiply numbers using Ohm’s law.
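In case you want to see how that works: by Ohm’s law, the current through a resistor is

$$ I = \frac{V}{R} = V \cdot G, \qquad G \equiv \frac{1}{R}, $$

so if you encode one number in the voltage V and the other in the conductance G, the current I that flows is their product. The physics does the multiplication for you.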

Analog computers are currently having something of a comeback, and it’s not because millennials want to take selfies with their record players. It’s because you can use analog computers for the matrix multiplications in neural networks. In an entirely digital neural network, a lot of energy is wasted in storing and accessing memory, and that can be bypassed by coding the multiplication directly into an analog element. But analog computers are only used for rather special cases, exactly because you need to find a physical system that does the computation for you.

Is the brain analog or digital? That’s a difficult question. On the one hand you could say that the brain works with continuous currents in a continuous space, so that’s analog. On the other hand, threshold effects can turn on and off suddenly and basically make continuous input discrete. And the currents in the brain are ultimately subject to quantum mechanics, so maybe they’re partly discrete.

But your brain is not a good place for serious quantum computing. For one thing, that’s because it’s too busy trying to remember how many seasons of Doctor Who there are, just in case anyone stops you on the street and asks. But more importantly, it’s because quantum effects get destroyed too easily. They don’t survive in warm and wiggly environments. It is possible that some neurological processes require quantum effects, but just how much is currently unclear; I’ll come back to this later.

Personally, I would say that the distinction that the brain isn’t digital, whereas the computers we currently use typically are, isn’t particularly meaningful. The reason we currently mostly use digital computers is that discrete data prevent errors and make the working of the machines highly reproducible.

Saying that a computer is an electronic device whereas the brain isn’t is likewise a distinction that we make in everyday language, alright, but it isn’t operationally relevant. For one thing, the brain also uses electric signals. More importantly, I think when we wonder what the difference is between a brain and a computer, we really wonder about what they can do and how they do it, not about what they’re made of or how they are made.

So let us therefore look a little closer at what brains and computers do and how they do it, starting with the latter: What’s the difference between how computers and brains do their thing?

Computers outperform humans in many tasks, for example in doing calculations. This is why my grandmother used those tables and slide rules. We can do calculations if we have to, but it takes a long time, it’s tedious, and it’s pretty clear that human brains aren’t all that great at multiplying 20-digit numbers.

But hey, we did manage to build machines that can do these calculations for us! And along the way we discovered electricity and semiconductors and programming and so on. So in some sense, you could say, we actually did learn to do those calculations. Just not with our own brains, because those are tired from memorizing facts about Doctor Who. But in case you are good at multiplying 20-digit numbers, you should totally bring that up at dinner parties. That way, you’ll finally have something to talk about.

This example captures the key difference between computers and human brains. The human brain took a long time to evolve. Natural selection has given us a multi-tasking machine for solving problems, a machine that’s really good at adapting to new situations with new problems. Present-day computers, on the contrary, are built for very specific purposes, and that’s what they’re good at. Even neural nets haven’t changed all that much about this specialization.

Don’t get me wrong, I think artificial intelligence is really interesting. There’s a lot we can do with it, and we’ve only just scratched the surface. Maybe one day it’ll actually be intelligent. But it doesn’t work like the human brain.

This is for several reasons. One reason, already mentioned above, is that in the human brain the neural structure is physical, whereas in a neural net it’s software coded on another physical basis.

But this might change soon. Some companies are producing computer chips that work similarly to neurons. The devices made of them are called “neuromorphic computers”. These chips have “neurons” that fire independently, so they are not synchronized by a clock, unlike in normal processors. An example of this technology is Intel’s Loihi 2, which has one million “neurons” interconnected via 120 million synapses. So maybe soon we’ll have computers with a physical basis similar to brains. Maybe I’ll finally be able to switch mine for one that hasn’t forgotten why it went to the kitchen by the time it gets there.

Another difference which may soon fade away is memory storage. At present, memory storage works very differently for computers and brains. In computers, memories are stored in specific places, for example your hard drive, where electric voltages change the magnetization of small units called memory cells between two different states. You can then read it out again or overwrite it, if you get tired of Kate Bush.

But in the brain, memories aren’t stored in just one place, and maybe not in places at all. Just how we remember things is still the subject of much research. But we know, for example, that motor memories, like riding a bike, use brain regions called the basal ganglia and cerebellum. Short-term working memory, on the other hand, heavily uses the prefrontal cortex. Then again, autobiographical memories of specific events in our lives use the hippocampus and can, over the course of time, be transferred to the neocortex.

As you see, memory storage in the brain is extremely complex and differentiated, which is probably why mine sometimes misplaces the information about why I went into the kitchen. And not only are there many different types of memory, it’s also that neurons both process and store information, whereas computers use different hardware for each.

However, on this account too, researchers are trying to make computers more similar to brains. For example, researchers from the University of California in San Diego are working on something called memcomputing, which combines data processing and memory storage in the same chip.

Maybe more importantly, the human brain has much more structure than the computers we currently use. It has areas that specialize in specific functions. For example, the so-called Broca’s area in the frontal lobe specializes in language processing and speech production; the hypothalamus controls, among other things, body temperature, hunger, and the circadian rhythm. We are also born with certain types of knowledge already, for example a fear of dangerous animals like spiders, snakes, or circus clowns. We also have brain circuits for stereo vision. If your eyes work correctly, your brain should be able to produce 3D information automatically; it’s not like you have to first calculate it and then program your brain.

Another example of pre-coded knowledge is a basic understanding of natural laws. Even infants understand, for example, that objects don’t normally just disappear. We could maybe say it’s a notion of basic locality. We’re born with it. And we also intuitively understand that things which move will take some time to come to a halt. The heavier they are, the longer it will take. So, basically Newton’s laws. They’re hardwired. The reason for this is probably that it benefits survival if infants don’t have to learn literally everything from scratch. I was upset to learn, though, that infants aren’t born knowing Gödel’s theorem. I want to talk to them about it, and I think nature needs to work on this.

That some of our knowledge is pre-coded into structure is probably also part of the reason why brains are vastly more energy efficient than today’s supercomputers. The human brain consumes on average 20 watts, whereas a supercomputer typically consumes a million times as much, sometimes more.

For example, Frontier, hosted at the Oak Ridge Leadership Computing Facility and currently the fastest supercomputer in the world, consumes 21 MW on average and 29 MW at peak performance. To run the thing, they had to build a new power line and a cooling system that pumps around 6000 gallons of water per minute. For those of you who don’t know what a gallon is, that’s a lot of water. The US Department of Energy is currently building a new supercomputer, Aurora, which is expected to become the world’s fastest computer by the end of the year. It will need about 60 MW.
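A quick check of the “million times” claim from above:

$$ \frac{21\ \mathrm{MW}}{20\ \mathrm{W}} = \frac{2.1 \times 10^{7}\ \mathrm{W}}{2 \times 10^{1}\ \mathrm{W}} \approx 1.05 \times 10^{6}. $$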

Again the reason that the human brain is so much more efficient is almost certainly natural selection, because saving energy benefits survival. Which is also what I tell my kids when they forget to turn the lights off when leaving a room.

Another item we can add to the list of differences is that the brain adapts and repairs itself, at least to some extent. This is why, if you think about it, brains are much more durable than computers. Brains work reasonably well for 80 years on average, sometimes as long as 120 years. No existing computer would last remotely as long. One particularly mind-blowing case (no pun intended) is that of Carlos Rodriguez, who had a bad car accident when he was 14. He had stolen the car, was on drugs, and crashed head first. Here he is in his own words.

Not only did he survive, he is in reasonably good health. Your computer is less likely to survive a crash than you, even if it remembered to wear its seatbelt. Sometimes it takes just a single circuit to fail and it’ll become useless. Supercomputing clusters need to be constantly repaired and maintained. A typical supercomputer cluster has more than a hundred maintenance stops a year and requires a staff of several hundred people just to keep working.

To name a final difference between the ways that brains and computers currently work: brains are still much better at parallel processing. The brain has about 80 billion neurons, and each of them can process more than one thing at a time. Even for so-called massively parallel supercomputers, these numbers are still science fiction. The current record holder for parallel processing is the Chinese supercomputer Sunway TaihuLight. It has 40,960 processing modules, each with 260 processor cores, which means a total of 10,649,600 processor cores! That’s of course very impressive, but still several orders of magnitude short of the 80 billion neurons that your brain has. And maybe it would have 90 billion if you stopped wasting all your time watching Doctor Who.

So those are some key differences between how brains and computers do things, now let us talk about the remaining point, what they can do.

Current computers, as we’ve seen, represent everything in bits, but not everything we know can be represented this way. It’s impossible, for example, to write down the number pi or any other irrational number in a finite sequence of bits. This means that not even the best supercomputer in the world can compute the area of a circle of radius 1 exactly; it can only approximate it. If we wanted to get pi exactly, it would take an infinite amount of time, like me trying to properly speak English. Fun fact: the current record for calculating digits of pi is 62.8 trillion digits.
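To see what “only approximate” means in practice, here’s a minimal sketch (my addition) that estimates the area of a unit circle by summing thin rectangular strips. More strips get you closer to pi, but no finite number of strips ever gets you there exactly.

```python
import math

def circle_area(n):
    # Approximate the unit circle's area as 4 times the area under
    # y = sqrt(1 - x^2) for x in [0, 1], using n rectangular strips.
    dx = 1.0 / n
    return 4.0 * sum(math.sqrt(1.0 - (i * dx) ** 2) * dx for i in range(n))

for n in (10, 1000, 100_000):
    approx = circle_area(n)
    print(n, approx, "error:", abs(approx - math.pi))
```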

But even though we can’t write down all the digits of pi, we can work with pi. We do this all the time. Though, just among us, it isn’t all that uncommon for theoretical physicists to set pi equal to 1.

In any case, we can deal with pi as an abstract transcendental number, whereas computers are constrained to finitely many digits. So this looks like the human brain can do something that computers can’t.

However, this would be jumping to conclusions. The human brain can’t hold all the digits of pi any more than a computer can. We just deal with pi as a mathematical definition with certain properties. And computers can do the same. With suitable software they are capable of abstract reasoning just like we are. If you ask your computer software whether pi is a rational number, it’ll hopefully say no. Unless it’s kidding, in which case maybe you can think of a more interesting conversation to have with it.
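For example, the SymPy library for Python treats pi as an exact symbolic object rather than a string of digits, so it can answer exactly the question above; a minimal sketch:

```python
import sympy

print(sympy.pi.is_rational)  # False -- SymPy reasons about pi symbolically
print(sympy.sin(sympy.pi))   # 0, exactly, with no rounding involved
print(sympy.pi.evalf(50))    # approximate digits only when you ask for them
```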

This brings us to an argument that Penrose has made, that human thought can’t be described by any computer algorithm. Penrose’s argument is basically this. Gödel showed that any sufficiently complex set of mathematical axioms can be used to construct statements that are true, but whose truth is unprovable within that system of axioms. The fact that we humans can nevertheless see the truth of such a Gödel sentence tells us, according to Penrose, that human thought can’t be just the execution of an algorithm.

Now, everything we know about classical mechanics can be captured very well in an algorithm. So if human thought isn’t algorithmic, Penrose reasons, then quantum mechanics must be the key ingredient for human consciousness. It’s not that he says consciousness affects quantum processes. It’s rather the other way round: quantum processes create consciousness. According to Penrose, at least.

But does this argument about Gödel’s theorem actually work? Think back to what I said earlier, computers are perfectly capable of abstract reasoning if programmed suitably. Indeed, Gödel’s theorem itself has been proved algorithmically by a computer. So I think it’s fair to say that computers understand Gödel’s theorem as much or as little as we do. You can start worrying if they understand it better.

This leaves open the question of course whether a computer would ever have been able to come up with Gödel’s proof to begin with. The computer that proved Gödel’s theorem was basically told what to do. Gödel wasn’t. Tim Palmer has argued that indeed this is where quantum mechanics becomes relevant.

By the way, I explain Penrose’s argument about Gödel’s theorem and consciousness in more detail in my new book Existential Physics. The book also has interviews with Roger Penrose and Tim Palmer.

So let’s wrap up. Current computers still differ from brains in a number of ways. Notably, the brain is a highly efficient multi-purpose apparatus whereas, in comparison, computers are special-purpose machines. The hardware of computers is currently very different from the neurons in the brain, memory storage works differently, and the brain is still much better at parallel processing. But current technological developments will soon allow building computers that are more similar to brains in these regards.

When it comes to the question of whether there’s anything that brains can do which computers will not one day also be able to do, the answer is that we don’t know. And the reason is, once again, that we don’t really understand quantum mechanics.

Saturday, July 23, 2022

Does the Past Still Exist?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


One of the biggest mysteries of our existence is also one of the biggest mysteries of physics: time. We experience time as passing, with a special moment that we call “now”. Now you’re watching this video, half an hour ago you were doing something else. Whatever you did, there’s no way to change it. And what you will do in half an hour is up to you. At least that’s how we perceive time. 

But what physics tells us about time is very different from our perception. The person who figured this out was none other than Albert Einstein. I know. That guy again. Turns out he kind of knew it all. What did Einstein teach us about the past, the present, and the future? That’s what we’ll talk about today.

The topic we’re talking about today is covered in more detail in my new book “Existential Physics”, which will be published in August. You can find more info about the book at existentialphysics.com.

We think about time as something that works the same for everyone and every object. If one second passes for me, one second passes for you, and one second passes for the clouds above. This makes time a universal parameter. This parameter labels how much time passes and also what we all mean by “now”.

Hermann Minkowski was the first to notice that this may not be quite right. He noticed that Maxwell’s equations of electrodynamics make much more sense if one treats time as a dimension, not as a parameter. Just like a ball doesn’t change if you rotate one direction of space into another, Maxwell’s equations don’t change if you rotate one direction of space into time.

So, Minkowski said, we just combine space with time into a four-dimensional space-time, and then we can rotate space into time just like we can rotate two directions of space into each other. And that naturally explains why Maxwell’s equations have the symmetry they do have. It doesn’t have anything to do with electric and magnetic fields; it comes from the properties of space and time themselves.
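For readers who want the formula: what all observers agree on is the space-time interval, and a change of velocity (a Lorentz boost) mixes time and space like a rotation with hyperbolic functions,

$$ s^2 = -c^2 t^2 + x^2, \qquad ct' = ct\cosh\varphi - x\sinh\varphi, \qquad x' = x\cosh\varphi - ct\sinh\varphi, $$

where the “rotation angle” (the rapidity) φ is related to the velocity by tanh φ = v/c. You can check that the boost leaves s² unchanged, just like an ordinary rotation leaves the length of a vector unchanged.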

I can’t draw a flower, let alone four dimensions, but I can just about manage two straight lines, one for time and the other for at least one dimension of space. This is called a space-time diagram. If you just stand still, then your motion in such a diagram is a straight vertical line. If you move at a constant velocity, your motion is a straight line tilted at some angle. So if you change velocity, you rotate in space-time. The maximal velocity at which you can move is the speed of light, which by convention is usually drawn at a 45-degree angle.

In space we can go forward or backward, left or right, up or down. In time we can only go forward; we can’t make a U-turn, and there aren’t any driveways for awkward three-point turns either. So time is still different from space in some respects. But now that time is also a dimension, it’s clear that it’s just a label for coordinates; there’s nothing universal about it. There are many ways to put labels on a two-dimensional space because you can choose your axes as you want. The same is now the case in space-time. Once you have made time into a dimension, the labels on it don’t mean much. So what then is the time that we talk about? What does it even mean that time is a dimension? Do other dimensions exist? Supernatural ones? That could explain the strange sounds you’ve been hearing at night? No. That’s a separate problem I’m afraid I can’t help you with.

It was Albert Einstein who understood what this means. If we also want to understand it, we need four assumptions. The speed of light in vacuum is finite, it’s always the same, nothing can go faster than the speed of light, and all observers’ viewpoints are equally valid. These assumptions form the basis of Einstein’s theory of Special Relativity. Oh, and also, the observers don’t have to exist. I mean, this is theoretical physics, so we’re talking about theoretical observers, basically. So, if there could be an observer with a certain viewpoint, then that viewpoint is equally valid as yours.

Who or what is an observer? Is an ant an observer? A tree? How about a dolphin? What do you need to observe to deserve being called an observer, and what do you have to observe with? Believe it or not, there’s actually quite some discussion about this in the scientific literature. We’ll side-step this, erm, interesting discussion and use the word “observer” the same way that Einstein did, which is as a coordinate system. You see, it’s a coordinate system that a theoretical observer might use, dolphin or otherwise. Yeah, maybe not exactly what the FBI thinks an observer is, but then if it was good enough for Einstein, it’ll be good enough for us. So Einstein’s assumption basically means any coordinate system should be as good as any other for describing physical reality.

These four assumptions sound rather innocent at first but they have profound consequences. Let’s start with the first and third: The speed of light is finite and nothing goes faster than light. You are probably watching this video on a screen, a phone or laptop. Is the screen there now? Unless you are from the future watching this video as a hologram in your space house, I'm going to assume the answer is yes. But a physicist might point out that actually you don’t know. Because the light that’s emitted from the screen now hasn’t reached you yet. Also if you are from the future watching this as a hologram, make sure to look at me from the right. It’s my good side.

Maybe you hold the phone in your hand, but nerve signals are ridiculously slow compared to light. If you couldn’t see your hand and someone snatched your phone, it’d take several microseconds for the information that the phone is gone to even arrive in your brain. So how do you know your phone is there now?

One way to answer this question is to say, well, you don’t know, and really you don’t know that anything exists now, other than your own thoughts. I think, therefore I am, as Descartes summed it up. This isn’t wrong – I’ll come back to this later – but it’s not how normal people use the word “now”. We talk about things that happen “now” all the time, and we never worry about how long it takes for light to travel. Why can’t we just agree on some “now” and get on with it? I mean, think back to that space-time diagram. Clearly this horizontal line is “now”, so let’s just agree on this and move on.

Okay, but if this is to be physics rather than just a diagram, you have to come up with an operational procedure to determine what we mean by “now”. You have to find a way to measure it. Einstein did just that in what he called a Gedankenexperiment, a “thought experiment”.

He said, suppose you place a mirror to your right and one to your left. You and the mirrors are at a fixed distance from each other, so in the space-time diagram it looks like this. You send one photon left and one right, and make sure that both photons leave you at the same time. Then you wait to see whether the photons come back at the same time. If they don’t, you adjust your position until they do.

Now remember Einstein’s second assumption: the speed of light is always the same. This means that if you can send photons to both mirrors and they come back at the same time, then you must be exactly in the middle between the mirrors. The final step is then to say that at exactly half the time it takes for the photons to return, you know they must be bouncing off the mirrors. You could say “now” at the right moment even though the light from there hasn’t reached you yet. It looks like you’ve found a way to construct “now”.
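In formulas: if the photons leave you at time t₁ and return at time t₂ on your clock, Einstein’s procedure assigns the distant reflection events the time

$$ t_{\text{mirror}} = \frac{t_1 + t_2}{2}, $$

which is the operational definition of “now” at a distance.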

But here’s the problem. Suppose you have a friend who flies by at some constant velocity, maybe in a space-ship. Her name is Alice, she is much cooler than you, and you have no idea why she's agreed to be friends with you. But here she is, speeding by in her space-ship left to right. As we saw earlier, in your space-time diagram, Alice moves on a tilted straight line. She does the exact same thing as you, places mirrors to both sides, sends photons and waits for them to come back, and then says when half the time has passed that’s the moment the photons hit the mirrors.

Except that this clearly isn’t right from your point of view. Because the mirror to her right is in the direction of her flight, the light takes longer to get there than it does to the mirror on the left, which moves towards the light. You would say that the photon which goes left clearly hits its mirror first, because the mirror’s coming at it. From your perspective, she just doesn’t notice, because when the photons go back to Alice, the exact opposite happens. The photon coming from the left takes longer to get back, so the net effect cancels out. What Alice says happens “now” is clearly not what you think happens “now”.

For Alice on the other hand, you are the one moving relative to her. And she thinks that her notion of “now” is right and yours is wrong. So who is right? Probably Alice, you might say. Because she’s much cooler than you. She owns a spaceship, after all. Maybe. But let’s ask Einstein.

Here is where Einstein’s fourth assumption comes in. The viewpoints of all observers are equally valid. So you’re both right. Or, to put it differently, the notion of “now” depends on the observer; it is “observer-dependent”, as physicists say. Your “now” is not the same as my “now”. If you like technical terms, this is also called the relativity of simultaneity.

These mismatches in what different observers think happens “now” are extremely tiny in everyday life. They only become noticeable when relative velocities are close to the speed of light, so we don’t normally notice them. If you and I talk about who knocked at the door right now, we won’t misunderstand each other. If we zipped around at nearly the speed of light, however, referring to “now” would get very confusing.

This is pretty mind-bending already, but wait, it gets wilder. Let us have a look at the space-time diagrams again. Now let us take any two events that are not causally connected. This just means that if you wanted to send a signal from one to the other, the signal would have to go faster than light, so signaling from one to the other isn’t possible. Diagrammatically, this means that if you connect the two events, the line has an angle of less than 45 degrees to the horizontal.
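In formulas, two events with time difference Δt and spatial distance Δx are causally disconnected, or “spacelike separated”, if

$$ |\Delta x| > c\,|\Delta t|, $$

because a signal connecting them would have to travel at a speed greater than c.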

The previous construction with the mirrors shows that for any two such events there is always some observer for whom those two events happen at the same time. You just have to imagine the mirrors fly through the events and the observer flies through directly in the middle. And then you adjust the velocity until the photons hit both events at the same time.

Okay, so any two causally disconnected events happen simultaneously for some observer. Now take any two events that are causally connected. Like eating too much cheese for dinner and then feeling terrible the morning after. Find some event that isn’t causally connected to either. Let’s say this event is a supernova going off in a distant galaxy. There are then always observers for whom the supernova and your cheese dinner are simultaneous, and there are observers for whom the supernova and your morning after are simultaneous.

Let’s then put all of this together. If you are comfortable with saying that something, anything, exists “now” which isn’t here, then, according to Einstein’s fourth assumption, this must be the case for all observers. But if all the events that you think happen “now” exist, then so do all the events that other observers say happen at the same time as those events, and repeating this argument from observer to observer, you can reach any event. So all events exist “now”. Another way to put it is that all times exist in the same way.

This is called the “block universe”. It’s just there. It doesn’t come into being, it doesn’t change. It just sits there.

If you find that somewhat hard to accept, there is another possibility to consistently combine a notion of existence with Einstein’s Special Relativity. All that I just said came from assuming that you are willing to say something exists now even though you can’t see or experience it in any way. If you are willing to say that only things exist which are now and here, then you don’t get a block universe. But maybe that’s even more difficult to accept.

Another option is to simply invent a notion of “existence” and define it to be a particular slice in space-time for each moment in time. This is called a “slicing” but unfortunately it has nothing to do with pizza. If it had any observable consequences, that would contradict the fourth assumption Einstein made. So it’s in conflict with Special Relativity and since this theory is experimentally extremely well confirmed, this would almost certainly mean the idea is in conflict with observation. But if you just want to define a “now” that doesn’t have observable consequences, you can do that. Though I’m not sure why you would want to.

Quantum mechanics doesn’t change anything about the block universe because it’s still compatible with Special Relativity. The measurement update of the wave-function, which I talked about in this earlier video, happens faster than the speed of light. If it could be observed, you could use it to define a notion of simultaneity. But it can’t be observed, so there’s no contradiction.

Some people have argued that since quantum mechanics is indeterministic, the future can’t already exist in the block universe, and that therefore there must also be a special moment of “now” that divides the past from the future. And maybe that is so. But even if that were the case, the previous argument still applies to the past. So, yeah, it’s true. For all we currently know, the past exists the same way as the present.

Saturday, July 16, 2022

How do painkillers work?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Have you ever taken an Aspirin? Advil? Paracetamol, or Tylenol for the Americans? Most of you probably have. But do you know what the difference between them is? What’s pain to begin with, where does it come from, how do painkillers work, and why is Sabine suddenly talking about pharmacology? That’s what we’ll talk about today.

Pain is incredibly common. According to a 2019 survey in the United States, almost 60 percent of adults had experienced physical pain in the three months prior to the survey. I’d have guessed that stubbing your toe was the most frequent one, but it’s actually back pain with 39 percent. The numbers in the European Union are similar. The healthcare costs for chronic pain disorders in the European Union alone have been estimated to exceed 400 billion dollars annually. Pain is such a universal problem that the United Nations say access to pain management is a human right.

But just what do we mean by pain? The International Association for the Study of Pain defines it as “an unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage.” You probably don’t have to be told it’s unpleasant. But this definition tells you that the “unpleasant experience that accompanies tissue damage” is not always caused by actual tissue damage. We’ll talk about this later, but first we’ll talk about the most common cause of pain.

In most cases, pain is a signal that’s picked up by receptors in some part of the body, and from there it’s sent to your brain. So there are three parts involved: the receptors, a long transmission channel that goes to the brain, and the brain itself. The most common cause of pain is that the pain receptors, which are called nociceptors, are triggered by cell damage.

What is pain good for? A clue comes from people who can’t feel pain. This is caused by rare genetic mutations that stop pain receptors or their transmission from working. It affects about 1 in 25 thousand people. Infants with this condition may try to chew off their tongue, lips, or fingers, and later accumulate bruises and broken bones. So, pain is uncomfortable, but it’s actually good for something. It’s a warning signal that teaches you not to do some things. Still, sometimes you’d rather not have it, so let’s talk about ways to get rid of pain.

The most straightforward way to remove pain is with local and regional anesthetics. Those are the ones you get at the dentist, but you also find them in lower doses in some creams. They take effect only at the place where you apply them, and they wear off as the body carries away and takes apart the substance.

Their names usually end in -caine. Like benzocaine, novocaine, and also cocaine. Yes, cocaine is a local anesthetic, and it had quite an interesting history, even before it ran Wall Street in the 80s. Rohin did a great video about this. There are some exceptions to the nomenclature, such as naturally occurring anesthetics, including menthol.

Local anesthetics prevent the pain signal from being created by changing the distribution of electric charges in cells. Cells use a difference in electric charges to create a signal. Normally, the outside of a nerve ending is slightly positively charged. With the right environmental trigger, channels open in the cell membrane and increase the number of positive charges inside. A local anesthetic blocks those cell channels, so the pain receptors can’t raise the alarm. But since all nerve cells work this way, a local anesthetic doesn’t just take away the pain. It takes away all sensation. So the body part it’s applied to will feel completely numb.

This isn’t a good solution for any extended duration of time, which brings us to ways to stop the pain specifically and leave other sensation intact. Drugs which do that are called analgesics. To understand how they work, we’ll need a little more detail on what cell damage does.

When cells are damaged they release a chemical called arachidonic acid, which is then converted by certain enzymes into a type of prostaglandin. If someone had stopped me on the street last week and asked what prostaglandin is, I might have guessed it’s a small country in Europe. Turns out it’s yet another chemical that flows in your blood and it’s the one that causes swelling and redness. It also lowers the pain threshold. This means that with the prostaglandin around, the pain receptors fire very readily. And since the swelling can push on the pain receptors, they might fire a lot. So, well, it’ll hurt.

But the prostaglandin itself doesn’t live long in the body. It falls apart in about 30 seconds. This tells us that one way to stop the pain is to knock out the enzymes that create prostaglandin. This is what the most common painkillers do. They are called “nonsteroidal anti-inflammatory drugs”, NSAIDs for short. Some painkillers in this class are: ibuprofen, which is sold under brand names like Advil, Anadin, or Nurofen; acetylsalicylic acid, which you may know as Aspirin; diclofenac, which is the active ingredient in Voltaren and Dicloflex; and so on. I guess at some point pretty much all of us have awkwardly asked around for one of those.

How do they work? Magic. Thanks for watching. No wait. I’m just a physicist, but I’ll do my best. The issue is, every article I read on biomedicine seems to come down to saying there’s this thing which fits to that thing, and NSAIDs are no exception. The enzymes which they block come in two varieties, called COX-1 and COX-2. The painkillers work by latching onto surface structures of those COX enzymes, which prevents the enzymes from doing their job. This means less prostaglandin is produced and the pain threshold goes back to normal. They don’t entirely turn pain off, like local anesthetics do, but you don’t feel pain that easily. And unlike local anesthetics, they work only on the pain receptors, not on other receptors.

Most of these painkillers block the COX enzymes temporarily and then fall off. Within a few hours, they’re usually out of the system and the pain comes back. The exception is Aspirin. Aspirin latches onto the COXes and then breaks off, taking them out permanently. The body has to produce new ones, which takes time. This is why it takes much longer for the effects of Aspirin to wear off, up to 10 days. Seriously. Aspirin is the weird cousin of the painkillers. The one that doesn’t talk at family reunions but rearranges your bookshelf by color, and 10 days later you haven’t fully recovered from that.

And of course there are side effects. NSAIDs have a number of side effects in common, because the type of prostaglandin which they block is necessary for some other things, which they therefore also inhibit. For example, you need it for keeping up the protective lining of the stomach and the rest of the digestive system. So long-term use of NSAIDs can cause stomach ulcers and internal bleeding. I know someone who took Aspirin regularly for years and had a stomach rupture. He survived, but it was a very close call. So, well, be careful.

NSAIDs also increase the risk of cardiovascular events. This doesn’t mean cardiologists will hold more conferences, though this risk also exists. No, cardiovascular events are things like strokes and heart attacks. If you wonder just how much NSAIDs increase the risk, that depends on exactly which one you’re talking about. This table (3) lists the risk ratio for some common NSAIDs compared to a placebo. The relevant thing to pay attention to is that the numbers are almost all larger than 1. Not dramatically larger, but noticeably larger.

About 20 years ago research suggested that most of the adverse effects of NSAIDs come from blocking the COX-1 enzyme, while the effects that stop the pain come from the COX-2 enzyme. This is why several companies developed drugs to inhibit just the COX-2 enzyme, unlike traditional NSAIDs that block both versions.

These drugs are known as COXIBs. In theory, they should have been as effective as painkillers as traditional NSAIDs but cause fewer problems with the digestive system. In practice, most of them were swiftly withdrawn from the market because they increased the risk of heart attacks even more than traditional NSAIDs. Some of them are still available because their risks are comparable to those of traditional NSAIDs.

NSAIDs are probably the most widely used over-the-counter painkillers, and as you’ve seen, scientists understand quite well how they work. Another widely used painkiller has, however, remained something of a mystery: acetaminophen. In the US it’s sold under the brand name Tylenol; in Europe it’s paracetamol.

And this brings me to the true reason I’m making this video. It’s because my friend Tim Palmer, the singing climate physicist, told me this joke. “Why is there no aspirin in the jungle? Because the paracetamol.” I didn’t understand this. And since I had nothing intelligent to say, I gave him a lecture about the difference between aspirin and paracetamol which eventually turned into this video. In case you also didn’t understand his joke, I’ll explain it later.

So what is the difference between NSAIDs and acetaminophen? As far as pain relief is concerned, they’re kind of similar. Acetaminophen has the advantage that it’s easier on the digestive system. On the flip side, acetaminophen has a narrow safety window, meaning the difference between the dose at which it has the desired effect and the dose at which it’s seriously toxic is small, a factor of ten or so. Now take into account that acetaminophen is often added to other drugs, like cough syrup or drugs that help with menstrual cramps, and it becomes easy to accidentally overdose, especially in children. This is why childproof pill containers are so useful. Your children will not be able to open the pill bottle! And sometimes neither will you.

Acetaminophen is currently the most common cause of drug overdoses in the United States, the United Kingdom, Australia, and New Zealand. Indeed, it’s one of the most common causes of poisoning worldwide. Since it’s removed from the body through the liver, the biggest risk is liver damage. In the developed world, acetaminophen overdose is currently the leading cause of acute liver failure.

Of course you generally shouldn’t mix medicine with alcohol, never ever, but especially not acetaminophen. You’ll get drunk very quickly, get a huge hangover, and run the risk of liver damage. Guess how I know. I had a friend who was doing all kinds of illegal drugs who said he doesn’t touch paracetamol – tells you all you need to know.

We’ve been talking about the pain receptors, but remember that there are three parts involved in pain: the receptors, the nerve wiring, and the brain. All of them can be a cause of pain. When pain is caused by damage to the nervous system, it’s called neuropathic pain. This type of pain often doesn’t respond to over-the-counter drugs. It can be caused by direct nerve damage, but also by chemotherapy or diabetes and other conditions. The American National Academy of Sciences has estimated that in the United States neuropathic pain affects as much as one in three adults and leads to an annual loss of productivity exceeding 400 billion US dollars. That’s enough money to build a factory and make your own painkillers – on Mars!

Neuropathic pain and other pain that doesn’t go away with over-the-counter drugs is often treated with opioids. What are opioids and how do they work? Opioids are substances that were originally derived from poppies but can now be synthetically produced. They come in a large variety: morphine, codeine, oxycodone, heroin, fentanyl, etc. These don’t all work exactly the same way, but the basic mechanism is more or less the same. I’m afraid the explanation is again pretty much that this thing fits to that thing.

The nervous system is equipped with receptors that opioids fit to. They are called – drums please – opioid receptors. These receptors can be occupied by endorphins, which are substances that the human body produces to, among other things, regulate pain. Opioids fit very well to those receptors. They can block them efficiently and for long periods of time. So, this is a very powerful way to reduce pain.

But opioids do a lot of other things in the human body, so there are side-effects. For one thing, opioids also suppress the release of noradrenaline, which is a hormone that among other things controls digestion, breathing, and blood pressure. Consequently, opioids can cause constipation or, in high doses, decrease heart and breathing rates to dangerously low levels. And, I mean, I’m not a doctor, but this doesn’t really sound good.

Opioids also act in the brain, where they trigger the release of dopamine. Dopamine is often called the “feel-good hormone” and that’s exactly what it does: it makes you feel good. That in and of itself isn’t all that much of a problem; the bigger problem is that the body adapts to the presence of opioids. Exactly what happens isn’t entirely clear, but probably the body decreases the number of opioid receptors and increases the number of receptors for the neurotransmitters that were suppressed. The consequence is that over time you have to increase the opioid dose to get the same results, to which the body adapts again, and so on. It’s a vicious cycle.

When you suddenly stop taking opioids, the number of many hormone receptors isn’t right. It takes time for the body to readjust and that causes a number of withdrawal symptoms, for example an abnormally high heart rate, muscle and stomach aches, fever, vomiting, and so on.

To keep opioid withdrawal symptoms manageable, the CDC recommends reducing the dose slowly. If you’ve been taking opioids for more than a year, they say to reduce by no more than 10% per month. If you’ve been taking them for a few weeks or months, they recommend a 10% reduction per week. I’ll leave you a link to the CDC guide for how to safely get off opioids in the info below the video.

There are a number of other painkillers that don’t fall into either of these categories. Going through them would be rather tedious, but I want to briefly mention cannabis, which has recently become increasingly popular for the self-treatment of pain. A meta-study published last year in the British Medical Journal looked at 32 trials involving over 5000 patients who took cannabis for periods ranging from a month up to half a year. They found that the effect of pain relief does exist, but it’s small.

Let’s then talk about the third body part that’s involved in pain, which is the brain. The brain plays a huge role in our perception of pain, and scientists are only just beginning to understand this.

A particularly amazing case was reported in the British Medical Journal in 1995. A 29-year-old construction worker was rushed to the emergency department of a hospital in Leicester. He’d jumped onto a 6-inch nail that had gone through the sole of his boot. This is an actual photo from the incident. The smallest movement of the nail was so painful that he was sedated with fentanyl and midazolam. The doctors pulled out the nail, took off the boot, and saw that the nail had gone through between the toes. The foot was entirely uninjured. He felt pain not because he actually had an injury, but because his brain was convinced he had an injury. This is called somatic amplification.

The opposite effect, somatic deamplification, also happens. Take for example this other accident, which happened to another construction worker. They aren’t paid enough, these guys. This 23-year-old man from Denver had somewhat blurry vision and a toothache. He went to a dentist. The dentist took an x-ray and concluded that the likely source of the toothache was that the man had a 4-inch nail in his skull. He’d probably accidentally shot himself with a nail gun but didn’t notice. Part of the reason he wasn’t in more pain was probably that he just didn’t know he had a nail in his head.

Severe pain also changes the brain. It activates a brain region called the hypothalamus which reacts by increasing the levels of several hormones, for example cortisol and pregnenolone. This affects all kinds of things from blood sugar levels to fat metabolism to memory functions. The body is simply unable to produce these hormones at a high level for a long time. But some of those hormones are critical to pain control. A deficiency may enhance pain and slow down healing and may be one of the causes for chronic pain.

Another thing that happens if some part of your body hurts is that you learn incredibly quickly not to touch or move it. This has nothing to do with the signal itself; it’s an adaptation in the brain. This adaptation, too, may have something to do with chronic pain. For example, several studies have shown that the severity of tinnitus is correlated with chronic pain, which suggests that some people are prone to developing such conditions, though the details aren’t well understood.

Indeed, scientists have only recently understood that the brain itself plays a big role in how severely we experience pain, something that can now be studied with brain scans. Consequently, some pain treatments have been proposed that target neither pain receptors nor the nervous system, but the brain response to the signals.

For example, there is the idea of audioanalgesia, that’s trying to reduce pain by listening to white noise or music. Or electroanalgesia, which uses electricity to interfere with the electric currents of pain signals. And some people use hypnosis to deal with pain. Does this actually work? We haven’t looked into it, but if you’re interested let us know in the comments and we’ll find out for you.

Saturday, July 09, 2022

Quantum Games -- Really!

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


It’s difficult to explain quantum mechanics with words. We just talked about this the other day. The issue is, we simply don’t have the words to describe something that we don’t experience. But what if you could experience quantum effects? Not in the real world, but at least in a virtual world, in a computer game. Wait, there are games for quantum mechanics? Yes, there are, and better still, they are free. Where do you get these quantum games and how do they work? That’s what we’ll talk about today.

We’ll start with a game that’s called “Escape Quantum” which you can play in your browser.

“You find yourself in a place of chaos. Whether it’s a dream or an illusion escapes you as the shiny glint of a key catches your eye. A goal, a prize, an escape, whatever it means for you takes hold in your mind as your body pushes you forward, into the unknown.”

Alright. Let’s see.

Escape Quantum is an adventure puzzle game where you walk around and have to find keys and cards to unlock doors. The navigation works by keyboard and is pretty simple and straightforward. The main feature of the game is to introduce you to the properties of a measurement in quantum mechanics: if you don’t watch an object, its wave-function can spread out, and the next time you look, it may be in a different place.

So sometimes you have to look away from something to make it change place. And if there’s something you don’t want to change place, you have to keep looking at it. At times this game can be a bit frustrating because much of it is dictated by random chance, but then that’s how it goes in quantum mechanics. Once you learn the principles the game can be completed quickly. Escape Quantum isn’t particularly difficult, but it’s both interesting and fun.

Another little game we tried is called quantum playground which also runs in your browser.

Yes, hello. What you do here is click on some of those shiny spheres to initialize the position of a quantum particle. You can initialize several of them together. Then you click the button down here, which will solve the Schrödinger equation with those boundary conditions, and you can see what happens to the initial distribution. You can then click somewhere to make a measurement, which will suddenly collapse the wave-function, and the particle will be back in one place.

There isn’t much gameplay in this one, but it’s a nice and simple visualization of the spread of the wave-function and the measurement process. I didn’t really understand what this little bird thing is.
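If you want to see the spreading in numbers rather than pixels, here’s a minimal sketch (my addition, not part of the game) using the textbook formula for the width of a free Gaussian wave packet, in units where hbar and the particle mass are set to 1:

```python
import math

HBAR = 1.0  # natural units
MASS = 1.0

def packet_width(sigma0, t):
    # Standard result for a free Gaussian wave packet:
    # sigma(t) = sigma0 * sqrt(1 + (hbar * t / (2 * m * sigma0**2))**2)
    return sigma0 * math.sqrt(1.0 + (HBAR * t / (2.0 * MASS * sigma0**2)) ** 2)

# The longer you wait without "looking", the wider the probability
# distribution for the particle's position becomes.
for t in (0.0, 1.0, 5.0, 20.0):
    print(t, packet_width(0.5, t))
```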

Somewhat more gameplay is going on in the next one which is called “Particle in a Box”.  This too runs in your browser but this time you control a character, that’s this little guy here, who can move side to side or jump up and down.

The game starts with a brief lesson about potential and kinetic energy in the classical world. You collect energy in the form of a lightning bolt and give it to a particle that’s rolling in a pit. This increases the energy of the particle and it escapes the pit. Then you can move on to the quantum world.

First you get a quick introduction. The quantum particle is trapped in a box, as the title of the game says. So it doesn’t have a definite position, but instead has a probability distribution that describes where it’s most likely to be if a measurement is made. Measurements happen spontaneously and if a measurement happens then one of these circles appears in a particular place.

You can then move on to the actual game, which introduces you to the notion of energy levels. The particle starts at the lowest energy level. You have to collect photons, that’s those colorful things, with the right energy to move the particle from one energy level to the next. If you happen to run into the particle at a place where it’s being measured, that’s bad luck, and you have to start over. You can see here that when the particle changes to a higher energy level, its probability distribution also changes. So you collect photons until the particle’s in the highest energy level, and then you can exit and go to the next room.
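The energy levels the game is built on are the standard textbook ones for a particle of mass m in a box of width L:

$$ E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \dots $$

so a photon can only lift the particle from level n to level n+1 if its energy matches the difference E(n+1) − E(n), which is why only photons of the right color work.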

The controls of this one are a little fiddly, but they work reasonably well. This game isn’t going to test your puzzle-solving skills or reflexes, but it does a good job of illustrating some key concepts of quantum mechanics: probability distributions, measurements, and energy levels.

The next one is called “Psi and Delta”. It was developed by the same team as “Particle in a Box” and works similarly, but this time you control a little robot that looks somewhat like BB8 from Star Wars. There’s no classical physics introduction in this one, you go straight to the quantum mechanics. Like the previous game, this one is based on two key features of quantum mechanics: that particles don’t have a definite position but a probability distribution, and that a measurement will “collapse” the wave-function and then the particle is in a particular place.

But in this game you have to do a little more. There’s an enemy robot, that’s this guy, which will try to get you, but to do so it will have to cross a series of platforms. If you press this lever, you make a measurement and the particle is suddenly in one place. If it’s in the same place as the enemy robot, the robot will take damage. If you damage it enough, it’ll explode and you get to the next level.

The levels increase in complexity, with platforms of different lengths and complicated probability distributions. Later in the game, you have to use lamps of specific frequencies to change the probability distribution into different shapes. Again, the controls can be a little fiddly, but this game has some charm. It requires a bit of good timing and puzzle solving skills too.

The next game we look at is called “Hello Quantum” and it’s a touchscreen game that you can play on your phone or tablet. You first have to download and install it; there’s no browser version for this one, but there’s one for Android and one for iOS. The idea here is that you have to control qubit states by applying quantum gates. The qubits are either on, or off, or in a state you don’t know. Quantum gates are the operations that a quantum computer computes with. They basically move around entanglement. In this game, you get an initial state and a target state that you have to reach by applying the gates.

The game tells you the minimal number of moves in which the puzzle can be solved and encourages you to try to find this optimal solution. You’re basically learning how to engineer a particular quantum state, and how a quantum computer actually computes.
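
To give you an idea of what “applying gates to reach a target state” means, here’s a minimal sketch (an illustration of the general idea, not the app’s actual code): a qubit state is a two-component complex vector, gates are unitary matrices, and the puzzle is to map an initial state onto a target state in as few gate applications as possible.

```python
import numpy as np

# Common single-qubit gates as 2x2 unitary matrices:
X = np.array([[0, 1], [1, 0]])                # NOT gate: swaps |0> and |1>
Z = np.array([[1, 0], [0, -1]])               # phase flip
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard: creates superpositions

ket0 = np.array([1, 0])    # the "off" state |0>
target = np.array([0, 1])  # the "on" state |1>

# One possible three-move solution: H, then Z, then H acts like a single X.
state = H @ Z @ H @ ket0
print(np.allclose(state, target))  # True, though X alone solves it in one move
```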

The app is professionally designed and works extremely well. The game comes with detailed descriptions of the gates and the physical processes behind them, but you can play it without any knowledge of qubits, or any understanding of what the game is trying to represent, just by taking note of the patterns and how the different gates move the black and white circles around. So this works well as a puzzle game whether or not you want to dig deep into the physics.

This brings us to the last game in our little review, which is called Quantum FlyTrap. This is again a game that you can play in your browser, and it’s essentially a quantum optics simulator. The red triangle is your laser source, and the green Venus flytraps are the detectors. You’re supposed to get the photons from the laser to the detectors, with certain additional requirements; for example, you have to get a certain fraction of the photons to each detector.

You do this by dragging different items around and rotating them, like mirrors, beam splitters, non-linear crystals, and so on. In later levels you have to arrange mirrors to get the photons through a maze without triggering any bombs or mines.
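
To see what such a simulator computes under the hood, here’s a minimal sketch (assuming one common beam-splitter convention; this is not Quantum FlyTrap’s actual code): a 50/50 beam splitter splits the photon’s amplitude between two paths, and the fraction of photons arriving at each detector is the squared magnitude of the amplitude.

```python
import numpy as np

# A 50/50 beam splitter in one common convention: the reflected
# amplitude picks up a factor of i relative to the transmitted one.
beam_splitter = np.array([[1, 1j],
                          [1j, 1]]) / np.sqrt(2)

photon_in = np.array([1, 0])            # photon enters through port 1
photon_out = beam_splitter @ photon_in  # amplitudes in the two output ports
print(np.abs(photon_out)**2)            # [0.5 0.5]: half the photons per detector
```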

A downside of this game is that the instructions aren’t particularly good. It isn’t always clear what the goal of a level is until you fail and get some information about what you were supposed to do in the first place. That said, the levels are fun puzzles with a unique visual style. I’ve found this to be quite a remarkable simulator. You can even use it to click together your own experiments.

Saturday, July 02, 2022

Are we too many people or too few?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


There’s too many men, too many people, making too many problems. That’s how Genesis put it. Elon Musk, on the other hand, thinks there are too few people on the planet. “A lot of people think there’s too many people on the planet, but I think there’s, in fact, too few.” Okay, so who is right? Too many people or too few? That’s what we’ll talk about today.

This graph shows the increase of world population over the past twelve thousand years. Leaving aside the dip in the 14th century, when the plague wiped out large parts of the population in Europe and Asia, it looks pretty much like exponential growth.

If we extrapolate this curve, then in a thousand years there’ll be a few trillion of us! But this isn’t how population growth works. Sooner or later all species run into resource limits of some kind. So when will we hit ours?
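
Just to put a number on that extrapolation, here’s a quick back-of-the-envelope check (assuming, for illustration, a constant growth rate of 0.7 percent per year, roughly the current one):

```python
# Back-of-the-envelope extrapolation (assumed numbers):
population = 8e9     # roughly today's world population
growth_rate = 0.007  # assumed constant annual growth of 0.7 percent
years = 1000

future = population * (1 + growth_rate)**years
print(f"{future:.1e}")  # about 8.6e12, i.e. a few trillion people
```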

When it comes to the question of how close humans are to reaching this planet’s resource limits, the two extremes are the doomsters and the boomsters. Yes, doomsters and boomsters sound like rival gangs from a rock musical that are about to break out in song, but reality is a bit less dire. We’ll look at what both sides have to say, and then we’ll look at what science says.

The doomsters have a long tradition, going back at least to Thomas Malthus in the 18th century. Malthus said, in a nutshell, that the population grows faster than food production, so it’ll become increasingly difficult to feed everyone. If that ever does happen, it’d be a huge bummer because, I don’t know about you guys, but I’d really like to keep eating food. Especially cheese. I’d really like to keep eating cheese.

Malthus’ problem was popularized in a 1968 book by Paul Ehrlich called The Population Bomb, title says it all. Ehrlich predicted that by the 1980s famines would be commonplace and global death rates would rise. As you may have noticed, this didn’t happen. In reality, death rates have dropped and continue to drop, and average calorie consumption has increased globally. Still, Ehrlich claims that he was right in principle, it’ll just take somewhat longer than he anticipated.

Indeed, the Club of Rome report of 1972 predicted that we would reach the “limits to growth” in the mid 21st century, and population would steeply decrease after that basically because we weren’t careful enough handling the limited resources we have.

Several analyses in the early 21st century found that, so far, the business-as-usual predictions from the Club of Rome aren’t far off reality.

Earth Overshoot Day is an intuitive way to quantify just how badly we manage our resources. The idea was put forward by Andrew Simms from the University of Sussex: calculate by which date in each calendar year we’ve used up the resources that Earth regenerates in that year. If that date falls before the end of the year, we deplete resources faster than they regenerate, which ultimately isn’t sustainable.
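
The arithmetic behind it is simple. Here’s a minimal sketch with assumed round numbers (the real calculation is based on detailed footprint and biocapacity accounts):

```python
from datetime import date, timedelta

# Earth Overshoot Day, schematically (assumed round numbers):
biocapacity = 1.6  # global hectares per person that Earth regenerates per year
footprint = 2.7    # global hectares per person that humanity uses per year

day_of_year = round(365 * biocapacity / footprint)
overshoot_day = date(2022, 1, 1) + timedelta(days=day_of_year - 1)
print(overshoot_day)  # lands in early August for these numbers
```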

In this figure you see the Earth Overshoot Days since 1970. As you can see, in the past ten years or so we used up all renewable resources by early August. In 2020, the COVID pandemic temporarily pushed that date back by about three weeks, but now we’re back on track to reach Overshoot Day earlier and earlier. It’s like Groundhog Day meets Honey, I Shrunk the Resources, clearly not something anyone wants.

So the doomsters’ fears aren’t entirely unjustified. We’ve arguably not been dealing with our resources responsibly. Overpopulation isn’t pretty, and it’s very real already in some places. For example, the population density of Los Angeles is about 3,000 people per square kilometer, but that of Manila in the Philippines is more than ten times higher, a stunning 43,000 people per square kilometer. There’s so little space that some families have settled in the cemetery. As a general rule, and I hope you’ll all agree, people should not have to sleep near dead bodies when it can be avoided.

Such extreme overpopulation facilitates the spread of disease and makes it very difficult to enforce laws meant to keep the environment clean, which is a health risk. You may argue that the actual problem here isn’t overpopulation but poverty, but really it’s neither in isolation; it’s the relation between them: the number of people grows faster than the resources they’d need to keep living standards at least stable.

On the global level, the doomsters argue, the root problem of climate change and the loss of biodiversity that accompanies it is that there’s too many people on the planet.

You may have seen the headlines some years ago. “Want to fight climate change? Have fewer children!” “Scientists Say Having Fewer Kids Is Our Best Bet To Reduce Climate Change” “Science proves kids are bad for earth”. These headlines summarized a 2017 article that appeared in the journal Environmental Research Letters. Its authors had looked at 39 peer-reviewed papers and government reports. They wanted to find out which lifestyle choices have the biggest impact on our personal share of emissions.

Turns out that recycling doesn’t make much of a difference, and neither does changing your car or avoiding a transatlantic flight, which is unfortunate for those of you who are scared of flying, as not flying to protect the environment is no longer a good excuse. The one thing that really made a difference was not having children. Indeed, it was 25 times more important than the next item, which was “live car free”. The key reason they arrived at this conclusion is that they assumed you inherit half the carbon emissions of your children, a quarter of those of your grandchildren, and so on.
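
To see why this assumption makes children dominate every other choice, here’s a minimal sketch of the bookkeeping (my simplification, with assumed numbers):

```python
# Carbon-legacy bookkeeping, simplified (all numbers are assumptions):
lifetime_emissions = 800.0  # tonnes of CO2 per person over a lifetime (assumed)
children_per_couple = 1.9   # assumed constant fertility, slightly below replacement

# A parent is charged 1/2 of each child's lifetime emissions, 1/4 of each
# grandchild's, and so on. With f children per couple there are f**g
# descendants in generation g, each weighted by (1/2)**g.
legacy = sum(lifetime_emissions * (children_per_couple / 2)**g
             for g in range(1, 200))
print(f"{legacy:.0f} tonnes of CO2")  # about 15200, dwarfing any lifestyle change
```

Note that at replacement fertility this sum doesn’t even converge, which is why, under this accounting, having a child outweighs everything else by such a wide margin.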

Fast forward to the headlines of 2022 and we read that men are getting vasectomies so they don’t have to feel guilty if they keep driving a car. Elon Musk has meanwhile fathered eight children, though maybe by the time I’ve finished this sentence he has a few more. So let’s then look at the other side of the argument, the boomsters.

The boomsters’ fire is fueled by just how wrong both Malthus and Ehrlich were. They were both wrong because they dramatically underestimated how much technological progress would improve agricultural yields, and how that in turn would improve health and education and lead to more technological progress. Boomsters extrapolate this past success and argue that human ingenuity will always save the day.

To illustrate this point, there’s the Simon Abundance Index, named after the economist Julian Simon. You may think it tells you if there is an abundance of Simons, but no, it tells you instead the abundance of 50 basic commodities and their relation to population growth. The list of basic commodities contains every-day needs such as uranium, platinum, and tobacco, but doesn’t contain cheese. Seems that Mr Simon and I don’t quite have the same idea of basic commodities.

The index is calculated from the ratio of the average hourly wage to the price of the commodity, so basically it’s a measure of how much of the stuff you’d be able to buy with an hour of work.

The index is normalized to 1980, which marks one hundred percent. In 2020, the index reached 708.4 percent. And hey, the curve goes mostly up, so certainly that’s a good thing. Boomsters like to quote this index as evidence that resources are becoming more abundant, not scarcer.
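
In case you want to see the arithmetic, here’s a minimal sketch for a single commodity with made-up numbers (the real index aggregates 50 commodities):

```python
# Simon Abundance Index arithmetic for one commodity (made-up numbers):
# "abundance" is how much of the commodity one hour of work buys.
price_1980, wage_1980 = 2.0, 10.0  # assumed unit price and hourly wage in 1980
price_2020, wage_2020 = 3.0, 32.0  # assumed values in 2020

abundance_1980 = wage_1980 / price_1980  # 5 units per hour of work
abundance_2020 = wage_2020 / price_2020  # about 10.7 units per hour of work

index = 100 * abundance_2020 / abundance_1980
print(f"{index:.0f} percent")  # 213 percent: an hour of work buys twice as much
```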

Now, this seems a little overly simplistic, and you may wonder what the amount of tobacco you can buy with your earnings has to do with natural resources. Indeed, if you look for this index in the scientific literature you won’t find it; it isn’t generally accepted as a good measure of resource abundance. What it captures is the tendency of technology to increase efficiency, which leads to dropping prices so long as resources are available. It tells you nothing about how long the resources will last.

However, the boomsters do have a point in that pessimistic predictions from the past didn’t come true, and that underpopulation is also a problem. Indeed, countries like Canada, Norway, and Sweden have an underpopulation problem in their northern territories. It’s just hard to keep up living standards if there aren’t enough people to maintain them; that’s true for infrastructure, but also for education and health services. A civilization as complex as the one we currently have would be impossible to maintain with merely a few million people. There’d just not be enough of us to learn and carry out all the necessary tasks, like making YouTube videos!

Another problem is the age distribution. For most of history, it’s had a pyramid shape, with more young people than old ones. This example shows the population pyramid for Japan and how it changed in the past century. When people have fewer children this changes to an “inverted pyramid”, with more old people than young ones, which makes it difficult to take proper care of the elderly.

The transition is already happening in countries such as Japan and South Korea and will soon happen in most of the developed world. But the inverted pyramid comes from a decreasing population, not from underpopulation, so it’s a temporary problem that should resolve once the population stabilizes.

Okay, so we’ve seen what the doomsters and boomsters say, now let’s look at what science says.

A useful term to talk about overpopulation is the “carrying capacity” of an ecosystem, that is the maximum population of a given organism that the ecosystem can sustain indefinitely. So what we want to know is the carrying capacity of Earth for humans.

Scientists disagree about the best and most accurate way of determining that number, and estimates vary dramatically. Most estimates lie in the range between 4 and 16 billion people, but some pessimists say the carrying capacity is more like 2 billion, so we’ve long exceeded it, and some optimists think we can squeeze more than 100 billion people onto the planet.

These estimates vary so much because they depend on factors that are extremely hard to predict. For example, how many people we can feed depends on what their typical diet is. Earth can sustain more vegans than it can sustain Jordan Petersons who eat nothing but meat, though some of you may think even one Jordan Peterson is too much. And of course the estimates depend on how quickly you think technology improves along with population increase, which is basically guesswork.

The bottom line is that the conservative estimate for the carrying capacity of Earth is roughly the current population, but if we’re very optimistic we might make it to a hundred billion. Another thing we can do is try to infer trends from population data.

The graph I showed you in the beginning may look like an exponential increase, but this isn’t quite right. If you look at the past 50 years in more detail, you can see that the rate of growth has been steady at about one billion people every 12 years. That’s linear, not exponential. What’s going on becomes clearer if we look at the fertility rate in different regions of the planet.

The fertility rate is what demographers call the average number of children a woman gives birth to. If this number falls below approximately 2.1, the size of the population starts to fall. The 2.1 is called the replacement level fertility. It’s worth mentioning that 2.1 is the replacement fertility in developed countries with a low child mortality rate. If child mortality is high, the replacement fertility level is higher.
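
Here’s a rough sketch of where the 2.1 comes from (illustrative numbers): slightly more boys than girls are born, and not every girl survives to childbearing age, so each woman needs a bit more than two children for one daughter to survive and reproduce.

```python
# Why replacement fertility is about 2.1 rather than exactly 2
# (rough illustrative numbers):
boys_per_girl = 1.05             # sex ratio at birth, roughly universal
survival_to_childbearing = 0.98  # fraction of girls reaching childbearing age (assumed)

# Each woman must, on average, have one daughter who survives to reproduce:
replacement_fertility = (1 + boys_per_girl) / survival_to_childbearing
print(f"{replacement_fertility:.2f}")  # about 2.09; higher where child mortality is high
```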

Current fertility rates differ widely between nations. In the richest nations, fertility rates have long dropped below the replacement level; for example, the current fertility rate in the USA is 1.81, and in Japan it’s 1.33. But in the developing world fertility rates are still high, for example 6.01 in Afghanistan and 7.08 in Niger. How is this situation going to develop?

We don’t know, of course, but we can extrapolate the trends. In October 2020, The Lancet published the results of a massive study in which they did just that. A team of researchers from the University of Washington made forecasts for population trends in 185 countries from the present to the year 2100. They used several models to forecast the evolution of migration, educational attainment, use of contraceptives, and so on, and calculated the effects on life expectancy and birth rate.

According to their forecast, the global population will peak in the year 2064 at 9.73 billion and gradually decline to 8.79 billion by 2100. By then, the fertility rate will have dropped to only 1.66 globally (95 percent confidence interval: 1.33 to 2.08).

This is remarkably consistent with the Club of Rome report. They also looked at individual countries. For example, by 2100 China is forecast to see its population decrease by 48 percent, to the small, measly number of 732 million people. No wonder Xi Jinping is asking Chinese people to have more babies.

Both the US and the UK are expected to keep roughly the same population thanks mostly to immigration. Japan is expected to stay at its current low fertility rate and consequently its population will decrease from the current 128.4 million to only 59.7 million.

Just a few weeks ago Musk commented on this, claiming that Japan could “cease to exist”. Well, we have seen that Japan will indeed likely halve its population by the end of the century and if you extrapolate this trend indefinitely then, yeah, it’ll cease to exist. But let’s put the numbers into context.

This figure shows the evolution of the Japanese population from 1800 to the present. It peaked around 10 years ago at about 130 million. If that doesn’t sound like much, keep in mind that Japan is only about half the size of Texas. This means its population density is currently about ten times higher than that of the United States. The Lancet paper forecasts that Japan will remain the world’s 4th largest economy even after halving its population and no one expects the population to continue shrinking forever. So the future looks nice for Japanese people, regardless of what Musk thinks.

What about Europe? The population of Germany is expected to go from currently 83 million to 66 million people in 2100. Spain and Portugal will see their populations cut by more than half. But this isn’t the case in all European countries; especially those up north can expect moderate increases. Norway, for example, is projected to go from currently 5.5 million to about 7 million, and Sweden from currently 10 million to 13 million.

But the biggest population increase will happen in currently underdeveloped areas, thanks to both high fertility rates and further improvements in living conditions. For example, according to the Lancet estimates, Nigeria will increase from currently 206 million to a staggering 791 million. That’s right, by 2100 there will be more Nigerians than Chinese. Niger will explode from 21 million to 185 million.

Overall, the largest increase will be in sub-Saharan Africa, which will go from currently 1 billion to 3 billion, but even there the fertility rate is projected to drop below the replacement level by the end of the century. If you want to check the fertility forecast for your country, just check out the paper.

Those extrapolations assume business as usual. But the same paper also considers an alternative scenario in which the United Nations Sustainable Development Goals for education and contraception are met. In that case, the population would start decreasing much sooner, peaking in 2046 at 8.5 billion, and by the year 2100 the world population would be between 6.3 and 6.9 billion.

What do we learn from this? According to the conservative estimates for the carrying capacity of the world and extrapolations for population trends, it looks like the global population is going to peak relatively soon below carrying capacity. Population decrease is going to lead to huge changes in power structures both nationally and internationally. That’ll cause a lot of political tension and economic stress. And this doesn’t even include the risk of killing off a billion people or so with pandemics, wars, or a major economic crisis induced by climate change.

So both the doomsters and the boomsters are wrong. The doomsters are wrong to think that overpopulation is the problem, but right in thinking that we have a problem. The boomsters are right in thinking that the world can host many more people, but wrong in thinking that we’re going to pull it off.

And I’m afraid Musk is right. If we’d play our cards more wisely, we could almost certainly squeeze some more people on this planet. And seeing that the most relevant ingredient to progress is human brains, if progress is what you care about, then we’re not on the best possible track.