Saturday, July 02, 2022

Are we too many people or too few?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


There’s too many men, too many people, making too many problems. That’s how Genesis put it. Elon Musk, on the other hand, thinks there are too few people on the planet. “A lot of people think there’s too many people on the planet, but I think there’s, in fact, too few.” Okay, so who is right? Too many people or too few? That’s what we’ll talk about today.

This graph shows the increase of world population in the past twelve-thousand years. Leaving aside this dip in the 14th century when the plague wiped out big parts of the population in Europe and Asia, it looks pretty much like exponential growth.

If we extrapolate this curve, then in a thousand years there’ll be a few trillion of us! But this isn’t how population growth works. Sooner or later all species run into resource limits of some kind. So when will we hit ours?
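In case you want to check where the “few trillion” comes from, here’s a minimal back-of-the-envelope sketch. The growth rate is an illustrative assumption of mine, roughly today’s value, not a number from any of the studies below:

```python
# Naive exponential extrapolation of world population.
# The 0.7% annual growth rate is an illustrative assumption,
# roughly the current global value, held constant -- which it won't be.
current_population = 8e9     # approximate world population today
annual_growth_rate = 0.007

population_in_1000_years = current_population * (1 + annual_growth_rate) ** 1000
print(f"{population_in_1000_years:.2e}")  # about 8.6e12, i.e. a few trillion
```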

When it comes to the question of how close humans are to reaching this planet’s resource limits, the two extremes are the doomsters and the boomsters. Yes, doomsters and boomsters sound like rival gangs from a rock musical that are about to break out in song, but reality is a bit less dire. We’ll look at what both sides have to say and then at what science says.

The doomsters have a long tradition, going back at least to Thomas Malthus in the 18th century. Malthus said, in a nutshell, that the population grows faster than food production, so it’ll become increasingly difficult to feed everyone. If that ever does happen, it’d be a huge bummer because, I don’t know about you guys, but I’d really like to keep eating food. Especially cheese. I’d really like to keep eating cheese.

Malthus’ problem was popularized in a 1968 book by Paul Ehrlich called The Population Bomb, the title says it all. Ehrlich predicted that by the 1980s famines would be commonplace and global death rates would rise. As you may have noticed, this didn’t happen. In reality, death rates have dropped and continue to drop, and on average, calorie consumption has increased globally. Still, Ehrlich claims that he was right in principle, it’ll just take somewhat longer than he anticipated.

Indeed, the Club of Rome report of 1972 predicted that we would reach the “limits to growth” in the mid 21st century, and population would steeply decrease after that basically because we weren’t careful enough handling the limited resources we have.

Several analyses in the early 21st century found that so far the business as usual predictions from the Club of Rome aren’t far off reality.

The Earth Overshoot Day is an intuitive way to quantify just how bad we are at using our resources. The idea was put forward by Andrew Simms from the University of Sussex and it’s to calculate by which date in each calendar year we’ve used up the resources that Earth regenerates in that year. If that date is before the end of the year, this means that each year we shrink the remaining resources which ultimately isn’t sustainable.  

In this figure you see the Earth Overshoot Days since 1970. As you can see, in the past ten years or so we’ve used up all renewable resources by early August. In 2020, the COVID pandemic pushed that date back temporarily by a couple of days, but now we’re back on track to reach Overshoot Day sooner and sooner. It’s like Groundhog Day meets Honey, I Shrunk the Resources, clearly not something anyone wants.

So the doomsters’ fears aren’t entirely unjustified. We’ve arguably not been dealing with our resources responsibly. Overpopulation isn’t pretty, and it’s very real already in some places. For example, the population density in Los Angeles is about 3000 people per square kilometer, but that of Manila in the Philippines is more than ten times higher, a stunning 43 thousand people per square kilometer. There’s so little space that some families have settled in the cemetery. As a general rule, and I hope you’ll all agree, I think people should not have to sleep near dead bodies when possible.

Such extreme overpopulation facilitates the spread of disease and makes it very difficult to enforce laws meant to keep the environment clean, which is a health risk. You may argue the actual problem here isn’t overpopulation but poverty, but really it’s neither in isolation, it’s the relation between them. The number of people grows faster than the resources they’d need to keep the living standard at least stable.

On the global level, the doomsters argue, the root problem of climate change and the loss of biodiversity that accompanies it is that there’s too many people on the planet.

You may have seen the headlines some years ago. “Want to fight climate change? Have fewer children!” “Scientists Say Having Fewer Kids Is Our Best Bet To Reduce Climate Change” “Science proves kids are bad for earth”. These headlines summarized a 2017 article that appeared in the journal Environmental Research Letters. Its authors had looked at 39 peer-reviewed papers and government reports. They wanted to find out which lifestyle choices have the biggest impact on our personal share of emissions.

Turns out that recycling doesn’t make much of a difference, and neither does changing your car or avoiding transatlantic flights, which is unfortunate for those of you who are scared of flying, as not flying to protect the environment is no longer a good excuse. The one thing that really made a difference was not having children. Indeed, it was 25 times more important than the next one, which was “live car free”. The key reason they arrived at this conclusion is that they assumed you inherit half the carbon emissions of your children, a quarter of those of your grandchildren, and so on.
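If you wonder how that inheritance rule adds up, here’s a small sketch of the accounting. The emission and fertility numbers are made up for illustration; they’re not the paper’s data:

```python
# Sketch of the "carbon legacy" accounting: a parent is assigned 1/2 of
# each child's lifetime emissions, 1/4 of each grandchild's, and so on.
# All numbers below are illustrative assumptions, not data from the paper.

def carbon_legacy(emissions_per_person, children_per_person, generations):
    """Summed share of descendants' emissions attributed to one ancestor."""
    total = 0.0
    descendants = 1.0
    share = 1.0
    for _ in range(generations):
        descendants *= children_per_person  # people in this generation
        share *= 0.5                        # attributed share halves each generation
        total += descendants * share * emissions_per_person
    return total

# At replacement fertility (2 children per person), each generation
# contributes the same amount, 2 * 1/2 = 1 person-equivalent of emissions.
print(carbon_legacy(emissions_per_person=10.0, children_per_person=2, generations=5))  # 50.0
```

That’s why, in this accounting, having a child dwarfs every other lifestyle choice: the descendants’ emissions pile up generation after generation.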

Fast forward to the headlines of 2022 and we read that men are getting vasectomies so they don’t have to feel guilty if they keep driving a car. Elon Musk has meanwhile fathered eight children, though maybe by the time I’ve finished this sentence he has a few more. So let’s then look at the other side of the argument, the boomsters.

The boomsters’ fire is fueled by just how wrong both Malthus and Ehrlich were. They were both wrong because they dramatically underestimated how much technological progress would improve agricultural yield and how that in turn would improve health and education and lead to more technological progress. Boomsters extrapolate this past success and argue that human ingenuity will always save the day.

To illustrate this point, there’s what’s called the Simon Abundance Index, named after the economist Julian Simon. You may think it tells you if there is an abundance of Simons, but no, it tells you instead the abundance of 50 basic commodities and their relation to population growth. The list of basic commodities contains every-day needs such as uranium, platinum, and tobacco, but doesn’t contain cheese. Seems that Mr Simon and I don’t quite have the same idea of basic commodities.

The index is calculated from the ratio of the price of a commodity to the average hourly wage, so basically it’s a measure of how much of the stuff you’d be able to buy with an hour of work.
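The arithmetic behind such “time price” measures is simple enough to write down. Here’s a hedged sketch with invented numbers, not Simon’s actual data:

```python
# "Time price": how many hours of work does one unit of a commodity cost?
# All numbers are invented for illustration.

def time_price(price, hourly_wage):
    return price / hourly_wage

# If the price rises but wages rise faster, the time price falls, and the
# commodity counts as more "abundant" in this metric.
tp_1980 = time_price(price=10.0, hourly_wage=5.0)   # 2.0 hours per unit
tp_2020 = time_price(price=15.0, hourly_wage=30.0)  # 0.5 hours per unit
print(tp_1980 / tp_2020)  # 4.0: an hour of work buys four times as much
```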

The index is normalized to 1980, which marks one hundred percent. In 2020, the index reached 708.4 percent. And hey, the curve goes mostly up, so certainly that’s a good thing. Boomsters like to quote this index as proof that resources are becoming ever more abundant.

Now, this seems a little overly simplistic and you may wonder what the amount of tobacco you can buy with your earnings has to do with natural resources. Indeed, if you look for this index in the scientific literature you won’t find it – it isn’t generally accepted as a good measure of resource abundance. What it captures is the tendency of technology to increase efficiency, which leads to dropping prices so long as resources are available. It tells you nothing about how long the resources will last.

However, the boomsters do have a point in that pessimistic predictions from the past didn’t come true and that underpopulation is also a problem. Indeed, countries like Canada, Norway, and Sweden have an underpopulation problem in their northern territories. It’s just hard to keep up living standards if there aren’t enough people to maintain them; that’s true for infrastructure but also for education and health services. A civilization as complex as the one we currently have would be impossible to maintain with merely a few million people. There just wouldn’t be enough of us to learn and carry out all the necessary tasks, like making youtube videos!

Another problem is the age distribution. For most of history, it’s had a pyramid shape, with more young people than old ones. This example shows the population pyramid for Japan and how it changed in the past century. When people have fewer children this changes to an “inverted pyramid”, with more old people than young ones, which makes it difficult to take proper care of the elderly.

The transition is already happening in countries such as Japan and South Korea and will soon happen in most of the developed world. But the inverted pyramid comes from a decreasing population, not from underpopulation, so it’s a temporary problem that should resolve once the population stabilizes.

Okay, so we’ve seen what the doomsters and boomsters say, now let’s look at what science says.

A useful term to talk about overpopulation is the “carrying capacity” of an ecosystem, that is the maximum population of a given organism that the ecosystem can sustain indefinitely. So what we want to know is the carrying capacity of Earth for humans.
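In case you want the textbook formula: the standard way to encode a carrying capacity is the logistic equation,

$$\frac{dN}{dt} = r\,N\left(1 - \frac{N}{K}\right),$$

where $N$ is the population, $r$ the intrinsic growth rate, and $K$ the carrying capacity. Growth is almost exponential as long as $N$ is much smaller than $K$, and it levels off as $N$ approaches $K$. Mind you, this is the generic ecology-textbook model, not something the estimates below are committed to.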

Scientists disagree about the best and most accurate way of determining that number, and estimates vary dramatically. Most estimates lie in the range between 4 and 16 billion people, but some pessimists say the carrying capacity is more like 2 billion, so we’ve long exceeded it, and some optimists think we can squeeze more than 100 billion people onto the planet.

These estimates vary so much because they depend on factors that are extremely hard to predict. For example, how many people we can feed depends on what their typical diet is. Earth can sustain more vegans than it can sustain Jordan Petersons who eat nothing but meat, though some of you may think even one Jordan Peterson is too much. And of course the estimates depend on how quickly you think technology improves together with population increase which is basically guesswork.

The bottom line is that the conservative estimate for the carrying capacity of Earth is roughly the current population, but if we’re very optimistic we might make it to a hundred billion. Another thing we can do is try to infer trends from population data.

The graph I showed you in the beginning may look like an exponential increase, but this isn’t quite right. If you look at the past 50 years in more detail you can see that the rate of growth has been steady at about one billion people every 12 years. That’s not exponential. What’s going on becomes clearer if we look at the fertility rate in different regions of the planet.
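Before we get to the fertility rates, here’s a quick way to convince yourself of the difference between the two growth modes. The parameters are assumed for illustration, this isn’t a fit to the actual data:

```python
# Linear growth (a fixed number of people added per year) versus
# exponential growth (a fixed percentage per year). Illustrative numbers.
population = 4e9  # rough world population around 1975
for year in range(0, 49, 12):
    linear = population + (1e9 / 12) * year  # about 1 billion every 12 years
    exponential = population * 1.02 ** year  # 2% per year, for contrast
    print(year, f"{linear:.2e}", f"{exponential:.2e}")
# Linear growth reaches 8e9 after 48 years, matching what we observe;
# the exponential curve would already be above 1e10 and pulling away.
```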

The fertility rate is what demographers call the average number of children a woman gives birth to. If the number falls below approximately 2.1, then the size of the population starts to fall. The 2.1 is called the replacement level fertility. It’s worth mentioning that the 2.1 is the replacement fertility in developed countries with a low child mortality rate. If child mortality is high, the replacement fertility level is higher.
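The rough logic behind the 2.1 goes like this; the numbers here are typical textbook values, so take them as an illustration:

$$\mathrm{TFR}_{\mathrm{replacement}} \approx \frac{1}{f \cdot s} \approx \frac{1}{0.488 \times 0.975} \approx 2.1,$$

where $f \approx 0.488$ is the fraction of newborns that are girls (slightly more boys than girls are born) and $s \approx 0.975$ is the probability that a girl survives to childbearing age in a developed country. If child mortality is high, $s$ drops and the replacement level rises.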

Current fertility rates differ widely between nations. In the richest nations, fertility rates have long dropped below the replacement level; for example, the current fertility rate in the USA is 1.81 and in Japan 1.33. But in the developing world fertility rates are still high, for example 6.01 in Afghanistan and 7.08 in Niger. How is this situation going to develop?

We don’t know, of course, but we can extrapolate the trends. In October 2020, The Lancet published the results of a massive study in which they did just that. A team of researchers from the University of Washington made forecasts for population trends in 185 countries from the present to the year 2100. They used several models to forecast the evolution of migration, educational attainment, use of contraceptives, and so on, and calculated the effects on life expectancy and birth rate.

According to their forecast, the global population will peak in the year 2064 at 9.73 billion and gradually decline to 8.79 billion by 2100. By then, the fertility rate will have dropped to only 1.66 globally (95% interval 1.33 to 2.08).

This is remarkably consistent with the Club of Rome report. They also looked at individual countries. For example, by 2100 China is forecast to see its population drop by 48 percent to a measly 732 million people. No wonder Xi Jinping is asking Chinese people to have more babies.

Both the US and the UK are expected to keep roughly the same population thanks mostly to immigration. Japan is expected to stay at its current low fertility rate and consequently its population will decrease from the current 128.4 million to only 59.7 million.

Just a few weeks ago Musk commented on this, claiming that Japan could “cease to exist”. Well, we have seen that Japan will indeed likely halve its population by the end of the century and if you extrapolate this trend indefinitely then, yeah, it’ll cease to exist. But let’s put the numbers into context.

This figure shows the evolution of the Japanese population from 1800 to the present. It peaked around 10 years ago at about 130 million. If that doesn’t sound like much, keep in mind that Japan is only about half the size of Texas. This means its population density is currently about ten times higher than that of the United States. The Lancet paper forecasts that Japan will remain the world’s 4th largest economy even after halving its population and no one expects the population to continue shrinking forever. So the future looks nice for Japanese people, regardless of what Musk thinks.
 

What about Europe? The population of Germany is expected to go from currently 83 to 66 million people in 2100. Spain and Portugal will see their populations cut by more than half. But this isn’t the case in all European countries; especially those up north can expect moderate increases. Norway, for example, is projected to go from currently 5.5 to about 7 million, and Sweden from currently 10 to 13 million.

But the biggest population increase will happen in currently underdeveloped areas thanks to both high fertility rates and further improvements in living conditions. For example, according to the Lancet estimates Nigeria will increase from currently 206 million to a staggering 791 million. That’s right, by 2100 there will be more Nigerians than Chinese. Niger will explode from 21 to 185 million.

Overall the largest increase will be in sub-Saharan Africa, which will go from currently 1 billion to 3 billion, but even there the fertility rate is projected to decrease below the replacement rate by the end of the century. If you want to check the fertility forecast for your country just check out the paper.

Those extrapolations assumed business as usual. But the same paper also considers an alternative scenario in which the United Nations Sustainable Development Goals for education and contraception are met. In that case the population would start decreasing much sooner, peaking in 2046 at 8.5 billion, and by the year 2100 the world population would be between 6.3 and 6.9 billion.

What do we learn from this? According to the conservative estimates for the carrying capacity of the world and extrapolations for population trends, it looks like the global population is going to peak relatively soon below carrying capacity. Population decrease is going to lead to huge changes in power structures both nationally and internationally. That’ll cause a lot of political tension and economic stress. And this doesn’t even include the risk of killing off a billion people or so with pandemics, wars, or a major economic crisis induced by climate change.

So both the doomsters and boomsters are wrong. The doomsters are wrong to think that overpopulation is the problem, but right in thinking that we have a problem. The boomsters are right in thinking that the world can host many more people but wrong in thinking that we’re going to pull it off.  

And I’m afraid Musk is right. If we played our cards more wisely, we could almost certainly squeeze some more people onto this planet. And seeing that the most relevant ingredient to progress is human brains, if progress is what you care about, then we’re not on the best possible track.

Saturday, June 25, 2022

Whatever happened to the Bee Apocalypse?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


15 years ago, dying bees were all over the news. Scientists called it the “colony collapse disorder”, headlines were warning of honey bees going extinct. Some suspected a virus, some pesticides, parasites, or a fungus. They spoke of a “honeybee apocalypse”, a “beepocalypse”, a “bee murder mystery” and the “head scratching case of the vanishing bees”, which are all names of movies I wouldn’t watch, and in any case the boring truth is that the honey bees are doing fine. It’s the wild bees that are in danger. Whatever happened to the bees and how much of a problem is it? That’s what we’ll talk about today.

The honeybees started dying in 2006. Beekeepers began to report unusually high losses, in some cases as high as 30 to 90 percent of their hives. In most of those cases the symptoms were inconsistent with known causes of honey bee death. The colonies had no shortage of honey or pollen, but the worker bees suddenly largely disappeared. Only a few of the bees were found dead, most of them just never returned to the hive. The queen and her brood remained, but without worker bees, they could not sustain themselves.

Scientists called it the Colony Collapse Disorder and they had many hypotheses for its cause. Some suspected mites, some a new type of virus, some blamed pesticides. Or maybe stress due to mismanagement or habitat changes or poor nutrition. And some suspected… the US government.

In the past few years, the number of bees dying from Colony Collapse Disorder has decreased, but we still don’t know what caused it. A 2009 paper that looked at 61 possible explanations concluded that “no single factor was found with enough consistency to suggest one causal agent.”

Quite possibly the issue is that there’s no single reason the bees are dying but rather it’s a combination of many different stressors that amplify each other. It’s parasites and pesticides and disease and a decreasing diversity of plants and loss of habitat. I know this may be controversial to some of you, but not all of your meals should be cheese. It’s not good for you. And such a one-sided diet isn’t good for bees either – they should have more than one source of nutrition. Bees also probably shouldn’t eat cheese, so as humans we’re a little better off. But much like for us, variety in nutrition is necessary for the bees to stay healthy, and if they are faced with large areas of monocultures, diverse nutrition is hard to come by. It doesn’t help that those monocultures are full of pesticides.

The issue with pesticides is far worse than originally recognized because if several of them are used together, their effects on the bees can amplify each other. Just a few months ago, a group of researchers from the US and the UK published a paper in Nature in which they present a meta-analysis of 90 studies in which bees were exposed to combinations of chemicals used in agriculture. They found that if you expose bees to one pesticide that kills 10 percent and another pesticide that kills another 10 percent, it’s possible that both together kill as much as 40 percent. The reason for this isn’t that bees are bad at maths, but that the effects of chemicals on living creatures don’t add linearly.
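To see why the numbers don’t just add, compare the naive expectations with the observed effect. The percentages are the illustrative ones from the paragraph above, not data from the paper:

```python
# Why pesticide mortalities don't simply add up.
# Illustrative percentages from the text, not data from the cited study.
m1, m2 = 0.10, 0.10  # each pesticide alone kills 10% of the bees

additive = m1 + m2                     # naive sum: 20%
independent = 1 - (1 - m1) * (1 - m2)  # if the two acted independently: 19%
observed = 0.40                        # the kind of synergistic effect reported

print(additive, independent, observed)
# Anything well above the "independent" baseline means the chemicals
# amplify each other's effects.
```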

Pesticides and infections still plague honey bees, though they make fewer headlines these days. For example, in 2020 (1 April 2020 – 1 April 2021), beekeepers in the United States lost almost half (45.5%) of their managed honey bee colonies. The major reason that beekeepers reported was a parasitic mite.

However, the numbers may sound more alarming than they really are because honey bees are efficiently bred and managed by humans. Even if they die in large fractions each year, they repopulate quickly and the overall decline of population is small.

In fact, honey bees were brought to most places by humans in the first place. They are native to Europe and northern Africa, and from there traders introduced them to every other continent except Antarctica. Today there are over 90 million beehives in the world. The typical population of a hive is 20 thousand to 80 thousand. This means that all together there are a few trillion honeybees in the world today.
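That “few trillion” is just the quoted numbers multiplied out:

```python
# Rough total from the figures quoted above.
hives = 90e6
bees_per_hive_low, bees_per_hive_high = 20e3, 80e3
print(hives * bees_per_hive_low, hives * bees_per_hive_high)
# 1.8e12 to 7.2e12 -- a few trillion honeybees
```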

And while colony losses in the United States run around 40-45 percent each year, which sounds like a lot, the total number of honey bees has been reasonably stable for the past twenty years or so, though it was higher in the 1940s. Globally, the number of honeybees has in fact increased by about 45% in the last 50 years.

The reason you still read about beekeepers sounding the alarm has nothing to do with honey; it’s that the demand for pollinators in agriculture has increased faster than the supply. The fraction of agriculture that depends on animal pollination has tripled during the last half century, whereas the number of honeybees hasn’t even doubled. How big of a problem that is depends strongly on the type of crops a country grows. And it depends on the wild bees.

And that brings us to the actual problem. The honey bee (Apis mellifera) is only one of about 20 thousand different bee species. The non-honey bees are usually referred to as wild bees, and each location has its native species. According to an estimate from researchers at Cornell University in 2006, wild bees contribute to the pollination of 85 percent of crops in agriculture.

While their contribution is in most cases small, about 20 percent on average, in some cases they do most of the work, for example for lemons, grapes, olives, strawberries, pumpkins and peanuts, for which 90 percent of the pollination comes from wild bees. So it’s not that we’d die without them, but according to the Food and Agriculture Organization of the United Nations, losing the wild bees would be a major threat to human food security, and without doubt a serious letdown to those who enjoy eating lemon grape olive strawberry pumpkin peanut sandwiches.

Wild bees differ from honey bees in a number of ways. Honeybees live in colonies of several tens of thousands. They are social bees which means they like hanging out together at bee malls and have little bee block parties. No need to look that up, I'm a physicist, you can trust me on these things.

The majority of wild bees, on the contrary, live solitary lifestyles. They get together to mate and then separate again. The female lays her eggs, collects enough food for the larvae, and then leaves her offspring alone, which finally proves that anti-authoritarian parenting does work. If you’re a wild bee.   

Honeybees aren’t the only bees that produce honey, but they are by far the ones who produce most of it. Most wild bees don’t produce honey. But they are important pollinators. Since wild bees have been around for so long, they’ve become very specialized at pollinating certain plants. And for those plants, replacing them with honey bees is very difficult. For example, squash flowers are open only until the early morning, a time at which many honeybees are still sleeping because they were partying a little too hard at their block party the night before. But a type of wild bee aptly named the squash bee (of the Peponapis and Xenoglossa genera) wakes up very early to do that job.

Maybe the biggest difference between honey bees and wild bees is that wild bees receive very little attention because no one sees them dying. They are struggling with the same problems as the honey bees, but don’t have beekeepers who weep over them, and data for their decline have been hard to come by.

But last year the journal Cell published the results of a study with a global estimate for the situation of wild bees. The authors looked at the numbers of bee species that were collected or observed over time, using data publicly available at the Global Biodiversity Information Facility. They found that even though the number of records has been increasing, the number of different species in the records has been sharply decreasing in the past decades.

The decline rates differ between the continents, but the species numbers are dropping steeply everywhere except for Oceania. The researchers say there’s a number of factors in play here, such as the expansion of monocultures, loss of native habitat, pesticides, climate change, and bee trade that also trades around pathogens.

So the problems that wild bees face are similar to those of honey bees, but they have an additional problem which is… honeybees. Honey bees compete with wild bees for food and habitat and they also pass on viruses. Now, a big honey bee colony can deal with viruses by throwing out the infected bees. But this doesn’t work for wild bees because they don’t live in large colonies. And worse, when honey bees and wild bees fight for food they seem to both lose out.

In 2018, researchers from France published a paper in Nature which reported that in areas with high-density beekeeping, the nectar foraging success of wild bees dropped by 50 percent and that of honeybees was reduced by about 40 percent. Something similar had been observed three years before by German researchers.

How bad is the situation for the wild bees? Hard to say. While we have estimates for the number of wild bee species, we don’t know how many wild bees there even are. We contacted almost a dozen experts and the brief summary is that it’s a really difficult question. Most of them just said they didn’t know, and a few said probably about as many as there are honey bees but that’s just an educated guess. So we don’t even know what we are doing to the environment.   

If all this sounds really complicated, that’s indeed the major message. Forget about quantum gravity: ecological systems are way more complex. There’s so many things going on that we never had a chance to properly study in the first place, so we have no idea what’s happening now.

What we do know is that we’ve been changing the ecosystems around us a lot. That has reduced and continues to reduce biodiversity significantly. And the decrease in biodiversity decreases the resilience of the ecosystems, which means that sooner or later parts of them will break down.

It’s really just a matter of time until there’ll be too few bees to pollinate some of the flowers or too few insects to support some of the birds, or too few birds to spread seeds and so on. And we may be able to fix a few of these problems with technology, but not all of them. So, while it is important to talk to your kids about the birds and the bees, it really is important to talk to your kids about the birds and the bees.

We simply don’t know what’s going to happen in response to what we do, and I’m afraid we’re not paying attention, which is why I’m standing here recording this video. Because if we don’t pay attention, one day we’ll be surprised to be reminded that, in the end, we too are just part of the ecosystem.

So if you want to help the bees, don’t buy a bee hive. The honeybees are not at risk exactly because you can buy them. What’s at risk are natural resources that we exploit but that we haven’t put a price on. Like clean air, rain, or wild bees. If you have a garden, you can help the wild bees by preserving the variety of native flowers. Quite literally, let a thousand flowers bloom.

Saturday, June 18, 2022

Why does science news suck so much?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


I read a lot of news about science and science policies. This probably doesn’t surprise you. But it may surprise you that most of the time I find science news extremely annoying. It seems to be written for an audience which doesn’t know the first thing about science. But I wonder, is it just me who finds this annoying? So, in this video I’ll tell you the 10 things that annoy me most about science news, and then I want to hear what you think about this. Why does science news suck so much? That’s what we’ll talk about today.

1. Show me your uncertainty estimate.

I’ll start with my pet peeve which is numbers without uncertainty estimates. Example: You have 3 months left to live. Plus minus 100 years. Uncertainty estimates make a difference for how to interpret numbers. But science news quotes numbers all the time without mentioning the uncertainty estimates, confidence levels, or error bars.  

Here’s a bad example from NBC news, “The global death toll from Covid-19 topped 5 million on Monday”.

Exactly 5 million on exactly that day? Probably not. But if not exactly, then just how large is the uncertainty? Here’s an example for how to do it right, from The Economist, with a central estimate and an upper and lower estimate.

The problem I have with this is that when I don’t see the error bars, I don’t know whether I can take the numbers seriously at all. In case you’ve wondered what this weird channel logo shows, that’s supposed to be a data point with an error bar.

2. Cite your sources

I constantly see websites that write about a study that was recently published in some magazine by someone from some university, but don’t link to the actual study. I’ll then have to search for those researchers’ names and look up their publication lists to find what the news article was referring to.

Here’s an example for how not to do it from the Guardian. This work is published in the journal Physical Review Letters. This isn’t helpful. Here’s the same paper covered by the BBC. This one has a link. That’s how you do it.

Another problem with sources is that science news also frequently just repeats press releases without actually saying where they got their information from. It’s a problem because university press releases aren’t exactly unbiased.

In fact, a study published in 2014 found that in biomedical research as many as 40 percent of press releases contain exaggerated results.

Since you ask, the 95 percent confidence interval is 33 to 46 percent.
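For the record, this is how such an interval comes about. Here’s a sketch using the standard normal approximation; the sample size is a guess of mine chosen to roughly reproduce the quoted interval, not the study’s actual number:

```python
import math

# Normal-approximation 95% confidence interval for a proportion.
# The sample size n is an assumption for illustration, not the study's n.
p_hat, n, z = 0.40, 220, 1.96
half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"{p_hat - half_width:.2f} to {p_hat + half_width:.2f}")  # 0.34 to 0.46
```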

A similar study in 2018 found a somewhat lower percentage of about 23 percent but still that’s a lot. In short, press releases are not reliable sources and neither are sources that don’t cite their sources.

3. Put a date on it

It happens a lot on social media that magazines share the same article repeatedly without mentioning that it’s an old story. I’ve unfollowed a lot of pages because they waste my time this way. In addition, some pages don’t have a date at the top, so I might read several paragraphs before figuring out that the story is from two years ago.

A bad example for this is Aeon. It’s otherwise a really interesting magazine, but they hide the date in tiny font at the bottom of long essays. Please put the date on the top. Better still, if it’s an old story, make sure the reader can’t miss the date. Here’s an example for how to do it from the Guardian.

4. Tell me the history

Related to the previous one: selling an old story as new by forgetting to mention that it’s been done before. An example is this story from 2019 about a paper which proposed to use certain types of rocks as natural particle detectors to search for dark matter. The authors of the paper call these “paleo-detectors”. And in the paper they write clearly on the first page “Our work on paleo-detectors builds on a long history of experiments.” But the Quanta Magazine article makes it sound like it’s a new idea.

This matters because knowing that it’s an old idea tells you two things. First, it probably isn’t entirely crazy. And second, it’s probably a gradual improvement rather than a sudden big breakthrough. That’s relevant context.

5. Don’t oversimplify it

For many questions of science policy, there just isn’t a simple answer, there is no good solution, and sometimes the best answer we have is “we don’t know.” Sometimes all possible solutions to a problem suck and trying to decide which one is the least bad option is difficult. But science news often presents simple answers and solutions probably thinking it’ll appeal to the reader.

What to do about climate change is a good example. Have a look at this recent piece in the Guardian. “Climate change can feel complex, but the IPCC has worked hard to make it simple for us.” Yeah, it only took them 3000 pages. Look, if the problem was indeed simple to solve, then why haven’t we solved it? Maybe because it isn’t so simple? Because there are so many aspects to consider, and each country has its own problems, and one size doesn’t fit all. Pretending it’s simple when it isn’t doesn’t help us work out a solution.

6. It depends, but on what?

Related to the previous item, if you ask a scientist a question, then frequently the answer is “it depends”. Will this new treatment cure cancer? Well, that depends on the patient and what cancer they have and how long they’ve had it and whether you trust the results of this paper and whether that study will get funded, and so on and so forth. Is nuclear power a good way to curb carbon dioxide emissions? Well, that depends on how much wind blows in your corner of the earth and how high the earthquake risk is and how much space you have for solar panels, and so on. If science news doesn’t mention such qualifiers, I have to throw out the entire argument.

A particularly annoying special case of this are news pages which don’t tell you what country study participants were recruited from or where a poll was conducted. They just assume that everyone who comes to their website must know what country they’re located in.   

7. Tell me the whole story.

A lot of science news is guilty of lying by omission. I have talked about several cases of this in earlier videos.

For example, stories about how climate models have correctly predicted the trend of the temperature anomaly that fail to mention that the same models are miserable at predicting the total temperature. Or stories about nuclear fusion that don’t tell you the total energy input. Yet another example are stories about exciting new experiments looking for some new particle that don’t tell you there’s no reason these particles should exist in the first place. Or stories about how the increasing temperatures from climate change kill people in heat waves, but fail to mention that the same increasing temperatures also save lives because fewer people freeze to death. Yeah, I don’t trust any of these sources.

8. Spare me the human interest stuff

A currently very common style of science writing is to weave an archetypical hero story of someone facing a challenge they have to overcome. You know, someone who encountered this big problem and set out to solve it, but they made enemies, and then they made a friend, and they made a discovery, but it didn’t work, and… and by that time I’ve fallen asleep. Really, please just get to the point already. What’s new and how does it matter? I don’t care if the lead author is married.

9. Don’t forget that science is fallible

A lot of media coverage on science policy remembers that science is fallible only when it’s convenient for them. When they’ve proclaimed something as fact that later turns out to be wrong, then they’ll blame science. Because science is fallible. Facemasks? Yeah, well, we lacked the data. Alright.

But that’d be more convincing if science news acknowledged that their information might be wrong in the first place. The population bomb? Peak oil? The new ice age? Yeah, maybe if they’d made it clearer at the time that those stories might not pan out the way they said then we wouldn’t today have to cope with climate change deniers who think the media can’t tell fact from fiction.

10. Science doesn’t work by consensus

Science doesn’t work by voting on hypotheses. As Kuhn correctly pointed out, the scientific consensus can change quite suddenly. And if you’re writing science news, then most of your audience knows that. So referring to the scientific consensus is more likely to annoy them than to inform them. And in any case, interpreting poll results is a science in itself.

Take the results of this recent poll among geoscientists mostly in the United States and Canada, all associated with some research facility. They only counted replies from those participants who selected climate science and/or atmospheric science within their top three areas of research expertise.

They found that among the people who have worked in the field the longest, 20 years or more, more than 5% think climate change is due to natural causes. So what does this mean? That there’s a 5% chance it’s just a statistical fluke?

Well, no, because science doesn’t work by consensus. It doesn’t matter how many people agree on one thing or another, or, for that matter, how long they’ve been in a field. It merely matters how good their evidence is.

To me, quoting the “scientific consensus” is an excuse that science journalists use for not even making an effort to actually explain the science. Maybe every once in a while an article about climate change should actually explain how the greenhouse effect works. Because, see earlier, it’s not as simple as it seems. And I suspect the reason that we still have a substantial fraction of climate change skeptics and deniers is not that they haven’t been told often enough what the consensus is. But that they don’t understand the science, and don’t understand that they don’t understand it. And that’s mostly because science news doesn’t explain it.

A good example for how to do it right is Lawrence Krauss’s book on the physics of climate change.

Okay, so those are my top ten misgivings about science news. Let me know what you think about this in the comments.

Saturday, June 11, 2022

Can particles really be in two places at once?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Today I want to talk to you about what happened when I wrote an opinion piece for the Guardian about quantum computing, had to explain what a qubit is, and decided against using the phrase that it “can be in two states at the same time”. What happened next and what did I learn from it? That’s what we’ll talk about today.

Three years ago, just before Google’s first demonstration of quantum supremacy, I wrote a brief essay for the Guardian about how that isn’t going to change the world. Quantum supremacy has since been renamed to quantum advantage, but as you can see, it indeed hasn’t changed the world. That I wrote this so shortly before the publication of the Google paper was of course totally entirely coincidental.

Now, you can’t write a popular science piece about quantum computing without explaining how a quantum computer works, and that normally involves fumbling together a paragraph that no one understands who doesn’t already know how a quantum computer works, but that has to give the reader the impression they understood it. This means you can’t use words like “superposition”, “Hilbert space”, “complex number” or “Bloch sphere”. Wait, don’t leave. I’ll explain what the Bloch sphere is in a minute.

So what I wrote was
“while a standard computer handles digital bits of 0s and 1s, quantum computers use quantum bits or qubits which can take any value between 0 and 1” and “when qubits are connected by quantum entanglement... such machines can rattle out computations that would take billions of years on a traditional computer”.
Though I am pretty sure the phrase “rattle out” came from the editor because I’m not usually that eloquent.

By writing this I wanted to get across two points. First, the phrase that “a qubit can be in two states at the same time” which you have probably read or heard somewhere makes no sense and would in my opinion better be avoided. Second, it’s the entanglement that makes the difference between a conventional computer and a quantum computer.

Why do I say that a qubit can take any value between 0 and 1? Well, a qubit is the simplest example of a wave-function. Here’s the mathematical expression. Remember what I told you in my earlier video that these mysterious looking brackets really just mean these things are vectors. So the zero and the one are two basis vectors. And then a qubit is a sum of those two basis vectors with coefficients in front of them. That sum is what’s called a superposition.
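Since the expression is only shown in the video, here it is for reference:

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,$$

where the zero and the one are the basis vectors and $\alpha$ and $\beta$ are the coefficients in front of them.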

You might think this is like having vectors in a two-dimensional flat space, but this isn’t quite right. That’s because the wave-function describes probabilities. This means that the squared absolute values of the coefficients in front of the basis vectors have to add up to one. And also, these coefficients can be complex numbers. This is why, if you want to draw all possible qubit states, those do not lie in a flat grid, they lie on the surface of a sphere. This is the Bloch sphere.

The Bloch sphere is commonly drawn so that the state 0 points to the north and the state 1 points to the south pole. So what’s an arbitrary qubit state? Well, all places on the surface of Earth lie between the north and south pole, and all qubit states lie on the Bloch sphere between 0 and 1. That’s why I wrote what I wrote in my article.
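In formulas, the analogy is exact. Up to an overall phase that doesn’t matter physically, any qubit state can be written with two angles, like latitude and longitude:

$$|\psi\rangle = \cos\frac{\theta}{2}\,|0\rangle + e^{i\varphi}\sin\frac{\theta}{2}\,|1\rangle,$$

with $\theta = 0$ at the north pole, the state 0, and $\theta = \pi$ at the south pole, the state 1.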

Before we look at what happened in the comments, a big thank you to our supporters on Patreon, and especially those in tier four. If you want to see more of our videos, you can help by joining us on Patreon or right here on YouTube by clicking on the join button below. Let’s then see what happened in the comments. Pretty much as soon as the piece was published, someone wrote: “That’s an analogue computer, not a quantum computer! A qubit can have a superposition of the values 0 and 1.”

Yes, but you can’t just write “superposition” in a popular science article without explaining what that is. And the superpositions in quantum computers are indeed similar to analogue computers, just that the values “in between” can be complex numbers. This doesn’t mean that quantum computers are just conventional analogue computers. The relevant property that sets quantum computers apart from conventional computers is that you can entangle those qubits which you can’t do with a conventional computer, regardless of whether it’s digital or analogue. As I’d written in my article.

Next time I looked at the comments someone had replied: “Never understood why they give this kind of story to an arts grad.” Next person: “Actually she’s a physicist, a string theorist and a good one at that.” Ah, arts grad, string theorist, same thing really. Next. “I’m an “arts grad” with over 30 years experience in IT. I expect better writing and research about this subject from an “arts grad”.” Yes. Let that be a lesson to all the string theory arts grads writing about quantum computing. I couldn’t think of anything polite to respond, so I instead replied to some other comments. And luckily, next time I looked, two people had shown up to explain the matter. The first wrote:
“the author meant x |0> + y|1> as lying “between” the pure states |0> and |1> ; “between” in state space, not on the real number line.”
Exactly, they lie between 0 and 1 in state space, which can be illustrated by the Bloch sphere. All states on the Bloch sphere are pure states. Then another comment:
“you fail to take into account the near impossibility of explaining the concept of a superposition in PopSci language, which does not allow for concepts like “complex number”... Try by those rules yourself and see if you can produce anything that does not amount to “sort of like an average”, which would in this context be equivalent to “any number between 0 and 1”.”
This indeed captures the difficulty well. Finally, someone points out that I’m not a string theorist, and they lived happily ever after, the end.

So why am I telling you this? Well for one I want to belatedly thank those commenters for taking the time to sort this out. But also, I’ve been thinking about this episode quite a bit and wondered what went wrong there.

I believe the problem is that when we write about quantum mechanics we’re faced with the task of converting mathematical expressions into language. And regardless of which language we use, English, German, Chinese, or whatever, our language didn’t evolve to describe quantum behavior. So all the words that we can come up with will be wrong and will be misleading. There’s no way to get it right.

What’s a superposition? A superposition is a sum of vectors in a Hilbert space. Alright. But if one of the vectors is a particle going left and the other a particle going right, what does this superposition mean? I don’t know. Could you say it’s a particle going into both directions? I guess you could say that. I mean, you just said it, so arguably you can. But is that what it actually is? I don’t think so.

For one it’d be more accurate to say that the wave-function “describes” a particle instead of saying that it “is” a particle. But maybe more importantly, I don’t think such a superposition is anything in the space we inhabit. It’s a vector in this mathematical structure we call the Hilbert space. And what does that mean? I don’t know. I don’t think there are any words in our language to explain what it “means”.

I still think that the explanation that I gave for a quantum bit was more truthful to the mathematics than the more commonly used phrase that it can be in “two states at once”. But I also think we have to accept that regardless of what language we use to describe quantum mechanics, it will never be correct. Because our language isn’t fit to describe something we cannot experience.

Should this worry us? Does this mean there’s something wrong with quantum mechanics as a scientific theory? I don’t think so. I think it’d be surprising if it was otherwise. Quantum mechanics describes the behavior of matter in circumstances we don’t observe in daily life. We’ve never needed the language to explain quantum behavior so we don’t have it.

To give you a second opinion I've asked Arvin Ash to tell us what he thinks. Arvin is an expert in science communication in general and quantum mechanics in particular. He told me the following.

Hi Sabine, as I tried to illustrate in a recent video, the root of the problem and cause of so much confusion in quantum mechanics is the fact that when we measure things, that is, whenever we have the opportunity to actually observe a quantum object, it seems to lose its quantum behavior. The superposition is lost and what we see is something that looks like it’s behaving classically. This is the case with the double slit experiment, where individual photons or electrons always show up as dots on the screen rather than as some kind of wave, and we only see the wave interference pattern when we shoot many photons through the slits. And when we observe quantum particles in cloud chambers, they leave trails as if they were little cannonballs.

This is also the case for quantum computers. While the math describes superposed states of quantum bits as, pardon the language, taking on any value between zero and one, when the computer actually makes a measurement, the bits are either zero or one, never in between. So while the calculation is quantum, the result is binary. We are surrounded by a quantum world we can’t directly observe, and when we sample this world by taking measurements, the quantum phenomena convert to classical results.

It’s hard to describe something you can’t ever experience. We humans like to make connections with familiar things. If we could see it, we could describe it. The math of quantum mechanics has no obvious classical analog. We could, however, certainly be more precise in our language. But I do think that in the future quantum mechanics will become more intuitive to more people through watching videos like this. Thanks for having me.
I think that’s a good point. Arvin has his own YouTube channel and if you find my channel interesting, I’m sure you’ll like his too, so go check it out.

When it comes to quantum mechanics, I think what will happen in the long run is that the mathematical expressions will just become better known and we will use them more widely. Like we’ve become used to talking about electromagnetic radiation. That was once a highly abstract mathematical concept, waves that travel through empty space, rather than traveling in some medium. But we now use electromagnetic radiation so frequently that it’s become part of our everyday language. I think that it’ll go the same way with qubits and superpositions.

Saturday, June 04, 2022

Trans women in sports: Is this fair?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


If you see a trans woman, like the American Swimmer Lia Thomas, in an athletic competition against women born female, it’s difficult not to ask whether that’s fair. In this video I want to look at what science says about this. How much of an advantage do men have over women and what, if anything, does hormone therapy change about it? That’s what we’ll talk about today.

The vast majority of humans can fairly easily be classified as either biologically male or female. The two sexes have rather obvious differences in inner and outer organs and those differences are strongly correlated with the expressions of the twenty-third chromosome pair, as you remember from your school days, XY for men, and XX for women. So far, so clear.

But this classification of the two biological sexes doesn’t always work. There is a surprisingly large variety of what’s known as “intersex” conditions in which the biological expression may be ambiguous or doesn’t align with the chromosome expression. Some cases are truly amazing.

For example, in 2014 researchers from India presented the case of a 70 year old man, father of four, who underwent surgery. The doctors discovered he also had a uterus and fallopian tubes. His chromosomes however turned out to be the standard male 46 XY.

Another example is the Spanish hurdler Martínez-Patiño, who has feminine genitals but XY chromosomes and internal testes, something that she was herself unaware of until the age of 25. After this was discovered, she was banned from competition in 1986.

Such cases are called disorders of sex development, DSD for short. They are rare but not as rare as you may think. According to various estimates that you find in the literature they affect between one in a thousand and one in fifty.

This means that in the United States there are between about three hundred thousand and 7 million intersex people, and globally between 8 million and 160 million. Part of the problem with making those estimates is that the definition of an intersex condition is somewhat ambiguous.

Now, most intersex people aren’t transgender and most transgender people aren’t intersex, but the question what to do with intersex people in competitive sports is a precedent to the newer question of what to do with trans athletes. And Martínez-Patiño illustrates the difficulty.

Her condition is called hyperandrogenic 46XY DSD. In the general population, it happens at a rate of about 1 in 20 000 births. Women with this condition have elevated level of the male hormone testosterone. In 2014, a study by European researchers found that in elite female athletes the rate of this disorder is about 140 times higher than in the general population.

Testosterone helps muscles grow, strengthens bones, and increases levels of hemoglobin in the blood, which benefits oxygen transport. Synthetic forms of testosterone are often used for doping.

In case you think this pretty much settles the case that high testosterone levels are an unfair advantage for women, think again. Because the reason Martínez-Patiño was assigned female at birth to begin with is that she has what’s called complete androgen insensitivity syndrome. She has high levels of testosterone, yes, but her body doesn’t react to it. After being banned she appealed, arguing that she has no advantage from her elevated testosterone levels, and the ban was indeed lifted.

However, there are other conditions that can lead to elevated testosterone levels in women. To make matters more complicated, there is a large natural variation in testosterone levels among both men and women and since women with naturally high testosterone levels tend to perform well in sports, they are overrepresented among athletes.

As a consequence, the testosterone distributions for male and female athletes have a big overlap. A paper published by researchers from Ireland and the UK in 2014 showed results of hormonal profiles of about 700 elite athletes across 15 sports.

They found that 16.5% of men had low testosterone levels, 13.7% of women had high levels, and there was a significant overlap between them.

So as you see, the business with the testosterone levels is much more difficult than you might think, and that doesn’t even touch the question how relevant it is. The advantages stemming from testosterone are particularly pronounced in disciplines that require upper body strength, and less pronounced in those that test endurance.

Having said that, let’s then talk about the trans athletes. Transgender people don’t identify with the sex they were assigned at birth. Leaving aside the intersex conditions, this means a trans man was born biologically female and a trans woman was born biologically male. People who aren’t trans are commonly referred to as cis.

Some trans people undergo surgery and/or hormone therapy to adjust their physical appearance to their gender. The results of the transition treatment differ dramatically depending on whether it’s started before or after puberty. During puberty, boys grow significantly more than girls and they develop more muscles whereas, to quote Meghan Trainor, women get all the right junk in all the right places. Physical changes during puberty are partly irreversible and transitioning later will not entirely undo them.

In 1990, a seminar convened by the World Athletics Federation recommended that any person who had undergone gender reassignment surgery before puberty should be accepted to participate under their new gender. This isn’t particularly controversial. The controversial bit is what to do with those who transition after puberty.

The International Olympic Committee has been leading the way on passing regulations. In 2004 they ruled that transgender athletes are allowed to compete two years after surgical anatomical changes have been completed and if they’ve undergone hormonal therapy. This means in particular that trans women must have taken hormone therapy for at least two years.

In 2021, the committee issued a framework that allows international federations to develop their own eligibility criteria for transgender and intersex athletes, so there’s no simple rule that applies to all disciplines.

But does the hormonal treatment make it fair for trans women to compete with cis women? 

In 2019, a team of European researchers from the Netherlands, Norway and Belgium measured the change in grip strength in trans people after a year of hormonal therapy. About 250 trans women and about as many trans men participated in their study. So this isn’t a huge sample, but it’s decent.

They found that grip strength decreased in trans women by 1.8 kilograms but increased in trans men by 6.1 kilograms. In trans men, but not in trans women, the change in grip strength was associated with a change in lean body mass. So it seems that hormonal therapy does more for trans men than for trans women.

Another team of researchers from Sweden followed 11 untrained trans women and 12 untrained trans men before and up to one year after gender-affirming hormonal therapy.

They found that in trans women thigh muscle volume decreased by 5 percent and quadriceps cross-sectional area decreased by 4 percent, but muscle density remained unchanged and they roughly maintained their strength levels. In trans men, on the other hand, thigh muscle volume increased by 15 percent; quadriceps cross-sectional area also increased by 15 percent, muscle density increased by 6 percent, and they saw increased strength levels. Again it seems that hormonal therapy does more for trans men than for trans women.  

It’d be rather tedious to list all the papers, so let me just say that this finding has been reproduced numerous times. A meta-analysis from March last year surveyed two dozen studies and concluded that, even after 36 months of hormonal therapy, the values for strength, lean body mass, and muscle area in trans women remained above those of cis women.

These numbers aren’t directly applicable to athletes because, in the general population, trans men have an incentive to build muscle while trans women have an incentive to lose it. But those studies pretty much agree that hormone therapy makes a faster difference for trans men than for trans women, and after 3 years the difference hasn’t entirely disappeared.

There is basically no data on what this hormone treatment does in the long run. A 2021 paper from Brazil suggests that after about 15 years differences between trans and cis women have basically disappeared. But this was a very small study with only 8 participants. And in any case, if you ask athletes to wait 15 years, they’ll be too old for the Olympics.

So let us then come back to the question whether it’s “fair” for trans women to compete with cis women. It seems clear from the data that trans women keep an advantage over cis women, even after several years of hormonal therapy. I guess that means it isn’t fair, in the sense that no amount of training that cis women can do is going to make up for male puberty.

But then, athletic competition has never been fair in that sense. To begin with, let’s not forget that for athletic performance the most important factor isn’t your sex, it’s your age. And some people are born with an advantage at certain types of sports; being male is only one of them. Usain Bolt has long legs. Michael Phelps has big feet. And American basketball players are tall.

Really tall. Here’s the American under-16 women’s basketball team with the team from El Salvador. The Americans won 114-19. Is that fair?

And those are just the visible differences. There are also factors like bone density, cardiac output, or lung volume that are partly genetically determined. I never had a chance to become an Olympic swimmer. Is that fair? 

No. Athletes are biological extremes. “Fairness” has never been the point of these competitions. They’re really more like freak shows. Kind of like physics conferences.

There’s another aspect to consider, which is that these competitions should also entertain. I guess this is why the researcher Joanna Harper, who is a trans athlete herself, has suggested we talk about “meaningful competition” instead of “fair competition”. We have historically segregated men and women in sporting events because otherwise the competition becomes too predictable, too boring. In some disciplines we have further categories for the same reason, like in weight lifting and boxing.

Now we’re asking whether we need additional categories for trans athletes. Alright, we could do this. But if you follow this logic to its conclusion, then really the only person you can compete with is yourself.

Or you will have to try and measure every single parameter that contributes to athletic performance in a given discipline and then try to adjust for it. The result may be that the person who comes in last in a race is the winner, after you adjust for heart valve issues, testosterone levels, age, slightly misaligned legs, below-average lung volume, and a number of other conditions. And that would be “fair” in the sense that now everyone had a chance to win, provided they trained hard enough. But would people still watch it?

The question of entertainment brings up another issue. Most of the sport disciplines that are currently widely broadcast favor biological characteristics typically associated with men, with the possible exception of long distance swimming. But generally sex differences decrease the more emphasis a discipline puts on endurance rather than strength.

A 2019 study among casual athletes found that men still have an edge over women in marathons, but somewhere between 100 and 200 miles, women begin to win out. Though this currently isn’t reflected in the world records, where men still lead, it seems that men have less of an advantage in endurance disciplines. Which brings up the question: Why don’t we see more of such sporting events? I don’t know for sure, but personally I find it hard to think of something more boring than watching someone run 200 miles. So maybe the solution is that we’ll all just do esports in the end.

But let’s come back to the trans athletes. Researchers from the University of California estimated in 2017 that the percentage of transgender people in the United States is about 0.4 percent (0.39%). A similar estimate for Brazil put the number there at about 0.7 percent (0.69%). If these numbers are roughly correct, transgender people are currently underrepresented in elite-level sports. That isn’t fair either. This is why I think sporting associations are doing the right thing by putting forward regulations based on the best available scientific evidence, and as long as athletes comply with them, they shouldn’t have to shoulder accusations of unfair competition.

That said, professional sports associations will soon have a much bigger problem. Like it or not, genetic engineering has become reality. And as long as athletes can make a lot of money from having a genetic advantage, someone’s going to breed children who’ll bring in that money. This is why I suspect that a century from now professional athletics will no longer exist. It creates too many incentives for unethical behavior.

I hope this brief summary has helped you make sense of a somewhat confusing situation. 

Saturday, May 28, 2022

Chaos: The Real Problem with Quantum Mechanics

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


You’ve probably seen a lot of headlines claiming that quantum mechanics is “strange”, “weird” or “spooky”. In the best case it’s “unintuitive” and “no one understands it”. Poor thing. In this video I will try to convince you that the problem with quantum mechanics isn’t that it’s weird. The problem with quantum mechanics is chaos. And that’s what we’ll talk about today.

Saturn has 82 moons. This is one of them, its name is Hyperion. Hyperion has a diameter of about 200 kilometers and its motion is chaotic. It’s not the orbit that’s chaotic, it’s the orientation of the moon on that orbit.

It takes Hyperion about 3 weeks to go around Saturn once, and about 5 days to rotate about its own axis. But the orientation of the axis tumbles around erratically every couple of months. And that tumbling is chaotic in the technical sense. Even if you measure the position and orientation of Hyperion to utmost precision, you won’t be able to predict what the orientation will be a year later.

Hyperion is a big headache for physicists. Not so much for astrophysicists. Hyperion’s motion can be understood, if not predicted, with general relativity or, to good approximation, with Newtonian dynamics and Newtonian gravity. These are all theories which do not have quantum properties. Physicists call such theories without quantum properties “classical”.

But Hyperion is a headache for those who think that quantum mechanics is really the way nature works. Because quantum mechanics predicts that Hyperion’s chaotic motion shouldn’t last longer than about 20 years. But it has lasted much longer. So, quantum mechanics has been falsified.

Wait what? Yes, and it isn’t even news. That quantum mechanics doesn’t correctly reproduce the dynamics of classical, chaotic systems has been known since the 1950s. The particular example with the moon of Saturn comes from the 1990s. (For details see here or here.)

The origin of the problem isn’t all that difficult to see. If you remember, in quantum mechanics we describe everything with a wave-function, usually denoted psi. There aren’t just wave-functions for particles. In quantum mechanics there’s a wave-function for everything: atoms, cats, and also moons.

You calculate the change of the wave-function in time with the Schrödinger equation, which looks like this. The Schrödinger equation is linear, which just means that no products of the wave-function appear in it. You see, there’s only one Psi on each side. Systems with linear equations like this don’t have chaos. To have chaos you need non-linear equations.
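For readers of the transcript, the equation shown in the video is the time-dependent Schrödinger equation, which in its standard form reads

i\hbar \, \frac{\partial \psi}{\partial t} = \hat{H} \, \psi

Linearity means that if \psi_1 and \psi_2 are both solutions, then so is any combination a\psi_1 + b\psi_2.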

But quantum mechanics is supposed to be a theory of all matter. So we should be able to use quantum mechanics to describe large objects, right? If we do that, we should just find that the motion of these large objects agrees with the classical non-quantum behavior. This is called the “correspondence principle”, a name that goes back to Niels Bohr.

But if you look at a classical chaotic system, like this moon of Saturn, the prediction you get from quantum mechanics only agrees with that from classical Newtonian dynamics for a certain period of time, known as the “Ehrenfest time”. Within this time, you can actually use quantum mechanics to study chaos. This is what quantum chaos is all about. But after the Ehrenfest time, quantum mechanics gives you a prediction that just doesn’t agree with what we observe. It would predict that the orientations of Hyperion don’t tumble around but instead blur out until they’re so blurred you wouldn’t notice any tumbling. Basically the chaos gets washed away in quantum uncertainty.
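The transcript doesn’t give numbers, but a standard estimate from the quantum chaos literature is that for a classically chaotic system with Lyapunov exponent \lambda, the Ehrenfest time grows only logarithmically with the system’s typical action S in units of \hbar:

t_E \sim \frac{1}{\lambda} \ln \frac{S}{\hbar}

Because the logarithm grows so slowly, even a 200-kilometer moon gives an Ehrenfest time of mere decades; estimates of this type are where the roughly 20 years mentioned above come from.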

It seems to me that some of you are a little skeptical. It can’t possibly be that physicists have known of this problem for 60 years and just ignored it? Indeed, they haven’t exactly ignored it. They have come up with an explanation which goes like this.

Hyperion may be far away from us and not much is going on there, but it still interacts with dust and with light or, more precisely, with the quanta of light called “photons”. These are each really tiny interactions, but there are a lot of them. And they have to be added to the Schrödinger equation of the moon.

What these tiny interactions do is that they entangle the moon with its environment, with the dust and the light. This means that each time a grain of dust bumps into the moon, this very slightly changes some part of the moon’s wave-function, and afterwards the two are correlated. This correlation is the entanglement. And those little bumps slightly shift the crests and troughs of parts of the wave-function.

This is called “decoherence” and it’s just what the Schrödinger equation predicts. And this equation is still linear, so all those interactions don’t solve the problem that the prediction doesn’t agree with observation. The solution to the problem comes in the 2nd step of the argument. Physicists now say, okay, so we have this wave-function for the moon with this huge number of entangled dust grains and photons. But we don’t know exactly what this dust is or where it is or what the photons do and so on. So we do what we always do if we don’t know the exact details: We make guesses about what the details could plausibly be and then we average over them. And that average agrees with what classical Newtonian dynamics predicts.
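Here is what that averaging does, in a minimal sketch in Python. This is my own illustration, not anything from the video: a single qubit stands in for the moon, and random phase kicks stand in for the bumps from dust grains and photons.

```python
import numpy as np

# Sketch of decoherence-by-averaging: each "history" is a pure quantum
# state that picked up a random phase from its environment. Averaging the
# density matrix over many histories kills the off-diagonal interference
# terms, even though every single history is still a pure quantum state.
rng = np.random.default_rng(0)
n_histories = 10_000

rho_avg = np.zeros((2, 2), dtype=complex)
for _ in range(n_histories):
    phi = rng.normal(0.0, 2 * np.pi)                  # accumulated random phase
    psi = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
    rho_avg += np.outer(psi, psi.conj())              # this history's |psi><psi|
rho_avg /= n_histories

print(np.round(rho_avg, 3))
# Diagonal entries stay at 0.5, off-diagonal entries average to ~0. The
# *average* looks like a classical coin toss, but no individual state does.
```

The final comment is exactly the sticking point of the argument: the average looks classical, but the individual state does not.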

So, physicists say, all is good! But there are two problems with this explanation. One is that it forces you to accept that in the absence of dust and light a moon will not follow Newton’s law of motion.

Ok, well, you could say that in this case you can’t see the moon either so for all we can tell that might be correct.

The more serious problem is that taking an average isn’t a physical process. It doesn’t change anything about the state that the moon is in. It’s still in one of those blurry quantum states that are now also entangled with dust and photons, you just don’t know exactly which one.

To see the problem with the argument, let me use an analogy. Take a classical chaotic process like throwing a die. The outcome is an integer from 1 to 6, and if you average over many throws then the average value per throw is 3.5. Just exactly which outcome you get is determined by a lot of tiny details like the positions of air molecules and the surface roughness and the motion of your hand and so on.

Now suppose I write down a model for the die. My model says that the outcome of throwing the die is either 106 or -99 each with probability 1/2. Wait, you say, there’s no way throwing a die will give you minus 99. Look, I say, the average is 3.5, all is good. Would you accept this? Probably not.

Clearly for the model to be correct it shouldn’t just get the average right, but each possible individual outcome should also agree with observations. And throwing a die doesn’t give minus 99 any more than a big blurry rock entangled with a lot of photons agrees with our observations of Hyperion.
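The same point in a few lines of Python, a sketch of the analogy rather than of any real physics:

```python
import numpy as np

# A fair die versus the bogus model that only ever returns 106 or -99.
rng = np.random.default_rng(0)
real_die = rng.integers(1, 7, size=100_000)           # outcomes 1..6
bogus_model = rng.choice([106, -99], size=100_000)    # "model" outcomes

print(real_die.mean(), bogus_model.mean())            # both roughly 3.5
print(sorted(set(bogus_model) & set(range(1, 7))))    # empty: no bogus outcome
                                                      # is ever a valid throw
```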

Ok but what’s with the collapse of the wave-function? When we make a measurement, then the wave-function changes in a way that the Schrödinger-equation does not predict. Whatever happened to that?

Exactly! In quantum mechanics we use the wave-function to make probabilistic predictions. Say, an electron hits either the left or right side of a screen with 50% probability each. But then when we measure the electron, we know it’s, say, left with 100% probability.

This means after a measurement we have to update the wave-function from 50-50 to 100-0. Importantly, what we call a “measurement” in quantum mechanics doesn’t actually have to be done by a measurement device. I know it’s an awkward nomenclature, but in quantum mechanics a “measurement” can happen just by interaction with a lot of particles. Like grains of dust, or photons.

This means Hyperion is in some sense constantly being “detected” by all those small particles. And the update of the wave-function is indeed a non-linear process. This neatly resolves the problem: Hyperion correctly tumbles around on its orbit chaotically. Hurray.
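One can write this update down explicitly, which makes the non-linearity visible. In the standard textbook form, measuring an observable with projector P takes the wave-function to

\psi \;\to\; \frac{P\psi}{\lVert P\psi \rVert}

The division by the norm is the non-linear part: the update of a sum of two wave-functions is not the sum of the two updates.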

But here’s the thing. This only works if the collapse of the wave-function is a physical process. Because you have to actually change something about that blurry quantum state of the moon for it to agree with observations. But the vast majority of physicists today think the collapse of the wave-function isn’t a physical process. Because if it was, then it would have to happen instantaneously everywhere.

Take the example of the electron hitting the screen. When the wave-function arrives on the screen, it is spread out. But when the particle appears on one side of the screen, the wave-function on the other side of the screen must immediately change. Likewise, when a photon hits the moon on one side, then the wave-function of the moon has to change on the other side, immediately.

This is what Einstein called “spooky action at a distance”. It would break the speed of light limit. So, physicists said, the measurement is not a physical process. We’re just accounting for the knowledge we have gained. And there’s nothing propagating faster than light if we just update our knowledge about another place.

But the example with the chaotic motion of Hyperion tells us that we need the measurement collapse to actually be a physical process. Without it, quantum mechanics just doesn’t correctly describe our observations. But then what is this process? No one knows. And that’s the problem with quantum mechanics.

Saturday, May 21, 2022

The closest we have to a Theory of Everything

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


In English they talk about a “Theory of Everything”. In German we talk about the “Weltformel”, the world-equation. I’ve always disliked the German expression. That’s because equations in and by themselves don’t tell you anything. Take for example the equation x=y. That may well be the world-equation, the question is just what’s x and what’s y. However, in physics we do have an equation that’s pretty damn close to a “world-equation”. It’s remarkably simple, looks like this, and it’s called the principle of least action. But what’s S? And what’s this squiggle? That’s what we’ll talk about today.
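For readers of the transcript, the equation in question is, in its usual compact form,

\delta S = 0

where S is the action and \delta is the squiggle, both of which are explained in what follows.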

The principle of least action is an example of optimization where the solution you are looking for is “optimal” in some quantifiable way. Optimization principles are everywhere. For example, equilibrium economics optimizes the distribution of resources, at least that’s the idea. Natural selection optimizes the survival of offspring. If you shift around on your couch until you’re comfortable you are optimizing your comfort. What these examples have in common is that the optimization requires trial and error. The optimization we see in physics is different. It seems that nature doesn’t need trial and error. What happens is optimal right away, without trying out different options. And we can quantify just in which way it’s optimal.

I’ll start with a super simple example. Suppose a lonely rock flies through outer space, far away from any stars or planets, so there are no forces acting on the rock, no air friction, no gravity, nothing. Let’s say you know the rock goes through point A at a time we’ll call t_A and later through point B at time t_B. What path did the rock take to get from A to B?

Well, if no force is acting on the rock it must travel in a straight line with constant velocity, and there is only one straight line connecting the two dots, and only one constant velocity that will fit to the duration. It’s easy to describe this particular path between the two points – it’s the shortest possible path. So the path which the rock takes is optimal in that it’s the shortest.

This is also the case for rays of light that bounce off a mirror. Suppose you know the ray goes from A to B and want to know which path it takes. You find the mirror image of point B, draw the shortest path from A to that mirror image, and then reflect the segment behind the mirror back to the front, which doesn’t change the length of the path. The result is that the angle of incidence equals the angle of reflection, which you probably remember from middle school.

This “principle of the shortest path” goes back to the Greek mathematician Hero of Alexandria in the first century, so not exactly cutting edge science, and it doesn’t work for refraction in a medium, like for example water, because the angle at which a ray of light travels changes when it enters the medium. This means using the length to quantify how “optimal” a path is can’t be quite right.

In 1657, Pierre de Fermat figured out that in both cases the path which the ray of light takes from A to B is that which requires the least amount of time. If there’s no change of medium, then the speed of light doesn’t change and taking the least time means the same as taking the shortest path. So, reflection works as previously.

But if you have a change of medium, then the speed of light changes too. Let us use the previous example with a tank of water, and let us call the speed of light in air c_1, and the speed of light in water c_2.

We already know that in either medium the light ray has to take a straight line, because that’s the fastest you can get from one point to another at constant speed. But you don’t know what’s the best point for the ray to enter the water so that the time to get from A to B is the shortest.

But that’s pretty straightforward to calculate. We give names to these distances and calculate the length of each path segment as a function of the point where the ray enters the water. Divide each length by the speed of light in that medium and add the two to get the total travel time.

Now we want to know which is the smallest possible time if we change the point where the ray enters the medium. So we treat this time as a function of x and calculate where it has a minimum, so where the first derivative with respect to x vanishes.

The result you get is this. And then you remember that those ratios with square roots here are the sines of the angles. Et voila, Fermat may have said, this is the correct law of refraction. This is known as the principle of least time, or as Fermat’s principle, and it works for both reflection and refraction.
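Written out, and with variable names that are my own choice since I can’t see the video’s labels: suppose A sits a height a above the water, B a depth b below the surface, d is the horizontal distance between them, and the ray enters the water at horizontal distance x from the point below A. Then

T(x) = \frac{\sqrt{a^2 + x^2}}{c_1} + \frac{\sqrt{b^2 + (d-x)^2}}{c_2}

\frac{dT}{dx} = \frac{x}{c_1\sqrt{a^2 + x^2}} - \frac{d-x}{c_2\sqrt{b^2 + (d-x)^2}} = 0

and since x/\sqrt{a^2 + x^2} = \sin\theta_1 and (d-x)/\sqrt{b^2 + (d-x)^2} = \sin\theta_2, this is \sin\theta_1 / c_1 = \sin\theta_2 / c_2, the familiar law of refraction.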

Let us pause here for a moment and appreciate how odd this is. The ray of light takes the path that requires the least amount of time. But how does the light know it will enter a medium before it gets there, so that it can pick the right place to change direction? It seems like the light needs to know something about the future. Crazy.

It gets crazier. Let us go back to the rock, but now we do something a little more interesting, namely throw the rock in a gravitational field. For simplicity, let’s say the gravitational potential energy is just proportional to the height, which it is to good precision near the surface of the earth. Again I tell you the particle goes from point A at time t_A to point B at time t_B. In this case, the principle of least time doesn’t give the right result.

But in the early 18th century, the French mathematician Maupertuis figured out that the path which the rock takes is still optimal in some other sense. It’s just that we have to calculate something a little more difficult. We have to take the kinetic energy of the particle, subtract the potential energy and integrate this over the path of the particle.

This expression, the time-integral over the kinetic minus potential energy is the “action” of the particle. I have no idea why it’s called that way, and even less do I know why it’s usually abbreviated S, but that’s how it is. This action is the S in the equation that I showed at the very beginning.
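In formulas, with m the mass, \dot{x} the velocity, and V the potential energy:

S = \int_{t_A}^{t_B} \left( E_\text{kin} - E_\text{pot} \right) dt = \int_{t_A}^{t_B} \left( \tfrac{1}{2} m \dot{x}^2 - V(x) \right) dt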

The thing is now that the rock always takes the path for which the action has the smallest possible value. You see, to keep this integral small you can either try to make the kinetic energy small, which means keeping the velocity small, or you make the potential energy large, because that enters with a minus.

But remember you have to get from A to B in a fixed time. If you make the potential energy large, this means the particle has to go high up, but then it has a longer path to cover so the velocity needs to be high and that means the kinetic energy is high. If on the other hand the kinetic energy is low, then the potential energy doesn’t subtract much. So if you want to minimize the action you have to balance both against each other. Keep the kinetic energy small but make the potential energy large.

The path that minimizes the action turns out to be a parabola, as you probably already knew, but again note how weird this is. It’s not that the rock actually tries all possible paths. It just gets on the way and takes the best one on first try, like it knows what’s coming before it gets there.
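You can check this numerically. Here’s a minimal sketch in Python, my own illustration under assumed values (unit mass, standard gravity, and only the vertical motion, since the horizontal motion is just constant velocity anyway): discretize the height along the path, minimize the discretized action, and compare with the parabola.

```python
import numpy as np
from scipy.optimize import minimize

m, g = 1.0, 9.81           # assumed mass and gravitational acceleration
t_A, t_B = 0.0, 2.0        # fixed start and end times
n = 50
t = np.linspace(t_A, t_B, n)
dt = t[1] - t[0]

def action(h_inner):
    # Path with fixed endpoints: the rock starts and ends at height 0.
    h = np.concatenate(([0.0], h_inner, [0.0]))
    v = np.diff(h) / dt                            # velocity on each segment
    kinetic = 0.5 * m * v**2
    potential = m * g * 0.5 * (h[:-1] + h[1:])     # potential at segment midpoint
    return np.sum((kinetic - potential) * dt)      # S = integral of (T - V) dt

res = minimize(action, np.zeros(n - 2))            # vary the path, minimize S
h_opt = np.concatenate(([0.0], res.x, [0.0]))

h_exact = 0.5 * g * t * (t_B - t)                  # the classical parabola
print(np.max(np.abs(h_opt - h_exact)))             # tiny: least action gives the parabola
```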

What’s this squiggle in the principle of least action? Well, if we want to calculate which path is the optimal path, we do this similarly to how we calculate the optimum of a curve. At the optimum of a curve, the first derivative with respect to the variable of the function vanishes. If we calculate the optimal path of the action, we have to take the derivative with respect to the path and then again ask where it vanishes. And this is what the squiggle means. It’s a sloppy way to say: take the derivative with respect to the path. That derivative has to vanish, which means the action is optimal, and it is usually a minimum, hence the principle of least action.

Okay, you may say but you don’t care all that much about paths of rocks. Alright, but here’s the thing. If we leave aside quantum mechanics for a moment, there’s an action for everything. For point particles and rocks and arrows and that stuff, the action is the integral over the kinetic energy minus potential energy.

But there is also an action that gives you electrodynamics. And there’s an action that gives you general relativity. In each of these cases, if you ask what the system must do to give you the least action, then that’s what actually happens in nature. You can also get the principle of least time and of the shortest path back out of the least action in special cases.

And yes, the principle of least action really uses an integral into the future. How do we explain that?

Well. It turns out that there is another way to express the principle of least action. One can mathematically show that the path which minimizes the action is that path which fulfils a set of differential equations which are called the Euler-Lagrange Equations.

For example, the Euler-Lagrange equations of the rock example just give you Newton’s second law. The Euler-Lagrange equations for electrodynamics are Maxwell’s equations, the Euler-Lagrange equations for general relativity are Einstein’s field equations. And in these equations, you don’t need to know anything about the future. So you can make this future dependence go away.
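For the rock, this works out as follows. With the Lagrangian L equal to kinetic minus potential energy, the Euler-Lagrange equation

\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0, \qquad L = \tfrac{1}{2} m \dot{x}^2 - V(x)

gives m\ddot{x} = -\partial V / \partial x, which is Newton’s second law: mass times acceleration equals force.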

What’s with quantum mechanics? In quantum mechanics, the principle of least action works somewhat differently. In this case, a particle doesn’t just take one optimal path. It actually takes all paths. Each of these paths has its own action. It’s not only that the particle takes all paths, it also goes to all possible endpoints. But if you eventually measure the particle, the wave-function “collapses”, and the particle is only in one point. This means that these paths really only tell you the probability for the particle to go one way or another. You calculate the probability for the particle to go to one point by summing over all paths that go there.
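Schematically, in the standard expression that the transcript doesn’t spell out, the amplitude to go from A to B is a sum over all paths, each weighted by a phase given by its action:

\mathcal{A}(A \to B) \;\propto\; \sum_{\text{paths}} e^{\,i S[\text{path}]/\hbar}

and the probability is the square of the amplitude. Paths close to the least-action path add up in phase while the others largely cancel, which is how the classical principle of least action re-emerges from the quantum sum.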

This interpretation of quantum mechanics was introduced by Richard Feynman and is therefore now called the Feynman path integral. What happens with the strange dependence on the future in the Feynman path integral? Well, technically it’s there in the mathematics. But to do the calculation you don’t need to know what happens in the future, because the particle goes to all points anyway.

Except, hmm, it doesn’t. In reality it goes to only one point. So maybe the reason we need the measurement postulate is that we don’t take this dependence on the future which we have in the path integral seriously enough.