Saturday, June 25, 2022

Whatever happened to the Bee Apocalypse?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


15 years ago, dying bees were all over the news. Scientists called it the “colony collapse disorder”, headlines were warning of honey bees going extinct. Some suspected a virus, some pesticides, parasites, or a fungus. They spoke of a “honeybee apocalypse”, a “beepocalypse”, a “bee murder mystery” and the “head scratching case of the vanishing bees”, which are all names of movies I wouldn’t watch, and in any case the boring truth is that the honey bees are doing fine. It’s the wild bees that are in danger. Whatever happened to the bees and how much of a problem is it? That’s what we’ll talk about today.

The honeybees started dying in 2006. Beekeepers began to report unusually high losses, in some cases 30 to 90 percent of their hives. In most of those cases the symptoms were inconsistent with known causes of honey bee death. The colonies had no shortage of honey or pollen, but the worker bees suddenly largely disappeared. Only a few of the bees were found dead; most of them just never returned to the hive. The queen and her brood remained, but without worker bees they could not sustain themselves.

Scientists called it the Colony Collapse Disorder and they had many hypotheses for its cause. Some suspected mites, some a new type of virus, some blamed pesticides. Or maybe stress due to mismanagement or habitat changes or poor nutrition. And some suspected… the US government.

In the past few years, the number of bees dying from Colony Collapse Disorder has decreased, but we still don’t know what caused it. A 2009 paper that looked at 61 possible explanations concluded that “no single factor was found with enough consistency to suggest one causal agent.”

Quite possibly the issue is that there’s no single reason the bees are dying but rather a combination of many different stressors that amplify each other. It’s parasites and pesticides and disease and a decreasing diversity of plants and loss of habitat. I know this may be controversial to some of you, but not all of your meals should be cheese. It’s not good for you. And it isn’t good for bees either – they should have more than one source of nutrition. Bees also probably shouldn’t eat cheese, so as humans we’re a little better off. But much like for us, variety in nutrition is necessary for the bees to stay healthy, and if they are faced with large areas of monocultures, diverse nutrition is hard to come by. It doesn’t help if those monocultures are full of pesticides.

The issue with pesticides alone is far worse than originally recognized, because if several of them are used together their effects on the bees can amplify each other. Just a few months ago, a group of researchers from the US and the UK published a paper in Nature in which they present a meta-analysis of 90 studies in which bees were exposed to combinations of chemicals used in agriculture. They found that if you expose bees to one pesticide that kills 10 percent and another pesticide that kills another 10 percent, it’s possible that both together kill as much as 40 percent. The reason for this isn’t that bees are bad at maths, but that effects of chemicals on living creatures don’t add linearly.
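To make the arithmetic concrete, here is a minimal Python sketch; the 10 and 40 percent figures are just the illustrative numbers from the paragraph above, not data taken from the paper. It compares what two non-interacting pesticides would do with the synergistic case:

```python
# Two pesticides, each killing 10% of bees on its own.
p1, p2 = 0.10, 0.10

# If the two acted independently, combined mortality would be
# 1 - (survival under pesticide 1) * (survival under pesticide 2).
independent = 1 - (1 - p1) * (1 - p2)
print(f"non-interacting combination: {independent:.0%}")  # 19%

# A synergistic interaction can push observed mortality well above that,
# e.g. the 40% mentioned above -- the effects don't combine linearly.
observed = 0.40
print(f"synergistic example:         {observed:.0%}")     # 40%
```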

Pesticides and infections still plague honey bees, though they make fewer headlines these days. For example, in 2020 (1 April 2020 – 1 April 2021), beekeepers in the United States lost almost half (45.5%) of their managed honey bee colonies. The major reason that beekeepers reported was a parasitic mite.

However, the numbers may sound more alarming than they really are, because honey bees are efficiently bred and managed by humans. Even if a large fraction of them dies each year, they repopulate quickly and the overall population decline is small.

In fact, honey bees were brought to most places by humans in the first place. They are native to Europe and northern Africa, and from there they were introduced by salesmen to every other continent except Antarctica. Today there are over 90 million beehives in the world. The typical population of a hive is 20 thousand to 80 thousand. This means that all together there are a few trillion honeybees in the world today.
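For those who like to check the arithmetic, the back-of-the-envelope estimate behind “a few trillion” is simply:

```python
hives = 90_000_000          # over 90 million beehives worldwide
low   = hives * 20_000      # a typical hive holds 20,000 ...
high  = hives * 80_000      # ... to 80,000 bees
print(f"{low:,} to {high:,} honeybees")  # 1.8 to 7.2 trillion
```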

And while the rate of colony losses in the United States is around 40-45 percent each year, which sounds like a lot, the total number of honey bees has been reasonably stable for the past twenty years or so, though it was higher in the 1940s. Globally, the number of honeybees has in fact increased by about 45% in the last 50 years.

The reason that you still read about beekeepers who are sounding the alarm has nothing to do with honey, but that the demand for pollinators in agriculture has increased faster than the supply. The fraction of agriculture that depends on animal pollination has tripled during the last half century, whereas the number of honeybees hasn’t even doubled. How big of a problem that is depends strongly on the type of crops a country grows. And it depends on the wild bees.

And that brings us to the actual problem. The honey bee (Apis mellifera) is only one of about 20 thousand different bee species. The non-honey bees are usually referred to as wild bees, and each location has its native species. According to an estimate from researchers at Cornell University in 2006, wild bees contribute to the pollination of 85 percent of crops in agriculture.

While their contribution is in most cases small, about 20% on average, in some cases they do most of the work, for example for lemons, grapes, olives, strawberries, pumpkins and peanuts, for which 90 percent of the pollination comes from wild bees. So it’s not that we’d die without them, but according to the Food and Agriculture Organization of the United Nations, losing the wild bees would be a major threat to human food security, and without doubt a serious letdown to those who enjoy eating lemon grape olive strawberry pumpkin peanut sandwiches.

Wild bees differ from honey bees in a number of ways. Honeybees live in colonies of several tens of thousands. They are social bees which means they like hanging out together at bee malls and have little bee block parties. No need to look that up, I'm a physicist, you can trust me on these things.

The majority of wild bees, by contrast, lead solitary lives. They get together to mate and then separate again. The female lays her eggs, collects enough food for the larvae, and then leaves her offspring alone, which finally proves that anti-authoritarian parenting does work. If you’re a wild bee.

Honeybees aren’t the only bees that produce honey, but they are by far the ones who produce most of it. Most wild bees don’t produce honey. But they are important pollinators. Since wild bees have been around for so long, they’ve become very specialized at pollinating certain plants. And for those plants, replacing them with honey bees is very difficult. For example, squash flowers are open only until the early morning, a time at which many honeybees are still sleeping because they were partying a little too hard at their block party the night before. But a type of wild bee aptly named the squash bee (genera Peponapis and Xenoglossa) wakes up very early to do that job.

Maybe the biggest difference between honey bees and wild bees is that wild bees receive very little attention because no one sees them dying. They are struggling with the same problems as the honey bees, but don’t have beekeepers who weep over them, and data for their decline have been hard to come by.

But last year the journal Cell published the results of a study with a global estimate for the situation of wild bees. The authors looked at the numbers of bee species that were collected or observed over time, using data publicly available at the Global Biodiversity Information Facility. They found that even though the number of records has been increasing, the number of different species in the records has been sharply decreasing in the past decades.

The decline rates differ between the continents, but the species numbers are dropping steeply everywhere except for Oceania. The researchers say there’s a number of factors in play here, such as the expansion of monocultures, loss of native habitat, pesticides, climate change, and bee trade that also trades around pathogens.

So the problems that wild bees face are similar to those of honey bees, but they have an additional problem which is… honeybees. Honey bees compete with wild bees for food and habitat and they also pass on viruses. Now, a big honey bee colony can deal with viruses by throwing out the infected bees. But this doesn’t work for wild bees because they don’t live in large colonies. And worse, when honey bees and wild bees fight for food they seem to both lose out.

In 2018 researchers from France published a paper in Nature which reported that in areas with high-density beekeeping the success of wild bees in finding nectar dropped by about 50 percent, and that of honeybees was reduced by about 40 percent. Something similar had been observed three years earlier by German researchers.

How bad is the situation for the wild bees? Hard to say. While we have estimates for the number of wild bee species, we don’t know how many wild bees there even are. We contacted almost a dozen experts and the brief summary is that it’s a really difficult question. Most of them just said they didn’t know, and a few said probably about as many as there are honey bees but that’s just an educated guess. So we don’t even know what we are doing to the environment.   

If all this sounds really complicated, that’s indeed the major message. Forget about quantum gravity: ecological systems are way more complex. There are so many things going on that we never had a chance to properly study in the first place, so we have no idea what’s happening now.

What we do know is that we’ve been changing the ecosystems around us a lot. That has reduced and continues to reduce biodiversity significantly. And the decrease in biodiversity decreases the resilience of the ecosystems, which means that sooner or later parts of them will break down.

It’s really just a matter of time until there’ll be too few bees to pollinate some of the flowers or too few insects to support some of the birds, or too few birds to spread seeds and so on. And we may be able to fix a few of these problems with technology, but not all of them. So, while it is important to talk to your kids about the birds and the bees, it really is important to talk to your kids about the birds and the bees.

We simply don’t know what’s going to happen in response to what we do, and I’m afraid we’re not paying attention, which is why I’m standing here recording this video. Because if we don’t pay attention, one day we’ll be surprised to be reminded that in the end we, too, are just part of the ecosystem.

So if you want to help the bees, don’t buy a bee hive. The honeybees are not at risk, precisely because you can buy them. What’s at risk are natural resources that we exploit but that we haven’t put a price on. Like clean air, rain, or wild bees. If you have a garden, you can help the wild bees by preserving the variety of native flowers. Quite literally, let a thousand flowers bloom.

Saturday, June 18, 2022

Why does science news suck so much?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


I read a lot of news about science and science policies. This probably doesn’t surprise you. But it may surprise you that most of the time I find science news extremely annoying. It seems to be written for an audience which doesn’t know the first thing about science. But I wonder, is it just me who finds this annoying? So, in this video I’ll tell you the 10 things that annoy me most about science news, and then I want to hear what you think about this. Why does science news suck so much? That’s what we’ll talk about today.

1. Show me your uncertainty estimate.

I’ll start with my pet peeve which is numbers without uncertainty estimates. Example: You have 3 months left to live. Plus minus 100 years. Uncertainty estimates make a difference for how to interpret numbers. But science news quotes numbers all the time without mentioning the uncertainty estimates, confidence levels, or error bars.  

Here’s a bad example from NBC news, “The global death toll from Covid-19 topped 5 million on Monday”.

Exactly 5 million on exactly that day? Probably not. But if not exactly, then just how large is the uncertainty? Here’s an example for how to do it right, from The Economist, with a central estimate and an upper and lower estimate.

The problem I have with this is that when I don’t see the error bars, I don’t know whether I can take the numbers seriously at all. In case you’ve wondered what this weird channel logo shows, that’s supposed to be a data point with an error bar.

2. Cite your sources

I constantly see websites that write about a study that was recently published in some magazine by someone from some university, but that don’t link to the actual study. I then have to search for those researchers’ names and look up their publication lists to find what the news article was referring to.

Here’s an example for how not to do it from the Guardian. This work is published in the journal Physical Review Letters. This isn’t helpful. Here’s the same paper covered by the BBC. This one has a link. That’s how you do it.

Another problem with sources is that science news also frequently just repeats press releases without actually saying where they got their information from. It’s a problem because university press releases aren’t exactly unbiased.

In fact, a study published in 2014 found that in biomedical research as many as 40 percent of press releases contain exaggerated results.

Since you ask, the 95 percent confidence interval is 33 to 46 percent.
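In case you’re wondering where such an interval comes from, here is a minimal sketch using the usual normal approximation for a proportion; the sample size below is hypothetical, chosen only to roughly reproduce that width, and is not the actual size of the study:

```python
from math import sqrt

p = 0.40   # estimated fraction of exaggerated press releases
n = 200    # hypothetical sample size, for illustration only

se = sqrt(p * (1 - p) / n)                 # standard error of a proportion
low, high = p - 1.96 * se, p + 1.96 * se   # 95% confidence interval
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 33% to 47%
```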

A similar study in 2018 found a somewhat lower percentage of about 23 percent but still that’s a lot. In short, press releases are not reliable sources and neither are sources that don’t cite their sources.

3. Put a date on it

It happens a lot on social media, that magazines share the same article repeatedly, without mentioning it’s an old story. I’ve unfollowed a lot of pages because they’re wasting my time this way. In addition, some pages don’t have a date at the top, so I might read several paragraphs before figuring out that this is a story from two years ago.

A bad example for this is Aeon. It’s otherwise a really interesting magazine, but they hide the date in tiny font at the bottom of long essays. Please put the date on the top. Better still, if it’s an old story, make sure the reader can’t miss the date. Here’s an example for how to do it from the Guardian.

4. Tell me the history

Related to the previous one: selling an old story as new by forgetting to mention that it’s been done before. An example is this story from 2019 about a paper which proposed to use certain types of rocks as natural particle detectors to search for dark matter. The authors of the paper call this paleo-detectors. And in the paper they write clearly on the first page: “Our work on paleo-detectors builds on a long history of experiments.” But the Quanta Magazine article makes it sound like it’s a new idea.

This matters because knowing that it’s an old idea tells you two things. First, it probably isn’t entirely crazy. And second, it’s probably a gradual improvement rather than a sudden big breakthrough. That’s relevant context.

5. Don’t oversimplify it

For many questions of science policy, there just isn’t a simple answer, there is no good solution, and sometimes the best answer we have is “we don’t know.” Sometimes all possible solutions to a problem suck and trying to decide which one is the least bad option is difficult. But science news often presents simple answers and solutions probably thinking it’ll appeal to the reader.

What to do about climate change is a good example. Have a look at this recent piece in the Guardian. “Climate change can feel complex, but the IPCC has worked hard to make it simple for us.” Yeah, it only took them 3000 pages. Look, if the problem was indeed simple to solve, then why haven’t we solved it? Maybe because it isn’t so simple? Because there are so many aspects to consider, and each country has its own problems, and one size doesn’t fit all. Pretending it’s simple when it isn’t doesn’t help us work out a solution.

6. It depends, but on what?

Related to the previous item: if you ask a scientist a question, then frequently the answer is “it depends”. Will this new treatment cure cancer? Well, it depends on the patient, what cancer they have, how long they’ve had it, whether you trust the results of this paper, whether that study will get funded, and so on and so forth. Is nuclear power a good way to curb carbon dioxide emissions? Well, it depends on how much wind blows in your corner of the earth, how high the earthquake risk is, how much space you have for solar panels, and so on. If science news doesn’t mention such qualifiers, I have to throw out the entire argument.

A particularly annoying special case of this are news pages which don’t tell you what country study participants were recruited from or where a poll was conducted. They just assume that everyone who comes to their website must know what country they’re located in.   

7. Tell me the whole story.

A lot of science news is guilty of lying by omission. I have talked about several cases of this in earlier videos.

For example, stories about how climate models have correctly predicted the trend of the temperature anomaly that fail to mention that the same models are miserable at predicting the total temperature. Or stories about nuclear fusion that don’t tell you the total energy input. Yet another example are stories about exciting new experiments looking for some new particle that don’t tell you there’s no reason these particles should exist in the first place. Or stories about how the increasing temperatures from climate change kill people in heat waves, but fail to mention that the same increasing temperatures also save lives because fewer people freeze to death. Yeah, I don’t trust any of these sources.

8. Spare me the human interest stuff

A currently very common style of science writing is to weave an archetypical hero story of someone facing a challenge they have to overcome. You know, someone who encountered this big problem and set out to solve it, but they made enemies, and then they made a friend, and they made a discovery but it didn’t work, and… and by that time I’ve fallen asleep. Really, please just get to the point already. What’s new and how does it matter? I don’t care if the lead author is married.

9. Don’t forget that science is fallible

A lot of media coverage on science policy remembers that science is fallible only when it’s convenient for them. When they’ve proclaimed something as fact that later turns out to be wrong, then they’ll blame science. Because science is fallible. Facemasks? Yeah, well, we lacked the data. Alright.

But that’d be more convincing if science news acknowledged that their information might be wrong in the first place. The population bomb? Peak oil? The new ice age? Yeah, maybe if they’d made it clearer at the time that those stories might not pan out the way they said then we wouldn’t today have to cope with climate change deniers who think the media can’t tell fact from fiction.

10. Science doesn’t work by consensus

Science doesn’t work by voting on hypotheses. As Kuhn pointed out correctly, the scientific consensus can change quite suddenly. And if you’re writing science news then most of your audience knows that. So referring to the scientific consensus is more likely to annoy them rather than to inform them. And in any case, interpreting poll results is science in itself.

Take the results of this recent poll among geoscientists mostly in the United States and Canada, all associated with some research facility. They only counted replies from those participants who selected climate science and/or atmospheric science within their top three areas of research expertise.

They found that among the people who have worked in the field the longest, 20 years or more, more than 5% think climate change is due to natural causes. So what does this mean? That there’s a 5% chance it’s just a statistical fluke?

Well, no, because science doesn’t work by consensus. It doesn’t matter how many people agree on one thing or another, or, for that matter, how long they’ve been in a field. It merely matters how good their evidence is.

To me, quoting the “scientific consensus” is an excuse that science journalists use for not even making an effort to actually explain the science. Maybe every once in a while an article about climate change should actually explain how the greenhouse effect works. Because, see earlier, it’s not as simple as it seems. And I suspect the reason that we still have a substantial fraction of climate change skeptics and deniers is not that they haven’t been told often enough what the consensus is. But that they don’t understand the science, and don’t understand that they don’t understand it. And that’s mostly because science news doesn’t explain it.

A good example for how to do it right: Lawrence Krauss’s book on the physics of climate change.

Okay, so those are my top ten misgivings about science news. Let me know what you think about this in the comments.

Saturday, June 11, 2022

Can particles really be in two places at once?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Today I want to talk to you about what happened when I wrote an opinion piece for the Guardian about quantum computing, had to explain what a qubit is, and decided against using the phrase that it “can be in two states at the same time”. What happened next and what did I learn from it? That’s what we’ll talk about today.

Three years ago, just before Google’s first demonstration of quantum supremacy, I wrote a brief essay for the Guardian about how that isn’t going to change the world. Quantum supremacy has since been renamed to quantum advantage, but as you can see it indeed hasn’t changed the world. That I wrote this so shortly before the publication of the Google paper was of course totally entirely coincidental.

Now, you can’t write a popular science piece about quantum computing without explaining how a quantum computer works, and that normally involves fumbling together a paragraph that no one who doesn’t already know how a quantum computer works will understand, but that has to give the reader the impression they understood it. This means you can’t use words like “superposition”, “Hilbert space”, “complex number” or “Bloch sphere”. Wait, don’t leave. I’ll explain what the Bloch sphere is in a minute.

So what I wrote was
“while a standard computer handles digital bits of 0s and 1s, quantum computers use quantum bits or qubits which can take any value between 0 and 1” and “when qubits are connected by quantum entanglement... such machines can rattle out computations that would take billions of years on a traditional computer”.
Though I am pretty sure the phrase “rattle out” came from the editor because I’m not usually that eloquent.

By writing this I wanted to get across two points. First, the phrase that “a qubit can be in two states at the same time” which you have probably read or heard somewhere makes no sense and would in my opinion better be avoided. Second, it’s the entanglement that makes the difference between a conventional computer and a quantum computer.

Why do I say that a qubit can take any value between 0 and 1? Well, a qubit is the simplest example of a wave-function. Here’s the mathematical expression. Remember what I told you in my earlier video that these mysterious looking brackets really just mean these things are vectors. So the zero and the one are two basis vectors. And then a qubit is a sum of those two basis vectors with coefficients in front of them. That sum is what’s called a superposition.
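For readers who can’t see the animation, the expression in standard notation is

$$ |\psi\rangle \;=\; \alpha\,|0\rangle \;+\; \beta\,|1\rangle, $$

with the two coefficients in front of the basis vectors written as alpha and beta.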

You might think this is like having vectors in a two-dimensional flat space, but this isn’t quite right. That’s because the wave-function describes probabilities. This means if you take the absolute squares of the coefficients in front of the basis vectors, they have to add up to one. And also, these coefficients can be complex numbers. This is why, if you want to draw all possible qubit states, they do not lie in a flat grid, they lie on the surface of a sphere. This is the Bloch sphere.

The Bloch sphere is commonly drawn so that the state 0 points to the north and the state 1 points to the south pole. So what’s an arbitrary qubit state? Well, all places on the surface of Earth lie between the north and south pole, and all qubit states lie on the Bloch sphere between 0 and 1. That’s why I wrote what I wrote in my article.
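In formulas, the normalization condition and a standard parametrization of the states on the Bloch sphere (written out here since the animation isn’t visible in the transcript) read

$$ |\alpha|^2 + |\beta|^2 = 1, \qquad |\psi\rangle \;=\; \cos\!\big(\tfrac{\theta}{2}\big)\,|0\rangle \;+\; e^{i\phi}\,\sin\!\big(\tfrac{\theta}{2}\big)\,|1\rangle, $$

where theta is the polar angle (theta = 0 is the state 0 at the north pole, theta = pi is the state 1 at the south pole), phi is the azimuthal angle, and an overall phase has been dropped.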

Before we look at what happened in the comments, a big thank you to our supporters on Patreon, and especially those in tier four. If you want to see more of our videos, you can help by joining us on Patreon or right here on YouTube by clicking on the join button below. Let’s then see what happened in the comments. Pretty much as soon as the piece was published, someone wrote: “That’s an analogue computer, not a quantum computer! A qubit can have a superposition of the values 0 and 1.”

Yes, but you can’t just write “superposition” in a popular science article without explaining what that is. And the superpositions in quantum computers are indeed similar to analogue computers, just that the values “in between” can be complex numbers. This doesn’t mean that quantum computers are just conventional analogue computers. The relevant property that sets quantum computers apart from conventional computers is that you can entangle those qubits which you can’t do with a conventional computer, regardless of whether it’s digital or analogue. As I’d written in my article.

Next time I looked at the comments someone had replied: “Never understood why they give this kind of story to an arts grad.” Next person: “Actually she’s a physicist, a string theorist and a good one at that.” Ah, arts grad, string theorist, same thing really. Next. “I’m an “arts grad” with over 30 years experience in IT. I expect better writing and research about this subject from an “arts grad”.” Yes. Let that be a lesson to all the string theory arts grads writing about quantum computing. I couldn’t think of anything polite to respond, so I instead replied to some other comments. And luckily, next time I looked, two people had shown up to explain the matter. The first wrote:
“the author meant x |0> + y|1> as lying “between” the pure states |0> and |1> ; “between” in state space, not on the real number line.”
Exactly, they lie between 0 and 1 in state space, which can be illustrated by the Bloch sphere. All states on the Bloch sphere are pure states. Then another comment:
“you fail to take into account the near impossibility of explaining the concept of a superposition in PopSci language, which does not allow for concepts like “complex number”... Try by those rules yourself and see if you can produce anything that does not amount to “sort of like an average”, which would in this context be equivalent to “any number between 0 and 1”.”
This indeed captures the difficulty well. Finally, someone points out that I’m not a string theorist, and they lived happily ever after, the end.

So why am I telling you this? Well for one I want to belatedly thank those commenters for taking the time to sort this out. But also, I’ve been thinking about this episode quite a bit and wondered what went wrong there.

I believe the problem is that when we write about quantum mechanics we’re faced with the task of converting mathematical expressions into language. And regardless of which language we use, English, German, Chinese, or whatever, our language didn’t evolve to describe quantum behavior. So all the words that we can come up with will be wrong and will be misleading. There’s no way to get it right.

What’s a superposition? A superposition is a sum of vectors in a Hilbert space. Alright. But if one of the vectors is a particle going left and the other a particle going right, what does this superposition mean? I don’t know. Could you say it’s a particle going into both directions? I guess you could say that. I mean, you just said it, so arguably you can. But is that what it actually is? I don’t think so.

For one it’d be more accurate to say that the wave-function “describes” a particle instead of saying that it “is” a particle. But maybe more importantly, I don’t think such a superposition is anything in the space we inhabit. It’s a vector in this mathematical structure we call the Hilbert space. And what does that mean? I don’t know. I don’t think there are any words in our language to explain what it “means”.

I still think that the explanation that I gave for a quantum bit was more truthful to the mathematics than the more commonly used phrase that it can be in “two states at once”. But I also think we have to accept that regardless of what language we use to describe quantum mechanics, it will never be correct. Because our language isn’t fit to describe something we cannot experience.

Should this worry us? Does this mean there’s something wrong with quantum mechanics as a scientific theory? I don’t think so. I think it’d be surprising if it was otherwise. Quantum mechanics describes the behavior of matter in circumstances we don’t observe in daily life. We’ve never needed the language to explain quantum behavior so we don’t have it.

To give you a second opinion I've asked Arvin Ash to tell us what he thinks. Arvin is an expert in science communication in general and quantum mechanics in particular. He told me the following.

Hi Sabine. As I tried to illustrate in a recent video, the root of the problem and cause of so much confusion in quantum mechanics is the fact that when we measure things, that is, whenever we have the opportunity to actually observe a quantum object, it seems to lose its quantum behavior. The superposition is lost and what we see is something that looks like it’s behaving classically. This is the case with the double slit experiment, where individual photons or electrons always show up as dots on the screen rather than as some kind of wave, and we only see the wave interference pattern when we shoot many photons through the slits. Or when we observe quantum particles in cloud chambers, they leave trails as if they were little cannonballs.

This is also the case for quantum computers. While the math describes superposed states of quantum bits as, pardon the language, taking on any value between zero and one, when the computer actually makes a measurement the bits are either zero or one, never in between. So while the calculation is quantum, the result is binary. We are surrounded by a quantum world we can’t directly observe, and when we sample this world by taking measurements, the quantum phenomena convert to classical results.

It's hard to describe something you can't ever experience. We humans like to make connections with familiar things. If we could see it, we could describe it. The math of quantum mechanics has no obvious classical analog. We could, however, certainly be more precise in our language. But I do think that in the future quantum mechanics will become more intuitive to more people through watching videos like this. Thanks for having me.
I think that’s a good point. Arvin has his own YouTube channel and if you find my channel interesting, I’m sure you’ll like his too, so go check it out.

When it comes to quantum mechanics, I think what will happen in the long run is that the mathematical expressions will just become better known and we will use them more widely. Like we’ve become used to talking about electromagnetic radiation. That was once a highly abstract mathematical concept, waves that travel through empty space, rather than traveling in some medium. But we now use electromagnetic radiation so frequently that it’s become part of our everyday language. I think that it’ll go the same way with qubits and superpositions.

Saturday, June 04, 2022

Trans women in sports: Is this fair?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


If you see a trans woman, like the American Swimmer Lia Thomas, in an athletic competition against women born female, it’s difficult not to ask whether that’s fair. In this video I want to look at what science says about this. How much of an advantage do men have over women and what, if anything, does hormone therapy change about it? That’s what we’ll talk about today.

The vast majority of humans can fairly easily be classified as either biologically male or female. The two sexes have rather obvious differences in inner and outer organs and those differences are strongly correlated with the expressions of the twenty-third chromosome pair, as you remember from your school days, XY for men, and XX for women. So far, so clear.

But this classification of the two biological sexes doesn’t always work. There is a surprisingly large variety of what’s known as “intersex” conditions in which the biological expression may be ambiguous or doesn’t align with the chromosome expression. Some cases are truly amazing.

For example, in 2014 researchers from India presented the case of a 70 year old man, father of four, who underwent surgery. The doctors discovered he also had a uterus and fallopian tubes. His chromosomes however turned out to be the standard male 46 XY.

Another example is the Spanish hurdler Martínez-Patiño who has feminine genitals but XY chromosomes and internal testes, something that she was herself unaware of until the age of 25. After this was discovered, she was banned from competing in the 1986 Olympics.

Such cases are called disorders of sex development, DSD for short. They are rare but not as rare as you may think. According to various estimates that you find in the literature they affect between one in a thousand and one in fifty.

This means that in the United States there are between about three hundred thousand and 7 million intersex people, and globally between 8 million and 160 million. Part of the problem with making those estimates is that the definition of an intersex condition is somewhat ambiguous.

Now, most intersex people aren’t transgender and most transgender people aren’t intersex, but the question what to do with intersex people in competitive sports is a precedent to the newer question of what to do with trans athletes. And Martínez-Patiño illustrates the difficulty.

Her condition is called hyperandrogenic 46XY DSD. In the general population, it happens at a rate of about 1 in 20,000 births. Women with this condition have elevated levels of the male hormone testosterone. In 2014, a study by European researchers found that in elite female athletes the rate of this disorder is about 140 times higher than in the general population.

Testosterone helps muscles grow, strengthens bones, and increases levels of hemoglobin in the blood, which benefits oxygen transport. Synthetic forms of testosterone are often used for doping.

In case you think this pretty much settles the case that high testosterone levels are an unfair advantage for women, think again. Because the reason Martínez-Patiño was assigned female at birth to begin with is that she has what’s called complete androgen insensitivity syndrome. She has high levels of testosterone, yes, but her body doesn’t react to it. After being banned from the Olympics she appealed, arguing that she has no advantage from her elevated testosterone levels, and the ban was indeed lifted.

However, there are other conditions that can lead to elevated testosterone levels in women. To make matters more complicated, there is a large natural variation in testosterone levels among both men and women and since women with naturally high testosterone levels tend to perform well in sports, they are overrepresented among athletes.

As a consequence, the testosterone distribution for male and female athletes has a big overlap. A paper published by researchers from Ireland and the UK in 2014 showed the hormonal profiles of about 700 elite athletes across 15 sports.

They found that 16.5% of men had low testosterone levels, 13.7% of women had high levels, and there was a significant overlap between them.

So as you see, the business with the testosterone levels is much more difficult than you might think, and that doesn’t even touch the question how relevant it is. The advantages stemming from testosterone are particularly pronounced in disciplines that require upper body strength, and less pronounced in those that test endurance.

Having said that, let’s then talk about the trans athletes. Transgender people don’t identify with the sex they have been assigned at birth. Leaving aside the intersex conditions, this means a trans man was born biologically female and a trans woman was born biologically male. People-who-aren’t-trans are commonly referred to as cis.

Some trans people undergo surgery and/or hormone therapy to adjust their physical appearance to their gender. The results of the transition treatment differ dramatically depending on whether it’s started before or after puberty. During puberty, boys grow significantly more than girls and they develop more muscles whereas, to quote Meghan Trainor, women get all the right junk in all the right places. Physical changes during puberty are partly irreversible and transitioning later will not entirely undo them.

In 1990, a seminar convened by the World Athletics Federations recommended that any person who had undergone gender reassignment surgery before puberty should be accepted to participate under their new gender. This isn’t particularly controversial. The controversial bit is what to do with those who transition after puberty.

The International Olympic Committee has been leading the way on passing regulations. In 2004 they ruled that transgender athletes are allowed to compete two years after surgical anatomical changes have been completed and if they’ve undergone hormonal therapy. This means in particular that trans women must have taken hormone therapy for at least two years.

In 2021, the committee issued a framework that allows international federations to develop their own eligibility criteria for transgender and intersex athletes, so there’s no simple rule that applies to all disciplines.

But does the hormonal treatment make it fair for trans women to compete with cis women? 

In 2019 a team of European researchers from the Netherlands, Norway and Belgium measured the change in grip strength in trans people after a year of hormonal therapy. About 250 trans women and 250 trans men participated in their study, so this isn’t a huge sample, but it’s decent.

They found that grip strength decreased in trans women by 1.8 kilograms but increased in trans men by 6.1 kilograms. In trans men, but not in trans women, the change in grip strength was associated with a change in lean body mass. So it seems that hormonal therapy does more for trans men than for trans women.

Another team of researchers from Sweden followed 11 untrained trans women and 12 untrained trans men before and up to one year after gender-affirming hormonal therapy.

They found that in trans women thigh muscle volume decreased by 5 percent and quadriceps cross-sectional area decreased by 4 percent, but muscle density remained unchanged and they roughly maintained their strength levels. In trans men, on the other hand, thigh muscle volume increased by 15 percent; quadriceps cross-sectional area also increased by 15 percent, muscle density increased by 6 percent, and they saw increased strength levels. Again it seems that hormonal therapy does more for trans men than for trans women.  

It’d be rather tedious to list all the papers, so let me just say that this finding has been reproduced numerous times. A meta-analysis from March last year surveyed two dozen studies and concluded that, even after 36 months of hormonal therapy, the values for strength, lean body mass and muscle area in trans women remained above those of cis women.

These numbers aren’t directly applicable to athletes because in the general population trans men have an incentive to build muscle while trans women have an incentive to lose it. But those studies pretty much agree that hormone therapy makes a faster difference for trans men than for trans women, and after 3 years the difference hasn’t entirely disappeared.

There is basically no data on what this hormone treatment does in the long run. A 2021 paper from Brazil suggests that after about 15 years differences between trans and cis women have basically disappeared. But this was a very small study with only 8 participants. And in any case, if you ask athletes to wait 15 years, they’ll be too old for the Olympics.

So let us come back to the question then whether it’s “fair” for trans women to compete with cis women. It seems clear from the data that trans women keep an advantage over cis women, even after several years of hormonal therapy. I guess that means it isn’t fair in the sense that no amount of training that cis women can do is going to make up for male puberty.

But then, athletic competition has never been fair in that sense. To begin with, let’s not forget that for athletic performance the most important factor isn’t your sex, it’s your age. And some people are born with an advantage at certain types of sports, being male is only one of them. Usain Bolt has long legs. Michael Phelps has big feet. And American basketball players are tall.

Really tall. Here’s the American under-16 women’s basketball team next to the team from El Salvador. The Americans won 114-19. Is that fair?

And those are just the visible differences. There are also factors like bone density, cardiac output, or lung volume that are partly genetically determined. I never had a chance to become an Olympic swimmer. Is that fair? 

No. Athletes are biological extremes. “Fairness” has never been the point of these competitions. They’re really more like freak shows. Kind of like physics conferences.

There’s another aspect to consider, which is that these competitions should also entertain. I guess this is why the researcher Joanna Harper, who is a trans athlete herself, has suggested we talk about “meaningful competition” instead of “fair competition”. We have historically segregated men and women in sporting events because otherwise competition becomes too predictable, too boring. In some disciplines we have further categories for the same reason, like in weight lifting and boxing.

Now we’re asking whether we need additional categories for trans athletes. Alright, we could do this. But if you follow this logic to its conclusion, then really the only person you can compete with is yourself.

Or you will have to try and measure every single parameter that contributes to athletic performance in a given discipline and then try to adjust for it. The result may be that the person who comes in last in a race is the winner, after you adjust for heart valve issues, testosterone levels, age, slightly misaligned legs, under-average lung volume, and a number of other conditions. And that would be “fair” in the sense that now everyone had a chance to win provided they trained hard enough. But would people still watch it?

The question of entertainment brings up another issue. Most of the sport disciplines that are currently widely broadcast favor biological characteristics typically associated with men, with the possible exception of long distance swimming. But generally sex differences decrease the more emphasis a discipline puts on endurance rather than strength.

A 2019 study among casual athletes found that men still have an edge over women in marathons, but somewhere between 100 and 200 miles women begin to win out. Though this is currently not reflected in the world records, where men are still leading, it seems that men have less of an advantage in endurance disciplines. Which brings up the question: Why don’t we see more of such sporting events? I don’t know for sure, but personally I find it hard to think of something more boring than watching someone run 200 miles. So maybe the solution is that we’ll all just do esports in the end.

But let’s come back to the trans athletes. Researchers from the University of California estimated in 2017 that the percentage of transgender people in the United States is about 0.4 percent (0.39%). A similar estimate for Brazil put the number there at about 0.7 percent (0.69%). If these numbers are roughly correct, transgender people are currently underrepresented in elite-level sports. That isn’t fair either. This is why I think sporting associations are doing the right thing by putting forward regulations based on the best available scientific evidence, and as long as athletes comply with them, they shouldn’t have to shoulder accusations of unfair competition.

That said, professional sports associations will soon have a much bigger problem. Like it or not, genetic engineering has become reality. And as long as athletes can make a lot of money from having a genetic advantage, someone’s going to breed children who’ll bring in that money. This is why I suspect a century from now professional athletics will not exist anymore. It creates too many incentives for unethical behavior.

I hope this brief summary has helped you make sense of a somewhat confusing situation. 

Saturday, May 28, 2022

Chaos: The Real Problem with Quantum Mechanics

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


You’ve probably seen a lot of headlines claiming that quantum mechanics is “strange”, “weird” or “spooky”. In the best case it’s “unintuitive” and “no one understands it”. Poor thing. In this video I will try to convince you that the problem with quantum mechanics isn’t that it’s weird. The problem with quantum mechanics is chaos. And that’s what we’ll talk about today.

Saturn has 82 moons. This is one of them, its name is Hyperion. Hyperion has a diameter of about 200 kilometers and its motion is chaotic. It’s not the orbit that’s chaotic, it’s the orientation of the moon on that orbit.

It takes Hyperion about 3 weeks to go around Saturn once, and about 5 days to rotate about its own axis. But the orientation of the axis tumbles around erratically every couple of months. And that tumbling is chaotic in the technical sense. Even if you measure the position and orientation of Hyperion to utmost precision, you won’t be able to predict what the orientation will be a year later.

Hyperion is a big headache for physicists. Not so much for astrophysicists. Hyperion’s motion can be understood, if not predicted, with general relativity or, to good approximation, with Newtonian dynamics and Newtonian gravity. These are all theories which do not have quantum properties. Physicists call such theories without quantum properties “classical”.

But Hyperion is a headache for those who think that quantum mechanics is really the way nature works. Because quantum mechanics predicts that Hyperion’s chaotic motion shouldn’t last longer than about 20 years. But it has lasted much longer. So, quantum mechanics has been falsified.

Wait what? Yes, and it isn’t even news. That quantum mechanics doesn’t correctly reproduce the dynamics of classical, chaotic systems has been known since the 1950s. The particular example with the moon of Saturn comes from the 1990s. (For details see here or here.)

The origin of the problem isn’t all that difficult to see. If you remember, in quantum mechanics we describe everything with a wave-function, usually denoted psi. There aren’t just wave-functions for particles. In quantum mechanics there’s a wave-function for everything: atoms, cats, and also moons.

You calculate the change of the wave-function in time with the Schrödinger equation, which looks like this. The Schrödinger equation is linear, which just means that no products of the wave-function appear in it. You see, there’s only one Psi on each side. Systems with linear equations like this don’t have chaos. To have chaos you need non-linear equations.
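For readers who can’t see the animation, the equation in its usual form is

$$ i\hbar\,\frac{\partial \psi}{\partial t} \;=\; \hat{H}\,\psi, $$

where H with the hat is the Hamiltonian operator, which encodes the energy of the system; note that psi indeed appears only to the first power on each side.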

But quantum mechanics is supposed to be a theory of all matter. So we should be able to use quantum mechanics to describe large objects, right? If we do that, we should just find that the motion of these large objects agrees with the classical non-quantum behavior. This is called the “correspondence principle”, a name that goes back to Niels Bohr.

But if you look at a classical chaotic system, like this moon of Saturn, the prediction you get from quantum mechanics only agrees with that from classical Newtonian dynamics for a certain period of time, known as the “Ehrenfest time”. Within this time, you can actually use quantum mechanics to study chaos. This is what quantum chaos is all about. But after the Ehrenfest time, quantum mechanics gives you a prediction that just doesn’t agree with what we observe. It would predict that the orientations of Hyperion don’t tumble around but instead blur out until they’re so blurred you wouldn’t notice any tumbling. Basically the chaos gets washed away in quantum uncertainty.
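For a classically chaotic system, the standard estimate for this time scale is roughly

$$ t_{\text{Ehrenfest}} \;\sim\; \frac{1}{\lambda}\,\ln\!\left(\frac{S}{\hbar}\right), $$

where lambda is the Lyapunov exponent of the chaotic motion and S a characteristic action of the system. Because the dependence on S over hbar is only logarithmic, even an astronomically large object like Hyperion ends up with an Ehrenfest time of merely decades, which is where the roughly 20 years mentioned above comes from.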

It seems to me that some of you are a little skeptical. It can’t possibly be that physicists have known of this problem for 60 years and just ignored it? Indeed, they haven’t exactly ignored it. They have come up with an explanation which goes like this.

Hyperion may be far away from us and not much is going on there, but it still interacts with dust and with light or, more precisely, with the quanta of light called “photons”. These are each really tiny interactions, but there are a lot of them. And they have to be added to the Schrödinger equation of the moon.

What these tiny interactions do is that they entangle the moon with its environment, with the dust and the light. This means that each time a grain of dust bumps into the moon, this very slightly changes some part of the moon’s wave-function, and afterwards they are both correlated. This correlation is the entanglement. And those little bumps slightly shift the crest and troughs of parts of the wave-function.

This is called “decoherence” and it’s just what the Schrödinger equation predicts. And this equation is still linear, so all those interactions don’t solve the problem that the prediction doesn’t agree with observation. The solution to the problem comes in the second step of the argument. Physicists now say, okay, so we have this wave-function for the moon with this huge number of entangled dust grains and photons. But we don’t know exactly what this dust is or where it is or what the photons do and so on. So we do what we always do if we don’t know the exact details: We make guesses about what the details could plausibly be and then we average over them. And that average agrees with what classical Newtonian dynamics predicts.

So, physicists say, all is good! But there are two problems with this explanation. One is that it forces you to accept that in the absence of dust and light a moon will not follow Newton’s law of motion.

Ok, well, you could say that in this case you can’t see the moon either so for all we can tell that might be correct.

The more serious problem is that taking an average isn’t a physical process. It doesn’t change anything about the state that the moon is in. It’s still in one of those blurry quantum states that are now also entangled with dust and photons, you just don’t know exactly which one.

To see the problem with the argument, let me use an analogy. Take a classical chaotic process like throwing a die. The outcome is an integer from 1 to 6, and if you average over many throws then the average value per throw is 3.5. Just exactly which outcome you get is determined by a lot of tiny details like the positions of air molecules and the surface roughness and the motion of your hand and so on.

Now suppose I write down a model for the die. My model says that the outcome of throwing the die is either 106 or -99 each with probability 1/2. Wait, you say, there’s no way throwing a die will give you minus 99. Look, I say, the average is 3.5, all is good. Would you accept this? Probably not.
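A two-line sanity check of the two “models”, both of which reproduce the average but only one of which reproduces the individual outcomes:

```python
real_die   = [1, 2, 3, 4, 5, 6]   # actual outcomes, each with probability 1/6
fake_model = [106, -99]           # made-up outcomes, each with probability 1/2

print(sum(real_die) / len(real_die))       # 3.5
print(sum(fake_model) / len(fake_model))   # 3.5 -- same average, absurd individual outcomes
```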

Clearly for the model to be correct it shouldn’t just get the average right, but each possible individual outcome should also agree with observations. And throwing a die doesn’t give minus 99 any more than a big blurry rock entangled with a lot of photons agrees with our observations of Hyperion.

Ok but what’s with the collapse of the wave-function? When we make a measurement, then the wave-function changes in a way that the Schrödinger equation does not predict. Whatever happened to that?

Exactly! In quantum mechanics we use the wave-function to make probabilistic predictions. Say, an electron hits either the left or right side of a screen with 50% probability each. But then when we measure the electron, we know it’s, say, left with 100% probability.

This means after a measurement we have to update the wave-function from 50-50 to 100-0. Importantly, what we call a “measurement” in quantum mechanics doesn’t actually have to be done by a measurement device. I know it’s an awkward nomenclature, but in quantum mechanics a “measurement” can happen just by interaction with a lot of particles. Like grains of dust, or photons.

This means, Hyperion is in some sense constantly being “detected” by all those small particles. And the update of the wave-function is indeed a non-linear process. This neatly resolves the problem: Hyperion correctly tumbles around on its orbit chaotically. Hurray.

But here’s the thing. This only works if the collapse of the wave-function is a physical process. Because you have to actually change something about that blurry quantum state of the moon for it to agree with observations. But the vast majority of physicists today think the collapse of the wave-function isn’t a physical process. Because if it was, then it would have to happen instantaneously everywhere.

Take the example of the electron hitting the screen. When the wave-function arrives on the screen, it is spread out. But when the particle appears on one side of the screen, the wave-function on the other side of the screen must immediately change. Likewise, when a photon hits the moon on one side, then the wave-function of the moon has to change on the other side, immediately.

This is what Einstein called “spooky action at a distance”. It would break the speed of light limit. So, physicists said, the measurement is not a physical process. We’re just accounting for the knowledge we have gained. And there’s nothing propagating faster than light if we just update our knowledge about another place.

But the example with the chaotic motion of Hyperion tells us that we need the measurement collapse to actually be a physical process. Without it, quantum mechanics just doesn’t correctly describe our observations. But then what is this process? No one knows. And that’s the problem with quantum mechanics.

Saturday, May 21, 2022

The closest we have to a Theory of Everything

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


In English they talk about a “Theory of Everything”. In German we talk about the “Weltformel”, the world-equation. I’ve always disliked the German expression. That’s because equations in and of themselves don’t tell you anything. Take for example the equation x=y. That may well be the world-equation, the question is just what’s x and what is y. However, in physics we do have an equation that’s pretty damn close to a “world-equation”. It’s remarkably simple, looks like this, and it’s called the principle of least action. But what’s S? And what’s this squiggle? That’s what we’ll talk about today.
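For readers who can’t see the animation, the equation in question is

$$ \delta S \;=\; 0, $$

where the squiggle is a small Greek delta.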

The principle of least action is an example of optimization where the solution you are looking for is “optimal” in some quantifiable way. Optimization principles are everywhere. For example, equilibrium economics optimizes the distribution of resources, at least that’s the idea. Natural selection optimizes the survival of offspring. If you shift around on your couch until you’re comfortable you are optimizing your comfort. What these examples have in common is that the optimization requires trial and error. The optimization we see in physics is different. It seems that nature doesn’t need trial and error. What happens is optimal right away, without trying out different options. And we can quantify just in which way it’s optimal.

I’ll start with a super simple example. Suppose a lonely rock flies through outer space, far away from any stars or planets, so there are no forces acting on the rock, no air friction, no gravity, nothing. Let’s say you know the rock goes through point A at a time we’ll call t_A and later through point B at time t_B. What path did the rock take to get from A to B?

Well, if no force is acting on the rock it must travel in a straight line with constant velocity, and there is only one straight line connecting the two points, and only one constant velocity that fits the duration. It’s easy to describe this particular path between the two points – it’s the shortest possible path. So the path which the rock takes is optimal in the sense that it’s the shortest.

This is also the case for rays of light that bounce off a mirror. Suppose you know the ray goes from A to B and want to know which path it takes. You find the mirror image of point B, draw the shortest path from A to that image, and then reflect the segment behind the mirror back to the front, which doesn’t change the length of the path. The result is that the angle of incidence equals the angle of reflection, which you probably remember from middle school.

This “principle of the shortest path” goes back to the Greek mathematician Hero of Alexandria in the first century, so not exactly cutting edge science, and it doesn’t work for refraction in a medium, like for example water, because the angle at which a ray of light travels changes when it enters the medium. This means using the length to quantify how “optimal” a path is can’t be quite right.

In 1657, Pierre de Fermat figured out that in both cases the path which the ray of light takes from A to B is that which requires the least amount of time. If there’s no change of medium, then the speed of light doesn’t change and taking the least time means the same as taking the shortest path. So, reflection works as previously.

But if you have a change of medium, then the speed of light changes too. Let us use the previous example with a tank of water, and let us call speed of light in air c_1, and the speed of light in water c_2.

We already know that within either medium the light ray has to take a straight line, because that’s the fastest way to get from one point to another at constant speed. But we don’t yet know the best point for the ray to enter the water so that the total time to get from A to B is the shortest.

But that’s pretty straightforward to calculate. We give names to these distances and calculate the length of each straight segment as a function of the point where the ray enters the water. Divide each length by the speed of light in the respective medium and add the two to get the total travel time.

Now we want to know which is the smallest possible time if we change the point where the ray enters the medium. So we treat this time as a function of x and calculate where it has a minimum, that is, where the first derivative with respect to x vanishes.

The result you get is this. And then you remember that those ratios with square roots here are the sines of the angles. Et voila, Fermat may have said, this is the correct law of refraction. This is known as the principle of least time, or as Fermat’s principle, and it works for both reflection and refraction.
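If you want to check this yourself, here’s a small numerical sketch; the geometry and the two speeds are made-up illustration values, not numbers from the video. It minimizes the travel time over the point x where the ray crosses the surface and confirms that at the minimum the ratio of the sines equals the ratio of the speeds, which is Snell’s law.

```python
# Fermat's principle for refraction: minimize the travel time over the entry point x.
# Illustration values (not from the video): A sits in air above the water surface
# at y = 0, B sits in the water below it.
import numpy as np
from scipy.optimize import minimize_scalar

c1, c2 = 1.0, 0.75            # speed of light in air and in water (arbitrary units)
ax, ay = 0.0, 1.0             # point A
bx, by = 2.0, -1.0            # point B

def travel_time(x):
    """Straight segment from A to (x, 0), then straight segment from (x, 0) to B."""
    return np.hypot(x - ax, ay) / c1 + np.hypot(bx - x, by) / c2

x_opt = minimize_scalar(travel_time, bounds=(ax, bx), method="bounded").x

# Sines of the angles, measured from the normal to the surface
sin1 = (x_opt - ax) / np.hypot(x_opt - ax, ay)
sin2 = (bx - x_opt) / np.hypot(bx - x_opt, by)

print(sin1 / sin2, c1 / c2)   # the two numbers agree: Snell's law
```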

Let us pause here for a moment and appreciate how odd this is. The ray of light takes the path that requires the least amount of time. But how does the light know it will enter a medium before it gets there, so that it can pick the right place to change direction? It seems like the light needs to know something about the future. Crazy.

It gets crazier. Let us go back to the rock, but now we do something a little more interesting, namely throw the rock in a gravitational field. For simplicity let’s say the gravitational potential energy is just proportional to the height, which it is to good precision near the surface of the earth. Again I tell you the particle goes from point A at time t_A to point B at time t_B. In this case the principle of least time doesn’t give the right result.

But in the early 18th century, the French mathematician Maupertuis figured out that the path which the rock takes is still optimal in some other sense. It’s just that we have to calculate something a little more difficult. We have to take the kinetic energy of the particle, subtract the potential energy and integrate this over the path of the particle.

This expression, the time-integral over the kinetic minus potential energy is the “action” of the particle. I have no idea why it’s called that way, and even less do I know why it’s usually abbreviated S, but that’s how it is. This action is the S in the equation that I showed at the very beginning.
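Written out, with E_kin the kinetic and E_pot the potential energy, this is

$$ S = \int_{t_A}^{t_B} \big( E_{\rm kin} - E_{\rm pot} \big)\, dt . $$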

The thing is now that the rock always takes the path for which the action has the smallest possible value. You see, to keep this integral small you can either try to make the kinetic energy small, which means keeping the velocity small, or you make the potential energy large, because that enters with a minus.

But remember you have to get from A to B in a fixed time. If you make the potential energy large, the particle has to go high up, but then it has a longer path to cover, so the velocity needs to be high, and that means the kinetic energy is high. If on the other hand you keep the kinetic energy low, the particle stays low, and then the potential energy doesn’t subtract much. So if you want to minimize the action you have to balance the two against each other: keep the kinetic energy small but make the potential energy large.

The path that minimizes the action turns out to be a parabola, as you probably already knew, but again note how weird this is. It’s not that the rock actually tries all possible paths. It just gets on its way and takes the best one on the first try, as if it knew what’s coming before it gets there.
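You can also see this numerically. The following sketch uses made-up illustration values (a 1-kilogram rock that starts and ends at height zero, 2 seconds apart): it chops the path into a list of heights at fixed times, minimizes the discretized action over those heights, and compares the result with the familiar parabola.

```python
# Discretized principle of least action for a rock in a uniform gravitational field.
# Endpoints are fixed: height 0 at t_A = 0 and height 0 at t_B = 2 (illustration values).
import numpy as np
from scipy.optimize import minimize

m, g = 1.0, 9.81              # mass in kg, gravitational acceleration in m/s^2
t_A, t_B, n = 0.0, 2.0, 200   # start time, end time, number of time steps
t = np.linspace(t_A, t_B, n + 1)
dt = t[1] - t[0]

def action(y_inner):
    """Time integral of kinetic minus potential energy along the discretized path."""
    y = np.concatenate(([0.0], y_inner, [0.0]))   # fixed endpoints at height 0
    v = np.diff(y) / dt                           # velocity on each time step
    kinetic = 0.5 * m * v**2
    potential = m * g * 0.5 * (y[:-1] + y[1:])    # potential energy at the midpoints
    return np.sum((kinetic - potential) * dt)

result = minimize(action, x0=np.zeros(n - 1), method="L-BFGS-B")
y_best = np.concatenate(([0.0], result.x, [0.0]))

y_parabola = 0.5 * g * t * (t_B - t)              # the textbook solution
print(np.max(np.abs(y_best - y_parabola)))        # small: the least-action path reproduces the parabola
```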

What’s this squiggle in the principle of least action? Well, if we want to calculate which path is the optimal path, we do this similarly to how we calculate the optimum of a curve. At the optimum of a curve, the first derivative with respect to the variable of the function vanishes. To find the optimal path, we instead take the derivative of the action with respect to the path and again ask where it vanishes. And this is what the squiggle means. It’s a sloppy way of saying: take the derivative with respect to the path. That derivative has to vanish, which means the action is optimal, and it is usually a minimum, hence the principle of least action.

Okay, you may say but you don’t care all that much about paths of rocks. Alright, but here’s the thing. If we leave aside quantum mechanics for a moment, there’s an action for everything. For point particles and rocks and arrows and that stuff, the action is the integral over the kinetic energy minus potential energy.

But there is also an action that gives you electrodynamics. And there’s an action that gives you general relativity. In each of these cases, if you ask what the system must do to give you the least action, then that’s what actually happens in nature. You can also get the principle of least time and of the shortest path back out of the least action in special cases.

And yes, the principle of least action really does use an integral into the future. How do we explain that?

Well. It turns out that there is another way to express the principle of least action. One can mathematically show that the path which minimizes the action is that path which fulfils a set of differential equations which are called the Euler-Lagrange Equations.

For example, the Euler-Lagrange Equations of the rock example just give you Newton’s second law. The Euler-Lagrange Equations for electrodynamics are Maxwell’s equations, the Euler-Lagrange Equations for General Relativity are Einstein’s field equations. And in these equations, you don’t need to know anything about the future. So you can make this future dependence go away.
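For the rock, this is the standard textbook step. With the Lagrangian $L = \tfrac{1}{2} m \dot{x}^2 - V(x)$, the Euler-Lagrange equation

$$ \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0 \quad\Longrightarrow\quad m\ddot{x} = -\frac{\partial V}{\partial x}, $$

which is just Newton’s second law: mass times acceleration equals force.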

What about quantum mechanics? In quantum mechanics, the principle of least action works somewhat differently. In this case a particle doesn’t just take the one optimal path. It actually takes all paths. Each of these paths has its own action. It’s not only that the particle takes all paths, it also goes to all possible endpoints. But if you eventually measure the particle, the wave-function “collapses”, and the particle is only in one point. This means that these paths really only tell you the probability for the particle to go one way or another. You calculate the probability for the particle to arrive at one point by summing over all paths that go there.
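Schematically, the amplitude to get from A to B is a sum over all paths, each weighted by its action, with $\hbar$ the reduced Planck constant; the probability is the absolute square of this amplitude:

$$ \mathcal{A}(A \to B) \;\propto\; \sum_{\text{paths from } A \text{ to } B} e^{\, i S[\text{path}]/\hbar} . $$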

This interpretation of quantum mechanics was introduced by Richard Feynman and is therefore now called the Feynman path integral. What happens with the strange dependence on the future in the Feynman path integral? Well, technically it’s there in the mathematics. But to do the calculation you don’t need to know what happens in the future, because the particle goes to all points anyway.

Except, hmm, it doesn’t. In reality it goes to only one point. So maybe the reason we need the measurement postulate is that we don’t take this dependence on the future which we have in the path integral seriously enough.

The Superdetermined Workshop finally took place

In case you’re still following this blog, I think I owe you an apology for the silence. I keep thinking I’ll get back to posting more than the video scripts but there just isn’t enough time in the day. 

Still, I’m just back from Bonn, where our workshop on Superdeterminism and Retrocausality finally took place. And since I told you how this story started three years ago I thought I’d tell you today how it went.

Superdeterminism and Retrocausality are approaches to physics beyond quantum mechanics, at least that’s how I think about it – and that already brings us to the problem: we don’t have an agreed-upon definition for these terms. Everyone is using them in a slightly different way and it’s causing a lot of confusion. 

So one of the purposes of the workshop was to see if we can bring clarity into the nomenclature. The other reason was to bring in experimentalists, so that the more math-minded among us could get a sense of what tests are technologically feasible.

I did the same thing 15 years ago with the phenomenology of quantum gravity, on which I organized a series of conferences (if you’ve followed this blog for a really long time you’ll remember). This worked out beautifully – the field of quantum gravity phenomenology is in much better condition today than it was 20 years ago.

It isn’t only that I think we’ll quite possibly see experimental confirmation (or falsification!) of quantum gravity in the next decade or two, because I thought that’d be possible all along. Much more important is that the realization that it’s possible to test quantum gravity (without building a Milky-Way sized particle collider) is slowly sinking into the minds of the community, so something is actually happening.

But, characteristically, the moment things started moving I lost interest in the whole quantum gravity thing and moved on to attack the measurement problem in quantum mechanics. I have a lot of weaknesses, but lack of ambition isn’t one of them.

The workshop was originally scheduled to take place in Cambridge in May 2020. We picked Cambridge because my one co-organizer, Huw Price, was located there, the other one, Tim Palmer, is in Oxford, and both places collect a decent number of quantum foundations people. We had the room reserved, had the catering sorted out, and had begun to book hotels. Then COVID happened and we had to cancel everything at the last minute. We tentatively postponed the meeting to late 2020, but that didn’t come into being either.

Huw went to Australia, and by the time the pandemic was tapering out, he’d moved on to Bonn. We moved the workshop with him to Bonn, more specifically to a place called the International Center for Philosophy. Then we started all over again.

We didn’t want to turn this workshop into an online event because that’d have defeated the purpose. There are few people working on superdeterminism and retrocausality and we wanted them to have a chance to get to personally know each other. Luckily our sponsor, the Franklin Fetzer Fund, was extremely supportive even though we had to postpone the workshop twice and put up with some cancellation fees.

Of course the pandemic isn’t quite over and several people still have travel troubles. In particular, it turned out there’s a nest of retrocausalists in Australia and they were more or less stuck there. Traveling from China is also difficult at the moment. And we had a participant affiliated with a Russian university who had difficulties traveling for yet another reason. The world is in many ways a different place now than it was 2 years ago.

One positive thing that’s come out of the pandemic though is that it’s become much easier to set up zoom links and live streams and people are more used to it. So while we didn’t have remote talks, we did have people participating from overseas, from Australia, China, and Canada. It worked reasonably well, leaving aside the usual hiccups: remote participants sometimes couldn’t see or hear, the zoom event expired when it shouldn’t have, and so on.

I have organized a lot of workshops and conferences and I have attended even more of them. This meeting was special in a way I didn’t anticipate. Many of the people who are working on superdeterminism and retrocausality have for decades been met with a mix of incredulity, ridicule, and insults. In fact, you might have seen this play out with your own eyes in the comment sections of this and other blogs. For many of us, me included, this was the first time we had an audience who took our work seriously.

All of this talk about superdeterminism and new physics beyond quantum mechanics may turn out to be complete rubbish of course. But at least at present I think it’s the most promising route to make progress in the foundations of physics. The reason is quite simple: If it’s right, then new physics should appear in a parameter range that we can experimentally access from two sides, by making measuring devices smaller, and by bringing larger objects into quantum states. And by extrapolating the current technological developments, we'll get there soon enough anyway. The challenge is now to figure out what to look for when the data come in.

The talks from the workshop were recorded. I will post a link when they appear online. We’re hoping to produce a kind of white paper that lays out the terminology that we can refer to in the future. And I am working on a new paper in which I try to better explain why I think that either superdeterminism or retrocausality is almost certainly correct. So this isn’t the end of the story, it’s just the beginning. Stay tuned. 

Friday, May 13, 2022

Can we make a black hole? And if we could, what could we do with it?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Wouldn’t it be cool to have a little black hole in your office? You know, maybe as a trash bin. Or to move around the furniture. Or just as a kind of nerdy gimmick. Why can we not make black holes? Or can we? If we could, what could we do with them? And what’s a black hole laser? That’s what we’ll talk about today.

Everything has a gravitational pull, the sun and earth but also you and I and every single molecule. You might think that it’s the mass of the object that determines how strong the gravitational pull is, but this isn’t quite correct.

If you remember Newton’s gravitational law, then, sure, a higher mass means a higher gravitational pull. But a smaller radius also means a higher gravitational pull. So, if you hold the mass fixed and compress an object into a smaller and smaller radius, then the gravitational pull gets stronger. Eventually, it becomes so strong that not even light can escape. You’ve made a black hole.

This happens when the mass is compressed inside a radius known as the Schwarzschild-radius. Every object has a Schwarzschild radius, and you can calculate it from the mass. For the things around us the Schwarzschild-radius is much much smaller than the actual radius.

For example, the actual radius of earth is about 6000 kilometers, but the Schwarzschild-radius is only about 9 millimeters. Your actual radius is maybe something like a meter, but your Schwarzschild radius is about 10 to the minus 24 meters, that’s about a billion times smaller than a proton.

And the Schwarzschild radius of an atom is about 10 to the minus 53 meters, that’s even smaller than the Planck length which is widely regarded to be the smallest possible length, though I personally think this is nonsense, but that’s a different story.
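If you want to check these numbers, here’s a quick sketch using the Schwarzschild radius formula r_s = 2GM/c². The masses for the person and the atom are rough order-of-magnitude assumptions on my part.

```python
# Schwarzschild radius r_s = 2 G M / c^2 for a few example masses.
G = 6.674e-11        # gravitational constant in m^3 kg^-1 s^-2
c = 2.998e8          # speed of light in m/s

masses = {
    "Earth":    5.97e24,   # kg
    "a person": 75.0,      # kg (rough assumption)
    "an atom":  1e-26,     # kg (order of magnitude for a light atom)
}

for name, mass in masses.items():
    print(name, 2 * G * mass / c**2, "m")
# Earth comes out at about 9 millimeters, the atom at roughly 1e-53 meters.
```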

So the reason we can’t just make a black hole is that the Schwarzschild radius of stuff we can handle is tiny, and it would take a lot of energy to compress matter sufficiently. It happens out there in the universe because if you have really huge amounts of matter with little internal pressure, like burned-out stars, then gravity compresses the matter for you. But we can’t do this ourselves down here on earth. It’s basically the same problem as making nuclear fusion work, just many orders of magnitude more difficult.

But wait. Einstein said that mass is really a type of energy, and energy also has a gravitational pull. Yes, that guy again. Doesn’t this mean that if we want to create a black hole, we can just speed up particles to really high velocity, so that they have a high energy, and then bang them into each other? For example, hmm, with a really big collider.

Indeed, we could do this. But even the biggest collider we have built so far, which is currently the Large Hadron Collider at CERN, is nowhere near reaching the required energy to make a black hole. Let’s just put in the numbers.

In the collisions at the LHC we can reach energies of about 10 TeV, which corresponds to a Schwarzschild radius of about 10 to the minus 50 meters. But the region into which the LHC compresses this energy is more like 10 to the minus 19 meters across. We’re far, far away from making a black hole.
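The same back-of-the-envelope estimate works for a collision energy, using E = mc², so r_s = 2GE/c⁴:

```python
# Schwarzschild radius for a collision energy of 10 TeV (using m = E / c^2).
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
E = 10e12 * 1.602e-19    # 10 TeV converted to joules

print(2 * G * E / c**4)  # about 3e-50 m, far below the ~1e-19 m the LHC probes
```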

So why were people so worried 10 years ago that the LHC might create a black hole? This is only possible if gravity doesn’t work the way Einstein said. If gravity for whatever reason would be much stronger on short distances than Einstein’s theory predicts, then it’d become much easier to make black holes. And 10 years ago the idea that gravity could indeed get stronger on very short distances was popular for a while. But there’s no reason to think this is actually correct and, as you’ve noticed, the LHC didn’t produce any black holes.

Alright, so far it doesn’t sound like you’ll get your black hole trash can. But what if we built a much bigger collider? Yes, well, with current technology it’d have to have a diameter about the size of the Milky Way. It’s not going to happen. Is there something else we can do?

We could try to focus a lot of lasers on a point. If we used the world’s currently most powerful lasers and focused them on an area about 1 nanometer wide, we’d need about 10 to the 37 of those lasers. It’s not strictly speaking impossible, but clearly it’s not going to happen any time soon.  

Ok, good, but what if we could make a black hole? What could we do with it? Well, surprise, there’s a couple of problems. Black holes have a reputation for sucking stuff in, but actually if they’re small, the problem is the opposite. They throw stuff out. That stuff is Hawking radiation. 

Stephen Hawking discovered in the early 1970s that all black holes emit radiation due to quantum effects, so they lose mass and evaporate. The smaller the black holes, the hotter, and the faster they evaporate. A black hole with a mass of about 100 kilograms would entirely evaporate in less than a nanosecond.
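For the curious, here’s the arithmetic behind that last number. The sketch uses the standard evaporation-time formula for a black hole that radiates only photons, t = 5120 π G² M³ / (ħ c⁴); including more particle species would only make the lifetime shorter.

```python
# Evaporation time of a black hole of mass M, assuming it radiates only photons.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
hbar = 1.055e-34     # J s

def evaporation_time(mass_kg):
    """Lifetime in seconds: t = 5120 * pi * G^2 * M^3 / (hbar * c^4)."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

print(evaporation_time(100.0))   # about 8e-11 seconds, i.e. less than a nanosecond
```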

Now “evaporation” sounds rather innocent and might make you think of a puddle turning into water vapor. But for the black hole it’s far from innocent. If the black hole’s temperature is high, the radiation is composed of all elementary particles: photons, electrons, quarks, and so on. It’s really unhealthy. And a small black hole converts energy into a lot of those particles very quickly. This means a small black hole is basically a bomb. So it wouldn’t quite work out the way it looks in the Simpsons clip. Rather than eating up the city, it’d blast it apart.

But if you managed to make a black hole with a mass of about a million tons, it would live a few years, so that’d make more sense. Hawking suggested surrounding such black holes with mirrors and using them to generate power. It’d be very climate friendly, too. Louis Crane suggested putting such a medium-sized black hole in the focus of a half mirror and using its radiation to propel a spaceship.

A slight problem with this is that you can’t touch black holes, so there’s nothing to hold them with. A black hole isn’t really a thing, it’s just strongly curved space. Black holes can be electrically charged, but since they radiate they’ll shed their electric charge quickly, and once they’re neutral again electric fields won’t hold them. So there are some engineering challenges that remain to be solved.

What if we don’t make a black hole but just use one that’s out there? Are those good for anything? The astrophysical black holes which we know exist are very heavy. This means their Hawking temperature is very small, so small indeed that we can’t measure it, as I just explained in a recent video. But if we could reach such a black hole it might be useful for something else.

Roger Penrose already pointed out in the early 1970s that it’s possible to extract energy from a big, spinning black hole by throwing an object just past it. This slows down the black hole by a tiny bit, but speeds up the object you’ve thrown. So energy is conserved in total, but you get something out of it. It’s a little like a swing-by that’s used in space-flight to speed up space missions by using a path that goes by near a planet.

And that too can be used to build a bomb… This was pointed out in 1972 in a letter to Nature by Press and Teukolsky. They said, look, we’ll take the black hole, surround it with mirrors, and then we send in a laser beam, just past the black hole. That gets bent around and comes back with a somewhat higher energy, like Penrose said. But then it bounces off the mirror, goes around the black hole again, gains a little more energy, and so on. This exponentially increases the energy in the laser light until the whole thing blasts apart.

Ok, so now that we’ve talked about blowing things up with bombs that we can’t actually build, let us talk about something that we can actually build, which is called an analogue black hole. The word “analogue” refers to “analogy” and not to the opposite of digital. Analogue black holes are simulations of black holes in fluids or solids where you can “trap” some kind of radiation.

In some cases, what you trap are sound waves in a fluid, rather than light. I should add here that “sound waves” in physics don’t necessarily have anything to do with what you can hear. They are just periodic density changes, like the sound you can hear, but not necessarily something your ears can detect.

You can trap sound waves in a similar way to how a black hole traps light. This can happen if a fluid flows faster than the speed of sound in that fluid. You see, in this case there’s some region from within which the sound waves can’t escape.

Those fluids aren’t really black holes of course, they don’t actually trap light. But they affect sound very much like real black holes affect light. If you want to observe Hawking radiation in such fluids, they need to have quantum properties, so in practice one uses superfluids. Another way to create a black hole analogue is with solids in which the speed of light changes from one place to another.

And those analogue black holes can be used to amplify radiation too. It works a little differently from the amplification we already discussed because one needs two horizons, but the outcome is pretty much the same: you send in radiation with some energy, and get out radiation with more energy. Of course the total energy is conserved; you take it from the background field, which is the analogue of the black hole. The radiation which you amplify isn’t necessarily light, as I said it could be sound waves, but it’s “amplified stimulated emission”, which is why this is called a black hole laser.

Black hole lasers aren’t just a theoretical speculation. It’s reasonably well confirmed that analogue black holes actually act much like real black holes and do indeed emit Hawking radiation. And there have been claims that black hole lasing has been observed as well. It has remained somewhat controversial exactly what the experiment measured, but either way it shows that black hole lasers are within experimental reach. They’re basically a new method to amplify radiation. This isn’t going to result in new technology in the near future, but it serves to show that speculations about what we could do with black holes aren’t as far removed from reality as you may have thought.

Saturday, May 07, 2022

How Bad is Diesel?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


I need a new car, and in my case “new” really means “used”. I can’t afford one of the shiny new electric ones, so it’s either gasoline or diesel. But in recent years we’ve seen a lot of bad headlines about diesel. Why do diesel engines have such a bad reputation? How much does diesel exhaust affect our health really? And what’s the car industry doing about it? That’s what we will talk about today.

In September 2015, news broke about the Volkswagen emissions scandal, sometimes referred to as Dieselgate. It turned out Volkswagen had equipped cars with a special setting for emission tests, so that they would create less pollution during the test than on the road. Much of the world seems to have been shocked that the allegedly accurate and efficient Germans could possibly have done such a thing. I wasn’t really surprised. Let me tell you why.

My first car was a little red Ford Fiesta near the end of its life. For the emissions test I used to take it to a cheap repair shop in the outskirts of a suburb of a suburb. There was no train station and really nothing else nearby, so I’d usually just wait there. One day I saw the guy from the shop fumbling around on the engine before the emissions test and asked him what he was doing. Oh, he said, he’s just turning down the engine so it’ll pass the test. But with that setting the car wouldn’t drive properly, so later he’ll turn it up again.

Well, I thought, that probably wasn’t the point of the emissions test. But I didn’t have money for a better car. When I heard the news about the Volkswagen scandal, that made total sense to me. Of course the always efficient Germans would eventually automate this little extra setting for the emissions test.

But why is diesel in particular so controversial? Diesel and gasoline engines are similar in that they’re both internal combustion engines. In these engines, fuel is ignited which moves a piston, so it converts chemical energy into mechanical energy.

The major difference between diesel and gasoline is the way these explosions happen. In a gasoline engine, the fuel is mixed with air, compressed by pistons and ignited by sparks from spark plugs. In a diesel engine, the air is compressed first which heats it up. Then the fuel is injected into the hot air and ignites. 

One advantage of diesel engines is that they don’t need a constant ignition spark. You just have to get them going once and then they’ll keep on running. Another advantage is that the energy efficiency is about thirty percent higher than that of gasoline engines. They also have lower carbon dioxide emissions per kilometer. For this reason, they were long considered environmentally preferable.

The disadvantage of diesel engines is that the hotter and more compressed gas produces more nitrogen oxides and more particulates. And those are a health hazard.

Nitrogen oxides are combinations of one nitrogen atom and one or several oxygen atoms. The most prevalent ones in diesel exhaust are nitric oxide (NO) and nitrogen dioxide (NO2). When nitrogen dioxide is hit by sunlight it can split off an oxygen atom, which then creates ozone by joining an O2 molecule in the air. Many studies have shown that breathing in ozone or nitrogen oxides irritates the airways and worsens respiratory illness, especially asthma.

It’s difficult to find exact numbers for comparing the nitrogen oxide emissions of diesel and gasoline cars, because they depend strongly on the make and model of the car, the road conditions, how long the car’s been driving, and so on.

A road test on 149 diesel and gasoline cars manufactured from 2013 to 2016 found that Nitrogen oxide emissions from diesel cars are about a factor ten higher than those of gasoline cars.

This is nicely summarized in this figure, where you can see why this discussion is so heated. Diesel has on average lower carbon-dioxide emission but higher emissions of nitrogen oxides, gasoline cars the other way round. However, you also see that there are huge differences between the cars. You can totally find diesel engines that are lower in both emissions than some gasoline cars. Also note the two hybrid cars which are low on both emissions.

The other issue with diesel emissions are the particulates, basically tiny grains. Particulates are classified by their size, usually abbreviated with PM for ‘particulate matter’ and then a number which tells you their maximal size in micrometers. For example, PM2.5 stands for particulates of size 2.5 micrometers or smaller.

This classification is somewhat confusing because technically PM 10 includes PM2.5. But it makes sense if you know that regulations put bounds on the total amount of particulates in a certain class in terms of weight, and most of the weight in some size classification comes from the largest particles.

So a PM10 limit will for all practical purposes just affect the largest of those particles. To reduce the smaller ones, you then add another limit for, say PM2.5.

Diesel particulates are made of soot and ash from incomplete burning of the fuel, but also of abrasion from the engine parts, which contributes metals, sulfates, and silicates. Diesel engines generate up to 100 times more particulate mass than similar-sized petrol engines.

What these particulates do depends strongly on their size. PM10 particles tend to settle to the ground by gravity in a matter of hours whereas PM0.1 can stay in the atmosphere for weeks and are then mostly removed by precipitation. The numbers strongly depend on weather conditions.

When one talks about the amount of particulate matter in diesel exhaust one has to be very careful about exactly how one quantifies it. Most of the *mass* of particulate matter in diesel exhaust is in the range of about a tenth of a micrometer. But most of the particles are about a factor of ten smaller in size. It’s just that since they’re so much smaller they don’t carry much total mass.

This figure (p 157) shows the typical distribution of particulate matter in diesel exhaust. The brown dotted line is the distribution of mass. As you can see, it peaks somewhat above a tenth of a micrometer, the boundary that defines the PM0.1 class. For comparison, that’s a hundred to a thousand times smaller than pollen. The blue line is the distribution of the number of particles.

As you can see it peaks at a much smaller size, about 10 nanometers. That’s roughly the same size as viruses, so these particulates are really, really tiny, you can’t see them by eye. The green curve shows yet something else, it’s the surface of those particles. The surface is important because it determines how much the particles can interact with living tissue.  
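The shift between the number peak and the mass peak is just geometry: a particle’s mass grows with the cube of its diameter. Here’s a toy sketch that makes this explicit; the lognormal shape and its width are made-up illustration values, not the measured diesel distribution.

```python
# Toy illustration: number-weighted versus mass-weighted particle size distributions.
# The lognormal parameters are made up for illustration, not fitted to diesel data.
import numpy as np

d = np.logspace(0, 3, 2000)    # particle diameter in nanometers, from 1 nm to 1000 nm
sigma = 0.88                   # geometric width of the lognormal (assumed)

# Number of particles per logarithmic size bin, peaked at 10 nm
number = np.exp(-(np.log(d / 10.0))**2 / (2 * sigma**2))
# Mass per logarithmic size bin: each particle's mass scales like diameter cubed
mass = number * d**3

print(d[np.argmax(number)])    # about 10 nm
print(d[np.argmax(mass)])      # about 100 nm, i.e. a tenth of a micrometer
```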

The distinction between mass, surface, and amounts of particulate matter may seem like nitpicking but it’s really important because regulations are based on them.

What do we know about the health impacts of particulates? The WHO has classified airborne particulates as a Group 1 carcinogen. That they’re in group 1 means that the causal relation has been established. But the damage that those particles can do depends strongly on their size. Roughly speaking, the smaller they are, the more easily they can enter the body and the more damage they can do.

PM10 can get into the lower part of the respiratory system, PM2.5 and smaller can enter the blood through the lungs, and from there it can reach pretty much every organ.

The body does have some defense mechanisms. First there’s the obvious, like coughing and sneezing, but once the stuff’s in the lower lungs it can stay there for months, and if you breathe in new particulates all the time, the lungs don’t get a chance to clear out. In other organs, the immune system tries to attack the particles, but the most prevalent element in these particulates is carbon, which is biopersistent. That means the particles just sit there and accumulate in the tissue.

Here’s a photo of such particulates that have accumulated in bronchial tissue. (Fig 2) It isn’t just that having dirt accumulate in your organs is bad news in itself; the particulates can also carry toxic compounds on their surfaces. According to the WHO, PM2.5 exposure has been linked to an increased risk of heart attacks, strokes, respiratory disease, and premature death [Source (3)].

One key study was published in 2007 by researchers from several American institutions. They followed over 65,000 postmenopausal American women who had no history of diagnosed cardiovascular disease.

They found that a 10 microgram per cubic meter increase in PM2.5 was associated with a 24 percent increase in the risk of a first cardiovascular event and a 76 percent increase in the risk of death from cardiovascular disease, both statistically significant at the 95 percent confidence level. These results were adjusted for known risk factors, such as age, household income, pre-existing conditions, and so on.

A 2013 study that was published in The Lancet followed over 300,000 people from nine European countries for more than a decade. It found that a 5 microgram per cubic meter increase in PM2.5 was correlated with an 18 percent increased risk of developing lung cancer. Again, those results are already adjusted to take into account otherwise known risk factors. The PM exposure adds on top of that.

There’ve been lots of other studies claiming correlations between exposure to particulate matter and all kinds of diseases, though not all of them have great statistics. One even claimed they found a correlation between exposure to particulate pollution and decreasing intelligence, which explains it all, really.

Okay, so far we have seen that diesel exhaust really isn’t healthy. Well, glad we talked about it, but that doesn’t really help me to decide what to do about my car. Let’s then look at what the regulations are and what the car industry has been doing about it.

The World Health Organization has put out guideline values for PM10 and PM2.5, both an annual mean and a daily mean, but as you see in this table, the actual regulations in the EU are considerably higher. In the US people are even more relaxed about air pollution. Australia has some of the strictest air pollution standards, but even those are above what the WHO recommends.

If you want to put these numbers in perspective, you can look up the air quality at your own location on the website iqair.com that’ll tell you the current PM 2.5 concentration. If you live in a major city chances are you’ll find the level frequently exceeds the WHO recommendation.

Of course the reason for this is not just diesel exhaust. In fact, if you look at this recently published map of global air pollution levels, you’ll see that some of the most polluted places are tiny villages in southern Chile and Patagonia. The reason is not that they love diesel so much down there, but that almost everybody heats the house and cooks with firewood.

Indeed, more than half of PM2.5 pollution comes from fuel combustion in industry and households, while road transport accounts for merely about 11 percent. But more than half of the road traffic contribution to particulate matter comes from abrasion, not from exhaust. The additional contribution from diesel exhaust to the total pollution is therefore in the single-digit percent range. Though you have to keep in mind that these are average values; the percentages can be very different in specific locations. These numbers are for the European Union but they are probably similar in the United States and the UK.

And of the fraction coming from diesel, only some share comes from passenger cars; the rest is trucks, which are almost exclusively diesel. Just how the breakdown between trucks and diesel passenger cars looks depends strongly on location.

Nevertheless, diesel exhaust is a significant contribution to air pollution, especially in cities where traffic is dense and there is little airflow. This is why many countries have passed regulations to force car manufacturers to make diesel cleaner.

Europeans have regularly updated their emission standards since 1992. The standards are called Euro 1, Euro 2, and so on, with the current one being Euro 6. The Euro 7 standard is expected for 2025. The idea is that only cars with certain standards are allowed into cities, though each city picks its own standard.

For example, London currently uses Euro 6, Brussels Euro 5, and in Paris the rules change every couple of months and depend on the time of day, so just paying the fee may be less painful than figuring out what you’re supposed to do.

Basically these European standards limit the emissions of carbon dioxide, nitrogen oxides, and particulates, and some other things. (Table) The industry is getting increasingly better at adapting to these restrictions. As a consequence, new diesel cars pollute considerably less than those from one or two decades ago.

One of the most popular ways to make diesel cleaner is filtering the exhaust before it is released into the air. A common type of filter is the cordierite wall-flow filter, which you see in this image. They are very efficient and relatively inexpensive. These filters remove particles of size 100 nanometers and up.

The ones approved by the Environmental Protection Agency in the USA filter out at least 85 percent of particulates, though some reach filtering percentages in the upper 90s. When the filter is “full”, the collected soot is burned off by the engine itself. Remember that most of the particulates get created by incomplete combustion in the first place, so you can in principle burn them again.

However, a consequence of this is that some of the particulates become too small to be caught in the filter and eventually escape. Another downside is that some filters cause an increase in nitrogen oxide emissions when the filter is burned off. Still, the filters do take out a significant fraction of the particulates.

Another measure to reduce pollution is exhaust gas recirculation. This isn’t only used in diesel cars but also in gasoline cars and it works by recirculating a portion of the exhaust gas back to the engine cylinders. This dilutes the oxygen in the incoming air stream and brings down the peak temperature. Since nitrogen oxides are mostly produced at higher temperature, this has the effect of reducing their fraction in the exhaust. But this recirculation has the downside that with the drop of combustion temperature the car drives less efficiently.

These technologies have been around for decades, but since emission regulations have become more stringent, carmakers have pushed their development and integration forward. This worked so well that in 2017 an international team of researchers published a paper in Science magazine in which they claimed that modern gasoline cars produce more carbonaceous particulate matter than modern filter-equipped diesel cars.

What’s carbonaceous? That’s particles which contain carbon, and those make up about 50 percent of the particulates in the emissions. So not all of it but a decent fraction. In the paper they argue that whether gasoline or diesel cars are more polluting depends on what pollutant you look at, the age of the engine and whether it carries a filter or a catalytic converter.

I think what we learn from this is that being strict about environmental requirements and regulations seems to work out pretty well for diesel emissions, and the industry has proved capable of putting their engineers at work and finding solutions. Not all is good, but it’s getting better.

And this has all been very interesting but hasn’t really helped me make up my mind about what car to buy. So what should I do? Let me know what you think in the comments.