Saturday, February 26, 2022

Will the Big Bang repeat?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


This video is about Roger Penrose’s idea for the beginning of the universe and its end, conformal cyclic cosmology, CCC for short. It’s a topic that a lot of you have asked for ever since Roger Penrose won the Nobel Prize in 2020. The reason I’ve put off talking about it is that I don’t enjoy criticizing other people’s ideas, especially if they’re people I personally know. And also, who am I to criticize a Nobel Prize winner, on YouTube of all places?

However, Penrose himself has been very outspoken about his misgivings about string theory and contemporary cosmology, in particular inflation, and so in the end I think it’ll be okay if I tell you what I think about conformal cyclic cosmology. And that’s what we’ll talk about today.

First things first: what does conformal cyclic cosmology mean? I think we’re all good with the word cosmology; it’s a theory for the history of the entire universe, alright. That it’s cyclic means it repeats in some sense. Penrose calls these cycles eons. Each starts with a big bang, but it doesn’t end with a big crunch.

A big crunch would happen when the expansion of the universe changes to a contraction and eventually all the matter is, well, crunched together. A big crunch is like a big bang in reverse. This does not happen in Conformal Cyclic Cosmology. Rather, the history of the universe just kind of tapers out. Matter becomes more and more thinly diluted. And then there’s the word conformal. We need that to get from the thinly diluted end of one eon to the beginning of the next. But what does conformal mean?

A conformal rescaling is a stretching or shrinking that maintains all relative angles. Penrose uses that because you can use a conformal rescaling to make something that has infinite size into something that has finite size.

Here is a simple example of a conformal rescaling. Suppose you have an infinite two-dimensional plane. And suppose you have half of a sphere. Now from every point on the infinite plane, you draw a line to the center of the sphere. At the point where it pierces the sphere, you project that down onto a disk. That way you map every point of the infinite plane into the disk underneath the sphere. A famous example of a conformal rescaling is this image from Escher. Imagine that those bats are all the same size and once filled an infinite plane. In this image they are all squeezed into a finite area.
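To make the hemisphere construction concrete, here is a minimal Python sketch of that projection. It assumes the hemisphere has radius R and is tangent to the plane, with the sphere’s center a height R above the origin (one common way to draw the construction); a point at distance r from the origin then lands at radius R·r/√(r²+R²), so the entire infinite plane is squeezed into a disk of radius R.

```python
import math

def squeeze_to_disk(x, y, R=1.0):
    """Map a point (x, y) of the infinite plane into the disk of radius R.

    Assumed setup: a hemisphere of radius R tangent to the plane, with the
    sphere's center a height R above the origin. The line from (x, y, 0) to
    that center pierces the hemisphere, and the piercing point is dropped
    straight down onto the disk underneath.
    """
    r = math.hypot(x, y)
    scale = R / math.sqrt(r**2 + R**2)   # image radius R*r/sqrt(r**2 + R**2) approaches R as r grows
    return (scale * x, scale * y)

# Points arbitrarily far out on the plane land just inside the disk boundary:
for distance in (0.0, 1.0, 10.0, 1e6):
    print(distance, squeeze_to_disk(distance, 0.0))
```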

Now in Penrose’s case, the infinite thing that you rescale is not just space, but space-time. You rescale them both and then you glue the end of our universe to a new beginning. Mathematically you can totally do that. But why would you? And what’s with the physics?

Let’s first talk about why you would want to do that. Penrose is trying to solve a big puzzle in our current theories for the universe. It’s the second law of thermodynamics: entropy increases. We see it increase. But if entropy increases, it must have been smaller in the past. Indeed, the universe must have started out with very small entropy, otherwise we just can’t explain what we see. That the early universe must have had small entropy is often called the Past Hypothesis, a term coined by the philosopher David Albert.

Our current theories work perfectly fine with the past hypothesis. But of course it would be better if one didn’t need it, if one instead had a theory from which one could derive it.

Penrose has attacked this problem by first finding a way to quantify the entropy in the gravitational field. He argued already in the 1970s that it’s encoded in the Weyl curvature tensor. That’s, loosely speaking, part of the complete curvature tensor of space-time. This Weyl curvature tensor, according to Penrose, should be very small in the beginning of the universe. Then the entropy would be small and the past hypothesis would be explained. He calls this the Weyl Curvature Hypothesis.

So, instead of the rather vague past hypothesis, we now have a mathematically precise Weyl Curvature Hypothesis. Like the entropy, the Weyl curvature would start out very small and then increase as the universe gets older. This goes along with the formation of bigger structures like stars and galaxies.

That leaves the question of how you get the Weyl curvature to be small. Here’s where the conformal rescaling kicks in. You take the end of a universe where the Weyl curvature is large, you rescale it, which makes the curvature very small, and then you postulate that this is the beginning of a new universe.

Okay, so that explains why you may want to do that, but what’s with the physics? The reason why this rescaling works mathematically is that in a conformally invariant universe there’s no meaningful way to talk about time. It’s like if I show you a piece of the Koch snowflake and ask if that’s big or small. These pieces repeat infinitely often so you can’t tell. In CCC it’s the same with time at the end of the universe.

But the conformal rescaling and gluing only works if the universe approaches conformal invariance towards the end of its life. This may or may not be the case. The universe contains massive particles, and massive particles are not conformally invariant. That’s because particles are also waves and massive particles are waves with a particular wavelength. That’s the Compton wave-length, which is inversely proportional to the mass. This is a specific scale, so if you rescale the universe, it will not remain the same.
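Since the argument rests on the Compton wavelength setting a fixed length scale, here is a quick back-of-the-envelope calculation with the textbook formula λ = h/(mc); nothing in it is specific to CCC, it just shows that a mass fixes a length.

```python
# Compton wavelength: lambda = h / (m * c), inversely proportional to the mass.
h = 6.62607015e-34   # Planck constant in J*s
c = 2.99792458e8     # speed of light in m/s

masses_kg = {"electron": 9.1093837015e-31, "proton": 1.67262192369e-27}

for name, m in masses_kg.items():
    print(f"{name}: Compton wavelength = {h / (m * c):.3e} m")

# electron ~2.4e-12 m, proton ~1.3e-15 m: the heavier the particle, the shorter
# the wavelength, and a conformal rescaling does not leave such a length unchanged.
```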

However, the masses of the elementary particles all come from the Higgs field, so if you can somehow get rid of the Higgs at the end of the universe, then the universe would be conformally invariant and everything would work. Or maybe you can think of some other way to get rid of massive particles. And since no one really knows what may happen at the end of the universe anyway, ok, well, maybe it works somehow.

But we can’t test what will happen in a hundred billion years. So how could one test Penrose’s cyclic cosmology? Interestingly, this conformal rescaling doesn’t wash out all the details from the previous eon. Gravitational waves survive because they scale differently than the Weyl curvature. And those gravitational waves from the previous eon affect how matter moves after the big bang of our eon, which in turn leaves patterns in the cosmic microwave background. Indeed, rather specific patterns.

Roger Penrose first said one should look for rings. These rings would come from the collisions of supermassive black holes in the eon before ours. This is pretty much the most violent event one can think of and so should produce a lot of gravitational waves. However, the search for those signals remained inconclusive.

Penrose then found a better observational signature from the earlier eon which he called Hawking points. Supermassive black holes in the earlier eon evaporate and leave behind a cloud of Hawking radiation which spreads out over the whole universe. But at the end of the eon, you do the rescaling and you squeeze all that Hawking radiation together. That carries over into the next eon and makes a localized point with some rings around it in the CMB.

And these Hawking points are actually there. It’s not only Penrose and his people who have found them in the CMB. The thing is though that some cosmologists have argued they should also be there in the most popular model for the early universe, which is inflation. So, this prediction may not be wrong, but it’s maybe not a good way to tell Penrose’s model from others.

Penrose also says that this conformal rescaling requires that one introduces a new field which gives rise to a new particle. He has called this particle the “erebon”, named after Erebos, the god of darkness. The erebons might make up dark matter. They are heavy particles with masses of about the Planck mass, so that’s much heavier than the particles astrophysicists typically consider for dark matter. But it’s not ruled out that dark matter particles might be so heavy and indeed other astrophysicists have considered similar particles as candidates for dark matter.

Penrose’s erebons are ultimately unstable. Remember you have to get rid of all the masses at the end of the eon to get to conformal invariance. So Penrose predicts that dark matter should slowly decay. That decay however is so slow that it is hard to test. He has also predicted that there should be rings around the Hawking points in the CMB B-modes which is the thing that the BICEP experiment was looking for. But those too haven’t been seen – so far.

Okay, so that’s my brief summary of conformal cyclic cosmology, now what do I think about it. Mostly I have questions. The obvious thing to pick on is that actually the universe isn’t conformally invariant and that postulating all Higgs bosons disappear or something like that is rather ad hoc. But this actually isn’t my main problem. Maybe I’ve spent too much time among particle physicists, but I’ve seen far worse things. Unparticles, anybody?

One thing that gives me headaches is the physical interpretation: doing a conformal rescaling mathematically is one thing, understanding what it physically means is another thing entirely. You see, just because you can create an infinite sequence of eons doesn’t mean the duration of any eon is now finite. You can totally glue together infinitely many infinitely large space-times if you really want to. Saying that time becomes meaningless doesn’t really explain to me what this rescaling physically does.

Okay, but maybe that’s a rather philosophical misgiving. Here is a more concrete one. If the previous eon leaves information imprinted in the next one, then it isn’t obvious that the cycles repeat in the same way. Instead, I would think, they will generally end up with larger and larger fluctuations that will pass on larger and larger fluctuations to the next eon because that’s a positive feedback. If that was so, then Penrose would have to explain why we are in a universe that’s special for not having these huge fluctuations.

Another issue is that it’s not obvious you can extend these cosmologies back in time indefinitely. This is a problem also for “eternal inflation.” Eternal inflation is really eternal only into the future. It has a finite past. You can calculate this just from the geometry. In a recent paper, Kinney and Stein showed that a model of cyclic cosmology put forward by Ijjas and Steinhardt has the same problem. The cycles might go on infinitely, alright, but only into the future, not into the past. It’s not clear at the moment whether this is also the case for conformal cyclic cosmology. I don’t think anyone has looked at it.

Finally, I am not sure that CCC actually solves the problem it was supposed to solve. Remember we are trying to explain the past hypothesis. But a scientific explanation shouldn’t be more difficult than the thing you’re trying to explain. And CCC requires some assumptions, about the conformal invariance and the erebons, that at least to me don’t seem any better than the past hypothesis.

Having said that, I think Penrose’s point that the Weyl curvature in the early universe must have been small is really important and it hasn’t been appreciated enough. Maybe CCC isn’t exactly the right conclusion to draw from it, but it’s a mathematical puzzle that in my opinion deserves a little more attention.

An update on the status of superdeterminism with some personal notes

In December I put out a video on superdeterminism that many of you had asked for. I hesitated with this for a long time. As you have undoubtedly noticed, I don’t normally do videos about my own research. This is because I can’t lay out all the ifs and buts in a 10-minute video, and that makes it impossible to meet my own scientific standards.

I therefore eventually decided to focus the video on the most common misunderstandings about superdeterminism, which is (a) that superdeterminism has something to do with free will and (b) that it destroys science. I sincerely hope that after my video we can lay these two claims to rest.

However, on the basis of this video, a person by the name of Bernardo Kastrup chose to criticize my research. He afterwards demanded on Twitter that we speak together about his criticism. I initially ignored him for several reasons.

First of all, he got things wrong pretty much as soon as he started writing, showing that he either didn’t read my papers or didn’t understand them. Second, a lot of people pick on me because they want to draw attention to themselves and that’s a game I’m not willing to take part in. 

Third, Kastrup has written a bunch of essays about consciousness and something-with-quantum and “physicalism” which makes him the kind of person I generally want nothing to do with. Just to give you an idea, let me quote from one of his essays:
“Ordinary phenomenal activity in cosmic consciousness can thus be modelled as a connected directed graph. See Figure 1a. Each vertex in the graph represents a particular phenomenal content and each edge a cognitive association logically linking contents together.”
And here is the figure: 



Hence, in contrast to what Kastrup accused me of, the reason I didn’t want to talk to him was not that I hadn’t read what he wrote, but that I had read it.

The fourth and final reason that I didn’t want to talk to him is that I get a lot of podcast requests, and I don’t reply to most of them simply because I don’t have the time.

I consulted on this matter with some friends and collaborators, and after that decided that I’d talk to Kastrup anyway. Mostly because I quite like Curt Jaimungal who offered to host the discussion and who I’d spoken with before. He’s a smart young man and if you take away nothing else from this blogpost, then at least go check out his YouTube channel which is well worth some of your time. Also, I thought that weeding out Kastrup’s misunderstandings might help other people, too.

A week later, the only good thing I can report about my conversation with Kastrup is that he didn’t bring up free will, which I think is progress. Unfortunately, he didn’t seem to know much about superdeterminism even after having had time to prepare. He eventually ran out of things to say and then accused me of being “combative” after clearly being surprised to hear that an interaction with a single photon isn’t a measurement. Srsly. Go listen to it.


Instead of concluding that he’s out of his depth, he then wrote another blogpost in which he accused me of “misleading, hollow, but self-confident, assertive rhetoric”, claimed that “Sabine has a big mouth and seems to be willing to almost flat-out lie in order to NOT look bad when confronted on a point she doesn’t have a good counter for.” And, “Her rhetorical assertiveness is, at least sometimes, a facade that hides a surprising lack of actual substance.”

Keep in mind that this is a person who claimed to “model” the “phenomenal activity in cosmic consciousness” with 16 circles. Talk about a lack of substance.

Lesson learned: I was clearly too optimistic about the possibility of rational discourse, and don’t think it makes sense to further communicate with this person.

Having said that, I gather that some people who watched the exchange were genuinely interested in the details, so I want to add some explanations that didn’t come across as clearly as I hoped they would.

First of all, the reason I am interested in superdeterminism has nothing whatsoever to do with physicalism or realism (I don’t know what these words mean to begin with). It’s simply that the collapse postulate in quantum mechanics isn’t compatible with general relativity because it isn’t local. That’s a well-defined mathematical problem and solving such problems is what I do for a living.

Note that simply declaring that the collapse isn’t a physical process doesn’t explain what happens and hence doesn’t solve the problem. We need to have some answer for what happens with the expectation value of the stress-energy-tensor during a measurement. I’m an instrumentalist; I am looking for a mathematical prescription that reproduces observations, one of which is that the outcome of a measurement is a detector eigenstate.

The obvious solution to this problem is that the measurement process which we have in quantum mechanics is an effective, statistical description of an underlying local process in a hidden variables theory. We know from Bell’s theorem (or rather, from the observed violations of Bell’s inequality) that if a local theory underlies quantum mechanics then it has to violate statistical independence. That’s what is commonly called “superdeterminism”. In such theories the wave-function is an average description, hence not “real” or “physical” in any meaningful way.

So: Why am I interested in superdeterminism? Because general relativity is local. It is beyond me why pretty much everybody else wants to hold onto an assumption as problematic and unjustified as statistical independence, and is instead willing to throw out locality, but that’s the situation.

Now, the variables in this yet-to-be-found underlying theory are only “hidden” insofar as they don’t appear in quantum mechanics; they may well be observable with suitable experiments. This brings up the question of what a suitable experiment would be.

It is clear that Bell-type tests are not the right experiments, because superdeterministic theories violate Bell inequalities just like quantum mechanics. In fact, superdeterministic theories, since they reproduce quantum mechanics when averaged over the hidden variables, will give the same inequality violations and obey the same bounds as quantum mechanics. (Some people seem to find this hard to understand and try to impress me by quoting other inequalities than Bell’s. You can check for yourself that all those inequalities assume statistical independence, so they cannot be used to test superdeterminism.)
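To make the “same inequality violations” point concrete, here is a small sketch of the standard CHSH calculation with the singlet-state correlations E(a,b) = −cos(a−b). Any theory that reproduces quantum mechanics on average, superdeterministic or not, returns the same value, which exceeds the bound of 2 that follows from locality plus statistical independence.

```python
import math

def E(a, b):
    """Spin correlation for the singlet state when measured along angles a and b."""
    return -math.cos(a - b)

# Measurement angles (in radians) that maximize the CHSH violation
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))
print(S)                  # ~2.828, i.e. 2*sqrt(2), the quantum (Tsirelson) value
print(2 * math.sqrt(2))   # theories with locality AND statistical independence obey S <= 2
```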

This is why, in 2011, I wrote a paper in which I propose a mostly model-independent test for hidden variables that relies on repeated measurements on non-commuting variables. I later learned from Chris Fuchs (see note at end of paper) that von Neumann made a similar proposal 50 years ago, but the experiment was never done. It still hasn’t been done.

A key point of the 2011 paper was that one does not need to make specific assumptions about the hidden variables. One reason I did this is that Bell’s theorem works the same way: you don’t need to know just what the hidden variables are, you just need to make some assumptions about their properties.

Another reason is, as I have explained in my book “Lost in Math”, that math alone isn’t sufficient to develop a new theory. We need data to develop the underlying hidden variables theory. Without that, we can only guess models and the chance that any one of them is correct is basically zero. 

This is why I did not want to develop a model for the hidden variables – it would be a waste of time. It didn’t work for phenomenology beyond the standard model and it won’t work here either. Instead, we have to identify the experimental range where evidence could be found, collect the data, and then develop the model.

Unfortunately and, in hindsight, unsurprisingly, the 2011 paper didn’t go anywhere. I think it’s just too far off the current mode of thinking in physics, which is all about guessing models and then showing that those guesses are wrong, a methodology that works incredibly badly. Nevertheless, I have since spent some time on developing a hidden variables model, but it’s going slowly, partly because I think it’s a waste of time (see above), but also because I am merely one person working four jobs while raising two kids and my day only has 24 hours.

However, in contrast to what Kastrup accuses me of, I have repeatedly and clearly stated that we do not have a satisfactory superdeterministic hidden variables model at the moment. I say this in pretty much all of my talks, it’s explicitly stated in my paper with Tim (“These approaches [...] leave open many questions and it might well turn out that none of them is the right answer.”). I also said this in my conversation with Kastrup. 

But I want to stress that the reason I (and quite possibly others too) didn’t write down a particular hidden variables model is not that it can’t be done, but that there are too many ways it could be done.

The next thing he got confused about is that two years ago, Sandro and I cooked up a superdeterministic toy model. The point of this model was not to say that it should be experimentally tested. We merely put this forward to demonstrate once and for all that superdeterministic models do not require “finetuning” or any “conspiracies”.

The toy model reproduces quantum mechanics exactly, but – in contrast to quantum mechanics – it’s local and deterministic, at the expense of violating statistical independence. Since it reproduces quantum mechanics it’s as falsifiable as quantum mechanics, hence the claim that superdeterminism somehow ruins science is arguably wrong.

But besides this, it is a rather pointless and ad hoc toy model that I don’t think makes a lot of sense for a number of reasons (which are stated in the paper). Still, it demonstrates that, if you want to, you can of course define your hidden variables somehow. I should also mention that our model is certainly not the first superdeterministic hidden variables model. (See references in paper.)

There are a lot of toy models in quantum foundations like this with the purpose of shedding light on one particular assumption or another, and my model falls in this tradition. I could easily modify this model so that it would make predictions that deviate from quantum mechanics, so that one could experimentally test it. But the predictions would be wrong, so why would I do this.

Having said that, my thinking about superdeterminism has somewhat changed since 2011. I was at the time thinking about the hidden variables the way that they are usually portrayed as some kind of extra information that resides within particles. I have since become convinced that this doesn’t work, and that the hidden variables are instead the degrees of freedom of the detector. If that is so, then we do know what the hidden variables are, and we can estimate how likely they are to change. Hence, it becomes easier to test the consequences.

This is why in my later papers and in my more recent talks I mention a simpler type of experiment that works for this case – when the hidden variables are the details of the detector – specifically. I have to stress though that there are other models of superdeterminism which work differently and that can’t be tested this way.

Just what the evolution law looks like I still don’t know. I think it can’t be done with a differential equation, which is why I have been looking at path integrals. I wrote a paper about this with Sandro recently in which we propose a new path-integral formalism that can incorporate the required type of evolution law. (It was just accepted for publication the other day.)

What we do in the paper is to define the formalism and show that it can reproduce quantum mechanics exactly – a finding I think is interesting in and by itself. As Kastrup said entirely correctly, there are no hidden variables in that paper. I don’t know why he thought there would be. The paper is just not about hidden variables theories. 

I have a number of ideas of how to include the detector degrees of freedom as hidden variables into the path integral. But again the problem isn’t that it can’t be done, but that there are too many ways it can be done. And in none of the ways I have tried can you still calculate something with the integral. So this didn’t really go anywhere – at least so far. It doesn’t help that I have no funding for this research.

I still think the best way forward would be to just experimentally push into this region of parameter space (small, cold system with quick repeated measurements on simple states) and see if any deviations from quantum mechanics can be found. However, if anyone reading is interested in helping with the path integral, please shoot me a note because I have a lot more to say than what’s in our papers.

Finally, I have given a number of seminars about superdeterminism, at least one of which is on YouTube here. Some months ago I also did a discussion with Matt Leifer, which is here. Leifer, I would say, is one of the leading people in the foundations of quantum mechanics at the moment. I may have some disagreements with him but he knows his stuff. You will learn more from him than from Kastrup.

In case you jumped over some of the more cumbersome paragraphs above, here is the brief summary. You can either go off the deep end and join people like Kastrup who complain about “physicalism”, claim that photons are observers, that detectors can both click and not click at the same time, and accept the other bizarre consequences that follow if you insist that quantum mechanics is fundamental.

Or you conclude, like I have, that quantum mechanics is not a fundamental theory. In this case we just need to find the right experiment to get a handle on the underlying physics.

Saturday, February 19, 2022

Has quantum mechanics proved that reality does not exist?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Physicists have shown that objective reality doesn’t exist. This is allegedly an insight derived from quantum mechanics. And not only this, it’s been experimentally confirmed. Really? How do you prove that reality doesn’t exist? Has it really been done? And do we have to stop saying “really” now? That’s what we’ll talk about today.

Many of you’ve asked me to comment on those headlines claiming that reality doesn’t exist. It’s a case in which physicists have outdone themselves in the attempt to make linear algebra sound mysterious. The outcome is patently absurd. In one article, Eric Cavalcanti, a physicist who works in the foundations of quantum mechanics, writes: “If a tree falls in a forest and no one is there to hear it, does it make a sound? Perhaps not, some say.”

So what are those people talking about? The story begins in 1961 with the Hungarian physicist Eugene Wigner. Wigner was part of the second generation of quantum physicists. At his time, the mathematical basis of quantum mechanics had been settled and was experimentally confirmed. Now, physicists moved on to quantum field theory, which would eventually give rise to the standard model. But they still couldn’t make sense of what it means to make a measurement in quantum mechanics.

The way that quantum mechanics works is that everything is described by a wave-function, usually denoted psi. The change of the wave-function in time is given by the Schrödinger equation. But the wave-function itself isn’t measurable. Instead, from the wave-function you just calculate probabilities for measurement outcomes.

Quantum mechanics might for example predict that a particle hits the left side or the right side of a screen with 50% probability each. Before the particle hits the screen, it is in a “superposition” of those two states, which means it’s neither here nor there; instead it’s in some sense both here and there. But once you have measured the particle, you know where it is with 100 percent probability. This means after a measurement, you have to update the wave-function. This update is also called the “reduction” or “collapse” of the wave-function. What is a measurement? Quantum mechanics doesn’t tell you. And that’s the problem.
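As a toy version of this left/right example, here is a minimal sketch in plain linear algebra (not tied to any particular interpretation): the state starts in an equal superposition, the Born rule gives the 50/50 probabilities, and the update replaces the state with the eigenstate belonging to the observed outcome.

```python
import numpy as np

left  = np.array([1.0, 0.0])          # particle found on the left side of the screen
right = np.array([0.0, 1.0])          # particle found on the right side

psi = (left + right) / np.sqrt(2)     # equal superposition before the measurement

# Born rule: probability of an outcome is |<outcome|psi>|^2
print(abs(np.dot(left, psi))**2, abs(np.dot(right, psi))**2)   # 0.5 and 0.5

# "Collapse": once the particle is found on, say, the left, the wave-function
# is updated to that eigenstate, and a repeated measurement gives the same
# outcome with probability 1.
psi = left
print(abs(np.dot(left, psi))**2)      # 1.0
```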

Wigner illustrated this problem with a thought experiment now known as “Wigner’s friend.” Suppose Wigner’s friend Alice is in a laboratory and does an experiment like the one we just talked about. Wigner waits outside the door. Inside the lab, the particle hits the screen with 50% probability left or right. When Alice measures the particle, the wave-function collapses and it’s either left or right. She then opens the door and tells Wigner what she’s measured.

But how would Wigner describe the experiment? He only finds out whether the particle went left or right when his friend tells him. So, according to quantum mechanics, Wigner has to assume that before he knows what’s happened, Alice is in a superposition of two states. One in which the particle went left and she knows it went left. And one in which it went right and she knows it went right.

The problem is now that according to Alice, the outcome of her measurement never was in a superposition, whereas for Wigner it was. So they don’t agree on what happened. Reality seems to be subjective.

Now. It’s rather obvious what’s going on, namely that one needs to specify what physical process constitutes a measurement, otherwise the prediction is of course ambiguous. Once you have specified what you mean by measurement, Alice will either do a measurement in her laboratory, or not, but not both. And in a real experiment, rather than a thought experiment, the measurement happens when the particle hits the screen, and that’s that. Alice is of course never in a superposition, and she and Wigner agree on what’s objectively real.

If that’s so obvious then why did Wigner worry about it? Because in the standard interpretation of quantum mechanics the update of the wave-function isn’t a physical process. It’s just a mathematical update of your knowledge, which you do after you have learned something new about the system. It doesn’t come with any physical change. And if Alice didn’t physically change anything then, according to Wigner, she must indeed herself have been in a superposition.

Okay, so that was Wigner’s friend in the 1960s. You can’t experimentally test this, but in 2016 Daniela Frauchiger and Renato Renner proposed another thought experiment that moved physicists closer to an experimental test. This has been dubbed the “Extended Wigner’s Friend Scenario.”

In this thought experiment you have two Wigners, each of whom has a friend. We will call these the Wigners and the Alices. The Alices each measure one of a pair of entangled particles. As a quick reminder, entangled particles share some property but you don’t know which particle has which share. You may know for example that the particles’ spins must add up to zero, but you don’t know whether the left particle has spin plus one and the right particle spin minus one, or the other way round.
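As a minimal numerical illustration of that reminder (this is just the standard singlet state, not the extended Wigner’s friend argument itself): the outcomes “both up” and “both down” never occur, while “up, down” and “down, up” each occur half the time.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Entangled pair with total spin zero (singlet state): the spins are opposite,
# but neither particle has a definite value on its own.
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

for label, outcome in [("up, up", np.kron(up, up)),
                       ("up, down", np.kron(up, down)),
                       ("down, up", np.kron(down, up)),
                       ("down, down", np.kron(down, down))]:
    print(label, abs(np.dot(outcome, singlet))**2)
# up,up and down,down come out with probability 0; up,down and down,up with 0.5 each.
```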

So the Alices each measure an entangled particle. Now the thing with entangled particles is that if their measurements don’t collapse the wave-function, then the two Alices are now entangled. Either the left one thinks the spin was up and the right one thinks it’s down, or the other way round. And then there’s the two Wigners, each of whom goes to ask their friend something about their measurement. Formally this “asking” just means they do another measurement. Frauchiger and Renner then show that there are combinations of measurements in which the two Alices cannot agree with the two Wigners on what the measurement outcomes were.

Again the obvious answer to what happens is that the Alices either measured the particles and collapsed the wave-function or they didn’t. If they did, then their measurement result settles what happens. If they didn’t do it, then it’s the Wigners’ results which settle the case. Or some combination thereof, depending on who measures what. Again, this is only problematic if you think that a measurement is not a physical process. Which is insanity, of course, but that’s indeed what the most widely held interpretation of quantum mechanics says.

And so, Frauchiger and Renner conclude in their paper that quantum mechanics “cannot consistently describe the use of itself” because you run into trouble if you’re one of the Wigners and try to apply quantum mechanics to understand how the Alices used quantum mechanics.

You may find this a rather academic argument, but don’t get fooled, this innocent-sounding statement is a super-important result. Historically, internal inconsistencies have been THE major cause of theory-led breakthroughs in the foundations of physics. So we’re onto something here.

But the Frauchiger-Renner paper is somewhat philosophical because they go on a lot about what knowledge you can have about other people’s knowledge. In 2018, however, Caslav Brukner looked at the problem from a somewhat different perspective and derived a “No go theorem for observer independent facts.”

His formulation allows one to use certain correlations among measurement outcomes to demonstrate that the observers in the extended Wigner’s friend scenario actually had measurement results which disagree with each other. If that was so, there would in certain cases be no “observer independent facts”. This is the origin of all the talk about objective reality not existing and so on.

And yeah it’s really just linear algebra. There aren’t even differential equations to be solved. I assure you I’m not saying this to be condescending or anything, I just mean to say, this isn’t rocket science.

Finally, in 2019, a group from Edinburgh actually measured those correlations which Brukner calculated. That’s the experimental test which the headlines are referring to. Now you may wonder who in Edinburgh played the role of the two Alices and the two Wigners? How did the Alices feel while they were in a superposition? Did the Wigners see any wave-functions collapse?

Well, I am afraid I have to disappoint you because the two Alices were single photons and the two Wigners were photo-detectors. That’s okay, of course, I mean, some of my best friends are photons too. But of course an interaction with a single photon doesn’t constitute a measurement. We already know this experimentally. A measurement requires an apparatus big enough to cause decoherence. If you claim that a single photon is an observer who makes a measurement, that’s not just a fanciful interpretation, that’s nonsense.

The alleged mystery of all those arguments and experiments disappears once you take into account that a measurement is an actual physical process. But since quantum mechanics does not require you to define just what this process is, you can make contradictory assumptions about it, and then more contradictions follow. It’s like assuming that zero equals one and then showing that a lot of contradictions follow from that.

So to summarize, no one has proved that reality doesn’t exist and no experiment has confirmed this. What these headlines tell you instead is that physicists slowly come to see that quantum mechanics is internally inconsistent and must be replaced with a better theory, one that describes what physically happens in a measurement. And when they find that theory, that will be the breakthrough of the century.

Saturday, February 12, 2022

Epic Fights in Science

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Scientists are rational by profession. They objectively evaluate the evidence and carefully separate fact from opinion. Except of course they don’t, really. In this episode, we will talk about some epic fights among scientists that show very much that scientists, after all, are only human. Who dissed whom and why and what can we learn from that? That’s what we’ll talk about today.

1. Wilson vs Dawkins

Edward Wilson passed away just a few weeks ago at age 92. He is widely regarded as one of the most brilliant biologists in history. But some of his ideas about evolution got him into trouble with another big shot of biology: Richard Dawkins.

In 2012 Dawkins reviewed Wilson’s book “The Social Conquest of Earth”. He left no doubt about his misgivings. In his review Dawkins wrote: 
“unfortunately one is obliged to wade through many pages of erroneous and downright perverse misunderstandings of evolutionary theory. In particular, Wilson now rejects “kin selection” [...] and replaces it with a revival of “group selection”—the poorly defined and incoherent view that evolution is driven by the differential survival of whole groups of organisms.”
Wilson’s idea of group selection is based on a paper that he wrote together with two mathematicians in 2010. When their paper was published in Nature magazine, it attracted criticism from more than 140 evolutionary biologists, among them some big names in the field.

In his review, Dawkins also said that Wilson’s paper probably would never have been published if Wilson hadn’t been so famous. That Wilson then ignored the criticism and published his book pretending nobody disagreed with him was to Dawkins “an act of wanton arrogance”.

Dawkins finished his review: 
“To borrow from Dorothy Parker, this is not a book to be tossed lightly aside. It should be thrown with great force. And sincere regret.”
Wilson replied that his theory was mathematically more sound than that of kin selection, and that he also had a list of names who supported his idea but, he said,
“if science depended on rhetoric and polls, we would still be burning objects with phlogiston and navigating with geocentric maps.”
In a 2014 BBC interview, Wilson said:
“There is no dispute between me and Richard Dawkins and never has been. Because he is a journalist, and journalists are people who report what the scientists have found. And the arguments I’ve had, have actually been with scientists doing research.”
Right after Wilson passed away, Dawkins tweeted: 
“Sad news of death of Ed Wilson. Great entomologist, ecologist, greatest myrmecologist, invented sociobiology, pioneer of island biogeography, genial humanist & biophiliac, Crafoord & Pulitzer Prizes, great Darwinian (single exception, blind spot over kin selection). R.I.P.”

2. Leibniz vs Newton

Newton and Leibniz were both instrumental in the development of differential calculus, but they approached the topic entirely differently. Newton came at it from a physical perspective and thought about the change of variables with time. Leibniz had a more abstract, analytical approach. He looked at general variables x and y that could take on infinitely close values. Leibniz introduced dx and dy as differences between successive values of these sequences.

The two men also had a completely different attitude to science communication. Leibniz put a lot of thought into the symbols he used and how he explained himself. Newton, on the other hand, wrote mostly for himself and often used whatever notation he liked on that day. Because of this, Leibniz’s notation was much easier to generalize to multiple variables, and much of the notation we use in calculus today goes back to Leibniz. Though the notation x-dot for speed and x-double-dot for acceleration that we use in physics comes from Newton.

Okay, so they both developed differential calculus. But who did it first? Historians say today it’s clear that Newton had the idea first, during the plague years 1665 and 1666, but he didn’t write it up until 5 years later and it wasn’t published for more than 20 years.

Meanwhile, Leibniz invented calculus in the mid 1670s. So, by the time word got out, it looked as if they’d both had the idea at the same time.

Newton and Leibniz then got into a bitter dispute over who was first. Leibniz wrote to the British Royal Society to ask for a committee to investigate the matter. But at that time the society’s president was… Isaac Newton. And Newton simply drafted the report himself. He wrote “we reckon Mr Newton the first inventor” and then presented it to the members of the committee to sign, which they did.

The document was published in 1712 by the Royal Society with the title Commercium Epistolicum Collinii et aliorum, De Analysi promota. In the modern translation the title would be “Newton TOTALLY DESTROYS Leibniz”.

On top of that, a comment on the report was published in the Philosophical Transactions of the Royal Society of London. The anonymous author, who was also Newton, explained in this comment: 
“the Method of Fluxions, as used by Mr. Newton, has all the Advantages of the Differential, and some others. It is more elegant ... Newton has recourse to converging Series, and thereby his Method becomes incomparably more universal than that of Mr. Leibniz.”
Leibniz responded with his own anonymous publication, a four page paper which in the modern translation would be titled “Leibniz OWNS Newton”. That “anonymous” text gave all the credit to Leibniz and directly accused Newton of stealing calculus. Leibniz even wrote his own History and Origin of Differential Calculus in 1714. He went so far as to change the dates on some of his manuscripts to pretend he knew about calculus before he really did.

And Newton? Well, even after Leibniz died, Newton refused to mention him in the third edition of his Principia.

You can read the full story in Rupert Hall’s book “Philosophers at war.”

3. Edison vs Tesla  

Electric lights came into use around the end of the 19th Century. At first, they all worked with Thomas Edison’s direct current system, DC for short. But his former employee Nikola Tesla had developed a competing system, the alternating current system, or AC for short. Tesla had actually offered it to Edison when he was working for him, but Edison didn’t want it.

Tesla then went to work for the engineer George Westinghouse. Together they created an AC system that was threatening Edison’s dominance on the market. The “war of the currents” began.

An engineer named Harold Brown, later found to be paid by Edison’s company, started writing letters to newspapers trying to discredit AC, saying that it was really dangerous and that the way to go was DC.

This didn’t have the desired effect, and Edison soon took more drastic steps. I have to warn you that the following is a really ugly story and in case you find animal maltreatment triggering, I think you should skip over the next minute.

Edison organized a series of demonstrations in which he killed dogs by electrocuting them with AC, arguing that a similar voltage in DC was not so deadly. Edison didn’t stop there. He went on to electrocute a horse, and then an adult elephant which he fried with a stunning 6000 volts. There’s an old movie of this, erm, demonstration on YouTube. If you really want to see it, I’ll leave a link in the info below.

Still Edison wasn’t done. He paid Brown to build an electric chair with AC generators that they bought from Westinghouse and Tesla, and then had Brown lobby for using it to electrocute people so the general public would associate AC with death. And that partly worked. But in the end AC won mostly because it’s more efficient when sent over long distances.

4. Cope vs Marsh  

Another scientific fight from the 19th Century happened in paleontology, and this one I swear only involves animals that were already dead anyway.

The American paleontologists Edward Cope and Othniel Marsh met in 1863 as students in Germany. They became good friends and later named some discoveries after each other.

Cope for example named an amphibian fossil Ptyonius marshii after Marsh and, in return, Marsh named a gigantic serpent Mosasaurus copeanus.

However, they were both very competitive and soon they were trying to outdo each other. Cope later claimed it all started when he showed Marsh a location where he’d found fossils and Marsh, behind Cope’s back, bribed the quarry operators to send anything they’d find directly to Marsh.

Marsh’s version of events is that things went downhill after he pointed out that Cope had published a paper in which he had reconstructed a dinosaur fossil but got it totally wrong. Cope had mistakenly reversed the vertebrae and then put the skull at the end of the tail! Marsh claimed that Cope was embarrassed and wanted revenge.

Whatever the reason, their friendship was soon forgotten. Marsh hired spies to track Cope and on some occasions had people destroy fossils before Cope could get his hands on them. Cope tried to boost his productivity by publishing the discovery of every new bone as that of a new species, a tactic which the American paleontologist Robert Bakker described as “taxonomic carpet-bombing.” Cope’s colleagues disapproved, but it was remarkably efficient. Cope would publish about 1400 academic papers in total. Marsh merely made it to 300.

But Marsh eventually became chief paleontologist of the United States Geological Survey, USGS, and used its funds to promote his own research while cutting funds for Cope’s expeditions. And when Cope still managed to do some expeditions, Marsh tried to take his fossils, claiming that since the USGS funded them, they belonged to the government.

This didn’t work out as planned. Cope could prove that he had financed most of his expeditions with his own money. He then contacted a journalist at the New York Herald who published an article claiming Marsh had misused USGS funds. An investigation found that Cope was right. Marsh was expelled from the Survey without his fossils, because they had been obtained with USGS funds.

In a last attempt to outdo Marsh, Cope stated in his will that he was donating his skull to science. He wanted his brain to be measured and compared to that of Marsh! But Marsh didn’t accept the challenge, so the world will never know which of the two had the bigger brain.

Together the two men discovered 136 species of dinosaurs (Cope 56 and Marsh 80) but they died financially ruined with their scientific reputation destroyed.

5. Hoyle vs The World

British astronomer Fred Hoyle is known as the man who discovered how nuclear reactions work inside stars. In 1983, the Nobel Prize in physics was given... to his collaborator Willy Fowler, not to Hoyle. Everyone, including Fowler, was stunned. How could that happen?

Well, the Swedish Royal Academy isn’t exactly forthcoming with information, but over the years Hoyle’s colleagues have offered the following explanation. Let’s go back a few years to 1974.

In that year, the Nobel Prize for physics went to Anthony Hewish for his role in the discovery of pulsars. Upon hearing the news Hoyle told a reporter: “Jocelyn Bell was the actual discoverer, not Hewish, who was her supervisor, so she should have been included.” Bell’s role in the discovery of pulsars is widely recognized today, but in 1974, that Hoyle put in a word for Bell made global headlines.

Hewish was understandably upset, and Hoyle clarified in a letter to The Times that his issue wasn’t with Hewish, but with the Nobel committee: “I would add that my criticism of the Nobel award was directed against the awards committee itself, not against Professor Hewish. It seems clear that the committee did not bother itself to understand what happened in this case.”

Hoyle’s biographer Simon Mitton claimed this is why Hoyle didn’t get the Nobel Prize: The Nobel Prize committee didn’t like being criticized. However, the British scientist Sir Harry Kroto, who won the Nobel Prize for chemistry in 1996, doesn’t think this is what happened.

Kroto points out that while Hoyle may have made a groundbreaking physics discovery, he was also a vocal defender of some outright pseudoscience, for example, he believed that the flu was caused by microbes that rain down on us from outer space.

Hoyle was also, well, an unfriendly and difficult man who had offended most of his colleagues at some point. According to Sir Harry, the actual reason that Hoyle didn’t get a Nobel Prize was that he’d use it to promote pseudoscience. Kroto said
“Hoyle was so arrogant and dismissive of others that he would use the prestige of the Nobel prize to foist his other truly ridiculous ideas on the lay public. The whole scientific community felt that.”
So what do we learn from that? Well, one thing we can take away is that if you want to win a Nobel Prize, don’t spread pseudoscience. But the bigger lesson I think is that while some competition is a good thing, it’s best enjoyed in small doses.

Saturday, February 05, 2022

What did COVID do to life expectancy?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


According to the US Centers for Disease Control and Prevention, in 2020 the COVID pandemic decreased life expectancy in the United States by about one year and a half. This finding was reported in countless media outlets, including the New York Times, Bloomberg, and Reuters. It was the largest one-year decline of life expectancy since World War II, when it dropped 2.9 years.

Does this mean that we all have to expect we’ll die one and a half years earlier? How can that be, if we haven’t even had COVID? And if that’s not what it means, then what did COVID do to life expectancy? That’s what we will talk about today.

First, let’s have a look at the numbers for the United States because that’s where the report made headlines. A group of researchers from Europe and the US published a paper in Nature’s Scientific Reports last year, in which they estimated how much total life-time was lost to COVID. They took the age at which people died from COVID and compared that with the age at which those people were expected to die. They found that on average each COVID death shortened a life by 14 years. So that’s the first number to take away. If you die from COVID in the US, you die on the average about 14 years earlier than expected.

At the time the study was done, in early 2021, about 300 thousand people had died in the US from COVID. We can then multiply the average number of years lost per death by the number of deaths. This gives us an estimate for the total number of years lost, which is a little over 1.3 million.

Naively you could now say we calculate the average life-time lost per person by taking all those missing years and dividing them up over the entire population. The US currently has about 330 million inhabitants, so that makes 1.3 million years divided by 330 million people, which is about 1.5 days per person. Days. Not years.
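In code, that naive per-person arithmetic with the figures quoted above is just:

```python
years_lost_total = 1.3e6    # total years of life lost, figure quoted above
population_us    = 330e6    # approximate number of US inhabitants

days_per_person = years_lost_total / population_us * 365
print(days_per_person)      # roughly 1.4 days of life lost per person, on average
```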

Since the Nature study was done, the number of deaths in the US has risen to more than 800 thousand, so the lifetime lost per person has gone up by a factor of two to three. And that’s the second number to take away: on the average each American has lost a few days of life, though if you’re still alive that’s in all fairness a rather meaningless number.

Clearly that can’t have been what the Center for Disease Control was referring to. Indeed, what the CDC did to get to the 1.5 years is something entirely different. They calculated what is called the period life expectancy at birth. For this you take the number of people who die in a certain age group in a certain year (for example 2020), and divide it by the total number of people in that age group. From these age-specific death rates you then calculate how long a newborn would live if those rates applied for their entire life. This period life expectancy therefore tells you something about the conditions of living in a certain period of time.
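For concreteness, here is a minimal sketch of how a period life expectancy at birth comes out of such age-specific death rates. The rates below are invented for illustration only; real life tables use actual single-year data and more careful conventions.

```python
# Invented one-year death probabilities q(x) = P(die between age x and x+1 | alive at x).
def q(x):
    return min(1.0, 0.0005 + 0.0001 * 1.09**x)   # crude, Gompertz-like rise with age

def period_life_expectancy(q, max_age=120):
    """Expected years lived by a newborn if this year's death rates applied forever."""
    alive = 1.0       # fraction of the hypothetical cohort still alive
    years = 0.0
    for x in range(max_age):
        deaths = alive * q(x)
        # survivors live the full year; those who die are counted for half a year on average
        years += (alive - deaths) + 0.5 * deaths
        alive -= deaths
    return years

print(period_life_expectancy(q))   # life expectancy at birth under these made-up rates
```

Done with the death rates actually observed in 2020, this kind of calculation gives the CDC’s number.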

This means that to interpret the decrease of life expectancy which the CDC found, you have to pretend that from now on every year will be like 2020. If every year was like 2020, then people born today would lose 1.5 years of their life expectancy compared to 2019.

Of course no one actually thinks that every year from now on will be like 2020, to begin with because we now have vaccines. The number that the CDC calculated is a standard definition for life expectancy, which is formally entirely fine. It’s just that this standard definition is easy to misunderstand. The 1.5 years sound much more alarming than they really are.

More recently, in October 2021, another study appeared in the British Medical Journal which looked at the changes to life expectancy for 37 countries for which they could get good data. For this they compared the mortality during the pandemic with the trends in the period from 2005 to 2019.

In the USA they found a decline of about 2 years, so a little more than the estimate from the CDC. For England and Wales, they found a decline in life expectancy of about 1 year. For Germany about 0.3 years. If you want to look up your country of residence, I’ll leave a link to the paper in the info below. The same paper also reported that in countries where life expectancy decreased, the years of lives lost to the covid pandemic in 2020 were more than five times higher than those lost to the seasonal influenza in 2015.

Keep in mind that this is the period life expectancy which really tells you something about the present conditions rather than about the future. When we go through a war or a pandemic, then this period life expectancy decreases, but when the bad times are over, the period life expectancy bounces back up again. That’s likely going to happen with COVID too. Indeed, in many countries the period life expectancy will probably bounce back to a value somewhat higher than before the pandemic, because the previous trend was rising.

Also, since people with pre-existing conditions are more likely to die from COVID, this will probably increase the average period life expectancy for the survivors. But before you think this is good news, keep in mind that this is a statistical value. It just means that the fractions of people with and without pre-existing conditions have shifted. It hasn’t actually changed anything about you. The changing weights between the groups don’t change the average life expectancy in either group. It’s like removing foxes from the race won’t make the turtles run any faster, but it will certainly shift the average over everybody in the race.

The Spanish flu pandemic of 1918 is an interesting example. In 1917, life expectancy in the USA was 54. During the pandemic, it dropped to 47.2, but in 1919 it went up to 55.3. In Germany it went from 40.1 in 1917, to 33 in 1918, and in 1919 it went back up to 48.4.

If this period life expectancy is so difficult to interpret, why do scientists use it to begin with? Well, it’s just the best number you can calculate from existing data. What you would actually want to know is how long it will take for the people born in a given year to die. The people born in a given time-span, for example a year, are called a cohort. The average of that distribution, when the people in the cohort die, is called the cohort life expectancy. Problem is, you can only calculate this when everyone in the cohort has died, so it’s really only interesting for people who are dead already.

This is why, if we want to talk about the living conditions today, we use instead the period life expectancy. This makes pragmatic sense because at least it’s something we know how to calculate, but one has to be careful with interpreting it. The period life expectancy is really a snapshot of the present conditions, and when conditions change rapidly, like with a pandemic, it isn’t a very useful indicator for what to actually expect.

Another common misunderstanding with life expectancy is what I briefly mentioned in an earlier video. What we normally, colloquially, but also in the media, refer to as “life expectancy” is strictly speaking the “life expectancy at birth”. But this life expectancy at birth doesn’t actually tell you when you can expect to die if you are any older than zero years, which is probably the case for most of you.

If you look at the probability of dying at a certain age by present conditions, so the period life expectancy, then the distribution for a developed world country typically looks like this. So for every age, there is a certain probability of dying. The average age at which people die is here. But the age at which most people die is higher. In other words, the mode of the distribution isn’t the same as the mean.

But now, the older you get, the fewer years you have to average over to get the mean value of the remaining years. If you have survived the first year, you only have to average over the rest, so the mean value, which is how long you can expect to live on the average, shifts a bit to higher age. And the older you get, the less there is to average over and the further the mean shifts.

This means, counterintuitive as it sounds, the life expectancy at a given age increases with age. And this does not take into account that living conditions may further improve your expectations. The consequence is that if you have reached what was your life expectancy at birth, you can currently expect to live another eight years or so. That’s certainly something you should take into account in your retirement plans!
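Here is a small sketch with an invented distribution of ages at death, made up to roughly mimic a developed-world curve: the mean lies below the mode, and the expected age at death, conditional on having already reached a given age, keeps shifting upward.

```python
import numpy as np

ages = np.arange(0, 111)
# Invented weights: a small bump of early deaths plus a broad peak in the late 80s.
weights = 0.25 * np.exp(-ages / 2.0) + np.exp(-((ages - 87) / 9.0)**2)
p = weights / weights.sum()

print("mean age at death:", (ages * p).sum())                 # the 'life expectancy at birth'
print("most common age at death (mode):", ages[p.argmax()])   # higher than the mean

for a in (0, 40, 65, 80, 90):
    alive = ages >= a
    expected = (ages[alive] * p[alive]).sum() / p[alive].sum()
    print(f"expected age at death, given you reached {a}: {expected:.1f}")
```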

This shift of the mean of the curve with age is the origin of another misconception, namely that people died young in the Middle Ages. Since this is history, we can actually use the cohort life expectancy. For example, when Isaac Newton was born, the life expectancy in England was just 36 years. But he died at age 84. Was he also a genius at outliving people?

No, the thing is that the average life expectancy is incredibly misleading when you’re talking about the Middle Ages. Back then, infant mortality was very high, so that pulled down the mean value far below the mode. Indeed, Newton himself almost died as an infant! But once people made it past the first couple of years, they had a reasonable chance to get grey hair.

So, it’s really misleading to take the low life expectancy at birth in the Middle Ages to mean that people died young. This also makes clear that the biggest contribution to the worldwide increase in life expectancy has actually come from decreasing infant mortality.

Around the year 1800, globally, more than 40 percent of children died during the first 5 years of their life. Today it’s less than 4 percent though there are big differences between the developed and developing world.

If you want to get a reasonable assessment of your own life expectancy at your present age, you should look for something called a period life table for your country of residence. For example, here you see the 2017 table for the USA. At birth, life expectancy for men is currently 76 years and for women 81. But if you check the table at these ages, you see that men at that age can still expect to live another 10 years, and women another 9.

And when you’ve reached that age you still have a few more years to live. It looks like some kind of Zeno paradox in which you never reach death. But as in the Zeno paradox, an infinite sum of smaller and smaller numbers can well be finite and indeed, spoiler alert, we will all die. So use your time wisely and click the subscribe button to learn more fascinating stuff.