Saturday, October 24, 2020

How can climate be predictable if weather is chaotic?

[This is a transcript of the video embedded below. Some parts of the text may not make sense without the graphics in the video.]

Today I want to take on a question that I have not been asked, but that I have seen people asking – and not getting a good answer. It’s how scientists can predict the climate in one hundred years if they cannot make weather forecasts beyond two weeks – because of chaos. The answer they usually get is “climate is not weather”, which is correct, but doesn’t really explain it. And I think it’s actually a good question. How is it possible that one can make reliable long-term predictions when short-term predictions are impossible? That’s what we will talk about today.


Now, weather forecasting is hideously difficult, and I am not a meteorologist, so I will instead just use the best-known example of a chaotic system, which is the one studied by Edward Lorenz in 1963.

Edward Lorenz was a meteorologist who discovered by accident that weather is chaotic. In the 1960s, he repeated a calculation to predict a weather trend, but rounded an initial value from six digits after the decimal point to only three digits. Despite the tiny difference in the initial value, he got wildly different results. That’s chaos, and it gave rise to the idea of the “butterfly effect”: that the flap of a butterfly’s wings in China might cause a tornado in Texas two weeks later.

To understand better what was happening, Lorenz took his rather complicated set of equations and simplified it to a set of only three equations that nevertheless captures the strange behavior he had noticed. These three equations are now commonly known as the “Lorenz Model”. In the Lorenz model, we have three variables, X, Y, and Z, and they are functions of time, t. This model can be interpreted as a simplified description of convection in gases or fluids, but just what it describes does not really matter for our purposes.
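The transcript doesn’t spell the equations out, so, for reference, here is the textbook form of the Lorenz model, with the parameter values Lorenz originally used (σ = 10, ρ = 28, β = 8/3):

$$\dot X = \sigma\,(Y - X), \qquad \dot Y = X\,(\rho - Z) - Y, \qquad \dot Z = X Y - \beta Z.$$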

The nice thing about the Lorenz model is that you can integrate the equations on a laptop. Let me show you one of the solutions. Each of the axes in this graph is one of the directions X, Y, Z, so the solution to the Lorenz model will be a curve in these three dimensions. As you can see, it circles around two different locations, back and forth.
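If you want to reproduce this at home, here is a minimal sketch of such an integration in Python. The initial value and time span are my own choices, not necessarily the ones behind the video’s graphics.

```python
# Integrate the Lorenz model and plot the trajectory in three dimensions.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # Lorenz's classic parameters

def lorenz(t, state):
    x, y, z = state
    return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]

t = np.linspace(0, 100, 20_000)
sol = solve_ivp(lorenz, (0, 100), [1.0, 1.0, 1.0], t_eval=t, rtol=1e-8)

ax = plt.figure().add_subplot(projection="3d")
ax.plot(sol.y[0], sol.y[1], sol.y[2], lw=0.4)
ax.set_xlabel("X"); ax.set_ylabel("Y"); ax.set_zlabel("Z")
plt.show()
```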

It's not only this one solution which does that, actually all the solutions will end up doing circles close by these two places in the middle, which is called the “attractor”. The attractor has an interesting shape, and coincidentally happens to look somewhat like a butterfly with two parts you could call “wings”. But more relevant for us is that the model is chaotic. If we take two initial values that are very similar, but not exactly identical, as I have done here, then the curves at first look very similar, but then they run apart, and after some while they are entirely uncorrelated.

These three dimensional plots are pretty, but it’s somewhat hard to see just what is going on, so in the following I will merely look at one of these coordinates, that is the X-direction. From the three dimensional plot, you expect that the value in X-direction will go back and forth between two numbers, and indeed that’s what happens.

Here you see again the curves I previously showed for two initial values that differ by a tiny amount. At first the two curves look pretty much identical, but then they diverge and after some time they become entirely uncorrelated. As you see, the curves flip back and forth between positive and negative values, which correspond to the two wings of the attractor. In this early range, maybe up to t equals five, you would be able to make a decent weather forecast. But after that, the outcome depends very sensitively on exactly what initial value you used, and then measurement error makes a good prediction impossible. That’s chaos.

Now, I want to pretend that these curves say something about the weather. Maybe they describe the weather on a strange planet where it either doesn’t rain at all or it pours, and the weather just flips back and forth between these two extremes. Besides making the short-term weather forecast, you could then also ask what’s the average rainfall in a certain period, say, a year.

To calculate this average, you would integrate the curve over some period of time, and then divide by the duration of that period. So let us plot these curves again, but for a longer period. Just by eyeballing these curves you’d expect the average to be approximately zero. Indeed, I calculated the average from t equals zero to t equals one hundred, and it comes out to be approximately zero. What this means is that the system spends about equal amounts of time on each wing of the attractor.
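In code, continuing the sketch from above, that average is just the integral divided by the time span; the trapezoidal rule is one way to approximate it:

```python
# Time-average of the X-coordinate over t = 0 to 100 (trapezoidal rule).
x_avg = np.trapz(sol.y[0], sol.t) / (sol.t[-1] - sol.t[0])
print(x_avg)  # comes out close to zero: equal time on both wings of the attractor
```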

To stick with our story of rainfall on the weird planet, you can imagine that the curve shows deviations from a reference value that you set to zero. The average value depends on the initial value and will fluctuate around zero because I am only integrating over a finite period of time, so I arbitrarily cut off the curve somewhere. If you’d average over longer periods of time, the average would inch closer and closer to zero.

What I will do now is add a constant to the equations of the Lorenz model. I will call this constant “f”; it mimics what climate scientists call “radiative forcing”. The radiative forcing is the excess power per area that Earth captures due to increasing carbon dioxide levels. Again, that’s relative to a reference value.
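The transcript doesn’t say where exactly the constant enters, so, as an assumption, here is one common choice from the literature on forced Lorenz models: add f to the equation for X. Treat this as a sketch, not as the exact setup behind the video’s plots.

```python
def lorenz_forced(t, state, f):
    # Assumption: the forcing f is added to the X-equation only; the
    # transcript doesn't specify where the constant goes.
    x, y, z = state
    return [SIGMA * (y - x) + f, x * (RHO - z) - y, x * y - BETA * z]
```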

I want to emphasize again that I am using this model only as an analogy. It does not actually describe the real climate. But it does make a good example for how to make predictions in chaotic systems.

Having said that, let us look again at how the curves look with the added forcing. These are the curves for f equals one. Looks pretty much the same as previously, if you ask me. f=2. I dunno. You wouldn’t believe how much time I have spent staring at these curves for this video. f=3. Looks like the system is spending a little more time in this upper range, doesn’t it? f=4. Yes, it clearly does. And just for fun, if you turn f up beyond seven or so, the system will get stuck on one side of the attractor immediately.

The relevant point is now that this happens for all initial values. Even though the system is chaotic, one clearly sees that the response of the system does have a predictable dependence on the input parameter.

To see this better, I have calculated the average of these curves as a function of the “radiative forcing”, for a sample of initial values. And this is what you get. You clearly see that the average value is strongly correlated with the radiative forcing. Again, the scatter you see here is because I am averaging over a rather arbitrarily chosen finite period.
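Here is a sketch of that experiment, under the same assumptions as above: sweep the forcing, average over a finite window for a handful of random initial values, and watch the averages shift with f.

```python
# Average of X over t = 0 to 100, for several forcings and initial values.
rng = np.random.default_rng(0)
for f in range(8):
    for _ in range(5):
        y0 = rng.uniform(-10, 10, size=3)
        s = solve_ivp(lorenz_forced, (0, 100), y0, t_eval=t, args=(f,), rtol=1e-8)
        avg = np.trapz(s.y[0], s.t) / 100.0
        print(f"f = {f}: average X = {avg:+.2f}")
```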

What this means is that in a chaotic system, the trends of average values can be predictable, even though you cannot predict the exact state of the system beyond a short period of time. And this is exactly what is happening in climate models. Scientists cannot predict whether it will rain on June 15th, 2079, but they can very well predict the average rainfall in 2079 as a function of increasing carbon dioxide levels.

This video was sponsored by Brilliant, which is a website that offers interactive courses on a large variety of topics in science and mathematics. In this video I showed you the results of some simple calculations, but if you really want to understand what is going on, then Brilliant is a great starting point. Their courses on Differential Equations I and II, probability, and statistics cover much of the basics that I used here.

To support this channel and learn more about Brilliant, go to Brilliant.org/Sabine and sign up for free. The first 200 subscribers using this link will get 20 percent off the annual premium subscription.



You can join the chat about this week’s video, tomorrow (Sunday, Oct 25) at 5pm CET, here.

Thursday, October 22, 2020

Particle Physicists Continue To Make Empty Promises

[This is a transcript of the video embedded below]

Hello and welcome back to my YouTube channel. Today I want to tell you how particle physicists are wasting your money. I know that’s not nice, but at the end of this video I think you will understand why I say what I say.


What ticked me off this time was a comment published in Nature Physics by CERN Director-General Fabiola Gianotti and Gian Giudice, who is Head of CERN's Theory Department. It’s called a comment, but what it really is is an advertisement. It’s a sales pitch for their next larger collider, for which they need, well, a few dozen billion Euros. We don’t know exactly, because they are not telling us how expensive it would be to actually run the thing. When it comes to the question of what the new mega collider could do for science, they explain:
“A good example of a guaranteed result is dark matter. A proton collider operating at energies around 100 TeV [that’s the energy of the planned larger collider] will conclusively probe the existence of weakly interacting dark-matter particles of thermal origin. This will lead either to a sensational discovery or to an experimental exclusion that will profoundly influence both particle physics and astrophysics.”
Let me unwrap this for you. The claim that dark matter is a guaranteed result, followed by weasel words about weakly interacting and thermal origin, is the physics equivalent of claiming “We will develop a new drug with the guaranteed result of curing cancer” followed by weasel words to explain, well, actually it will cure a type of cancer that exists only theoretically and has never been observed in reality. That’s how “guaranteed” this supposed dark matter result is. They guarantee to rule out some very specific hypotheses for dark matter that we have no reason to think are correct in the first place. What is going on here?

What’s going on is that particle physicists have a hard time understanding that when Popper went on about how important it is that a scientific hypothesis is falsifiable, he did not mean that a hypothesis is scientific just because it is falsifiable. There are lots of falsifiable hypotheses that are clearly unscientific.

For example, YouTube will have a global blackout tomorrow at noon central time. That’s totally falsifiable. If you give me 20 billion dollars, I can guarantee that I can test this hypothesis. Of course it’s not worth the money. Why? Because my hypothesis may be falsifiable, but it’s unscientific because it’s just guesswork. I have no reason whatsoever to think that my blackout prediction is correct.

The same is the case with particle physicists’ hypotheses for dark matter that you are “guaranteed” to rule out with that expensive big collider. Particle physicists literally have thousands of theories for dark matter, some thousands of which have already been ruled out. Can they guarantee that a next larger collider can rule out some more? Yes. What is the guaranteed knowledge we will gain from this? Well, the same as the gain that we have gotten so far from ruling out their dark matter hypotheses, which is that we still have no idea what dark matter is. We don’t even know that it is a particle to begin with.

Let us look again at that quote, they write:
“This will lead either to a sensational discovery or to an experimental exclusion that will profoundly influence both particle physics and astrophysics.”
No. The most likely outcome will be that particle physicists and astrophysicists will swap their current “theories” for new “theories” according to which the supposed particles are heavier than expected. Then they will claim that we need yet another bigger collider to find them. What makes me think this will happen? Am I just bitter or cynical, as particle physicists accuse me of being? No, I am just looking at what they have done in the past.

For example, here’s an oldie but goldie, a quote from a piece written by string theorists David Gross and Edward Witten for the Wall Street Journal:
“There is a high probability that supersymmetry, if it plays the role physicists suspect, will be confirmed in the next decade.”
They wrote this in 1996. Well, clearly that didn’t pan out.

And because it’s so much fun, I want to read you a few more quotes. But they are a little bit more technical, so I have to give you some background first.

When particle physicists say “electroweak scale” or “TeV scale” they mean energies that can be tested at the Large Hadron Collider. When they say “naturalness” they refer to a certain type of mathematical beauty that they think a theory should fulfil.

You see, particle physicists think it is a great problem that theories which have been experimentally confirmed are not as beautiful as particle physicists think nature should be. They have therefore invented a lot of particles that you can add to the supposedly ugly theories to remedy the lack of beauty. If this sounds like a completely non-scientific method, that’s because it is. There is no reason this method should work, and it does as a matter of fact not work. But they have done this for decades and still have not learned that it does not work.

Having said that, here is a quote from Giudice and Rattazzi in 1998. That’s the same Giudice who is one of the authors of the new Nature Physics comment that I mentioned in the beginning. In 1998 he wrote:
“The naturalness (or hierarchy) problem, is considered to be the most serious theoretical argument against the validity of the Standard Model (SM) of elementary particle interactions beyond the TeV energy scale. In this respect, it can be viewed as the ultimate motivation for pushing the experimental research to higher energies.”
Higher energies, at that time, were the energies that have now been tested at the Large Hadron Collider. The supposed naturalness problem was the reason they thought the LHC should see new fundamental particles besides the Higgs. This has not happened. We now know that those arguments were wrong.

In 2004, Fabiola Gianotti, that’s the other author of the new Nature Physics comment, wrote:
“[Naturalness] arguments open the door to new and more fundamental physics. There are today several candidate scenarios for physics beyond the Standard Model, including Supersymmetry (SUSY), Technicolour and theories with Extra-dimensions. All of them predict new particles in the TeV region, as needed to stabilize the Higgs mass. We note that there is no other scale in particle physics today as compelling as the TeV scale, which strongly motivates a machine like the LHC able to explore directly and in detail this energy range.”
So, she claimed in 2004 that the LHC would see new particles besides the Higgs. Whatever happened to this prediction? Did they ever tell us what they learned from being wrong? Not to my knowledge.

These people were certainly not the only ones who repeated this story. Here is for example a quote from the particle physicist Michael Dine, who wrote in 2007:
“The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.”
Well, you know what, it hasn’t done either.

I could go on for quite some while quoting particle physicists who made wrong predictions and now pretend they didn’t, but it’s rather repetitive. I have collected the references here. Let us instead talk about what this means.

All these predictions from particle physicists were wrong. There is no shame in being wrong. Being wrong is essential for science. But what is shameful is that none of these people ever told us what they learned from being wrong. They did not revise their methods for making predictions for new particles. They still use the same methods that have not worked for decades. Neither did they do anything about the evident group think in their community. But they still want more money.

The tragedy is I actually like most of these particle physicists. They are smart and enthusiastic about science and for the most part they’re really nice people.

But look, they refuse to learn from evidence. And someone has to point it out: The evidence clearly says their methods are not working. Their methods have led to thousands of wrong predictions. Scientists should learn from failure. Particle physicists refuse to learn.

Particle physicists, of course, are entirely ignoring my criticism and instead call me “anti-science”. Let that sink in for a moment. They call me “anti-science” because I say we should think about where to best invest science funding, and if you do a risk-benefit assessment it is clear that building a bigger collider is not currently a good investment. It is both high risk and low benefit. We would be better off if we'd instead invest in the foundations of quantum mechanics and astroparticle physics. They call me “anti-science” because I ask scientists to think. You can’t make up this shit.

Frankly, the way that particle physicists behave makes me feel embarrassed I ever had anything to do with their field.

Saturday, October 17, 2020

I Can’t Forget [Remix]

In the midst of the COVID lockdown I decided to remix some of my older songs. Just as I was sweating over the meters, I got an email out of the blue. Steven Nikolic from Canada wrote he’d be interested in remixing some of my old songs. A few months later, we have started a few projects together. Below you see the first result, a remake of my 2014 song “I Can’t Forget”.


If you want to see what difference 6 years can make, in hardware, software, and wrinkles, the original is here.

David Bohm’s Pilot Wave Interpretation of Quantum Mechanics

Today I want to take on a topic many of you requested, repeatedly. That is David Bohm’s approach to Quantum Mechanics, also known as the Pilot Wave Interpretation, or sometimes just Bohmian Mechanics. In this video, I want to tell you what Bohmian mechanics is, how it works, and what’s good and bad about it.

Before I get to that, I want to tell you a little about David Bohm himself, because I think the historical context is relevant to understand today’s situation with Bohmian Mechanics. David Bohm was born in 1917 in Pennsylvania, in the Eastern United States. His early work in physics was in the areas we would now call plasma physics and nuclear physics. In 1951, he published a textbook about quantum mechanics. In the course of writing it, he became dissatisfied with the then prevailing standard interpretation of quantum mechanics.

The standard interpretation at the time was the one pioneered by the Copenhagen group – notably Bohr and Heisenberg – and is today usually referred to as the Copenhagen Interpretation. It works as follows. In quantum mechanics, everything is described by a wave-function, usually denoted Psi. Psi is a function of time. One can calculate how it changes in time with a differential equation known as the Schrödinger equation. When one makes a measurement, one calculates probabilities for the measurement outcomes from the wave-function. The equation with the help of which one calculates these probabilities is known as Born’s Rule. I explained in an earlier video how this works.

The peculiar thing about the Copenhagen Interpretation is that it does not tell you what happens before you make a measurement. If you have a particle described by a wave-function that says the particle is in two places at once, then the Copenhagen Interpretation merely says, at the moment you measure the particle it’s either here or there, with a certain probability that follows from the wave-function. But how the particle transitioned from being in two places at once to suddenly being in only one place, the Copenhagen Interpretation does not tell you. Those who advocate this interpretation would say that’s a question you are not supposed to ask because, by definition, what happens before the measurement is not measurable.

Bohm was not the only one dismayed that the Copenhagen people would answer a question by saying you’re not supposed to ask it. Albert Einstein didn’t like it either. If you remember, Einstein famously said “God does not throw dice”, by which he meant he does not believe that the probabilistic nature of quantum mechanics is fundamental. In contrast to what is often claimed, Einstein did not think quantum mechanics was wrong. He just thought it is probabilistic the same way classical physics is probabilistic, namely, that our inability to predict the outcome of a measurement in quantum mechanics comes from our lack of information. Einstein thought, in a nutshell, there must be some more information, some information that is missing in quantum mechanics, which is why it appears random.

This missing information in quantum mechanics is usually called “hidden variables”. If you knew the hidden variables, you could predict the outcome of a measurement. But the variables are “hidden”, so you can only calculate the probability of getting a particular outcome.

Back to Bohm. In 1952, he published two papers in which he laid out his idea for how to make sense of quantum mechanics. According to Bohm, the wave-function in quantum mechanics is not what we actually observe. Instead, what we observe are particles, which are guided by the wave-function. One can arrive at this interpretation in a few lines of calculation. I will not go through this in detail because it’s probably not so interesting for most of you. Let me just say you take the wave-function apart into an absolute value and a phase, insert it into the Schrödinger equation, and then separate the resulting equation into its real and imaginary part. That’s pretty much it.

The result is that in Bohmian mechanics the Schrödinger equation falls apart into two equations. One describes the conservation of probability and determines what the guiding field does. The other determines the position of the particle, and it depends on the guiding field. This second equation is usually called the “guiding equation.” So this is how Bohmian mechanics works: you have particles, and they are guided by a field which in turn depends on the particle.
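For those who want the few lines of calculation spelled out, this is the standard textbook decomposition for a single particle of mass $m$ in a potential $V$. Write the wave-function as $\psi = R\,e^{iS/\hbar}$ with real functions $R$ (the absolute value) and $S$ (the phase), and insert this into the Schrödinger equation. The imaginary part gives the conservation of probability for $\rho = R^2$,

$$\partial_t \rho + \nabla \cdot \left( \rho\, \frac{\nabla S}{m} \right) = 0,$$

and the real part gives a Hamilton-Jacobi equation with an extra “quantum potential” term,

$$\partial_t S + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m} \frac{\nabla^2 R}{R} = 0.$$

The guiding equation for the particle position is then $\dot{\mathbf{x}} = \nabla S / m$.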

To use Bohm’s theory, you then need one further assumption, one that tells you what the probability is for the particle to be at a certain place in the guiding field. This adds another equation, usually called the “quantum equilibrium hypothesis”. It is basically equivalent to Born’s rule and says that the probability for finding the particle at a particular place in the guiding field is given by the absolute square of the wave-function at that place. Taken together, these equations – the conservation of probability, the guiding equation, and the quantum equilibrium hypothesis – give the exact same predictions as quantum mechanics. The important difference is that in Bohmian mechanics, the particle is really always in only one place, which is not the case in quantum mechanics.

As they say, a picture speaks a thousand words, so let me just show you what this looks like for the double slit experiment. These thin black curves you see here are the possible ways that the particle could go from the double slit to the screen where it is measured by following the guiding field. Just which way the particle goes is determined by the place it started from. The randomness in the observed outcome is simply due to not knowing exactly where the particle came from.

What is it good for? The great thing about Bohmian mechanics is that it explains what happens in a quantum measurement. Bohmian mechanics says that the reason we can only make probabilistic predictions in quantum mechanics is just that we did not exactly know where the particle initially was. If we measure it, we find out where it is. Nothing mysterious about this. Bohm’s theory, therefore, says that probabilities in quantum mechanics are of the same type as in classical mechanics. The reason we can only predict probabilities for outcomes is because we are missing information. Bohmian mechanics is a hidden variables theory, and the hidden variables are the positions of those particles.

So, that’s the big benefit of Bohmian mechanics. I should add that while Bohm was working on his papers, it was brought to his attention that a very similar idea had previously been put forward in 1927 by de Broglie. This is why, in the literature, the theory is often more accurately referred to as “de Broglie-Bohm”. But de Broglie’s proposal did not, at the time, attract much attention. So how did physicists react to Bohm’s proposal in fifty-two? Not very kindly. Niels Bohr called it “very foolish”. Leon Rosenfeld called it “very ingenious, but basically wrong”. Oppenheimer put it down as “juvenile deviationism”. And Einstein, too, was not convinced. He called it “a physical fairy-tale for children” and “not very hopeful.”

Why the criticism? One of the big disadvantages of Bohmian mechanics, that Einstein in particular disliked, is that it is even more non-local than quantum mechanics already is. That’s because the guiding field depends on all the particles you want to measure. This means, if you have a system of entangled particles, then the guiding equation says the velocity of one particle depends on the velocity of the other particles, regardless of how far away they are from each other.

That’s a problem because we know that quantum mechanics is strictly speaking only an approximation. The correct theory is really a more complicated version of quantum mechanics, known as quantum field theory. Quantum field theory is the type of theory that we use for the standard model of particle physics. It’s what people at CERN use to make predictions for their experiments. And in quantum field theory, locality and the speed-of-light limit are super-important. They are built very deeply into the math.

The problem is now that since Bohmian mechanics is not local, it has turned out to be very difficult to make a quantum field theory out of it. Some have made attempts, but currently there is simply no Pilot Wave alternative for the Standard Model of Particle Physics. And for many physicists, me included, this is a game stopper. It means the Bohmian approach cannot reproduce the achievements of the Copenhagen Interpretation.

Bohmian mechanics has another odd feature that seems to have perplexed Albert Einstein and John Bell in particular. It’s that, depending on the exact initial position of the particle, the guiding field tells the particle to go either one way or another. But the guiding field has a lot of valleys where particles could be going. So what happens with the empty valleys if you make a measurement? In principle, these empty valleys continue to exist. David Deutsch has claimed this means “pilot-wave theories are parallel-universes theories in a state of chronic denial.”

Bohm himself, interestingly enough, seems to have changed his attitude towards his own theory. He originally thought it would in some cases give predictions different from quantum mechanics. I only learned this recently from a biography of Bohm written by David Peat. Peat writes:

“Bohm told Einstein… his only hope was that conventional quantum theory would not apply to very rapid processes. Experiments done in a rapid succession would, he hoped, show divergences from the conventional theory and give clues as to what lies at a deeper level.”

However, Bohm had pretty much the whole community against him. After a particularly hefty criticism by Heisenberg, Bohm changed course and claimed that his theory made the same predictions as quantum mechanics. But it did not help. After this, they just complained that the theory did not make new predictions. And in the end, they just ignored him.

So is Bohmian mechanics in the end just a way of making you feel better about the predictions of quantum mechanics? Depends on whether or not you think the “quantum equilibrium hypothesis” is always fulfilled. If it is always fulfilled, the two theories give the same predictions. But if the equilibrium is actually a state the system must first settle in, as the name certainly suggests, then there might be cases when this assumption is not fulfilled. And then, Bohmian mechanics is really a different theory. Physicists still debate today whether such deviations from quantum equilibrium can happen, and whether we can therefore find out that Bohm was right.

This video was sponsored by Brilliant, which is a website that offers interactive courses on a large variety of topics in science and mathematics. I always try to show you some of the key equations, but if you really want to understand how to use them, then Brilliant is a great starting point. For this video, for example, I would recommend their courses on differential equations, linear algebra, and quantum objects.

To support this channel and learn more about Brilliant, go to Brilliant.org/Sabine and sign up for free. The first 200 subscribers using this link will get 20 percent off the annual premium subscription.



You can join the chats on this week’s topic using the Converseful app in the bottom right corner:

Saturday, October 10, 2020

You don’t have free will, but don’t worry.

Today I want to talk about an issue that must have occurred to everyone who has spent some time thinking about physics, which is that the idea of free will is both incompatible with the laws of nature and entirely meaningless. I know that a lot of people just do not want to believe this. But I think you are here to hear what the science says. So, I will tell you what the science says. In this video I first explain why free will does not exist, indeed makes no sense, and then tell you why there are better things to worry about.


I want to say ahead that there is much discussion about free will in neuroscience, where the question is whether we subconsciously make decisions before we become consciously aware of having made one. I am not a neuroscientist, so this is not what I am concerned with here. I will be talking about free will as the idea that in this present moment, several futures are possible, and your “free will” plays a role in selecting which one of those possible futures becomes reality. This, I think, is how most of us intuitively think of free will, because it agrees with our experience of how the world seems to work. It is not how some philosophers have defined free will, and I will get to this later. But first, let me tell you what’s wrong with this intuitive idea that we can somehow select among possible futures.

Last week, I explained what differential equations are, and that all laws of nature which we currently know work with those differential equations. These laws have the common property that if you have an initial condition at one moment in time, for example the exact details of the particles in your brain and all your brain’s inputs, then you can calculate what happens at any other moment in time from those initial conditions. This means in a nutshell that the whole story of the universe in every single detail was determined already at the big bang. We are just watching it play out.

These deterministic laws of nature apply to you and your brain because you are made of particles, and what happens with you is a consequence of what happens with those particles. A lot of people seem to think this is a philosophical position. They call it “materialism” or “reductionism” and think that giving it a name that ends in -ism is an excuse to not believe it. Well, of course you can insist on just not believing that reductionism is correct. But this is denying scientific evidence. We do not guess, we know that brains are made of particles. And we do not guess, we know, that we can derive from the laws for the constituents what the whole object does. If you make a claim to the contrary, you are contradicting well-established science. I can’t prevent you from denying scientific evidence, but I can tell you that this way you will never understand how the universe really works.

So, the trouble with free will is that according to the laws of nature that we know describe humans on the fundamental level, the future is determined by the present. That the system – in this case, your brain – might be partly chaotic does not make a difference for this conclusion, because chaos is still deterministic. Chaos makes predictions difficult, but the future still follows from the initial condition.

What about quantum mechanics? In quantum mechanics some events are truly random and cannot be predicted. Does this mean that quantum mechanics is where you can find free will? Sorry, but no, this makes no sense. These random events in quantum mechanics are not influenced by you, regardless of exactly what you mean by “you”, because they are not influenced by anything. That’s the whole point of saying they are fundamentally random. Nothing determines their outcome. There is no “will” in this. Not yours and not anybody else’s.

Taken together we therefore have determinism with the occasional, random quantum jump, and no combination of these two types of laws allows for anything resembling this intuitive idea that we can somehow choose which possible future becomes real. The reason this idea of free will turns out to be incompatible with the laws of nature is that it never made sense in the first place. You see, that thing you call “free will” should in some sense allow you to choose what you want. But then it’s either determined by what you want, in which case it’s not free, or it’s not determined, in which case it’s not a will.

Now, some have tried to define free will by the “ability to have done otherwise”. But that’s just empty words. If you did one thing, there is no evidence you could have done something else because, well, you didn’t. Really there is always only your fantasy of having done otherwise.

In summary, the idea that we have a free will which gives us the possibility to select among different futures is both incompatible with the laws of nature and logically incoherent. I should add here that it’s not like I am saying something new. Look at the writing of any philosopher who understands physics, and they will acknowledge this.

But some philosophers insist they want to have something they can call free will, and have therefore tried to redefine it. For example, you may speak of free will if no one was in practice able to predict what you would do. It is certainly presently the case that most human behavior is unpredictable, though I can predict that some people who didn’t actually watch this video will leave a comment saying they had no other choice than leaving their comment and think they are terribly original.

So, yeah, if you want you can redefine “free will” to mean “no one was able to predict your decision.” But of course your decision was still determined or random regardless of whether someone predicted it. Others have tried to argue that free will means some of your decisions are dominated by processes internal to your brain and not by external influences. But of course your decision was still determined or random, regardless of whether it was dominated by internal or external influences. I find it silly to speak of “free will” in these cases.

I also find it unenlightening to have an argument about the use of words. If you want to define free will in such a way that it is still consistent with the laws of nature, that is fine by me, though I will continue to complain that’s just verbal acrobatics. In any case, regardless of how you want to define the word, we still cannot select among several possible futures. This idea makes absolutely no sense if you know anything about physics.

What is really going on if you are making a decision is that your brain is running a calculation, and while it is doing that, you do not know what the outcome of the calculation will be. Because if you did, you wouldn’t have to do the calculation. So, the impression of free will comes from our self-awareness, that we think about what to do, combined with our inability to predict the result of that thinking before we’re done.

I feel like I must add here a word about the claim that human behavior is unpredictable because if someone told you that they predicted you’d do one thing, you could decide to do something else. This is a rubbish argument because it has nothing to do with human behavior; it comes from interfering with the system you are making predictions for. It is easy to see that this argument is nonsense because you can make the same claim about very simple computer codes.

Suppose you have a computer that evaluates whether an equation has a real-valued root. The answer is yes or no. You can predict the answer. But now you can change the algorithm so that if you input the correct answer, the code will output the exact opposite answer, i.e., “yes” if you predicted “no” and “no” if you predicted “yes”. As a consequence, your prediction will never be correct. Clearly, this has nothing to do with free will but with the fact that the system you make a prediction for gets input which the prediction didn’t account for. There’s nothing interesting going on in this argument.
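Here is a toy version of this argument in code. It’s my own illustration, so the details are made up, but any simple program with a yes/no output would do:

```python
# A program whose output flips whenever you feed it a correct prediction.
def has_real_root(a, b, c):
    """Does a*x^2 + b*x + c = 0 have a real-valued root?"""
    return "yes" if b * b - 4 * a * c >= 0 else "no"

def contrarian(a, b, c, prediction):
    # If the prediction matches the honest answer, output the opposite.
    honest = has_real_root(a, b, c)
    if prediction == honest:
        return "no" if honest == "yes" else "yes"
    return honest

print(has_real_root(1, 0, -1))      # "yes"
print(contrarian(1, 0, -1, "yes"))  # "no" -- the prediction always comes out wrong
```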

Another objection that I’ve heard is that I should not say free will does not exist because that would erode people’s moral behavior. The concern is, you see, that if people knew free will does not exist, then they would think it doesn’t matter what they do. This is of course nonsense. If you act in ways that harm other people, then these other people will take steps to prevent that from happening again. This has nothing to do with free will. We are all just running software that is trying to optimize our well-being. If you caused harm, you are responsible, not because you had “free will” but because you embody the problem and locking you up will solve it.

There have been a few research studies that supposedly showed a relation between priming participants to not believe in free will and them behaving immorally. The problem with these studies, if you look at how they were set up, is that people were not primed to not believe in free will. They were primed to think fatalistically. In some cases, for example, it was suggested to them that their genes determine their future, which, needless to say, is only partly correct, regardless of whether you believe in free will. And some more nuanced recent studies have actually shown the opposite. A 2017 study on free will and moral behavior concluded “we observed that disbelief in free will had a positive impact on the morality of decisions toward others”. Please check the information below the video for a reference.

So I hope I have convinced you that free will is nonsense, and that the idea deserves going into the rubbish bin. The reason this has not happened yet, I think, is that people find it difficult to think of themselves in any other way than making decisions drawing on this non-existent “free will.” So what can you do? You don’t need to do anything. Just because free will is an illusion does not mean you are not allowed to use it as a thinking aid. If you lived a happy life so far using your imagined free will, by all means, please keep on doing so.

If it causes you cognitive dissonance to acknowledge you believe in something that doesn’t exist, I suggest that you think of your life as a story which has not yet been told. You are equipped with a thinking apparatus that you use to collect information and act on what you have learned from this. The result of that thinking is determined, but you still have to do the thinking. That’s your task. That’s why you are here. I am curious to see what will come out of your thinking, and you should be curious about it too.

Why am I telling you this? Because I think that people who do not understand that free will is an illusion underestimate how much their decisions are influenced by the information they are exposed to. After watching this video, I hope, some of you will realize that to make the best of your thinking apparatus, you need to understand how it works, and pay more attention to cognitive biases and logical fallacies.



You can join the chat about this week's post using these links:
    Chat #1 - Sunday, October 11 @ 9 AM PST / 12PM EST / 6PM CEST
    Chat #2 - Tuesday, October 13 @ 9 AM PST / 12PM EST / 6PM CEST

Thursday, October 08, 2020

[Guest Post] New on BackRe(action): Real-Time Chat Rooms

[This post is written by Ben Alderoty.]

For those who’ve been keeping tabs, my team and I have been working with Sabine since earlier this year to give commenters on her site more ways to talk. Based on your feedback, we’re launching a new way to make that happen: real-time chat rooms. Here’s how they’ll work.



Chat rooms (chats) live in the bottom right corner of the blog. For the time being, they are only available on Desktop with support for mobile devices to come soon. Unlike traditional, always-available chat rooms, chats on BackRe(action) happen at scheduled times. This ensures people will be there at the same time as you and the conversation can happen in real-time. Chats start at their scheduled times and end when everyone has left.



You’ll see the first couple of chats have already been scheduled when you open the app. The topic for these chats is Sabine’s upcoming post on free will she is releasing on Saturday. If you’re interested in attending, you can set up a reminder by clicking ‘Remind me’ and selecting either Email or Calendar. You can also share links to the chat by clicking the icon next to the chat name. We’ll be trying out different topics and times for chats based on feedback we receive. 


 

The chats themselves happen right here on BackRe(action). You won’t need an account to participate, just a name (real, fake, pseudonym… anything works). Depending on how many people join, the group may be split into separate rooms to allow for better discussion. Chats will remain open for late joiners as long as there’s an active discussion taking place. Spectators are welcome too! All of the messages will disappear when the chat ends, so you’ll have to be there to see what’s said.

As a reminder, the first two chats are happening on:

Chat #1 - Sunday, October 11 @ 9 AM PST / 12PM EST / 6PM CEST
Chat #2 - Tuesday, October 13 @ 9 AM PST / 12PM EST / 6PM CEST

Come to one or come to both! New chats will be up mid-next week for the week after.

So, what do you think? Are you ready for chat rooms on BackRe(action)? What topics do you want to talk about? Let us know what you think in the comments section or in the app via the ‘Give Feedback’ button below the chats.

Saturday, October 03, 2020

What are Differential Equations and how do they work?

[This is a transcript of the video embedded below. Some parts of the text may not make sense without the graphics in the video.]

Today I want to talk about that piece of mathematics which describes, for all we currently know, everything: Differential Equations. Pandemic models? Differential equations. Expansion of the universe? Differential equations. Climate models? Differential equations. Financial markets? Differential equations. Quantum mechanics? Guess what, differential equations.


I find it hard to think of anything that’s more relevant for understanding how the world works than differential equations. Differential equations are the key to making predictions and to finding out what is predictable, from the motion of galaxies to the weather, to human behavior. In this video I will tell you what differential equations are and how they work, give you some simple examples, tell you where they are used in science today, and discuss what they mean for the question whether our future is determined already.

To get an idea for how differential equations work, let us look at a simple example: the spread of a disease through the population. Suppose you have a number of people, let’s call it N, who are infected with a disease. You want to know how N will change in time, so N is a function of t, where t is time. Each of the N people has a certain probability to spread the disease to other people during some period of time. We will quantify this infectiousness by a constant, k. This means that the change in the number of people per time equals that constant k times the number of people who already are infected.

Now, the change of a function per time is the derivative of the function with respect to time. So, this gives you an equation which says that the derivative of the function is proportional to the function itself. And this is a differential equation. A differential equation is more generally an equation for an unknown function which contains derivatives of the function. So, a differential equation must be solved not for a parameter, say x, but for a whole function.

The solution to the differential equation for disease spread is an exponential function, where the probability of infecting someone appears in the exponent, and there is a free constant in front of the exponential, which I called N0. This function will solve the equation for any value of this free constant. If you put in the time t equals zero, then you can see that this constant N0 is simply the number of infected people at the initial time.
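Written out, since the transcript refers to the video’s graphics for the formulas, the equation and its solution read:

$$\frac{dN}{dt} = k\,N, \qquad N(t) = N_0\, e^{k t}.$$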

So, this is why infectious diseases begin by spreading exponentially, because the increase in the number of infected people is proportional to the number of people who are already infected. You are probably wondering now how these constants relate to the basic reproduction number of the disease, the R naught we have all become familiar with. When a disease begins to spread, this constant k in the exponent is (R₀ − 1)/τ, where τ is the time an infected person remains infectious.

So, R naught can be interpreted as the average number of people someone infects. Of course in reality diseases do not continue spreading exponentially, because eventually everyone is either immune or dead and there’s no one left to infect. To get a more realistic model for disease spread, one would have to take into account that the number of susceptible people begins to decrease as the infection spreads. But this is not a video about pandemic models, so let us instead get back to differential equations.

Another simple example for a differential equation is one you almost certainly know, Newton’s second law, F equals m times a. Let us just take the case where the force is a constant. This could describe, for example, the gravitational force near the surface of the earth, in a range so small you can neglect that the force is actually a function of the distance from the center of Earth. The equation is then just a equals F over m, which I will rename to small g, and this is a constant. a is the acceleration, so the second time-derivative of position. Physicists typically denote the position with x, and a derivative with respect to time with a dot, so that is double-dot x equals g. And that’s a differential equation for the function x of t.

For simplicity, let us take x to be just the vertical direction. The solution to this equation is then x(t) = gt²/2 + vt + x₀, where v and x₀ are constants. If you take the first derivative of this function, you get g times t plus v, and another derivative gives just g. And that’s regardless of what the two constants were.

These two new constants in this solution, v and x₀, can easily be interpreted by looking at the time t = 0. x₀ is the position of the particle at time t = 0, and, if we look at the derivative of the function, we see that v is the velocity of the particle at t = 0. If you take an initial velocity that’s pointed up, the curve for the position as a function of time is a parabola, telling you the particle goes up and comes back down. You already knew that, of course. The relevant point for our purposes is that, again, you do not get one function as a solution to the equation, but a whole family of functions, one for each possible choice of the constants.
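If you want to check the derivatives without pen and paper, here is a quick verification sketch with sympy. This is my own addition, not from the video:

```python
# Verify that x(t) = g*t**2/2 + v*t + x0 has x'(0) = v, x''(t) = g, x(0) = x0.
import sympy as sp

t, g, v, x0 = sp.symbols("t g v x0")
x = g * t**2 / 2 + v * t + x0

print(sp.diff(x, t))     # g*t + v  -> equals v at t = 0
print(sp.diff(x, t, 2))  # g        -> the constant acceleration
print(x.subs(t, 0))      # x0       -> the initial position
```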

Physicists call these free constants which appear in the possible solutions to a differential equation “initial values”. You need such initial values to pick the solution of the differential equation which fits to the system you want to describe. The reason we have two initial values for Newton’s law is that the highest order of derivative in the differential equation is two. Roughly speaking, you need one initial value per order of derivative. In the first example of disease growth, if you remember, we had one derivative and correspondingly only one initial value.

Now, Newton’s second law is not exactly frontier research, but the thing is that all theories we use in the foundations of physics today are of this type. They are given by differential equations, which have a large number of possible solutions. Then we insert initial values to identify the solution that actually describes what we observe.

Physicists use differential equations for everything, for stars, for atoms, for gases and fluids, for electromagnetic radiation, for the size of the universe, and so on. And these differential equations always work the same. You solve the equation, insert your initial values, and then you know what happens at any other moment in time.

I should add here that the “initial values” do not necessarily have to be at an initial time from which you make predictions for later times. The terminology is somewhat confusing, but you can also choose initial values at a final time and make predictions for times before that. This is for example what we do in cosmology. We know how the universe looks today, those are our “initial” values, and then we run the equations backwards in time to find out what the universe must have looked like earlier.

These differential equations are what we call “deterministic”. If I tell you how many people are ill today, you can calculate how many will be ill next week. If I tell you where I throw a particle with what initial velocity, you can tell me where it comes down. If I tell you what the universe looks like today, and you have the right differential equation, you can calculate what happens at every other moment of time. The consequence is that, according to the natural laws that physicists have found so far, the future is entirely fixed already; indeed, it was fixed already when the universe began.

This was pointed out first by Pierre Simon Laplace in 1814 who wrote:
“We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.”

This “intellect” Laplace is referring to is now sometimes called “Laplace’s demon”. But physics didn’t end with Laplace. After Laplace wrote those words, Poincaré realized that even deterministic systems can become unpredictable for all practical purposes because they are “chaotic”. I talked about this in my earlier video about the Butterfly effect. And then, in the 20th century, along came quantum mechanics. Quantum mechanics is a peculiar theory because it does not only use differential equations. Quantum mechanics uses another equation in addition to the differential equation. The additional equation describes what happens in a measurement. This is the so-called measurement update, and it is not deterministic.

What does this mean for the question whether we have free will? That’s what we will talk about next week, so stay tuned.

Saturday, September 26, 2020

Understanding Quantum Mechanics #6: It’s not just a theory for small things.

[This is a transcript of the video embedded below. Some parts of the text may not make sense without the graphics in the video.]

One of the most common misunderstandings about quantum mechanics that I encounter is that quantum mechanics is about small things and short distances. It’s about atomic spectral lines, electrons going through double slits, nuclear decay, and so on. There’s a realm of big things where stuff behaves like we’re used to, and then there’s a realm of small things, where quantum weirdness happens. It’s an understandable misunderstanding because we do not experience quantum effects in daily life. But it’s wrong and in this video I will explain why. Quantum mechanics applies to everything, regardless of size.

The best example of a big quantum thing is the sun. The sun shines thanks to nuclear fusion, which relies on quantum tunneling. You have to fuse two nuclei together even though they repel each other because they are both positively charged. Without tunneling, this would not work. And the sun certainly is not small.

Ah, you may say, that doesn’t count because the fusion itself only happens on short distances. It’s just that the sun contains a lot of matter so it’s big.

Ok. Here is another example. All that matter around you, air, walls, table, what have you, is only there because of quantum mechanics. Without quantum mechanics, atoms would not exist. Indeed, this was one of the major reasons for the invention of quantum mechanics in the first place.

You see, without quantum mechanics, an electron circling around the atomic nucleus would emit electromagnetic radiation, lose energy, and fall into the nucleus very quickly. So, atoms would be unstable. Quantum mechanics explains why this does not happen. It’s because the electrons are not particles that are localized at a specific point; they are instead described by wave-functions which merely tell you the probability for the electron to be at a particular point. And for atoms this probability distribution is focused on shells around the nucleus. These shells correspond to different energy levels and are also called the “orbitals” of the electron, but I find that somewhat misleading. It’s not like the electron is actually orbiting as in going around in a loop.

I get frequently asked why this is not a problem for the orbits of planets in the solar system. Why don’t the planets emit radiation and fall into the sun? The answer is: They do! But in the case of the solar system, the force which acts is not the electromagnetic force, as in the case of the atom, but the gravitational force. Correspondingly, the radiation that’s emitted when planets go around the sun is not electromagnetic radiation, but gravitational radiation, which means gravitational waves. These carry away energy. And this indeed causes planets to lose energy which gradually shrinks the radius of their orbits.

However, the gravitational force is much, much weaker than the electromagnetic force, so this effect is extremely small and it does not noticeably affect planetary orbits. The effect can become large enough to be observable if you have a system of two stars that circle each other at short distance. In this case the energy loss from gravitational radiation will cause the stars to spiral into each other. Indeed, this is how gravitational waves were first indirectly confirmed, for which a Nobel Prize was handed out in 1993.

But this brings up another question, doesn’t it? Why aren’t the orbits of planets quantized like the orbits of electrons around the atomic nucleus? Again the answer is: they are! It’s just that for such large objects the shells are so close together that the gaps between them are unmeasurably small and the wave-function of the planets is very well localized. So it is an excellent approximation to treat the planets as balls – or indeed points – moving on curves. For the electron in an atom, on the other hand, this approximation is terribly bad.

So, all the matter around us is evidence that quantum mechanics works because it’s necessary to make atoms stable. Does that finally convince you that quantum mechanics isn’t just about small things? Ah, you may say, but all this normal matter does not look like a quantum thing.

Well, then how about lasers? Lasers work by pumping energy into a crystal or gas so that the electrons mostly populate unstable energy levels. This is called “population inversion.” If one of the electrons drops down to a stable state, it emits a photon, which causes another electron to drop, and so on. This process is called “stimulated emission”. Lasers then amplify this signal by putting mirrors around the crystal or gas. The light that is emitted in this way is coherent and very strongly focused. And that’s thanks to quantum mechanics, because if the atomic energy levels were not quantized, this would not work.

Nah, you say, this still doesn’t count because it is not weird. Isn’t quantum theory supposed to be weird?

Ok, so you want weird. Enter Zeilinger. Anton Zeilinger is famous for, well, for many things actually. He’s been on the hotlist for a Nobel Prize for some while. But one of his most famous experiments is showing that entanglement between photons persists for more than one-hundred kilometers. Zeilinger and his group did this experiment between two of the Canary Islands in 2008. They produced pairs of entangled photons on La Palma, sent one of each pair to Tenerife, which is one-hundred-forty-four kilometers away, and let the other photon do circles in an optical fibre on La Palma. When they measured the polarization on both photons, they could unambiguously demonstrate that they were still entangled.

So, quantum mechanics is most definitely not a theory for short distances. It’s just that the weird stuff that’s typical for quantum mechanics – entanglement and quantum uncertainty and the ability of particles to act like waves – is under normal circumstances really, really tiny for big and warm objects. I am here using the words “big” and “warm” the way physicists do, so “warm” means anything more than a few degrees above absolute zero and “big” means anything exceeding the size of a molecule. As I explained in the previous video in this series, it’s decoherence that ruins quantum effects for big and warm objects just because they frequently interact with other things, air or radiation.

But if you control the environment of an object very closely, if you keep it cool and in an ultra-high vacuum, you can slow down decoherence. This way, physicists have been able to demonstrate quantum behavior for big molecules. The record holder is presently a molecule made of about 2000 atoms or about 40,000 protons, neutrons and electrons.

An entirely different type of “large” quantum state is the Bose-Einstein condensate. These are clouds of atoms cooled to very low temperature, where they combine to one coherent state that has quantum effects throughout. For Bose-Einstein condensates, the record is presently at a few hundred million atoms.

Now, you may still think that’s small, and I can’t blame you for it. But the relevant point is that there is no limit in size or weight or distance where quantum effects suddenly stop. In principle, everything has quantum effects, even you. It’s just that those effects are so small you don’t notice.

This video was brought to you by Brilliant, which is a website on which you can take interactive courses on a large variety of topics in science and mathematics, including quantum mechanics. Brilliant has courses covering both the mathematical basis of quantum mechanics, as well as quantum objects, quantum computing, quantum logics, and many of the key experiments in quantum mechanics. I have spent some time browsing the courses offered by Brilliant, and I think they are a great starting point if you want to really understand what I explained in this video.

To support my YouTube channel and learn more about Brilliant, go to brilliant.org/Sabine, and sign up for free. The first two-hundred people who go to that link will get 20 percent off the annual Premium subscription.

Wednesday, September 23, 2020

Follow the Science? Nonsense, I say.

Today I want to tell you why I had to stop reading news about climate science. Because it pisses me off. Every. Single. Time.



There’s all these left-wing do-gooders who think their readers are too fucking dumb to draw their own conclusions, so it’s not enough to tell me the correlation between hurricane intensity and air moisture, no, they also have to tell me that, therefore, I should donate to save the polar bears. There’s this implied link: Science says this, therefore you should do that. Follow the science, stop flying. Follow the science, go vegan. Follow the science and glue yourself to a bus, because certainly that’s the logical conclusion to draw from the observed weakening of the Atlantic meridional circulation.

When I was your age, we learned science does not say anything about what we should do. What we should do is a matter of opinion; science is a matter of fact.

Science tells us what situation we are in and what consequences our actions are likely to have, but it does not tell us what to do. Science does not say you shouldn’t pee on high voltage lines, it says urine is an excellent conductor. Science does not say you should stop smoking, science says nicotine narrows arteries, so if you smoke you’ll probably die young lacking a few toes. Science does not say we should cut carbon dioxide emissions. It says if we don’t, then by the end of the century estimated damages will exceed some trillion US dollars. Is that what we should go for? Well, that’s a matter of opinion.

Follow the Science is a complete rubbish idea, because science does not know the direction. We have to decide what way to go.

You’d think it’s bad enough that politicians conflate scientific fact with opinion, but the media actually make it worse. They make it worse by giving their audience the impression that it matters what someone whose job it is to execute the will of the electorate believes about scientific facts. But I couldn’t care less if Donald Trump “believes” in climate change. Look, this is a man who can’t tell herd immunity from herd mentality, he probably thinks winter’s the same as an ice age. It’s not his job to offer opinions about science he clearly doesn’t understand, so why do you keep asking him? His job is to say: if the situation is this, we will do that. At least in principle, that’s what he should be doing. Then you look up what science says which situation we are in and act accordingly.

The problem, the problem, you see, is that by conflating the two things – the facts with the opinions – the media give people an excuse to hide opinions behind scientific beliefs. If you don’t give a shit that today’s teenagers will struggle their whole life cleaning up the mess that your generation left behind, fine, that’s a totally valid opinion. But please just say it out loud, so we can all hear it. Don’t cover it up by telling us a story about how you weren’t able to reproduce a figure in the IPCC report even though you tried really hard for almost ten seconds, because no one gives a shit whether you have your own “theory.”

If you are more bothered by the prospect of rising gasoline prices than by rising sea levels because you don’t know anyone who lives by the sea anyway, then just say so. If you worry more about the pension for your friend the coal miner than about drought and famine in the developing world because after all there’s only poor people in the developing world, then just say so. If you don’t give a shit about a global recession caused by natural catastrophes that eat up billion after billion because you’re a rich white guy with a big house and think you’re immune to trouble, then just say so. Say it loud, so we can all hear it.

And all the rest of you, stop chanting that we need to “follow the science”. People who oppose action on climate change are not anti-science, they simply worry more that a wind farm might ruin the view from their summer vacation house than that wildfires will burn down the house. That’s not anti-scientific, that’s just dumb. But then that’s only my opinion.

Saturday, September 19, 2020

What is quantum cryptography and how does it work?

[This is a transcript of the video embedded below. Some parts of the text may not make sense without the graphics in the video.]

If you punch your credit card number into a website and hit “submit”, I bet you don’t want to have twenty fraudulent charges on your bank account a week later. This is why all serious online retailers use encryption protocols. In this video, I want to tell you how quantum mechanics can help us keep secrets safe.


Before I get to quantum cryptography, I briefly have to tell you how the normal, non-quantum cryptography works, the one that most of the internet uses today. If you know this already, you can use the YouTube tool bar to jump to the next chapter.

The cryptographic codes that are presently being used online are for the most part public key systems. The word “key” refers to the method that you use to encrypt a message. It’s basically an algorithm that converts readable text or data into a mess, but it creates this mess in a predictable way, so that the messing up can be undone. If the key is public, this means everybody knows how to encrypt a message, but only the recipient knows how to decrypt it.

This may sound somewhat perplexing, because if the key is public and everybody knows how to scramble up a message, then it seems everybody also knows how to unscramble it. It does not sound very secure. But the clever part of public key cryptography is that to encode the message you use a method that is easy to do, but hard to undo.

You can think of this as if the website you are buying from gives you, not a key, but an empty treasure chest that locks when you close it. You take the chest. Put in your credit card number, close it. And now the only person who can open it, is the one who knows how to unlock it. So your message is safe to send. In practice that treasure chest is locked by a mathematical problem that is easy to pose but really hard to solve.

There are various mathematical problems that can be, and are being, used in cryptographic protocols for locking the treasure chest. The best known one is the factorization of a large number into primes. This method is used by the algorithm known as RSA, after its inventors Rivest (i as in kit), Shamir, and Adleman. The idea behind RSA is that if you have two large prime numbers, it is easy to multiply them. But if you only have the product of the two primes, then it is very difficult to find out what its prime factors are.

For RSA, the public key, the one that locks the treasure chest, is a number that is derived from the product of the primes, but does not contain the prime factors themselves. You can therefore use the public key to encode a message, but to decode it, you need the prime factors, which only the recipient of your message has, for example the retailer to whom you are sending your credit card information.

Now, this public key can be broken, in principle, because we do know algorithms to decompose numbers into their prime factors. But for large numbers, these algorithms take very, very long to give you a result, even on the world’s presently most powerful computers. So, maybe that key you are using can be broken, given a hundred thousand years of computation time. But really, who cares. For all practical purposes, these keys are safe.
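To make this concrete, here is a toy RSA sketch in Python, with textbook-sized primes I’ve picked so the arithmetic is easy to follow. Real keys use primes hundreds of digits long; only at that size does factoring the product become practically impossible.

```python
# Toy RSA (illustration only -- the primes are laughably small).
from math import gcd

p, q = 61, 53                 # the two secret primes
n = p * q                     # 3233, published as part of the public key
phi = (p - 1) * (q - 1)       # 3120, kept secret, needed to derive d

e = 17                        # public exponent, must share no factor with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent (Python 3.8+), here 2753

message = 42
ciphertext = pow(message, e, n)    # anyone can lock the chest with (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can unlock it
assert recovered == message
```

Breaking this key means recovering p and q from n, which is trivial for 3233 but hopeless for the thousands-of-bits products used in practice.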

But here’s the thing. Whether or not someone can break one of these public keys depends on how quickly they can solve the mathematical problem behind it. And quantum computers can vastly speed up some such computations – Shor’s algorithm, for example, finds prime factors efficiently. You can see the problem: Quantum computers can break cryptographic protocols, such as RSA, in a short time. And that is a big security risk.

I explained in a previous video what quantum computers are and what to expect from them, so check this out if you want to know more. But just how quantum computers work doesn’t matter so much here. It only matters that you know: if you had a powerful quantum computer, it could break some public key cryptosystems that are currently widely being used, and it could do that quickly.

This is a problem which does not only affect your credit card number but really everything from trade to national security. Now, we are nowhere near having a quantum computer that could actually do such a computation. But the risk that one could be built in the next decades is high enough so that computer scientists and physicists have thought of ways to make public key cryptography more secure.

They have come up with various cryptographic protocols that cannot be broken by quantum computers. This is possible by using protocols which rely on mathematical problems for which a quantum computer does not bring an advantage. This cryptography, which is safe from quantum computers, is called “post-quantum cryptography” or, sometimes, “quantum resistant cryptography”.

Post-quantum cryptographic protocols do not themselves use quantum effects. They have the word “quantum” in their name merely to say that they cannot be broken even with quantum computers. At least according to present knowledge. This situation can change because it’s possible that in the future someone will find a way to use a quantum computer to break a code currently considered unbreakable. However, at least at the moment, some cryptographic protocols exist for which no one knows how a quantum computer could break them.

So, computer scientists have ways to keep the internet safe, even if someone, somewhere develops a powerful quantum computer. Indeed, most nations already have plans to switch to post-quantum cryptography in the coming decade, if not sooner.

Let us then come to quantum cryptography, and its application for “quantum key distribution”. Quantum key distribution is a method for two parties to securely share a key that they can then use to encode messages. And quantum physics is what helps keep the key safe. To explain how this works, I will again just use the simplest example, that’s a protocol known as BB Eighty-four, after the authors Bennett and Brassard and the year of publication.

When physicists talk about information transfer, they like to give names to senders and receivers. Usually they are called Alice and Bob, so that’s what I will call them too. Alice wants to send a secret key to Bob so they can then have a little chat, but she does not want Bob’s wife, Eve, to know what they’re talking about. In the literature, this third party is normally called “Eve” because she is “eavesdropping”, hahaha, physics humor.

So, Alice creates a random sequence of particles that can have spin either up or down. She measures the spin of each particle and then sends it to Bob who also measures the spin. Each time they measure spin up, they note down a zero, and each time they measure spin down, they note down a one. This way, they get a randomly created, shared sequence of bits, which they can use to encode messages.

But this is no good. The problem is, this key can easily be intercepted by Eve. She could catch the particle meant for Bob in midflight, measure it, note down the number, and then pass it on to Bob. That’s a recipe for disaster.

So, Alice picks up her physics textbooks and makes the sequence of particles that she sends to Bob more complicated.

Saying the spin is “up” or “down” presumes a direction, so Alice has to choose a direction along which to create the spin. Bob has to know this direction to make his measurement, because spin measurements along different directions obey an uncertainty relation. It is here that quantum mechanics becomes important. If you measure the spin along one direction, then a measurement along a perpendicular direction is maximally uncertain. For a binary variable like the spin, this just means the measurements in two orthogonal directions are uncorrelated. If Alice sends a particle that has spin up or down, but Bob mistakenly measures the spin in the horizontal direction, he just gets left or right with fifty percent probability.

Now, what Alice does is to randomly choose whether the particles’ spin goes in the up-down or left-right direction. As before, she sends the particles to Bob, but – and here is the important bit – does not tell him whether the particle was created in the up-down or left-right direction. Since Bob does not know the direction, he randomly picks one for his measurement. If he happens to pick the same direction that Alice used to create the particle, then he gets, as previously, a perfectly correlated result. But if he picks the wrong one, he gets a completely uncorrelated result.

After they have done that, Alice sends Bob information about which directions she used. For that, she can use an unencrypted channel. Once Bob knows that, he discards the measurements where he picked the wrong setting. The remaining measurements are then correlated, and that’s the secret key.

What happens now if Eve tries to intercept the key that Alice sends? Here’s the thing: She cannot do that without Bob and Alice noticing. That’s because she does not know either which direction Alice used to create the particles. If Eve measures in the wrong direction – say, left-right instead of up-down – she changes the spin of the particle, but she has no way of knowing whether that happened or not.

If she then passes on her measurement result to Bob, and it’s a case where Bob did pick the correct setting, then his measurement result will no longer be perfectly correlated with Alice’s when it should be. So, what Alice and Bob do is compare some part of the sequence they have shared – again, they can do that using an unencrypted channel – and check whether their measurements were indeed correlated when they should have been. If that’s not the case, they know someone tried to intercept the message. This is what makes the key safe.
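If you want to see these statistics come out, here is a small classical simulation of the BB84 bookkeeping in Python – a sketch of my own for illustration, not any standard implementation. It only mimics the measurement statistics: matching directions give perfectly correlated bits, perpendicular ones give a coin toss. With Eve intercepting and resending every particle, about a quarter of the sifted key disagrees, and that’s how Alice and Bob catch her.

```python
# BB84 statistics sketch: "U" is the up-down direction, "L" left-right.
import random

def measure(bit, prep_basis, meas_basis):
    # Same direction: perfectly correlated. Perpendicular direction:
    # maximally uncertain, so the outcome is a fair coin toss.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

n = 10_000
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("UL") for _ in range(n)]
bob_bases   = [random.choice("UL") for _ in range(n)]

eve_listens = True
bob_bits = []
for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
    if eve_listens:
        e_basis = random.choice("UL")         # Eve guesses a direction,
        bit = measure(bit, a_basis, e_basis)  # measures the particle,
        a_basis = e_basis                     # and resends it in her basis.
    bob_bits.append(measure(bit, a_basis, b_basis))

# Sifting: keep only the rounds where Alice's and Bob's bases agree.
sifted = [(a, b) for a, b, x, y in
          zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
error_rate = sum(a != b for a, b in sifted) / len(sifted)
print(f"error rate in sifted key: {error_rate:.1%}")  # ~0% without Eve, ~25% with her
```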

The deeper reason this works is that in quantum mechanics it is impossible to copy an arbitrary, unknown quantum state – any attempt to measure it disturbs it. This is known as the no-cloning theorem, and it is ultimately why Eve cannot listen in without Bob and Alice finding out.

So, quantum key distribution is a secure way to exchange a secret key, which can be done either through optical fiber or just free space. Quantum key distribution actually already exists and is being used commercially, though it is not in widespread use. However, in this case the encoded message itself is still sent through a classical channel without quantum effects.

Quantum key distribution is an example of quantum cryptography, but quantum cryptography also more generally refers to using quantum effects to encode messages, not just to exchange keys. But this more general quantum cryptography so far exists only theoretically.

So, to summarize: “Post quantum cryptography” refers to non-quantum cryptography that cannot be broken with a quantum computer. It exists and is in the process of becoming widely adopted. “Quantum key distribution” exploits quantum effects to share a key that is secure from eavesdropping. It does already exist though it is not widely used. “Quantum cryptography” beyond quantum key distribution would use quantum effects to actually share messages. The theory exists but it has not been realized technologically.

I want to thank Scott Aaronson for fact-checking parts of this transcript, Tim Palmer for trying to fix my broken English even though it’s futile, and all of you for watching. See you next week.

Saturday, September 12, 2020

Path Dependence and Tipping Points

[This is a transcript for the video embedded below. Part of the text may not make sense without the graphics in the video.]



Most of the physics we learn about in school is, let’s be honest, a little dull. It’s balls rolling down slopes, resistance proportional to voltage, pendula going back and forth and back and forth... wait don’t fall asleep, that’s what I will not talk about today. Today I will talk about weird things that can happen in physics: path dependence and tipping points.

I want to start with chocolate. What’s chocolate got to do with physics? Chocolate is a crystal. No, really. A complicated crystal, alright, but a crystal, and a truly fascinating one. If you buy chocolate in a store you get it in this neat smooth and shiny form. It melts at a temperature between thirty-three and thirty-four degrees Celsius, or about ninety-two degrees Fahrenheit. That’s just below body temperature, so the chocolate will melt if you stuff it into your mouth but not too much earlier. Exactly what you want.

But suppose your chocolate melts for some other reason, maybe you left it sitting in the sun, or you totally accidentally held a hair dryer above it. Now you have a mush. The physicist would say the crystal has undergone a phase transition from solid to liquid. But no problem, you think, you will just put it into the fridge. And sure enough, as you lower the temperature, the chocolate undergoes another phase transition and turns back into a solid.

Here’s the interesting thing. The chocolate now looks different. It’s not only that it has lost some of its original shape, it actually has a different structure now. It’s not as smooth and shiny as it previously was. Even weirder, it now melts more easily! The melting point has dropped from about thirty-four to something like twenty-eight degrees Celsius. What the heck is going on?

What happens is that if the chocolate melts and becomes solid again, it does not form the same crystal structure that it had before. Instead, it ends up in a mixture of other crystal structures. If you want to get the crystal structure that chocolate is normally sold in, you have to cool it down very carefully and add seeds for the structure you want to get. This process is called “tempering”. The crystal structure which you get with tempering, the one that you normally buy, is actually unstable. Even if you do not let it melt, it will decay after some time. This is why chocolate gets “old” and then has this white stuff on the surface. Depending on what chocolate you have, the white stuff is sugar or fat or both, and it tells you that the crystal structure is decaying.

For our purposes the relevant point is that the chocolate can be in different states at the same temperature, depending on how you got there. In physics, we call this a “path dependence” of the state of the system. It normally means that the system has several different states of equilibrium. An equilibrium state is simply one that does not change in time. Though, as in the case of chocolate, these states may merely be long-lived and not actually be eternally stable.

Chocolate is not exactly the example physicists normally use for path dependence. The go-to example for physicists is the magnetization of a ferromagnet. A ferromagnet is a metal that can be permanently magnetized. It’s what normal people call a “magnet”, period. The reason ferromagnets can be magnetized is that the electron shell structure means the atoms in the metal are tiny little magnets themselves. And these tiny magnets like to align their orientation with that of their neighbors.

Now, if you find a ferromagnetic metal somewhere out in the field, then its atomic magnets are almost certainly disordered and look somewhat like this. To make the illustration simpler, I will pretend that the atomic magnets can point in only one of two directions. If the little magnets are randomly pointing into one of these directions, then the metal has no overall magnetization.

If you apply a magnetic field to this metal, then the atoms will begin to align with the field because that’s energetically the most favorable state. At some point they’re just all aligned in the same direction, and the magnetization of the metal saturates. If you now turn off the magnetic field, some of those atoms will switch back again just because there’s some thermal motion and so on. However, at room temperature, the metal will keep most of the magnetization. That’s what makes ferromagnets special.

If you turn on the external magnetic field again, but increase its strength in the other direction, then the atomic magnets will begin to line up pointing in that other direction until the magnetization saturates. If you turn the field back down to zero, again most of them will continue to point there. Turn the external field back to the other side and you go back to saturating the magnetization in the first direction.

We can plot this behavior of the magnet in a graph that shows the external magnetic field and the resulting magnetization of the magnet. We started from zero, zero, saturated the magnetization pointing right, turned the external field to zero, but kept most of the magnetization. Saturated the magnetization pointing left, turned the field back to zero but kept most of the magnetization. And saturated the magnetization again to the right.

This is what is called the “hysteresis loop”. Hysteresis means the same as “path dependence”. Whether the magnetization of the metal points into one direction or the other does not merely depend on the external field. It also depends on how you got to that value of the field. In particular, if the external field is zero, the magnet has two different, stable, equilibrium states.
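A hysteresis loop is easy to produce in a few lines of code. Here is a minimal sketch in Python of a mean-field toy model – my choice for illustration, not a simulation of any real magnet: each atomic magnet feels the external field plus the average magnetization of its neighbors, and we sweep the field up and then back down.

```python
import math

def settle(m, h, coupling=1.0, temp=0.5, steps=500):
    # Relax the magnetization at fixed external field h, starting from
    # wherever the system currently is -- that is where the memory lives.
    for _ in range(steps):
        m = math.tanh((coupling * m + h) / temp)
    return m

fields = [i / 50.0 for i in range(-50, 51)]   # external field from -1 to +1

m, up, down = 0.0, {}, {}
for h in fields:              # sweep the field up...
    m = settle(m, h)
    up[h] = m
for h in reversed(fields):    # ...and back down again
    m = settle(m, h)
    down[h] = m

# At zero external field the magnetization depends on the path taken:
print(up[0.0], down[0.0])     # roughly -0.96 going up, +0.96 coming down
```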

This path-dependence is also why magnets can be used to store information. Path-dependence basically means that the system has a memory.

Path-dependence sounds like a really peculiar physics-y thing but really it’s everywhere. Just to illustrate this I have squeezed myself into this T-shirt from my daughter. See, it has two stable equilibrium states. And they keep a memory of how you got there. That’s a path-dependence too.

Another common example of path dependence is the air conditioning unit. To avoid a lot of switching on and off, air conditioners are usually configured so that if you input a certain target temperature, they will begin to cool if the temperature rises more than a degree above the target temperature, but will stop cooling only once the temperature has dropped to a degree below the target temperature. So whether or not the air conditioner is running at the target temperature depends on how you got to that temperature. That’s a path-dependence, too.
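In code, this deadband logic takes only a few lines. A minimal sketch in Python, with a hypothetical one-degree margin I’ve picked for illustration:

```python
def update_cooling(temp, target, cooling_on):
    # Switch on only above target + 1, switch off only below target - 1.
    if temp > target + 1:
        return True
    if temp < target - 1:
        return False
    return cooling_on   # inside the deadband: keep the previous state

# At exactly the target temperature, the state depends on the history:
print(update_cooling(21.0, 21.0, cooling_on=True))    # True: it was cooling
print(update_cooling(21.0, 21.0, cooling_on=False))   # False: it was idle
```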

A common property of path-dependent systems is that they have multiple stable equilibrium states. As a reminder, equilibrium merely means it does not change in time. In some cases, a system can very suddenly switch between different equilibrium states. Like this parasol. It has a heavy weight at the bottom, so if the wind sways it a little, it will stay upright. That’s an equilibrium state. But if the wind blows too hard, it will suddenly tip over. Also an equilibrium state – and a much more stable one. Even if the wind now blows in the other direction, the system is not going back to the first state.

Such a sudden transition between two equilibrium states is called a “tipping point”. You have probably heard the word “tipping point” in the context of climate models, where they are a particular pain. I say “pain” because by their very nature they are really hard to predict with mathematical modeling, exactly because there are so many path-dependencies in the system. A glacier that melts off at a certain level of carbon dioxide will not climb back onto the mountain if carbon dioxide levels fall. And that’s one of the better understood path-dependencies.

A much discussed tipping point in climate models is the Atlantic meridional overturning circulation. That’s a water cycle in the Atlantic Ocean. Warm surface water from the equator flows north. Along the way it cools and partly evaporates, which increases the concentration of salt in the water and makes the water heavy. The cool, salty water sinks down to the bottom of the ocean, comes back up where it came from, warms, and the cycle repeats. Why does it come back up in the same place? Well, if some water sinks down somewhere, then some water has to come up elsewhere. And a cycle is a stable configuration, so once the system settles in the cycle, it just continues cycling.

But. This particular cycle is not the only equilibrium configuration and the system does not have to stay there. In fact, there’s a high risk this water cycle is going to be interrupted if global temperatures continue to rise.

That’s because ice in the Arctic is mostly fresh water. If it melts in large amounts, as it presently does, this reduces the salt content of the water. This can prevent the water in the Atlantic overturning circulation from sinking down and thereby shut off the cycle.

Now, this circulation is responsible for much of the warm wind that Europe gets. Did you ever look at a world map and notice that the UK and much of central Europe are north of Montreal? Why is the climate in these two places so dramatically different? Well, that Atlantic overturning circulation is one of the major reasons. If it shuts off, we’re going to see a lot of climate changes very suddenly. Aaaand it’s a path-dependent system. Reducing carbon dioxide after we’ve crossed that tipping point will not just turn the circulation back on. And some evidence suggests that this cycle is weakening already.

There are many other tipping points in climate models that, once crossed, can bring sudden changes that will stay with us for thousands of years, even if we bring carbon dioxide levels back down. Like the collapse of the Greenland and West Antarctic ice sheets. If warming continues, the question is not whether it will happen but just when. I don’t want to go through this whole list, I just want to make clear that tipping points are not fear mongering. They are a very real risk that should not be dismissed easily.

I felt it was necessary to spell this out because I recently read an article by Michael Shellenberger who wrote: “Speculations about tipping points are unscientific because levels of uncertainty and complexity are too high, which is exactly why IPCC does not take such scenarios seriously.”

This is complete rubbish. First, tipping points are covered in the IPCC report; it’s just that they are not collected in a chapter called “tipping points,” they are called “large scale singular events”. I found this out by googling “tipping points IPCC”, so it’s not like it would have taken Shellenberger much of an effort to get this right. Here is a figure from the Summary for Policymakers about the weakening of the Atlantic overturning circulation, that’s the tipping point that we just talked about. And here they are going on about the collapse of ice sheets, another tipping point.

Having said that, tipping points are not emphasized much by the IPCC, but that’s not because they do not take them seriously, but because the existing climate models simply are not good enough to make reliable predictions for exactly when and how tipping points will be crossed. That does not mean tipping points are unscientific. Just because no one can presently put a number to the risk posed by tipping points does not mean the risk does not exist. It does mean, however, that we need better climate models.

Path-dependence and tipping points are cases where naïve extrapolations can badly fail and they are common occurrences in non-linear systems, like the global climate. Just because we’ve been coping okay with climate change so far does not mean it will remain that way.

I want to thank Michael Mann for checking parts of this transcript.

Saturday, September 05, 2020

What is a singular limit?

Imagine you bite into an apple and find a beheaded worm. Eeeh. But it could have been worse. If you had found only half a worm in the apple, you’d now have the other half in your mouth. And a quarter of a worm in the apple would be even worse. Or a hundredth. Or a thousandth. If we extrapolate this, we find that the worst apple ever is one without a worm.

Eh, no, this can’t be right, can it? What went wrong?

I borrowed the story of the wormy apple from Michael Berry, who has used it to illustrate a “singular limit”. In this video, I will explain what a singular limit is and what we can learn from it.


A singular limit is also sometimes called a “discontinuous limit” and it means that if some variable gets closer to a certain point, you do not get a good approximation for the value of a function at this point. In the case of the apple, the variable is the length of the worm that remains in the apple, and the point you are approaching is a worm-length of zero. The function is what you could call the yuckiness of the apple. The yuckiness increases the less worm is left in the apple, but then it suddenly jumps to totally okay. This is a discontinuity, or a singular limit.

You can simulate such a function on your smartphone easily if you punch in a positive number smaller than one and square it repeatedly. This will always give zero, eventually, regardless of how close your original number was to 1. But if you start from 1 exactly, you will stay at 1. So, if you define a function from the limit of squaring a number infinitely often – that would be f(x) equals the limit, as n goes to infinity, of x^(2^n), where n is a natural number – then this function makes a sudden jump at x equal to 1.
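If you’d rather spare your smartphone, the same experiment takes a few lines of Python; this is just the repeated squaring spelled out:

```python
def square_repeatedly(x, n=60):
    # computes x**(2**n) by squaring x a total of n times
    for _ in range(n):
        x = x * x
    return x

print(square_repeatedly(0.9999999))  # 0.0 -- underflows, however close to 1 we start
print(square_repeatedly(1.0))        # 1.0 -- the limit function jumps at exactly 1
```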

This is a fairly obvious example, but singular limits are not always easy to spot. Here is an example from John Baez that will blow your mind, trust me, even if you are used to weird math. Look at this integral. Looks like a pretty innocent integral over the positive, real numbers. You are integrating the function sin(t) over t, and the result turns out to be π/2. Nothing funny going on.

You can make this integral a little more complicated by multiplying the function you are integrating with another function. This other function is just the same function as previously, except that it divides the integration variable by 101. If you integrate the product of these two functions, it comes out to be π/2 again. You can multiply these two functions by a third function in which you divide the integration variable by 201. The result is π/2 again. And so on.

We can write these integrals in a nicely closed form because one hundred times zero plus one is just one, so the first factor, sin(t)/t, fits the same pattern. So, for an arbitrary number of factors, which we can call N, you get an integral over this product. And you can keep on evaluating these integrals, which will give you π/2, π/2, π/2, until you give up at N equals 2000 or what have you. It certainly looks like this series just gives π/2 regardless of N. But it doesn’t. When N takes on this value:
    15,341,178,777,673,149,429,167,740,440,969,249,338,310,889
The result of the integral is, for the first time, not π/2, and it never becomes π/2 for any N larger than that. You can find a proof for this here. The details of the proof don’t matter here, I am just telling you about this to show that mathematics can be far weirder than it appears at first sight.

And this matters because a lot of physicists act like the only numbers in mathematics are 2, π, and Euler’s number. If they encounter anything else, then that’s supposedly “unnatural”. Like, for example, the strength of the electromagnetic force relative to the gravitational force between, say, an electron and a proton. That ratio turns out to be about ten to the thirty-nine. So what, you may say. Well, physicists believe that a number like this just cannot come out of the math all by itself. They call it the “Hierarchy Problem”, and it supposedly requires new physics to “explain” where this large number comes from.
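You can check that often-quoted ratio yourself. Here is the arithmetic in Python, with rounded textbook values for the constants; since both forces fall off with the square of the distance, the distance cancels out of the ratio.

```python
k   = 8.99e9     # N*m^2/C^2, Coulomb constant
G   = 6.67e-11   # N*m^2/kg^2, Newton's gravitational constant
e   = 1.60e-19   # C, elementary charge
m_e = 9.11e-31   # kg, electron mass
m_p = 1.67e-27   # kg, proton mass

# Electric force over gravitational force between an electron and a
# proton; the common 1/r^2 factor cancels in the ratio.
ratio = (k * e**2) / (G * m_e * m_p)
print(f"{ratio:.1e}")   # about 2.3e+39
```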

But pure mathematics can easily spit out numbers that large. There isn’t a priori anything wrong with the physics if a theory contains a large number. We just saw one such oddly specific large number coming out of a rather innocent looking integral series. This number is of the order of magnitude 10^43. Another example of a large number coming out of pure math is the size of the monster group – its number of elements – which is about 10^53. So the integral series is not an isolated case. It’s just how mathematics is.

Let me be clear that I am not saying these particular numbers are somehow relevant for physics. I am just saying if we find experimentally that a constant without units is very large, then this does not mean math alone cannot explain it and it must therefore be a signal for new physics. That’s just wrong.

But let me come back to the singular limits because there’s more to learn from them. You may put the previous examples down as mathematical curiosities, but they are just very vivid demonstrations for how badly naïve extrapolations can fail. And this is something we do not merely encounter in mathematics, but also in a lot of physical systems.

I am here not thinking of the man who falls off the roof and, as he passes the 2nd floor, thinks “so far, so good”. In this case we know full well that his good luck will soon come to an end, because the surface of the earth is in the way of his well-being. We have merely ignored this information because otherwise it would not be funny. So, this is not what I am talking about. I am talking about situations where we observe sudden changes in a system that are not due to just willfully ignoring information.

An example you are probably familiar with are phase transitions. If you cool down water, it is liquid, liquid, liquid, until suddenly it isn’t. You cannot extrapolate from the water being liquid to it being a solid. It’s a pattern that does not continue. There are many such phase transitions in physical systems where the behavior of a system suddenly changes, and they usually come along with observable properties that make sudden jumps, like entropy or viscosity. These are singular limits.

Singular limits are all over the place in condensed matter physics, but in other areas, physicists seem to have a hard time acknowledging their existence. An example that you find frequently in the popular science press are calculations in a universe with a negative cosmological constant, that’s the so-called Anti-de Sitter space. These calculations falsely give the impression that they tell us something about the real world, which has a positive cosmological constant.

A lot of physicists believe the one case tells us something about the other because, well, you could take the limit from a very small but negative cosmological constant to a very small but positive cosmological constant, and then, so they argue, the physics should be kind of the same. But. We know that the limit from a small negative cosmological constant to zero and then on to positive values is a singular limit. Space-time has a conformal boundary for all values strictly smaller than zero, but no longer for exactly zero. We have therefore no reason to think these calculations that have been done for a negative cosmological constant tell us anything about our universe, which has a positive cosmological constant.

Here are a few examples of such misleading headlines. They usually tell stories about black holes or wormholes because that’s catchy. Please do not fall for this. These calculations tell us nothing, absolutely nothing, about the real world.