Saturday, November 21, 2020

Warp Drive News. Seriously!

[This is a transcript of the video embedded below.]

Like many others, I became interested in physics by reading too much science fiction. Teleportation, levitation, wormholes, time travel, warp drives, and all that I thought was super-fascinating. But of course the depressing part of science fiction is that you know it’s not real. So, to some extent, I became a physicist to find out which science fiction technologies have a chance to one day become real technologies. Today I want to talk about warp drives because I think on the spectrum from fiction to science, warp drives are on the more scientific end. And just a few weeks ago, a new paper appeared about warp drives that puts the idea on a much more solid basis.


But first of all, what is a warp drive? In the science fiction literature, a warp drive is a technology that allows you to travel faster than the speed of light or “superluminally” by “warping” or deforming space-time. The idea is that by warping space-time, you can beat the speed of light barrier. This is not entirely crazy, for the following reason.

Einstein’s theory of general relativity says you cannot accelerate objects from below to above the speed of light because that would take an infinite amount of energy. However, this restriction applies to objects in space-time, not to space-time itself. Space-time can bend, expand, or warp at any speed. Indeed, physicists think that the universe expanded faster than the speed of light in its very early phase. General Relativity does not forbid this.

There are two points I want to highlight here: First, it is a really common misunderstanding, but Einstein’s theories of special and general relativity do NOT forbid faster-than-light motion. You can very well have objects in these theories that move faster than the speed of light. Neither does this faster-than-light travel necessarily lead to causality paradoxes. I explained this in an earlier video. Instead, the problem is that, according to Einstein, you cannot accelerate from below to above the speed of light. So the problem is really crossing the speed of light barrier, not being above it.

The second point I want to emphasize is that the term “warp drive” refers to a propulsion system that relies on the warping of space-time, but just because you are using a warp drive does not mean you have to go faster than light. You can also have slower-than-light warp drives. I know that sounds somewhat disappointing, but I think it would be pretty cool to move around by warping spacetime at any speed.

Warp drives were a fairly vague idea until, in 1994, Miguel Alcubierre found a way to make them work in General Relativity. His idea is now called the Alcubierre Drive. The explanation that you usually get for how the Alcubierre Drive works is that you contract space-time in front of you and expand it behind you, which moves you forward.

That didn’t make sense to you? Just among us, it never made sense to me either. Because why would this allow you to break the speed of light barrier? Indeed, if you look at Alcubierre’s mathematics, it does not explain how this is supposed to work. Instead, his equations say that this warp drive requires large amounts of negative energy.

This is bad. It’s bad because, well, there isn’t any such thing as negative energy. And even if you had this negative energy that would not explain how you break the speed of light barrier. So how does it work? A few weeks ago, someone sent me a paper that beautifully sorts out the confusion surrounding warp drives.

To understand my problem with the Alcubierre Drive, I have to tell you briefly how General Relativity works. General Relativity works by solving Einstein’s field equations. Here they are. I know this looks somewhat intimidating, but the overall structure is fairly easy to understand. It helps if you try to ignore all these small Greek indices, because they really just say that there is an equation for each combination of directions in space-time. More important is that on the left side you have these R’s. The R’s quantify the curvature of space-time. And on the right side you have T. T is called the stress-energy tensor and it collects all kinds of energy densities and mass densities. That includes pressure and momentum flux and so on. Einstein’s equations then tell you that the distribution of different types of energy determines the curvature, and the curvature in turn determines how the distribution of the stress-energy changes.
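Since the equations shown on screen are not reproduced in this transcript, here they are in standard notation (without a cosmological constant term):

\[ R_{\mu\nu} \;-\; \tfrac{1}{2}\,R\,g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu} \]

The terms on the left are built from the curvature; the tensor on the right collects the energy and mass densities.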

The way you normally solve these equations is to use a distribution of energies and masses at some initial time. Then you can calculate what the curvature is at that initial time, and you can calculate how the energies and masses will move around and how the curvature changes with that.

So this is what physicists usually mean by a solution of General Relativity. It is a solution for a distribution of mass and energy.

But. You can instead just take any space-time, put it into the left side of Einstein’s equations, and then the equations will tell you what the distribution of mass and energy would have to be to create this space-time.

On a purely technical level, these space-times will then indeed be “solutions” to the equations for whatever stress-energy tensor you get. The problem is that in this case, the energy distribution which is required to get a particular space-time is in general entirely unphysical.

And that’s the problem with the Alcubierre Drive. It is a solution of General Relativity, but in and by itself, this is a completely meaningless statement. Any space-time will solve the equations of General Relativity, provided you assume that you have a suitable distribution of masses and energies to create it. The real question is therefore not whether a space-time solves Einstein’s equations, but whether the distribution of mass and energy required to make it a solution to the equations is physically reasonable.

And for the Alcubierre drive the answer is multiple no’s. First, as I already said, it requires negative energy. Second, it requires a huge amount of that. Third, the energy is not conserved. Instead, what you actually do when you write down the Alcubierre space-time, is that you just assume you have something that accelerates it beyond the speed of light barrier. That it’s beyond the barrier is why you need negative energies. And that it accelerates is why you need to feed energy into the system. Please check the info below the video for a technical comment about just what I mean by “energy conservation” here.

Let me then get to the new paper. The new paper is titled “Introducing Physical Warp Drives” and was written by Alexey Bobrick and Gianni Martire. I have to warn you that this paper has not yet been peer reviewed. But I have read it and I am pretty confident it will make it through peer review.

In this paper, Bobrick and Martire describe the geometry of a general warp-drive space-time. The warp-drive geometry is basically a bubble. It has an inside region, which they call the “passenger area”. In the passenger area, space-time is flat, so there are no gravitational forces. Then the warp drive has a wall of some sort of material that surrounds the passenger area. And then it has an outside region. This outside region has the gravitational field of the warp drive itself, but the gravitational field falls off and in the far distance one has normal, flat space-time. This is important so you can embed this solution into our actual universe.

What makes this fairly general construction a warp drive is that the passage of time inside of the passenger area can be different from that outside of it. That’s what you need if you have normal objects, like your warp drive passengers, and want to move them faster than the speed of light. You cannot break the speed of light barrier for the passengers themselves relative to space-time. So instead, you keep them moving normally in the bubble, but then you move the bubble itself superluminally.

As I explained earlier, the relevant question is then, what does the wall of the passenger area have to be made of? Is this a physically possible distribution of mass and energy? Bobrick and Martire explain that if you want superluminal motion, you need negative energy densities. If you want acceleration, you need to feed energy and momentum into the system. And the only reason the Alcubierre Drive moves faster than the speed of light is that one simply assumed it does. Suddenly it all makes sense!

I really like this new paper because to me it has really demystified warp drives. Now, you may find this somewhat of a downer because really it says that we still do not know how to accelerate to superluminal speeds. But I think this is a big step forward because now we have a much better mathematical basis to study warp drives.

For example, once you know what the warped space-time looks like, the question comes down to how much energy you need to achieve a certain acceleration. Bobrick and Martire show that for the Alcubierre drive you can decrease the amount of energy by seating passengers next to each other instead of behind each other, because the amount of energy required depends on the shape of the bubble. The flatter it is in the direction of travel, the less energy you need. For other warp-drives, other geometries may work better. This is the kind of question you can really only address if you have the mathematics in place.

Another reason I find this exciting is that, while it may look now like you can’t do superluminal warp drives, this is only correct if General Relativity is correct. And maybe it is not. Astrophysicists have introduced dark matter and dark energy to explain what they observe, but it is also possible that General Relativity is ultimately not the correct theory for space-time. What does this mean for warp drives? We don’t know. But now we know we have the mathematics to study this question.

So, I think this is a really neat paper, but it also shows that research is a double-edged sword. Sometimes, if you look closer at a really exciting idea, it turns out to be not so exciting. And maybe you’d rather not have known. But I think the only way to make progress is to not be afraid of learning more. 

Note: This paper has not appeared yet. I will post a link here once I have a reference.




You can join the chat on this video on Saturday 11/21 at 12PM EST / 6PM CET or on Sunday 11/22 at 2PM EST / 8PM CET.

We will also have a chat on Black Hole Information loss on Tuesday 11/24 at 8PM EST / 2AM CET and on Wednesday 11/25 at 2PM EST / 8PM CET.

Wednesday, November 18, 2020

The Black Hole information loss problem is unsolved. Because it’s unsolvable.

Hi everybody, welcome and welcome back to science without the gobbledygook. I put in a Wednesday video because last week I came across a particularly bombastically nonsensical claim that I want to debunk for you. The claim is that the black hole information loss problem is “nearing its end”. So today I am here to explain why the black hole information loss problem is not only unsolved but will remain unsolved because it’s for all practical purposes unsolvable.


First of all, what is the black hole information loss problem, or paradox, as it’s sometimes called? It’s an inconsistency in physicists’ currently most fundamental laws of nature, that’s quantum theory and general relativity.

Stephen Hawking showed in the early nineteen-seventies that if you combine these two theories, you find that black holes emit radiation. This radiation is thermal, which means that, apart from the temperature, which determines the average energy of the particles, the radiation is entirely random.

This black hole radiation is now called Hawking Radiation and it carries away mass from the black hole. But the radius of the black hole is proportional to its mass, so if the black hole radiates, it shrinks. And the temperature is inversely proportional to the black hole mass. So, as the black hole shrinks, it gets hotter, and it shrinks even faster. Eventually, it’s completely gone. Physicists refer to this as “black hole evaporation.”
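In formulas (standard textbook relations, not shown in the video): a black hole of mass M has radius and Hawking temperature

\[ r_{s} \;=\; \frac{2GM}{c^{2}}\,, \qquad T_{H} \;=\; \frac{\hbar c^{3}}{8\pi G M k_{B}}\,, \]

so the radius shrinks and the temperature rises as the mass goes down, and the total evaporation time grows with the cube of the initial mass.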

When the black hole has entirely evaporated, all that’s left is this thermal radiation, which only depends on the initial mass, angular momentum, and electric charge of the black hole. This means that besides these three quantities, it does not matter what you formed the black hole from, or what fell in later, the result is the same thermal radiation.

Black hole evaporation, therefore, is irreversible. You cannot tell from the final state – that’s the outcome of the evaporation – what the initial state was that formed the black hole. There are many different initial states that will give the same final state.

The problem is now that this cannot happen in quantum theory. Processes in quantum theory are always time-reversible. There are certainly processes that are in practice irreversible. For example, if you mix dough. You are not going to unmix it, ever. But. According to quantum mechanics, this process is reversible, in principle.

In principle, one initial state of your dough leads to exactly one final state, and using the laws of quantum mechanics you could reverse it, if only you tried hard enough, for ten to the five-hundred billion years or so. It’s the same if you burn paper, or if you die. All these processes are for all practical purposes irreversible. But according to quantum theory, they are not fundamentally irreversible, which means a particular initial state will give you one, and only one, final state. The final state, therefore, tells you what the initial state was, if you have the correct differential equation. For more about differential equations, please check my earlier video.

So you set out to combine quantum theory with gravity, but you get something that contradicts what you started with. That’s inconsistent. Something is wrong about this. But what? That’s the black hole information loss problem.

Now, four points I want to emphasize here. First, the black hole information loss problem has actually nothing to do with information. John, are you listening? Really the issue is not loss of information, which is an extremely vague phrase, the issue is time irreversibility. General Relativity forces a process on you which cannot be reversed in time, and that is inconsistent with quantum theory.

So it would better be called the black hole time irreversibility problem, but you know how it goes with nomenclature, it doesn’t always make sense. Peanuts aren’t nuts, vacuum cleaners don’t clean the vacuum. Dark energy is neither dark nor energy. And black hole information loss is not about information.

Second, black hole evaporation is not an effect of quantum gravity. You do not need to quantize gravity to do Hawking’s calculation. It merely uses quantum mechanics in the curved background of non-quantized general relativity. Yes, it’s something with quantum and something with gravity. No, it’s not quantum gravity.

The third point is that the measurement process in quantum mechanics does not resolve the black hole information loss problem. Yes, according to the Copenhagen interpretation a quantum measurement is irreversible. But the inconsistency in black hole evaporation occurs before you make a measurement.

And related to this is the fourth point, it does not matter whether you believe time-irreversibility is wrong even leaving aside the measurement. It’s a mathematical inconsistency. Saying that you do not believe one or the other property of the existing theories does not explain how to get rid of the problem.

So, how do you get rid of the black hole information loss problem? Well, the problem comes from combining a certain set of assumptions, doing a calculation, and arriving at a contradiction. This means any solution of the problem will come down to removing or replacing at least one of the assumptions.

Mathematically there are many ways to do that. Even if you do not know anything about black holes or quantum mechanics, that much should be obvious. If you have a set of inconsistent axioms, there are many ways to fix that. It will therefore not come as a surprise to you that physicists have spent the past forty years coming up with ever new “solutions” to the black hole information loss problem, yet they can’t agree which one is right.

I have already made a video about possible solutions to the black hole information loss problem, so let me just summarize this really quickly. For details, please check the earlier video.

The simplest solution to the black hole information loss problem is that the disagreement is resolved when the effects of quantum gravity become large, which happens when the black hole has shrunk to a very small size. This simple solution is incredibly unpopular among physicists. Why is that? It’s because we do not have a theory of quantum gravity, so one cannot write papers about it.

Another option is that the black holes do not entirely evaporate and the information is kept in what’s left, usually called a black hole remnant. Yet another way to solve the problem is to simply accept that information is lost and then modify quantum mechanics accordingly. You can also put information on the singularity, because then the evaporation becomes time-reversible.

Or you can modify the topology of space-time. Or you can claim that information is only lost in our universe but it’s preserved somewhere in the multiverse. Or you can claim that black holes are actually fuzzballs made of strings and information creeps out slowly. Or, you can do ‘t Hooft’s antipodal identification and claim what goes in one side comes out the other side, Fourier transformed. Or you can invent non-local effects, or superluminal information exchange, or baby universes, and that’s not an exhaustive list.

These solutions are all mathematically consistent. We just don’t know which one of them is correct. And why is that? It’s because we cannot observe black hole evaporation. For the black holes that we know exist the temperature is way, way too small to be observable. It’s below even the temperature of the cosmic microwave background. And even if it wasn’t, we wouldn’t be able to catch all that comes out of a black hole, so we couldn’t conclude anything from it.

And without data, the question is not which solution to the problem is correct, but which one you like best. Of course everybody likes their own solution best, so physicists will not agree on a solution, not now, and not in 100 years. This is why the headline that the black hole information loss problem is “coming to an end” is ridiculous. Though, let me mention that I know the author of the piece, George Musser, and he’s a decent guy and, the way this often goes, he didn’t choose the title.

What’s the essay actually about? Well, it’s about yet another proposed solution to the black hole information problem. This one is claiming that if you do Hawking’s calculation thoroughly enough then the evaporation is actually reversible. Is this right? Well, depends on whether you believe the assumptions that they made for this calculation. Similar claims have been made several times before and of course they did not solve the problem.

The real problem here is that too many theoretical physicists don’t understand or do not want to understand that physics is not mathematics. Physics is science. A theory of nature needs to be consistent, yes, but consistency alone is not sufficient. You still need to go and test your theory against observations.

The black hole information loss problem is not a math problem. It’s not like trying to prove the Riemann hypothesis. You cannot solve the black hole information loss problem with math alone. You need data, there is no data, and there won’t be any data. Which is why the black hole information loss problem is for all practical purposes unsolvable.

The next time you read about a supposed solution to the black hole information loss problem, do not ask whether the math is right. Because it probably is, but that isn’t the point. Ask what reason do we have to think that this particular piece of math correctly describes nature. In my opinion, the black hole information loss problem is the most overhyped problem in all of science, and I say that as someone who has published several papers about it.

On Saturday we’ll be talking about warp drives, so don’t forget to subscribe.

Saturday, November 14, 2020

Understanding Quantum Mechanics #8: The Tunnel Effect

[This is a transcript of the video embedded below. Parts of the text will not make sense without the graphics in the video.]

Have you heard that quantum mechanics is impossible to understand? You know what, that’s what I was told, too, when I was a student. But twenty years later, I think the reason so many people believe one cannot understand quantum mechanics is because they are constantly being told they can’t understand it. But if you spend some time with quantum mechanics, it’s not remotely as strange and weird as they say. The strangeness only comes in when you try to interpret what it all means. And there’s no better way to illustrate this than the tunnel effect, which is what we will talk about today.


Before we can talk about tunneling, I want to quickly remind you of some general properties of wave-functions, because otherwise nothing I say will make sense. The key feature of quantum mechanics is that we cannot predict the outcome of a measurement. We can only predict the probability of getting a particular outcome. For this, we describe the system we are observing – for example a particle – by a wave-function, usually denoted by the Greek letter Psi. The wave-function takes on complex values, and probabilities can be calculated from it by taking the absolute square.

But how to calculate probabilities is only part of what it takes to do quantum mechanics. We also need to know how the wave-function changes in time. And we calculate this with the Schrödinger equation. To use the Schrödinger equation, you need to know what kind of particle you want to describe, and what the particle interacts with. This information goes into this thing labeled H here, which physicists call the “Hamiltonian”.
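Since the on-screen formula is not reproduced in this transcript, here is the equation in standard notation,

\[ i\hbar\,\frac{\partial \psi}{\partial t} \;=\; H\,\psi\,, \]

and for the free, massive particle discussed next, the Hamiltonian is just the kinetic term, \( H = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}}{\partial x^{2}} \).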

To give you an idea for how this works, let us look at the simplest possible case, that’s a massive particle, without spin, that moves in one dimension, without any interaction. In this case, the Hamiltonian merely has a kinetic part which is just the second derivative in the direction the particle travels, divided by twice the mass of the particle. I have called the direction x and the mass m. If you had a particle without quantum behavior – a “classical” particle, as physicists say – that didn’t interact with anything, it would simply move at constant velocity. What happens for a quantum particle? Suppose that initially you know the position of the particle fairly well, so the probability distribution is peaked. I have plotted here an example. Now if you solve the Schrödinger equation for this initial distribution, what happens is the following.

The peak of the probability distribution is moving at constant velocity, that’s the same as for the classical particle. But the width of the distribution is increasing. It’s smearing out. Why is that?

That’s the uncertainty principle. You initially knew the position of the particle quite well. But because of the uncertainty principle, this means you did not know its momentum very well. So there are parts of this wave-function that have a somewhat larger momentum than the average, and therefore a larger velocity, and they run ahead. And then there are some which have a somewhat lower momentum, and a smaller velocity, and they lag behind. So the distribution runs apart. This behavior is called “dispersion”.

Now, the tunnel effect describes what happens if a quantum particle hits an obstacle. Again, let us first look at what happens with a non-quantum particle. Suppose you shoot a ball in the direction of a wall, at a fixed angle. If the kinetic energy, or the initial velocity, is large enough, it will make it to the other side. But if the kinetic energy is too small, the ball will bounce off and come back. And there is a threshold energy that separates the two possibilities.

What happens if you do the same with a quantum particle? This problem is commonly described by using a “potential wall.” I have to warn you that a potential wall is in general not actually a wall, in the sense that it is not made of bricks or something. It is really just any barrier that a classical particle could only cross if its energy were above a certain threshold.

So it’s kind of like in the example I just showed with the classical particle crossing over an actual wall, but that’s really just an analogy that I have used for the purpose of visualization.

Mathematically, a potential wall is just a step function that’s zero everywhere except in a finite interval. You then add this potential wall as a function to the Hamiltonian of the Schrödinger equation. Now that we have the equation in place, let us look at what the quantum particle does when it hits the wall. For this, I have numerically integrated the Schrödinger equation I just showed you.
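If you want to play with this yourself, here is a minimal sketch of such a numerical integration. It uses a split-step Fourier scheme, which may differ from the method used for the video, and all parameters (units with hbar = m = 1, grid size, barrier height and width, packet momentum) are illustrative choices rather than the values behind the animations.

```python
# Sketch: a Gaussian wave packet hitting a potential wall, integrated with
# the split-step Fourier method. Units: hbar = m = 1. All numbers are
# illustrative, not taken from the video.
import numpy as np

N, L = 2048, 400.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Potential wall: zero everywhere except a finite interval around x = 50.
V0, width = 1.0, 2.0
V = np.where(np.abs(x - 50.0) < width / 2, V0, 0.0)

# Initial Gaussian wave packet with mean momentum k0; its energy ~ k0^2/2
# is below the barrier height V0, so classically it would bounce back.
x0, sigma, k0 = -50.0, 5.0, 1.2
psi = np.exp(-(x - x0)**2 / (4 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

dt, steps = 0.05, 3000
half_V = np.exp(-0.5j * V * dt)      # half-step of potential evolution
kin = np.exp(-0.5j * k**2 * dt)      # full step of kinetic evolution

for _ in range(steps):
    psi = half_V * psi
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi = half_V * psi

prob = np.abs(psi)**2
transmitted = np.sum(prob[x > 51.0]) * dx   # probability that it tunneled
print(f"transmission probability ~ {transmitted:.3f}")
```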

The following animations are slow-motion compared to the earlier one, which is why you cannot see that the wave-function smears out. It still does, it’s just so little that you have to look very closely to see it. I did this because it makes it easier to see what else is happening. Again, what I have plotted here is the probability distribution for the position of the particle.

We will first look at the case when the energy of the quantum particle is much higher than the potential wall. As you can see, not much happens. The quantum particle goes through the barrier. It just gets a few ripples.

Next we look at the case where the energy barrier of the potential wall is much, much higher than the energy of the particle. As you can see, it bounces off and comes back. This is very similar to the classical case.

The most interesting case is when the energy of the particle is smaller than the potential wall but the potential wall is not extremely much higher. In this case, a classical particle would just bounce back. In the quantum case, what happens is this. As you can see, part of the wave-function makes it through to the other side, even though it’s energetically forbidden. And there is a remaining part that bounces back. Let me show you this again.

Now remember that the wave-function tells you what the probability is for something to happen. So what this means is that if you shoot a particle at a wall, then quantum effects allow the particle to sometimes make it to the other side, when this should actually be impossible. The particle “tunnels” through the wall. That’s the tunnel effect.

I hope that these little animations have convinced you that if you actually do the calculation, then tunneling is only half as weird as they say it is. It just means that a quantum particle can do some things that a classical particle can’t do. But, wait, I forgot to tell you something...

Here you see the solutions to the Schrödinger equation with and without the potential wall, but for otherwise identical particles with identical energy and momentum. Let us stop this here. If you compare the position of the two peaks, the one that tunneled and the one that never saw a wall, then the peak of the tunneled part of the wave-function has traveled a larger distance in the same time.

If the particle was travelling at or very close to the speed of light, then the peak of the tunneled part of the wave-function seems to have moved faster than the speed of light. Oops.

What is happening? Well, this is where the probabilistic interpretation of quantum mechanics comes to haunt you. If you look at where the faster-than-light particles came from in the initial wave-function, then you find that they were the ones which had a head-start at the beginning. Because, remember, the particles did not all start from exactly the same place. They had an uncertainty in the distribution.

Then again, if the wave-function really describes single particles, as most physicists today believe it does, then this explanation makes no sense. Because then only looking at parts of the wave-function is just not an allowed way to define the particle’s time of travel. So then, how do you define the time it takes a particle to travel through a wall? And can the particle really travel faster than the speed of light? That’s a question which physicists still argue about today.

This video was sponsored by Brilliant which is a website that offers interactive courses on a large variety of topics in science and mathematics. I hope this video has given you an idea how quantum mechanics works. But if you really want to understand the tunnel effect, then you have to actively engage with the subject. Brilliant is a great starting point to do exactly this. To get more background on this video’s content, I recommend you look at their courses on quantum objects, differential equations, and probabilities.

To support this channel and learn more about Brilliant, go to brilliant.org/Sabine and sign up for free. The first 200 subscribers using this link will get 20 percent off their annual premium subscription.



You can join the chat on this week’s video here:
  • Saturday at 12PM EST / 6PM CET (link)
  • Sunday at 2PM EST / 8PM CET (link)

Saturday, November 07, 2020

Understanding Quantum Mechanics #7: Energy Levels

[This is a transcript of the video embedded below. Parts of the text will not make sense without the graphics in the video.]


Today I want to tell you what these plots show. Has anybody seen them before? Yes? Atomic energy levels, right! It’s one of the most important applications of quantum mechanics. And I mean important both historically and scientifically. Today’s topic is also a good opportunity to answer a question one of you asked on a previous video: “Why do some equations even actually need calculating, as the answer will always be the same?” That’s a really good question. I just love it, because it would never have occurred to me.

Okay, so we want to calculate what electrons do in an atom. Why is this interesting? Because what the electrons do determines the chemical properties of the elements. Basically, the behavior of the electrons explains the whole periodic table: Why do atoms come in particular groups, why do some make good magnets, why are some of them good conductors? The electrons tell you.

How do you find out what the electrons do? You use quantum mechanics. Quantum mechanics, as we discussed previously, works with wave-functions, usually denoted Psi. Here is Psi. And you calculate what the wave-function does with the Schrödinger equation. Here is the Schrödinger equation.

Now, the way I have written this equation here, it’s completely useless. We know what Psi is, that’s the thing we want to calculate, and we know how to take a time-derivative, but what is H? H is called the “Hamiltonian” and it contains the details about the system you want to describe. The Hamiltonian consists of two parts. The one part tells you what the particles do when you leave them alone and they don’t know anything of each other. So that would be in empty space, with no force acting on them, with no interaction. This is usually called the “kinetic” part of the Hamiltonian, or sometimes the “free” part. Then you have a second part that tells you how the particle, or particles if there are several, interact.
 
In the simplest case, this interaction term can be written as a potential, usually denoted V. And for an electron near an atomic nucleus, the potential is just the Coulomb potential. So that’s proportional to the charge of the nucleus, and falls with one over r, where r is the distance to the center of the nucleus. There is a constant in front of this term that I have called alpha, but just what it quantifies doesn’t matter for us today. And the kinetic term, for a slow-moving particle, is just the square of the spatial derivatives, up to constants.
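Putting these pieces together, the equation described here is, in standard notation (the on-screen formula is not reproduced in this transcript):

\[ i\hbar\,\frac{\partial \psi}{\partial t} \;=\; \left( -\,\frac{\hbar^{2}}{2m}\,\nabla^{2} \;-\; \frac{\alpha}{r} \right)\psi\,. \]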

So, now we have a linear, partial differential equation that we need to solve. I don’t want to go through this calculation, because it’s not so relevant here just how to solve it; let me just say there is no magic involved. It’s pretty straightforward. But there are some interesting things to learn from it.

The first interesting thing you find when you solve the Schrödinger equation for electrons in a Coulomb potential is that the solutions fall into two different classes. The one type of solution is a wave that can propagate through all of space. We call these the “unbound states”. And the other type of solution is a localized wave, stuck in the potential of the nucleus. It just sits there while oscillating. We call these the “bound states”. The bound states have a negative energy. That’s because you need to put energy in to rip these electrons off the atom.

The next interesting thing you find is that the bound states can be numbered, so you can count them. To count these states, one commonly uses, not one, but three numbers. These numbers are all integers and are usually called n, l, and m.

“n” starts at 1 and then increases, and is commonly called the “principal” quantum number. “l” labels the angular momentum. It starts at zero, but it has to be smaller than n.

So for n equal to one, you have only l equal to zero. For n equal to 2, l can be 0 or 1. For n equal to three, l can be zero, one or two, and so on.

The third number “m” tells you what the electron does in a magnetic field, which is why it’s called the magnetic quantum number. It takes on values from minus l to l. And these three numbers, n l m, together uniquely identify the state of the electron.
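As a small illustration of this counting, here is a sketch that lists the allowed combinations of n, l, and m (the function name is just for illustration, it is not from the video):

```python
# List the allowed bound-state quantum numbers (n, l, m) up to n_max,
# following the rules described above: l < n, and m runs from -l to +l.
def hydrogen_states(n_max):
    for n in range(1, n_max + 1):
        for l in range(0, n):
            for m in range(-l, l + 1):
                yield (n, l, m)

for state in hydrogen_states(3):
    print(state)
# Each value of n comes with n*n such states (ignoring spin): 1, 4, 9, ...
```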

Let me then show you what the solutions to the Schrödinger equation look like in this case, because there are more interesting things to learn from it. The wave-functions give you a complex value for each location, and the absolute square tells you the probability of finding the electron. While the wave-function oscillates in time, the probability does not depend on time.

I have here plotted the probability as a function of the radius, so I have integrated over all angular directions. This is for different principal quantum numbers n, but with l and m equal to zero.

You can see that the wave-function has various maxima and minima, but with increasing n, the biggest maximum, so that’s the place you are most likely to find the electron, moves away from the center of the atom. That’s where the idea of electron “shells” comes from. It’s not wrong, but also somewhat misleading. As you can see here, the actual distribution is more complicated.
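For readers who want to reproduce curves like these, here is a minimal sketch, assuming a hydrogen-like atom with the Bohr radius set to one and spin ignored; it uses the standard textbook radial wave-functions, not the code behind the video’s plots.

```python
# Radial probability density P(r) = r^2 |R_nl(r)|^2 for hydrogen-like
# bound states, in units of the Bohr radius.
import numpy as np
from math import factorial
from scipy.special import genlaguerre

def radial_probability(n, l, r):
    rho = 2.0 * r / n
    norm = np.sqrt((2.0 / n)**3 * factorial(n - l - 1) / (2.0 * n * factorial(n + l)))
    R = norm * np.exp(-rho / 2) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)
    return r**2 * R**2

r = np.linspace(0.0, 40.0, 2000)
for n in (1, 2, 3):
    P = radial_probability(n, 0, r)
    print(f"n={n}, l=0: most probable radius ~ {r[np.argmax(P)]:.1f} Bohr radii")
# The most probable radius moves outward with increasing n, as in the plots.
```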

A super interesting property of these probability distributions is that they are perfectly well-behaved at r equals zero. That’s interesting because, if you remember, we used a Coulomb potential that goes as 1 over r. This potential actually diverges at r equal zero. Nevertheless, the wave-function avoids this divergence. Some people have argued that something similar could actually prevent a singularity from forming in black holes. Please check the information below the video for a reference.

But these curves show only the radial direction, what about the angular direction? To show you what this looks like, I will plot the probability of finding the electron with a color code for slices through the sphere.

And I will start with showing you the slices for the cases of which you just saw the curves in the radial direction, that is, different n, but with the other numbers at zero.

The more red-white the color, the more likely you are to find the electron. I have kept the radius fixed, which is why the orbitals with small n only make a small blip when we scan through the middle. Here you see it again. Note how the location of the highest probability moves to a larger radius with increasing n.

Then let us look at a case where l is nonzero. This is for example for n=3, l=1 and m equals plus minus 1. As you can see, the distribution splits up into several areas of high probability and now has an orientation. Here is the same for n=4, l=2, m equals plus minus 2. It may appear as if this is no longer spherically symmetric. But actually if you combine all the quantum numbers, you get back spherical symmetry, as it has to be.

Another way to look at the electron probability distributions is to plot them in three dimensions. Personally I prefer the two-dimensional cuts because the color shading contains more information about the probability distribution. But since some people prefer the 3-dimensional plots, let me show you some examples. The surface you see here is the surface inside of which you will find the electron with a probability of 90%. Again you see that thinking of the electrons as sitting on “shells” doesn’t capture very well what is going on.

Now that you have an idea how we calculate atomic energy levels and what they look like, let me then get to the question: Why do we calculate the same things over and over again?

So, this particular calculation of the atomic energy levels was frontier research a century ago. Today students do it as an exercise. The calculations physicists now do in research in atomic physics are considerably more advanced than this example, because we have made a lot of simplifications here.

First, we have neglected that the electron has a spin, though this is fairly easy to integrate. More seriously, we have assumed that the nucleus is a point. It is not. The nucleus has a finite size and it is neither perfectly spherically symmetric, nor does it have a homogeneous charge distribution, which makes the potential much more complicated. Worse, nuclei themselves have energy levels and can wobble. Then the electrons on the outer levels actually interact with the electrons in the inner levels, which we have ignored. There are further corrections from quantum field theory, which we have also ignored. Yet another thing we have ignored is that electrons in the outer shells of large atoms get corrections from special relativity. Indeed, fun fact: without special relativity, gold would not look gold.

And then, for most applications it’s not energy levels of atoms that we want to know, but energy levels of molecules. This is a huge complication. The complication is not that we don’t know the equation. It’s still the Schrödinger equation. It’s also not that we don’t know how to solve it. The problem is, with the methods we currently use, doing these calculations for even moderately sized molecules, takes too long, even on supercomputers.

And that’s an important problem. Because the energy levels of molecules tell you whether a substance is solid or brittle, what its color is, how well it conducts electricity, how it reacts with other molecules, and so on. This is all information you want to have. Indeed, there’s a whole research area devoted to this question, which is called “quantum chemistry”. It is also one of the calculations physicists hope to speed up with quantum computers.

So, why do we continue solving the same equation? Because we are improving how good the calculation is, we are developing new methods to solve it more accurately and faster, and we are applying it to new problems. Calculating the energy levels of electrons is not yesterday’s physics, it’s still cutting edge physics today.

If you really want to understand how quantum mechanics works, I recommend you check out Brilliant, who have been sponsoring this video. Brilliant is a website that offers a large variety of interactive courses in mathematics and science, including quantum mechanics, and it’s a great starting point to dig deeper into the topic. For more background on what I just talked about, have a look for example at their courses on quantum objects, differential equations, and linear algebra.  

To support this channel and learn more about Brilliant go to Brilliant.org/Sabine and sign up for free. The first 200 subscribers using this link will get twenty percent off the annual premium subscription.

Thanks for watching, see you next week.



You can join the chat about this video today (Saturday, Nov 7) at 6pm CET or tomorrow at the same time.

Wednesday, November 04, 2020

Guestpost: "Launch of the Initiative 'For a Smarter Science'" by Daniel Moreno

[This post was written by Daniel Moreno]

I remember when I first told a professor at my home University that I wanted to do a PhD and become a researcher. I was expecting him to react with enthusiasm and excitement, yet his response was a warning. That was the first clue I received about the academic world's inefficiencies, although I did not realize to what extent until many years later.

I was a postdoctoral researcher for six years after finishing my PhD in 2013, working on areas such as holographic QCD, numerical General Relativity and gravity dualities. It all came to an end last year, and I decided to rekindle an idea I developed with Sabine back in 2016, one that never came to fruition, but which has now turned into my current project, under the name 'For a Smarter Science'.



The precariousness of our scientific system is a topic of common discussion in informal circles at academic events and institutions. It should be familiar to readers of this blog, as it is to researchers across the many fields of Science. During lunch with colleagues, dinners with invited speakers, conference coffee breaks… conversation commonly drifts toward some version of it.

Increasing numbers of publications of decreasing relevance. Increasing numbers of temporary contracts of decreasing stability. Acceptance of bad scientific practice to the benefit of bare productivity. These are some of the criticisms and complaints, prompting warnings to young researchers.

Our initiative brings no original solution to any of these problems. It actually brings questions: What would happen if there existed a formal platform to discuss the current state of academic culture? What if there was a scientific method to approach issues about the way Science is done today?

If unsupervised, human structures tend to evolve in a predictable way. Issues such as the ones mentioned above are sociological issues, they form a sociological trend. This can be, and is, studied by experts who have been writing on the topic for decades. Educated analyses published in peer-reviewed journals.

These studies largely go unnoticed or are dismissed by the people involved in the very scientific fields they talk about, for a variety of reasons: the assumption that social topics are naturally subjects of informal conversation only, the belief that intelligent people are not affected by cognitive biases, or the selection of like-minded people by the academic system itself, leading to communal reinforcement.

And so, the academic wheel continues running along the railway already set in front of it, with no one there to steer its course. Short-term thinking permeates research. Researchers go from one application deadline to the next. Academic metrics go unquestioned.

Our proposal is simple. One conference. One formal event bringing together experts from a specific subfield of Science (high energy theoretical physics) and from Sociology of Science, to elevate these questions to well-informed discussion. We trust that such an event has the potential to trigger a positive change in the historical development of all of Science.

Of course, not everyone necessarily agrees with the idea that scientific progress is being discouraged by the current peculiarities of academic culture. This is why we find it fitting to fund the event by means of a crowd-funding call. Its success will be an effective measure of how worthy of consideration the scientific community finds these discussions. Are they just light topics for chit-chat during coffee breaks, or is it time to pause and actually examine the social evolution that scientific careers have been following?

This initiative is for all those who have been waiting for a chance to do something about the state of modern scientific research. Writing posts and discussing with our colleagues can only take us so far. If you believe in the place of Science as a shining light for human progress, make your voice heard. #ForaSmarterScience

 

Saturday, October 31, 2020

What is Energy? Is Energy Conserved?

Why save energy if physics says energy is conserved anyway? Did Einstein really say that energy is not conserved? And what does energy have to do with time? This is what we will talk about today.


I looked up “energy” in the Encyclopedia Britannica and it told me that energy is “the capacity for doing work”. Which brings up the question, what is work? The Encyclopedia says work is “a measure of energy transfer.” That seems a little circular. And as if that wasn’t enough, the Encyclopedia goes on to say, well, actually not all types of energy do work, and also that energy is always associated with motion, which it actually is not, because E equals m c squared. I hope you are sufficiently confused now to hear how to make sense of this.

A good illustration for energy conservation is a roller-coaster. At the starting point, it has only potential energy, that comes from gravity. As it rolls down, the gravitational potential energy is converted into kinetic energy, meaning that the roller-coaster speeds up. At the lowest point it moves the fastest. And as it climbs up again, it slows down because the kinetic energy is converted back into potential energy. If you neglect friction, energy conservation means the roller-coaster should have just exactly the right total energy to climb back up to the top where it started. In reality of course, friction cannot be neglected. This means the roller-coaster loses some energy into heating the rails or creating wind. But this energy is not destroyed. It is just no longer useful to move the roller coaster.
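As a quick worked example with made-up numbers: neglecting friction, the potential energy at the top equals the kinetic energy at the bottom,

\[ m\,g\,h \;=\; \tfrac{1}{2}\,m\,v^{2} \quad\Rightarrow\quad v \;=\; \sqrt{2\,g\,h}\,, \]

so a drop of 50 meters gives a speed of about 31 meters per second, roughly 113 kilometers per hour, independently of the mass of the roller-coaster.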

This simple example tells us two things right away. First, there are different types of energy, and they can be converted into each other. What is conserved is only the total of these energies. Second, some types of energy are more, others less useful to move things around.

But what really is this energy we are talking about? There was indeed a lot of confusion about this among physicists in the 19th century, but it was cleared up beautifully by Emmy Noether in 1915. Noether proved that if you have a system whose equations do not change in time, then this system has a conserved quantity. Physicists would say such a system has time-translation invariance. Energy is then by definition the quantity that is conserved in a system with time-translation invariance.

What does this mean? Time-translation invariance does not mean the system itself does not change in time. Even if the equations do not change in time, the solutions to these equations, which are what describe the system, usually will depend on time. Time-translation invariance just means that the change of the system depends only on the amount of time that passed since you started an experiment, but you could have started it at any moment and gotten the same result. Whether you fall off a roof at noon or a midnight, it will take the same time for you to hit the ground. That’s what “time-translation invariance” means.

So, energy is conserved by definition, and Noether’s theorem gives you a concrete mathematical procedure to derive what energy is. Okay, I admit it is a little more complicated, because if you have some quantity that is conserved, then any function of that quantity is also conserved. The missing ingredient is that energy times time has to have the dimension of Planck’s constant. Basically, it has to have the right units.

I know this sounds rather abstract and mathematical, but the relevant point is just that physicists have a way to define what energy is, and it’s by definition conserved, which means it does not change in time. If you look at a simple system, for example that roller coaster, then the conserved energy is as usual the kinetic energy plus the potential energy. And if you add air molecules and the rails to the system, then their temperature would also add to the total, and so on.

But. If you look at a system with many small constituents, like air, then you will find that not all configurations of such a system are equally good at causing a macroscopic change, even if they have the same energy. A typical example would be setting fire to coal. The chemical bonds of the coal-molecules store a lot of energy. If you set fire to it, this causes a chain reaction between the coal and the oxygen in the air. In this reaction, energy from the chemical bonds is converted into kinetic energy of air molecules. This just means the air is warm, and since it’s warm, it will rise. You can use this rising air to drive a turbine, which you can then use to, say, move a vehicle or feed it into the grid to create electricity.

But suppose you don’t do anything with this energy, you just sit there and burn coal. This does not change anything about the total energy in the system, because that is conserved. The chemical energy of the coal is converted into kinetic energy of air molecules which distributes into the atmosphere. Same total energy. But now the energy is useless. You can no longer drive any turbine with it. What’s the difference?

The difference between the two cases is entropy. In the first case, you have the energy packed into the coal and entropy is small. In the latter case, you have the energy distributed in the motion of air molecules, and in this case the entropy is large.

A system that has energy in a state of low entropy is one whose energy you can use to create macroscopic changes, for example driving that turbine. Physicists call this useful energy “free energy” and say it “does work”. If the energy in a system is instead at high entropy, the energy is useless. Physicists then call it “heat” and heat cannot “do work”. The important point is that while energy is conserved, free energy is not conserved.

So, if someone says you should “save energy” by switching off the light, they really mean you should “save free energy”, because if you leave the light on when you do not need it, you convert useful free energy, from whatever is your source of electricity, into useless heat that just warms the air in your room.

Okay, so we have seen that the total energy is by definition conserved, but that free energy is not conserved. Now what about the claim that Einstein actually told us energy is not conserved? That is correct. I know this sounds like a contradiction, but it’s not. Here is why.

Remember that energy is defined by Noether’s theorem, which says that energy is that quantity which is conserved if the system has a time-translation invariance, meaning, it does not really matter just at which moment you start an experiment.

But now remember that Einstein’s theory of general relativity tells us that the universe expands. And if the universe expands, it does matter when you start an experiment. An expanding universe is not time-translation invariant. So, Noether’s theorem does not apply. Now, strictly speaking this does not mean that energy is not conserved in the expanding universe, it means that energy cannot be defined. However, you can take the thing you called energy when you thought the universe did not expand and ask what happens to it now that you know the universe does expand. And the answer is, well, it’s just not conserved.

A good example for this is cosmological redshift. If you have light of a particular wavelength early in the universe, then the wave-length of this light will increase when the universe expands, because it stretches. But the wave-length of light is inversely proportional to the energy of the light. So if the wave-length of light increases with the expansion of the universe, then the energy decreases. Where does the energy go? It goes nowhere, it just is not conserved. No, it really isn’t.
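In formulas (standard relations, not from the transcript): the energy of light with wave-length lambda is E = hc/lambda, and the wave-length stretches with the scale factor a(t) of the expanding universe,

\[ E \;=\; \frac{h c}{\lambda}\,, \qquad \lambda \propto a(t) \quad\Rightarrow\quad E \propto \frac{1}{a(t)}\,, \]

so the energy of the light decreases as the universe expands.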

However, this non-conservation of energy in Einstein’s theory of general relativity is a really tiny effect that for all practical purposes plays absolutely no role here on Earth. It is really something that becomes noticeable only if you look at the universe as a whole. So, it is technically correct that energy is not conserved in Einstein’s theory of General Relativity. But this does not affect our earthly affairs.

In summary: The total energy of a system is conserved as long as you can neglect the expansion of the universe. However, the amount of useful energy, which is what physicists call “free energy,” is in general not conserved because of entropy increase.

Thanks for watching, see you next week. And remember to switch off the light.


We have two chats about this video’s topic, one today (Saturday, Oct 31) at noon Eastern Time (5pm CET). And one tomorrow (Sunday, Nov 1) also at noon Eastern Time (6pm CET).

Wednesday, October 28, 2020

A new model for the COVID pandemic

I spoke with the astrophysicist Niayesh Afshordi about his new pandemic model, what he has learned from it, and what the reaction has been to it.



You find more information about Niayesh's model on his website, and the paper is here.

You can join the chat with him tomorrow (Oct 29) at 5pm CET (noon Eastern Time) here.

Herd Immunity, Facts and Numbers

Today, I have a few words to say about herd immunity because there’s very little science in the discussion about it. I also want to briefly comment on the Great Barrington Declaration and on the conversation about it that we are not having.


First things first, herd immunity refers to that stage in the spread of a disease when a sufficient fraction of the population has become immune to the pathogen so that transmission will be suppressed. It does not mean that transmission stops, it means that on the average one infected person gives the disease to less than one new person, so outbreaks die out, instead of increasing.

It’s called “herd immunity” because it was first observed about a century ago in herds of sheep and, in some ways we’re not all that different from sheep.

Now, herd immunity is the only way a disease that is not contained will stop spreading. It can be achieved either by exposure to the live pathogen or by vaccination. However, in the current debate about the pursuit of herd immunity in response to the ongoing COVID outbreak, the term “herd immunity” has specifically been used to refer to herd immunity achieved by exposure to the virus, instead of waiting for a vaccine.

Second things second, when does a population reach herd immunity? The brief answer is, it’s complicated. This should not surprise you because whenever someone claims the answer to a scientific question is simple, they either don’t know what they’re talking about, or they’re lying. There is a simple answer to the question of when a population reaches herd immunity. But it does not tell the whole story.

This simple answer is that one can calculate the fraction of people who must be immune for herd immunity from the basic reproduction number R_0 as 1 – 1/R_0.

Why is that? It’s because R_0 tells you how many new people one infected person infects on the average. But the ones who will get ill are only those who are not immune. So if 1 – 1/R_0 is the fraction of people who are immune, then the fraction of people who are not immune is 1/R_0.

This then means that the average number of susceptible people that one infected person reaches is R_0 times 1/R_0, which is 1. So, if the fraction of immune people has reached 1 – 1/R_0, then one infected person will on the average only pass on the disease to one other person, meaning at any level of immunity above 1 – 1/R_0, outbreaks will die out.

R_0 for COVID has been estimated at 2 to 3, meaning that the fraction of people who must have had the disease for herd immunity would be around 50 to 70 percent. For comparison, R_0 of the 1918 Spanish influenza has been estimated at 1.4 to 2.8, so that’s comparable to COVID, and R_0 of measles is roughly 12 to 18, with a herd immunity threshold of about 92-95%. Measles is pretty much the most contagious disease known to mankind.
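Here is a small sketch of that estimate; the R_0 values are the ranges quoted above, not precise measurements:

```python
# Herd immunity threshold from the basic reproduction number: 1 - 1/R0.
def herd_immunity_threshold(r0):
    return 1.0 - 1.0 / r0

for disease, r0_range in [("COVID", (2.0, 3.0)),
                          ("1918 influenza", (1.4, 2.8)),
                          ("measles", (12.0, 18.0))]:
    low, high = [herd_immunity_threshold(r0) for r0 in r0_range]
    print(f"{disease}: threshold between {low:.0%} and {high:.0%}")
# COVID: 50% to 67%, 1918 influenza: 29% to 64%, measles: 92% to 94%
```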

That was the easy answer.

Here’s the more complicated but also more accurate answer. R_0 is not simply a property of the disease. It’s a number that quantifies successful transmission, and therefore depends on what measures people take to protect themselves from infection, such as social distancing, wearing masks, and washing hands. This is why epidemiologists instead use an “effective R” in their models, a coefficient that can change with time and with people’s habits. Roughly speaking, this means that if we were all very careful and very reasonable, then herd immunity would be easier to achieve.

But that R can change is not the biggest problem with estimating herd immunity. The biggest problem is that the simple estimate I just talked about assumes that everybody is equally likely to meet other people, which is just not the case in reality.

In realistic populations under normal circumstances, some people will have an above average number of contacts, and others below average. Now, people who have many contacts are likely to contribute a lot to the spread of the disease, but they are also likely to be among the first ones to contract the disease, and therefore become immune early on.

This means, if you use information about the mobility patterns, social networks, and population heterogeneity, the herd immunity threshold is lower because the biggest spreaders are the first to stop spreading. Taking this into account, some researchers have estimated the COVID herd immunity threshold to be more like 40% or in some optimistic cases even below 20%.

How reliable are these estimates? To me it looks like these estimates are based on more or less plausible models with little empirical data to back them up. And plausible models are the ones one should be especially careful with.

So what do the data say? Unfortunately, so far not much. The best data on herd immunity so far come from an antibody study in the Brazilian city of Manaus. That’s one of the largest cities in Brazil, with an estimated population of two point one million.

According to data from the state government, there have been about fifty five thousand COVID cases and two thousand seven hundred COVID fatalities in Manaus. These numbers likely underestimate the true number of infected and deceased people because the Brazilians have not been testing a lot. Then again, most countries did not have sufficient testing during the first wave.

If you go by the reported numbers, then about two point seven percent of the population in Manaus tested positive for COVID at some point during the outbreak. But the study which used blood donations collected during this time found that about forty-four percent of the population developed antibodies in the first three months of the outbreak.

After that, the infections tapered off without interventions. The researchers estimate the total number of people who eventually developed antibodies at sixty-six percent. The researchers claim that’s a sign of herd immunity. Please check the information below the video for references.

The number from this Brazilian study, about 44 to 66 percent, seems consistent with the more pessimistic estimates for the COVID herd immunity threshold. But what it took to get there is not pretty.

2700 dead out of about two million, that’s more than one in a thousand. Hospitals ran out of intensive care units, people were dying in the corridors, and the city was scrambling to find ways to bury the dead quickly enough. And that’s even though the population of Manaus is pretty young; just six percent are older than sixty years. For comparison, in the United States, about 20% are above sixty years of age, and older people are more likely to die from the disease.

There are other reasons one cannot really compare Manaus with North America or Europe. Their health care system was working at almost full capacity even before the outbreak, and according to data from the World Bank, in the Brazilian state that Manaus belongs to, the state of Amazonas, about 17% of people live below the poverty line. Also, most of the population in Manaus did not follow social distancing rules and few of them wore masks. These factors likely contributed to the rapid spread of the disease.

And I should add that the paper with the antibody study in Manaus has not yet been peer reviewed. There are various reasons why the people who donated blood may not be representative of the population. The authors write that they corrected for this, but it remains to be seen what the reviewers think.

You probably want to know now how close we are to reaching herd immunity. The answer is, for all I can tell, no one knows. That’s because, even leaving aside that we have no reliable estimates of the herd immunity threshold, we do not know how many people have developed immunity to COVID.

In Manaus, the number of people who developed antibodies was more than twenty times higher than the number of those who tested positive. To date, in the United States about eight point five million people have tested positive for COVID. The total population is about 330 million.

This means about 2.5% of Americans have demonstrably contracted the disease, a rate that just by number is similar to the rate in Manaus, though Manaus got there faster with devastating consequences. However, the Americans are almost certainly better at testing and one cannot compare a sparsely populated country, like the United States, with one densely populated city in another country. So, again, it’s complicated.

For the Germans here, in Germany so far about 400,000 people have tested positive. That’s about 0.5 percent of the population.

And then, I should not forget to mention that antibodies are not the only way one can develop immunity. There is also T-cell immunity, which is basically a different defense mechanism of the body. The most relevant difference for the question of herd immunity is that it’s much more difficult to test for T-cell immunity, which is why there are basically no data on it. But there are pretty reliable data by now showing that immunity to COVID is only temporary, antibody levels fall after a few months, and reinfections are possible, though it remains unclear how common they will be.

So, in summary: Estimates for the COVID herd immunity threshold range from roughly twenty percent to seventy percent, there are pretty much no data to make these estimates more accurate, we have no good data on how many people are presently immune, but we know reinfection is possible after a couple of months.

Let us then talk about the Great Barrington Declaration. The Great Barrington Declaration is not actually Great, it was merely written in a place called Great Barrington. The declaration was formulated by three epidemiologists, and according to claims on the website, it has since been signed by more than eleven thousand medical and public health scientists.

The supporters of the declaration disapprove of lockdown measures and instead argue for an approach they call Focused Protection. In their own words:
“The most compassionate approach that balances the risks and benefits of reaching herd immunity, is to allow those who are at minimal risk of death to live their lives normally to build up immunity to the virus through natural infection, while better protecting those who are at highest risk. We call this Focused Protection.”

The reaction by other scientists and the media has been swift and negative. The Guardian called the Barrington Declaration “half baked,” “bad science,” and “a folly”. A group of scientists writing for The Lancet called it a “dangerous fallacy unsupported by scientific evidence”, the US American infectious disease expert Fauci called it “total nonsense,” and John Barry, writing for the New York Times, went so far as to suggest it be called “mass murder” instead of herd immunity. Though they later changed the headline.

Some of the criticism focused on the people who wrote the declaration, or who they might have been supported by. These are ad hominem attacks that just distract from the science, so I don’t want to get into this.

The central element of the criticism is that the Barrington Declaration is vague on how the “Focused Protection” is supposed to work. This is a valid criticism. The declaration left it unclear just how to identify those at risk and how to keep them efficiently apart from the rest of the population, which is certainly difficult to achieve. But of course if no one is thinking about how to do it, there will be no plan for how to do it.

Why am I telling you this? Because I think all these commentators missed the point of the Barrington Declaration. Let us take this quote from an opinion piece in the Guardian in which three public health scientists commented on the idea of focused protection:
“It’s time to stop asking the question “is this sound science?” We know it is not.”
It’s right that arguing for focused protection is not sound science, but that is not because it’s not sound, it’s because it’s not science. It’s a value decision.

The authors of the Great Barrington Declaration point out, entirely correctly, that we are in a situation where we have only bad options. Lockdown measures are bad, pursuing natural herd immunity is also bad.

The question is, which is worse, and just what do you mean by “worse”? This is the decision that politicians are facing now, and it is not obvious what the best strategy is. This decision must be supported by data on the consequences of each possible path of action. So we need to discuss not only how many people die from COVID and what the long-term health problems may be, but also how lockdowns, social distancing, and economic distress affect health and health care. We need proper risk estimates with uncertainties. We do not need scientists who proclaim that science tells us what’s the right thing to do.

I hope that this brief survey of the literature on herd immunity was helpful for you.


I have a video upcoming later today with astrophysicist (!) Niayesh Afshordi from Perimeter Institute about his new pandemic model (!!), so stay tuned. He will also join the Thursday chat at 5pm CET. Note that this is the awkward week of the year when the NYC-Berlin time shift is only 5 hours, so that's noon Eastern Time.

Saturday, October 24, 2020

How can climate be predictable if weather is chaotic?

[This is a transcript of the video embedded below. Some parts of the text may not make sense without the graphics in the video.]

Today I want to take on a question that I have not been asked, but that I have seen people asking – and not getting a good answer. It’s how scientists can predict the climate in one hundred years if they cannot make weather forecasts beyond two weeks – because of chaos. The answer they usually get is “climate is not weather”, which is correct, but doesn’t really explain it. And I think it’s actually a good question. How is it possible that one can make reliable long-term predictions when short-term predictions are impossible? That’s what we will talk about today.


Now, weather forecasting is hideously difficult, and I am not a meteorologist, so I will instead just use the best-known example of a chaotic system, which is the one studied by Edward Lorenz in 1963.

Edward Lorenz was a meteorologist who discovered by accident that weather is chaotic. In the 1960s, he repeated a calculation to predict a weather trend, but rounded an initial value from six digits after the point to only three digits. Despite the tiny difference in the initial value, he got wildly different results. That’s chaos, and it gave rise to the idea of the “butterfly effect”, that the flap of a butterfly in China might cause a tornado in Texas two weeks later.

To understand better what was happening, Lorenz took his rather complicated set of equations and simplified it to a set of only three equations that nevertheless captures the strange behavior he had noticed. These three equations are now commonly known as the “Lorenz Model”. In the Lorenz model, we have three variables, X, Y, and Z and they are functions of time, that’s t. This model can be interpreted as a simplified description of convection in gases or fluids, but just what it describes does not really matter for our purposes.
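For reference, in standard notation the three equations of the Lorenz model are

$$\frac{dX}{dt} = \sigma\,(Y - X), \qquad \frac{dY}{dt} = X\,(\rho - Z) - Y, \qquad \frac{dZ}{dt} = X\,Y - \beta\,Z,$$

where σ, ρ, and β are constant parameters. The video does not say which values were used for its plots; the classic choice, and the one I assume in the code sketches below, is σ = 10, ρ = 28, β = 8/3.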

The nice thing about the Lorenz model is that you can integrate the equations on a laptop. Let me show you one of the solutions. Each of the axes in this graph is one of the directions X, Y, Z, so the solution to the Lorenz model will be a curve in these three dimensions. As you can see, it circles around two different locations, back and forth.
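If you want to try this yourself, here is a minimal sketch (Python with NumPy, SciPy, and Matplotlib) that integrates the Lorenz model and draws one solution curve in the three directions. The parameter values are the classic ones mentioned above, which is an assumption on my part:

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Lorenz model with the classic parameter values (an assumption; the video
# does not say which values were used for its plots).
SIGMA, RHO, BETA = 10.0, 28.0, 8.0/3.0

def lorenz(t, state):
    x, y, z = state
    return [SIGMA*(y - x), x*(RHO - z) - y, x*y - BETA*z]

# Integrate one solution and draw the curve in the three directions X, Y, Z.
sol = solve_ivp(lorenz, (0, 100), [1.0, 1.0, 1.0], max_step=0.01)

ax = plt.figure().add_subplot(projection="3d")
ax.plot(sol.y[0], sol.y[1], sol.y[2], lw=0.4)
ax.set_xlabel("X"); ax.set_ylabel("Y"); ax.set_zlabel("Z")
plt.show()
```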

It's not only this one solution that does this; actually, all the solutions end up circling close by these two places in the middle, in what is called the “attractor”. The attractor has an interesting shape, and coincidentally happens to look somewhat like a butterfly with two parts you could call “wings”. But more relevant for us is that the model is chaotic. If we take two initial values that are very similar, but not exactly identical, as I have done here, then the curves at first look very similar, but then they run apart, and after some while they are entirely uncorrelated.

These three dimensional plots are pretty, but it’s somewhat hard to see just what is going on, so in the following I will merely look at one of these coordinates, that is the X-direction. From the three dimensional plot, you expect that the value in X-direction will go back and forth between two numbers, and indeed that’s what happens.

Here you see again the curves I previously showed for two initial values that differ by a tiny amount. At first the two curves look pretty much identical, but then they diverge and after some time they become entirely uncorrelated. As you see, the curves flip back and forth between positive and negative values, which correspond to the two wings of the attractor. In this early range, maybe up to t equals five, you would be able to make a decent weather forecast. But after that, the outcome depends very sensitively on exactly what initial value you used, and then measurement error makes a good prediction impossible. That’s chaos.
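Here is a minimal sketch of this divergence, again with the classic parameter values as an assumption: two initial values that differ by one part in a million, with only the X-coordinate plotted against time.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

SIGMA, RHO, BETA = 10.0, 28.0, 8.0/3.0  # classic values, assumed as above

def lorenz(t, state):
    x, y, z = state
    return [SIGMA*(y - x), x*(RHO - z) - y, x*y - BETA*z]

# Two initial values that differ by one part in a million in the X-direction.
t = np.linspace(0, 40, 8000)
sol_a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t, max_step=0.01)
sol_b = solve_ivp(lorenz, (0, 40), [1.000001, 1.0, 1.0], t_eval=t, max_step=0.01)

# At first the two X-curves lie on top of each other, then they run apart.
plt.plot(t, sol_a.y[0], label="X(0) = 1.0")
plt.plot(t, sol_b.y[0], label="X(0) = 1.000001")
plt.xlabel("t"); plt.ylabel("X"); plt.legend(); plt.show()
```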

Now, I want to pretend that these curves say something about the weather. Maybe they describe the weather on a strange planet where it either doesn’t rain at all or it pours, and the weather just flips back and forth between these two extremes. Besides making the short-term weather forecast, you could then also ask what the average rainfall is in a certain period, say, a year.

To calculate this average, you would integrate the curve over some period of time, and then divide by the duration of that period. So let us plot these curves again, but for a longer period. Just by eyeballing these curves you’d expect the average to be approximately zero. Indeed, I calculated the average from t equals zero to t equals one hundred, and it comes out to be approximately zero. What this means is that the system spends about equal amounts of time on each wing of the attractor.

To stick with our story of rainfall on the weird planet, you can imagine that the curve shows deviations from a reference value that you set to zero. The average value depends on the initial value and will fluctuate around zero, because I am only integrating over a finite period of time, so I arbitrarily cut off the curve somewhere. If you’d average over longer periods of time, the average would inch closer and closer to zero.
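Here is a minimal sketch of that average, computed with the trapezoidal rule over the finite window from t = 0 to t = 100 (same assumed parameter values as above):

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA, RHO, BETA = 10.0, 28.0, 8.0/3.0  # classic values, assumed as above

def lorenz(t, state):
    x, y, z = state
    return [SIGMA*(y - x), x*(RHO - z) - y, x*y - BETA*z]

# Time average of the X-coordinate over the finite window t = 0 ... 100,
# approximated with the trapezoidal rule. Because the equations are symmetric
# under flipping the signs of X and Y, the result scatters around zero and
# moves closer to zero the longer the averaging window is.
t = np.linspace(0, 100, 20000)
sol = solve_ivp(lorenz, (0, 100), [1.0, 1.0, 1.0], t_eval=t, max_step=0.01)
average_x = np.trapz(sol.y[0], t) / (t[-1] - t[0])
print(average_x)
```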

What I will do now is add a constant to the equations of the Lorenz model. I will call this constant “f”; it mimics what climate scientists call “radiative forcing”. The radiative forcing is the excess power per area that Earth captures due to increasing carbon dioxide levels. Again, that’s relative to a reference value.

I want to emphasize again that I am using this model only as an analogy. It does not actually describe the real climate. But it does make a good example for how to make predictions in chaotic systems.

Having said that, let us look again at what the curves look like with the added forcing. These are the curves for f equals one. Looks pretty much the same as previously, if you ask me. f=2. I dunno. You wouldn’t believe how much time I have spent staring at these curves for this video. f=3. Looks like the system is spending a little more time in this upper range, doesn’t it? f=4. Yes, it clearly does. And just for fun: if you turn f up beyond seven or so, the system will get stuck on one side of the attractor immediately.

The relevant point is now that this happens for all initial values. Even though the system is chaotic, one clearly sees that the response of the system does have a predictable dependence on the input parameter.

To see this better, I have calculated the average of these curves as a function of the “radiative forcing”, for a sample of initial values. And this is what you get. You clearly see that the average value is strongly correlated with the radiative forcing. Again, the scatter you see here is because I am averaging over a rather arbitrarily chosen finite period.
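Here is a minimal sketch of such a sweep. One loud assumption: the video does not say to which of the three equations the constant f was added, so I simply add it to the dX/dt equation. The point is only to illustrate the procedure of averaging over a finite window for a sample of initial values while varying f, not to reproduce the exact plot from the video.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

SIGMA, RHO, BETA = 10.0, 28.0, 8.0/3.0  # classic values, assumed as above

def forced_lorenz(t, state, f):
    # Assumption: the constant forcing f is added to the dX/dt equation;
    # the video does not say to which equation it was added.
    x, y, z = state
    return [SIGMA*(y - x) + f, x*(RHO - z) - y, x*y - BETA*z]

t = np.linspace(0, 100, 10000)
rng = np.random.default_rng(1)

for f in np.linspace(0, 4, 9):
    for _ in range(5):  # a small sample of random initial values
        y0 = rng.uniform(-10, 10, size=3)
        sol = solve_ivp(forced_lorenz, (0, 100), y0, args=(f,),
                        t_eval=t, max_step=0.01)
        avg = np.trapz(sol.y[0], t) / (t[-1] - t[0])
        plt.plot(f, avg, "k.", alpha=0.5)

plt.xlabel("forcing f"); plt.ylabel("time average of X")
plt.show()
```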

What this means is that in a chaotic system, the trends of average values can be predictable, even though you cannot predict the exact state of the system beyond a short period of time. And this is exactly what is happening in climate models. Scientists cannot predict whether it will rain on June 15th, 2079, but they can very well predict the average rainfall in 2079 as a function of increasing carbon dioxide levels.

This video was sponsored by Brilliant, which is a website that offers interactive courses on a large variety of topics in science and mathematics. In this video I showed you the results of some simple calculations, but if you really want to understand what is going on, then Brilliant is a great starting point. Their courses on Differential Equations I and II, probabilities and statistics cover much of the basics that I used here.

To support this channel and learn more about Brilliant, go to Brilliant.org/Sabine and sign up for free. The first 200 subscribers using this link will get 20 percent off the annual premium subscription.



You can join the chat about this week’s video, tomorrow (Sunday, Oct 25) at 5pm CET, here.

Thursday, October 22, 2020

Particle Physicists Continue To Make Empty Promises

[This is a transcript of the video embedded below]

Hello and welcome back to my YouTube channel. Today I want to tell you how particle physicists are wasting your money. I know that’s not nice, but at the end of this video I think you will understand why I say what I say.


What ticked me off this time was a comment published in Nature Physics, by CERN Director-General Fabiola Gianotti and Gian Giudice, who is Head of CERN's Theory Department. It’s called a comment, but what it really is is an advertisement. It’s a sales pitch for their next larger collider for which they need, well, a few dozen billion Euro. We don’t know exactly because they are not telling us how expensive it would be to actually run the thing. When it comes to the question what the new mega collider could do for science, they explain:
“A good example of a guaranteed result is dark matter. A proton collider operating at energies around 100 TeV [that’s the energy of the planned larger collider] will conclusively probe the existence of weakly interacting dark-matter particles of thermal origin. This will lead either to a sensational discovery or to an experimental exclusion that will profoundly influence both particle physics and astrophysics.”
Let me unwrap this for you. The claim that dark matter is a guaranteed result, followed by weasel words about weakly interacting and thermal origin, is the physics equivalent of claiming “We will develop a new drug with the guaranteed result of curing cancer” followed by weasel words to explain, well, actually it will cure a type of cancer that exists only theoretically and has never been observed in reality. That’s how “guaranteed” this supposed dark matter result is. They guarantee to rule out some very specific hypotheses for dark matter that we have no reason to think are correct in the first place. What is going on here?

What’s going on is that particle physicists have a hard time understanding that when Popper went on about how important it is that a scientific hypothesis is falsifiable, he did not mean that a hypothesis is scientific just because it is falsifiable. There are lots of falsifiable hypotheses that are clearly unscientific.

For example, YouTube will have a global blackout tomorrow at noon central time. That’s totally falsifiable. If you give me 20 billion dollars, I can guarantee that I can test this hypothesis. Of course it’s not worth the money. Why? Because my hypothesis may be falsifiable, but it’s unscientific because it’s just guesswork. I have no reason whatsoever to think that my blackout prediction is correct.

The same is the case with particle physicists’ hypotheses for dark matter that you are “guaranteed” to rule out with that expensive big collider. Particle physicists literally have thousands of theories for dark matter, some thousands of which have already been ruled out. Can they guarantee that a next larger collider can rule out some more? Yes. What is the guaranteed knowledge we will gain from this? Well, the same as the gain that we have gotten so far from ruling out their dark matter hypotheses, which is that we still have no idea what dark matter is. We don’t even know it is a particle to begin with.

Let us look again at that quote, they write:
“This will lead either to a sensational discovery or to an experimental exclusion that will profoundly influence both particle physics and astrophysics.”
No. The most likely outcome will be that particle physicists and astrophysicists will swap their current “theories” for new “theories” according to which the supposed particles are heavier than expected. Then they will claim that we need yet another bigger collider to find them. What makes me think this will happen? Am I just bitter or cynical, as particle physicists accuse me of being? No, I am just looking at what they have done in the past.

For example, here’s an oldie but goldie, a quote from a piece written by string theorists David Gross and Edward Witten for the Wall Street Journal:
“There is a high probability that supersymmetry, if it plays the role physicists suspect, will be confirmed in the next decade.”
They wrote this in 1996. Well, clearly that didn’t pan out.

And because it’s so much fun, I want to read you a few more quotes. But they are a little bit more technical, so I have to give you some background first.

When particle physicists say “electroweak scale” or “TeV scale” they mean energies that can be tested at the Large Hadron Collider. When they say “naturalness” they refer to a certain type of mathematical beauty that they think a theory should fulfil.

You see, particle physicists think it is a great problem that theories which have been experimentally confirmed are not as beautiful as particle physicists think nature should be. They have therefore invented a lot of particles that you can add to the supposedly ugly theories to remedy the lack of beauty. If this sounds like a completely non-scientific method, that’s because it is. There is no reason this method should work, and as a matter of fact it does not work. But they have done this for decades and still have not learned that it does not work.

Having said that, here is a quote from Giudice and Rattazzi in 1998. That’s the same Giudice who is one of the authors of the new Nature Physics comment that I mentioned in the beginning. In 1998 he wrote:
“The naturalness (or hierarchy) problem, is considered to be the most serious theoretical argument against the validity of the Standard Model (SM) of elementary particle interactions beyond the TeV energy scale. In this respect, it can be viewed as the ultimate motivation for pushing the experimental research to higher energies.”
Higher energies, at that time, were the energies that have now been tested at the Large Hadron Collider. The supposed naturalness problem was the reason they thought the LHC should see new fundamental particles besides the Higgs. This has not happened. We now know that those arguments were wrong.

In 2004, Fabiola Gianotti, that’s the other author of the new Nature Physics comment, wrote:
“[Naturalness] arguments open the door to new and more fundamental physics. There are today several candidate scenarios for physics beyond the Standard Model, including Supersymmetry (SUSY), Technicolour and theories with Extra-dimensions. All of them predict new particles in the TeV region, as needed to stabilize the Higgs mass. We note that there is no other scale in particle physics today as compelling as the TeV scale, which strongly motivates a machine like the LHC able to explore directly and in detail this energy range.”
So, she claimed in 2004 that the LHC would see new particles besides the Higgs. Whatever happened to this prediction? Did they ever tell us what they learned from being wrong? Not to my knowledge.

These people were certainly not the only ones who repeated this story. Here is for example a quote from the particle physicist Michael Dine, who wrote in 2007:
“The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.”
Well, you know what, it hasn’t done either.

I could go on for quite some while quoting particle physicists who made wrong predictions and now pretend they didn’t, but it’s rather repetitive. I have collected the references here. Let us instead talk about what this means.

All these predictions from particle physicists were wrong. There is no shame in being wrong. Being wrong is essential for science. But what is shameful is that none of these people ever told us what they learned from being wrong. They did not revise their methods for making predictions for new particles. They still use the same methods that have not worked for decades. Neither did they do anything about the evident group think in their community. But they still want more money.

The tragedy is I actually like most of these particle physicists. They are smart and enthusiastic about science and for the most part they’re really nice people.

But look, they refuse to learn from evidence. And someone has to point it out: The evidence clearly says their methods are not working. Their methods have led to thousands of wrong predictions. Scientists should learn from failure. Particle physicists refuse to learn.

Particle physicists, of course, are entirely ignoring my criticism and instead call me “anti-science”. Let that sink in for a moment. They call me “anti-science” because I say we should think about where to best invest science funding, and if you do a risk-benefit assessment it is clear that building a bigger collider is not currently a good investment. It is both high risk and low benefit. We would be better off if we'd instead invest in the foundations of quantum mechanics and astroparticle physics. They call me “anti-science” because I ask scientists to think. You can’t make up this shit.

Frankly, the way that particle physicists behave makes me feel embarrassed I ever had anything to do with their field.

Saturday, October 17, 2020

I Can’t Forget [Remix]

In the midst of the COVID lockdown I decided to remix some of my older songs. Just as I was sweating over the meters, I got an email out of the blue. Steven Nikolic from Canada wrote he’d be interested in remixing some of my old songs. A few months later, we had started a few projects together. Below you see the first result, a remake of my 2014 song “I Can’t Forget”.


If you want to see what difference 6 years can make, in hardware, software, and wrinkles, the original is here.

David Bohm’s Pilot Wave Interpretation of Quantum Mechanics

Today I want to take on a topic many of you requested, repeatedly. That is David Bohm’s approach to Quantum Mechanics, also known as the Pilot Wave Interpretation, or sometimes just Bohmian Mechanics. In this video, I want to tell you what Bohmian mechanics is, how it works, and what’s good and bad about it.

Before I get to that, I want to tell you a little about David Bohm himself, because I think the historical context is relevant to understand today’s situation with Bohmian Mechanics. David Bohm was born in 1917 in Pennsylvania, in the Eastern United States. His early work in physics was in the areas we would now call plasma physics and nuclear physics. In 1951, he published a textbook about quantum mechanics. In the course of writing it, he became dissatisfied with the then prevailing standard interpretation of quantum mechanics.

The standard interpretation at the time was the one pioneered by the Copenhagen group – notably Bohr and Heisenberg – and is today usually referred to as the Copenhagen Interpretation. It works as follows. In quantum mechanics, everything is described by a wave-function, usually denoted Psi. Psi is a function of time. One can calculate how it changes in time with a differential equation known as the Schrödinger equation. When one makes a measurement, one calculates probabilities for the measurement outcomes from the wave-function. The rule by which one calculates these probabilities is known as Born’s Rule. I explained in an earlier video how this works.
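In formulas, for those who like to see them: the wave-function Psi evolves according to the Schrödinger equation, and Born’s Rule turns it into probabilities,

$$ i\hbar\,\frac{\partial \Psi}{\partial t} = \hat H\,\Psi, \qquad P = |\Psi|^2, $$

where \(\hat H\) is the Hamilton operator of the system and the absolute square of the wave-function gives the probability of a measurement outcome.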

The peculiar thing about the Copenhagen Interpretation is that it does not tell you what happens before you make a measurement. If you have a particle described by a wave-function that says the particle is in two places at once, then the Copenhagen Interpretation merely says that at the moment you measure the particle it’s either here or there, with a certain probability that follows from the wave-function. But how the particle transitioned from being in two places at once to suddenly being in only one place, the Copenhagen Interpretation does not tell you. Those who advocate this interpretation would say that’s a question you are not supposed to ask because, by definition, what happens before the measurement is not measurable.

Bohm was not the only one dismayed that the Copenhagen people would answer a question by saying you’re not supposed to ask it. Albert Einstein didn’t like it either. If you remember, Einstein famously said “God does not throw dice”, by which he meant that he did not believe the probabilistic nature of quantum mechanics is fundamental. In contrast to what is often claimed, Einstein did not think quantum mechanics was wrong. He just thought it is probabilistic the same way classical physics is probabilistic, namely, that our inability to predict the outcome of a measurement in quantum mechanics comes from our lack of information. Einstein thought, in a nutshell, that there must be some more information, some information that is missing in quantum mechanics, which is why it appears random.

This missing information in quantum mechanics is usually called “hidden variables”. If you knew the hidden variables, you could predict the outcome of a measurement. But the variables are “hidden”, so you can only calculate the probability of getting a particular outcome.

Back to Bohm. In 1952, he published two papers in which he laid out his idea for how to make sense of quantum mechanics. According to Bohm, the wave-function in quantum mechanics is not what we actually observe. Instead, what we observe are particles, which are guided by the wave-function. One can arrive at this interpretation in a few lines of calculation. I will not go through this in detail because it’s probably not so interesting for most of you. Let me just say you take the wave-function apart into an absolute value and a phase, insert it into the Schrödinger equation, and then separate the resulting equation into its real and imaginary part. That’s pretty much it.

The result is that in Bohmian mechanics the Schrödinger equation falls apart into two equations. One describes the conservation of probability and determines what the guiding field does. The other determines the position of the particle, and it depends on the guiding field. This second equation is usually called the “guiding equation.” So this is how Bohmian mechanics works. You have particles, and they are guided by a field which in turn depends on the particle.

To use Bohm’s theory, you then need one further assumption, one that tells what the probability is for the particle to be at a certain place in the guiding field. This adds another equation, usually called the “quantum equilibrium hypothesis”. It is basically equivalent to Born’s rule and says that the probability for finding the particle in a particular place in the guiding field is given by the absolute square of the wave-function at that place. Taken together, these equations – the conservation of probability, the guiding equation, and the quantum equilibrium hypothesis – give the exact same predictions as quantum mechanics. The important difference is that in Bohmian mechanics, the particle is really always in only one place, which is not the case in quantum mechanics.
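For those who want to see the few lines of calculation, here is a sketch for a single particle of mass m in a potential V; the many-particle case works analogously. Write the wave-function in polar form, $\Psi = R\,e^{iS/\hbar}$ with real functions R and S, and insert this into the Schrödinger equation. The imaginary part gives the conservation of probability,

$$\frac{\partial R^2}{\partial t} + \nabla\cdot\!\left(R^2\,\frac{\nabla S}{m}\right) = 0,$$

the real part gives a Hamilton-Jacobi-like equation with an extra “quantum potential” term,

$$\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R} = 0,$$

the guiding equation for the particle position is

$$\frac{d\vec x}{dt} = \frac{\nabla S}{m},$$

and the quantum equilibrium hypothesis says that the probability of finding the particle at $\vec x$ is $\rho(\vec x) = R^2 = |\Psi|^2$.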

As they say, a picture speaks a thousand words, so let me just show you what this looks like for the double slit experiment. These thin black curves you see here are the possible ways that the particle could go from the double slit to the screen where it is measured by following the guiding field. Just which way the particle goes is determined by the place it started from. The randomness in the observed outcome is simply due to not knowing exactly where the particle came from.
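If you’d like to play with this yourself, here is a minimal numerical sketch of Bohmian trajectories. It is not the two-dimensional double-slit setup shown in the video, but a one-dimensional analogue in which the initial wave-function consists of two Gaussian bumps standing in for the two slits; the wave-function then spreads freely, the bumps overlap and interfere, and the particles are moved along the guiding field. Everything here (grid size, widths, time step, the split-step integrator) is my own choice for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

hbar = m = 1.0

# Position grid and matching momentum grid for the split-step Fourier method.
N, L = 2048, 80.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
dt, steps = 0.005, 1500

# Initial wave-function: two Gaussian bumps, a 1D stand-in for the two slits.
s, a = 0.7, 4.0
psi = np.exp(-(x - a)**2/(4*s**2)) + np.exp(-(x + a)**2/(4*s**2))
psi = psi.astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

# Quantum equilibrium hypothesis: draw initial particle positions from |psi|^2.
rng = np.random.default_rng(0)
p = np.abs(psi)**2*dx
positions = rng.choice(x, size=40, p=p/p.sum())

def guiding_velocity(psi, xs):
    # Guiding equation: v = (hbar/m) Im(psi* dpsi/dx) / |psi|^2, with a small
    # regulator in the denominator to avoid dividing by zero near nodes.
    j = np.imag(np.conj(psi)*np.gradient(psi, dx))
    v = hbar/m*j/(np.abs(psi)**2 + 1e-12)
    return np.interp(xs, x, v)

history = [positions.copy()]
for _ in range(steps):
    # Free evolution (V = 0): the split-step kick is exact for the kinetic
    # term; then move the particles one Euler step along the guiding field.
    psi = np.fft.ifft(np.exp(-1j*hbar*k**2*dt/(2*m))*np.fft.fft(psi))
    positions = positions + guiding_velocity(psi, positions)*dt
    history.append(positions.copy())

t = np.arange(steps + 1)*dt
plt.plot(t, np.array(history), lw=0.5, color="k")
plt.xlabel("t"); plt.ylabel("x"); plt.show()
```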

What is it good for? The great thing about Bohmian mechanics is that it explains what happens in a quantum measurement. Bohmian mechanics says that the reason we can only make probabilistic predictions in quantum mechanics is just that we did not exactly know where the particle initially was. If we measure it, we find out where it is. Nothing mysterious about this. Bohm’s theory, therefore, says that probabilities in quantum mechanics are of the same type as in classical mechanics. The reason we can only predict probabilities for outcomes is because we are missing information. Bohmian mechanics is a hidden variables theory, and the hidden variables are the positions of those particles.

So, that’s the big benefit of Bohmian mechanics. I should add that while Bohm was working on his papers, it was brought to his attention that a very similar idea had previously been put forward in 1927 by de Broglie. This is why, in the literature, the theory is often more accurately referred to as “de Broglie-Bohm”. But de Broglie’s proposal did not, at the time, attract much attention. So how did physicists react to Bohm’s proposal in fifty-two? Not very kindly. Niels Bohr called it “very foolish”. Leon Rosenfeld called it “very ingenious, but basically wrong”. Oppenheimer put it down as “juvenile deviationism”. And Einstein, too, was not convinced. He called it “a physical fairy-tale for children” and “not very hopeful.”

Why the criticism? One of the big disadvantages of Bohmian mechanics, one that Einstein in particular disliked, is that it is even more non-local than quantum mechanics already is. That’s because the guiding field depends on all the particles you want to measure. This means that if you have a system of entangled particles, then the guiding equation says the velocity of one particle depends on the positions of all the other particles, regardless of how far away they are from each other.

That’s a problem because we know that quantum mechanics is, strictly speaking, only an approximation. The correct theory is really a more complicated version of quantum mechanics, known as quantum field theory. Quantum field theory is the type of theory that we use for the Standard Model of particle physics. It’s what people at CERN use to make predictions for their experiments. And in quantum field theory, locality and the speed-of-light limit are super-important. They are built very deeply into the math.

The problem is now that since Bohmian mechanics is not local, it has turned out to be very difficult to make a quantum field theory out of it. Some have made attempts, but currently there is simply no Pilot Wave alternative for the Standard Model of Particle Physics. And for many physicists, me included, this is a game stopper. It means the Bohmian approach cannot reproduce the achievements of the Copenhagen Interpretation.

Bohmian mechanics has another odd feature that seems to have perplexed Albert Einstein and John Bell in particular. It’s that, depending on the exact initial position of the particle, the guiding field tells the particle to go either one way or another. But the guiding field has a lot of valleys where particles could be going. So what happens with the empty valleys if you make a measurement? In principle, these empty valleys continue to exist. David Deutsch has claimed this means “pilot-wave theories are parallel-universes theories in a state of chronic denial.”

Bohm himself, interestingly enough, seems to have changed his attitude towards his own theory. He originally thought it would in some cases give predictions different from quantum mechanics. I only learned this recently from a biography of Bohm written by David Peat. Peat writes:

“Bohm told Einstein… his only hope was that conventional quantum theory would not apply to very rapid processes. Experiments done in a rapid succession would, he hoped, show divergences from the conventional theory and give clues as to what lies at a deeper level.”

However, Bohm had pretty much the whole community against him. After a particularly hefty criticism by Heisenberg, Bohm changed course and claimed that his theory made the same predictions as quantum mechanics. But it did not help. After this, they just complained that the theory did not make new predictions. And in the end, they just ignored him.

So is Bohmian mechanics in the end just a way of making you feel better about the predictions of quantum mechanics? Depends on whether or not you think the “quantum equilibrium hypothesis” is always fulfilled. If it is always fulfilled, the two theories give the same predictions. But if the equilibrium is actually a state the system must first settle in, as the name certainly suggests, then there might be cases when this assumption is not fulfilled. And then, Bohmian mechanics is really a different theory. Physicists still debate today whether such deviations from quantum equilibrium can happen, and whether we can therefore find out that Bohm was right.

This video was sponsored by Brilliant, which is a website that offers interactive courses on a large variety of topics in science and mathematics. I always try to show you some of the key equations, but if you really want to understand how to use them, then Brilliant is a great starting point. For this video, for example, I would recommend their courses on differential equations, linear algebra, and quantum objects. To support this channel and learn more about Brilliant, go to Brilliant.org/Sabine and sign up for free. The first 200 subscribers using this link will get 20 percent off the annual premium subscription.



You can join the chats on this week’s topic using the Converseful app in the bottom right corner: