Friday, May 29, 2020

Understanding Quantum Mechanics #3: Non-locality

Locality means that to get from one point to another you somehow have to make a connection in space between these points. You cannot suddenly disappear and reappear elsewhere. At least that was Einstein’s idea. In quantum mechanics it’s more difficult. Just exactly how quantum mechanics is and is not local, that’s what we will talk about today.


To illustrate why it’s complicated, let me remind you of an experiment we already talked about in a previous video. Suppose you have a particle with total spin zero. The spin is conserved and the particle decays into two new particles. One goes left, one goes right. But you know that the two new particles cannot each have spin zero. Each can only have a spin with an absolute value of 1. The easiest way to think of this spin is as a little arrow. Since the total spin is zero, the two spin-arrows of the particles have to point in opposite directions. You do not know which direction either of the arrows points, but you do know that they have to add to zero. Physicists then say that the two particles are “entangled”.

The question is now what happens if you measure one of the particles’ spins. This experiment was originally proposed as a thought experiment by Einstein, Podolsky, and Rosen, and is therefore also known as the EPR experiment. Well, actually the original idea was somewhat more complicated, and this is a simpler version that was later proposed by Bohm, but the distinction really doesn’t matter for us. The EPR experiment has since actually been done, many times, so we know what the outcome is. The outcome is... that if you measure the spin of the particle on one side, then the spin of the particle on the other side has the opposite value. Ok, I see you are not surprised. Because, eh, we knew this already, right? So what is the big deal?

Indeed, at first sight entanglement does not appear particularly remarkable because it seems you can do the same thing without quantum anything. Suppose you take a pair of shoes and put them in separate boxes. You don’t know which box contains the left shoe and which the right shoe. You send one box to your friend overseas. The moment your friend opens her box, she will instantaneously know what’s in your box. That seems to be very similar to the two particles with total spin zero.

But it is not, and here’s why. Shoes do not have quantum properties, so the question which box contained the left shoe and which the right shoe was decided already when you packed them. The one box travels entirely locally to your friend, while the other one stays with you. When she opens the box, nothing happens with your box, except that now she knows what’s in it. That’s indeed rather unsurprising.

The surprising bit is that in quantum mechanics this explanation does not work. If you assume that the spin of the particle that goes left was already decided when the original particle decayed, then this does not fit with the observations.

The way that you can show this is to not measure the spin in the same direction on both sides, but to measure them in two different directions. In quantum mechanics, the spin in two orthogonal directions has the same type of mutual uncertainty as position and momentum. So if you measure the spin in one direction, then you know nothing about the spin in the orthogonal direction. This means if you measure the spin in the up-down direction on the left side and in a horizontal direction on the right side, then there is no correlation between the measurements. If you measure them in the same direction, then the measurements are maximally correlated. Where quantum mechanics becomes important is for what happens in between, if you dial the difference between the measurement directions from orthogonal to parallel. For this case you can calculate how strongly correlated the measurement outcomes would be if the spins had been determined already at the time the original particle decayed. This correlation has an upper bound, which is known as Bell’s inequality. But, and here is the important point: Many experiments have shown that this bound can be violated.
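
To make this concrete, here is a minimal numpy sketch (my own illustration, standard textbook quantum mechanics, in the CHSH formulation of Bell’s inequality). It computes the correlations of the spin-zero state for various pairs of detector angles and evaluates the combination whose value cannot exceed 2 if the spins were determined at the decay:

```python
import numpy as np

# Pauli matrices; a spin measurement along an angle theta in the x-z plane
# is represented by cos(theta)*sigma_z + sin(theta)*sigma_x.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(theta):
    return np.cos(theta) * sz + np.sin(theta) * sx

# The total-spin-zero ("singlet") state: (|up,down> - |down,up>) / sqrt(2)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def correlation(a, b):
    """Expectation value of measuring angle a on the left, b on the right."""
    op = np.kron(spin_op(a), spin_op(b))
    return np.real(singlet.conj() @ op @ singlet)

print(correlation(0, 0))          # parallel settings: -1, maximal anti-correlation
print(correlation(0, np.pi / 2))  # orthogonal settings: 0, no correlation

# CHSH combination at the standard optimal angles; if the outcomes had been
# pre-determined at the decay, its absolute value could not exceed 2.
a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = abs(correlation(a1, b1) - correlation(a1, b2)
        + correlation(a2, b1) + correlation(a2, b2))
print(S)  # 2*sqrt(2) = 2.83..., violating the bound
```

Parallel settings give perfect anti-correlation, orthogonal settings give none, and the optimal in-between settings give 2√2 ≈ 2.83, which no pre-determined spins can match.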

And this creates the key conundrum of quantum mechanics. If the outcome of the measurement had been determined at the time that the entangled state was created, then you cannot explain the observed correlations. So it cannot work the same way as the boxes with shoes. But if the spins were not already determined before the measurement, then they suddenly become determined on both sides the moment you measure at least one of them. And that appears to be non-local.

So this is why quantum mechanics is said to be non-local. Because you have these correlations between separated particles that are stronger than they could possibly be if the state had been determined before measurement. Quantum mechanics, it seems, forces you to give up on determinism and locality. It is fundamentally unpredictable and non-local.

Ok, you may say, cool, then let us build a transmitter, forget our frequent flyer cards, and travel non-locally from here on. Unfortunately, that does not work. Because while quantum mechanics somehow seems to be non-local with these strong correlations, there is nothing that actually observably travels non-locally. You cannot use these correlations to send information of any kind from one side of the experiment to the other side. That’s because on neither side do you actually know what the outcome of these measurements will be if you choose a particular setting. You only know the probability distribution. The only way you can send information is from the place where the particle decayed to the detectors. And that is local in the normal way.
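
In case you want to check this no-signaling property yourself, here is a small sketch along the same lines as above (again my own illustration, not anything from the video): the outcome probabilities on the left side come out the same no matter which direction is measured on the right side.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def projector(theta, outcome):
    """Projector onto spin outcome (+1 or -1) along angle theta in the x-z plane."""
    op = np.cos(theta) * sz + np.sin(theta) * sx
    return (I2 + outcome * op) / 2

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def prob_left_plus(a, b):
    """Probability that the left side measures +1, summed over right outcomes."""
    total = 0.0
    for right_outcome in (+1, -1):
        P = np.kron(projector(a, +1), projector(b, right_outcome))
        total += np.real(singlet.conj() @ P @ singlet)
    return total

# Whatever the right side does, the left side sees the same statistics:
for b in (0.0, 0.7, np.pi / 2, 2.1):
    print(prob_left_plus(0.0, b))  # always 0.5: no usable signal
```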

So, oddly enough, quantum mechanics is entirely local in the common meaning of the word. When physicists say that it is non-local, they mean that particles which have a common origin but were then separated can be more strongly correlated than particles without quantum properties could ever be. I know this sounds somewhat lame, but that’s what quantum non-locality really means.

Having said this, let me add a word of caution. The conclusion that it is not possible to explain the observations by assuming the spins were already determined at the moment the original particle decayed requires the assumption that this decay is independent of the settings of the detectors. This assumption is known as “statistical independence”. If it is violated, it is entirely possible to explain the observations locally and deterministically. This option is known as “superdeterminism” and I will tell you more about it some other time.

Friday, May 22, 2020

Is faster-than-light travel possible?

Einstein said that nothing can travel faster than the speed of light. You have probably heard something like that. But is this really correct? This is what we will talk about today.


But first, a quick YouTube announcement. My channel has seen a lot of new subscribers in the past year. And I have noticed that the newcomers are really confused each time I upload a music video. They’re like oh my god she sings, what’s that? So, to make it easier for you, I will no longer post my music videos here, but I have set up a separate channel for those. This means if you want to continue seeing my music videos, please go and subscribe to my other channel.

Now about faster than light travel. To get the obvious out of the way, no one currently knows how to travel faster than light, so in this sense it’s not possible. But you already knew that and it’s not what I want to talk about. Instead, I want to talk about whether it is possible in principle. Like, is there anything actually preventing us from ever developing a technology for faster than light travel?

To find out let us first have a look at what Einstein really said. Einstein’s theory of Special Relativity contains a speed that all observers will measure to be the same. One can show that this is the speed of massless particles. And since the particles of light are, for all we currently know, massless, we usually identify this invariant speed with the speed of light. But if it turned out one day that the particles of light have a small, nonzero mass, then we would still have this invariant speed in Einstein’s theory, it just would not be the speed of light any more.

Next, Einstein also showed that if you have any particle which moves slower than the speed of light, then you cannot accelerate it to faster than the speed of light. You cannot do that because it would take an infinite amount of energy. And this is why you often hear that the speed of light is an upper limit.
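
Here is a quick numerical illustration of that statement (my own sketch; I picked the electron mass, but any mass shows the same behavior): the kinetic energy grows without bound as the speed approaches c.

```python
import numpy as np

c = 299_792_458.0   # speed of light in m/s
m = 9.109e-31       # electron mass in kg (any mass shows the same behavior)

def kinetic_energy(v):
    gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m * c**2

for fraction in (0.5, 0.9, 0.99, 0.9999, 0.999999):
    print(f"v = {fraction} c : E_kin = {kinetic_energy(fraction * c):.3e} J")
# The closer v gets to c, the faster the energy grows; reaching c itself
# would take an infinite amount of energy.
```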

However, there is nothing in Einstein’s theory that forbids a particle from moving faster than light. You just don’t know how to accelerate anything to such a speed. So really Einstein did not rule out faster-than-light motion; he just said, no idea how to get there. However, there is a problem with particles that go faster than light, which is that for some observers they look like they go backwards in time. Really, that’s what the mathematics says.

And that, so the argument goes, is a big problem because once you can travel back in time, you can create causal paradoxes, the so-called “grandfather paradoxes”. The idea is that you could go back in time and kill your own grandfather – accidentally, we hope – so that you would never be born and could not have travelled back in time to kill him, which does not make any sense whatsoever.

So, faster than light travel is a problem because it can lead to causal inconsistencies. At least that’s what most physicists will tell you or maybe have already told you. I will now explain why this is complete nonsense.

It’s not even hard to see what’s wrong with this argument. Imagine you have a particle that goes right to left backwards in time. What would it look like? It would look like a particle going left to right forward in time. These two descriptions are mathematically just entirely identical. A particle does not know which direction of time is forward.

Our observation that forward in time is different from backward in time comes from entropy increase. It arises from the behavior of large numbers of particles. If you have many particles together, you can still in principle reverse any particular process in time, but the reversed process will usually be extremely unlikely. Take the example of mixing dough. It’s very easy to get it mixed up and very difficult to unmix, though that is in principle possible.

In any case, you probably don’t need convincing that we do have an arrow of time and that arrow of time points towards more wrinkles. One direction is forward, the other one is not. That’s pretty obvious. Now, the reason for the grandfather paradox is not faster-than-light travel, but that these stories screw up the direction of the arrow of time. You are going back in time, yet you are getting older. That is the inconsistency. But as long as you have a consistent arrow of time, there is nothing physically wrong with faster-than-light travel.

So, really, the argument from causal paradoxes is rubbish; the paradoxes are easy to prevent, you just have to demand a consistent arrow of time. But there is another problem with faster-than-light travel, and that comes from quantum mechanics. If you take into account quantum mechanics, then a particle that travels faster than light will destroy the universe, basically, which would be unfortunate. Also, it should already have happened, so the existence of faster-than-light particles seems to not agree with what we observe.

The reason is that particles which move faster than light can have negative energy. And in quantum mechanics you can create pairs of new particles provided you conserve the total energy. Now, if you have particles with negative energy, you can pair them with particles of positive energy, and then you can create arbitrarily many of these pairs from nothing. Physicists then say that the vacuum is unstable. Btw, since it is a common confusion, let me mention that anti-particles do NOT have negative energy. But faster than light particles can have negative energy.

This is clearly something to worry about. However, the conclusion depends on how seriously you take quantum theory. Personally I think quantum theory is not itself fundamental, but it is only an approximation to a better theory that has not yet been developed. The best evidence for this is the measurement problem which I talked about in an earlier video. So I think that this supposed problem with the vacuum instability comes from taking quantum mechanics too seriously.

Also, you can circumvent the problem with the negative energies if you travel faster than light by using wormholes, because in this case you can use entirely normal particles. Wormholes are basically shortcuts in space. Instead of taking the long way from here to Andromeda, you could hop into one end of a wormhole and just reappear at the other end. Unfortunately, there are good reasons to think that wormholes don’t exist, which I talked about in an earlier video.

In summary, there is no reason in principle why faster than light travel or faster than light communication is impossible. Maybe we just have not figured out how to do it.

Talk To Me [I've been singing again]

This is for the guy who recommended I “release” my “inner Whitney Houston”.

Book Update

The US version of my book "Lost in Math" is about to be published in paperback. You can now pre-order it, which of course you should.

I am quite pleased that what I wrote in the book five years ago has held up so beautifully. There has been zero further progress in the foundations of physics and, needless to say, there will be zero progress until physicists understand that they need to change their methodology. Chances that they actually understand this are not exactly zero, but very close to it.

In other news, on Monday I gave an online seminar about Superdeterminism, which was recorded and is now available on YouTube. Don't despair if Tim doesn't quite make sense to you; it took me a year to figure out that he isn't as crazy as he sounds.

The How The Light Gets in Festival, which is normally in Hay-on-Wye, has also been moved online. I think that's great, because Hay-on-Wye is a tiny village somewhere in the middle of nowhere and traveling there has been somewhat of a pain. Indeed, I had actually declined the invitation months ago. But since I can now attend without having to sit in a car, a bus, a plane, a train, and a taxi, I will be in a debate about Supersymmetry tomorrow (May 23) at 11:30am BST (not CEST) and will give a 30-minute talk about my book at 2pm (again, that's BST).

Tuesday, May 19, 2020

[Guest Post] Conversful 101: Explaining What’s In The Bottom Corner Of Your Screen

[This post is written by Ben Alderoty from Conversful.]

You may have noticed something new in the bottom corner of BackRe(action) recently. It appears only if you’re on a computer. So if you’re on a phone or tablet right now, finish reading this post, but then come back another time from a computer to see what I’m talking about. That thing is called Conversful, and I, together with a few others, am behind it. I wanted to take a second to give some context as to what Conversful is & how it works.

We built Conversful to create new conversations. We believe that people on the same website at the same time probably have a lot in common. So much so that if they were to meet randomly at a conference, an airport or a bar, they would probably get into a fantastic conversation. But nothing exists right now to make these spontaneous connections happen. With Conversful, we’re trying to create a space where these connections can happen - a “virtual meeting place” of sorts to borrow Sabine’s words.



To open Conversful, just click the globe icon in the bottom corner. With the app open you can do one of two things: start a new conversation or join a conversation. 1) To start a new conversation, all you’ll need is a topic and your first name. Topics can be anything. So far we’ve seen topics range from “Physics” to “Stephen Wolfram thinks he is close to a unified theory of physics unifying QM and GR. Some opinions?”. Both of these work. There’s no need to overthink a topic; keep it short and submit it. 2) Joining a conversation is even easier. With the app open, click ‘Join Conversation’ and enter your first name.


Here are a few other things:
  • Conversations on Conversful are 1-1. They are between the person who started the conversation and the person who joined it.
  • Conversations on Conversful are real-time. If you post a topic and then leave before someone joins, your topic will disappear. When you come back to the website at a later time you will not have any responses.
  • Conversful is for everyone. We designed Conversful to make it feel like you’re texting a friend. Be yourself and share your thoughts; there’s always someone online to hear them out, as long as you’re willing to hear theirs.
Today we rolled out a handful of new features to make it easier for conversations to happen. You’re probably seeing some of them right now. If you’ve already tried Conversful and it didn’t end in a conversation, I ask you to please give it another try!

P.S. To make Conversful the best it can be, I would love to hear from you. If you have any thoughts/ideas/feedback on what’s working (or not) and what else you’d like to see, please feel free to email me at (ben@conversful.com). Cheers from NYC & happy conversing!

Friday, May 15, 2020

Understanding Quantum Mechanics #2: Superposition and Entanglement

If you know one thing about quantum mechanics, it’s that Schrödinger’s cat is both dead and alive. This is what physicists call a “superposition”. But what does this really mean? And what does it have to do with entanglement? This is what we will talk about today.


The key to understanding superpositions is to have a look at how quantum mechanics works. In quantum mechanics, there are no particles and no waves and no cats either. Everything is described by a wave-function, usually denoted with the Greek letter Ψ (Psi). Ψ is a complex-valued function, and from its absolute square you calculate the probability of a measurement outcome, for example, whether the cat is dead or whether the particle went into the left detector, and so on.

But how do you know what the wave-function does? We have an equation for this, the so-called Schrödinger equation. Exactly what this equation looks like is not so important. The important thing is that the solutions to this equation are the possible things that the system can do. And the Schrödinger equation has a very important property. If you have two solutions to the equation, then any sum of those two solutions with arbitrary pre-factors is also a solution.

And that’s what is called a “superposition”. It’s a sum with arbitrary pre-factors. It really sounds more mysterious than it is.
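
If you want to see this property at work, here is a minimal numerical sketch (my own illustration; the Hamiltonian is just a random Hermitian matrix, which is all you need to make the point): evolving a superposition gives the same result as superposing the evolutions, and the absolute square of the normalized result gives the probabilities.

```python
import numpy as np
from scipy.linalg import expm

# Any Hermitian matrix can stand in for the Hamiltonian of some system.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (A + A.conj().T) / 2

U = expm(-1j * H * 1.3)  # time-evolution operator for some time t = 1.3 (hbar = 1)

psi1 = rng.normal(size=6) + 1j * rng.normal(size=6)
psi2 = rng.normal(size=6) + 1j * rng.normal(size=6)
a, b = 0.6, -0.8j        # arbitrary pre-factors

# Evolving the superposition equals superposing the evolutions:
print(np.allclose(U @ (a * psi1 + b * psi2),
                  a * (U @ psi1) + b * (U @ psi2)))  # True

# The absolute square of the (normalized) wave-function gives probabilities:
psi = U @ (a * psi1 + b * psi2)
psi /= np.linalg.norm(psi)
print(np.abs(psi) ** 2)  # non-negative numbers that sum to 1
```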

It is relevant because this means if you have two solutions of the Schrödinger equation that reasonably correspond to realistic situations, then any superposition of them also reasonably corresponds to a realistic situation. This is where the idea comes from that if the cat can be dead and the cat can be alive, then the cat can also be in a superposition of dead and alive. Which some people interpret to mean that it’s neither dead nor alive but somehow both, until you measure it. Personally, I am an instrumentalist and I don’t assign any particular meaning to such a superposition. It’s merely a mathematical tool to make a prediction for a measurement outcome.

Having said that, talking about superpositions is not particularly useful, because “superposition” is not an absolute term. It only makes sense to talk about superpositions of something. A wave-function can be a superposition of, say, two different locations. But it makes no sense to say it is a superposition, period.

To see why, let us stick with the simple example of just two solutions, Ψ1 and Ψ2. Now let us create two superpositions that are the sum and the difference of the two original solutions, Ψ1 and Ψ2. Then you have two new solutions, let us call them Ψ3 and Ψ4. But now you can write the original Ψ1 and Ψ2 as superpositions of Ψ3 and Ψ4. So which one is a superposition? Well, there is no answer to this. Superposition is just not an absolute term. It depends on your choice of a specific set of solutions. You could say, for example, that Schrödinger’s cat is not in a superposition of dead and alive, but that it is instead in the not-superposed state dead-and-alive. And that’s mathematically just as good.
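
Here is the same argument as a tiny numpy check (my illustration, with Ψ1 and Ψ2 taken to be basis vectors for simplicity):

```python
import numpy as np

psi1 = np.array([1.0, 0.0])
psi2 = np.array([0.0, 1.0])

psi3 = (psi1 + psi2) / np.sqrt(2)   # the sum
psi4 = (psi1 - psi2) / np.sqrt(2)   # the difference

# The "original" states are themselves superpositions of the new ones:
print(np.allclose((psi3 + psi4) / np.sqrt(2), psi1))  # True
print(np.allclose((psi3 - psi4) / np.sqrt(2), psi2))  # True
```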

So, superpositions are sums with prefactors, and it only makes sense to speak about superpositions of something. In some sense, I have to say, superpositions are really not terribly interesting.

Much more interesting is entanglement, which is where the quantum-ness of quantum mechanics really shines. To understand entanglement, let us look at a simple example. Suppose you have a particle that decays but that has some conserved quantity. It doesn’t really matter what it is, but let’s say it’s the spin. The particle has spin zero, and the spin is conserved. This particle decays into two other particles, one flies to the left and one to the right. But now let us assume that each of the new particles can have only spin plus or minus 1. This means that either the particle going left had spin plus 1 and the particle going right had spin minus 1. Or it’s the other way round, the particle going left had spin minus 1, and the particle going right had spin plus 1.

In this case, quantum mechanics tells you that the state is in a superposition of the two possible outcomes of the decay. But, and here is the relevant point, now the solutions that you take a superposition of each contain two particles. Mathematically this means you have a sum of products of wave-functions. And in such a case we say that the two particles are “entangled”. If you measure the spin of the one particle, this tells you something about the spin of the other particle. The two are correlated.

This looks like it’s not quite local, but we will talk about just how quantum mechanics is local or not some other time. For today, the relevant point is that entanglement does not depend on the way that you select solutions to the Schrödinger equation. A state is either entangled or it is not. And while entanglement is a type of superposition, not every superposition is also entangled.
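
To see that this is a basis-independent statement, here is a compact sketch (my own illustration, using the standard Schmidt decomposition): write the two-particle state as a matrix of coefficients; the state is un-entangled exactly when that matrix has only one nonzero singular value.

```python
import numpy as np

def schmidt_rank(state):
    """Number of nonzero Schmidt coefficients of a two-particle (2x2) state."""
    coeffs = state.reshape(2, 2)   # coefficient matrix c_ij in the |i>|j> basis
    singular_values = np.linalg.svd(coeffs, compute_uv=False)
    return int(np.sum(singular_values > 1e-12))

# The spin-zero state from the decay example: entangled (rank 2)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
print(schmidt_rank(singlet))  # 2

# A product of two one-particle states: still a superposition in the
# up/down basis, but not entangled (rank 1)
up = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(schmidt_rank(np.kron(up, plus)))  # 1
```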

A curious property of quantum mechanics is that superpositions of macroscopic non-quantum states, like the dead and alive cat, quickly become entangled with their environment, which makes the quantum properties disappear in a process called “decoherence”. We will talk about this some other time, so stay tuned.

Thanks for watching, see you next week. Oh, and don’t forget to subscribe.

Saturday, May 09, 2020

A brief history of black holes

Today I want to talk about the history of black holes. But before I get to this, let me mention that all my videos have captions. You turn them on by clicking on “CC” in the YouTube toolbar.


Now about the black holes. The possibility that gravity can become so strong that it traps light appears already in Newtonian gravity, but black holes were not really discussed by scientists until it turned out that they are a consequence of Einstein’s theory of general relativity.

General Relativity is a set of equations for the curvature of space and time, called Einstein’s field equations. And black holes are one of the possible solutions to Einstein’s equations. This was first realized by Karl Schwarzschild in 1916. For this reason, black holes are also sometimes called the “Schwarzschild solution”.

Schwarzschild of course was not actually looking for black holes. He was just trying to understand what Einstein’s theory would say about the curvature of space-time outside an object that is to good precision spherically symmetric, like, say, our sun or planet Earth. Now, outside these objects, there is approximately no matter, which is good, because in this case the equations become particularly simple, and Schwarzschild was able to solve them.

What happens in Schwarzschild’s solution is the following. As I said, this solution only describes the outside of some distribution of matter. But you can then ask what happens on the surface of that distribution of matter if you compress the matter more and more, that is, you keep the mass fixed but shrink the radius. Well, it turns out that there is a certain radius at which light can no longer escape from the surface of the object, and also not from any location inside this surface. This dividing surface is what we now call the black hole horizon. It’s a sphere whose radius is now called the Schwarzschild radius.

Where the black hole horizon is depends on the mass of the object, so every mass has its own Schwarzschild radius, and if you could compress the mass to below that radius, it would keep collapsing to a point and you’d make a black hole. But for most stellar objects, their actual radius is much larger than the Schwarzschild radius, so they do not have a horizon, because inside of the matter one has to use a different solution to Einstein’s equations. The Schwarzschild radius of the sun, for example, is a few miles*, whereas the actual radius of the sun is some hundred-thousand miles. The Schwarzschild radius of planet Earth is merely a few millimeters.
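
If you want to check these numbers yourself, here is the standard formula, r_s = 2GM/c², in a few lines of Python (my own sketch; note the sun comes out at about 3 kilometers, in line with the footnote below):

```python
G = 6.674e-11        # Newton's constant in m^3 / (kg s^2)
c = 299_792_458.0    # speed of light in m/s

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(1.989e30))  # sun:   ~2950 m, about 3 kilometers
print(schwarzschild_radius(5.972e24))  # earth: ~0.0089 m, about 9 millimeters
```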

Now, it turns out that in Schwarzschild’s original solution, there is a quantity that goes to infinity as you approach the horizon. For this reason, physicists originally thought that the Schwarzschild solution makes no physical sense. However, it turns out that there is nothing physically wrong with that. If you look at any quantity that you can actually measure as you approach a black hole, none of them becomes infinitely large. In particular, the curvature just goes with the inverse of the square of the mass. I explained this in an earlier video. And so, physicists concluded, this infinity at the black hole horizon is a mathematical artifact and, indeed, it can be easily removed.

With that clarified, physicists accepted that there is nothing mathematically wrong with black holes, but then they argued that black holes would not occur in nature because there is no way to make them. The idea was that, since the Schwarzschild solution is perfectly spherically symmetric, the conditions that are necessary to make a black hole would just never happen.

But this too turned out to be wrong. Indeed, it was proved by Stephen Hawking and Roger Penrose in the 1960s that the very opposite is the case. Black holes are what you generally get in Einstein’s theory if you have a sufficient amount of matter that just collapses because it cannot build up sufficient pressure. And so, if a star runs out of nuclear fuel and has no new way to create pressure, a black hole will be the outcome. In contrast to what physicists thought previously, black holes are hard to avoid, not hard to make.

So this was the situation in the 1970s. Black holes had turned from mathematically wrong, to mathematically correct* but non-physical, to a real possibility. But there was at the time no way to actually observe a black hole. That’s because back then the main mode of astrophysical observation was using light. And black holes are defined by the very property that they do not emit light.

However, there are other ways of observing black holes. Most importantly, black holes influence the motion of stars in their vicinity, and the other stars are observable. From this one can infer the mass of the object that the stars orbit around and one can put a limit on the radius. Black holes also swallow material in their vicinity, and from the way that they swallow it, one can tell that the object has no hard surface. The first convincing observations that our own galaxy contains a black hole came in the late 1990s. About ten years later, there were so many observations that could only be explained by the existence of black holes that today basically no one who understands the science doubts black holes exist.

What makes this story interesting to me is how essential it was that Penrose and Hawking understood the mathematics of Einstein’s theory and could formally prove that black holes should exist. It was only because of this that black holes were taken seriously at all. Without that, maybe we’d never have looked for them to begin with. A friend of mine thinks that Penrose deserves a Nobel Prize for his contribution to the discovery of black holes. And I think that’s right.

* Unfortunately, a mistake in the spoken text.

Monday, May 04, 2020

Predictions are overrated

Fortune Teller. Image: Vecteezy.
The world, it seems, is full of people who mistakenly think that a theory which makes correct predictions is a good theory. This is rubbish, of course, and it has led to a lot of unnecessary confusion. I blame this confusion on the many philosophers, notably Popper and Lakatos, who have gone on about the importance of predictions, but never clearly said that predictive success is not a scientific criterion.

You see, the philosophers wanted a quick way to figure out whether a scientific theory is good or not that would not require them to actually understand the science. This, needless to say, is not possible. But the next best thing you can do is to ask how much you can trust the scientists. It is for this latter purpose, to evaluate the trust you can put in scientists, that predictions are good. But they cannot, and should not, ultimately decide what the scientific value of a theory is.

The problem is well illustrated by a joke that my supervisor used to make. He liked to tell his students that whenever you predict something, you should also predict the opposite, because this way you can never be wrong. Haha. In case you are a student, let me warn you that this is bad career advice; you’d also inevitably be wrong, and it tends to be the dirt that sticks. So, don’t \end{advice}. But this joke makes clear that just because a theory makes a correct prediction doesn’t mean it’s good science.

Oh, you may say, you can get away with this once, but then you wouldn’t be able to make several correct predictions. If you said that, you’d be wrong. Because repeated correct predictions, too, are easy to accomplish. In fact, your naïve belief that correct predictions somehow speak for a theory is commonly exploited by scammers.

See, suppose I plan to convince someone that I can correctly predict the stock market. What do I do? Well, I pick, say, 3 stocks and make “predictions” for a week ahead that are really just guesses covering all reasonably possible trends. I then select a large group of victims. To each of them I send one of my guesses. Some of them will coincidentally get the correct guess. A week later, I know which people got the correct guess. To this group, I then send another set of guesses for the week ahead. Again, some people will get the correct guess by coincidence, and a week later I will know who they were. I do this a third time, and then I have a group of people who have good “evidence” that I can tell the future.
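
If you doubt how well this works, here is a small simulation of the scam (my own sketch; the starting pool size is made up for illustration):

```python
import random

random.seed(42)
victims = list(range(2700))   # hypothetical starting pool of 2700 people

for week in range(3):
    true_trend = random.choice(["up", "flat", "down"])
    # Each victim was sent one of the three guesses; about a third of
    # them received, purely by chance, the one that turned out correct.
    victims = [v for v in victims
               if random.choice(["up", "flat", "down"]) == true_trend]
    print(f"after week {week + 1}: {len(victims)} victims still convinced")

# Roughly 2700 / 3^3 = 100 people are left at the end, each of whom has
# seen three correct "predictions" in a row, purely by coincidence.
```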

Amazing, no?

What’s the problem here? The problem is that correct predictions don’t tell you whether someone’s theory is good science.

As we have just seen, one of the problems with relying on predictions is that they may be correct just by coincidence. The larger the pool of predictions – or the pool of scientists making predictions! – the more likely this is to happen. The other problem is that relying on predictions makes fundamentally no sense. If I have a scientific theory, it is either a good description of nature, or it is not. At which time someone made a calculation for an observable quantity is entirely irrelevant for a theory’s relation to nature.

This is a point which is often raised by string theorists, and they are correct to raise it. String theorists say that since string theory gives rise to general relativity, it deserves as much praise as general relativity. That’s because, if string theory had been discovered before general relativity, it would have made the same predictions: light deflection on the sun, precession of Mercury, black holes, gravitational waves, and so on.

And indeed, this would be a good argument in favor of string theory – if it was correct. But it isn’t. String theory does not give rise to general relativity. It gives rise to general relativity in 10 dimensions, with supersymmetric matter, a negative cosmological constant, and dozens of additional scalar fields. All this extra clutter conflicts with observations. To fix this conflict with observations, string theorists then have to make several additional assumptions. With that you get a theory that is considerably more complicated than general relativity, but that does not explain the data any better. Hence, Occam’s razor tells you that general relativity is preferable.

Of course, it’s this adding of ad hoc assumptions to fix a mismatch with observation that the philosophers were trying to prevent when they requested testable predictions. But it’s the ad hoc assumptions themselves that are the problem, not the time at which they were made. To decide whether a scientific theory is any good what matters is only its explanatory power. Explanatory power measures how much data you can fit from which number of assumptions. The fewer assumptions you make and the more data you fit, the higher the explanatory power, and the better the theory.

Ok, I admit, it’s somewhat more complicated than that. That’s because it also matters how well you fit the data. If you make more assumptions, you will generally be able to fit the data better. So there is a trade-off to be made, which needs to be quantified: At which point is the benefit you get from more assumptions not worth a somewhat better fit to the data? There are statistical tools to decide that. One can argue which one of those is the best for a given purpose, but that’s a fight that experts can fight in the case at hand. What is relevant here is only that the explanatory power of a theory is quantifiable. And it’s the explanatory power that decides whether a theory is good or not.
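
To give just one example of such a tool (my own choice for illustration; there are others), the Akaike information criterion penalizes every additional fitted parameter, so a more complicated model has to fit the data substantially better to win:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)  # data with a linear trend

def aic(degree):
    """AIC = 2k + n*ln(RSS/n) for a polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 1   # number of fitted parameters, i.e. assumptions
    return 2 * k + x.size * np.log(rss / x.size)

print(aic(1))  # straight line: 2 parameters
print(aic(5))  # quintic: fits slightly better, but 4 extra parameters
# The model with the lower AIC is preferred; with data like this the straight
# line typically wins, because the quintic's marginally better fit does not
# pay for its extra assumptions.
```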

That’s obvious, I know. But why then do philosophers go on (and on and on) about predictability? Because it’s a convenient rule of thumb. It prevents scientists from adding details to their theory after they have new data, and doing so tends to reduce explanatory power. So, in many cases, asking for predictions is a good idea.

However, if you rely on predictions, you may throw out the baby with the bathwater. Just because no one made a prediction doesn’t mean they necessarily will add assumptions after an observation. In fact, the very opposite can happen. Scientists sometimes remove unnecessary assumptions when they get new data. A theory, therefore, can become better when it has been updated.

Indeed, this has happened several times in the history of physics.

Remember Einstein introducing the cosmological constant and then calling it a blunder? He had mistakenly made a superfluous assumption and then removed it after he learned of the observations. This increased, not decreased, explanatory power. Or think of Dirac’s supposed discovery of anti-particles. When his mathematics revealed a positively charged equivalent of the electron, he argued it would be the proton, which had already been observed at the time. This required the ad hoc assumption that somehow the difference in the masses between the electron and proton didn’t matter. When the positron was discovered later, Dirac could remove the ad hoc hypothesis, thereby improving his theory.

By now I hope it is clear that you should not judge the scientific worth of a theory by looking at the predictions it has made. It is a crude and error-prone criterion. Unfortunately, it has become widely mistaken as a measure for scientific quality, and this has serious consequences, way beyond physics.

Epidemic models, for example, have been judged erroneously by their power to correctly predict the trends of new cases and deaths. But such predictions require modellers to also know what actions society takes to prevent the spread. They require, basically, predicting the minds of political leaders. This, needless to say, is asking for somewhat too much. But, yell the cranks, if it doesn’t make predictions, it’s not science! Nonsense. You should judge epidemic models – any model, really – by how much data they have been able to describe well, and how many assumptions were needed for this. The fewer assumptions and the better the fit to data, the higher the scientific value of the model.

A closely related confusion is the idea that scientists should not update a theory when new data comes in. This can also be traced back to Popper & Co who proclaimed that it is bad scientific practice. But of course a good scientist updates their theory when they get new data. That, after all, is the essence of the scientific method! You update your theory so that it has the highest explanatory power. In practice, this usually means recalibrating free parameters if new information is available.

Another example where this misunderstanding matters is climate models. Climate models have correctly predicted many observed trends, from surface temperature increase, to stratospheric cooling, to sea ice melting. That’s an argument commonly used against climate change deniers. But the deniers then go and dig up some papers that made wrong predictions. This, so they claim, demonstrates that really anything is possible and you can’t trust predictions.

In defense, the scientists say the wrong predictions were few and far between. The deniers then respond – entirely correctly – that there may have been all kinds of reasons for the skewed number of papers that have absolutely nothing to do with their scientific merit.

By now we are arguing about the integrity of scientists and the policies of their journals instead of about science. The scientists are clearly losing the argument. And why is that? Because relying on predictions is not a scientific argument. It is inherently a sociological argument. It’s like claiming that a study must be wrong because the lead author has a conflict of interest. That’s reason to be skeptical, yes. But it does not follow that the study is necessarily wrong. That would be a logically faulty conclusion.

What, then, is the scientific answer for the climate change deniers? It’s that climate models explain loads of data with few assumptions. The simplest explanation for our observations is that the trends are caused by human carbon dioxide emission. It’s the hypothesis that has the highest explanatory power.

To add an example that is closer to home: Many non-physicists ridicule hypotheses like supersymmetry and certain types of particle dark matter because they can be eternally amended and hence make no predictions. But that is not the problem with these models. Updating a theory when new data comes in is totally fine. The problem with these models is that they have assumptions that were entirely unnecessary to explain any data to begin with.

Adding supersymmetry to the standard model or details about dark matter particles to the concordance model is superfluous. It lowers the explanatory power of these theories, instead of increasing it. That’s what’s unscientific about it. And of course once you have an assumption that was superfluous in the first place, you can eternally fiddle with it. But it’s the use of superfluous assumptions that’s the unscientific part, not updating them.

In brief, I think the world would be a better place if scientists talked less about predictions and more about explanatory power.