Friday, May 22, 2020

Is faster-than-light travel possible?

Einstein said that nothing can travel faster than the speed of light. You have probably heard something like that. But is this really correct? This is what we will talk about today.


But first, a quick YouTube announcement. My channel has seen a lot of new subscribers in the past year. And I have noticed that the newcomers are really confused each time I upload a music video. They’re like oh my god she sings, what’s that? So, to make it easier for you, I will no longer post my music videos here, but I have set up a separate channel for those. This means if you want to continue seeing my music videos, please go and subscribe to my other channel.

Now about faster than light travel. To get the obvious out of the way, no one currently knows how to travel faster than light, so in this sense it’s not possible. But you already knew that and it’s not what I want to talk about. Instead, I want to talk about whether it is possible in principle. Like, is there anything actually preventing us from ever developing a technology for faster than light travel?

To find out let us first have a look at what Einstein really said. Einstein’s theory of Special Relativity contains a speed that all observers will measure to be the same. One can show that this is the speed of massless particles. And since the particles of light are, for all we currently know, massless, we usually identify this invariant speed with the speed of light. But if it turned out one day that the particles of light have a small, nonzero mass, then we would still have this invariant speed in Einstein’s theory, it just would not be the speed of light any more.

Next, Einstein also showed that if you have any particle which moves slower than the speed of light, then you cannot accelerate it to faster than the speed of light. You cannot do that because it would take an infinite amount of energy. And this is why you often hear that the speed of light is an upper limit.
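To see where that infinity comes from, it helps to write down the standard formula for the energy of a particle with mass m moving at speed v:

\[
E(v) \;=\; \frac{m c^2}{\sqrt{1 - v^2/c^2}}.
\]

As v approaches the speed of light c, the denominator goes to zero and the energy grows without bound. So no finite amount of energy gets a massive particle all the way to the speed of light, let alone past it.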

However, there is nothing in Einstein’s theory that forbids a particle from moving faster than light. You just don’t know how to accelerate anything to such a speed. So really Einstein did not rule out faster than light motion, he just said, no idea how to get there. However, there is a problem with particles that go faster than light, which is that for some observers they look like they go backwards in time. Really, that’s what the mathematics says.

And that, so the argument goes, is a big problem because once you can travel back in time, you can create causal paradoxes, the so-called “grandfather paradoxes”. The idea is, that you could go back in time, kill your own grandfather – accidentally, we hope – so that you would never be born and could not have travelled back in time to kill him, which does not make any sense whatsoever.

So, faster than light travel is a problem because it can lead to causal inconsistencies. At least that’s what most physicists will tell you or maybe have already told you. I will now explain why this is complete nonsense.

It’s not even hard to see what’s wrong with this argument. Imagine you have a particle that goes right to left backwards in time. What would it look like? It would look like a particle going left to right forward in time. These two descriptions are mathematically just entirely identical. A particle does not know which direction of time is forward.

Our observation that forward in time is different from backward in time comes from entropy increase. It arises from the behavior of large numbers of particles together. If you have many particles together, you can still in principle reverse any particular process in time, but the reversed process will usually be extremely unlikely. Take the example of mixing dough. It’s very easy to get it mixed up and very difficult to unmix, though that is in principle possible.

In any case, you probably don’t need convincing that we do have an arrow of time and that arrow of time points towards more wrinkles. One direction is forward, the other one is not. That’s pretty obvious. Now the reason for the grandfather paradox is not faster than light travel, but it’s that these stories screw up the direction of the arrow of time. You are going back in time, yet you are getting older. That is the inconsistency. But as long as you have a consistent arrow of time, there is nothing physically wrong with faster-than-light travel.

So, really, the argument from causal paradoxes is rubbish: such paradoxes are easy to prevent, you just have to demand a consistent arrow of time. But there is another problem with faster-than-light travel, and that comes from quantum mechanics. If you take into account quantum mechanics, then a particle that travels faster than light will destroy the universe, basically, which would be unfortunate. Also, it should already have happened, so the existence of faster-than-light particles seems to not agree with what we observe.

The reason is that particles which move faster than light can have negative energy. And in quantum mechanics you can create pairs of new particles provided you conserve the total energy. Now, if you have particles with negative energy, you can pair them with particles of positive energy, and then you can create arbitrarily many of these pairs from nothing. Physicists then say that the vacuum is unstable. Btw, since it is a common confusion, let me mention that anti-particles do NOT have negative energy. But faster than light particles can have negative energy.
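In case you want to see how that comes about: in the usual textbook treatment, a faster-than-light particle has a space-like four-momentum, which one can write with the familiar relation

\[
E^2 \;=\; p^2 c^2 + m^2 c^4, \qquad m^2 < 0,
\]

so the “mass” is formally imaginary. Because the four-momentum is space-like, the sign of E is not the same for all observers; a suitable Lorentz boost can make it negative. That is where the negative energies come from.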

This is clearly something to worry about. However, the conclusion depends on how seriously you take quantum theory. Personally I think quantum theory is not itself fundamental, but it is only an approximation to a better theory that has not yet been developed. The best evidence for this is the measurement problem which I talked about in an earlier video. So I think that this supposed problem with the vacuum instability comes from taking quantum mechanics too seriously.

Also, you can circumvent the problem with the negative energies if you travel faster than light by using wormholes because in this case you can use entirely normal particles. Wormholes are basically shortcuts in space. Instead of taking the long way from here to Andromeda, you could hop into one end of a wormhole and just reappear at the other end. Unfortunately, there are good reasons to think that wormholes don’t exist which I talked about in an earlier video.

In summary, there is no reason in principle why faster than light travel or faster than light communication is impossible. Maybe we just have not figured out how to do it.

Talk To Me [I've been singing again]

This is for the guy who recommended I “release” my “inner Whitney Houston”.

Book Update

The US version of my book "Lost in Math" is about to be published in paperback. You can now pre-order it, which of course you should.

I am quite pleased that what I wrote in the book five years ago has held up so beautifully. There has been zero further progress in the foundations of physics and, needless to say, there will be zero progress until physicists understand that they need to change their methodology. Chances that they actually understand this are not exactly zero, but very close to it.

In other news, on Monday I gave an online seminar about Superdeterminism, which was recorded and is now available on YouTube. Don't despair if Tim doesn't quite make sense to you; it took me a year to figure out that he isn't as crazy as he sounds.

The How The Light Gets in Festival, which is normally held in Hay-on-Wye, has also been moved online. I think that’s great, because Hay-on-Wye is a tiny village somewhere in the middle of nowhere and traveling there has been somewhat of a pain. Indeed, I had actually declined the invitation months ago. But since I can now attend without having to sit in a car, a bus, a plane, a train, and a taxi, I will be in a debate about Supersymmetry tomorrow (May 23) at 11:30am BST (not CEST) and will give a 30-minute talk about my book at 2pm (again, that’s BST).

Tuesday, May 19, 2020

[Guest Post] Conversful 101: Explaining What’s In The Bottom Corner Of Your Screen

[This post is written by Ben Alderoty from Conversful.]

You may have noticed something new in the bottom corner of BackRe(action) recently. It appears only if you’re on a computer. So if you’re on a phone or tablet right now, finish reading this post, but then come back another time from a computer to see what I’m talking about. That thing is called Conversful & a few others and I are behind it. I wanted to take a second to give some context as to what Conversful is & how it works.

We built Conversful to create new conversations. We believe that people on the same website at the same time probably have a lot in common. So much so that if they were to meet randomly at a conference, an airport or a bar, they would probably get into a fantastic conversation. But nothing exists right now to make these spontaneous connections happen. With Conversful, we’re trying to create a space where these connections can happen - a “virtual meeting place” of sorts to borrow Sabine’s words.



To open Conversful, just click the globe icon in the bottom corner. With the app open you can do one of two things: start a new conversation or join a conversation. 1) To start a new conversation, all you’ll need is a topic and your first name. Topics can be anything. So far we’ve seen topics range from “Physics” to “Stephen Wolfram thinks he is close to a unified theory of physics unifying QM and GR. Some opinions?”. Both of these work. There’s no need to overthink a topic; keep it short and submit it. 2) Joining a conversation is even easier. With the app open, click ‘Join Conversation’ and enter your first name.


Here are a few other things:
  • Conversations on Conversful are 1-1. They are between the person who started the conversation and the person who joined it.
  • Conversations on Conversful are real-time. If you post a topic and then leave before someone joins, your topic will disappear. When you come back to the website at a later time you will not have any responses.
  • Conversful is for everyone. We designed Conversful to make it feel like you’re texting a friend. Be yourself, share your thoughts, there’s always someone online to hear them out as long as you’re willing to hear theirs.
Today we rolled out a handful of new features to make it easier for conversations to happen. You’re probably seeing some of them right now. If you’ve already tried Conversful and it didn’t end in a conversation, I ask you to please give it another try!

P.S. To make Conversful the best it can be, I would love to hear from you. If you have any thoughts/ideas/feedback on what’s working (or not) and what else you’d like to see, please feel free to email me at (ben@conversful.com). Cheers from NYC & happy conversing!

Friday, May 15, 2020

Understanding Quantum Mechanics #2: Superposition and Entanglement

If you know one thing about quantum mechanics, it’s that Schrödinger’s cat is both dead and alive. This is what physicists call a “superposition”. But what does this really mean? And what does it have to do with entanglement? This is what we will talk about today.


The key to understanding superpositions is to have a look at how quantum mechanics works. In quantum mechanics, there are no particles and no waves and no cats either. Everything is described by a wave-function, usually denoted with the Greek letter Ψ (Psi). Ψ is a complex-valued function, and from its absolute square you calculate the probability of a measurement outcome, for example, whether the cat is dead or whether the particle went into the left detector, and so on.

But how do you know what the wave-function does? We have an equation for this, which is the so-called Schrödinger equation. Exactly what this equation looks like is not so important. The important thing is that the solutions to this equation are the possible things that the system can do. And the Schrödinger equation has a very important property. If you have two solutions to the equation, then any sum of those two solutions with arbitrary pre-factors is also a solution.
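For reference, in symbols the equation and its linearity property read

\[
i\hbar\,\frac{\partial \Psi}{\partial t} = \hat H\,\Psi,
\qquad
\hat H\,(a\Psi_1 + b\Psi_2) = a\,\hat H\Psi_1 + b\,\hat H\Psi_2,
\]

where Ĥ is the Hamiltonian operator of the system. So if Ψ1 and Ψ2 each solve the equation, then so does aΨ1 + bΨ2 for any complex numbers a and b.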

And that’s what is called a “superposition”. It’s a sum with arbitrary pre-factors. It really sounds more mysterious than it is.

It is relevant because this means if you have two solutions of the Schrödinger equation that reasonably correspond to realistic situations, then any superposition of them also reasonably corresponds to a realistic situation. This is where the idea comes from that if the cat can be dead and the cat can be alive, then the cat can also be in a superposition of dead and alive. Which some people interpret to mean it’s neither dead nor alive but somehow both, until you measure it. Personally, I am an instrumentalist and I don’t assign any particular meaning to such a superposition. It’s merely a mathematical tool to make a prediction for a measurement outcome.

Having said that, talking about superpositions is not particularly useful, because “superposition” is not an absolute term. It only makes sense to talk about superpositions of something. A wave-function can be a superposition of, say, two different locations. But it makes no sense to say it is a superposition, period.

To see why, let us stick with the simple example of just two solutions, Ψ1 and Ψ2. Now let us create two superpositions that are the sum and the difference of the two original solutions, Ψ1 and Ψ2. Then you have two new solutions, let us call them Ψ3 and Ψ4. But now you can write the original Ψ1 and Ψ2 as a superposition of Ψ3 and Ψ4. So which one is a superposition? Well, there is no answer to this. Superposition is just not an absolute term. It depends on your choice of a specific set of solutions. You could say, for example, that Schrödinger’s cat is not in a superposition of dead and alive, but that it is instead in the not-superposed state dead-and-alive. And that’s mathematically just as good.
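Written out, and with normalization factors thrown in for convenience, the two ways of choosing your basic solutions look like this:

\[
\Psi_3 = \tfrac{1}{\sqrt{2}}\,(\Psi_1 + \Psi_2), \quad
\Psi_4 = \tfrac{1}{\sqrt{2}}\,(\Psi_1 - \Psi_2)
\qquad\Longleftrightarrow\qquad
\Psi_1 = \tfrac{1}{\sqrt{2}}\,(\Psi_3 + \Psi_4), \quad
\Psi_2 = \tfrac{1}{\sqrt{2}}\,(\Psi_3 - \Psi_4).
\]

Each pair is a superposition of the other pair; neither choice is more fundamental.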

So, superpositions are sums with prefactors, and it only makes sense to speak about superpositions of something. In some sense, I have to say, superpositions are really not terribly interesting.

Much more interesting is entanglement, which is where the quantum-ness of quantum mechanics really shines. To understand entanglement, let us look at a simple example. Suppose you have a particle that decays but that has some conserved quantity. It doesn’t really matter what it is, but let’s say it’s the spin. The particle has spin zero, and the spin is conserved. This particle decays into two other particles, one flies to the left and one to the right. But now let us assume that each of the new particles can have only spin plus or minus 1. This means that either the particle going left had spin plus 1 and the particle going right had spin minus 1. Or it’s the other way round, the particle going left had spin minus 1, and the particle going right had spin plus 1.

In this case, quantum mechanics tells you that the state is in a superposition of the two possible outcomes of the decay. But, and here is the relevant point, now the solutions that you take a superposition of each contain two particles. Mathematically this means you have a sum of products of wave-functions. And in such a case we say that the two particles are “entangled”. If you measure the spin of the one particle, this tells you something about the spin of the other particle. The two are correlated.
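For the decay example, the wave-function of the pair is, schematically and up to a relative phase that depends on the details of the decay,

\[
\Psi \;=\; \tfrac{1}{\sqrt{2}}\left( \psi^{\mathrm{left}}_{+1}\,\psi^{\mathrm{right}}_{-1} \;+\; \psi^{\mathrm{left}}_{-1}\,\psi^{\mathrm{right}}_{+1} \right).
\]

This is a sum of products of one-particle wave-functions that cannot be rewritten as a single product, and that is what “entangled” means mathematically.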

This looks like it’s not quite local, but we will talk about just how quantum mechanics is local or not some other time. For today, the relevant point is that entanglement does not depend on the way that you select solutions to the Schrödinger equation. A state is either entangled or it is not. And while entanglement is a type of superposition, not every superposition is also entangled.

A curious property of quantum mechanics is that superpositions of macroscopic non-quantum states, like the dead and alive cat, quickly become entangled with their environment, which makes the quantum properties disappear in a process called “decoherence”. We will talk about this some other time, so stay tuned.

Thanks for watching, see you next week. Oh, and don’t forget to subscribe.

Saturday, May 09, 2020

A brief history of black holes

Today I want to talk about the history of black holes. But before I get to this, let me mention that all my videos have captions. You turn them on by clicking on “CC” in the YouTube toolbar.


Now about the black holes. The possibility that gravity can become so strong that it traps light appears already in Newtonian gravity, but black holes were not really discussed by scientists until it turned out that they are a consequence of Einstein’s theory of general relativity.

General Relativity is a set of equations for the curvature of space and time, called Einstein’s field equations. And black holes are one of the possible solutions to Einstein’s equations. This was first realized by Karl Schwarzschild in 1916. For this reason, black holes are also sometimes called the “Schwarzschild solution”.

Schwarzschild of course was not actually looking for black holes. He was just trying to understand what Einstein’s theory would say about the curvature of space-time outside an object that is to good precision spherically symmetric, like, say, our sun or planet Earth. Now, outside these objects, there is approximately no matter, which is good, because in this case the equations become particularly simple and Schwarzschild was able to solve them.

What happens in Schwarzschild’s solution is the following. As I said, this solution only describes the outside of some distribution of matter. But you can ask then, what happens on the surface of that distribution of matter if you compress the matter more and more, that is, you keep the mass fixed but shrink the radius. Well, it turns out that there is a certain radius, at which light can no longer escape from the surface of the object, and also not from any location inside this surface. This dividing surface is what we now call the black hole horizon. It’s a sphere whose radius is now called the Schwarzschild radius.

Where the black hole horizon is depends on the mass of the object, so every mass has its own Schwarzschild radius, and if you could compress the mass to below that radius, it would keep collapsing to a point and you’d make a black hole. But for most stellar objects, their actual radius is much larger than the Schwarzschild radius, so they do not have a horizon, because inside of the matter one has to use a different solution to Einstein’s equations. The Schwarzschild radius of the sun, for example, is a few miles*, whereas the actual radius of the sun is some hundred-thousand miles. The Schwarzschild radius of planet Earth is merely a few millimeters.
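For reference, the formula behind these numbers is simple. The Schwarzschild radius of a mass M is

\[
r_s = \frac{2 G M}{c^2},
\]

which comes out to roughly 3 kilometers (about 1.9 miles) for the sun and roughly 9 millimeters for the Earth.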

Now, it turns out that in Schwarzschild’s original solution, there is a quantity that goes to infinity as you approach the horizon. For this reason, physicists originally thought that the Schwarzschild solution makes no physical sense. However, it turns out that there is nothing physically wrong with that. If you look at any quantity that you can actually measure as you approach a black hole, none of them becomes infinitely large. In particular, the curvature just goes with the inverse of the square of the mass. I explained this in an earlier video. And so, physicists concluded, this infinity at the black hole horizon is a mathematical artifact and, indeed, it can be easily removed.

With that clarified, physicists accepted that there is nothing mathematically wrong with black holes, but then they argued that black holes would not occur in nature because there is no way to make them. The idea was that, since the Schwarzschild solution is perfectly spherically symmetric, the conditions that are necessary to make a black hole would just never happen.

But this too turned out to be wrong. Indeed, it was proved by Stephen Hawking and Roger Penrose in the 1960s that the very opposite is the case. Black holes are what you generally get in Einstein’s theory if you have a sufficient amount of matter that just collapses because it cannot build up sufficient pressure. And so, if a star runs out of nuclear fuel and has no new way to create pressure, a black hole will be the outcome. In contrast to what physicists thought previously, black holes are hard to avoid, not hard to make.

So this was the situation in the 1970s. Black holes had turned from mathematically wrong, to mathematically correct* but non-physical, to a real possibility. But there was at the time no way to actually observe a black hole. That’s because back then the main mode of astrophysical observation was using light. And black holes are defined by the very property that they do not emit light.

However, there are other ways of observing black holes. Most importantly, black holes influence the motion of stars in their vicinity, and the other stars are observable. From this one can infer the mass of the object that the stars orbit around and one can put a limit on the radius. Black holes also swallow material in their vicinity, and from the way that they swallow it, one can tell that the object has no hard surface. The first convincing observations that our own galaxy contains a black hole came in the late 1990s. About ten years later, there were so many observations that could only be explained by the existence of black holes that today basically no one who understands the science doubts black holes exist.

What makes this story interesting to me is how essential it was that Penrose and Hawking understood the mathematics of Einstein’s theory and could formally prove that black holes should exist. It was only because of this that black holes were taken seriously at all. Without that, maybe we’d never have looked for them to begin with. A friend of mine thinks that Penrose deserves a Nobel Prize for his contribution to the discovery of black holes. And I think that’s right.

* Unfortunately, a mistake in the spoken text.

Monday, May 04, 2020

Predictions are overrated

Fortune Teller. Image: Vecteezy.
The world, it seems, is full of people who mistakenly think that a theory which makes correct predictions is a good theory. This is rubbish, of course, and it has led to a lot of unnecessary confusion. I blame this confusion on the many philosophers, notably Popper and Lakatos, who have gone on about the importance of predictions, but never clearly said that it’s not a scientific criterion.

You see, the philosophers wanted a quick way to figure out whether a scientific theory is good or not that would not require them to actually understand the science. This, needless to say, is not possible. But the next best thing you can do is to ask how much you can trust the scientists. It is for this latter purpose, to evaluate the trust you can put in scientists, that predictions are good. But they cannot, and should not, ultimately decide what the scientific value of a theory is.

The problem is well illustrated by a joke that my supervisor used to make. He liked to tell his students that whenever you predict something, you should also predict the opposite, because this way you can never be wrong. Haha. In case you are a student, let me warn you that this is bad career advice; you’d also inevitably be wrong, and it tends to be the dirt that sticks. So, don’t \end{advice}. But this joke makes clear that just because a theory makes a correct prediction doesn’t mean it’s good science.

Oh, you may say, you can get away with this once, but then you wouldn’t be able to make several correct predictions. If you said that, you’d be wrong. Because repeated correct predictions, too, are easy to accomplish. In fact, your naïve belief that correct predictions somehow speak for a theory is commonly exploited by scammers.

See, suppose I plan to convince someone that I can correctly predict the stock market. What do I do? Well, I pick, say, 3 stocks and make “predictions” for a week ahead that are really just guesses covering all reasonably possible trends. I then select a large group of victims. To each of them I send one of my guesses. Some of them will coincidentally get the correct guess. A week later, I know which people got the correct guess. To this group, I then send another set of guesses for the week ahead. Again, some people will get the correct guess by coincidence, and a week later I will know which one it was. I do this a third time, and then I have a group of people who have good “evidence” that I can tell the future.
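If you want to see the arithmetic of this scam, here is a little sketch. The numbers are made up; only the mechanism matters.

```python
# Toy sketch of the prediction scam described above; the numbers are
# invented, only the mechanism matters. Each week, every remaining
# "victim" receives one of three mutually exclusive guesses, so about
# a third of them happens to see a correct "prediction" and stays in
# the pool of the convinced.
victims = 27_000         # hypothetical starting pool
guesses = 3              # e.g. price goes up / stays flat / goes down

for week in range(1, 4):
    victims //= guesses  # keep only those who happened to get the correct guess
    print(f"After week {week}: {victims} people have seen only correct predictions")

# After three weeks, about a thousand people have watched me "predict"
# the market correctly three times in a row, by pure bookkeeping.
```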

Amazing, no?

What’s the problem here? The problem is that correct predictions don’t tell you whether someone’s theory is good science.

As we have just seen, one of the problems with relying on predictions is that they may be correct just by coincidence. The larger the pool of predictions – or the pool of scientists making predictions! – the more likely this is to happen. The other problem is that relying on predictions makes fundamentally no sense. If I have a scientific theory, it is either a good description of nature, or it is not. At which time someone made a calculation for an observable quantity is entirely irrelevant for a theory’s relation to nature.

This is a point which is often raised by string theorists, and they are correct to raise it. String theorists say that since string theory gives rise to general relativity, it deserves as much praise as general relativity. That’s because, if string theory had been discovered before general relativity, it would have made the same predictions: light deflection on the sun, precession of Mercury, black holes, gravitational waves, and so on.

And indeed, this would be a good argument in favor of string theory – if it was correct. But it isn’t. String theory does not give rise to general relativity. It gives rise to general relativity in 10 dimensions, with supersymmetric matter, a negative cosmological constant, and dozens of additional scalar fields. All this extra clutter conflicts with observations. To fix this conflict with observations, string theorists then have to make several additional assumptions. With that you get a theory that is considerably more complicated than general relativity, but that does not explain the data any better. Hence, Occam’s razor tells you that general relativity is preferable.

Of course, it’s this adding of ad hoc assumptions to fix a mismatch with observation that the philosophers were trying to prevent when they requested testable predictions. But it’s the ad hoc assumptions themselves that are the problem, not the time at which they were made. To decide whether a scientific theory is any good, what matters is only its explanatory power. Explanatory power measures how much data you can fit from which number of assumptions. The fewer assumptions you make and the more data you fit, the higher the explanatory power, and the better the theory.

Ok, I admit, it’s somewhat more complicated than that. That’s because it also matters how well you fit the data. If you make more assumptions, you will generally be able to fit the data better. So there is a trade-off to be made, which needs to be quantified: At which point is the benefit you get from more assumptions not worth a somewhat better fit to the data? There are statistical tools to decide that. One can argue which one of those is the best for a given purpose, but that’s a fight that experts can fight in the case at hand. What is relevant here is only that the explanatory power of a theory is quantifiable. And it’s the explanatory power that decides whether a theory is good or not.
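To make the trade-off concrete, here is a minimal sketch using one common such tool, the Akaike Information Criterion. The numbers are invented for illustration and are not from any real model comparison.

```python
# Minimal sketch of a quantified trade-off between fit quality and number
# of assumptions, using the Akaike Information Criterion (AIC):
#   AIC = 2k - 2*ln(L_max)
# where k is the number of free parameters and L_max is the maximized
# likelihood. Lower AIC is better. All numbers below are made up.
def aic(num_parameters, max_log_likelihood):
    return 2 * num_parameters - 2 * max_log_likelihood

simple_model = aic(num_parameters=2, max_log_likelihood=-105.0)   # worse fit, few assumptions
complex_model = aic(num_parameters=12, max_log_likelihood=-100.0) # better fit, many assumptions

print(simple_model, complex_model)
# The complex model fits the (hypothetical) data a bit better, but its
# extra parameters cost more than the improvement is worth, so the
# simple model wins: higher explanatory power with fewer assumptions.
```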

That’s obvious, I know. But why then do philosophers go on (and on and on) about predictability? Because it’s a convenient rule of thumb. It prevents scientists from adding details to their theory after they have new data, and doing so tends to reduce explanatory power. So, in many cases, asking for predictions is a good idea.

However, if you rely on predictions, you may throw out the baby with the bathwater. Just because no one made a prediction doesn’t mean they necessarily will add assumptions after an observation. In fact, the very opposite can happen. Scientists sometimes remove unnecessary assumptions when they get new data. A theory, therefore, can become better when it has been updated.

Indeed, this has happened several times in the history of physics.

Remember Einstein introducing the cosmological constant and then calling it a blunder? He had mistakenly made a superfluous assumption and then removed it after he learned of the observations. This increased, not decreased, explanatory power. Or think of Dirac’s supposed discovery of anti-particles. When his mathematics revealed a positively charged equivalent of the electron, he argued it would be the proton, which had already been observed at the time. This required the ad hoc assumption that somehow the difference in the masses between the electron and proton didn’t matter. When the positron was discovered later, Dirac could remove the ad hoc hypothesis, thereby improving his theory.

By now I hope it is clear that you should not judge the scientific worth of a theory by looking at the predictions it has made. It is a crude and error-prone criterion. Unfortunately, it has become widely mistaken as a measure for scientific quality, and this has serious consequences, way beyond physics.

Epidemic models, for example, have been judged erroneously by their power to correctly predict the trends of new cases and deaths. But such predictions require modellers to also know what actions society takes to prevent the spread. They require, basically, predicting the minds of political leaders. This, needless to say, is asking for somewhat too much. But, yell the cranks, if it doesn’t make predictions, it’s not science! Nonsense. You should judge epidemic models – any model, really – by how much data they have been able to describe well, and how many assumptions were needed for this. The fewer assumptions and the better the fit to data, the higher the scientific value of the model.

A closely related confusion is the idea that scientists should not update a theory when new data comes in. This can also be traced back to Popper & Co who proclaimed that it is bad scientific practice. But of course a good scientist updates their theory when they get new data. That, after all, is the essence of the scientific method! You update your theory so that it has the highest explanatory power. In practice, this usually means recalibrating free parameters if new information is available.

Another example where this misunderstanding matters is climate models. Climate models have correctly predicted many observed trends, from surface temperature increase, to stratospheric cooling, to sea ice melting. That’s an argument commonly used against climate change deniers. But the deniers then go and dig up some papers that made wrong predictions. This, so they claim, demonstrates that really anything is possible and you can’t trust predictions.

In defense, the scientists say the wrong predictions were few and far between. The deniers then respond – entirely correctly – that there may have been all kinds of reasons for the skewed number of papers that have absolutely nothing to do with their scientific merit.

By now we are arguing about the integrity of scientists and the policies of their journals instead of about science. The scientists are clearly losing the argument. And why is that? Because relying on predictions is not a scientific argument. It is inherently a sociological argument. It’s like claiming that a study must be wrong because the lead author has a conflict of interest. That’s reason to be skeptical, yes. But it does not follow that the study is necessarily wrong. That would be a logically faulty conclusion.

What, then, is the scientific answer for the climate change deniers? It’s that climate models explain loads of data with few assumptions. The simplest explanation for our observations is that the trends are caused by human carbon dioxide emission. It’s the hypothesis that has the highest explanatory power.

To add an example that is closer to home: Many non-physicists ridicule hypotheses like supersymmetry and certain types of particle dark matter because they can be eternally amended and hence make no predictions. But that is not the problem with these models. Updating a theory when new data comes in is totally fine. The problem with these models is that they have assumptions that were entirely unnecessary to explain any data to begin with.

Adding supersymmetry to the standard model or details about dark matter particles to the concordance model is superfluous. It lowers the explanatory power of these theories, instead of increasing it. That’s what’s unscientific about it. And of course once you have an assumption that was superfluous in the first place, you can eternally fiddle with it. But it’s the use of superfluous assumptions that’s the unscientific part, not updating them.

In brief, I think the world would be a better place if scientists talked less about predictions and more about explanatory power.

Thursday, April 30, 2020

Book Review: “The Dream Universe” by David Lindley

The Dream Universe: How Fundamental Physics Lost Its Way
By David Lindley
Doubleday (March 17, 2020)

Let me be honest: I expected to dislike this book. For one because it looked like a remake of Lindley’s 1993 book The End of Physics which I already disliked. Also, physics didn’t end. Worse still, if you read the description of his new book, you can easily mistake it for a description of my book Lost in Math. On the website of Lindley’s publisher you find, for example, that The Dream Universe is about “how theoretical physics is returning to its unscientific roots” and that physicists have come to believe
“As we investigate realms further and further from what we can see and what we can test, we must look to elegant, aesthetically pleasing equations to develop our conception of what reality is. As a result, much of theoretical physics today is something more akin to the philosophy of Plato than the science to which the physicists are heirs.”
However, after reading Lindley’s book, I changed my mind. It is a good book and while I think that Lindley in the end draws the wrong conclusions, it is well worth the read. Let me explain.

First of all, The Dream Universe is dramatically better than The End of Physics. The latter struck me as a superficial and, ultimately, pointless attack on some trends in contemporary physics just because the author had other ideas for what physicists should do. There really wasn’t much to learn from the book. The Dream Universe is instead a historical analysis of the changing role of mathematics in the foundations of physics and the growing divide between theory and experiment in the field. In his new book, Lindley makes a well-reasoned case that something is going badly wrong.

Lindley’s book of course has some overlap with mine. Both discuss the problem that arguments from mathematical beauty have become widely accepted among physicists even though they are unscientific. But while I wrote a book about current events with only a short dip into history, and told this story as someone who works in the field, Lindley provides the perspective of an outsider, albeit one who is knowledgeable both about physics and the history of science.

As Lindley tells the reader in the preface, he started a research career in physics, but then left to become a science writer. The End of Physics was his first book after this career change. He then became interested in the history of science and wrote several historical books. Now he has taken on the foundations of physics again with a somewhat more detached view.

The Dream Universe begins with some rather general chapters about the scientific method and about how scientists use mathematics. You find there the story of Galileo, Copernicus, and the epicycles, as well as reflections on the conflict-laden relationship between science and the church. Lindley then moves on to the invention of calculus, the development of electrodynamics, and the increasing abstraction of physics, all the way up to string theory and the idea that the universe is a quantum computer. He lists some successes of this abstraction – notably Dirac’s prediction of anti-matter – before showing where this trend has led us: To superstrings, multiverses, lots of empty blather, and a complete lack of progress in the field.

Lindley is a skilled writer and the book is a pleasure to read. He explains even the most esoteric physics concepts eloquently and without wasting the reader’s time. Overall, he maintains a good balance between science, history, and the lessons of both. Lindley also doesn’t leave you guessing about his own opinion. In several places he says very clearly what he thinks about other historians’, scientists’, or philosophers’ arguments which I find so much more valuable than pages of polite tip-toeing that you have to dissect with an electron microscope to figure out what’s really being said.

The reader also learns that Lindley’s personal mode of understanding is visualization rather than abstraction. Lindley, for example, expresses at some point his frustration with a professor who explained (entirely correctly, if you ask me) that “a tensor is an object that transforms as a tensor” with a transformation law that the professor presumably previously defined. Lindley reacts: “Here is how I would explain a tensor. Think of a cube of jellylike material.” It follows two paragraphs about jelly that I personally find entirely unenlightening. Goes to show, I guess, that different people prefer different modes of explanation.

In the end, Lindley puts the blame for the lack of progress in the foundations of physics on mathematical abstraction, a problem he considers insurmountable. “The unanswerable difficulty, as I hope has become clear by now, is that researchers in fundamental physics are exploring a world, or worlds, hopelessly removed from our experience… What defines those unknowable worlds is perfect order, mathematical rigor, even aesthetic elegance.”

He then classifies “fundamental physics today as a kind of philosophy” and explains it is now “less about a strictly rational understanding of the universe and more about finding a scenario that we deem intellectually respectable.” He sees no way out of this situation because “Observation, experiment, and fact-finding are no longer able to guide [researchers in fundamental physics], so they must set their path by other means, and they have decided that pure rationality and mathematical reasoning, along with a refined aesthetic sense, will do the job.”

I am sympathetic to Lindley’s take on the current status of research in the foundations of physics, but I think the conclusion that there is no way forward is not supported by his argument. The problem in modern physics is not the abundance of mathematical abstraction per se, but that physicists have forgotten mathematical abstraction is a means to an end, not an end unto itself. They may have lost sight of the goal, alright, but that doesn’t mean the goal has ceased to exist.

It is also simply wrong that there are no experiments that could guide physicists in the foundations of physics, and I say this as someone who has spent the past 20 years thinking about this very problem. It’s just that physicists are wasting time publishing papers about beautiful theories that have no relevance for nature instead of analyzing what is going wrong in their discipline and how to make progress.

In summary, Lindley’s book is not so much a competition to Lost in Math as a complement. If you want to understand what is going wrong in the foundations of physics, The Dream Universe is an excellent and timely introduction.

Disclaimer: Free review copy.

Book Review: “A Philosophical Approach To MOND” by David Merritt

A Philosophical Approach to MOND: Assessing the Milgromian Research Program in Cosmology
By David Merritt
Cambridge University Press (April 30, 2020)

Don’t get put off by the title of the book! Really it should have been called “A Scientific Approach To MOND,” and I am so glad someone wrote it. MOND, to remind you, stands for Modified Newtonian Dynamics, which is the competing hypothesis to dark matter. Dark matter explains a whole bunch of astrophysical observations by positing a new type of matter that makes itself noticeable only through its gravitational pull. MOND instead postulates that the laws of gravity change on galactic scales.

The vast majority of astrophysicists today think erroneously that dark matter has better support in observational evidence, but Merritt does away with this myth. Let me emphasize that Merritt is not originally a philosopher by training. He worked for decades in astrophysics before his interest turned to the philosophy of science in recent years. His book is not a verbose pamphlet, as – excuse me – philosophical treatises tend to be, but it’s an in-depth scientific analysis.

What makes Merritt’s book special is that he evaluates the evidence, both for MOND and the standard model of cosmology, according to the most widely accepted criteria put forward by Popper, Zahar, Musgrave, and Carrier. The physicists among you need not despair: Merritt’s book has an excellent (and blissfully short) introduction into the philosophy of science that contains everything you need to know to follow along.

The book is extremely well structured. Merritt first analyses MOND as a phenomenological idea, largely formulated in words, then MOND in the non-relativistic case, then relativistic completions, and then the hybrid theory of dark matter and modified gravity that can be interpreted as a type of superfluid dark matter. In each step, Merritt examines how the theory fares with respect to confirmed predictions and corroboration, which he summarizes in handy tables.

Along the way he clears up quite a number of mistakes that you encounter all over the published literature. Yes, this is hugely troubling, and it should indeed trouble you. There is for example the idea that MOND cannot explain the CMB power spectrum, when indeed it made a correct prediction for the second peak, whereas dark matter did not. In fact, astrophysicists had to twiddle with the dark matter idea after the measurement to accommodate the new data. Another wrong but widespread conviction is that modified gravity has somehow been ruled out by observations of galaxy clusters.

Having said that, Merritt clearly points out that MOND (or its relativistic generalizations) has certain problems, notably the third peak of the CMB is a headache.

The most interesting part of the book, though, is that Merritt demonstrates by many quotations that astrophysicists who prefer dark matter are confusing the predictive power of a theory with the ability of the theory to accommodate new evidence.

I have found this book tremendously useful, though I want to warn you that this is clearly not a popular science book. The book is full of technical detail. However, I believe that the biggest part of it should be understandable for anyone who has an interest in the topic. There are some parts which will be incomprehensible if you don’t at least have an undergrad degree in physics, e.g. when Merritt goes on about the Lagrangian formulation of the relativistic completions. But I don’t think that these parts are really essential to understand Merritt’s argument.

But. Of course I have a “but”!

I think that Merritt does not pay enough attention to the problem that MOND, because it is non-relativistic, is incompatible with an extremely well-confirmed theory – General Relativity –, and that we have to date no relativistic completion that does not run into other problems with evidence. This means that MOND, simply put, does not live up to the current scientific standard in the field.

Let me be clear that this does not mean that MOND – as an approximation – is wrong. But I believe the lack of a controlled limit to recover General Relativity is the major reason why so many physicists presently reject MOND. I find it somewhat unfair to simply disregard the scientific standard. The standard is there for a reason, and that reason itself is based on evidence, namely: Certain types of theories have proved successful. MOND is not that type of theory, and no one has yet managed to improve it. It only reproduces General Relativity in the cases where we have precision tests by postulating that it does so, not because there is an actual derivation that demonstrates this is consistently possible. This is an extremely non-trivial problem.

This problem is solved by the hybrid version that can be interpreted as superfluid dark matter. In Merritt’s evaluation this option receives mediocre grades. But of course this is because he does not appreciate the need to remove the tension between MOND and general relativity to begin with. Superfluid dark matter does this.

In summary, I think that everyone who has a research interest in astrophysics and cosmology will benefit from reading this book. And I think that physics would much benefit from a similar analysis of inflation and other hypotheses for the early universe, quantum gravity, theories of everything and grand unification, and quantum foundations.

Disclaimer: Free review copy

Wednesday, April 29, 2020

The Raven Paradox

The scientific method is only a few hundred years old. This continues to amaze me. It seems so obvious, now, that you should go and test your theories and, if necessary, revise them. But for much of human history, coming up with a “theory” was merely about story-telling and sense-making, not about making quantitatively accurate predictions.


Then again, the scientific method is not set in stone. Scientists and philosophers both are still trying to understand just how to identify the best hypothesis or when to discard one. This is not as trivial as it sounds, and this difficulty is well illustrated by the Raven Paradox, which I want to talk about today.

The Raven Paradox was first discussed in the 1940s by the German philosopher Carl Gustav Hempel and it is therefore also known as Hempel’s paradox. Hempel was thinking about what type of evidence counts in favor of a hypothesis. As an example, he used the hypothesis “All ravens are black”. If you see a raven, and the raven is indeed black, then you’d say this counts as evidence in favor of the hypothesis. So far, so good.

Now, the hypothesis that all ravens are black can be expressed as a logical statement in the form “If something is a raven, then it is black.” This statement is then logically equivalent to saying “If something is not black, then it is not a raven.” But once you have reformulated the hypothesis this way, then anything not black that is not a raven counts in favor of your hypothesis. Say, you see a red bus, then that speaks for the hypothesis that ravens are black, because the bus is not black and it is not a raven either. If you see a green apple, that’s even more evidence that ravens are black. Yellow post-its? Brown snails? White daisies? They’re all evidence that ravens are black!
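In logical notation, the equivalence that creates the trouble is just the contrapositive:

\[
\forall x\;\big(\mathrm{Raven}(x) \rightarrow \mathrm{Black}(x)\big)
\;\;\equiv\;\;
\forall x\;\big(\lnot\mathrm{Black}(x) \rightarrow \lnot\mathrm{Raven}(x)\big).
\]

Both statements are true or false together, so anything that confirms one seems to confirm the other.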

To most of you this will sound somewhat nuts, and that’s what’s paradoxical about it. The argument is logically entirely correct. And yet, it seems intuitively wrong. This is not how we actually go about collecting evidence for hypotheses. So what is going on? Do we maybe not understand how science works after all?

Hempel himself seems to have thought that our intuition is just wrong. But the more commonly accepted explanation is today that our intuition is right, at least in this case. This explanation has it that we think black ravens are better evidence for the hypothesis that ravens are black than non-black non-ravens because there are more non-black non-ravens than there are black ravens, and indeed we have seen a lot of non-black non-ravens in our lives already. So, if we see a green apple, that’s evidence, alright, but it’s not very interesting evidence. It’s not very surprising. It does not tell you much new.

This argument can be made more formal using Bayesian inference. Bayesian inference is a method to update your evaluation of the probability of a hypothesis if you get more information. And indeed, for the raven paradox the calculation shows that the non-black non-ravens are evidence in favor of the hypothesis, but black ravens are better evidence. They help you gain more confidence in your hypothesis.
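Schematically, the Bayesian update for a hypothesis H after seeing evidence E is

\[
P(H \mid E) \;=\; P(H)\,\frac{P(E \mid H)}{P(E)},
\]

and the point of the standard resolution is that for E = “this randomly picked object is a non-black non-raven” the factor P(E|H)/P(E) is only barely above 1, because non-black non-ravens are overwhelmingly common whether or not all ravens are black, while for E = “this randomly picked raven is black” the factor is noticeably larger.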

But. The argument from Bayesian inference expects you to know how many non-black non-ravens there are compared to ravens. You might estimate this to be a large number, but where do you get the evidence for that number from? And how have you evaluated it? What do you even mean by a non-black non-raven? Come to think of it, just how do you define “raven”? And what does it mean for something to be “black”? And so on. You can debate this endlessly, if you want.

But you know me, I don’t want to debate this endlessly, I just want to inspire you to think about this paradox for a moment and maybe confuse some other people with it.

Tuesday, April 28, 2020

New blog feature: Chat with other commenters

Moderating comments on this blog is a constant pain. That’s partly because there are always commenters who ignore the comment rules, thus forcing me to step in and reprimand them. Trust me, I don’t enjoy it. But this is a minor hassle. The real trouble with comment sections, here and elsewhere, is that they fulfil two different roles which conflict with each other.

See, my main interest in the comment section is that it contributes to the topic of my blogpost and adds valuable information for other readers. Many commenters, however, would rather use the comment section to discuss their own ideas or have an exchange about something else entirely. Now, in principle I think it’s great if my writing stimulates discussion, but I don’t want it to clog my threads. This brings me in the unfortunate position that I constantly have to tell people to shut up and go elsewhere instead of encouraging them to discuss.

But I may have stumbled over a solution for this problem.

Late last year, I got an email from Ben Alderoty, who had been working on an app that allows website visitors to have private one-on-one conversations. The app is called “Conversful” and if you’re on a laptop or desktop you should see it appear in the bottom right corner of your screen.



Click on the icon, and you are asked to enter a name or pseudonym and a topic you want to have a chat about. I would suggest that you use the same pseudonym that you use for commenting here, so that others recognize you.

Since blogs tend to collect like-minded people, I hope the chances are good that you will find someone to exchange thoughts with, especially since many of you have gotten to know each other over the years already. This blog receives most of its traffic from the USA, Canada, the UK, and Germany. This means that the traffic is the highest between the morning and early afternoon Eastern Time, or between the early afternoon and evening Central European Time, respectively. During these times you are most likely to meet other commenters here.

I want to emphasize that this is a test-run of software which is not yet fully developed and does not have all the functionality you may want from it. But I believe that this idea has much potential. It essentially turns websites into virtual meeting places, where you can have conversations without blasting your words out to the whole world.


If you have feedback or comments on this feature, please let me know, most easily by leaving a comment on this thread. The feedback you provide will go directly to the Conversful team for them to make improvements to the app. If you are running a website yourself where this app might be useful, please get in touch with Ben at ben@conversful.com. For more information about them and their vision, you can also check their website conversful.com.

No, physicists have not explained why there is more matter than anti-matter in the universe. It’s not possible.

Pretty? Get over it.
You would think that physicists finally understood that insisting the laws of nature must be beautiful is unscientific. Or at least, if they do not understand it, you would think science writers meanwhile understand it. But every couple of months I have to endure yet another media blast about physicists who may have solved a problem that does not exist in the first place.

The most recent installment of this phenomenon is the flood of articles about the recent T2K results that hint at CP violation in the neutrino sector. Yes, this is an interesting result and deserves to be written about. The problem is not the result itself, the problem is scientists and science writers who try to make this result more important than it is.

Truth be told, few people care about CP violation in the neutrino sector. To sell the story, therefore, this turned into a tale about how the results supposedly explain why there is more matter than antimatter in the universe. But: The experiment does not say anything about why there is more matter than anti-matter in the universe. No, it does not. No, not a single bit. If you think it does, you need to urgently switch on your brain. I do not care what your professor said, please think for yourself. Start it right now.

You can see for yourself what the problem is by reading the reports in the media. Not a single one of them explains why anyone should think there ever were equal amounts of matter and anti-matter to begin with. Leah Crane, for example, writes for New Scientist: “Our leading theories tell us that, in the moments after the big bang, there was an equal amount of matter and antimatter.”

But, no, they do not. They cannot. You don’t even need to know what these “leading theories” look like in detail, except that, like all current theories in physics, they work by applying differential equations to initial values. Theories of this type can never explain the initial values themselves. It’s not possible. The theories therefore do not tell us there was an equal amount of matter and antimatter. This amount is a postulate. The initial conditions are always assumptions that the theory does not justify.
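Schematically, what such a theory hands you is something of the form

\[
\frac{d\,n_B(t)}{dt} = F\big(n_B(t), t\big), \qquad n_B(t_0) = \text{put in by hand},
\]

where n_B stands for the net baryon number, or whatever quantity you happen to be tracking. The equations tell you how n_B changes from t_0 onward; they are silent about what its value at t_0 was.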

Instead, physicists think for purely aesthetic reasons it would have been nicer if there was an equal amount of matter and antimatter in the early universe. Trouble is, this does not agree with observation. So then they cook up theories for how you can start with an equal amount of matter and anti-matter and still end up with a universe like the one we see. You find a good illustration for this in a paper by Steigman and Scherrer with the title “Is The Universal Matter - Antimatter Asymmetry Fine Tuned?” (arXiv:1801.10059) They write:
“One possibility is that the Universe actually began in an asymmetric state, with more baryons and antibaryons. This is, however, a very unsatisfying explanation. Furthermore, if the Universe underwent a period of inflation (i.e., very rapid expansion followed by reheating), then any preexisting net baryon number would have been erased. A more natural explanation is that the Universe began in an initally [sic] symmetric state, with equal numbers of baryons and antibaryons, and that it evolved later to produce a net baryon asymmetry.”
They call it an “unsatisfying explanation” to postulate a number, but the supposedly better explanation still postulates a number!

People always complain to me that I am supposedly forgetting that science is all about “explaining”. These complainers do not listen. Nothing is being explained here. The two hypotheses on the table are: “The universe started with a ratio X of matter to anti-matter and the outcome is what we observe.” The other explanation is “The universe started with a ratio Y of matter to anti-matter, then many complicated things happened and the outcome is what we observe.” Neither of these theories explains the value of X or Y. If anything, you should prefer the former hypothesis because it’s clearly the simpler one. In any case, though, as I said, this type of theory cannot explain its own initial values.

But here is the mind-boggling thing: The vast majority of physicists think that the second explanation is somehow better because the number 1.0000000000 is prettier than the number 1.0000000001. That’s what it comes down to. They like some numbers better than others. But, look, a first grader can see the problem. Physicists are wondering why X=1.0000000001. But with the supposedly new explanation you then ask why Y=1.0000000000? How is that an improvement? Answer: It is not.

Let me emphasize once again that the problem here is not the experiment itself. The problem is that physicists mistakenly think something is being explained because they never bothered to think about what it even means to explain something.

You may disagree with me and think that scientists should spend their time trying to prettify the laws of nature, alright. Maybe you think this is something scientists should do with tax money. But I expect that if a topic gets media coverage, then the public hears the truth. So here is the truth: No problem has been solved. The problem is not solvable with the current theories of nature.

Friday, April 24, 2020

Understanding Quantum Mechanics #1: It's not about discreteness

This must be one of the most common misunderstandings about quantum mechanics: that quantum mechanics is about making things discrete. But it is an understandable misunderstanding, because the word “quantum” suggests that quantum mechanics is about small amounts of something. Indeed, if you ask Google for the meaning of quantum, it offers the definition “a discrete quantity of energy proportional in magnitude to the frequency of the radiation it represents.” The problem is that just because energy is proportional to frequency does not mean it is discrete. In fact, in general it is not.
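To see this, it helps to write the relation down. For a quantum of light the energy is

E = h · ν ,

where ν is the frequency and h ≈ 6.626 × 10⁻³⁴ Joule·seconds is Planck’s constant. Since the frequency of freely traveling light can take any positive value, the energy can too; the proportionality by itself does not make anything discrete.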



The reason that quantum mechanics has become associated with discretization is entirely historical. The first signs that something was not quite right with the fundamental theories of the 19th century came from atomic spectral lines. Atoms can absorb and emit light only at certain frequencies. If you think that atoms are basically blobs of particles stuck together, which was what people thought at the time, then this makes absolutely no sense.

According to quantum mechanics, now, the negatively charged electrons occupy shells around the positively charged nucleus. These shells cannot have just any radius; only certain values of the radius are allowed. Just what shape the shells have and how large they are can be calculated with quantum mechanics. And this explains why atoms can only absorb and emit light of certain frequencies: because the energy of the light must fit the energy that moves an electron from one shell to another.
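For the hydrogen atom, for example, the allowed energies of these shells follow the textbook formula

E_n ≈ −13.6 eV / n² ,   n = 1, 2, 3, …

and the atom can only absorb or emit light whose frequency ν satisfies h · ν = E_m − E_n, the difference between two of these allowed values.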

So, yes, the energies of electrons which are bound to atoms are discrete. But the energies of electrons, or of any particle really, are not always discrete, and neither are other measurable quantities. The energy of a photon traveling through empty space, for example, can have any value according to quantum mechanics. The energy is not discrete. Or, if you look at an electron in the conduction band of a metal, it can be at any position. The position is not discrete.

What, then, does it mean to have a quantum theory as opposed to a non-quantum theory? A quantum theory is one in which you have observable quantities that obey Heisenberg’s uncertainty principle. Mathematically, this is not entirely the correct definition. More precisely, a quantum theory has operators for observables which do not commute. But as far as the physical consequences are concerned, the uncertainty principle is what tells quantum from non-quantum theories. The other important property of quantum theories is that you can have entanglement. We will talk about what this means another time.
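Written out for position and momentum, these two statements are

Δx · Δp ≥ ħ/2 ,   with ħ = h/2π ,

and, in terms of operators, x̂ p̂ − p̂ x̂ = iħ ≠ 0, which is exactly the non-commutativity I just mentioned.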

For today, the lesson to take away is that quantizing a theory does not mean you make it discrete. This is important also when it comes to the quantization of gravity. Quantizing gravity does not necessarily mean that space and time have to be discrete. Thanks for watching, see you next week.

PS: I did not suddenly lose half my hair; I just messed up my lighting.

Thursday, April 16, 2020

How Heisenberg Became Uncertain

I have decided that my YouTube channel lacks a history part because there is so much we can learn from the history of science. So, today I want to tell you a story. It’s the story of how Werner Heisenberg got the uncertainty principle named after him.



Heisenberg was born in 1901 in the German city of Würzburg. He went on to study physics in Munich. In 1923, Heisenberg was scheduled for his final oral examination to obtain his doctorate. He passed mathematics, theoretical physics, and astronomy just fine, but then he ran into trouble with experimental physics.

His examination in experimental physics was by Wilhelm Wien. That’s the guy who has Wien’s law named after him. Wien, as an experimentalist, had required that Heisenberg do a “Praktikum”, which is a series of exercises in physics experimentation; it’s lab work for beginners, basically. But the university lacked some equipment and Heisenberg was not interested enough to find out where to get it. So he just moved on to other things without looking much into the experiments he was supposed to do. That, as it turned out, was not a good idea.

When Heisenberg’s day of the experimental exam came, it did not go well. In their book “The Historical Development of Quantum Theory”, Mehra and Rechenberg recount:
“Wien was annoyed when he learned in the examination that Heisenberg had done so little in the experimental exercise given to him. He then began to ask [Heisenberg] questions to gauge his familiarity with the experimental setup; for instance, he wanted to know what the resolving power of the Fabry-Perot interferometer was... Wien had explained all this in one of his lectures on optics; besides, Heisenberg was supposed to study it anyway... But he had not done so and now tried to figure it out unsuccessfully in the short time available during the examination. Wien... asked about the resolving power of a microscope; Heisenberg did not know that either. Wien questioned him about the resolving power of telescopes, which [Heisenberg] also did not know.”
What happened next? Well, Wien wanted to fail Heisenberg, but the theoretical physicist Arnold Sommerfeld came to Heisenberg’s help. Heisenberg had excelled in the exam on theoretical physics, and so Sommerfeld put in a strong word in favor of giving Heisenberg his PhD. With that, Heisenberg passed the doctoral examination, though he got the lowest possible grade.

But this was not the end of the story. Heisenberg was so embarrassed about his miserable performance that he sat down to learn everything about telescopes and microscopes that he could find. This was in the early days of quantum mechanics and it led him to wonder if there is a fundamental limit to how well one can resolve structures with a microscope. He went about formulating a thought experiment which is now known as “Heisenberg’s Microscope.”

This thought experiment was about measuring a single electron, something which was actually not possible at the time. The smallest distance you can resolve with a microscope, let us call this Δ x, depends on both the wavelength of the light that you use, I will call that λ, and the opening angle of the microscope, ε. The smallest resolvable distance is proportional to the wavelength, so a smaller wavelength allows you to resolve smaller structures. And it is inversely proportional to the sine of the opening angle. A smaller opening angle makes the resolution worse.

But, said Heisenberg, if light is made of particles, that’s the photons, and I try to measure the position of an electron with light, then the photons will kick the electron. And you need some opening angle for the microscope to work, which means you don’t know exactly from which direction the photon came and therefore how hard it kicked the electron. So the act of measuring the position of the electron with a photon makes me less certain about the electron’s momentum.

Heisenberg estimated that the momentum that would be transferred from the photon to the electron is proportional to the energy of the photon, which means inversely proportional to the wavelength, and proportional to the sine of the opening angle. So if we call that momentum uncertainty Δ p, we have Δ p is proportional to sine ε over λ. And the constant in front of this is Planck’s constant, because that gives you the relation between the energy and the wavelength of the photon.

Now you can see that if you multiply the two uncertainties, the one in position and the one in momentum of the electron, you find that the result is just Planck’s constant. This is Heisenberg’s famous uncertainty principle. The more you know about the position of the particle, the less you know about the momentum, and the other way round.
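In symbols, collecting the estimates from the last few paragraphs:

Δx ≈ λ / sin ε ,   Δp ≈ h · sin ε / λ   ⟹   Δx · Δp ≈ h .

The wavelength and the opening angle cancel in the product, which is why the result does not depend on the details of the microscope.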

We know today that Heisenberg’s argument for microscopes is not quite correct but, remarkably enough, the conclusion is correct. Indeed, this uncertainty has nothing to do with microscopes in particular. Heisenberg’s uncertainty is far more than that: It’s a general property of nature. And it does not only hold for position and momenta but for many other pairs of quantities.

Many years later Heisenberg wrote about his insight: “So one might even assume, that in the work on the gamma-ray microscope and the uncertainty relation I used the knowledge which I had acquired by this poor examination.”

I like this story because it tells us that if there is something you don’t understand, then don’t be ashamed and run away from it, but dig into it. Maybe you will find that no one really understands it and leave your mark in science.

Thursday, April 09, 2020

What is Reductionism?

Last week, we spoke about emergence, this week, we will speak about the opposite: Reductionism. Reductionism, loosely speaking, is the idea that you can understand things by taking them apart into smaller things. This definition of reductionism, as we will see, is not quite correct, but it’s not too far off. Before we get to the details, however, a few words about how enormously important reductionism is for scientific understanding.


A lot of people seem to think that reductionism is a philosophy. But it most definitely is not. That reductionism is correct is a hypothesis about the properties of nature and it is a hypothesis that has so far been supported by every single experiment that has ever been done. I cannot think of *any scientific fact that is better established than that the properties of the constituents of a system determine how the system works.

To be sure, taking things apart into pieces to understand how they work is not always a good idea. Even leaving aside that taking apart a living organism typically kills it, the problem is that the connection between the theory for the constituents and the theory for the whole system may just be too complicated to be useful. Indeed, this is more often the case than not, which is why figuring out how an organism works from studying its components is not a fruitful strategy. Studying the living organism as a whole is dramatically more useful, so this is what scientists normally do in practice.

But if you really want to *understand what an organism does and how it does it, you will look for an explanation on the level of constituents. Like this part sends a signal to that part. This part stores and releases energy. This piece produces something and does this to another piece, and so on. If we want to really understand something, we look for a reductionist theory. Why? Because we know from experience that reductionist theories have more explanatory power. They lead to new predictions rather than just allowing us to reproduce already observed regularities.

Indeed, the whole history of science until now has been a success story of reductionism. Biology can be reduced to chemistry, chemistry can be reduced to atomic physics, and atoms are made of elementary particles. This is why we have computers today. But, again, this does not mean it is always practical to use a theory for the constituents to describe the composite system. For example, you would not use the standard model of particle physics to predict election outcomes. And why not? Because that would not be useful. The computation would take too long. So what’s the use of reductionism then? The use is that at each level of reduction that scientists have discovered, they gained new insights about how nature works and that has enabled us to make both intellectual and technological progress.

But here is the important point. There are two different types of reductionism. One is called methodological reductionism, the other one theory reductionism. Methodological reductionism is about the properties of the real world. It’s about taking things apart into smaller things and finding that the smaller things determine the behavior of the whole. Theory reductionism, on the other hand, means that you have levels of theories where the higher – emergent – levels can be derived from the lower – more fundamental – levels. But in this case, a high level does not necessarily mean the theory is about large things, and a low level does not necessarily mean it’s about small things.

So what type of reductionism is it that has been so successful in the history of science? The funny thing is that it’s a combination of both. Methodological reductionism has so far gone hand in hand with theory reductionism. As we have looked at smaller things we have found more fundamental theories.

But this does not necessarily have to remain this way. There is no reason to think that the next better theory of nature will be found by studying shorter distances. Just because the two types of reductionism have been tied together for a while does not mean it will remain this way.

Indeed, some of the biggest currently open problems in physics manifest themselves on large scales, not on small scales. Besides dark energy and dark matter there is also the measurement problem in quantum mechanics. I have told you about those problems in some earlier videos. They are not in any obvious way short-distance phenomena.

So the next time a particle physicist tries to tell you that we need higher energies to probe shorter distances because that’s where progress will come from, remind them that methodological reductionism is not the same as theory reductionism.

Friday, April 03, 2020

What is emergence? What does “emergent” mean?

The word “emerging” is often used colloquially to mean something like “giving rise to” or “becoming apparent”. But emerging, emergent, and emergence are also technical terms. So, today I want to explain what physicists mean by emergence, which is also the way that the expression is often, but not always, used by philosophers.



Emergent, broadly speaking, refers to novel types of behavior in systems with many interacting constituents. A good example is the “La ola” wave that you sometimes see in the audience at sporting events. It’s not something you can do alone. It only becomes possible because of the interaction between people and their neighbors.

Indeed, something very similar happens in many condensed-matter systems, where the interactions between atomic constituents give rise to certain types of collective behavior. These can be waves, like with la ola. The simplest example of this is sound waves. Sound waves are really just a simple, collective description for atoms in a gas that move periodically and so create a propagating mode.

But we know that in quantum mechanics waves are also particles and the other way round. This is why in condensed matter systems one can have “quasi-particles” which behave like particles – with quantum properties and wave-behavior and all that – but are actually a collective that moves together. Quasi-particles are emergent from the interactions of many fundamental particles.

And this is really the most relevant property of emergence. Something is emergent if it comes about from the collective behavior of many constituents of a system, be that people or atoms. If something is emergent, it does not even make sense to speak about it for individual elements of the system.

There are a lot of quantities in physics which are emergent. Think for example of conductivity. Conductivity is the ability of a system to transport currents from one end to another. It’s a property of materials. But it does not make sense to speak of the conductivity of a single electron. It’s the same for viscosity, elasticity, even something as seemingly simple as the color of a material. Color is not a property you find if you take apart a painting into elementary particles. It comes from the band structure of molecules. It’s an emergent property.

You will find that philosophers discuss two types of emergence, that is, “strong emergence” and “weak emergence”. What I just talked about is “weak emergence”. Weak emergence means that the emergent property can be derived from the properties of the system’s constituents and the interactions between the constituents. An electron or a quark may not have a conductivity, but in principle you can calculate how they form atoms, and molecules, and metals, and then the conductivity is a consequence of this.

In physics the only type of emergence we have is weak emergence. With strong emergence philosophers refer to the hypothetical possibility that a system with many constituents displays a novel behavior which cannot be derived from the properties and the interactions of the constituents. While this is logically possible, there is not a single known example for this in the real world.

The best analogy I can think of is photographic mosaics, that is, photos made up of smaller photos. If I gave you all the individual photos and their properties, you’d have no idea what the “emergent” picture would be. However, this example is hardly a natural phenomenon. To make a photographic mosaic, you start with the emergent image you want to get and then look for photos that will fit. In other words, the “strong emergence” which you have here works only thanks to an “intelligent designer” who had a masterplan.

The problem with strong emergence is not only that we have no scientific theory for it, it’s worse. Strong emergence is incompatible with what we already know about the laws of nature. That’s because if you think that strong emergence can really happen, then this necessarily implies that there will be objects in this world whose behavior is in conflict with the standard model of particle physics. If that wasn’t so, then really it wouldn’t be strong emergence.

A lot of people seem to think that consciousness or free will should be strongly emergent, but there is absolutely no reason to think that this is the case. For all we currently know, consciousness is weakly emergent, like any other collective phenomenon in large systems.

Monday, March 23, 2020

Are dark energy and dark matter scientific?

I have noticed that each time I talk or write about dark energy or dark matter, I get a lot of comments from people saying, oh that stuff doesn’t exist, you can’t just invent something invisible each time there’s an inconvenient measurement. Physicists have totally lost it. This is such a common misunderstanding that I thought I would dedicate a video to sorting it out. Dark energy and dark matter are entirely normal, and perfectly scientific, hypotheses. They may turn out to be wrong, but that doesn’t mean it’s wrong to consider them in the first place.


Before I say anything else, here is a brief reminder of what dark energy and dark matter are. Dark energy is what speeds up the expansion of the universe; it does not behave like normal energy. Dark matter has a gravitational pull like normal matter, but you can’t see it. Dark energy and dark matter are two different things. They may be related, but currently we have no good reason to think that they are related.

Why have physicists come up with this dark stuff? Well, we have two theories to describe the behavior of matter. One is the standard model of particle physics, which describes the elementary particles and the forces between them, except gravity. The other is Einstein’s theory of general relativity, which describes the gravitational force that is generated by all types of matter and energy. The problem is, if you use Einstein’s theory with only the matter that is in the standard model, this does not describe what we see. The predictions you get from combining those two theories do not fit the observations.

It’s not only one prediction that does not fit to observations, it’s many different ones. For dark matter it’s that galaxies rotate too fast, galaxies in clusters move too fast, gravitational lenses bend light too strongly, and neither the cosmic microwave background nor galactic filaments would look like we observe them without dark matter. I explained this in detail in an earlier video.

For dark energy the smoking-gun signature is that the expansion of the universe is getting faster, which you can find out by observing how fast supernovae in other galaxies speed away from us. The evidence for dark energy is not quite as solid as that for dark matter. I explained this too in an earlier video.

So, what’s the scientist to do when they are faced with such a discrepancy between theory and observation? They look for new regularities in the observations and try to find a simple way to explain them. And that’s what dark energy and dark matter are. They are extremely simple terms to add to Einstein’s theory that explain observed regularities and make the theory agree with the data again.

This is easy to see when it comes to dark energy because the presently accepted version of dark energy is just a constant, the so-called cosmological constant. This cosmological constant is just a constant of nature and it’s a free parameter in General Relativity. Indeed, it was introduced already by Einstein himself. And what explanation for an observation could possibly be simpler than a constant of nature?
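For the record, this is where the constant sits in the equations. With the cosmological constant Λ included, Einstein’s field equations read

R_µν − (1/2) R g_µν + Λ g_µν = (8πG/c⁴) T_µν ,

so the entire “dark energy” of the standard cosmological model is the single number Λ in the term on the left.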

For dark matter it’s not quite as simple as that. I frequently hear the criticism that dark matter explains nothing because it can be distributed in arbitrary amounts wherever needed, and therefore can fit any observation. But that’s just wrong. It’s wrong for two reasons.

First, the word “matter” in “dark matter” doesn’t just vaguely mean “stuff”. It’s a technical term that means “stuff with a very specific behavior”. Dark matter behaves like normal matter, except that, for all we currently know, it doesn’t have internal pressure. You cannot explain any arbitrary observation by attributing it to matter. It just happens to be the case that the observations we do have can be explained this way. That’s a non-trivial fact.

Let me emphasize that dark matter in cosmology is a kind of fluid. It does not have any substructure. Particle physicists, needless to say, like the idea that dark matter is made of a particle. This may or may not be the case. We currently do not have any observation that tells us dark matter must have a microscopic substructure.

The other reason why it’s wrong to say that dark matter can fit anything is that you cannot distribute it as you wish. Dark matter starts with a random distribution in the early universe. As the universe expands, and matter in it cools, dark matter starts to clump and it forms structures. Normal matter then collects in the gravitational potentials generated by the dark matter. So, you do not get to distribute matter as you wish. It has to fit with the dynamical evolution of the universe.

This is why dark matter and dark energy are good scientific explanations. They are simple and yet explain a lot of data.

Now, to be clear, this is the standard story. If you look into the details it is, as usual, more complicated. That’s because the galactic structures that form with dark matter actually do not fit the data all that well, and they do not explain some regularities that astronomers have observed. So, there are good reasons for being skeptical that dark matter is ultimately the right story, but it isn’t as simple as just saying “it’s unscientific”.

Monday, March 16, 2020

Unpredictability, Undecidability, and Uncomputability

There are quite a number of mathematical theorems that prove the power of mathematics has its limits. But how relevant are these theorems for science? In this video I want to briefly summarize an essay that I just submitted to the essay contest of the Foundational Questions Institute. This year the topic is “Unpredictability, undecidability, and uncomputability”.


Take Gödel’s theorem. Gödel’s theorem says that if a consistent set of axioms is sufficiently complex, you can formulate statements which can neither be proved nor disproved from those axioms – you can’t know whether they are right or wrong. So, mathematics is incomplete in this sense, and that is certainly a remarkable insight. But it’s irrelevant for scientific practice. That’s because one can always extend the original set of axioms with another axiom that simply says whether or not the previously undecidable statement is true.

How would we deal with mathematical incompleteness in physics? Well, in physics, theories are sets of mathematical axioms, like the ones that Gödel’s theorem deals with, but that’s not all. Physical theories also come with a prescription for how to identify mathematical structures with measurable quantities. Physics, after all, is science, not mathematics. So, if we had any statement that was undecidable, we’d decide it by making a measurement, and then add an axiom that agrees with the outcome. Or, if the undecidable statement has no observable consequences, then we can just as well ignore it.

Mathematical theorems about uncomputability are likewise scientifically irrelevant but for a different reason. The problem with uncomputability is that it always comes from something being infinite. However, nothing real is infinite, so these theorems do not actually apply to anything we could find in nature.

The most famous example is Turing’s “Halting Problem”. Think of any algorithm that computes something. It will either halt after a finite time and give you a result, or not. Turing says, now let us try to find a meta-algorithm that can tell us whether another algorithm halts or doesn’t halt. Then he proves that there is no such meta-algorithm which – and here is the important point – works for all possible algorithms. That’s an infinitely large class. In reality we will never need an algorithm that answers infinitely many questions.
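Here is a minimal Python sketch of the diagonal argument behind Turing’s proof. The function names (halts, paradox) are my own choices for illustration; the whole point is that the assumed oracle cannot actually be implemented.

def halts(program, argument):
    # Hypothetical "meta-algorithm": returns True if program(argument) eventually
    # halts and False if it runs forever. Turing's theorem says no implementation
    # of this can be correct for all possible programs.
    raise NotImplementedError

def paradox(program):
    # Feed a program to the oracle with itself as input, then do the opposite
    # of whatever the oracle predicts.
    if halts(program, program):
        while True:   # oracle says it halts, so loop forever
            pass
    else:
        return        # oracle says it runs forever, so halt immediately

# Now ask: does paradox(paradox) halt? If the oracle answers True, paradox(paradox)
# loops forever; if it answers False, it halts at once. Either way the oracle is
# wrong, so no such meta-algorithm can exist for the infinite class of all programs.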

Another, not quite as well-known, example is Wang’s domino problem. Wang said, take any set of squares with a color on each side. Can you use them to fill up an infinite plane without gaps, such that the colors of touching edges always match? It turns out that the question is undecidable for arbitrary sets of squares. But then, we never have to tile infinite planes.

We also know that most real numbers are uncomputable. They are uncomputable in the sense that there is no algorithm that will approximate them to a given, finite precision in finite time. But in science, we never deal with real numbers. We deal with numbers that have finitely many digits and that have error bars. So this is another interesting mathematical curiosity, but it has no scientific relevance.

What about unpredictability? Well, quantum mechanics has an unpredictable element, as I explained in an earlier video. But this unpredictability is rather uninteresting, since it’s there by postulate. More interesting is the unpredictability in chaotic systems.

Some chaotic systems suffer from a peculiar type of unpredictability. Even if you know the initial conditions with arbitrary precision, you can only make predictions for a finite amount of time. Whether this actually happens for any system in the real world is not presently clear.
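For contrast, here is a toy Python sketch (my own example, not from the video) of the familiar, weaker kind of chaotic unpredictability, where more precise initial data does buy you more prediction time, just not very much: two trajectories of the logistic map that start a trillionth apart become completely different after a few dozen steps.

def logistic(x, r=4.0):
    # One step of the logistic map, a standard textbook example of deterministic chaos.
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12          # two initial conditions, identical to twelve digits
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:
        print(f"trajectories visibly disagree after {step} steps")
        break

Because the separation grows roughly exponentially, each additional digit of initial precision only postpones the disagreement by a few steps.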

The most famous candidate for such an unpredictable equation is the Navier-Stokes equation that is used, among other things, to make the weather forecast. Whether this equation sometimes leads to unpredictable situations is one of the Clay Institute’s Millennium Problems, one of the hardest open mathematical problems today.

But let us assume for a moment this problem was solved and it turned out that, with the Navier-Stokes equation, it is indeed impossible, in some circumstances, to make predictions beyond a finite time. What would this tell us about nature? Not a terrible lot, because we know already that the Navier-Stokes equation is only an approximation. Really, gases and fluids are made of particles and should be described by quantum mechanics. And quantum mechanics does not have the chaotic type of unpredictability. Also, maybe quantum mechanics itself is not ultimately the right theory. So really, we can’t tell whether nature is predictable or not.

This is a general problem with applying impossibility-theorems to nature. We can never know whether the mathematical assumptions that we make in physics are really correct, or if not one day they will be replaced by something better. Physics is science, not math. We use math because it works, not because we think nature is math.

All of this makes it sound like undecidability, unpredictability, and uncomputability are mathematical problems and irrelevant for science. But that would be jumping to conclusions. They are relevant for science. But they are relevant not because they tell us something deep about nature. They are relevant because in practice we use theories that may have these properties. So the theorems tell us what we can do with our theories.

An example. Maybe the Navier-Stokes equation is not fundamentally the right equation for weather predictions. But it is the one that we currently use. And therefore, knowing when an unpredictable situation is coming up matters. Indeed, we might want to know when an unpredictable situation is coming up so that we can avoid it. This is not really feasible for the weather, but it may be feasible for another partly chaotic system, that is, the plasma in nuclear fusion plants.

The plasma sometimes develops instabilities that can damage the containment vessel. Therefore, if an instability is coming on, the fusion process must be shut down quickly. This makes fusion very inefficient. If we understood better when unpredictable situations are about to occur, we might be able to prevent them from happening in the first place. This would also be useful, for example, for the financial system.

In summary, mathematical impossibility-theorems are relevant in science, not because they tell us something about nature itself, but because we use mathematics in practice to understand observations, and the theorems tell us what we can expect of our theories.

You can read all the essays in the contest over at the FQXi website. The deadline was moved to April 24, so you still have time to submit your essay!

Saturday, March 14, 2020

Coronavirus? I have nothing to add.

I keep getting requests from people that I comment on the coronavirus pandemic, disease models, or measures taken to contain and mitigate the outbreak. While I appreciate the faith you put into me, it also leaves me somewhat perplexed. I am not an epidemiologist; I’m a physicist. I have nothing original to say about coronavirus. Sure, I could tell you what I have taken away from other people’s writings – a social media strain of Chinese Whispers, if you wish – but I don’t think this aids information flow, it merely introduces mistakes.

I will therefore keep my mouth shut and just encourage you to get your information from more reliable sources. When it comes to public health, I personally prefer institutional and governmental websites over the mass media, largely because the media has an incentive to make the situation sound more dramatic than it really is. In Germany, I would suggest the Federal Ministry of Health (in English) and the Robert Koch Institute (in German). And regardless of where you live, the websites of the WHO are worth checking out.

I have not come across a prediction for the spread of the disease that looked remotely reliable, but Our World in Data has some neat visualization tools for the case numbers from the WHO (example below).



Having said that, what I can do is offer you a forum to commiserate. I got caught in the midst of organizing a workshop that was supposed to take place in May in the UK. We monitored the situation in Europe for the past weeks, but eventually had to conclude there’s no way around postponing the workshop.

Almost everyone from overseas had to cancel their participation because they weren’t allowed to travel or, if they had travelled, their health insurance wouldn’t have covered them had they contracted the virus. At present only Italy is considered a high-risk country in Europe. But it’s likely that in the coming weeks several other European countries will be in a similar situation, which will probably bring more travel restrictions. Finally, most universities here in Germany and in the UK have for now issued a policy to cancel all kinds of meetings on their premises, so we might have ended up without a room for the event.

We presently don’t know when the workshop will take place, but hopefully some time in the fall.

I was supposed to be on a panel discussion in Zurich next week, but that was also cancelled. I am scheduled to give a public lecture in two weeks which has not been cancelled. This comes as something of a surprise to me because it’s in the German state that, so far, has been hit the worst by the coronavirus. I kind of expect this to be cancelled as well.

Where we live, most employers have asked employees to work from home if at all possible. Schools will be closed next week until after the Easter break – for now. All large events have been cancelled. This puts us in a situation that many people are facing right now: We’ll be stuck at home with bored children. I am actually on vacation for the next two weeks, but it looks like it won’t be much of a vacation.

I’m not keen on contracting an infectious disease but believe sooner or later we’ll get it anyway. Even if there’s a vaccine, it may not work for variants of the original strain. We are lucky in that no one in our close family has a pre-existing condition that would put them at an elevated risk, though we worry of course about the grandparents. Shopping panic here has been moderate; the demand for disinfectants, soap and, yes, toilet paper seems to be abnormally high, but that’s about it. By and large I think the German government has been handling the situation well, and Trump’s travel ban is doing Europe a great favor because shit’s about to hit the fan over there.

In any case, I feel like there isn’t much we can do right now other than washing our hands and not coughing in other people’s faces. I have two papers to finish which will keep me busy for the next weeks. Wherever you are, I hope you stay safe and healthy.

Update: As anticipated, I just got an email saying that the public lecture in April has also been cancelled.