Thursday, April 09, 2020

What is Reductionism?

Last week, we spoke about emergence; this week, we will speak about the opposite: reductionism. Reductionism, loosely speaking, is the idea that you can understand things by taking them apart into smaller things. This definition of reductionism, as we will see, is not quite correct, but it’s not too far off. Before we get to the details, however, a few words about how enormously important reductionism is for scientific understanding.


A lot of people seem to think that reductionism is a philosophy. But it most definitely is not. That reductionism is correct is a hypothesis about the properties of nature, and it is a hypothesis that has so far been supported by every single experiment that has ever been done. I cannot think of any scientific fact that is better established than that the properties of the constituents of a system determine how the system works.

To be sure, taking things apart into pieces to understand how they work is not always a good idea. Even leaving aside that taking apart a living organism typically kills it, the problem is that the connection between the theory for the constituents and the theory for the whole system may just be too complicated to be useful. Indeed, this is more often the case than not, which is why figuring out how an organism works from studying its components is not a fruitful strategy. Studying the living organism as a whole is dramatically more useful, so this is what scientists normally do in practice.

But if you really want to understand what an organism does and how it does it, you will look for an explanation on the level of constituents. Like this part sends a signal to that part. This part stores and releases energy. This piece produces something and does this to another piece, and so on. If we want to really understand something, we look for a reductionist theory. Why? Because we know from experience that reductionist theories have more explanatory power. They lead to new predictions rather than just allowing us to reproduce already observed regularities.

Indeed, the whole history of science until now has been a success story of reductionism. Biology can be reduced to chemistry, chemistry can be reduced to atomic physics, and atoms are made of elementary particles. This is why we have computers today. But, again, this does not mean it is always practical to use a theory for the constituents to describe the composite system. For example, you would not use the standard model of particle physics to predict election outcomes. And why not? Because that would not be useful. The computation would take too long. So what’s the use of reductionism then? The use is that at each level of reduction that scientists have discovered, they gained new insights about how nature works, and that has enabled us to make both intellectual and technological progress.

But here is the important point. There are two different types of reductionism. One is called methodological reductionism, the other one theory reductionism. Methodological reductionism is about the properties of the real world. It’s about taking things apart into smaller things and finding that the smaller things determine the behavior of the whole. Theory reductionism, on the other hand, means that you have levels of theories where the higher – emergent – levels can be derived from the lower – more fundamental – levels. But in this case, a high level does not necessarily mean the theory is about large things, and a low level does not necessarily mean it’s about small things.

So what type of reductionism is it that has been so successful in the history of science? The funny thing is that it’s a combination of both. Methodological reductionism has so far gone hand in hand with theory reductionism. As we have looked at smaller things we have found more fundamental theories.

But this does not necessarily have to remain this way. There is no reason to think that the next better theory of nature will be found by studying shorter distances. Just because the two types of reductionism have been tied together for a while does not mean it will remain this way.

Indeed, some of the biggest currently open problems in physics manifest themselves on large scales, not on small scales. Besides dark energy and dark matter there is also the measurement problem in quantum mechanics. I have told you about those problems in some earlier videos. They are not in any obvious way short-distance phenomena.

So the next time a particle physicist tries to tell you that we need higher energies to probe shorter distances because that’s where progress will come from, remind them that methodological reductionism is not the same as theory reductionism.

Friday, April 03, 2020

What is emergence? What does “emergent” mean?

The word “emerging” is often used colloquially to mean something like “giving rise to” or “becoming apparent”. But emerging, emergent, and emergence are also technical terms. So, today I want to explain what physicists mean by emergence, which is also the way that the expression is often, but not always, used by philosophers.



Emergent, broadly speaking, refers to novel types of behavior in systems with many interacting constituents. A good example is the “la ola” wave that you sometimes see in the audience at sporting events. It’s not something you can do alone. It only becomes possible because of the interaction between people and their neighbors.

Indeed, something very similar happens in many condensed-matter systems, where the interactions between atomic constituents give rise to certain types of collective behavior. These can be waves, like la ola. The simplest example of this is sound waves. Sound waves are really just a simple, collective description for atoms in a gas that move periodically and so create a propagating mode.

But we know that in quantum mechanics waves are also particles and the other way round. This is why in condensed matter systems one can have “quasi-particles” which behave like particles – with quantum properties and wave-behavior and all that – but are actually a collective that moves together. Quasi-particles are emergent from the interactions of many fundamental particles.

And this is really the most relevant property of emergence. Something is emergent if it comes about from the collective behavior of many constituents of a system, be that people or atoms. If something is emergent, it does not even make sense to speak about it for individual elements of the system.

There are a lot of quantities in physics which are emergent. Think for example of conductivity. Conductivity is the ability of a system to transport currents from one end to another. It’s a property of materials. But it does not make sense to speak of the conductivity of a single electron. It’s the same for viscosity, elasticity, even something as seemingly simple as the color of a material. Color is not a property you find if you take apart a painting into elementary particles. It comes from the band structure of molecules. It’s an emergent property.

You will find that philosophers discuss two types of emergence, that is, “strong emergence” and “weak emergence”. What I just talked about is “weak emergence”. Weak emergence means that the emergent property can be derived from the properties of the system’s constituents and the interactions between the constituents. An electron or a quark may not have a conductivity, but in principle you can calculate how they form atoms, and molecules, and metals, and then the conductivity is a consequence of this.

In physics, the only type of emergence we have is weak emergence. With strong emergence, philosophers refer to the hypothetical possibility that a system with many constituents displays a novel behavior which cannot be derived from the properties and the interactions of the constituents. While this is logically possible, there is not a single known example of this in the real world.

The best analogy I can think of is photographic mosaics, that is, photos made up of smaller photos. If I gave you all the individual photos and their properties, you’d have no idea what the “emergent” picture would be. However, this example is hardly a natural phenomenon. To make a photographic mosaic, you start with the emergent image you want to get and then look for photos that will fit. In other words, the “strong emergence” which you have here works only thanks to an “intelligent designer” who had a masterplan.

The problem with strong emergence is not only that we have no scientific theory for it; it’s worse. Strong emergence is incompatible with what we already know about the laws of nature. That’s because if you think that strong emergence can really happen, then this necessarily implies that there will be objects in this world whose behavior is in conflict with the standard model of particle physics. If that wasn’t so, then really it wouldn’t be strong emergence.

A lot of people seem to think that consciousness or free will should be strongly emergent, but there is absolutely no reason to think that this is the case. For all we currently know, consciousness is weakly emergent, like any other collective phenomenon in large systems.

Monday, March 23, 2020

Are dark energy and dark matter scientific?

I have noticed that each time I talk or write about dark energy or dark matter, I get a lot of comments from people saying, oh, that stuff doesn’t exist, you can’t just invent something invisible each time there’s an inconvenient measurement. Physicists have totally lost it. This is such a common misunderstanding that I thought I would dedicate a video to sorting this out. Dark energy and dark matter are entirely normal, and perfectly scientific, hypotheses. They may turn out to be wrong, but that doesn’t mean it’s wrong to consider them in the first place.


Before I say anything else, here is a brief reminder what dark energy and dark matter are. Dark energy is what speeds up the expansion of the universe; it does not behave like normal energy. Dark matter has a gravitational pull like normal matter, but you can’t see it. Dark energy and dark matter are two different things. They may be related, but currently we have no good reason to think that they are related.

Why have physicists come up with this dark stuff? Well, we have two theories to describe the behavior of matter. One is the standard model of particle physics, which describes the elementary particles and the forces between them, except gravity. The other is Einstein’s theory of general relativity, which describes the gravitational force that is generated by all types of matter and energy. The problem is, if you use Einstein’s theory for only the matter that is in the standard model, this does not describe what we see. The predictions you get from combining those two theories do not fit the observations.

It’s not only one prediction that does not fit the observations, it’s many different ones. For dark matter, it’s that galaxies rotate too fast, galaxies in clusters move too fast, gravitational lenses bend light too strongly, and neither the cosmic microwave background nor galactic filaments would look the way we observe them without dark matter. I explained this in detail in an earlier video.

For dark energy, the smoking-gun signature is that the expansion of the universe is getting faster, which you can find out by observing how fast supernovae in other galaxies speed away from us. The evidence for dark energy is not quite as solid as that for dark matter. I explained this too in an earlier video.

So, what’s a scientist to do when faced with such a discrepancy between theory and observation? They look for new regularities in the observations and try to find a simple way to explain them. And that’s what dark energy and dark matter are. They are extremely simple terms to add to Einstein’s theory that explain the observed regularities and make the theory agree with the data again.

This is easy to see when it comes to dark energy because the presently accepted version of dark energy is just a constant, the so-called cosmological constant. This cosmological constant is just a constant of nature and it’s a free parameter in General Relativity. Indeed, it was introduced already by Einstein himself. And what explanation for an observation could possibly be simpler than a constant of nature?
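For reference (this is the textbook form, not something spelled out above), the cosmological constant is the single extra term, usually denoted Λ, that gets added to Einstein’s field equations:

\[ G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} \]

The left side encodes the curvature of space-time, the right side the matter and energy in it. Dark energy, in its presently accepted version, is nothing more than a measured, nonzero value for Λ.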

For dark matter it’s not quite as simple as that. I frequently hear the criticism that dark matter explains nothing because it can be distributed in arbitrary amounts wherever needed, and therefore can fit any observation. But that’s just wrong. It’s wrong for two reasons.

First, the word “matter” in “dark matter” doesn’t just vaguely mean “stuff”. It’s a technical term that means “stuff with a very specific behavior”. Dark matter behaves like normal matter, except that, for all we currently know, it doesn’t have internal pressure. You cannot explain any arbitrary observation by attributing it to matter. It just happens to be the case that the observations we do have can be explained this way. That’s a non-trivial fact.
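To make the technical meaning of “matter” concrete, here is the standard bookkeeping (added for reference): in cosmology, a fluid is characterized by its equation-of-state parameter

\[ w = \frac{p}{\rho c^2}, \]

where p is the pressure and ρc² the energy density. Pressureless (“cold”) matter has w = 0, radiation has w = 1/3, and a cosmological constant has w = −1. Calling dark matter “matter” is the claim that it behaves as a w ≈ 0 fluid, which is a restrictive statement, not a blank check.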

Let me emphasize that dark matter in cosmology is a kind of fluid. It does not have any substructure. Particle physicists, needless to say, like the idea that dark matter is made of a particle. This may or may not be the case. We currently do not have any observation that tells us dark matter must have a microscopic substructure.

The other reason why it’s wrong to say that dark matter can fit anything is that you cannot distribute it as you wish. Dark matter starts with a random distribution in the early universe. As the universe expands, and matter in it cools, dark matter starts to clump and it forms structures. Normal matter then collects in the gravitational potentials generated by the dark matter. So, you do not get to distribute matter as you wish. It has to fit with the dynamical evolution of the universe.

This is why dark matter and dark energy are good scientific explanations. They are simple and yet explain a lot of data.

Now, to be clear, this is the standard story. If you look into the details it is, as usual, more complicated. That’s because the galactic structures that form with dark matter actually do not fit the data all that well, and they do not explain some regularities that astronomers have observed. So, there are good reasons for being skeptical that dark matter is ultimately the right story, but it isn’t as simple as just saying “it’s unscientific”.

Monday, March 16, 2020

Unpredictability, Undecidability, and Uncomputability

There are quite a number of mathematical theorems that prove the power of mathematics has its limits. But how relevant are these theorems for science? In this video I want to briefly summarize an essay that I just submitted to the essay contest of the Foundational Questions Institute. This year the topic is “Unpredictability, undecidability, and uncomputability”.


Take Gödel’s theorem. Gödel’s theorem says that if a consistent set of axioms is sufficiently complex, you can formulate statements for which you cannot decide, from those axioms, whether they are true or false. So, mathematics is incomplete in this sense, and that is certainly a remarkable insight. But it’s irrelevant for scientific practice. That’s because one can always extend the original set of axioms with another axiom that simply says whether or not the previously undecidable statement is true.

How would we deal with mathematical incompleteness in physics? Well, in physics, theories are sets of mathematical axioms, like the ones that Gödel’s theorem deals with, but that’s not all. Physical theories also come with a prescription for how to identify mathematical structures with measurable quantities. Physics, after all, is science, not mathematics. So, if we had a statement that was undecidable, we’d decide it by making a measurement, and then add an axiom that agrees with the outcome. Or, if the undecidable statement has no observable consequences, then we may as well ignore it.

Mathematical theorems about uncomputability are likewise scientifically irrelevant but for a different reason. The problem with uncomputability is that it always comes from something being infinite. However, nothing real is infinite, so these theorems do not actually apply to anything we could find in nature.

The most famous example is Turing’s “Halting Problem”. Think of any algorithm that computes something. It will either halt in finite time and give you a result, or not. Turing says, now let us try to find a meta-algorithm that can tell us whether any other algorithm halts or doesn’t halt. Then he proves that there is no such meta-algorithm which – and here is the important point – works for all possible algorithms. That’s an infinitely large class. In reality, we will never need an algorithm that answers infinitely many questions.
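Turing’s argument is easy to see in code. The following sketch is my own illustration, not anything from the essay; it assumes a hypothetical function halts() of exactly the kind the theorem rules out, and shows why no implementation of it can exist.

def halts(program_source: str, program_input: str) -> bool:
    # Hypothetical meta-algorithm: True iff the program stops on that input.
    # The whole point of Turing's theorem is that this cannot be written.
    raise NotImplementedError("no such algorithm exists -- that is the theorem")

def contrarian(source: str) -> None:
    # Do the opposite of whatever halts() predicts about this very call.
    if halts(source, source):
        while True:      # halts() said we stop, so loop forever instead
            pass
    # halts() said we loop forever, so we stop right here instead.

Feeding contrarian its own source code leads to a contradiction either way, so a halts() that works for every program and every input cannot exist. But note that “every program” is an infinitely large class, and no real application ever needs all of it.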

Another, not quite as well-known, example is Wang’s domino problem. Wang said, take any finite set of squares whose sides are colored, with the rule that two squares may only sit next to each other if the adjacent sides have the same color. Can you use them to fill up an infinite plane without gaps? It turns out that this question is undecidable for arbitrary sets of squares. But then, we never have to tile infinite planes.

We also know that most real numbers are uncomputable. They are uncomputable in the sense that there is no algorithm that will approximate them to a given, finite precision in finite time. But in science, we never deal with real numbers. We deal with numbers that have finitely many digits and that come with error bars. So this is another interesting mathematical curiosity, but it has no scientific relevance.

What about unpredictability? Well, quantum mechanics has an unpredictable element, as I explained in an earlier video. But this unpredictability is rather uninteresting, since it’s there by postulate. More interesting is the unpredictability in chaotic systems.

Some chaotic systems suffer from a peculiar type of unpredictability. Even if you know the initial conditions with arbitrary precision, you can only make predictions for a finite amount of time. Whether this actually happens for any system in the real world is not presently clear.

The most famous candidate for such an unpredictable equation is the Navier-Stokes equation that is used, among other things, to make the weather forecast. Whether this equation sometimes leads to unpredictable situations is one of the Clay Institute’s Millennium Problems, one of the hardest open mathematical problems today.

But let us assume for a moment this problem was solved and it turned out that, with the Navier-Stokes equation, it is indeed impossible, in some circumstances, to make predictions beyond a finite time. What would this tell us about nature? Not a terrible lot, because we already know that the Navier-Stokes equation is only an approximation. Really, gases and fluids are made of particles and should be described by quantum mechanics. And quantum mechanics does not have the chaotic type of unpredictability. Also, maybe quantum mechanics itself is not ultimately the right theory. So really, we can’t tell whether nature is predictable or not.

This is a general problem with applying impossibility-theorems to nature. We can never know whether the mathematical assumptions that we make in physics are really correct, or if not one day they will be replaced by something better. Physics is science, not math. We use math because it works, not because we think nature is math.

All of this makes it sound like undecidability, unpredictability, and uncomputability are mathematical problems and irrelevant for science. But that would be jumping to conclusions. They are relevant for science. But they are relevant not because they tell us something deep about nature. They are relevant because in practice we use theories that may have these properties. So the theorems tell us what we can do with our theories.

An example. Maybe the Navier-Stokes equation is not fundamentally the right equation for weather predictions. But it is the one that we currently use. And therefore, knowing when an unpredictable situation is coming up matters. Indeed, we might want to know when an unpredictability is coming up to avoid it. This is not really feasible for the weather, but it may be feasible for another partly chaotic system, that is, the plasma in nuclear fusion plants.

The plasma sometimes develops instabilities that can damage the containment vessel. Therefore, if an instability is coming on, the fusion process must be shut down quickly. This makes fusion very inefficient. If we’d better understand when unpredictable situations are about to occur, we may be able to prevent them from happening in the first place. This would also be useful, for example, for the financial system.

In summary, mathematical impossibility-theorems are relevant in science, not because they tell us something about nature itself, but because we use mathematics in practice to understand observations, and the theorems tell us what we can expect of our theories.

You can read all the essays in the contest over at the FQXi website. The deadline was moved to April 24, so you still have time to submit your essay!

Saturday, March 14, 2020

Coronavirus? I have nothing to add.

I keep getting requests from people that I comment on the coronavirus pandemic, disease models, or measures taken to contain and mitigate the outbreak. While I appreciate the faith you put into me, it also leaves me somewhat perplexed. I am not an epidemiologist; I’m a physicist. I have nothing original to say about coronavirus. Sure, I could tell you what I have taken away from other people’s writings – a social media strain of Chinese Whispers, if you wish – but I don’t think this aids information flow, it merely introduces mistakes.

I will therefore keep my mouth shut and just encourage you to get your information from more reliable sources. When it comes to public health, I personally prefer institutional and governmental websites over the mass media, largely because the media has an incentive to make the situation sound more dramatic than it really is. In Germany, I would suggest the Federal Ministry of Health (in English) and the Robert Koch Institute (in German). And regardless of where you live, the websites of the WHO are worth checking out.

I have not come across a prediction for the spread of the disease that looked remotely reliable, but Our World in Data has some neat visualization tools for the case numbers from the WHO (example below).



Having said that, what I can do is offer you a forum to commiserate. I got caught in the midst of organizing a workshop that was supposed to take place in May in the UK. We monitored the situation in Europe for the past weeks, but eventually had to conclude there’s no way around postponing the workshop.

Almost everyone from overseas had to cancel their participation because they weren’t allowed to travel or, if they had traveled, their health insurance wouldn’t have covered them had they contracted the virus. At present, only Italy is considered a high-risk country in Europe. But it’s likely that in the coming weeks several other European countries will be in a similar situation, which will probably bring more travel restrictions. Finally, most universities here in Germany and in the UK have for now issued a policy to cancel all kinds of meetings on their premises, so we might have ended up without a room for the event.

We presently don’t know when the workshop will take place, but hopefully some time in the fall.

I was supposed to be on a panel discussion in Zurich next week, but that was also cancelled. I am scheduled to give a public lecture in two weeks, which has not been cancelled. This comes as some surprise to me because it’s in the German state that, so far, has been hit the worst by the coronavirus. I kind of expect this to also be cancelled.

Where we live, most employers have asked employees to work from home if at all possible. Schools will be closed from next week until after the Easter break – for now. All large events have been cancelled. This puts us in a situation that many people are facing right now: We’ll be stuck at home with bored children. I am actually on vacation for the next two weeks, but it looks like it won’t be much of a vacation.

I’m not keen on contracting an infectious disease but believe sooner or later we’ll get it anyway. Even if there’s a vaccine, it may not work for variants of the original strain. We are lucky in that no one in our close family has a pre-existing condition that would put them at an elevated risk, though we worry of course about the grandparents. Shopping panic here has been moderate; the demand for disinfectants, soap and, yes, toilet paper seems to be abnormally high, but that’s about it. By and large I think the German government has been handling the situation well, and Trump’s travel ban is doing Europe a great favor because shit’s about to hit the fan over there.

In any case, I feel like there isn’t much we can do right now other than washing our hands and not coughing in other people’s faces. I have two papers to finish which will keep me busy for the next few weeks. Wherever you are, I hope you stay safe and healthy.

Update: As anticipated, I just got an email saying that the public lecture in April has also been cancelled.

Thursday, March 12, 2020

Essays, Elsewhere

Just a brief note that Tim and I have an essay up at Nautilus
How to Make Sense of Quantum Physics
Superdeterminism, a long-abandoned idea, may help us overcome the current crisis in physics.
BY SABINE HOSSENFELDER & TIM PALMER

Quantum mechanics isn’t rocket science. But it’s well on the way to take the place of rocket science as the go-to metaphor for unintelligible math. Quantum mechanics, you have certainly heard, is infamously difficult to understand. It defies intuition. It makes no sense. Popular science accounts inevitably refer to it as “strange,” “weird,” “mind-boggling,” or all of the above.

We beg to differ. Quantum mechanics is perfectly comprehensible. It’s just that physicists abandoned the only way to make sense of it half a century ago. Fast forward to today and progress in the foundations of physics has all but stalled. The big questions that were open then are still open today. We still don’t know what dark matter is, we still have not resolved the disagreement between Einstein’s theory of gravity and the standard model of particle physics, and we still do not understand how measurements work in quantum mechanics.

How can we overcome this crisis? We think it’s about time to revisit a long-forgotten solution, Superdeterminism, the idea that no two places in the universe are truly independent of each other. This solution gives us a physical understanding of quantum measurements, and promises to improve quantum theory. Revising quantum theory would be a game changer for physicists’ efforts to solve the other problems in their discipline and to find novel applications of quantum technology.

Head over to Nautilus to read the whole thing. It’s a great magazine, btw, and I warmly recommend you follow it.

If you found that interesting, you may also be interested in my contribution to this year’s essay contest from the Foundational Questions Institute on Undecidability, Uncomputability, and Unpredictability:
Math Matters
By Sabine Hossenfelder

Gödel taught us that mathematics is incomplete. Turing taught us some problems are undecidable. Lorenz taught us that, try as we might, some things will remain unpredictable. Are such theorems relevant for the real world or are they merely academic curiosities? In this essay, I first explain why one can rightfully be skeptical of the scientific relevance of mathematically proved impossibilities, but that, upon closer inspection, they are both interesting and important.

Saturday, March 07, 2020

Is Gravity a Force?

I was sick last week and lost like 10 pounds in 3 days, which brings up the question, what is weight?


Weight is actually the force that acts on your body due to the pull of gravity. Now, the gravitational force depends on the mass of the object that is generating the force, in this case, planet Earth. So you can lose weight by simply moving to the moon. Technically, therefore, I should have said I lost mass, not weight.

Why do we normally not make this distinction? That’s because in practice it doesn’t matter. Mass is just a number – a “scalar”, as physicists say – but weight, since it is a force, has a direction. So if you wanted to be very annoying, I mean very accurate, then whenever you referred to weight you’d have to say which direction you are talking about. The weight in the East direction? The weight in the North direction? Why doesn’t anyone ever mention this?

We don’t usually mention this because we all agree that we mean the force pulling down, and since we all know what we are talking about, we treat weight as if it were a scalar, omitting the direction. Moreover, the gravitational attraction downwards is pretty much the same everywhere on our planet, which means it is unnecessary to distinguish between weight and mass in everyday life. Technically, it’s correct: mass and weight are not the same thing. Practically, the difference doesn’t matter.
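For the record, the textbook relation between the two (not spelled out above) is

\[ \vec{W} = m\,\vec{g}, \qquad |\vec{g}| \approx 9.8\ \mathrm{m/s^2} \ \text{on Earth}, \quad \approx 1.6\ \mathrm{m/s^2} \ \text{on the moon}, \]

so moving to the moon changes your weight but leaves your mass untouched.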

But wait. Didn’t Einstein say that gravity is not a force to begin with? Ah, yes, there’s that.

Einstein’s theory of general relativity tells us that the effect we call gravity is different from normal forces. In General Relativity, space and time are not flat, like a sheet of paper, but curved, like the often-cited rubber sheet. This curvature is caused by all types of mass and energy, and the motion of mass and energy is in return affected by the curvature. This gives you a self-consistent, closed set of equations known as Einstein’s Field Equations. In Einstein’s theory, then, there is no force acting on masses. The masses are just navigating the curved space-time. We cannot see the curvature directly. We only see its effects. And those effects are what we call gravity.
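For the curious, this is what those field equations look like in standard textbook notation (the argument above does not depend on the details):

\[ R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} \]

The left side is the curvature of space-time, the right side the matter and energy that causes it.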

Now, Einstein’s theory of General Relativity rests on the equivalence principle. The equivalence principle says that locally the effects of gravity are the same as the effects of acceleration in flat space. “Locally” here roughly means “nearby”. And acceleration in flat space is described by Einstein’s theory of Special Relativity. So, with the equivalence principle, you can generalize Special Relativity to General Relativity. Special Relativity is the special case in which space-time is flat, and there is no gravity.

The equivalence principle was well illustrated by Einstein himself. He said, imagine you are in an elevator that is being pulled up at constant acceleration. There is one force acting on you, which is the floor pushing up. Now, Einstein says, gravity has the very same effect without anything pulling up the elevator. And again, there is only one force acting on you, which is the floor pushing up.

If there was nothing pulling the elevator (so, if there was no acceleration) you would feel no force at all. In General Relativity, this corresponds to freely falling in a gravitational field. That’s the key point of Einstein’s insight: If you freely fall, there is no force acting on you. And in that, Einstein and Newton differ. Newton would say, if you jump off a roof, the force of gravity is pulling you down. Einstein says, nope, if you jump off a roof, you take away the force that was pushing you up.

Again, however, the distinction between the two cases is rather technical and one we do not have to bother with in daily life. That is because in daily life we do not need to use the full blown apparatus of General Relativity. Newton’s theory works just fine, for all practical purposes, unless possibly, you plan to visit a black hole.

Sunday, March 01, 2020

How good is the evidence for Dark Energy?

I spoke with Professor Subir Sarkar from the University of Oxford about his claim that dark energy might not exist and that the so-called Hubble tension isn’t one.

Saturday, February 29, 2020

14 Years BackRe(Action)

[Image: Scott McLeod/Flickr]
14 years ago, I was a postdoc in Santa Barbara, in a tiny corner office where the windows wouldn't open, in a building that slightly swayed each time one of the frequent mini-earthquakes shook up California. I had just published my first blogpost. It happened to be about the possibility that the Large Hadron Collider, which was not yet in operation, would produce tiny black holes and inadvertently kill all of us. The topic would soon rise to attention in the media and thereby mark my entry into the world of science communication. I was well prepared: Black holes at the LHC were the topic of my PhD thesis.

A few months later, I got married.

Later that same year, Lee Smolin's book "The Trouble With Physics" was published, coincidentally at almost the same time I moved to Canada and started my new position at Perimeter Institute. I had read an early version of the manuscript and published one of the first online reviews. Peter Woit's book "Not Even Wrong" appeared at almost the same time and kicked off what later became known as "The String Wars", though I've always found the rather militant term somewhat inappropriate.

Time marched on and I kept writing, through my move to Sweden, my first pregnancy and the following miscarriage, the second pregnancy, the twins’ birth, parental leave, my suffering through 5 years of a 3000 km commute while trying to raise two kids, and, in late 2015, my move back to Germany. Then, in 2018, the publication of my first book.

The loyal readers of this blog will have noticed that in the past year I have shifted weight from Blogger to YouTube. The reason is that the way search engine algorithms and the blogosphere have evolved, it has become basically impossible to attract new audiences to a blog. Here on Blogger, I feel rather stuck on the topics I have originally written about, mostly quantum gravity and particle physics, while meanwhile my interests have drifted more towards astrophysics, quantum foundations, and the philosophy of physics. YouTube's algorithm is certainly not perfect, but it serves content to users that may be interested in the topic of a video, regardless of whether they've previously heard of me.

I have to admit that personally I still prefer writing over videos. Not only because it's less time-consuming, but also because I don't particularly like either my voice or my face. But then, the average number of people who watch my videos has quickly surpassed the number of those who typically read my blog, so I guess I am doing okay.

On this occasion I want to thank all of you for spending some time with me, for your feedback and comments and encouragement. I am especially grateful to those of you who have on occasion sent a donation my way. I am not entirely sure where this blog will be going in the future, but stay around and you will find out. I promise it won't be boring.

Friday, February 28, 2020

Quantum Gravity in the Lab? The Hype Is On.


Quanta Magazine has an article by Philip Ball titled “Wormholes Reveal a Way to Manipulate Black Hole Information in the Lab”. It’s about using quantum simulations to study the behavior of black holes in Anti-De Sitter space, that is, a space with a negative cosmological constant. A quantum simulation is a collection of particles with specifically designed interactions that can mimic the behavior of another system. To briefly remind you, we do not live in Anti-De Sitter space. For all we know, the cosmological constant in our universe is positive. And no, the two cases are not remotely similar.

It’s an interesting topic in principle, but unfortunately the article by Ball is full of statements that gloss over the not very subtle fact that we do not live in Anti-De Sitter space. We can read there for example:
“In principle, researchers could construct systems entirely equivalent to wormhole-connected black holes by entangling quantum circuits in the right way and teleporting qubits between them.”
The correct statement would be:
“Researchers could construct systems whose governing equations are in certain limits equivalent to those governing black holes in a universe we do not inhabit.”
Further, needless to say, a collection of ions in the laboratory is not “entirely equivalent” to a black hole. For starters, that is because the ions are made of other particles, which are yet again made of other particles, none of which has any correspondence in the black hole analogy. Also, in case you’ve forgotten, we do not live in Anti-De Sitter space.

Why do physicists even study black holes in Anti-De Sitter space? To make a long story short: Because they can. They can, both because they have an idea how the math works and because they can get paid for it.

Now, there is nothing wrong with using methods obtained from the AdS/CFT correspondence to calculate the behavior of many-particle systems. Indeed, I think that’s a neat idea. However, it is patently false to raise the impression that this tells us anything about quantum gravity, where by “quantum gravity” I mean the theory that resolves the inconsistency between the Standard Model of particle physics and General Relativity in our universe. I.e., a theory that actually describes nature. We have no reason whatsoever to think that the AdS/CFT correspondence tells us something about quantum gravity in our universe.

As I explained in this earlier post, it is highly implausible that the results from AdS carry over to flat space or to space with a positive cosmological constant because the limit is not continuous. You can of course simply take the limit ignoring its convergence properties, but then the theory you get has no obvious relation to General Relativity.

Let us have a look at the paper behind the article. We can read there in the introduction:
“In the quest to understand the quantum nature of spacetime and gravity, a key difficulty is the lack of contact with experiment. Since gravity is so weak, directly probing quantum gravity means going to experimentally infeasible energy scales.”
This is wrong, and it demonstrates that the authors are not familiar with the phenomenology of quantum gravity. Large deviations from the semi-classical limit can occur at small energy scales. The reason is, rather trivially, that large masses in quantum superpositions should have gravitational fields in quantum superpositions. No large energies are necessary for that.

If you could, for example, put a billiard ball into a superposition of location you should be able to measure what happens to its gravitational field. This is unfeasible, but not because it involves high energies. It’s infeasible because decoherence kicks in too quickly to measure anything.

Here is the rest of the first paragraph of the paper. I have in bold face added corrections that any reviewer should have insisted on:
“However, a consequence of the holographic principle [3, 4] and its concrete realization in the AdS/CFT correspondence [5–7] (see also [8]) is that non-gravitational systems with sufficient entanglement may exhibit phenomena characteristic of quantum gravity in a space with a negative cosmological constant. This suggests that we may be able to use table-top physics experiments to indirectly probe quantum gravity in universes that we do not inhabit. Indeed, the technology for the control of complex quantum many-body systems is advancing rapidly, and we appear to be at the dawn of a new era in physics—the study of quantum gravity in the lab, except that, by the methods described in this paper, we cannot actually test quantum gravity in our universe. For this, other experiments are needed, which we will however not even mention.

The purpose of this paper is to discuss one way in which quantum gravity can make contact with experiment, if you, like us, insist on studying quantum gravity in fictional universes that for all we know do not exist.”

I pointed out that these black holes that string theorists deal with have nothing to do with real black holes in an article I wrote for Quanta Magazine last year. It was also the last article I wrote for them.

Thursday, February 20, 2020

The 10 Most Important Physics Effects

Today I have a count-down of the 10 most important effects in physics that you should all know about.


10. The Doppler Effect

The Doppler effect is the change in frequency of a wave when the source moves relative to the receiver. If the source is approaching, the wavelength appears shorter and the frequency higher. If the source is moving away, the wavelength appears longer and the frequency lower.
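For concreteness, here is the textbook formula (added for reference; the video states it only in words) for a source moving at speed v_s through a medium in which the wave travels at speed v:

\[ f_{\rm observed} = f_{\rm source}\, \frac{v}{v \mp v_s}, \]

with the minus sign for an approaching source (higher frequency) and the plus sign for a receding one (lower frequency).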

The most common example of the Doppler effect is that of an approaching ambulance, where the pitch of the siren is higher when it moves towards you than when it moves away from you.

But the Doppler effect does not only happen for sound waves; it also happens to light which is why it’s enormously important in astrophysics. For light, the frequency is the color, so the color of an approaching object is shifted to the blue and that of an object moving away from you is shifted to the red. Because of this, we can for example calculate our velocity relative to the cosmic microwave background.

The Doppler effect is named after the Austrian physicist Christian Doppler and has nothing to do with the German word Doppelgänger.

9. The Butterfly Effect

Even a tiny change, like the flap of a butterfly’s wings, can make a big difference for the weather next Sunday. This is the butterfly effect as you have probably heard of it. But Edward Lorenz actually meant something much more radical when he spoke of the butterfly effect. He meant that for some non-linear systems you can only make predictions for a limited amount of time, even if you can measure the initial state to arbitrary accuracy. I explained this in more detail in my earlier video.
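To make this tangible, here is a small numerical sketch of my own, using the standard Lorenz-1963 system with its textbook parameters (nothing specific to actual weather models): two trajectories that start a billionth apart end up in completely different places.

import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of the Lorenz equations.
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # a perturbation of one part in a billion

for step in range(40001):
    if step % 10000 == 0:
        print(f"t = {step * 0.001:4.0f}  separation = {np.linalg.norm(a - b):.2e}")
    a, b = lorenz_step(a), lorenz_step(b)

The separation grows roughly exponentially until it is as large as the attractor itself, at which point the two “forecasts” have nothing to do with each other anymore, no matter how tiny the initial difference was.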

8. The Meissner-Ochsenfeld Effect

The Meissner-Ochsenfeld effect is the impossibility of making a magnetic field enter a superconductor. It was discovered by Walther Meissner and his postdoc Robert Ochsenfeld in 1933. Thanks to this effect, if you try to place a superconductor on a magnet, it will hover above the magnet because the magnetic field lines cannot enter the superconductor. I assure you that this has absolutely nothing to do with Yogic flying.

7. The Aharonov–Bohm Effect

Okay, I admit this is not a particularly well-known effect, but it should be. The Aharonov-Bohm effect says that the wave-function of a charged particle in an electromagnetic field obtains a phase shift from the potential of the background field.

I know this sounds abstract, but the relevant point is that it’s the potential that causes the phase, not the field. In electrodynamics, the potential itself is normally not observable. But this phase shift in the Aharonov-Bohm Effect can and has been observed in interference patterns. And this tells us that the potential is not merely a mathematical tool. Before the Aharonov–Bohm effect one could reasonably question the physical reality of the potential because it was not observable.

6. The Tennis Racket Effect

If you throw any three-dimensional object with a spin, then the spin around the shortest and the longest axes will be stable, but the spin around the intermediate third axis will not. The typical example for the spinning object is a tennis racket, hence the name. It’s also known as the intermediate axis theorem or the Dzhanibekov effect. You see a beautiful illustration of the instability in this little clip from the International Space Station.

5. The Hall Effect

If you bring a conducting plate into a magnetic field, then the magnetic field will affect the motion of the electrons in the plate. In particular, if the plate is orthogonal to the magnetic field lines and a current runs through it, a voltage builds up between opposing ends of the plate, and this voltage can be measured to determine the strength of the magnetic field. This effect is named after Edwin Hall.
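For reference, the textbook expression for the Hall voltage across a thin plate is

\[ V_H = \frac{I\, B}{n\, q\, t}, \]

where I is the current through the plate, B the magnetic field, n the density of charge carriers, q their charge, and t the thickness of the plate. Measuring V_H at a known current therefore gives you the field strength B.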

If the plate is very thin, the temperature very low, and the magnetic field very strong, you can also observe that the conductivity makes discrete jumps, which is known as the quantum Hall effect.

4. The Hawking Effect

Stephen Hawking showed in the early 1970s that black holes emit thermal radiation with a temperature that is inversely proportional to the black hole’s mass. This Hawking effect is a consequence of the relativity of the particle number. An observer falling into a black hole would not measure any particles and would think the black hole is surrounded by vacuum. But an observer far away from the black hole would think the horizon is surrounded by particles. This can happen because in general relativity, what we mean by a particle depends on the motion of the observer, much like the passage of time does.
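The textbook formula for the Hawking temperature of a black hole of mass M is (added for reference, not derived in the video)

\[ T_H = \frac{\hbar\, c^3}{8\pi\, G\, M\, k_B}, \]

which for a black hole of one solar mass comes out to roughly 60 nanokelvin, far colder than the cosmic microwave background and hence unobservable in practice.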

A closely related effect is the Unruh effect named after Bill Unruh, which says that an accelerated observer in flat space will measure a thermal distribution of particles with a temperature that depends on the acceleration. Again that can happen because the accelerated observer’s particles are not the same as the particles of an observer at rest.

3. The Photoelectric Effect

When light falls on a plate of metal, it can kick out electrons from their orbits around atomic nuclei. This is called the “photoelectric effect”. The surprising thing about this is that the frequency of the light needs to be above a certain threshold. Just what the threshold is depends on the material, but if the frequency is below the threshold, it does not matter how intense the light is, it will not kick out electrons.

The photoelectric effect was explained in 1905 by Albert Einstein who correctly concluded that it means the light must be made of quanta whose energy is proportional to the frequency of the light.
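In formula form (the standard textbook relation, added here for reference): a photon of frequency f carries the energy hf, and electrons only come out if hf exceeds the work function W of the material; the fastest ejected electrons then have the kinetic energy

\[ E_{\rm kin,\,max} = h f - W. \]

Increasing the intensity only increases the number of photons, not the energy per photon, which is why intensity alone never gets you past the threshold.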

2. The Casimir Effect

Everybody knows that two metal plates will attract each other if one plate is positively charged and the other one negatively charged. But did you know the plates also attract each other if they are uncharged? Yes, they do!

This is the Casimir effect, named after Hendrik Casimir. It comes from quantum fluctuations, which exert a pressure even in vacuum. This pressure is lower between the plates than outside of them, so the two plates are pushed towards each other. However, the force from the Casimir effect is very weak and can be measured only at very short distances.
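For two ideal, parallel, uncharged plates at separation d, the textbook result (added for reference) is an attractive pressure of magnitude

\[ \frac{F}{A} = \frac{\pi^2 \hbar c}{240\, d^4}, \]

so halving the distance increases the force sixteen-fold, which is why the effect only becomes measurable at separations well below a micrometer.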

1. The Tunnel Effect

Definitely my favorite effect. Quantum effects allow a particle that is trapped in a potential to escape. This would not be possible without quantum effects because the particle just does not have enough energy to escape. However, in quantum mechanics the wave-function of the particle can leak out of the potential, and this means that there is a small, but nonzero, probability that a quantum particle can do the seemingly impossible.
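To put a number on “small but nonzero” (a standard textbook estimate, added for reference): for a rectangular barrier of height V and width L, a particle of mass m and energy E < V tunnels through with a probability of approximately

\[ T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m\,(V - E)}}{\hbar}, \]

so the probability drops exponentially with both the width and the height of the barrier.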

Saturday, February 15, 2020

The Reproducibility Crisis: An Interview with Prof. Dorothy Bishop

On my recent visit to Great Britain (the first one post-Brexit) I had the pleasure of talking to Dorothy Bishop. Bishop is Professor of Psychology at the University of Oxford and has been a leading force in combating the reproducibility crisis in her and other disciplines. You find her on twitter under the handle @deevybee . The comment for Nature magazine which I mention in the video is here.

Monday, February 10, 2020

Guest Post: “Undecidability, Uncomputability and the Unity of Physics. Part 2.” by Tim Palmer

[This is the second part of Tim’s guest contribution. The first part is here.]



In this second part of my guest post, I want to discuss how the concepts of undecidability and uncomputability can lead to a novel interpretation of Bell’s famous theorem. This theorem states that under seemingly reasonable conditions, a deterministic theory of quantum physics – something Einstein believed in passionately – must satisfy a certain inequality which experiment shows is violated.

These reasonable conditions, broadly speaking, describe the concepts of causality and freedom to choose experimental parameters. The issue I want to discuss is whether the way these conditions are formulated mathematically in Bell’s Theorem actually captures the physics that supposedly underpins them.

The discussion here and in the previous post summarises the essay I recently submitted to the FQXi essay competition on undecidability and uncomputability.

For many, the notion that we have some freedom in our actions and decisions seems irrefutable. But how would we explain this to an alien, or indeed a computer, for whom free will is a meaningless concept? Perhaps we might say that we are free because we could have done otherwise. This invokes the notion of a counterfactual world: even though we in fact did X, we could have done Y.

Counterfactuals also play an important role in describing the notion of causality. Imagine throwing a stone at a glass window. Was the smashed glass caused by my throwing the stone? Yes, I might say, because if I hadn’t thrown the stone, the window wouldn’t have broken.

However, there is an alternative way to describe these notions of free will and causality without invoking counterfactual worlds. I can just as well say that free will denotes an absence of constraints that would otherwise prevent me from doing what I want to do. Or I can use Newton’s laws of motion to determine that a stone with a certain mass, projected at a certain velocity, will hit the window with a momentum guaranteed to shatter the glass. These latter descriptions make no reference to counterfactuals at all; instead the descriptions are based on processes occurring in space-time (e.g. associated with the neurons of my brain or projectiles in physical space).

What has all this got to do with Bell’s Theorem? I mentioned above the need for a given theory to satisfy “certain conditions” in order for it to be constrained by Bell’s inequality (and hence be inconsistent with experiment). One of these conditions, the one linked to free will, is called Statistical Independence. Theories which violate this condition are called Superdeterministic.

Superdeterministic theories are typically excoriated by quantum foundations experts, not least because the Statistical Independence condition appears to underpin scientific methodology in general.

For example, consider a source of particles emitting 1000 spin-1/2 particles. Suppose you measure the spin of 500 of them along one direction and 500 of them along a different direction. Statistical Independence guarantees that the measurement statistics (e.g. the frequency of spin-up measurements) will not depend on the particular way in which the experimenter chooses to partition the full ensemble of 1000 particles into the two sub-ensembles of 500 particles.
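As a toy numerical illustration of what Statistical Independence buys you (a sketch written for this post, not part of the actual model): if the experimenter’s choice of how to split the ensemble is made independently of whatever hidden properties the particles carry, then the two sub-ensembles are statistically equivalent up to ordinary sampling noise.

import numpy as np

rng = np.random.default_rng(42)
hidden = rng.random(1000)            # hidden variables carried by 1000 particles

# The partition into two sub-ensembles is chosen independently of `hidden`:
order = rng.permutation(1000)
sub_a, sub_b = hidden[order[:500]], hidden[order[500:]]

# Any statistic of the hidden variables -- here, the fraction of particles that
# would come out "spin up" under a toy rule -- then agrees between the two
# sub-ensembles up to sampling fluctuations.
print((sub_a > 0.5).mean(), (sub_b > 0.5).mean())

Violating Statistical Independence means the split is allowed to correlate with the hidden variables; the point developed below is that this correlation need only show up for counterfactual measurement choices, never for the measurements that are actually performed.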

If you violate Statistical Independence, the experts say, you are effectively invoking some conspiratorial prescient daemon who could, unknown to the experimenter, preselect particles for the particular measurements the experimenter chooses to make – or, even worse perhaps, could subvert the mind of the experimenter when deciding which type of measurement to perform on a given particle. Effectively, violating Statistical Independence turns experimenters into mindless zombies! No wonder experimentalists hate Superdeterministic theories of quantum physics!!

However, the experts miss a subtle but crucial point here: whilst imposing Statistical Independence guarantees that real-world sub-ensembles are statistically equivalent, violating Statistical Independence does not guarantee that real-world sub-ensembles are not statistically equivalent. In particular it is possible to violate Statistical Independence in such a way that it is only sub-ensembles of particles subject to certain counterfactual measurements that may be statistically inequivalent to the corresponding sub-ensembles with real-world measurements.

In the example above, a sub-ensemble of particles subject to a counterfactual measurement would be associated with the first sub-ensemble of 500 particles subject to the measurement direction applied to the second sub-ensemble of 500 particles. It is possible to violate Statistical Independence when comparing this counterfactual sub-ensemble with the real-world equivalent, without violating the statistical equivalence of the two corresponding sub-ensembles measured along their real-world directions.

However, for this idea to make any theoretical sense at all, there has to be some mathematical basis for asserting that sub-ensembles with real-world measurements can be different to sub-ensembles with counterfactual-world measurements. This is where uncomputable fractal attractors play a key role.

It is worth keeping an example of a fractal attractor in mind here. The Lorenz fractal attractor, discussed in my first post, is a geometric representation in state space of fluid motion in Newtonian space-time.

The Lorenz attractor.
[Image Credits: Markus Fritzsch.]

As I explained in my first post, the attractor is uncomputable in the sense that there is no algorithm which can decide whether a given point in state space lies on the attractor (in exactly the same sense that, as Turing discovered, there is no algorithm for deciding whether a given computer program will halt for given input data). However, as I lay out in my essay, the differential equations for the fluid motion in space-time associated with the Lorenz attractor are themselves solvable by algorithm to arbitrary accuracy and hence are computable. This dichotomy (between state space and space-time) is extremely important to bear in mind below.

With this in mind, suppose the universe itself evolves on some uncomputable fractal subset of state space, such that the corresponding evolution equations for physics in space-time are computable. In such a model, Statistical Independence will be violated for sub-ensembles if the corresponding counterfactual measurements take states of the universe off the fractal subset (since such counterfactual states have probability of occurrence equal to zero by definition).

In the model I have developed this always occurs when considering counterfactual measurements such as those in Bell’s Theorem. (This is a nontrivial result and is the consequence of number-theoretic properties of trigonometric functions.) Importantly, in this theory, Statistical Independence is never violated when comparing two sub-ensembles subject to real-world measurements such as occurs in analysing Bell’s Theorem.

This is all a bit mind numbing, I do admit. However, the bottom line is that I believe that the mathematical definitions of free choice and causality used to understand quantum entanglement are much too general – in particular they admit counterfactual worlds as physical in a completely unconstrained way. I have proposed alternative definitions of free choice and causality which strongly constrain counterfactual states (essentially they must lie on the fractal subset in state space), whilst leaving untouched descriptions of free choice and causality based only on space-time processes. (For the experts, in the classical limit of this theory, Statistical Independence is not violated for any counterfactual states.)

With these alternative definitions, it is possible to violate Bell’s inequality in a deterministic theory which respects free choice and local causality, in exactly the way it is violated in quantum mechanics. Einstein may have been right after all!

If we can explain entanglement deterministically and causally, then synthesising quantum and gravitational physics may have become easier. Indeed, it is through such synthesis that experimental tests of my model may eventually come.

In conclusion, I believe that the uncomputable fractal attractors of chaotic systems may provide a key geometric ingredient needed to unify our theories of physics.

My thanks to Sabine for allowing me the space on her blog to express these points of view.

Saturday, February 08, 2020

Philosophers should talk more about climate change. Yes, philosophers.


I never cease to be shocked – shocked! – how many scientists don’t know how science works and, worse, don’t seem to care about it. Most of those I have to deal with still think Popper was right when he claimed falsifiability is both necessary and sufficient to make a theory scientific, even though this position has logical consequences they’d strongly object to.

Trouble is, if falsifiability was all it took, then arbitrary statements about the future would be scientific. I should, for example, be able to publish a paper predicting that tomorrow the sky will be pink and next Wednesday my cat will speak French. That’s totally falsifiable, yet I hope we all agree that if we’d let such nonsense pass as scientific, science would be entirely useless. I don’t even have a cat.

As the contemporary philosopher Larry Laudan politely put it, Popper’s idea of telling science from non-science by falsifiability “has the untoward consequence of countenancing as ‘scientific’ every crank claim which makes ascertainably false assertions.” Which is why the world’s cranks love Popper.

But you are not a crank, oh no, not you. And so you surely know that almost all of today’s philosophers of science agree that falsification is not a sufficient criterion of demarcation (though they disagree on whether it is necessary). Luckily, you don’t need to know anything about these philosophers to understand today’s post because I will not attempt to solve the demarcation problem (which, for the record, I don’t think is a philosophical question). I merely want to clarify just when it is scientifically justified to amend a theory whose predictions ran into tension with new data. And the only thing you need to know to understand this is that science cannot work without Occam’s razor.

Occam’s razor tells you that among two theories that describe nature equally well you should take the simpler one. Roughly speaking, it means you must discard superfluous assumptions. Occam’s razor is important because without it we would be allowed to add all kinds of unnecessary clutter to a theory just because we like it. We would be permitted, for example, to add the assumption “all particles were made by god” to the standard model of particle physics. You see right away how this isn’t going well for science.

Now, the phrases that two theories “describe nature equally well” and that you should “take the simpler one” are somewhat vague. To make this prescription operationally useful you’d have to quantify just what it means, using suitable statistical measures. We can then quibble about just which statistical measure is the best, but that’s somewhat beside the point here, so let me instead come back to the relevance of Occam’s razor.

We just saw that it’s unscientific to make assumptions which are unnecessary to explain observations and don’t make a theory any simpler. But physicists get this wrong all the time, and some have made a business out of getting it wrong. They invent particles which make theories more complicated and are of no help to explain existing data. They claim this is science because these theories are falsifiable. But the new particles were unnecessary in the first place, so their ideas are dead on arrival, killed by Occam’s razor.

If you still have trouble seeing why adding unnecessary details to established theories is unsound scientific methodology, imagine that scientists in other disciplines proceeded the way particle physicists do. We’d have biologists writing papers about flying pigs and holding conferences debating how flying pigs poop because, who knows, we might discover flying pigs tomorrow. Sounds ridiculous? Well, it is ridiculous. But that’s the same “scientific methodology” which has become common in the foundations of physics. The only difference between elaborating on flying pigs and supersymmetric particles is the amount of mathematics. And math certainly comes in handy for particle physicists because it prevents mere mortals from understanding just what the physicists are up to.

But I am not telling you this to bitch about supersymmetry; that would be beating a dead horse. I am telling you this because I have recently had to deal with a lot of climate change deniers (thanks so much, Tim). And many of these deniers, believe it or not, think I must be a denier too because, drum roll please, I am an outspoken critic of inventing superfluous particles.

Huh, you say. I hear you. It took me a while to figure out what’s with these people, but I believe I now understand where they’re coming from.

You have probably heard the common deniers’ complaint that climate scientists adapt models when new data comes in. That is supposedly unscientific because, here it comes, it’s exactly the same thing that all these physicists do each time their hypothetical particles are not observed! They just fiddle with the parameters of the theory to evade experimental constraints and to keep their pet theories alive. But Popper already said you shouldn’t do that. Then someone yells “Epicycles!” And so, the deniers conclude, climate scientists are as wrong as particle physicists and clearly one shouldn’t listen to either.

But the deniers’ argument merely demonstrates they know even less about scientific methodology than particle physicists. Revising a hypothesis when new data comes in is perfectly fine. In fact, it is what you expect good scientists to do.

The more and the better data you have, the higher the demands on your theory. Sometimes this means you actually need a new theory. Sometimes you have to adjust one or the other parameter. Sometimes you find an actual mistake and have to correct it. But more often than not it just means you neglected something that better measurements are sensitive to and you must add details to your theory. And this is perfectly fine as long as adding details results in a model that explains the data better than before, and does so not just because you now have more parameters. Again, there are statistical measures to quantify in which cases adding parameters actually makes a better fit to data.
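
To make this concrete, here is a minimal sketch of one such measure, the Akaike Information Criterion, applied to synthetic data; the data, the polynomial models, and the choice of criterion are purely illustrative and not what any particular climate model uses.

```python
# Minimal sketch: does adding parameters actually improve a fit?
# Compare polynomial fits of increasing degree with the Akaike Information
# Criterion, AIC = 2k - 2 ln(L); lower is better, and extra parameters are
# penalised unless they genuinely improve the likelihood.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)   # synthetic data, truly linear

def aic_polyfit(x, y, degree):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n = y.size
    k = degree + 2                      # polynomial coefficients + noise variance
    sigma2 = np.mean(residuals ** 2)
    log_likelihood = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return 2 * k - 2 * log_likelihood

for degree in (1, 2, 5):
    print(f"degree {degree}: AIC = {aic_polyfit(x, y, degree):.1f}")
# For data that is genuinely linear, the linear fit will typically have the
# lowest AIC: the higher-degree fits reduce the residuals slightly, but not
# enough to pay the penalty for their extra parameters.
```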

Indeed, adding epicycles to make the geocentric model of the solar system fit with observations was entirely proper scientific methodology. It was correcting a hypothesis that ran into conflict with increasingly better observations. Astronomers of the time could have proceeded this way until they noticed there is a simpler way to calculate the same curves, which is by using elliptic motions around the sun rather than cycles around cycles around the Earth. Of course this is not what historically happened, but epicycles in and by themselves are not unscientific, they’re merely parametrically clumsy.

What scientists should not do, however, is to adjust details of a theory that were unnecessary in the first place. Kepler for example also thought that the planets play melodies on their orbits around the sun, an idea that was rightfully abandoned because it explains nothing.

To name another example, adding dark matter and dark energy to the cosmological standard model in order to explain observations is sound scientific practice. These are both simple explanations that vastly improve the fit of the theory to observation. What is not sound scientific methodology is then making these theories more complicated than they need to be, e.g. by replacing dark energy with complicated scalar fields even though there is no observation that calls for it, or by inventing details about the particles that make up dark matter even though these details are irrelevant to fit existing data.

But let me come back to the climate change deniers. You may call me naïve, and I’ll take that, but I believe most of these people are genuinely confused about how science works. It’s of little use to throw evidence at people who don’t understand how scientists make evidence-based predictions. When it comes to climate change, therefore, I think we would all benefit if philosophers of science were given more airtime.

Thursday, February 06, 2020

Ivory Tower [I've been singing again]

I caught a cold and didn't get around to recording a new physics video this week. Instead I finished a song that I wrote some weeks ago. Enjoy!

Monday, February 03, 2020

Guest Post: “Undecidability, Uncomputability and the Unity of Physics. Part 1.” by Tim Palmer

[Tim Palmer is a Royal Society Research Professor in Climate Physics at the University of Oxford, UK. He is only half as crazy as it seems.]


[Screenshot from Tim’s public lecture at Perimeter Institute]


Our three great theories of 20th Century physics – general relativity theory, quantum theory and chaos theory – seem incompatible with each other.

The difficulty of combining general relativity and quantum theory into a common theory of “quantum gravity” is legendary; some of our greatest minds have despaired – and still despair – over it.

Superficially, the links between quantum theory and chaos appear to be a little stronger, since both are characterised by unpredictability (in measurement and prediction outcomes respectively). However, the Schrödinger equation is linear and the dynamical equations of chaos are nonlinear. Moreover, in the common interpretation of Bell’s inequality, a chaotic model of quantum physics, since it is deterministic, would be incompatible with Einstein’s notion of relativistic causality.

Finally, although the dynamics of general relativity and chaos theory are both nonlinear and deterministic, it is difficult to even make sense of chaos in the space-time of general relativity. This is because the usual definition of chaos is based on the notion that nearby initial states can diverge exponentially in time. However, speaking of an exponential divergence in time depends on a choice of time-coordinate. If we logarithmically rescale the time coordinate, the defining feature of chaos disappears. Trouble is, in general relativity, the underlying physics must not depend on the space-time coordinates.
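
To see the coordinate dependence explicitly: if nearby trajectories separate as

\[ \delta(t) = \delta_0\, e^{\lambda t}, \]

then in a time coordinate related logarithmically to the old one, \( \tau = e^{\lambda t} \) (i.e. \( t = \ln\tau/\lambda \)), the very same separation reads

\[ \delta(\tau) = \delta_0\, \tau , \]

which grows only linearly, and the exponent \( \lambda \) has disappeared from the description.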

So, do we simply have to accept that, “What God hath put asunder, let no man join together”? I don’t think so. A few weeks ago, the Foundational Questions Institute put out a call for essays on the topic of “Undecidability, Uncomputability and Unpredictability”. I have submitted an essay in which I argue that undecidability and uncomputability may provide a new framework for unifying these theories of 20th Century physics. I want to summarize my argument in this and a follow-on guest post.

To start, I need to say what undecidability and uncomputability are in the first place. The concepts go back to the work of Alan Turing who in 1936 showed that no algorithm exists that will take as input a computer program (and its input data), and output 0 if the program halts and 1 if the program does not halt. This “Halting Problem” is therefore undecidable by algorithm. So, a key way to know whether a problem is algorithmically undecidable – or equivalently uncomputable – is to see if the problem is equivalent to the Halting Problem.
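
For readers who like to see the argument in code, here is the standard diagonal construction written as a Python sketch; the function halts is the hypothetical decider, which cannot actually be implemented – that is the whole point.

```python
# Turing's diagonal argument, sketched in Python. Assume, for contradiction,
# that halts(program, data) always returns True/False depending on whether
# program(data) eventually halts.
def halts(program, data):
    """Hypothetical halting decider -- cannot actually exist."""
    raise NotImplementedError

def paradox(program):
    # Run forever exactly when 'program' is predicted to halt on its own code.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Now ask what paradox(paradox) does. If it halts, then halts(paradox, paradox)
# returned True, so it loops forever. If it loops forever, then halts(...)
# returned False, so it halts. Either way we get a contradiction, hence no
# such function halts can exist.
```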

Let’s return to thinking about chaotic systems. As mentioned, these are deterministic systems whose evolution is effectively unpredictable (because the evolution is sensitive to the starting conditions). However, what is relevant here is not so much this property of unpredictability, but the fact that there is a class of chaotic systems where, no matter what initial condition you start from, eventually (technically after an infinite time) the state evolves on a fractal subset of state space, sometimes known as a fractal attractor.

One defining characteristic of a fractal is that its dimension is not a simple integer (like that of a one-dimensional line or the two-dimensional surface of a sphere). Now, the key result I need is a theorem that there is no algorithm that will take as input some point x in state space, and halt if that point belongs to a set with fractional dimension. This implies that the fractal attractor A of a chaotic system is uncomputable and the proposition “x belongs to A” is algorithmically undecidable.
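
As a standard textbook example of a non-integer dimension (not Tim’s attractor, just an illustration): at the n-th stage of its construction, the middle-thirds Cantor set consists of \( N = 2^n \) pieces of size \( \varepsilon = 3^{-n} \), so its box-counting dimension is

\[ D = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)} = \frac{\log 2}{\log 3} \approx 0.63 , \]

somewhere between that of a point and that of a line.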

How does this help unify physics?

Firstly, defining chaos in terms of the geometry of its fractal attractor (e.g. through the fractional dimension of the attractor) is a coordinate-independent and hence more relativistic way to characterise chaos than defining it in terms of exponential divergence of nearby trajectories. Hence the uncomputable fractal attractor provides a way to unify general relativity and chaos theory.

That was easy! The rest is not so easy which is why I need two guest posts and not one!

When it comes to combining chaos theory with quantum mechanics, the first step is to realize that the linearity of the Schrödinger equation is not at all incompatible with the nonlinearity of chaos.

To understand this, consider an ensemble of integrations of a particular chaotic model based on the Lorenz equations – see Fig 1. These Lorenz equations describe fluid dynamical motion, but the details need not concern us here. The fractal Lorenz attractor is shown in the background in Fig 1. These ensembles can be thought of as describing the evolution of probability – something of practical value when we don’t know the initial conditions precisely (as is the case in weather forecasting).

Fig 1: Contours of probability, based on ensembles of integrations of the Lorenz equations, shown evolving in state space for three different initial conditions, with the Lorenz attractor in the background.

In the first panel in Fig 1, small uncertainties do not grow much and we can therefore be confident in the predicted evolution. In the third panel, small uncertainties grow explosively, meaning we can have little confidence in any specific prediction. The second panel is somewhere in between.
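
As a rough illustration of what such an ensemble looks like in practice, here is a minimal sketch using the standard Lorenz-63 parameters; the initial point, ensemble size and integration details are arbitrary choices and not those used for Fig 1.

```python
# Minimal ensemble of Lorenz-63 integrations: evolve many slightly perturbed
# copies of an initial state and watch how the spread of the ensemble grows.
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(s, t_end, dt=0.001):
    """Fixed-step 4th-order Runge-Kutta integration of the Lorenz equations."""
    for _ in range(int(t_end / dt)):
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(s + 0.5 * dt * k1)
        k3 = lorenz_rhs(s + 0.5 * dt * k2)
        k4 = lorenz_rhs(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return s

rng = np.random.default_rng(1)
center = np.array([-10.0, -4.5, 35.0])                 # arbitrary starting point
ensemble = center + 0.01 * rng.normal(size=(50, 3))    # 50 perturbed members

final = np.array([integrate(m, t_end=2.0) for m in ensemble])
print("initial spread:", np.std(ensemble, axis=0))
print("final spread:  ", np.std(final, axis=0))
# Depending on where the initial state sits on the attractor, the final spread
# can stay small or grow explosively -- the difference between the panels of Fig 1.
```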

Now it turns out that the equation which describes the evolution of probability in such chaotic systems, known as the Liouville equation, is itself a linear equation. The linearity of the Liouville equation ensures that probabilities are conserved in time. Hence, for example, if there is an 80% chance that the actual state of the fluid (as described by the Lorenz equation state) lies within a certain contour of probability at initial time, then there is an 80% chance that the actual state of the fluid lies within the evolved contour of probability at the forecast time.
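
For a deterministic flow \( \dot{\mathbf{x}} = \mathbf{F}(\mathbf{x}) \) – with F the right-hand side of the Lorenz equations in this example – the (generalised) Liouville equation for the probability density ρ is

\[ \frac{\partial \rho}{\partial t} + \nabla \cdot \big( \rho\, \mathbf{F} \big) = 0 , \]

which is linear in ρ no matter how nonlinear F is; it simply states that probability is transported with the flow and conserved in total.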

The remarkable thing is that the Liouville equation is formally very similar to the so-called von-Neumann form of the Schrödinger equation – too similar, in my view, for this to be a coincidence. So, just as the linearity of the Liouville equation says nothing about the nonlinearity of the underlying deterministic dynamics which generate such probabilities, so too the linearity of the Schrödinger equation need say nothing about the nonlinearity of some underlying dynamics which generates quantum probabilities.

However, as I wrote above, according to the usual reading of Bell’s theorem it would appear that a chaotic model, being deterministic, will have to violate relativistic causality, seemingly thwarting the aim of trying to unify our theories of physics. At least, that’s the usual conclusion. However, the undecidable, uncomputable properties of fractal attractors provide a novel route that allows us to reassess this conclusion. I will explain how this works in the second part of this post.

Sunday, February 02, 2020

Does nature have a minimal length?

Molecules are made of atoms. Atomic nuclei are made of neutrons and protons. And the neutrons and protons are made of quarks and gluons. Many physicists think that this is not the end of the story, but that quarks and gluons are made of even smaller things, for example the tiny vibrating strings that string theory is all about. But then what? Are strings made of smaller things again? Or is there a smallest scale beyond which nature just does not have any further structure? Does nature have a minimal length?

This is what we will talk about today.



When physicists talk about a minimal length, they usually mean the Planck length, which is about 10^-35 meters. The Planck length is named after Max Planck, who introduced it in 1899. 10^-35 meters sounds tiny and indeed it is damned tiny.

To give you an idea, think of the tunnel of the Large Hadron Collider. It’s a ring with a diameter of about 10 kilometers. The Planck length compares to the diameter of a proton as the radius of a proton to the diameter of the Large Hadron Collider.

Currently, the smallest structures that we can study are about ten to the minus nineteen meters. That’s what we can do with the energies produced at the Large Hadron Collider and that is still sixteen orders of magnitude larger than the Planck length.

What’s so special about the Planck length? The Planck length seems to be setting a limit to how small a structure can be so that we can still measure it. That’s because to measure small structures, we need to compress more energy into small volumes of space. That’s basically what we do with particle accelerators. Higher energy allows us to find out what happens on shorter distances. But if you stuff too much energy into a small volume, you will make a black hole.

More concretely, if you have an energy E, that will in the best case allow you to resolve a distance of about ℏc/E. I will call that distance Δx. Here, c is the speed of light and ℏ is a constant of nature, called the (reduced) Planck constant. Yes, that’s the same Planck! This relation comes from the uncertainty principle of quantum mechanics. So, higher energies let you resolve smaller structures.

Now you can ask, if I turn up the energy and the size I can resolve gets smaller, when do I get a black hole? Well that happens, if the Schwarzschild radius associated with the energy is similar to the distance you are trying to measure. That’s not difficult to calculate. So let’s do it.

The Schwarzschild radius is approximately M times G/c², where G is Newton’s constant and M is the mass. We are asking, when is that radius similar to the distance Δx? As you almost certainly know, the mass associated with the energy is E = Mc². And, as we previously saw, that energy is just ℏc/Δx. You can then solve this equation for Δx. And this is what we call the Planck length. It is associated with an energy called the Planck energy. If you go to higher energies than that, you will just make larger black holes. So the Planck length is the shortest distance you can measure.
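
Written out, with factors of order one dropped, the estimate reads:

\[
\Delta x \sim \frac{\hbar c}{E} = \frac{\hbar}{M c},
\qquad
R_s \sim \frac{G M}{c^{2}},
\qquad
R_s \sim \Delta x
\;\;\Rightarrow\;\;
M \sim \sqrt{\frac{\hbar c}{G}},
\qquad
\Delta x \sim \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\,\mathrm{m}.
\]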

Now, this is a neat estimate and it’s not entirely wrong, but it’s not a rigorous derivation. If you start thinking about it, it’s a little handwavy, so let me assure you there are much more rigorous ways to do this calculation, and the conclusion remains basically the same. If you combine quantum mechanics with gravity, then the Planck length seems to set a limit to the resolution of structures. That’s why physicists think nature may have a fundamentally minimal length.

Max Planck by the way did not come up with the Planck length because he thought it was a minimal length. He came up with it simply because it’s the only quantity with the dimension of a length that you can create from the fundamental constants c, the speed of light, G, Newton’s constant, and ℏ. He thought that was interesting because, as he wrote in his 1899 paper, these would be natural units that also aliens would use.

The idea that the Planck length is a minimal length only came up after the development of general relativity when physicists started thinking about how to quantize gravity. Today, this idea is supported by attempts to develop a theory of quantum gravity, which I told you about in an earlier video.

In string theory, for example, if you squeeze too much energy into a string it will start spreading out. In Loop Quantum Gravity, the loops themselves have a finite size, given by the Planck length. In Asymptotically Safe Gravity, the gravitational force becomes weaker at high energies, so beyond a certain point you can no longer improve your resolution.

When I speak about a minimal length, a lot of people seem to have a particular image in mind, which is that the minimal length works like a kind of discretization, like the pixelation of a photo or something like that. But that is most definitely the wrong image. The minimal length that we are talking about here is more like an unavoidable blur on an image, some kind of fundamental fuzziness that nature has. It may, but does not necessarily, come with a discretization.

What does this all mean? Well, it means that we might be close to finding a final theory, one that describes nature at its most fundamental level and there is nothing more beyond that. That is possible, but keep in mind that the arguments for the existence of a minimal length rest on extrapolating sixteen orders of magnitude below the distances we have tested so far. That’s a lot. That extrapolation might just be wrong. Even though we do not currently have any reason to think that there should be something new at distances even shorter than the Planck length, that situation might change in the future.

Still, I find it intriguing that for all we currently know, it is not necessary to think about distances shorter than the Planck length.

Friday, January 24, 2020

Do Black Holes Echo?

What happens with the event horizon of two black holes if they merge? Might gravitational waves emitted from such a merger tell us if Einstein’s theory of general relativity is wrong? Yes, they might. But it’s unlikely. In this video, I will explain why. In more detail, I will tell you about the possibility that a gravitational wave signal from a black hole merger has echoes.


But first, some context. We know that Einstein’s theory of general relativity is incomplete. We know that because it cannot handle quantum properties. To complete General Relativity, we need a theory of quantum gravity. But progress in theory development has been slow and experimental evidence for quantum gravity is hard to come by because quantum fluctuations of space-time are so damn tiny. In my previous video I told you about the most promising ways of testing quantum gravity. Today I want to tell you about testing quantum gravity with black hole horizons in particular.

The effects of quantum gravity become large when space and time are strongly curved. This is the case towards the center of a black hole, but it is not the case at the horizon of a black hole. Most people get this wrong, so let me repeat this. The curvature of space is not strong at the horizon of a black hole. It can, in fact, be arbitrarily weak. That’s because the curvature at the horizon is inversely proportional to the square of the black hole’s mass. This means the larger the black hole, the weaker the curvature at the horizon. It also means we have no reason to think that there are any quantum gravitational effects near the horizon of a black hole. It’s an almost flat and empty space.
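
In numbers: a typical curvature component of the Schwarzschild geometry at radius r is of order \( GM/(c^2 r^3) \); evaluated at the horizon radius \( r_s = 2GM/c^2 \) this gives

\[
\mathcal{R}\big|_{r_s} \sim \frac{G M}{c^{2} r_s^{3}} = \frac{c^{4}}{8\, G^{2} M^{2}},
\]

so doubling the mass reduces the curvature at the horizon by a factor of four.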

Black holes do emit radiation by quantum effects. This is the Hawking radiation named after Stephen Hawking. But Hawking radiation comes from the quantum properties of matter. It is an effect of ordinary quantum mechanics and *not an effect of quantum gravity.

However, one can certainly speculate that maybe General Relativity does not correctly describe black hole horizons. So how would you do that? In General Relativity, the horizon is the boundary of a region that you can only get into but never out of. The horizon itself has no substance and indeed you would not notice crossing it. But quantum effects could change the situation. And that might be observable.

Just what you would observe has been studied by Niayesh Afshordi and his group at Perimeter Institute. They try to understand what happens if quantum effects turn the horizon into a physical obstacle that partly reflects gravitational waves. If that was so, the gravitational waves produced in a black hole merger would bounce back and forth between the horizon and the black hole’s photon sphere.

The photon sphere is a potential barrier at about one and a half times the radius of the horizon. The gravitational waves would slowly leak during each iteration rather than escape in one bang. And if that is what is really going on, then gravitational wave interferometers like LIGO should detect echoes of the original merger signal.

And here is the thing! Niayesh and his group did find an echo signal in the gravitational wave data. This signal is in the first event ever detected by LIGO in September 2015. The statistical significance of this echo was originally at 2.5σ. This means that roughly one in a hundred times, random fluctuations conspire to look like the observed echo. So, it’s not a great level of significance, at least not by physics standards. But it’s still 2.5σ better than nothing.
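
For reference, here is how the sigmas translate into probabilities (Gaussian, two-sided convention; the numbers shift a bit depending on the convention used in a given analysis):

```python
# Convert a significance quoted in standard deviations into the probability
# that pure noise fluctuates at least that far (Gaussian, two-sided).
from math import erf, sqrt

def p_value(n_sigma):
    return 1.0 - erf(n_sigma / sqrt(2.0))

for n_sigma in (2.5, 3.0, 5.0):
    print(f"{n_sigma} sigma -> p = {p_value(n_sigma):.2e}")
# 2.5 sigma corresponds to p of about 0.012, i.e. roughly a one-in-a-hundred
# chance that random fluctuations alone look like the observed echo.
```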

Some members of the LIGO collaboration then went and did their own analysis of the data. And they also found the echo, but at a somewhat smaller significance. There has since been some effort by several groups to extract a signal from the data with different techniques of analysis using different models for the exact type of echo signal. The signal could for example be damped over time, or its frequency distribution could change. The reported false alarm rate of these findings ranges from 5% to 0.002%, the latter being close to discovery level.

However, if you know anything about statistical analysis, then you know that trying out different methods of analysis and different models until you find something is not a good idea. Because if you try long enough, you will eventually find something. And in the case of black hole echoes, I suspect that most of the models that gave negative results never appeared in the literature. So the statistical significance may be misleading.

I also have to admit that as a theorist, I am not enthusiastic about black hole echoes because there are no compelling theoretical reasons to expect them. We know that quantum gravitational effects become important towards the center of the black hole. But that’s hidden deep inside the horizon, and the gravitational waves we detect are not sensitive to what is going on there. That quantum gravitational effects are also relevant at the horizon is pure conjecture, and yet that’s what it takes to have black hole echoes.

But theoretical misgivings aside, we have never tested the properties of black hole horizons before, and on unexplored territory all stones should be turned. You find a summary of the current status of the search for black hole echoes in Afshordi’s most recent paper.

Wednesday, January 22, 2020

Travel and Book Update

My book “Lost in Math” has meanwhile also been translated into Hungarian and Polish. Previous translations have appeared in German, Spanish, Italian, and French, I believe. I have somewhat lost track. There should have been a Chinese and a Romanian translation too, I think, but I’m not sure what happened to these. In case someone spots them, please let me know. The paperback version of the US edition is scheduled to appear in June.

My upcoming trips are to Cambridge, UK, for a public debate on the question “How is the scientific method doing?” (on Jan 28th) and a seminar about Superdeterminism (on Jan 29). On Feb 13, I am in Oxford (again) giving a talk about Superfluid Dark Matter (again), but this time at the physics department. On Feb 24th, I am in London for the Researcher to Reader Conference 2020.

On March 9th I am giving a colloq at Brown University. On March 19th I am in Zurich for some kind of panel discussion, details of which I have either forgotten or never knew. On April 8, I am in Gelsenkirchen for a public lecture.

Our Superdeterminism workshop is scheduled for the first week of May (details to come soon). In mid-May I am in Copenhagen for a public lecture. In June I’ll be on Long Island for a conference on peer review organized by the APS.

The easiest way to keep track of my whatabouts and whereabouts is to follow me on Twitter or on Facebook.

Thursday, January 16, 2020

How to test quantum gravity

Today I want to talk about a topic that most physicists get wrong: How to test quantum gravity. Most physicists believe it just is not possible. But it is possible.


Einstein’s theory of general relativity tells us that gravity is due to the curvature of space and time. But this theory is strictly speaking wrong. It is wrong because according to general relativity, gravity does not have quantum properties. I told you all about this in my earlier videos. This lack of quantum behavior of gravity gives rise to mathematical inconsistencies that make no physical sense. To really make sense of gravity, we need a theory of quantum gravity. But we do not have such a theory yet. In this video, we will look at the experimental possibilities that we have to find the missing theory.

But before I do that, I want to tell you why so many physicists think that it is not possible to test quantum gravity.

The reason is that gravity is a very weak force and its quantum effects are even weaker. Gravity does not seem weak in everyday life. But that is because gravity, unlike all the other fundamental forces, does not neutralize. So, on long distances, it is the only remaining force and that’s why we notice it so prominently. But if you look at, for example, the gravitational force between an electron and a proton and the electromagnetic force between them, then the electromagnetic force is a factor 10^40 stronger.
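
A quick back-of-the-envelope check of that factor, with rounded values of the constants:

```python
# Ratio of the electromagnetic to the gravitational attraction between a
# proton and an electron. The separation cancels, since both forces fall
# off as 1/r^2.
G   = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
k   = 8.988e9      # Coulomb constant, N m^2 C^-2
e   = 1.602e-19    # elementary charge, C
m_e = 9.109e-31    # electron mass, kg
m_p = 1.673e-27    # proton mass, kg

ratio = (k * e**2) / (G * m_e * m_p)
print(f"electromagnetic / gravitational ~ {ratio:.1e}")
# ~ 2e39, consistent with the rough factor of 10^40 quoted above.
```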

One way to see what this means is to look at a fridge magnet. The magnetic force of that tiny thing is stronger than the gravitational pull of the whole planet.

Now, in most approaches to quantum gravity, the gravitational force is mediated by a particle. This particle is called the graviton, and it belongs to the gravitational force the same way that the photon belongs to the electromagnetic force. But since gravity is so much weaker than the electromagnetic force, you need ridiculously high energies to produce a measurable number of gravitons. With the currently available technology, it would take a particle accelerator about the size of the Milky Way to reach sufficiently high energies.

And this is why most physicists think that one cannot test quantum gravity. It is testable in principle, all right, but not in practice, because one needs these ridiculously large accelerators or detectors.

However, this argument is wrong. It is wrong because one does not need to produce a quantum of a field to demonstrate that the field must be quantized. Take electromagnetism as an example. We have evidence that it must be quantized right here. Because if it was not quantized, then atoms would not be stable. Somewhat more quantitatively, the discrete energy levels of atomic spectral lines demonstrate that electromagnetism is quantized. And you do not need to detect individual photons for that.

With the quantization of gravity, it’s somewhat more difficult, but not impossible. A big advantage of gravity is that the gravitational force becomes stronger for larger systems because, recall, gravity, unlike the other forces, does not neutralize and therefore adds up. So, we can make quantum gravitational effects stronger by just taking larger masses and bringing them into quantum states, for example into a state in which the masses are in two places at once. One should then be able to tell whether the gravitational field is also in two places at once. And if one can do that, one can demonstrate that gravity has quantum behavior.

But the trouble is that quantum effects for large objects quickly fade away, or “decohere” as the physicists say. So the challenge of measuring quantum gravity comes down to producing and maintaining quantum states of heavy objects. “Heavy” here means something like a milligram. That doesn’t sound heavy, but it is very heavy compared to the masses of elementary particles.

The objects you need for such an experiment have to be heavy enough so that one can actually measure the gravitational field. There are a few experiments attempting to measure this. But presently the masses that one can bring into quantum states are not quite high enough. However, it is something that may well become possible in the coming decades.

Another good chance to observe quantum gravitational effects is to use the universe as laboratory. Quantum gravitational effects should be strong right after the big bang and inside of black holes. Evidence from what happened in the early universe could still be around today, for example in the cosmic microwave background. Indeed, several groups are trying to find out whether the cosmic microwave background can be analyzed to show that gravity must have been quantized. But at least for now the signal is well below measurement precision.

With black holes, it’s more complicated, because the region where quantum gravity is strong is hidden behind the event horizon. But some computer simulations seem to show that stars can collapse without forming a horizon. In this case we could look right at the quantum gravitational effects. The challenge with this idea is to find out just how the observations would differ between a “normal” black hole and a singularity without horizon but with quantum gravitational effects. Again, that’s subject of current research.

And there are other options. For example, the theory of quantum gravity may violate symmetries that are respected by general relativity. Symmetry violations can show up in high-precision measurements at low energies, even if they are very small. This is something that one can look for with particle decays or particle interactions and indeed something that various experimental groups are looking for.

There are several other ways to test quantum gravity, but these are more speculative in that they look for properties that a theory of quantum gravity may not have.

For example, the way in which gravitational waves are emitted in a black hole merger is different if the black hole horizon has quantum effects. However, this may just not be the case. The same goes for the idea that space itself may have the properties of a medium, which could give rise to dispersion, meaning that light of different colors travels at different speeds, or to viscosity. Again, this is something that one can look for, and that physicists are looking for. It’s not our best shot though, because quantum gravity may not give rise to these effects.

In any case, as you can see, clearly it is possible to test quantum gravity. Indeed I think it is possible that we will see experimental evidence for quantum gravity in the next 20 years, most likely by the type of test that I talked about first, with the massive quantum objects.