Friday, November 15, 2019

Did scientists get climate change wrong?

On my recent trip to the UK, I spoke with Tim Palmer about the uncertainty in climate predictions.

Saturday, November 09, 2019

How can we test a Theory of Everything?

How can we test a Theory of Everything? That’s a question I get a lot in my public lectures. In the past decade, physicists have put forward some speculations that cannot be experimentally ruled out, ever, because you can always move predictions to energies higher than what we have tested so far. Supersymmetry is an example of a theory that is untestable in this particular way. After I explain this, I am frequently asked if it is possible to test a theory of everything, or whether such theories are just entirely unscientific.


It’s a good question. But before we get to the answer, I have to tell you exactly what physicists mean by “theory of everything”, so we’re on the same page. For all we currently know, the world is held together by four fundamental forces. That’s the electromagnetic force, the strong and the weak nuclear force, and gravity. All other forces, like, for example, the Van der Waals forces that hold molecules together, or muscle forces, derive from those four fundamental forces.

The electromagnetic force and the strong and the weak nuclear force are combined in the standard model of particle physics. These forces have in common that they have quantum properties. But the gravitational force stands apart from the three other forces because it does not have quantum properties. That’s a problem, as I have explained in an earlier video. A theory that solves the problem of the missing quantum behavior of gravity is called “quantum gravity”. That’s not the same as a theory of everything.

If you combine the three forces in the standard model to only one force from which you can derive the standard model, that is called a “Grand Unified Theory” or GUT for short. That’s not a theory of everything either.

If you have a theory from which you can derive gravity and the three forces of the standard model, that’s called a “Theory of Everything” or TOE for short. So, a theory of everything is both a theory of quantum gravity and a grand unified theory.

The name is somewhat misleading. Such a theory of everything would of course *not* explain everything. That’s because for most purposes it would be entirely impractical to use it. It would be impractical for the same reason it’s impractical to use the standard model to explain chemical reactions, not to mention human behavior. The description of large objects in terms of their fundamental constituents does not actually give us much insight into what the large objects do. A theory of everything, therefore, may explain everything in principle, but still not do so in practice.

The other problem with the name “theory of everything” is that we can never be sure we will not, at some point in the future, discover something that the theory does not explain. Maybe there is indeed a fifth fundamental force? Who knows.

So, what physicists call a theory of everything should really be called “a theory of everything we know so far, at least in principle.”

The best known example of a theory of everything is string theory. There are a few other approaches. Alain Connes, for example, has an approach based on non-commutative geometry. Asymptotically safe gravity may include a grand unification and therefore counts as a theory of everything. Though, for reasons I don’t quite understand, physicists do not normally discuss asymptotically safe gravity as a candidate for a theory of everything. If you know why, please leave a comment.

These are the large programs. Then there are a few small programs, like Garrett Lisi’s E8 theory, or Xiao-Gang Wen’s idea that the world is really made of qubits, or Felix Finster’s causal fermion systems.

So, are these theories testable?

Yes, they are testable. The reason is that any theory which solves the problem with quantum gravity must make predictions that deviate from general relativity. And those predictions, this is really important, cannot be arbitrarily moved to higher and higher energies. We know that because combining general relativity with the standard model, without quantizing gravity, just stops working near an energy known as the Planck energy.
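
For orientation: the Planck energy is built out of the fundamental constants, and a back-of-the-envelope value (standard textbook numbers, nothing specific to any of the approaches discussed here) is

```latex
E_{\mathrm{Pl}} \;=\; \sqrt{\frac{\hbar c^{5}}{G}} \;\approx\; 1.2\times 10^{19}\ \mathrm{GeV} \;\approx\; 2\times 10^{9}\ \mathrm{J},
```

which is some fifteen orders of magnitude above the collision energies of the Large Hadron Collider.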

These approaches to a theory of everything normally also make other predictions. For example they often come with a story about what happened in the early universe, which can have consequences that are still observable today. In some cases they result in subtle symmetry violations that can be measurable in particle physics experiments. The details about this differ from one theory to the next.

But what you really wanted to know, I guess, is whether these tests will be practically possible any time soon. I do think it is realistically possible that we will be able to see these deviations from general relativity in the next 50 years or so. About the other tests, which rely on models for the early universe or on symmetry violations, I’m not so sure, because for these it is again possible to move the predictions and then claim that we need bigger and better experiments to see them.

Is there any good reason to think that such a theory of everything is correct in the first place? No. There is good reason to think that we need a theory of quantum gravity, because without that the current theories are just inconsistent. But there is no reason to think that the forces of the standard model have to be unified, or that all the forces ultimately derive from one common explanation. It would be nice, but maybe that’s just not how the universe works.

Saturday, November 02, 2019

Have we really measured gravitational waves?


A few days ago I met a friend on the subway. He told me he had been at a conference where someone asked if he knows me. He said yes, and immediately people started complaining about me. One guy, apparently, told him to slap me.

What were they complaining about, you want to know? Well, one complaint came from a particle physicist, who was clearly dismayed that I think building a bigger particle collider is not a good way to invest $40 billion. But it was true when I said it the first time and it is still true: There are better things we can do with this amount of money. (Such as, for example, making better climate predictions, which can be done for as “little” as 1 billion dollars.)

Back to my friend on the subway. He told me that besides the grumpy particle physicist there were also several gravitational wave people who have issues with what I have written about the supposed gravitational wave detections by the LIGO collaboration. Most of the time if people have issues with what I’m saying it’s because they do not understand what I’m saying to begin with. So with this video, I hope to clear the situation up.

Let me start with the most important point. I do not doubt that the gravitational wave detections are real. But. I spend a lot of time on science communication, and I know that many of you doubt that these detections are real. And, to be honest, I cannot blame you for this doubt. So here’s my issue. I think that the gravitational wave community is doing a crappy job justifying the expenses for their research. They give science a bad reputation. And I do not approve of this.

Before I go on, a quick reminder what gravitational waves are. Gravitational waves are periodic deformations of space and time. These deformations can happen because Einstein’s theory of general relativity tells us that space and time are not rigid, but react to the presence of matter. If you have some distribution of matter that curves space a lot, such as a pair of black holes orbiting one another, these will cause space-time to wobble and the wobbles carry energy away. That’s what gravitational waves are.

We have had indirect evidence for gravitational waves since the 1970s, because you can measure how much energy a system loses through gravitational waves without directly measuring the waves themselves. Hulse and Taylor did this by closely monitoring the orbital frequency of a pulsar binary. If the system loses energy, the two stars get closer and orbit faster around each other. The predictions for the emission of gravitational waves fit the observations exactly. Hulse and Taylor got a Nobel Prize for this in 1993.

For the direct detection of gravitational waves you have to measure the deformation of space and time that they cause. You can do this by using very sensitive interferometers. An interferometer bounces laser light back and forth in two orthogonal directions and then combines the light.

Light is a wave and depending on whether the crests of the waves from the two directions lie on top of each other or not, the resulting signal is strong – that’s constructive interference – or washed out – that’s destructive interference. Just what happens depends very sensitively on the distance that the light travels. So you can use changes in the strength of the interference pattern to figure out whether one of the directions of the interferometer was temporarily shorter or longer.
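
To make this concrete, here is a minimal numerical sketch of an idealized two-arm interferometer. This is my own toy illustration (the wavelength is that of the lasers LIGO uses, everything else is simplified), not LIGO’s actual analysis:

```python
import numpy as np

WAVELENGTH = 1064e-9  # laser wavelength in meters

def output_power(delta_L, wavelength=WAVELENGTH):
    """Relative power at the output port of an idealized Michelson
    interferometer, as a function of the difference in arm length.
    The light passes each arm twice (there and back), so the phase
    difference is 2*pi*(2*delta_L)/wavelength."""
    phase = 2 * np.pi * (2 * delta_L) / wavelength
    return np.cos(phase / 2) ** 2  # 1 = fully constructive, 0 = fully destructive

# Even a tiny change in arm length shifts the interference noticeably:
for dL in (0.0, WAVELENGTH / 8, WAVELENGTH / 4):
    print(f"delta_L = {dL:.3e} m  ->  relative power = {output_power(dL):.3f}")
```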

A question that I frequently get is how can this interferometer detect anything if both the light and the interferometer itself deform with space-time? Wouldn’t the effect cancel out? No, it does not cancel out, because the interferometer is not made of light. It’s made of massive particles and therefore reacts differently to a periodic deformation of space-time than light does. That’s why one can use light to find out that something happened for real. For more details, please check these papers.

The first direct detection of gravitational waves was made by the LIGO collaboration in September 2015. LIGO consists of two separate interferometers, both located in the United States, about three thousand kilometers apart. Gravitational waves travel at the speed of light, so if one comes through, it should trigger both detectors with a small delay that comes from the time it takes the wave to travel from one detector to the other. Looking for a signal that appears almost simultaneously in the two detectors helps to identify the signal in the noise.

This first signal measured by LIGO looks like a textbook example of a gravitational wave signal from a merger of two black holes. It’s a periodic signal that increases in frequency and amplitude, as the two black holes get closer to each other and their orbiting period gets shorter. When the horizons of the two black holes merge, the signal is suddenly cut off. After this follows a brief period in which the newly formed larger black hole settles into a new state, called the ringdown. A Nobel Prize was awarded for this measurement in 2017. If you plot the frequency distribution over time, you get this banana-shaped curve. Here it’s the upward bend that tells you that the frequency increases before the signal dies off entirely.
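
For illustration only, here is a toy “chirp” with growing frequency and amplitude, followed by a cut-off and a damped ringdown. This is a cartoon of the waveform’s shape with made-up numbers, not the general-relativistic template that is actually fitted to the data:

```python
import numpy as np

def toy_chirp(t, t_merge=1.0, tau=0.02):
    """Cartoon of a binary-black-hole signal. Before t_merge, frequency and
    amplitude grow (with the Newtonian chirp exponents -3/8 and -1/4);
    after t_merge, an exponentially damped oscillation mimics the ringdown."""
    dt = t[1] - t[0]
    f_inst = 35.0 / np.maximum(t_merge - t, 5e-3) ** 0.375  # instantaneous frequency
    amp = 1.0 / np.maximum(t_merge - t, 5e-3) ** 0.25       # growing amplitude
    phase = 2 * np.pi * np.cumsum(f_inst) * dt               # integrate frequency to get phase
    inspiral = amp * np.sin(phase)
    ringdown = amp.max() * np.exp(-(t - t_merge) / tau) * np.sin(phase)
    return np.where(t < t_merge, inspiral, ringdown)

t = np.linspace(0.0, 1.1, 5000)
signal = toy_chirp(t)  # plot t against signal to see the chirp and the cut-off
```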

Now, what’s the problem? The first problem is that no one seems to actually know where the curve in the famous LIGO plot came from. You would think it was obtained by a calculation, but members of the collaboration are on record saying it was “not found using analysis algorithms” but partly done “by eye” and “hand-tuned for pedagogical purposes.” Both the collaboration and the journal in which the paper was published have refused to comment. This, people, is highly inappropriate. We should not hand out Nobel Prizes if we don’t know how the predictions were fitted to the data.

The other problem is that so far we do not have a confirmation that the signals which LIGO detects are in fact of astrophysical origin, and not misidentified signals that originated on Earth. The way that you could show this is with a LIGO detection that matches electromagnetic signals, such as gamma ray bursts, measured by telescopes.

The collaboration had, so far, one opportunity for this, which was an event in August 2017. The problem with this event is that the announcement from the collaboration about their detection came after the announcement of the incoming gamma ray. Therefore, the LIGO detection does not count as a confirmed prediction, because it was not a prediction in the first place – it was a postdiction.

It seems to offend people in the collaboration tremendously if I say this, so let me be clear. I have no reason to think that something fishy went on, and I know why the original detection did not result in an automatic alert. But this isn’t the point. The point is that no one knows what happened before the official announcement besides members of the collaboration. We are waiting for an independent confirmation. This one missed the mark.

Since 2017, the two LIGO detectors have been joined by a third detector called Virgo, located in Italy. In their third run, which started in April this year, the LIGO/Virgo collaboration has issued alerts for 41 events. From these 41 alerts, 8 were later retracted. Of the remaining gravitational wave events, 10 look like they are either neutron star mergers, or mergers of a neutron star with a black hole. In these cases, there should also be electromagnetic radiation emitted which telescopes can see. For black hole mergers, one does not expect this to be the case.

However, no telescope has so far seen a signal that fits to any of the gravitational wave events. This may simply mean that the signals have been too weak for the telescopes to see them. But whatever the reason, the consequence is that we still do not know that what LIGO and Virgo see are actually signals from outer space.

You may ask isn’t it enough that they have a signal in their detector that looks like it could be caused by gravitational waves? Well, if this was the only thing that could trigger the detectors, yes. But that is not the case. The LIGO detectors have about 10-100 “glitches” per day. The glitches are bright and shiny signals but do not look like gravitational wave events. The cause of some of these glitches is known; the cause of others is not. LIGO uses a citizen science project to classify these glitches and has given them funky names like “Koi Fish” or “Blip.”

What this means is that they do not really know what their detector detects. They just throw away data that don’t look like they want it to look. This is not a good scientific procedure. Here is why.

Think of an animal. Let me guess, it’s... an elephant. Right? Right for you, right for you, not right for you? Hmm, that’s a glitch in the data, so you don’t count.

Does this prove that I am psychic? No, of course it doesn’t. Because selectively throwing away data that’s inconvenient is a bad idea. Goes for me, goes for LIGO too. At least that’s what you would think.

If we had an independent confirmation that the good-looking signal is really of astrophysical origin, this wouldn’t matter. But we don’t have that either. So that’s the situation in summary. The signals that LIGO and Virgo see are well explained by gravitational wave events. But we cannot be sure that these are actually signals coming from outer space and not some unknown terrestrial effect.

Let me finish by saying once again that personally I do not actually doubt these signals are caused by gravitational waves. But in science, it’s evidence that counts, not opinion.

Wednesday, October 30, 2019

The crisis in physics is not only about physics

[Image: downward spiral]
In the foundations of physics, we have not seen progress since the mid-1970s, when the standard model of particle physics was completed. Ever since then, the theories we use to describe observations have remained unchanged. Sure, some aspects of these theories have only been experimentally confirmed later. The last to-be-confirmed particle was the Higgs-boson, predicted in the 1960s, measured in 2012. But all shortcomings of these theories – the missing quantization of gravity, dark matter, the quantum measurement problem, and more – have been known for more than 80 years. And they are as unsolved today as they were then.

The major cause of this stagnation is that physics has changed, but physicists have not changed their methods. As physics has progressed, the foundations have become increasingly harder to probe by experiment, and technological advances have not kept the size and expense of the necessary experiments manageable. This is why, in physics today, we have collaborations of thousands of people operating machines that cost billions of dollars.

With fewer experiments, serendipitous discoveries become increasingly unlikely. And lacking those discoveries, the technological progress that would be needed to keep experiments economically viable never materializes. It’s a vicious cycle: Costly experiments result in lack of progress. Lack of progress increases the costs of further experiment. This cycle must eventually lead into a dead end when experiments become simply too expensive to remain affordable. A $40 billion particle collider is such a dead end.

The only way to avoid being sucked into this vicious cycle is to choose carefully which hypothesis to put to the test. But physicists still operate by the “just look” idea as if this was the 19th century. They do not think about which hypotheses are promising because their education has not taught them to do so. Such self-reflection would require knowledge of the philosophy and sociology of science, and those are subjects physicists merely make dismissive jokes about. They believe they are too intelligent to have to think about what they are doing.

The consequence has been that experiments in the foundations of physics past the 1970s have only confirmed the already existing theories. None found evidence of anything beyond what we already know.

But theoretical physicists did not learn the lesson and still ignore the philosophy and sociology of science. I encounter this dismissive behavior personally pretty much every time I try to explain to cosmologists or particle physicists that we need smarter ways to share information and make decisions in large, like-minded communities. If they react at all, they are insulted if I point out that social reinforcement – aka group-think – befalls us all, unless we actively take measures to prevent it.

Instead of examining the way that they propose hypotheses and revising their methods, theoretical physicists have developed a habit of putting forward entirely baseless speculations. Over and over again I have heard them justifying their mindless production of mathematical fiction as “healthy speculation” – entirely ignoring that this type of speculation has demonstrably not worked for decades and continues to not work. There is nothing healthy about this. It’s sick science. And, embarrassingly enough, that’s plain to see for everyone who does not work in the field.

This behavior is based on the hopelessly naïve, not to mention ill-informed, belief that science always progresses somehow, and that sooner or later certainly someone will stumble over something interesting. But even if that happened – even if someone found a piece of the puzzle – at this point we wouldn’t notice, because today any drop of genuine theoretical progress would drown in an ocean of “healthy speculation”.

And so, what we have here in the foundations of physics is a plain failure of the scientific method. All these wrong predictions should have taught physicists that just because they can write down equations for something does not mean this math is a scientifically promising hypothesis. String theory, supersymmetry, multiverses. There’s math for it, alright. Pretty math, even. But that doesn’t mean this math describes reality.

Physicists need new methods. Better methods. Methods that are appropriate to the present century.

And please spare me the complaints that I supposedly do not have anything better to suggest, because that is a false accusation. I have said many times that looking at the history of physics teaches us that resolving inconsistencies has been a reliable path to breakthroughs, so that’s what we should focus on. I may be on the wrong track with this, of course. But for all I can tell at this moment in history I am the only physicist who has at least come up with an idea for what to do.

Why don’t physicists have a hard look at their history and learn from their failure? Because the existing scientific system does not encourage learning. Physicists today can happily make a career by writing papers about things no one has ever observed, and never will observe. This continues to go on because there is nothing and no one that can stop it.

You may want to put this down as a minor worry because – $40 billion collider aside – who really cares about the foundations of physics? Maybe all these string theorists have been wasting tax-money for decades, alright, but in the large scheme of things it’s not all that much money. I grant you that much. Theorists are not expensive.

But even if you don’t care what’s up with strings and multiverses, you should worry about what is happening here. The foundations of physics are the canary in the coal mine. It’s an old discipline and the first to run into this problem. But the same problem will sooner or later surface in other disciplines if experiments become increasingly expensive and recruit large fractions of the scientific community.

Indeed, we see this beginning to happen in medicine and in ecology, too.

Small-scale drug trials have pretty much run their course. These are good only to find in-your-face correlations that are universal across most people. Medicine, therefore, will increasingly have to rely on data collected from large groups over long periods of time to find increasingly personalized diagnoses and prescriptions. The studies which are necessary for this are extremely costly. They must be chosen carefully for not many of them can be made. The study of ecosystems faces a similar challenge, where small, isolated investigations are about to reach their limits.

How physicists handle their crisis will give an example to other disciplines. So watch this space.

Tuesday, October 22, 2019

What is the quantum measurement problem?

Today, I want to explain just what the problem is with making measurements according to quantum theory.

Quantum mechanics tells us that matter is not made of particles. It is made of elementary constituents that are often called particles, but are really described by wave-functions. A wave-function is a mathematical object which is neither a particle nor a wave, but it can have properties of both.

The curious thing about the wave-function is that it does not itself correspond to something which we can observe. Instead, it is only a tool with the help of which we calculate what we do observe. To make such a calculation, quantum theory uses the following postulates.

First, as long as you do not measure the wave-function, it changes according to the Schrödinger equation. The Schrödinger equation is different for different particles. But its most important properties are independent of the particle.

One of the important properties of the Schrödinger equation is that it guarantees that the probabilities computed from the wave-function will always add up to one, as they should. Another important property is that the change in time which one gets from the Schrödinger equation is reversible.

But for our purposes the most important property of the Schrödinger equation is that it is linear. This means if you have two solutions to this equation, then any sum of the two solutions, with arbitrary pre-factors, will also be a solution.
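
Spelled out in the standard notation (with Ĥ the Hamilton operator that encodes the particle-specific details mentioned above), linearity means:

```latex
i\hbar\,\frac{\partial \Psi}{\partial t} = \hat H\,\Psi
\qquad\Longrightarrow\qquad
\text{if } \Psi_1 \text{ and } \Psi_2 \text{ are solutions, then so is }
a\,\Psi_1 + b\,\Psi_2 \text{ for any complex numbers } a, b.
```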

The second postulate of quantum mechanics tells you how you calculate from the wave-function what is the probability of getting a specific measurement outcome. This is called the “Born rule,” named after Max Born who came up with it. The Born rule says that the probability of a measurement is the absolute square of that part of the wave-function which describes a certain measurement outcome. To do this calculation, you also need to know how to describe what you are observing – say, the momentum of a particle. For this, you need further postulates, but these do not need to concern us today.
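
In formulas, for the simplest case of a discrete set of possible outcomes described by orthonormal states ψ_i (standard textbook notation, nothing specific to this particular discussion), the Born rule reads:

```latex
\Psi = \sum_i c_i\,\psi_i
\qquad\Longrightarrow\qquad
P(\text{outcome } i) = |c_i|^{2},
\qquad \sum_i |c_i|^{2} = 1 .
```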

And third, there is the measurement postulate, sometimes called the “update” or “collapse” of the wave-function. This postulate says that after you have made a measurement, the probability of what you have measured suddenly changes to 1. This, I have to emphasize, is a necessary requirement to describe what we observe. I cannot stress this enough because a lot of physicists seem to find it hard to comprehend. If you do not update the wave-function after measurement, then the wave-function does not describe what we observe. We do not, ever, observe a particle that is 50% measured.
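
Schematically, the update postulate then says that if the measurement yields outcome k, you replace the wave-function accordingly:

```latex
\Psi = \sum_i c_i\,\psi_i
\;\;\xrightarrow{\ \text{measurement yields } k\ }\;\;
\Psi' = \psi_k ,
\qquad P(\text{outcome } k) \to 1 .
```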

The problem with the quantum measurement is now that the update of the wave-function is incompatible with the Schrödinger equation. The Schrödinger equation, as I already said, is linear. That means if you have two different states of a system, both of which are allowed according to the Schrödinger equation, then the sum of the two states is also an allowed solution. The best known example of this is Schrödinger’s cat, which is a state that is a sum of both dead and alive. Such a sum is what physicists call a superposition.

We do, however, only observe cats that are either dead or alive. This is why we need the measurement postulate. Without it, quantum mechanics would not be compatible with observation.

The measurement problem, I have to emphasize, is not solved by decoherence, even though many physicists seem to believe this to be so. Decoherence is a process that happens if a quantum superposition interacts with its environment. The environment may simply be air or, even in vacuum, you still have the radiation of the cosmic microwave background. There is always some environment. This interaction with the environment eventually destroys the ability of quantum states to display typical quantum behavior, like the ability of particles to create interference patterns. The larger the object, the more quickly its quantum behavior gets destroyed.

Decoherence tells you that if you average over the states of the environment, because you do not know exactly what they do, then you no longer have a quantum superposition. Instead, you have a distribution of probabilities. This is what physicists call a “mixed state”. This does not solve the measurement problem because after measurement, you still have to update the probability of what you have observed to 100%. Decoherence does not tell you to do that.
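
A compact way to see what decoherence does and does not do is the standard density-matrix notation. Here is a textbook sketch for a two-state superposition Ψ = a ψ₁ + b ψ₂, not tied to any particular model of the environment:

```latex
\rho \;=\;
\begin{pmatrix} |a|^{2} & a\,b^{*} \\ a^{*}b & |b|^{2} \end{pmatrix}
\;\;\xrightarrow{\ \text{decoherence}\ }\;\;
\begin{pmatrix} |a|^{2} & 0 \\ 0 & |b|^{2} \end{pmatrix} .
```

The off-diagonal entries, which encode the quantum interference, are suppressed, leaving an ordinary probability distribution on the diagonal. But nothing in this step picks one of the diagonal entries and sets it to one; that last step is the measurement update, and decoherence does not provide it.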

Why is the measurement postulate problematic? The trouble with the measurement postulate is that the behavior of a large thing, like a detector, should follow from the behavior of the small things that it is made up of. But that is not the case. So that’s the issue. The measurement postulate is incompatible with reductionism. It makes it necessary that the formulation of quantum mechanics explicitly refers to macroscopic objects like detectors, when really what these large things are doing should follow from the theory.

A lot of people seem to think that you can solve this problem by way of re-interpreting the wave-function as merely encoding the knowledge that an observer has about the state of the system. This is what is called a Copenhagen or “neo-Copenhagen” interpretation. (And let me warn you that this is not the same as a Psi-epistemic interpretation, in case you have heard that word.)

Now, if you believe that the wave-function merely describes the knowledge an observer has then you may say, of course it needs to be updated if the observer makes a measurement. Yes, that’s very reasonable. But of course this also refers to macroscopic concepts like observers and their knowledge. And if you want to use such concepts in the postulates of your theory, you are implicitly assuming that the behavior of observers or detectors is incompatible with the behavior of the particles that make up the observers or detectors. This requires that you explain when and how this distinction is to be made and none of the existing neo-Copenhagen approaches explain this.

I already told you in an earlier blogpost why the many worlds interpretation does not solve the measurement problem. To briefly summarize it, it’s because in the many worlds interpretation one also has to use a postulate about what a detector does.

What does it take to actually solve the measurement problem? I will get to this, so stay tuned.

Wednesday, October 16, 2019

Dark matter nightmare: What if we are just using the wrong equations?

Dark matter filaments. Computer simulation.
[Image: John Dubinski (U of Toronto)]
Einstein’s theory of general relativity is an extremely well-confirmed theory. Countless experiments have shown that its predictions for our solar system agree with observation to utmost accuracy. But when we point our telescopes at larger distances, something is amiss. Galaxies rotate faster than expected. Galaxies in clusters move faster than they should. The expansion of the universe is speeding up.

General relativity does not tell us what is going on.

Physicists have attributed these puzzling observations to two newly postulated substances: Dark matter and dark energy. These two names are merely placeholders in Einstein’s original equations; their sole purpose is to remove the mismatch between prediction and observation.

This is not a new story. We have had evidence for dark matter since the 1930s, and dark energy was on the radar already in the 1990s. Both have since occupied thousands of physicists with attempts to explain just what we are dealing with: Is dark matter a particle, and if so what type, and how can we measure it? If it is not a particle, then what do we change about general relativity to fix the discrepancy with measurements? Is dark energy maybe a new type of field? Is it, too, made of some particle? Does dark matter have something to do with dark energy or are the two unrelated?

To answer these questions, hundreds of hypotheses have been proposed, conferences have been held, careers have been made – but here we are, in 2019, and we still don’t know.

Bad enough, you may say, but the thing that really keeps me up at night is this: Maybe all these thousands of physicists are simply using the wrong equations. I don’t mean that general relativity needs to be modified. I mean that we incorrectly use the equations of general relativity to begin with.

The issue is this. General relativity relates the curvature of space and time to the sources of matter and energy. Put in a distribution of matter and energy at any one moment of time, and the equations tell you what space and time do in response, and how the matter must move according to this response.

But general relativity is a non-linear theory. This means, loosely speaking, that gravity gravitates. More concretely, it means that if you have two solutions to the equations and you take their sum, this sum will in general not also be a solution.

Now, what we do when we want to explain what a galaxy does, or a galaxy cluster, or even the whole universe, is not to plug the matter and energy of every single planet and star into the equations. This would be computationally unfeasible. Instead, we use an average of matter and energy, and use that as the source for gravity.

Needless to say, taking an average on one side of the equation requires that you also take an average on the other side. But since the gravitational part is non-linear, this will not give you the same equations that we use for the solar system: The average of a function of a variable is not the same as the function of the average of the variable. We know it’s not. But whenever we use general relativity on large scales, we assume that it is.
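
That the average of a function is not the function of the average is easy to check with a toy example. Here is a minimal sketch with a simple non-linear function; it has nothing to do with the actual Einstein equations, it only illustrates the general phenomenon:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=0.5, size=100_000)  # some fluctuating quantity

def f(x):
    return x ** 2  # any non-linear function will do

print(np.mean(f(x)))  # <f(x)>, approximately 1.25 (= mean^2 + variance)
print(f(np.mean(x)))  # f(<x>), approximately 1.00

# The difference -- here the variance of x -- is the kind of correction term
# that averaging a non-linear theory produces and that is usually neglected.
```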

So, we know that strictly speaking the equations we use are wrong. The big question is, then, just how wrong are they?

Nosy students who ask this question are usually told these equations are not very wrong and are good to use. The argument goes that the difference between the equation we use and the equation we should use is negligible because gravity is weak in all these cases.

But if you look at the literature somewhat closer, then this argument has been questioned. And these questions have been questioned. And the questioning questions have been questioned. And the debate has remained unsettled until today.

That it is difficult to average non-linear equations is of course not a problem specific to cosmology. It’s a difficulty that condensed matter physicists have to deal with all the time, and it’s a major headache also for climate scientists. These scientists have a variety of techniques to derive the correct equations, but unfortunately the known methods do not easily carry over to general relativity because they do not respect the symmetries of Einstein’s theory.

It’s admittedly an unsexy research topic. It’s technical and tedious and most physicists ignore it. And so, while there are thousands of physicists who simply assume that the correction-terms from averaging are negligible, there are merely two dozen or so people trying to make sure that this assumption is actually correct.

Given how much brain-power physicists have spent on trying to figure out what dark matter and dark energy are, I think it would be a good idea to definitely settle the question whether they are anything at all. At the very least, I would sleep better.

Further reading: Does the growth of structure affect our dynamical models of the universe? The averaging, backreaction and fitting problems in cosmology, by Chris Clarkson, George Ellis, Julien Larena, and Obinna Umeh. Rept. Prog. Phys. 74 (2011) 112901, arXiv:1109.2314 [astro-ph.CO].

Monday, October 07, 2019

What does the future hold for particle physics?

In my new video, I talk about the reason why the Large Hadron Collider, LHC for short, has not found fundamentally new particles besides the Higgs boson, and what this means for the future of particle physics. Below you find a transcript with references.


Before the LHC turned on, particle physicists had high hopes it would find something new besides the Higgs boson, something that would go beyond the standard model of particle physics. There was a lot of talk about the particles that supposedly make up dark matter, which the collider might produce. Many physicists also expected it to find the first of a class of entirely new particles that were predicted based on a hypothesis known as supersymmetry. Others talked about dark energy, additional dimensions of space, string balls, black holes, time travel, making contact to parallel universes or “unparticles”. That’s particles which aren’t particles. So, clearly, some wild ideas were in the air.

To illustrate the situation before the LHC began taking data, let me quote a few articles from back then.

Here is Valerie Jamieson writing for New Scientist in 2008:
“The Higgs and supersymmetry are on firm theoretical footing. Some theorists speculate about more outlandish scenarios for the LHC, including the production of extra dimensions, mini black holes, new forces, and particles smaller than quarks and electrons. A test for time travel has also been proposed.”
Or, here is Ian Sample for the Guardian, also in 2008:
“Scientists have some pretty good hunches about what the machine might find, from creating never-seen-before particles to discovering hidden dimensions and dark matter, the mysterious substance that makes up 25% of the universe.”
Paul Langacker in 2010, writing for the APS:
“Theorists have predicted that spectacular signals of supersymmetry should be visible at the LHC.”
Michael Dine for Physics Today in 2007:
“The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.”
The Telegraph in 2010:
“[The LHC] could answer the question of what causes mass, or even surprise its creators by revealing the existence of a fifth, sixth or seventh secret dimension of time and space.”
A final one. Here is Steve Giddings writing in 2010 for phys.org:
“LHC collisions might produce dark-matter particles... The collider might also shed light on the more predominant “dark energy,”... the LHC may reveal extra dimensions of space... if these extra dimensions are configured in certain ways, the LHC could produce microscopic black holes... Supersymmetry could be discovered by the LHC...”
The Large Hadron Collider has been running since 2010. It has found the Higgs boson. But why didn’t it find any of the other things?

This question is surprisingly easy to answer. There was never a good reason to expect any of these things in the first place. The more difficult question is why did so many particle physicists think those were reasonable expectations, and why has not a single one of them told us what they have learned from their failed predictions?

To see what happened here, it is useful to look at the difference between the prediction of the Higgs-boson and the other speculations. The standard model without the Higgs does not work properly. It becomes mathematically inconsistent at energies that the LHC is able to reach. Concretely, without the Higgs, the standard model predicts probabilities larger than one, which makes no sense.
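
Schematically, and glossing over many details, the trouble comes from the scattering of the longitudinal components of the weak gauge bosons: without the Higgs, the calculated amplitude grows with the collision energy,

```latex
\mathcal{A}\big(W_L W_L \to W_L W_L\big) \;\sim\; \frac{E^{2}}{v^{2}},
\qquad v \approx 246\ \mathrm{GeV},
```

so that the computed “probability” exceeds one at energies around a TeV, which the LHC reaches. The contribution of the Higgs cancels exactly this growth.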

We therefore knew, before the LHC turned on, that something new had to happen. It could have been something else besides the Higgs. The Higgs was one way to fix the problem with the standard model, but there are other ways. However, the Higgs turned out to be right.

All other proposed ideas, extra dimensions, supersymmetry, time-travel, and so on, are unnecessary. These theories have been constructed so that they are compatible with all existing observations. But they are not necessary to solve any problem with the standard model. They are basically wishful thinking.

The reason that many particle physicists believed in these speculations is that they mistakenly thought the standard model has another problem which the existence of the Higgs would not fix. I am afraid that many of them still believe this. This supposed problem is that the standard model is not “technically natural”.

This means the standard model contains one number that is small, but there is no explanation for why it is small. This number is the mass of the Higgs-boson divided by the Planck mass, which happens to be about 10^-15. The standard model works just fine with that number and it fits the data. But a small number like this, without explanation, is ugly and particle physicists didn’t want to believe nature could be that ugly.

Well, now they know that nature doesn’t care what physicists want it to be like.

What does this mean for the future of particle physics? This argument from “technical naturalness” was the only reason that physicists had to think that the standard model is incomplete and something to complete it must appear at LHC energies. Now that it is clear this argument did not work, there is no reason why a next larger collider should see anything new either. The standard model runs into mathematical trouble again at energies about a billion times higher than what a next larger collider could test. At the moment, therefore, we have no good reason to build a larger particle collider.

But particle physics is not only collider physics. And so, it seems likely to me, that research will shift to other areas of physics. A shift that has been going on for two decades already, and will probably become more pronounced now, is the move to astrophysics, in particular the study of dark matter and dark energy and also, to some extent, the early universe.

The other shift that we are likely to see is a move away from high energy particle physics and move towards high precision measurements at lower energies, or to table top experiments probing the quantum behavior of many particle systems, where we still have much to learn.

Wednesday, October 02, 2019

Has Reductionism Run its Course?

For more than 2000 years, ever since Democritus’ first musings about atoms, reductionism has driven scientific inquiry. The idea is simple enough: Things are made of smaller things, and if you know what the small things do, you learn what the large things do. Simple – and stunningly successful.

After 2000 years of taking things apart into smaller things, we have learned that all matter is made of molecules, and that molecules are made of atoms. Democritus originally coined the word “atom” to refer to indivisible, elementary units of matter. But what we have come to call “atoms”, we now know, are made of even smaller particles. And those smaller particles are yet again made of even smaller particles.

[Image: © Sabine Hossenfelder]
The smallest constituents of matter, for all we currently know, are the 25 particles which physicists collect in the standard model of particle physics. Are these particles made up of yet another set of smaller particles, strings, or other things?

It is certainly possible that the particles of the standard model are not the ultimate constituents of matter. But we presently have no particular reason to think they have a substructure. And this raises the question whether attempting to look even closer into the structure of matter is a promising research direction – right here, right now.

It is a question that every researcher in the foundations of physics will be asking themselves, now that the Large Hadron Collider has confirmed the standard model, but found nothing beyond that.

20 years ago, it seemed clear to me that probing physical processes at ever shorter distances is the most reliable way to better understand how the universe works. And since it takes high energies to resolve short distances, this means that slamming particles together at high energies is the route forward. In other words, if you want to know more, you build bigger particle colliders.

This is also, unsurprisingly, what most particle physicists are convinced of. Going to higher energies, so their story goes, is the most reliable way to search for something fundamentally new. This is, in a nutshell, particle physicists’ major argument in favor of building a new particle collider, one even larger than the presently operating Large Hadron Collider.

But this simple story is too simple.

The idea that reductionism means things are made of smaller things is what philosophers more specifically call “methodological reductionism”. It’s a statement about the properties of stuff. But there is another type of reductionism, “theory reductionism”, which instead refers to the relation between theories. One theory can be “reduced” to another one, if the former can be derived from the latter.

Now, the examples of reductionism that particle physicists like to put forward are the cases where both types of reductionism coincide: Atomic physics explains chemistry. Statistical mechanics explains the laws of thermodynamics. The quark model explains regularities in proton collisions. And so on.

But not all cases of successful theory reduction have also been cases of methodological reduction. Take Maxwell’s unification of the electric and magnetic force. From Maxwell’s theory you can derive a whole bunch of equations, such as the Coulomb law and Faraday’s law, that people used before Maxwell explained where they come from. Electromagnetism is therefore clearly a case of theory reduction, but it did not come with a methodological reduction.

Another well-known exception is Einstein’s theory of General Relativity. General Relativity can be used in more situations than Newton’s theory of gravity. But it is not the physics on short distances that reveals the differences between the two theories. Instead, it is the behavior of bodies at high relative speeds and in strong gravitational fields that Newtonian gravity cannot cope with.

Another example that belongs on this list is quantum mechanics. Quantum mechanics reproduces classical mechanics in suitable approximations. It is not, however, a theory about small constituents of larger things. Yes, quantum mechanics is often portrayed as a theory for microscopic scales, but, no, this is not correct. Quantum mechanics is really a theory for all scales, large to small. We have observed quantum effects over distances exceeding 100 km and for objects weighing as “much” as a nanogram, composed of more than 10^13 atoms. It’s just that quantum effects on large scales are difficult to create and observe.

Finally, I would like to mention Noether’s theorem, according to which symmetries give rise to conservation laws. This example is different from the previous ones in that Noether’s theorem was not applied to any theory in particular. But it has resulted in a more fundamental understanding of natural law, and therefore I think it deserves a place on the list.

In summary, history does not support particle physicists’ belief that a deeper understanding of natural law will most likely come from studying shorter distances. On the very contrary, I have begun to worry that physicists’ confidence in methodological reductionism stands in the way of progress. That’s because it suggests we ask certain questions instead of others. And those may just be the wrong questions to ask.

If you believe in methodological reductionism, for example, you may ask what dark energy is made of. But maybe dark energy is not made of anything. Instead, dark energy may be an artifact of our difficulty averaging non-linear equations.

It’s similar with dark matter. The methodological reductionist will ask for a microscopic theory and look for a particle that dark matter is made of. Yet, maybe dark matter is really a phenomenon associated with our misunderstanding of space-time on long distances.

Maybe the biggest problem that methodological reductionism causes lies in the area of quantum gravity, that is, our attempt to resolve the inconsistency between quantum theory and general relativity. Pretty much all existing approaches – string theory, loop quantum gravity, causal dynamical triangulation (check out my video for more) – assume that methodological reductionism is the answer. Therefore, they rely on new hypotheses for short-distance physics. But maybe that’s the wrong way to tackle the problem. The root of our problem may instead be that quantum theory itself must be replaced by a more fundamental theory, one that explains how quantization works in the first place.

Approaches based on methodological reductionism – like grand unified forces, supersymmetry, string theory, preon models, or technicolor – have failed for the past 30 years. This does not mean that there is nothing more to find at short distances. But it does strongly suggest that the next step forward will be a case of theory reduction that does not rely on taking things apart into smaller things.

Sunday, September 29, 2019

Travel Update

The coming days I am in Brussels for a workshop, though I’m not sure where exactly it is or what it is about. It also doesn’t seem to have a website. In any case, I’ll be away, just don’t ask me exactly where or why.

On Oct 15, I am giving a public lecture at the University of Minnesota. On Oct 17, I am giving a colloquium in Cleveland. On Oct 25, I am giving a public lecture in Göttingen (in German). On Oct 29, I’m in Genoa giving a talk at the “Festival della Scienza” to accompany the publication of the Italian translation of my book “Lost in Math.” I don’t speak Italian, so this talk will be in English.

On Nov 5th I’m speaking in Berlin about dark matter. On Nov 6th I am supposed to give a lecture at the Einstein Forum in Potsdam, though that doesn’t seem to be on their website. These two talks in Berlin and Potsdam will also be in German.

On Nov 12th I’m giving a seminar in Oxford, in case Britain still exists at that point. Dec 9th I’m speaking in Wuppertal, details to come, and that will hopefully be the last trip this year.

The next time I’m in the USA will probably be late March 2020. In case you are interested in having me stop by at your place, please get in touch.

I am always happy to meet readers of my blog, so in case our paths cross, do not hesitate to say hi.

Friday, September 27, 2019

The Trouble with Many Worlds

Today I want to talk about the many worlds interpretation of quantum mechanics and explain why I do not think it is a complete theory.



But first, a brief summary of what the many worlds interpretation says. In quantum mechanics, every system is described by a wave-function from which one calculates the probability of obtaining a specific measurement outcome. Physicists usually take the Greek letter Psi to refer to the wave-function.

From the wave-function you can calculate, for example, that a particle which enters a beam-splitter has a 50% chance of going left and a 50% chance of going right. But – and that’s the important point – once you have measured the particle, you know with 100% probability where it is. This means that you have to update your probability and with it the wave-function. This update is also called the wave-function collapse.

The wave-function collapse, I have to emphasize, is not optional. It is an observational requirement. We never observe a particle that is 50% here and 50% there. That’s just not a thing. If we observe it at all, it’s either here or it isn’t. Speaking of 50% probabilities really makes sense only as long as you are talking about a prediction.

Now, this wave-function collapse is a problem for the following reason. We have an equation that tells us what the wave-function does as long as you do not measure it. It’s called the Schrödinger equation. The Schrödinger equation is a linear equation. What does this mean? It means that if you have two solutions to this equation, and you add them with arbitrary prefactors, then this sum will also be a solution to the Schrödinger equation. Such a sum, btw, is also called a “superposition”. I know that superposition sounds mysterious, but that’s really all it is, it’s a sum with prefactors.

The problem is now that the wave-function collapse is not linear, and therefore it cannot be described by the Schrödinger equation. Here is an easy way to understand this. Suppose you have a wave-function for a particle that goes right with 100% probability. Then you will measure it right with 100% probability. No mystery here. Likewise, if you have a particle that just goes left, you will measure it left with 100% probability. But here’s the thing. If you take a superposition of these two states, you will not get a superposition of probabilities. You will get 100% either on the one side, or on the other.
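
In symbols, writing Ψ_left and Ψ_right for the two states just described (standard notation, spelled out only to make the argument easy to follow), the point is:

```latex
\Psi_{\text{left}} \;\xrightarrow{\ \text{measure}\ }\; \text{left}\ (P=1),
\qquad
\Psi_{\text{right}} \;\xrightarrow{\ \text{measure}\ }\; \text{right}\ (P=1),
```

but

```latex
a\,\Psi_{\text{left}} + b\,\Psi_{\text{right}}
\;\xrightarrow{\ \text{measure}\ }\;
\text{left with } P = |a|^{2}
\ \text{ or }\
\text{right with } P = |b|^{2},
```

and not some weighted sum of the two outcomes, which is what a linear evolution applied to the superposition would have to give.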

The measurement process therefore is not only an additional assumption that quantum mechanics needs to reproduce what we observe. It is actually incompatible with the Schrödinger equation.

Now, the most obvious way to deal with that is to say, well, the measurement process is something complicated that we do not yet understand, and the wave-function collapse is a placeholder that we use until we figure out something better.

But that’s not how most physicists deal with it. Most sign up for what is known as the Copenhagen interpretation, which basically says you’re not supposed to ask what happens during measurement. In this interpretation, quantum mechanics is merely a mathematical machinery that makes predictions and that’s that. The problem with Copenhagen – and with all similar interpretations – is that they require you to give up the idea that what a macroscopic object, like a detector, does should be derivable from the theory of its microscopic constituents.

If you believe in the Copenhagen interpretation you have to buy that what the detector does just cannot be derived from the behavior of its microscopic constituents. Because if you could do that, you would not need a second equation besides the Schrödinger equation. That you need this second equation, then, is incompatible with reductionism. It is possible that this is correct, but then you have to explain just where reductionism breaks down and why, which no one has done. And without that, the Copenhagen interpretation and its cousins do not solve the measurement problem; they simply refuse to acknowledge that the problem exists in the first place.

The many worlds interpretation, now, supposedly does away with the problem of the quantum measurement, and it does this by just saying there isn’t such a thing as wave-function collapse. Instead, many worlds people say, every time you make a measurement, the universe splits into several parallel worlds, one for each possible measurement outcome. This universe splitting is also sometimes called branching.

Some people have a problem with the branching because it’s not clear just exactly when or where it should take place, but I do not think this is a serious problem; it’s just a matter of definition. No, the real problem is that after throwing out the measurement postulate, the many worlds interpretation needs another assumption, one that brings the measurement problem back.

The reason is this. In the many worlds interpretation, if you set up a detector for a measurement, then the detector will also split into several universes. Therefore, if you just ask “what will the detector measure”, then the answer is “The detector will measure anything that’s possible with probability 1.”

This, of course, is not what we observe. We observe only one measurement outcome. The many worlds people explain this as follows. Of course you are not supposed to calculate the probability for each branch of the detector. Because when we say detector, we don’t mean all detector branches together. You should only evaluate the probability relative to the detector in one specific branch at a time.

That sounds reasonable. Indeed, it is reasonable. It is just as reasonable as the measurement postulate. In fact, it is logically entirely equivalent to the measurement postulate. The measurement postulate says: Update probability at measurement to 100%. The detector definition in many worlds says: The “Detector” is by definition only the thing in one branch. Now evaluate probabilities relative to this, which gives you 100% in each branch. Same thing.

And because it’s the same thing you already know that you cannot derive this detector definition from the Schrödinger equation. It’s not possible. What the many worlds people are now trying instead is to derive this postulate from rational choice theory. But of course that brings macroscopic terms back in, like actors who make decisions and so on. In other words, this reference to actors and their knowledge is equally in conflict with reductionism as is the Copenhagen interpretation.

And that’s why the many worlds interpretation does not solve the measurement problem and therefore it is equally troubled as all other interpretations of quantum mechanics. What’s the trouble with the other interpretations? We will talk about this some other time. So stay tuned.

Wednesday, September 18, 2019

Windows Black Screen Nightmare

Folks, I have a warning to utter that is somewhat outside my usual preaching.

For the past couple of days one of my laptops has tried to install Windows updates but didn’t succeed. In the morning I would find an error message that said something went wrong. I ignored this because really I couldn’t care less what problems Microsoft causes itself. But this morning, Windows wouldn’t properly start. All I got was a black screen with a mouse cursor. This is the computer I use for my audio and video processing.

Now, I’ve been a Windows user for 20+ years and I don’t get easily discouraged by spontaneously appearing malfunctions. After some back and forth, I managed to open a command prompt from the task manager to launch the Windows explorer by hand. But this just produced an error message about some obscure .dll file being corrupted. Ok, then, I thought, I’ll run sfc /scannow. But this didn’t work; the command just wouldn’t run. At this point I began to feel really bad about this.

I then rebooted the computer a few times with different login options, but got the exact same problem with an administrator login and in the so-called safe mode. The system restore produced an error message, too. Finally, I tried the last thing that came to my mind, a factory reset. Just to have Windows inform me that the command couldn’t be executed.

With that, I had run out of Windows-wisdom and called a helpline. Even the guy on the helpline was impressed by this system’s fuckedupness (if that isn’t a word, it should be) and, after trying a few other things that didn’t work, recommended I wipe the disk clean and reinstall Windows.

So that’s basically how I spent my day, today. Which, btw, happens to be my birthday.

The system is running fine now, though I will have to reinstall all my software. Luckily enough my hard-disk partition seems to have saved all my video and audio files. It doesn’t seem to have been a hardware problem. It also doesn’t smell like a virus. The two IT guys I spoke with said that most likely something went badly wrong with one of those Windows updates. In fact, if you ask Google for Windows Black Screen you’ll find that similar things have happened before after Windows updates. Though, it seems, not quite as severe as this case.

The reason I am telling you this isn’t just to vent (though there’s that), but to ask you, in case you encounter the same problem, to please let us know. Especially if you find a solution that doesn’t require reinstalling Windows from scratch.

Update: Managed to finish what I meant to do before my computer became dysfunctional.

Monday, September 16, 2019

Why do some scientists believe that our universe is a hologram?



Today, I want to tell you why some scientists believe that our universe is really a 3-dimensional projection of a 2-dimensional space. They call it the “holographic principle” and the key idea is this.

Usually, the number of different things you can imagine happening inside a part of space increases with the volume. Think of a bag of particles. The larger the bag, the more particles, and the more details you need to describe what the particles do. These details that you need to describe what happens are what physicists call the “degrees of freedom,” and the number of these degrees of freedom is proportional to the number of particles, which is proportional to the volume.

At least that’s how it normally works. The holographic principle, in contrast, says that you can describe what happens inside the bag by encoding it on the surface of that bag, at the same resolution.

This may not sound all that remarkable, but it is. Here is why. Take a cube that’s made of smaller cubes, each of which is either black or white. You can think of each small cube as a single bit of information. How much information is in the large cube? Well, that’s the number of the smaller cubes, so 3 cubed, or 27, in this example. Or, if you divide every side of the large cube into N pieces instead of three, that’s N cubed. But if you instead count the surface elements of the cube, at the same resolution, you have only 6 times N squared. This means that for large N, there are many more volume bits than surface bits at the same resolution.
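If you want to check this counting yourself, here is a minimal sketch of the arithmetic, nothing more than the cube example above written out:

```python
# Bits in an N x N x N cube of black-or-white cells, versus bits on its
# surface at the same resolution: N^3 volume cells against 6 N^2 surface cells.
for N in (3, 10, 100, 1000):
    volume_bits = N**3
    surface_bits = 6 * N**2
    print(N, volume_bits, surface_bits, volume_bits / surface_bits)

# For N = 3 the cube holds 27 bits against 54 surface bits, but already for
# N = 10 the volume wins, and the ratio then grows linearly with N.
```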

The holographic principle now says that even though there are so many fewer surface bits, the surface bits are sufficient to describe everything that happens in the volume. This does not mean that the surface bits correspond to certain regions of the volume; it’s somewhat more complicated. It means instead that the surface bits describe certain correlations between the pieces of volume. So if you think again of the particles in the bag, these will not move entirely independently.

And that’s what is called the holographic principle, that really you can encode the events inside any volume on the surface of the volume, at the same resolution.

But, you may say, how come we never notice that particles in a bag are somehow constrained in their freedom? Good question. The reason is that the stuff we deal with in everyday life, say, that bag of particles, doesn’t remotely make use of the theoretically available degrees of freedom. Our present observations only test situations well below the limit that the holographic principle says should exist.

The limit from the holographic principle really only matters if the degrees of freedom are strongly compressed, as is the case, for example, for stuff that collapses to a black hole. Indeed, the physics of black holes is one of the most important clues that physicists have for the holographic principle. That’s because we know that black holes have an entropy that is proportional to the area of the black hole horizon, not to its volume. That’s the important part: black hole entropy is proportional to the area, not to the volume.

Now, in thermodynamics entropy counts the number of different microscopic configurations that have the same macroscopic appearance. So, the entropy basically counts how much information you could stuff into a macroscopic thing if you kept track of the microscopic details. Therefore, the area-scaling of the black hole entropy tells you that the information content of black holes is bounded by a quantity which is proportional to the horizon area. This relation is the origin of the holographic principle.
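To get a feeling for the numbers, here is a minimal sketch using the standard Bekenstein-Hawking formula, S = k_B A c³/(4Għ), for a black hole of one solar mass. The constants and the conversion to bits are my own additions for illustration; the point is only the area-scaling.

```python
import math

G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23  # SI units
M_sun = 1.989e30  # kg

def horizon_area(M):
    r_s = 2 * G * M / c**2          # Schwarzschild radius
    return 4 * math.pi * r_s**2      # horizon area

def bh_entropy_bits(M):
    # Bekenstein-Hawking entropy S = k_B * A * c^3 / (4 * G * hbar),
    # converted to bits by dividing out k_B * ln(2).
    S = k_B * horizon_area(M) * c**3 / (4 * G * hbar)
    return S / (k_B * math.log(2))

print(f"{bh_entropy_bits(M_sun):.1e} bits")   # roughly 10^77 bits
# Doubling the mass quadruples the area, and hence the bound: area-scaling.
print(bh_entropy_bits(2 * M_sun) / bh_entropy_bits(M_sun))  # 4.0
```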

The other important clue for the holographic principle comes from string theory. That’s because string theorists like to apply their mathematical methods in a space-time with a negative cosmological constant, which is called an Anti-de Sitter space. Most of them believe, though it has strictly speaking never been proved, that gravity in an Anti-de Sitter space can be described by a different theory that is entirely located on the boundary of that space. And while this idea came from string theory, one does not actually need the strings for this relation between the volume and the surface to work. More concretely, it uses a limit in which the effects of the strings no longer appear. So the holographic principle seems to be more general than string theory.

I have to add though that we do not live in an Anti-de Sitter space because, for all we currently know, the cosmological constant in our universe is positive. Therefore it’s unclear how much the volume-surface relation in Anti-De Sitter space tells us about the real world. And as far as the black hole entropy is concerned, the mathematics we currently have does not actually tell us that it counts the information that one can stuff into a black hole. It may instead only count the information that one loses by disconnecting the inside and outside of the black hole. This is called the “entanglement entropy”. It scales with the surface for many systems other than black holes and there is nothing particularly holographic about it.

Whether or not you buy the motivations for the holographic principle, you may want to know whether we can test it. The answer is definitely maybe. Earlier this year, Erik Verlinde and Kathryn Zurek proposed that we try to test the holographic principle using gravitational wave interferometers. The idea is that if the universe is holographic, then the fluctuations in the two orthogonal directions that the interferometer arms extend into would be more strongly correlated than one normally expects. However, not everyone agrees that the particular realization of holography which Verlinde and Zurek use is the correct one.

Personally I think that the motivations for the holographic principle are not particularly strong and in any case we’ll not be able to test this hypothesis in the coming centuries. Therefore writing papers about it is a waste of time. But it’s an interesting idea and at least you now know what physicists are talking about when they say the universe is a hologram.

Tuesday, September 10, 2019

Book Review: “Something Deeply Hidden” by Sean Carroll

Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime
Sean Carroll
Dutton, September 10, 2019

Of all the weird ideas that quantum mechanics has to offer, the existence of parallel universes is the weirdest. But with his new book, Sean Carroll wants to convince you that it isn’t weird at all. Instead, he argues, if we only take quantum mechanics seriously enough, then “many worlds” are the logical consequence.

Most remarkably, the many worlds interpretation implies that in every instance you split into many separate you’s, all of which go on to live their own lives. It takes something to convince yourself that this is reality, but if you want to be convinced, Carroll’s book is a good starting point.

“Something Deeply Hidden” is an enjoyable and easy-to-follow introduction to quantum mechanics that will answer your most pressing questions about many worlds, such as how worlds split, what happens with energy conservation, or whether you should worry about the moral standards of all your copies.

The book is also notable for what it does not contain. Carroll avoids going through all the different interpretations of quantum mechanics in detail, and only provides short summaries. Instead, the second half of the book is dedicated to his own recent work, which is about constructing space from quantum entanglement. I do find this a promising line of research and he presents it well.

I was somewhat perplexed that Carroll does not mention what I think are the two biggest objections to the many worlds interpretation, but I will write about this in a separate post.

Like Carroll’s previous books, this one is engaging, well-written, and clearly argued. I can unhesitatingly recommend it to anyone who is interested in the foundations of physics.

[Disclaimer: Free review copy]

Sunday, September 08, 2019

Away Note

I'm attending a conference in Oxford the coming week, so there won't be much happening on this blog. Also, please be warned that comments may be stuck in the moderation queue longer than usual.

Friday, September 06, 2019

The five most promising ways to quantize gravity

Today, I want to tell you what ideas physicists have come up with to quantize gravity. But before I get to that, I want to tell you why it matters.



That we do not have a theory of quantum gravity is currently one of the biggest unsolved problems in the foundations of physics. A lot of people, including many of my colleagues, seem to think that a theory of quantum gravity will remain an academic curiosity without practical relevance.

I think they are wrong. That’s because whatever solves this problem will tell us something about quantum theory, and that’s the theory on which all modern electronic devices run, like the ones on which you are watching this video. Maybe it will take 100 years for quantum gravity to find a practical application, or maybe it will even take 1000 years. But I am sure that understanding nature better will not forever remain a merely academic speculation.

Before I go on, I want to be clear that quantizing gravity by itself is not the problem. We can, and have, quantized gravity the same way that we quantize the other interactions. The problem is that the theory which one gets this way breaks down at high energies, and therefore it cannot be how nature works, fundamentally.

This naïve quantization is called “perturbatively quantized gravity” and it was worked out in the 1960s by Feynman and DeWitt and some others. Perturbatively quantized gravity is today widely believed to be an approximation to whatever is the correct theory.

So really the problem is not just to quantize gravity per se, you want to quantize it and get a theory that does not break down at high energies. Because energies are proportional to frequencies, physicists like to refer to high energies as “the ultraviolet” or just “the UV”. Therefore, the theory of quantum gravity that we look for is said to be “UV complete”.
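For orientation, here is a minimal sketch of the scale usually associated with this breakdown, the Planck energy. This is the standard textbook estimate, not specific to any one approach.

```python
import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34  # SI units
eV = 1.602e-19                               # joules per electron volt

# Planck energy, E_P = sqrt(hbar * c^5 / G): the energy at which the
# perturbative quantization of gravity is expected to stop making sense.
E_planck_J = math.sqrt(hbar * c**5 / G)
print(f"{E_planck_J / (1e9 * eV):.2e} GeV")  # about 1.2e19 GeV

# For comparison, the LHC collides protons at about 1.3e4 GeV,
# some fifteen orders of magnitude below the Planck energy.
```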

Now, let me go through the five most popular approaches to quantum gravity.

1. String Theory

The most widely known and still the most popular attempt to get a UV-complete theory of quantum gravity is string theory. The idea of string theory is that instead of talking about particles and quantizing them, you take strings and quantize those. Amazingly enough, this automatically has the consequence that the strings exchange a force which has the same properties as the gravitational force.

This was discovered in the 1970s and, at the time, it got physicists very excited. However, in the past decades several problems have appeared in string theory, and the patches applied to fix them have made the theory increasingly contrived. You can hear all about this in my earlier video. It has never been proved that string theory is indeed UV-complete.

2. Loop Quantum Gravity

Loop Quantum Gravity is often named as the biggest competitor of string theory, but this comparison is somewhat misleading. String theory is not just a theory for quantum gravity, it is also supposed to unify the other interactions. Loop Quantum Gravity on the other hand, is only about quantizing gravity.

It works by discretizing space in terms of a network, and then using integrals around small loops to describe the space, hence the name. In this network, the nodes represent volumes and the links between nodes the areas of the surfaces where the volumes meet.
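Just to illustrate the bookkeeping, and emphatically not the actual formalism, here is a toy sketch of such a network with made-up numbers; in the real theory the volumes and areas come from a discrete spectrum of operators.

```python
# Toy network, not an actual spin network: nodes carry volumes, links carry
# the areas of the surfaces where adjacent volumes meet. All numbers are
# made up, purely to illustrate the bookkeeping.
node_volume = {"A": 1.0, "B": 2.5, "C": 0.7}   # chunks of space

link_area = {
    ("A", "B"): 1.2,   # area of the surface shared by chunks A and B
    ("B", "C"): 0.8,
    ("A", "C"): 0.5,
}

total_volume = sum(node_volume.values())
total_shared_area = sum(link_area.values())
print(total_volume, total_shared_area)   # 4.2 2.5
```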

Loop Quantum Gravity is about as old as string theory. It solves the problem of combining general relativity and quantum mechanics into one consistent theory, but it has remained unclear exactly how one recovers general relativity in this approach.

3. Asymptotically Safe Gravity

Asymptotic Safety is an idea that goes back to a 1976 paper by Steven Weinberg. It says that a theory which seems to have problems at high energies when quantized naively, may not have a problem after all, it’s just that it’s more complicated to find out what happens at high energies than it seems. Asymptotically Safe Gravity applies the idea of asymptotic safety to gravity in particular.

This approach also solves the problem of quantum gravity. Its major problem is currently that it has not been proved that the theory one gets this way still makes sense as a quantum theory at high energies.

4. Causal Dynamical Triangulation

The problem with quantizing gravity comes from infinities that appear when particles interact at very short distances. This is why most approaches to quantum gravity rely on removing the short distances by using objects of finite extension. Loop Quantum Gravity works this way, and so does String Theory.

Causal Dynamical Triangulation also relies on removing short distances. It does so by approximating a curved space with triangles, or rather with their higher-dimensional counterparts. In contrast to the other approaches, though, where the finite extension is a postulated, new property of the underlying true nature of space, in Causal Dynamical Triangulation the finite size of the triangles is a mathematical aid, and one eventually takes the limit in which this size goes to zero.
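To give a flavor of how triangles can encode curvature, here is a toy two-dimensional example in the spirit of Regge calculus, on which the triangulation approaches build. It is only an illustration, not Causal Dynamical Triangulation itself.

```python
import math

def deficit_angle(num_equilateral_triangles):
    # In two dimensions, curvature sits at the vertices of a triangulation:
    # it is the "deficit angle", 2*pi minus the angles meeting at a vertex.
    return 2 * math.pi - num_equilateral_triangles * (math.pi / 3)

print(deficit_angle(6))  # 0.0  -> six triangles tile the flat plane
print(deficit_angle(5))  # > 0  -> positive curvature, like a sphere
print(deficit_angle(7))  # < 0  -> negative curvature, like a saddle

# Refining the triangulation with smaller and smaller triangles is the limit
# mentioned above, in which a smooth curved space is recovered.
```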

The major reason why many people have remained unconvinced of Causal Dynamical Triangulation is that it treats space and time differently, which Einstein taught us not to do.

5. Emergent Gravity

Emergent gravity is not one specific theory, but a class of approaches. These approaches have in common that gravity derives from the collective behavior of a large number of constituents, much like the laws of thermodynamics do. And much like for thermodynamics, in emergent gravity, one does not actually need to know all that much about the exact properties of these constituents to get the dynamical law.

If you think that gravity is really emergent, then quantizing gravity does not make sense. Because, if you think of the analogy to thermodynamics, you also do not obtain a theory for the structure of atoms by quantizing the equations for gases. Therefore, in emergent gravity one does not quantize gravity. One instead removes the inconsistency between gravity and quantum mechanics by saying that quantizing gravity is not the right thing to do.

Which one of these theories is the right one? No one knows. The problem is that it’s really, really hard to find experimental evidence for quantum gravity. But that it’s hard doesn’t mean impossible. I will tell you some other time how we might be able to experimentally test quantum gravity after all. So, stay tuned.

Wednesday, September 04, 2019

What’s up with LIGO?

The Nobel-Prize winning figure.
We don’t know exactly what it shows.
[Image Credits: LIGO]
Almost four years ago, on September 14, 2015, the LIGO collaboration detected gravitational waves for the first time. In 2017, this achievement was awarded the Nobel Prize. Also in that year, the two LIGO interferometers were joined by VIRGO. Since then, a total of three detectors have been on the lookout for space-time’s subtle motions.

By now, the LIGO/VIRGO collaboration has reported dozens of gravitational wave events: black hole mergers (like the first), neutron star mergers, and black hole-neutron star mergers. But not everyone is convinced the signals are really what the collaboration claims they are.

Already in 2017, a group of physicists around Andrew Jackson in Denmark reported difficulties when they tried to reproduce the signal reconstruction of the first event. In an interview dated November last year, Jackson maintained that the only signal they have been able to reproduce is the first. About the other supposed detections he said: “We can’t see any of those events when we do a blind analysis of the data. Coming from Denmark, I am tempted to say it’s a case of the emperor’s new gravitational waves.”

For most physicists, the 170817 neutron-star merger – the strongest signal LIGO has seen so-far – erased any worries raised by the Danish group’s claims. That’s because this event came with an electromagnetic counterpart that was seen by multiple telescopes, which would seem to demonstrate that LIGO indeed sees something of astrophysical origin and not terrestrial noise. But, as critics have correctly pointed out, the LIGO alert for this event came 40 minutes after NASA’s gamma-ray alert. For this reason, the event cannot be used as an independent confirmation of LIGO’s detection capacity. Furthermore, the interpretation of this signal as a neutron-star merger has also been criticized. And this criticism has been criticized for yet other reasons.

It further fueled the critics’ fire when Michael Brooks reported last year for New Scientist that, according to two members of the collaboration, the Nobel-prize winning figure of LIGO’s seminal detection was “not found using analysis algorithms” but partly done “by eye” and “hand-tuned for pedagogical purposes.” To this day, the journal that published the paper has refused to comment.

The LIGO collaboration has remained silent on the matter, except for issuing a statement according to which they have “full confidence” in their published results (surprise), and that we are to await further details. Glaciers are now moving faster than this collaboration.

In April this year, LIGO started the third observation run (O3) after an upgrade that increased the detection sensitivity by about 40% over the previous run. Many physicists hoped the new observations would bring clarity with more neutron-star events that have electromagnetic counterparts, but that hasn’t happened.

Since April, the collaboration has issued 33 alerts for new events, but so-far no electromagnetic counterparts have been seen. You can check the complete list for yourself here. Nine of the 33 events have meanwhile been downgraded because they were identified as likely of terrestrial origin, and have been retracted.

The number of retractions is fairly high partly because the collaboration is still coming to grips with the upgraded detector. This is new scientific territory and the researchers themselves are still learning how to best analyze and interpret the data. A further difficulty is that the alerts must go out quickly in order for telescopes to be swung around and pointed at the right location in the sky. This does not leave much time for careful analysis.

With independent confirmation that LIGO sees events of astrophysical origin still lacking, critics are having a good time. In a recent article for the German online magazine Heise, Alexander Unzicker – author of a book called “The Higgs Fake” – contemplates whether the first event was a blind injection, i.e., a fake signal. The three people on the blind injection team at the time say it wasn’t them, but Unzicker argues that, given our lack of knowledge about the collaboration’s internal proceedings, there might well have been other people able to inject a signal. (You can find an English translation here.)

In the third observation run, the collaboration has so-far seen one high-significance binary neutron star candidate (S190425z). But the associated electromagnetic signal for this event has not been found. This may be for various reasons. For example, the analysis of the signal revealed that the event must have been far away, about 4 times farther than the 2017 neutron-star event. This means that any electromagnetic signal would have been fainter by a factor of about 16. In addition, the location in the sky was rather uncertain. So, the electromagnetic signal was plausibly hard to detect.

More recently, on August 14th, the collaboration reported a neutron-star black hole merger. Again the electromagnetic counterpart is missing. In this case they were able to locate the origin to better precision. But they still estimate the source is about 7 times farther away than the 2017 neutron-star event, meaning it would have been fainter by a factor of about 50.
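The brightness estimates in the previous two paragraphs are nothing more than the inverse-square law; a minimal sketch:

```python
# Electromagnetic flux falls off with the square of the distance, so an
# otherwise identical source at d times the distance appears 1/d^2 as bright.
def faintness_factor(distance_ratio):
    return distance_ratio**2

print(faintness_factor(4))  # 16, the S190425z case relative to the 2017 event
print(faintness_factor(7))  # 49, i.e. "about 50", the August 14 case
```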

Still, it is somewhat perplexing that the signal wasn’t seen by any of the telescopes that looked for it. There may have been physical reasons at the source, for example that the neutron star was swallowed in one bite, in which case not much would have been emitted, or that the system was surrounded by dust that blocked the electromagnetic signal.

A second neutron star-black hole merger on August 17 was retracted.

And then there are the “glitches”.

LIGO’s “glitches” are detector events of unknown origin whose frequency spectrum does not look like the expected gravitational wave signals. I don’t know exactly how many of those the detector suffers from, but the way they are numbered, by a date and two digits, indicates between 10 and 100 a day. LIGO uses a citizen science project called “Gravity Spy” to identify glitches. There isn’t just one type of glitch; there are many different types of them, with names like “Koi fish,” “whistle,” or “blip.” In the figures below you see a few examples.


Examples for LIGO's detector glitches. [Image Source]

This gives me some headaches, folks. If you do not know why your detector detects something that does not look like what you expect, how can you trust it in the cases where it does see what you expect?

Here is what Andrew Jackson had to say on the matter:

Jackson: “The thing you can conclude if you use a template analysis is [...] that the results are consistent with a black hole merger. But in order to make the stronger statement that it really and truly is a black hole merger you have to rule out anything else that it could be.

“And the characteristic signal here is actually pretty generic. What do they find? They find something where the amplitude increases, where the frequency increases, and then everything dies down eventually. And that describes just about every catastrophic event you can imagine. You see, increasing amplitude, increasing frequency, and then it settles into some new state. So they really were obliged to rule out every terrestrial effect, including seismic effects, and the fact that there was an enormous lightning strike in Burkina Faso at exactly the same time [...]”
Interviewer: “Do you think that they failed to rule out all these other possibilities?”

Jackson: “Yes…”
If what Jackson said were correct, this would indeed be highly problematic. But I have not been able to think of any other event that looks remotely like a gravitational wave signal, even leaving aside the detector correlations. Contrary to what Jackson states, a typical catastrophic event does not have a frequency increase followed by a ring-down and sudden near-silence.

Think of an earthquake, for example. For the most part, earthquakes happen when stresses exceed a critical threshold. The signals don’t have a frequency build-up, and after the quake there’s a lot of rumbling, often followed by smaller quakes. Just look at the figure below, which shows the surface movement of a typical seismic event.
Example of typical earthquake signal. [Image Source]


It looks nothing like that of a gravitational wave signal.
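For comparison, here is a toy sketch of the generic shape of an inspiral signal: rising frequency and amplitude followed by a quick ring-down. The power laws are the leading-order textbook behavior and the numbers are made up, so this is not a LIGO template.

```python
import numpy as np

fs, t_c = 4096.0, 1.0                      # sample rate (Hz) and merger time (s)
t = np.arange(0.0, t_c - 1e-3, 1.0 / fs)   # time axis, stopping just before merger

# Leading-order inspiral: frequency grows like (t_c - t)^(-3/8),
# and the strain amplitude grows roughly like f^(2/3).
f = 30.0 * ((t_c - t) / t_c) ** (-3.0 / 8.0)
amp = (f / 30.0) ** (2.0 / 3.0)
phase = 2.0 * np.pi * np.cumsum(f) / fs
inspiral = amp * np.cos(phase)

# Crude ring-down: a damped oscillation that dies off within milliseconds.
t_r = np.arange(0.0, 0.05, 1.0 / fs)
ringdown = amp[-1] * np.exp(-t_r / 0.01) * np.cos(2.0 * np.pi * f[-1] * t_r)

signal = np.concatenate([inspiral, ringdown])
# The result sweeps up in amplitude and frequency and then falls nearly silent,
# quite unlike the long, irregular rumbling of the seismogram above.
```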

For this reason, I don’t share Jackson’s doubts over the origin of the signals that LIGO detects. However, the question whether there are any events of terrestrial origin with similar frequency characteristics arguably requires consideration beyond Sabine scratching her head for half an hour.

So, even though I do not share the concerns raised by the LIGO critics, I must say that I do find it peculiar indeed that there is so little discussion about this issue. A Nobel Prize was handed out, and yet we still do not have confirmation that LIGO’s signals are not of terrestrial origin. In which other discipline is it considered good scientific practice to discard unwelcome yet not understood data, as LIGO does with the glitches? Why do we still not know just exactly what was shown in the figure of the first paper? Where are the electromagnetic counterparts?

LIGO’s third observing run will continue until March 2020. It presently doesn’t look like it will bring the awaited clarity. I certainly hope that the collaboration will make somewhat more of an effort to erase the doubts that still linger around their supposed detections.

Wednesday, August 28, 2019

Solutions to the black hole information paradox

In the early 1970s, Stephen Hawking discovered that black holes can emit radiation. This radiation allows black holes to lose mass and, eventually, to entirely evaporate. This process seems to destroy all the information that is contained in the black hole and therefore contradicts what we know about the laws of nature. This contradiction is what we call the black hole information paradox.

After discovering this problem more than 40 years ago, Hawking spent the rest of his life trying to solve it. He passed away last year, but the problem is still alive and there is no resolution in sight.

Today, I want to tell you what solutions physicists have so-far proposed for the black hole information loss problem. If you want to know more about just what exactly is the problem, please read my previous blogpost.

There are hundreds of proposed solutions to the information loss problem, far too many to list here. But I want to tell you about the five most plausible ones.

1. Remnants.

The calculation that Hawking did to obtain the properties of the black hole radiation makes use of general relativity. But we know that general relativity is only approximately correct. It eventually has to be replaced by a more fundamental theory, which is quantum gravity. The effects of quantum gravity are not relevant near the horizon of large black holes, which is why the approximation that Hawking made is good. But it breaks down eventually, when the black hole has shrunk to a very small size. Then, the space-time curvature at the horizon becomes very strong and quantum gravity must be taken into account.

Now, if quantum gravity becomes important, we really do not know what will happen because we don’t have a theory for quantum gravity. In particular we have no reason to think that the black hole will entirely evaporate to begin with. This opens the possibility that a small remainder is left behind which just sits there forever. Such a black hole remnant could keep all the information about what formed the black hole, and no contradiction results.

2. Information comes out very late.

Instead of just stopping to evaporate when quantum gravity becomes relevant, the black hole could also start to leak information in that final phase. Some estimates indicate that this leakage would take a very long time, which is why this solution is also known as a “quasi-stable remnant”. However, it is not entirely clear just how long it would take. After all, we don’t have a theory of quantum gravity. This second option removes the contradiction for the same reason as the first.

3. Information comes out early.

The first two scenarios are very conservative in that they postulate new effects will appear only when we know that our theories break down. A more speculative idea is that quantum gravity plays a much larger role near the horizon and the radiation carries information all along, it’s just that Hawking’s calculation doesn’t capture it.

Many physicists prefer this solution over the first two for the following reason. Black holes do not only have a temperature, they also have an entropy, called the Bekenstein-Hawking entropy. This entropy is proportional to the area of the black hole. It is often interpreted as counting the number of possible states that the black hole geometry can have in a theory of quantum gravity.

If that is so, then the entropy must shrink when the black hole shrinks and this is not the case for the remnant and the quasi-stable remnant.

So, if you want to interpret the black hole entropy in terms of microscopic states, then the information must begin to come out early, when the black hole is still large. This solution is supported by the idea that we live in a holographic universe, which is currently popular, especially among string theorists.

4. Information is just lost.

Black hole evaporation, it seems, is irreversible, and that irreversibility is inconsistent with the dynamical law of quantum theory. But quantum theory does have its own irreversible process, which is the measurement. So, some physicists argue that we should just accept that black hole evaporation is irreversible and destroys information, not unlike quantum measurements do. This option is not particularly popular because it is hard to include an additional irreversible process into quantum theory without spoiling conservation laws.

5. Black holes don’t exist.

Finally, some physicists have tried to argue that black holes are never created in the first place, in which case no information can get lost in them. To make this work, one has to find a way to prevent a distribution of matter from collapsing to a size below its Schwarzschild radius. But since the formation of a black hole horizon can happen at arbitrarily small matter densities, this requires inventing some new physics that violates the equivalence principle, the key principle underlying Einstein’s theory of general relativity. This option is a logical possibility, but for most physicists it’s asking for too much.
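That the horizon can form at arbitrarily low densities follows from the Schwarzschild radius, r_s = 2GM/c², growing linearly with the mass while the enclosed volume grows with its cube. A minimal sketch:

```python
import math

G, c = 6.674e-11, 2.998e8   # SI units
M_sun = 1.989e30            # kg

def mean_density_at_horizon(M):
    # Average density of a mass M squeezed inside its own Schwarzschild radius.
    r_s = 2 * G * M / c**2
    return M / (4.0 / 3.0 * math.pi * r_s**3)

print(f"{mean_density_at_horizon(M_sun):.1e} kg/m^3")        # about 2e19 kg/m^3
print(f"{mean_density_at_horizon(1e9 * M_sun):.1e} kg/m^3")  # about 20, less than water

# The density scales as 1/M^2, so for a large enough mass a horizon forms
# at densities as low as you like, with no exotic compression required.
```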

Personally, I think that several of the proposed solutions are consistent; that includes options 1-3 above, and other proposals such as those by Horowitz and Maldacena, ‘t Hooft, or Maudlin. This means this is a problem which just cannot be solved by relying on mathematics alone.

Unfortunately, we cannot experimentally test what is happening when black holes evaporate because the temperature of the radiation is much, much too small to be measurable for the astrophysical black holes we know of. And so, I suspect we will be arguing about this for a long, long time.
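For the record, here is the standard estimate behind that last statement: the Hawking temperature, T = ħc³/(8πGMk_B), of a solar-mass black hole, compared with the cosmic microwave background.

```python
import math

G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23  # SI units
M_sun = 1.989e30  # kg

def hawking_temperature(M):
    # T = hbar * c^3 / (8 * pi * G * M * k_B); the heavier the hole, the colder.
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(f"{hawking_temperature(M_sun):.1e} K")  # about 6e-8 K

# The cosmic microwave background alone is about 2.7 K, tens of millions of
# times hotter, so the radiation of any known astrophysical black hole is swamped.
```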