Wednesday, October 30, 2019

The crisis in physics is not only about physics

[Image: downward spiral]
In the foundations of physics, we have not seen progress since the mid 1970s when the standard model of particle physics was completed. Ever since then, the theories we use to describe observations have remained unchanged. Sure, some aspects of these theories have only been experimentally confirmed later. The last to-be-confirmed particle was the Higgs boson, predicted in the 1960s and measured in 2012. But all shortcomings of these theories – the missing quantization of gravity, dark matter, the quantum measurement problem, and more – have been known for more than 80 years. And they are as unsolved today as they were then.

The major cause of this stagnation is that physics has changed, but physicists have not changed their methods. As physics has progressed, the foundations have become increasingly harder to probe by experiment. Technological advances have not kept pace, so the size and expense of experiments have not stayed manageable. This is why, in physics today, we have collaborations of thousands of people operating machines that cost billions of dollars.

With fewer experiments, serendipitous discoveries become increasingly unlikely. And lacking those discoveries, the technological progress that would be needed to keep experiments economically viable never materializes. It’s a vicious cycle: Costly experiments result in lack of progress. Lack of progress increases the costs of further experiments. This cycle must eventually lead into a dead end when experiments simply become too expensive to build. A $40 billion particle collider is such a dead end.

The only way to avoid being sucked into this vicious cycle is to choose carefully which hypotheses to put to the test. But physicists still operate by the “just look” idea as if this were the 19th century. They do not think about which hypotheses are promising because their education has not taught them to do so. Such self-reflection would require knowledge of the philosophy and sociology of science, and those are subjects physicists merely make dismissive jokes about. They believe they are too intelligent to have to think about what they are doing.

The consequence has been that experiments in the foundations of physics since the 1970s have only confirmed the already existing theories. None found evidence of anything beyond what we already know.

But theoretical physicists did not learn the lesson and still ignore the philosophy and sociology of science. I encounter this dismissive behavior personally pretty much every time I try to explain to a cosmologist or particle physicist that we need smarter ways to share information and make decisions in large, like-minded communities. If they react at all, they are insulted when I point out that social reinforcement – aka group-think – befalls us all unless we actively take measures to prevent it.

Instead of examining the way that they propose hypotheses and revising their methods, theoretical physicists have developed a habit of putting forward entirely baseless speculations. Over and over again I have heard them justifying their mindless production of mathematical fiction as “healthy speculation” – entirely ignoring that this type of speculation has demonstrably not worked for decades and continues to not work. There is nothing healthy about this. It’s sick science. And, embarrassingly enough, that’s plain to see for everyone who does not work in the field.

This behavior is based on the hopelessly naïve, not to mention ill-informed, belief that science always progresses somehow, and that sooner or later someone will certainly stumble over something interesting. But even if that happened – even if someone found a piece of the puzzle – at this point we wouldn’t notice, because today any drop of genuine theoretical progress would drown in an ocean of “healthy speculation”.

And so, what we have here in the foundations of physics is a plain failure of the scientific method. All these wrong predictions should have taught physicists that just because they can write down equations for something does not mean this math is a scientifically promising hypothesis. String theory, supersymmetry, multiverses. There’s math for it, alright. Pretty math, even. But that doesn’t mean this math describes reality.

Physicists need new methods. Better methods. Methods that are appropriate to the present century.

And please spare me the complaints that I supposedly do not have anything better to suggest, because that is a false accusation. I have said many times that looking at the history of physics teaches us that resolving inconsistencies has been a reliable path to breakthroughs, so that’s what we should focus on. I may be on the wrong track with this, of course. But for all I can tell at this moment in history I am the only physicist who has at least come up with an idea for what to do.

Why don’t physicists have a hard look at their history and learn from their failure? Because the existing scientific system does not encourage learning. Physicists today can happily make a career by writing papers about things no one has ever observed, and never will observe. This continues to go on because there is nothing and no one that can stop it.

You may want to put this down as a minor worry because – $40 billion collider aside – who really cares about the foundations of physics? Maybe all these string theorists have been wasting tax-money for decades, alright, but in the grand scheme of things it’s not all that much money. I grant you that much. Theorists are not expensive.

But even if you don’t care what’s up with strings and multiverses, you should worry about what is happening here. The foundations of physics are the canary in the coal mine. It’s an old discipline and the first to run into this problem. But the same problem will sooner or later surface in other disciplines if experiments become increasingly expensive and recruit large fractions of the scientific community.

Indeed, we see this beginning to happen in medicine and in ecology, too.

Small-scale drug trials have pretty much run their course. These are good only to find in-your-face correlations that are universal across most people. Medicine, therefore, will increasingly have to rely on data collected from large groups over long periods of time to arrive at increasingly personalized diagnoses and prescriptions. The studies necessary for this are extremely costly. They must be chosen carefully, for not many of them can be done. The study of ecosystems faces a similar challenge, where small, isolated investigations are about to reach their limits.

How physicists handle their crisis will set an example for other disciplines. So watch this space.

Tuesday, October 22, 2019

What is the quantum measurement problem?

Today, I want to explain just what the problem is with making measurements according to quantum theory.

Quantum mechanics tells us that matter is not made of particles. It is made of elementary constituents that are often called particles, but are really described by wave-functions. A wave-function is a mathematical object which is neither a particle nor a wave, but it can have properties of both.

The curious thing about the wave-function is that it does not itself correspond to something which we can observe. Instead, it is only a tool that we use to calculate what we do observe. To make such a calculation, quantum theory uses the following postulates.

First, as long as you do not measure the wave-function, it changes according to the Schrödinger equation. The Schrödinger equation is different for different particles. But its most important properties are independent of the particle.

One of the important properties of the Schrödinger equation is that it guarantees that the probabilities computed from the wave-function will always add up to one, as they should. Another important property is that the change in time which one gets from the Schrödinger equation is reversible.

But for our purposes the most important property of the Schrödinger equation is that it is linear. This means if you have two solutions to this equation, then any sum of the two solutions, with arbitrary pre-factors, will also be a solution.
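In symbols (a minimal sketch that suppresses all details of the specific system), the Schrödinger equation reads
\[
i\hbar\,\frac{\partial}{\partial t}\,\Psi \;=\; \hat H\,\Psi ,
\]
and because the Hamiltonian operator $\hat H$ acts linearly, if $\Psi_1$ and $\Psi_2$ are solutions then so is $\alpha\,\Psi_1 + \beta\,\Psi_2$ for arbitrary complex numbers $\alpha$ and $\beta$.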

The second postulate of quantum mechanics tells you how to calculate from the wave-function the probability of getting a specific measurement outcome. This is called the “Born rule,” named after Max Born who came up with it. The Born rule says that the probability of a measurement outcome is the absolute square of that part of the wave-function which describes this outcome. To do this calculation, you also need to know how to describe what you are observing – say, the momentum of a particle. For this, you need further postulates, but these do not need to concern us today.
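As a minimal concrete example (a generic two-state system, not any particular experiment), consider a wave-function
\[
|\psi\rangle \;=\; \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle , \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
The Born rule then says that the probability of measuring “up” is $|\alpha|^2$ and the probability of measuring “down” is $|\beta|^2$.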

And third, there is the measurement postulate, sometimes called the “update” or “collapse” of the wave-function. This postulate says that after you have made a measurement, the probability of what you have measured suddenly changes to 1. This, I have to emphasize, is a necessary requirement to describe what we observe. I cannot stress this enough because a lot of physicists seem to find it hard to comprehend. If you do not update the wave-function after measurement, then the wave-function does not describe what we observe. We do not, ever, observe a particle that is 50% measured.
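Continuing the two-state example from above: if your measurement yields “up”, the update postulate replaces the wave-function by hand,
\[
\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle \;\longrightarrow\; |{\uparrow}\rangle ,
\]
so that repeating the same measurement immediately afterwards gives “up” with probability 1, in agreement with what we observe.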

The quantum measurement problem is now that the update of the wave-function is incompatible with the Schrödinger equation. The Schrödinger equation, as I already said, is linear. That means if you have two different states of a system, both of which are allowed according to the Schrödinger equation, then the sum of the two states is also an allowed solution. The best known example of this is Schrödinger’s cat, which is a state that is a sum of both dead and alive. Such a sum is what physicists call a superposition.

We do, however, only observe cats that are either dead or alive. This is why we need the measurement postulate. Without it, quantum mechanics would not be compatible with observation.

The measurement problem, I have to emphasize, is not solved by decoherence, even though many physicists seem to believe this to be so. Decoherence is a process that happens if a quantum superposition interacts with its environment. The environment may simply be air or, even in vacuum, you still have the radiation of the cosmic microwave background. There is always some environment. This interaction with the environment eventually destroys the ability of quantum states to display typical quantum behavior, like the ability of particles to create interference patterns. The larger the object, the more quickly its quantum behavior gets destroyed.

Decoherence tells you that if you average over the states of the environment, because you do not know exactly what they do, then you no longer have a quantum superposition. Instead, you have a distribution of probabilities. This is what physicists call a “mixed state”. This does not solve the measurement problem because after measurement, you still have to update the probability of what you have observed to 100%. Decoherence does not tell you to do that.
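To make the difference concrete, here is a minimal numerical sketch (a generic two-state toy model, not any specific physical system) of a superposition, the mixed state that decoherence leaves behind, and the update that only the measurement postulate provides:

```python
import numpy as np

# Two-state toy model: |psi> = (|0> + |1>)/sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# Pure-state density matrix: the off-diagonal entries encode the superposition.
rho_pure = np.outer(psi, psi.conj())
# [[0.5, 0.5],
#  [0.5, 0.5]]

# Decoherence (averaging over an unknown environment) suppresses the
# off-diagonal entries, leaving a mixed state: a mere table of probabilities.
rho_mixed = np.diag(np.diag(rho_pure))
# [[0.5, 0.0],
#  [0.0, 0.5]]

# But a 50/50 mixture is still not a single outcome. Only the measurement
# postulate, applied by hand after observing (say) outcome 0, gives:
rho_updated = np.diag([1.0, 0.0])
```

The step from rho_mixed to rho_updated is exactly the update that decoherence does not give you.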

Why is the measurement postulate problematic? The trouble with the measurement postulate is that the behavior of a large thing, like a detector, should follow from the behavior of the small things that it is made up of. But that is not the case. So that’s the issue. The measurement postulate is incompatible with reductionism. It makes it necessary that the formulation of quantum mechanics explicitly refers to macroscopic objects like detectors, when really what these large things are doing should follow from the theory.

A lot of people seem to think that you can solve this problem by way of re-interpreting the wave-function as merely encoding the knowledge that an observer has about the state of the system. This is what is called a Copenhagen or “neo-Copenhagen” interpretation. (And let me warn you that this is not the same as a Psi-epistemic interpretation, in case you have heard that word.)

Now, if you believe that the wave-function merely describes the knowledge an observer has, then you may say, of course it needs to be updated if the observer makes a measurement. Yes, that’s very reasonable. But of course this also refers to macroscopic concepts like observers and their knowledge. And if you want to use such concepts in the postulates of your theory, you are implicitly assuming that the behavior of observers or detectors does not follow from the behavior of the particles that make up the observers or detectors. This requires that you explain when and how this distinction is to be made, and none of the existing neo-Copenhagen approaches explain this.

I already told you in an earlier blogpost why the many worlds interpretation does not solve the measurement problem. To briefly summarize it, it’s because in the many worlds interpretation one also has to use a postulate about what a detector does.

What does it take to actually solve the measurement problem? I will get to this, so stay tuned.

Wednesday, October 16, 2019

Dark matter nightmare: What if we are just using the wrong equations?

Dark matter filaments. Computer simulation.
[Image: John Dubinski (U of Toronto)]
Einstein’s theory of general relativity is an extremely well-confirmed theory. Countless experiments have shown that its predictions for our solar system agree with observation to utmost accuracy. But when we point our telescopes at larger distances, something is amiss. Galaxies rotate faster than expected. Galaxies in clusters move faster than they should. The expansion of the universe is speeding up.

General relativity does not tell us what is going on.

Physicists have attributed these puzzling observations to two newly postulated substances: Dark matter and dark energy. These two names are merely placeholders in Einstein’s original equations; their sole purpose is to remove the mismatch between prediction and observation.

This is not a new story. We have had evidence for dark matter since the 1930s, and dark energy was on the radar already in the 1990s. Both have since occupied thousands of physicists with attempts to explain just what we are dealing with: Is dark matter a particle, and if so what type, and how can we measure it? If it is not a particle, then what do we change about general relativity to fix the discrepancy with measurements? Is dark energy maybe a new type of field? Is it, too, made of some particle? Does dark matter have something to do with dark energy or are the two unrelated?

To answer these questions, hundreds of hypotheses have been proposed, conferences have been held, careers have been made – but here we are, in 2019, and we still don’t know.

Bad enough, you may say, but the thing that really keeps me up at night is this: Maybe all these thousands of physicists are simply using the wrong equations. I don’t mean that general relativity needs to be modified. I mean that we incorrectly use the equations of general relativity to begin with.

The issue is this. General relativity relates the curvature of space and time to the sources of matter and energy. Put in a distribution of matter and energy at any one moment of time, and the equations tell you what space and time do in response, and how the matter must move according to this response.

But general relativity is a non-linear theory. This means, loosely speaking, that gravity gravitates. More concretely, it means that if you have two solutions to the equations and you take their sum, this sum will not also be a solution.
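Schematically (a sketch only, suppressing all details), Einstein’s equations relate the Einstein tensor, which depends non-linearly on the metric $g$, to the stress-energy of matter:
\[
G_{\mu\nu}[g] \;=\; \frac{8\pi G}{c^4}\,T_{\mu\nu} ,
\]
and because $G_{\mu\nu}$ is non-linear in $g$, in general $G_{\mu\nu}[g_1+g_2] \neq G_{\mu\nu}[g_1] + G_{\mu\nu}[g_2]$.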

Now, what we do when we want to explain what a galaxy does, or a galaxy cluster, or even the whole universe, is not to plug the matter and energy of every single planet and star into the equations. This would be computationally unfeasible. Instead, we use an average of matter and energy, and use that as the source for gravity.

Needless to say, taking an average on one side of the equation requires that you also take an average on the other side. But since the gravitational part is non-linear, this will not give you the same equations that we use for the solar system: The average of a function of a variable is not the same as the function of the average of the variable. We know it’s not. But whenever we use general relativity on large scales, we assume that they are the same.
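As a toy numerical illustration of this point (plain statistics, nothing specific to general relativity), take the non-linear function f(x) = x² of a fluctuating quantity x:

```python
import numpy as np

# Averaging does not commute with a non-linear function:
# for f(x) = x^2, the mean of f(x) is not f applied to the mean of x.
rng = np.random.default_rng(1)
x = rng.normal(loc=1.0, scale=0.5, size=1_000_000)  # a fluctuating quantity

mean_of_f = np.mean(x**2)    # <f(x)>, comes out near 1.25
f_of_mean = np.mean(x)**2    # f(<x>), comes out near 1.00

print(mean_of_f, f_of_mean)
# The difference (here just the variance of x) is the kind of correction
# term that gets dropped when only one side of the equations is averaged.
```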

So, we know that strictly speaking the equations we use are wrong. The big question is, then, just how wrong are they?

Nosy students who ask this question are usually told these equations are not very wrong and are good to use. The argument goes that the difference between the equation we use and the equation we should use is negligible because gravity is weak in all these cases.

But if you look at the literature somewhat closer, then this argument has been questioned. And these questions have been questioned. And the questioning questions have been questioned. And the debate has remained unsettled until today.

That it is difficult to average non-linear equations is of course not a problem specific to cosmology. It’s a difficulty that condensed matter physicists have to deal with all the time, and it’s a major headache also for climate scientists. These scientists have a variety of techniques to derive the correct equations, but unfortunately the known methods do not easily carry over to general relativity because they do not respect the symmetries of Einstein’s theory.

It’s admittedly an unsexy research topic. It’s technical and tedious and most physicists ignore it. And so, while there are thousands of physicists who simply assume that the correction-terms from averaging are negligible, there are merely two dozen or so people trying to make sure that this assumption is actually correct.

Given how much brain-power physicists have spent on trying to figure out what dark matter and dark energy are, I think it would be a good idea to definitively settle the question of whether they are anything at all. At the very least, I would sleep better.

Further reading: Does the growth of structure affect our dynamical models of the universe? The averaging, backreaction and fitting problems in cosmology, by Chris Clarkson, George Ellis, Julien Larena, and Obinna Umeh. Rept. Prog. Phys. 74 (2011) 112901, arXiv:1109.2314 [astro-ph.CO].

Monday, October 07, 2019

What does the future hold for particle physics?

In my new video, I talk about the reason why the Large Hadron Collider, LHC for short, has not found fundamentally new particles besides the Higgs boson, and what this means for the future of particle physics. Below you find a transcript with references.


Before the LHC turned on, particle physicists had high hopes it would find something new besides the Higgs boson, something that would go beyond the standard model of particle physics. There was a lot of talk about the particles that supposedly make up dark matter, which the collider might produce. Many physicists also expected it to find the first of a class of entirely new particles that were predicted based on a hypothesis known as supersymmetry. Others talked about dark energy, additional dimensions of space, string balls, black holes, time travel, making contact to parallel universes or “unparticles”. That’s particles which aren’t particles. So, clearly, some wild ideas were in the air.

To illustrate the situation before the LHC began taking data, let me quote a few articles from back then.

Here is Valerie Jamieson writing for New Scientist in 2008:
“The Higgs and supersymmetry are on firm theoretical footing. Some theorists speculate about more outlandish scenarios for the LHC, including the production of extra dimensions, mini black holes, new forces, and particles smaller than quarks and electrons. A test for time travel has also been proposed.”
Or, here is Ian Sample for the Guardian, also in 2008:
“Scientists have some pretty good hunches about what the machine might find, from creating never-seen-before particles to discovering hidden dimensions and dark matter, the mysterious substance that makes up 25% of the universe.”
Paul Langacker in 2010, writing for the APS:
“Theorists have predicted that spectacular signals of supersymmetry should be visible at the LHC.”
Michael Dine for Physics Today in 2007:
“The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.”
The Telegraph in 2010:
“[The LHC] could answer the question of what causes mass, or even surprise its creators by revealing the existence of a fifth, sixth or seventh secret dimension of time and space.”
A final one. Here is Steve Giddings writing in 2010 for phys.org:
“LHC collisions might produce dark-matter particles... The collider might also shed light on the more predominant “dark energy,”... the LHC may reveal extra dimensions of space... if these extra dimensions are configured in certain ways, the LHC could produce microscopic black holes... Supersymmetry could be discovered by the LHC...”
The Large Hadron Collider has been running since 2010. It has found the Higgs boson. But why didn’t it find any of the other things?

This question is surprisingly easy to answer. There was never a good reason to expect any of these things in the first place. The more difficult question is why so many particle physicists thought those were reasonable expectations, and why not a single one of them has told us what they have learned from their failed predictions.

To see what happened here, it is useful to look at the difference between the prediction of the Higgs-boson and the other speculations. The standard model without the Higgs does not work properly. It becomes mathematically inconsistent at energies that the LHC is able to reach. Concretely, without the Higgs, the standard model predicts probabilities larger than one, which makes no sense.
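For readers who want the standard back-of-the-envelope version of this statement (a sketch, not the actual calculation): without the Higgs, the amplitude for the scattering of longitudinally polarized W-bosons grows with the collision energy $E$,
\[
\mathcal{M}(W_L W_L \to W_L W_L) \;\sim\; \frac{E^2}{v^2} , \qquad v \simeq 246\ \text{GeV},
\]
so the probabilities calculated from it exceed one once $E$ reaches roughly a TeV, which is well within the reach of the LHC. The contribution from Higgs-exchange cancels this growth.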

We therefore knew, before the LHC turned on, that something new had to happen. It could have been something else besides the Higgs. The Higgs was one way to fix the problem with the standard model, but there are other ways. However, the Higgs turned out to be right.

All other proposed ideas, extra dimensions, supersymmetry, time-travel, and so on, are unnecessary. These theories have been constructed so that they are compatible with all existing observations. But they are not necessary to solve any problem with the standard model. They are basically wishful thinking.

The reason that many particle physicists believed in these speculations is that they mistakenly thought the standard model has another problem which the existence of the Higgs would not fix. I am afraid that many of them still believe this. This supposed problem is that the standard model is not “technically natural”.

This means the standard model contains one number that is small, but there is no explanation for why it is small. This number is the mass of the Higgs-boson divided by the Planck mass, which happens to be about 10⁻¹⁵. The standard model works just fine with that number and it fits the data. But a small number like this, without explanation, is ugly and particle physicists didn’t want to believe nature could be that ugly.

Well, now they know that nature doesn’t care what physicists want it to be like.

What does this mean for the future of particle physics? This argument from “technical naturalness” was the only reason that physicists had to think that the standard model is incomplete and something to complete it must appear at LHC energies. Now that it is clear this argument did not work, there is no reason why a next larger collider should see anything new either. The standard model runs into mathematical trouble again at energies about a billion times higher than what a next larger collider could test. At the moment, therefore, we have no good reason to build a larger particle collider.

But particle physics is not only collider physics. And so, it seems likely to me, that research will shift to other areas of physics. A shift that has been going on for two decades already, and will probably become more pronounced now, is the move to astrophysics, in particular the study of dark matter and dark energy and also, to some extent, the early universe.

The other shift that we are likely to see is a move away from high-energy particle physics and towards high-precision measurements at lower energies, or to table-top experiments probing the quantum behavior of many-particle systems, where we still have much to learn.

Wednesday, October 02, 2019

Has Reductionism Run its Course?

For more than 2000 years, ever since Democritus’ first musings about atoms, reductionism has driven scientific inquiry. The idea is simple enough: Things are made of smaller things, and if you know what the small things do, you learn what the large things do. Simple – and stunningly successful.

After 2000 years of taking things apart into smaller things, we have learned that all matter is made of molecules, and that molecules are made of atoms. Democritus originally coined the word “atom” to refer to indivisible, elementary units of matter. But what we have come to call “atoms”, we now know, are made of even smaller particles. And those smaller particles are yet again made of even smaller particles.

[Image: © Sabine Hossenfelder]
The smallest constituents of matter, for all we currently know, are the 25 particles which physicists collect in the standard model of particle physics. Are these particles made up of yet another set of smaller particles, strings, or other things?

It is certainly possible that the particles of the standard model are not the ultimate constituents of matter. But we presently have no particular reason to think they have a substructure. And this raises the question whether attempting to look even closer into the structure of matter is a promising research direction – right here, right now.

It is a question that every researcher in the foundations of physics will be asking themselves, now that the Large Hadron Collider has confirmed the standard model, but found nothing beyond that.

20 years ago, it seemed clear to me that probing physical processes at ever shorter distances is the most reliable way to better understand how the universe works. And since it takes high energies to resolve short distances, this means that slamming particles together at high energies is the route forward. In other words, if you want to know more, you build bigger particle colliders.

This is also, unsurprisingly, what most particle physicists are convinced of. Going to higher energies, so their story goes, is the most reliable way to search for something fundamentally new. This is, in a nutshell, particle physicists’ major argument in favor of building a new particle collider, one even larger than the presently operating Large Hadron Collider.

But this simple story is too simple.

The idea that reductionism means things are made of smaller things is what philosophers more specifically call “methodological reductionism”. It’s a statement about the properties of stuff. But there is another type of reductionism, “theory reductionism”, which instead refers to the relation between theories. One theory can be “reduced” to another one, if the former can be derived from the latter.

Now, the examples of reductionism that particle physicists like to put forward are the cases where both types of reductionism coincide: Atomic physics explains chemistry. Statistical mechanics explains the laws of thermodynamics. The quark model explains regularities in proton collisions. And so on.

But not all cases of successful theory reduction have also been cases of methodological reduction. Take Maxwell’s unification of the electric and magnetic force. From Maxwell’s theory you can derive a whole bunch of equations, such as the Coulomb law and Faraday’s law, that people used before Maxwell explained where they come from. Electromagnetism is therefore clearly a case of theory reduction, but it did not come with a methodological reduction.

Another well-known exception is Einstein’s theory of General Relativity. General Relativity can be used in more situations than Newton’s theory of gravity. But it is not the physics on short distances that reveals the differences between the two theories. Instead, it is the behavior of bodies at high relative speeds and in strong gravitational fields that Newtonian gravity cannot cope with.

Another example that belongs on this list is quantum mechanics. Quantum mechanics reproduces classical mechanics in suitable approximations. It is not, however, a theory about small constituents of larger things. Yes, quantum mechanics is often portrayed as a theory for microscopic scales, but, no, this is not correct. Quantum mechanics is really a theory for all scales, large to small. We have observed quantum effects over distances exceeding 100 km and for objects weighing as “much” as a nanogram, composed of more than 10¹³ atoms. It’s just that quantum effects on large scales are difficult to create and observe.

Finally, I would like to mention Noether’s theorem, according to which symmetries give rise to conservation laws. This example is different from the previous ones in that Noether’s theorem was not applied to any theory in particular. But it has resulted in a more fundamental understanding of natural law, and therefore I think it deserves a place on the list.
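For the mathematically inclined reader, here is a minimal sketch of how this works in the simplest case (one degree of freedom $q$ and a Lagrangian $L(q,\dot q)$ with no explicit time-dependence): if $L$ is invariant under the continuous transformation $q \to q + \epsilon\,K(q)$, then the quantity
\[
Q \;=\; \frac{\partial L}{\partial \dot q}\,K(q)
\]
is conserved along solutions of the equations of motion. Spatial translation invariance ($K = 1$), for example, gives conservation of the momentum $p = \partial L/\partial \dot q$.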

In summary, history does not support particle physicists’ belief that a deeper understanding of natural law will most likely come from studying shorter distances. On the very contrary, I have begun to worry that physicists’ confidence in methodological reductionism stands in the way of progress. That’s because it suggests we ask certain questions instead of others. And those may just be the wrong questions to ask.

If you believe in methodological reductionism, for example, you may ask what dark energy is made of. But maybe dark energy is not made of anything. Instead, dark energy may be an artifact of our difficulty averaging non-linear equations.

It’s similar with dark matter. The methodological reductionist will ask for a microscopic theory and look for a particle that dark matter is made of. Yet, maybe dark matter is really a phenomenon associated with our misunderstanding of space-time on long distances.

Maybe the biggest problem that methodological reductionism causes lies in the area of quantum gravity, that is, our attempt to resolve the inconsistency between quantum theory and general relativity. Pretty much all existing approaches – string theory, loop quantum gravity, causal dynamical triangulation (check out my video for more) – assume that methodological reductionism is the answer. Therefore, they rely on new hypotheses for short-distance physics. But maybe that’s the wrong way to tackle the problem. The root of our problem may instead be that quantum theory itself must be replaced by a more fundamental theory, one that explains how quantization works in the first place.

Approaches based on methodological reductionism – like grand unified forces, supersymmetry, string theory, preon models, or technicolor – have failed for the past 30 years. This does not mean that there is nothing more to find at short distances. But it does strongly suggest that the next step forward will be a case of theory reduction that does not rely on taking things apart into smaller things.