Monday, April 22, 2019

Comments on “Quantum Gravity in the Lab” in Scientific American

The April issue of Scientific American has an article by Tim Folger titled “Quantum Gravity in the Lab”. It is mainly about the experiment by the Aspelmeyer group in Vienna, which I wrote about three years ago here. Folger also briefly mentions the two other proposals by Bose et al and Marletto and Vedral. (For context and references see my recent blogpost about the possibility to experimentally test quantum gravity, or this 2016 summary.)

I am happy that experimental tests for quantum gravity finally receive some media coverage. I worked on the theory-side of this topic for more than a decade, and I am still perplexed by how little attention the possibility to measure quantum gravitational effects gets, both in the media and in the community. Even a lot of physicists think this research area doesn’t exist.

I find this perplexing because if you think one cannot test quantum gravity, then developing a theory for it is not science, hence it should not be pursued in physics departments. So we have a situation in which a lot of physicists think it is fine to spend money on developing a theory they believe is not testable, or whose testability they believe is not worth studying. Excuse me for not being able to make sense of this. Freeman Dyson at least is consistent in that he believes quantum gravity is neither testable nor worth the time.

In any case, the SciAm piece contains some bummers that I want to sort out in the hope of preventing them from spreading.

First, the “In Brief” box says:
“Physicists are hoping that by making extremely precise measurements of gravity in small-scale set-ups – experiments that will fit onto a tabletop in a laboratory – they can detect effects from the intersection of gravity and quantum theory.”
But not everything that is at the intersection of “gravity” and “quantum” is actually quantum gravity. A typical example is black hole evaporation. It is caused by quantum effects in a gravitational field, but the gravitational field itself does not need to have quantum properties for the effect to occur. Black hole evaporation, therefore, is not quantum gravitational in origin.

The same goes, eg, for this lovely experiment that measured the quantization of momenta of neutrons in the gravitational potential. Again this is clearly a quantum effect that arises from the gravitational interaction, so it is at “the intersection of gravity and quantum theory.” But since the quantum properties of gravity itself do not play a role for the neutron-experiment, it does not test quantum gravity.

A more helpful statement would therefore be “experiments that can probe the quantum behavior of gravity”. Or, since gravity is really caused by the curvature of space-time, one could also write “experiments that can probe the quantum properties of space and time.”

Another misleading statement in the “In Brief” box is:
“The experiments aim to show whether gravity becomes quantized – that is, divisible into discrete bits – on extremely tiny scales.”
This is either badly phrased, or wrong, or both. I don’t know what it means for gravity to be divisible into discrete bits. The best interpretation I can think of is that the sentence refers to gravitons, particles which should be associated with a quantized gravitational field. The experiments that the article describes, however, do not attempt to measure gravitons.

In the text we further find the sentence:
“A quantum space-time… would become coarse-grained, like a digital photograph that becomes pixelated when magnified.”
But we have no reason to think there is any pixelation going on for space-time. Moreover, the phrase “magnified” refers to distance scales. Quantum gravity, however, does not become relevant just below a certain distance. Indeed, if it became relevant below a certain distance, that would be in conflict with the symmetries of special relativity; this distance-dependence is hence a highly problematic conjecture.

What happens in the known approaches to quantum gravity instead is that quantum uncertainties increase when space-time curvature becomes large. It is not so much a pixelation that is going on, but an inevitable blurring that stems from the quantum fluctuations of space and time.

And then there is this quote from Lajos Diosi:
“[We] know for sure that there will be a total scrambling of the spacetime continuity if you go down to the Planck scale.”
No one knows “for sure” what happens at the Planck scale, so please don’t take this at face value.

Besides this, the article is well worth reading.

If you want to get a sense of what other ways there are to experimentally test quantum gravity (or don’t have a subscription) read my 2017 Nautilus essay “What Quantum Gravity Needs is more Experiment.”

Monday, April 15, 2019

How Heroes Hurt Science

Einstein, Superhero
Tell a story. That’s the number one advice from and to science communicators, throughout centuries and all over the globe.

We can recite the seven archetypes forward and backward, will call at least three people hoping they disagree with each other, ask open-ended questions to hear what went on backstage, and trade around the always same anecdotes: Wilson checking the telescope for pigeon shit (later executing the poor birds), Guth drawing a box around a key equation of inflation, Geim and Novoselov making graphene with sticky tape, Feynman’s bongos, Einstein’s compass, Newton’s apple, Archimedes jumping out of the tub yelling “Eureka”.

Raise your hand if one of those was news for you. You there, the only one who raised a hand, let me guess: You didn’t know the fate of the pigeons. Now you know. And will read about it at least 20 more times from here on. Mea culpa.  

Sure, stories make science news relatable, entertaining, and, most of all, sellable. They also make them longer. Personally I care very little about people-tales. I know I’m a heartless fuck. But really I would rather just get the research briefing without Peter’s wife and Mary’s move and whatever happened that day in the elevator. By paragraph four I’ll confuse the two guys with Asian names and have forgotten why I was reading your piece to begin with. Help!

Then, maybe, that’s just me. Certainly, if it sells, it means someone buys it. Or at least clicks on an ad every now and then.

I used to think the story-telling is all right, just a spoonful of sugar to make the science news go down. Recently though, I worry such hero-tales are not only not communicating how science works, they are actually standing in the way of progress.

Here is how science really works. At any given time, we have a pool of available knowledge and we have a group of researchers tasked with expanding this knowledge. They use the existing knowledge to either build experiments and collect new data, or to draw new conclusions. If the newly created knowledge results in a better understanding of nature, we speak of a breakthrough. This information will from thereon be available to other researchers, who can build on it, and so on.

What does it take to make a breakthrough? It takes people who have access to the relevant knowledge, have the education to comprehend that knowledge, and bring the skill and motivation to make a contribution to science. Intelligence is one of the key factors that allows a scientist to draw conclusions from existing knowledge, but you can’t draw conclusions from information you don’t have. So, besides having the brain, you also need to have access to knowledge and must decide what to pay attention to.

Science tends to be well-populated by intelligent people, which means that there is always some number of people on the research-front banging their head on the same problem. Who among these smart folks is first to make a breakthrough then often depends on timing and luck.

Evidence for this can be found in the published literature, where new insights are often put forward almost simultaneously by different people.

Like Newton and Leibniz developing calculus in parallel. Like Schrödinger and Heisenberg developing two different routes to quantum mechanics at the same time. Like Bardeen, Cooper, and Schrieffer explaining superconductivity just when Bogoliubov also did. Like the Englert-Brout-Higgs-Guralnik-Hagen-Kibble mechanism, invented three times within a matter of years.

These people had access to the same information, and they had the required processing power, so they were able to make the next step. If they hadn’t done it then and there, someone else would have.  

And then there are the rediscoveries. Like Wikipedia just told me that it’s all been said before: “The concept of multiple discovery opposes a traditional view—the `heroic theory’ of invention and discovery.”

Okay, so I am terribly unoriginal, snort. But yeah, I am opposing the traditional view. And I am opposing it for a reason. Focusing on individual achievement, while forgetting about the scientific infrastructure that enabled it, has downsides.

One downside of our hero-stories is that we fail to acknowledge the big role privilege still plays for access to higher education. Too many people on the planet will never contribute to science, not because they don’t have the intellectual ability, but because they do not have the opportunity to acquire the necessary knowledge.

The other downside is that we underestimate the relevance of communication and information-sharing in scientific communities. If you believe that genius alone will do, then you are likely to believe it doesn’t matter how smart people arrange their interaction. If you, on the other hand, take into account that even big-brained scientists must somehow make decisions about what information to have a look at, you understand that it matters a lot how scientists organize their work-life.

I think that currently many scientists, especially in the foundations of physics, fail to pay attention to how they exchange and gather information, processes that can easily be skewed by social biases. What’s a scientist, after all? A scientist is someone who collects information, chews on it, and outputs new information. But, as they say, garbage in, garbage out.

The major hurdle on the way to progress is presently not a shortage of smart people. It’s smart people who don’t want to realize that they are but wheels in the machinery.

Saturday, April 13, 2019

The LHC has broken physics, according to this philosopher

The Large Hadron Collider (LHC) has not found any new, fundamental particles besides the Higgs-boson. This is a slap in the face of thousands of particle physicists who were confident the mega-collider should see more: Other new particles, additional dimensions of space, tiny black holes, dark matter, new symmetries, or something else entirely. None of these fancy things have shown up.

The reason that particle physicists were confident the LHC should see more than the Higgs was that their best current theory – the Standard Model of particle physics – is not “natural.”

The argument roughly goes like this: The Higgs-boson is different from all the other particles in the standard model. The other particles have either spin 1 or spin ½, but the Higgs has spin 0. Because of this, quantum fluctuations make a much larger contribution to the mass of the Higgs. This large contribution is ugly, therefore particle physicists think that once collision energies are high enough to create the Higgs, something else must appear along with the Higgs so that the theory is beautiful again.
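
For readers who like to see the formula behind the words: schematically, the quantum corrections shift the squared Higgs mass by an amount that grows with the square of the scale Λ up to which the Standard Model is assumed to hold. A textbook-style sketch (the exact prefactor and its sign depend on which particles run in the quantum loops, so take this as an illustration, not a precise statement):

$$ m_H^2 \;\approx\; m_{\rm bare}^2 \;+\; \frac{c}{16\pi^2}\,\Lambda^2\,, \qquad c = \mathcal{O}(1). $$

For Λ much above the electroweak scale, the bare mass then has to cancel the correction to many decimal places to leave the observed 125 GeV, and it is this cancellation that particle physicists find “unnatural.”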

However, that the Standard Model is not technically natural has no practical relevance because these supposedly troublesome quantum contributions are not observable. The lack of naturalness is therefore an entirely philosophical conundrum, and the whole debate about it stems from a fundamental confusion about what requires explanation in science to begin with. If you can’t observe it, there’s nothing to explain. If it’s ugly, that’s too bad. Why are we even talking about this?

It has remained a mystery to me why anyone buys naturalness arguments. You may say, well, that’s just Dr Hossenfelder’s opinion and other people have other opinions. But it is no longer just an opinion: The LHC predictions based on naturalness arguments did, as a matter of fact, not work. So you might think that particle physicists would finally stop using them.

But particle physicists have been mostly quiet about the evident failure of their predictions. Except for a 2017 essay by Gian Francesco Giudice, head of the CERN theory group and one of the strongest advocates of naturalness-arguments, no one wants to admit something went badly wrong. Maybe they just hope no one will notice their blunder, never mind that it’s all over the published literature. Some are busy inventing new types of naturalness according to which new particles would show up only at the next larger collider.

This may be an unfortunate situation for particle physicists, but it’s a wonderful situation for philosophers who love nothing quite as much as going on about someone else’s problem. A few weeks ago, we discussed Porter Williams’ analysis of the naturalness crisis in particle physics. Today I want to draw your attention to a preprint by the philosopher David Wallace, titled “Naturalness and Emergence.”

About the failure of particle physicists’ predictions, Wallace writes:
“I argue that any such naturalness failure threatens to undermine the entire structure of our understanding of inter-theoretic reduction, and so risks a much larger crisis in physics than is sometimes suggested.”
With “inter-theoretic reduction” he refers to the derivation of a low-energy theory from a high-energy theory, which is the root of reductionism. In the text he further writes:
“If Naturalness arguments just brutely fail in particle physics and cosmology, then there can be no reliable methodological argument for assuming them elsewhere (say, in statistical mechanics).”
Sadly enough, Wallace makes the very mistake that I have pointed out repeatedly, that I frequently hear particle physicists make, and that also Porter Williams spells out in his recent paper. Wallace conflates the dynamical degrees of freedom of a theory (say, the momenta of particles) with the parameters of a theory (eg, the strength of the interactions). For the former you can collect statistical data and meaningfully talk about probabilities. For the latter you cannot: We have only one universe with one particular set of constants. Speaking about these constants’ probabilities is scientifically meaningless.

In his paper, Wallace argues that in statistical mechanics one needs to assume an initial probability distribution that is “simple” in a way that he admits is not well-defined. It is correct, of course, that in practice you start with such an assumption to make a calculation that results in a prediction. However, this being science, you can justify this assumption by the mere fact that the predictions work.

The situation is entirely different, however, for distributions over parameters. Not only are these distributions not observable by construction, worse, starting with the supposedly simple assumptions for this distribution demonstrably works badly. That it works badly is, after all, the whole reason we are having this discussion to begin with.

Ironically enough, Wallace, in his paper, complains that I am conflating different types of naturalness in my paper, making me think he cannot have read what I wrote too carefully. The exact opposite is the case: I am not commenting on the naturalness of statistical distributions of degrees of freedom because that’s just a different story. As I say explicitly, naturalness arguments are well-defined if you can sample the probability distribution. If, on the other hand, you have no way to ever determine a distribution, they are ill-defined.

While I am at it, a note for the folks with their Bayesian arguments. Look, if you think that a supersymmetric extension of the standard model is more predictive than the standard model itself, then maybe try to figure out just what it predicts. Besides things we have not seen, I mean. And keep in mind that if you assume that the mass of the Higgs-boson is in the energy range of the LHC, you cannot count that as a prediction. Good luck.

Back to Wallace. His argument is logically equivalent to crossing the Atlantic in a 747 while claiming that heavier-than-air machines can’t fly. The standard model has so far successfully explained every single piece of data that’s come out of the LHC. If you are using a criterion for theory-evaluation according to which that’s not a good theory, the problem is with you, not with the theory.

Saturday, April 06, 2019

Away Note/Travel Update/Interna

I will be away next week, giving three talks at Brookhaven National Lab on Tuesday, April 9, and one at Yale April 10.


Upcoming lectures are Stuttgart on April 29 (in German), Barcelona on May 23, Mainz on June 11, (probably) Groningen on June 22, and Hamburg on July 5th.

I may or may not attend this year’s Lindau Nobel Laureate meeting, and have a hard time making up my mind about whether or not to go to SciFoo, because, considering the status of my joints, it’s a choice between physical and financial pain, and frankly I’d rather choose neither.

In any case, I am always happy to meet readers of my blog, so if our paths should cross, please do not hesitate to say Hi.

Here follows the obligatory note about slow traffic on this blog while I am traveling: I have comment moderation on. This means comments will only appear after I have manually approved them. Sometimes I sleep, sometimes I am offline, sometimes I have better things to do than checking my inbox. As a result, comments may sit in the queue for a while. Normally it does not take longer than 24 hours.

Let me also mention that I no longer read comments posted as “Unknown”. I have no idea what Google is doing, but the “Unknown” comments are today what anonymous comments were a decade ago. The vast majority of those are spam, and most of the rest are insults and other ill-informed nonsense. I do not have time for this and therefore now forward them collectively straight to junk.

There have also recently been some (?) people who tried to post random strings of letters or, in some cases, single letters. I am assuming this was to test whether the comment feature works. I will not approve such comments, so it is not a useful method to figure out what is going on.

Friday, April 05, 2019

Does the world need a larger particle collider? [video]

Another attempt to explain myself. Transcript below.


I know you all wanted me to say something about the question of whether or not to build a new particle collider, one that is larger than even the Large Hadron Collider. And your wish is my command, so here we go.

There seem to be a lot of people who think I’m an enemy of particle physics. Most of those people happen to be particle physicists. This is silly, of course. I am not against particle physics, or against particle colliders in general. In fact, until recently, I was in favor of building a larger collider.

Here is what I wrote a year ago:
“I too would like to see a next larger particle collider, but not if it takes lies to trick taxpayers into giving us money. More is at stake here than the employment of some thousand particle physicists. If we tolerate fabricated arguments in the scientific literature just because the conclusions suit us, we demonstrate how easy it is for scientists to cheat.

Fact is, we presently have no evidence – neither experimental nor theoretical evidence – that a next larger collider would find new particles.”
And still in December I wrote:
“I am not opposed to building a larger collider. Particle colliders that reach higher energies than we probed before are the cleanest and most reliable way to search for new physics. But I am strongly opposed to misleading the public about the prospects of such costly experiments. We presently have no reliable prediction for new physics at any energy below the Planck energy. A next larger collider may find nothing new. That may be depressing, but it’s true.”
Before I tell you why I changed my mind, I want to tell you what’s great about high energy particle physics, why I worked in that field for some while, and why, until recently I was in favor of building that larger collider.

Particle colliders are really the logical continuation of microscopes; you build them to see small structures. But think of a light microscope: The higher the energy of the light, the shorter its wavelength, and the shorter its wavelength, the better the resolution of small structures. This is why you get better resolution with microscopes that use X-rays than with microscopes that use visible light.

Now, quantum mechanics tells us that particles have wavelengths too, and for particles higher energy also means better resolution. Physicists started this with electron microscopes, and it continues today with particle colliders.

So that’s why we build particle colliders that reach higher and higher energies, because that allows us to test what happens at shorter and shorter distances. The Large Hadron Collider currently probes distances of about one thousandth of the diameter of a proton.
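
If you want to play with these numbers yourself, the conversion between an energy scale and the distance it can resolve is roughly λ ≈ ħc/E. Below is a minimal sketch of that estimate; which energy to plug in (the full collision energy or the energy of the colliding constituents) is a matter of convention, so treat the output as an order-of-magnitude guide only:

```python
# Rough conversion from an energy scale to the distance it resolves,
# using lambda ~ hbar*c / E. Order-of-magnitude estimate only.

HBAR_C_GEV_FM = 0.1973    # hbar*c in GeV * femtometer
PROTON_DIAMETER_FM = 1.7  # rough proton diameter in femtometer

def resolvable_distance_fm(energy_gev):
    """Distance scale (in femtometer) probed by a given energy scale (in GeV)."""
    return HBAR_C_GEV_FM / energy_gev

for energy_gev in (100.0, 1000.0, 13000.0):
    d = resolvable_distance_fm(energy_gev)
    print(f"{energy_gev:8.0f} GeV -> {d:.1e} fm "
          f"(~{d / PROTON_DIAMETER_FM:.0e} proton diameters)")
```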

Now, if probing short distances is what you want to do, then particle colliders are presently the cleanest way to do this. There are other ways, but they have disadvantages.

The first alternative is cosmic rays. Cosmic rays are particles that come from outer space at high speed, which means if they hit atoms in the upper atmosphere, that collision happens at high energy.

Most of the cosmic rays are at low energies, but every once in a while one comes at high energy. The highest collision energies still slightly exceed those tested at the LHC.

But it is difficult to learn much from cosmic rays. To begin with, the highly energetic ones are rare, and happen far less frequently than you can make collisions with an accelerator. Few collisions means bad statistics which means limited information.

And there are other problems, for example we don’t know what the incoming particle is to begin with. Astrophysicists currently think it’s a combination of protons and light atomic nuclei, but really they don’t know for sure.

Another problem with cosmic rays is that the collisions do not happen in vacuum. Instead, the first collision creates a lot of secondary particles which collide again with other atoms and so on. This gives rise to what is known as a cosmic ray shower. This whole process has to be modeled on a computer and that again brings in uncertainty.

Then the final problem with cosmic rays is that you cannot cover the whole surface of the planet to catch the particles that were created. So you cover some part of it and extrapolate from there. Again, this adds uncertainty to the results.

With a particle collider, in contrast, you know what is colliding and you can build detectors directly around the collision region. That will still not capture all particles that are created, especially not in the beam direction, but it’s much better than with cosmic rays.

The other alternative to highly energetic particle collisions are high precision measurements at low energies.

You can use high precision instead of high energy because, according to the current theories, everything that happens at high energies also influences what happens at low energies. It is just that this influence is very small.

Now, high precision measurements at low energies are a very powerful method to understand short distance physics. But interpreting the results puts a high burden on theoretical physicists. That’s because you have to be very, well, precise, to make those calculations, and making calculations at low energies is difficult.

This also means that if you should find a discrepancy between theory and experiment, then you will end up debating whether it’s an actual discrepancy or whether it’s a mistake in the calculation.

A good example for this is the magnetic moment of the muon. We have known since the 1960s that the measured value does not fit with the prediction, and this tension has not gone away. Yet it has remained unclear whether this means the theories are missing something, or whether the calculations just are not good enough.

With particle colliders, on the other hand, if there is a new particle to create above a certain energy, you have it in your face. The results are just easier to interpret.

So, now that I have covered why particle colliders are a good way to probe short distances, let me explain why I am not in favor of building a larger one right now. It’s simply because we currently have no reason to think there is anything new to discover at the next shorter distances, not until we get to energies a billion times higher than what even the next larger collider would reach. That, and the fact that the cost of a next larger particle collider is high compared to the typical expenses for experiments in the foundations of physics.

So a larger particle collider presently has a high cost but a low estimated benefit. It is just not a good way to invest money. Instead, there are other research directions in the foundations of physics which are more promising. Dark matter is a good example.

One of the key motivations for building a larger particle collider that particle physicists like to bring up is that we still do not know what dark matter is made of.

But we are not even sure that dark matter is made of particles. And if it’s a particle, we do not know what mass it has or how it interacts. If it’s a light particle, you would not look for it with a bigger collider. So really it makes more sense to collect more information about the astrophysical situation first. That means concretely better telescopes, better sky coverage, better redshift resolution, better frequency coverage, and so on.

Other research directions in the foundations that are more promising are those where we have problems in the theories that do require solutions, this is currently the case in quantum gravity and in the foundations of quantum mechanics. I can tell you something about this some other time.

But really my intention here is not to advocate a particular alternative. I merely think that physicists should have an honest debate about the evident lack of progress in the foundations of physics and what to do about it.

Since the theoretical development of the standard model was completed in the 1970s, there has been no further progress in theory development. You could say maybe it’s just hard and they haven’t figured it out. But the slow progress in and of itself is not what worries me.

What worries me is that in the past 40 years physicists have made loads and loads of predictions for physics beyond the standard model, and those were all wrong. Every. Single. One. Of them.

This is not normal. This is bad scientific methodology. And this bad scientific methodology has flourished because experiments have only delivered null results.

And it has become a vicious cycle: Bad predictions motivate experiments. The experiments find only null results. The null results do not help theory development, which leads to bad predictions that motivate experiments, which deliver null results, and so on.

We have to break this cycle. And that’s why I am against building a larger particle collider.

Thursday, April 04, 2019

Is the universe a hologram? Gravitational wave interferometers may tell us.

Shadow on the wall.
Left-right. Forward-backward. Up-down. We are used to living in three dimensions of space. But according to some physicists, that’s an illusion. Instead, they say, the universe is a hologram and every move we make is really inscribed on the surface of the space around us. So far, this idea has been divorced from experimental test. But in a recent arXiv paper, physicists have now proposed a way to test it.

What does it mean to live in a hologram? Take a Rubik’s cube. No, don’t leave! I mean, take it, metaphorically speaking. The cube is made of 3 x 3 x 3 = 27 smaller cubes. It has 6 x 3 x 3 = 54 surface elements. If you use each surface pixel and each volume pixel to encode one bit of information, then you can use the surface elements to represent everything that happens in the volume.

A volume pixel, by the way, is called a voxel. If you learn nothing else from me today, take that.

So the Rubik’s cube has more pixels than voxels. But if you divide it up into smaller and smaller pieces, the ratio soon changes in favor of voxels. For N intervals on each edge, you have N³ voxels compared to 6 x N² pixels. The amount of information that you can encode in the volume, therefore, increases much faster than the amount you can encode on the surface.
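
To make the scaling explicit, here is a toy count of just the arithmetic from the paragraph above, nothing more:

```python
# Compare volume bits (voxels, N^3) to surface bits (pixels, 6*N^2)
# for a cube divided into N intervals per edge.

def voxels(n):
    return n ** 3

def pixels(n):
    return 6 * n ** 2

for n in (3, 10, 100, 1000):
    v, p = voxels(n), pixels(n)
    print(f"N = {n:5d}: voxels = {v:13,}  pixels = {p:11,}  ratio = {v / p:.1f}")
```

Already for N larger than 6 the voxels outnumber the pixels, and from there on the gap only widens.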

Or so you would think. Because the holographic principle forbids this from happening. This idea, which originates in the physics of black holes, requires that everything which can happen inside a volume of space is inscribed on the surface of that volume. This means if you’d look very closely at what particles do, you would find they cannot move entirely independently. Their motions must have subtle correlations in order to allow a holographic description.

We do not normally notice these holographic correlations, because ordinary stuff needs few of the theoretically available bits of information anyway. Atoms in a bag of milk, for example, are huge compared to the size of the voxels, which is given by the Planck length. The Planck length is 25 orders of magnitude smaller than the diameter of an atom. And these 25 orders of magnitude, you may think, crush all hopes to ever see the constraints incurred on us by the holographic principle.

But theoretical physicists don’t give up hope that easily. The massive particles we know are too large to be affected by holographic limitations. But physicists expect that space and time have quantum fluctuations. These fluctuations are normally too small to observe. But if the holographic principle is correct, then we might be able to find correlations in these fluctuations.

To see these holographic correlations, we would have to closely monitor the extension of a large volume of space. Which is, in a nutshell, what a gravitational wave interferometer does.

The idea that you can look for holographic space-time fluctuations with gravitational wave interferometers is not new. It was proposed already a decade ago by Craig Hogan, at a time when the GEO600 interferometer reported unexplained noise.

Hogan argued that this noise was a signal of holography. Unfortunately, the noise vanished with a new readout method. Hogan, undeterred, found a factor in his calculation and went on to build his own experiment, the “Holometer” to search for evidence that we live in a hologram.

The major problem with Hogan’s idea, as I explained back then, was that his calculation did not respect the most important symmetry of Special Relativity, the so-called Lorentz-symmetry. For this reason the prediction made with Hogan’s approach had consequences we would have seen long ago with other experiments. His idea, therefore, was ruled out before the experiment even started.

Hogan, since he could not resolve the theoretical problems, later said he did not have a theory and one should just look. His experiment didn’t find anything. Or, well, it delivered null results. Which are also results, of course.

In any case, two weeks ago Erik Verlinde and Kathryn Zurek had another go at holographic noise:
    Observational Signatures of Quantum Gravity in Interferometers
    Erik P. Verlinde, Kathryn M. Zurek
    arXiv:1902.08207 [gr-qc]
The theoretical basis of their approach is considerably better than Hogan’s.

The major problem with using holographic arguments for interferometers is that you need to specify what surface you are talking about on which the information is supposedly encoded. But the moment you define the surface you are in conflict with the observer-independence of Special Relativity. That’s the issue Hogan ran into, and ultimately was not able to resolve.

Verlinde and Zurek circumvent this problem by speaking not about a volume of space and its surface, but about a volume of space-time (a “causal diamond”) and its surface. Then they calculate the amount of fluctuations that a light-ray accumulates when it travels back and forth between the two arms of the interferometer.

The total deviation they calculate scales with the geometric mean of the length of the interferometer arm (about a kilometer) and the Planck length (about 10⁻³⁵ meters). If you put in the numbers, that comes out to be about 10⁻¹⁶ meters, which is not far off the sensitivity of the LIGO interferometer, currently about 10⁻¹⁵ meters.
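
If you want to check this estimate yourself, here is the bare arithmetic with all prefactors ignored; the arm length is the “about a kilometer” quoted above, so do not read more than an order of magnitude into the result:

```python
import math

PLANCK_LENGTH = 1.6e-35  # meters
ARM_LENGTH = 1.0e3       # meters, "about a kilometer" as in the text

# Estimated length fluctuation: geometric mean of arm length and Planck length.
delta = math.sqrt(ARM_LENGTH * PLANCK_LENGTH)
print(f"estimated fluctuation ~ {delta:.1e} m")  # ~ 1e-16 m
```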

Please do not take these numbers too seriously, because they do not account for uncertainties. But this rough estimate explains why the idea put forward by Verlinde and Zurek is not crazy talk. We might indeed be able to reach the required sensitivity in the near future.

Let me be honest, though. The new approach by Verlinde and Zurek has not eliminated my worries about Lorentz-invariance. The particular observable they calculate is determined by the rest-frame of the interferometer. This is fine. My worry is not with their interferometer calculation, but with the starting assumption they make about the fluctuations. These are by assumption uncorrelated in the radial direction. But that radial direction could be any direction. And then I am not sure how this is compatible with their other assumption, that is holography.

The authors of the paper have been very patient in explaining their idea to me, but at least so far I have not been able to sort out my confusion about this. I hope that one of their future publications will lay this out in more detail. The present paper also does not contain quantitative predictions. This too, I assume, will follow in a future publication.

If they can demonstrate that their theory is compatible with Special Relativity, and therefore not in conflict with other measurements already, this would be a well-motivated prediction for quantum gravitational effects. Indeed, I would consider it one of the currently most promising proposals.

But if Hogan’s null result demonstrates anything, it is that we need solid theoretical predictions to know where to search for evidence of new phenomena. In the foundations of physics, the days when “just look” was a promising strategy are over.

Tuesday, April 02, 2019

Dear Dr B: Does the LHC collide protons at twice the speed of light?

I recently got a brilliant question after a public lecture: “If the LHC accelerates protons to almost the speed of light and then collides them head-on, do they collide at twice the speed of light?”

The short answer is “No.” But it’s a lovely question and the explanation contains a big chunk of 20th century physics.

First, let me clarify that it does not make sense to speak of a collision’s velocity. One can speak about its center-of-mass energy, but one cannot meaningfully assign a velocity to a collision itself. What makes sense, instead, is to speak about relative velocities. If you were one of the protons and the other proton comes directly at you, does it come at you with twice the speed of light?

It does not, of course. You already knew this, because Einstein taught us nothing travels faster than the speed of light. But for this to work, it is necessary that velocities do not add the way we are used to. Indeed, according to Einstein, for velocities, 1 plus 1 is not equal to 2. Instead, 1+1 is equal to 1.

I know that sounds crazy, but it’s true.

To give you an idea how this comes about, let us forget for a moment that we have three dimensions of space and that protons at the LHC actually go in a circle. It is easier to look at the case where the protons move in straight lines, so, basically only in one dimension of space. It is then no longer necessary to worry about the direction of velocities and we can just speak about their absolute value.

Let us also divide all velocities by the speed of light so that we do not have to bother with units.

Now, if you have objects that move almost at the speed of light, you have to use Special Relativity to describe what they do. In particular you want to know, if you see two objects approaching each other at velocity u and v, then what is the velocity of one object if you were flying along with the other? For this, in special relativity, you have to add u and v by the following equation:

    w = (u + v) / (1 + u·v)

You see right away that the result of this addition law is always smaller than 1 if both velocities were smaller than 1. And if u equals 1 – that is, one object moves with the speed of light – then the outcome is also 1. This means that all observers agree on the speed at which light moves.

If you check what happens with the protons at the LHC, you will see that adding twice 99% of the speed of light brings you to something like 99.995% of the speed of light, but never to 100%, and certainly not to 200%.
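
For the numerically inclined, here is the same addition law spelled out, with velocities in units of the speed of light:

```python
def add_velocities(u, v):
    """Relativistic addition of two parallel velocities, in units of c."""
    return (u + v) / (1.0 + u * v)

print(add_velocities(0.99, 0.99))  # ~0.99995, not 1.98
print(add_velocities(1.00, 0.99))  # exactly 1.0: light speed is the same for everyone
```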

I will admit the first time I saw this equation it just seemed entirely arbitrary to me. I was in middle school, then, and really didn’t know much about Special Relativity. I just thought, well, why? Why this? Why not some other weird addition law?

But once you understand the mathematics, it becomes clear there is nothing arbitrary about this equation. What happens is roughly the following.

Special Relativity is based on the symmetry of space-time. This does not mean that time is like space – arguably, it is not – but that the two belong together and cannot be treated separately. Importantly, the combination of space and time has to work the same way for all observers, regardless of how fast they move. This observer-independence is the key principle of Einstein’s theory of Special Relativity.

If you formulate observer-independence mathematically, it turns out there is only one way that a moving clock can tick, and only one way that a moving object can appear – it all follows from the symmetry requirement. The way that moving objects shrink and their time slows down is famously described by time-dilation and length-contraction. But once you have this, you can also derive that there is only one way to add velocities and still be consistent with observer-independence of the space-time symmetry. This is what the above equation expresses.

Let me also mention that the commonly made reference to the speed of light in Special Relativity is somewhat misleading. We do this mostly for historical reasons.

In Special Relativity we have a limiting velocity which cannot be reached by massive particles, no matter how much energy we use to accelerate them. Particles without masses, on the other hand, always move at that limiting velocity. Therefore, if light is made of massless particles, then the speed of light is identical to the limiting velocity. And for all we currently know, light is indeed made of massless particles, the so-called photons.

However, should it turn out one day that photons really have a tiny mass that we just haven’t been able to measure so far, then the limiting velocity would still exist. It would just no longer be equal to the speed of light.

So, in summary: Sometimes 1 and 1 is indeed 1.

Wednesday, March 27, 2019

Nonsense arguments for building a bigger particle collider that I am tired of hearing (The Ultimate Collection)

[Image Source]
I know you’re all sick of hearing me repeat why a larger particle collider is currently not a good investment. Trust me, I am sick of it too. To save myself some effort, I decided to collect the most frequent arguments from particle physicists with my response. You’ve heard it all before, so feel free to ignore.

1. The “Just look” argument.

This argument goes: “We don’t know that we will find something new, but we have to look!” or “We cannot afford to not try.” Sometimes this argument is delivered with poetic attitude, like: “Probing the unknown is the spirit of science” and similar slogans that would do well on motivational posters.

Science is exploratory and to make progress we should study what has not been studied before, true. But any new experiment in the foundations of physics does that. You can probe new regimes not only by reaching higher energies, but also by reaching higher resolution, better precision, bigger systems, lower temperatures, less noise, more data, and so on.

No one is saying we should stop explorative research in the foundations of physics. But since resources are limited, we should invest in experiments that bring the biggest benefit for the projected cost. This means the higher the expenses for an experiment, the better the reasons for building it should be. And since a bigger particle collider is presently the most expensive proposal on the table, particle physicists should have the best reasons.

“Just look” certainly does not deliver any such reason. We can look elsewhere for lower cost and more promise, for example by studying the dark ages or heavy quantum oscillators. (See also point 18.)

2. The “No Zero Sum” argument.

“It’s not a zero sum game,” they will say. This point is usually raised by particle physicists to claim that if they do not get money for a larger particle collider, then this does not imply a similar amount of money will go to some other area in the foundations of physics.

This argument is a badly veiled attempt to get me to stop criticizing them. It does nothing to explain why a particle collider is a good investment.

3. Everyone gets to do their experiment!

This usually comes up right after the No-Zero-Sum-argument. When I point out that we have to decide what is the best investment into progress in the foundations of physics, particle physicists claim that everyone’s proposal will get funded.

This is just untrue.

Take the Square Kilometer Array as an example. Its full plan is lacking about $1 billion in funding and the scientific mission is therefore seriously compromised. The FAIR project in Germany likewise had to slim down their aspirations because one of their planned detectors could not be accommodated in the budget. The James Webb Space telescope just narrowly escaped a funding limitation that would have threatened its potential. And that leaves aside those communities which do not have sufficient funding to even formulate proposals for large-scale experiments. (See also point 19.)

Decisions have to be made. Every “yes” to something implies a “no” to something else. I suspect particle physicists do not want to discuss the benefit of their research compared to that of other parts of the foundations of physics because they know they would not come out ahead. But that is exactly the conversation we need to have.

4. Remember the Superconducting Super Collider!

Yes, the Superconducting Super Collider (SSC). I remember. The SSC was planned in the United States in the 1980s. It would have reached energies somewhat exceeding that of the Large Hadron Collider, and somewhat below that of the now planned Future Circular Collider.

Whatever happened to the SSC? What happened is that the estimated cost ballooned from $5.3 billion in 1987 to $10 billion in 1993, and when US congress finally refused to foot the bill, particle physicists collectively blamed Philip Anderson. Anderson is a Nobel Prize winning condensed matter physicist who testified before the US congress in opposition to the project, pointing out that society doesn’t stand to benefit much from a big collider.

While Anderson’s testimony certainly did not help, particle physicists clearly use him as a scapegoat. Anderson-blaming has become a collective myth in their community. But historians largely agree the main reasons for the cancellation were: (a) the crudely wrong cost estimate, (b) the end of the cold war, (c) the lack of international financial contributions, and (d) the failure of particle physicists to explain why their mega-collider was worth building. Voss and Koshland, in a 1993 Editorial for Science, summed the latter point up as follows:
“That particle physics asks questions about the fundamental structure of matter does not give it any greater claim on taxpayer dollars than solid-state physics or molecular biology. Proponents of any project must justify the costs in relation to the scientific and social return. The scientific community needs to debate vigorously the best use of resources, and not just within specialized subdisciplines. There is a limited research budget and, although zero-sum arguments are tricky, researchers need to set their own priorities or others will do it for them.”
Remember that?

5. It is not a waste of money

This usually refers to this attempted estimate to demonstrate that the LHC has a positive return on investment. That may be true (I don’t trust this estimate), but just because the LHC does not have a negative return on investment does not mean it’s a good investment. For this you would have to demonstrate it would be difficult to invest the money in a better way. Are you sure you cannot think of a better way to invest $20 billion to benefit mankind?

6. The “Money is wasted elsewhere too” argument.

The typical example I hear is the US military budget, but people have brought up pretty much anything else they don’t approve of, be that energy subsidies, MP salaries, or – as Lisa Randall recently did – the US government shutdown.

This argument simply demonstrates moral corruption: The ones making it want permission to waste money because waste of money has happened before. But the existence of stupidity does not justify more stupidity. Besides that, no one in the history of science funding ever got funding for complaining they don’t like how their government spends taxes.

The most interesting aspect of this argument is that particle physicists make it, even make it in public, though it means they basically admit their collider is a waste of money.

7. But particle physicists will leave if we don’t build this collider.

Too bad. Seriously, who cares? This is a profession almost exclusively funded by taxes. We don’t pay particle physicists just so they are not unemployed. We pay them because we hope they will generate knowledge that benefits society, if not now, then some time in the future. Please provide any reason that continuing to pay them is a good use of tax money. And if you can’t deliver a reason, I full well think we can let them go, thank you.

8. But we have unsolved problems in the foundations of physics.

This argument usually refers to the hierarchy problem, dark matter, dark energy, the baryon asymmetry, quantum gravity, and/or the nature of neutrino masses.

The hierarchy problem is not a problem, it is an aesthetic misgiving. For the other problems, there is no reason to think a larger collider would help solving them.

I have explained this extensively elsewhere and don’t want to go here into the question of which problems make for promising research directions. If you want more details, read eg this or this or my book.

9. So-and-so many billions is only such-and-such a tiny amount per person per day.

I have no idea what this is supposed to show. You can do the same exercise with literally any other expense. Did you know that for as little as a tenth of a cent per year per person I could pay my grad student?

10. Tim Berners-Lee invented the WWW while employed at CERN.

By the same logic we should build patent offices to develop new theories of gravitation.

11. It may lead to spin-offs.

The example they often bring up is contributions to WiFi technology that originated in some astrophysicists’ attempt to detect primordial black holes.

In response, allow me to rephrase the spin-off-argument: Physicists sometimes don’t waste all money invested into foundational research because they accidentally come across something that’s actually useful. That wasn’t what you meant? Well, but that’s what this argument says.

If these spin-offs are what you are really after, then you should invest more into data analysis or technology R&D, or at least try to find out which research environments are likely to produce spin-offs. (It is presently unclear how relevant serendipity is to scientific progress.) Even in the best case this may be an argument for basic research in general, but not for building a particle collider in particular.

12. A big particle collider would benefit many tech industries and scientific networks.

Same with any other big investment into experimental science. It is not a good argument for a particle collider in particular.

13. It will be great for education, too!

If you want to invest into education, why dig a tunnel along with it?

14. Knowledge about particle physics will get lost if we do not continue.

We have scientific publications to avoid that. If particle physicists worry this may not work, they should learn to write comprehensible papers. Besides, it’s not like particle physicists would have no place to work if we do not build the next mega-collider. There are more than a hundred particle accelerators in the world; the LHC is merely the largest one. Also note that the LHC is not the only experiment at CERN. So, even if we do not build a larger collider, CERN would not just close down.

15. Highly energetic particle collisions are the cleanest way to measure the physics of short distances.

I tend to agree. This is what originally sparked my interest in high energy particle physics. But there is currently no reason to think that the next breakthroughs wait on shorter distances. Times change. The year is 2019, not 1999.

16. Lord Kelvin also said that physics was over and he was wrong

Yeah, except that I am the one saying we could do better things with $20 billion than measuring the next digits of some constants.

17. Particle accelerators are good for other things.

The typical example is that beams of ions can treat certain types of cancer better than the more common radiation therapies. That’s great of course, and I am all in favor of further developing this technology to enable the treatment of more patients, but this is an entirely different research avenue than building a larger collider.

18. You do not know what else we should do.

Sure I do. I wrote a whole book on this: In the foundations of physics, we should focus on those areas where we have inconsistencies, either between experiment and theory, or internal inconsistencies in the theories. Examining such inconsistencies is what has historically led to breakthroughs.

We currently have such situations in the following areas:

(a) Astrophysical and cosmological observations attributed to dark matter. These are discrepancies between theory and data which should be studied closer, until we have pinned down the theory. Some people have mistakenly claimed I am advocating more direct detection experiments for certain types of dark matter particles. This is not so. I am saying we need better observations of the already known discrepancies. Better sky coverage, better resolution, better stats. If we have a good idea what dark matter is, we can think of building a collider to test it, if that turns out to be useful.

(b) Quantum Gravity. The lack of a theory for quantized gravity is an internal theoretical inconsistency. We know it requires a solution. A lot of physicists are not interested in experimentally testing this because they think it is not possible. I have previously explained here and here why that is wrong.

(c) The foundations of quantum mechanics: The measurement postulate is inconsistent with reductionism. There is basically no phenomenological or experimental exploration of this.

Needless to say, I think my argument for how to break the current impasse is a good one, but I do not really expect everyone to just agree with it. I am primarily putting this forward because it’s the kind of discussion we should have: We have not made progress in the foundations of physics for 40 years. What can we do about it? At least I have an argument. Particle physicists do not.

19. But you do not have any other worked-out proposals

The proposal for the FCC was worked out by a study group over 5 years, supported by 11 million Euro. Needless to say, I cannot, as a single person and in a few weeks of time, produce comparable proposals for large scale experiments. Expecting me to do so is unreasonable.

Sunday, March 24, 2019

Superfluid Dark Matter [Video]

I am at home with a cold, and so I finally got around to finishing this video on superfluid dark matter, which has been sitting on my hard disk for a few months.


This is a sequel to my earlier two videos about Dark Matter and Modified Gravity.

For captions, click CC in the tool bar. Will add German captions in the next days. Now I need a few ibuprofen.

Saturday, March 23, 2019

Just Move (I’ve been singing again)

I have spent the last few weekends shouting at a foam mat. It’s nothing personal. Foam and I, we normally get along just fine. It’s just that I was hoping to improve my singing technique. Also, shouting may come in handy on other occasions, you never know.

Alas, my shouting success was limited. It’s hard to project anger at a foam mat. But somewhere along the way I seem to have spontaneously developed a head-voice vibrato, not sure how come. Probably a sign that my head is becoming increasingly hollow.

Besides that, I have a new pre-amplifier which works better than the previous one, but has a markedly different noise pattern that I have yet to get used to. If my voice sounds different, that’s probably why. That, or the hollow head.

(Soundcloud version here.)

Wednesday, March 20, 2019

Science has a problem. Here is how you can help.

[I have gotten numerous requests by people who want to share Appendix C of my book. The content is copyrighted, of course, but my publisher kindly agreed that I can make it publicly available. You may use this text for non-commercial purposes, so long as you add the copyright disclaimer, see bottom of post.]

Both bottom-up and top-down measures are necessary to improve the current situation. This is an interdisciplinary problem whose solution requires input from the sociology of science, philosophy, psychology, and – most importantly – the practicing scientists themselves. Details differ by research area. One size does not fit all. Here is what you can do to help.

As a scientist:
  • Learn about social and cognitive biases: Become aware of what they are and under which circumstances they are likely to occur. Tell your colleagues.
  • Prevent social and cognitive biases: If you organize conferences, encourage speakers to not only list motivations but also shortcomings. Don’t forget to discuss “known problems.” Invite researchers from competing programs. If you review papers, make sure open questions are adequately mentioned and discussed. Flag marketing as scientifically inadequate. Don’t discount research just because it’s not presented excitingly enough or because few people work on it.
  • Beware the influence of media and social networks: What you read and what your friends talk about affects your interests. Be careful what you let into your head. If you consider a topic for future research, factor in that you might have been influenced by how often you have heard others speak about it positively.
  • Build a culture of criticism: Ignoring bad ideas doesn’t make them go away, they will still eat up funding. Read other researchers’ work and make your criticism publicly available. Don’t chide colleagues for criticizing others or think of them as unproductive or aggressive. Killing ideas is a necessary part of science. Think of it as community service.
  • Say no: If a policy affects your objectivity, for example because it makes continued funding dependent on the popularity of your research results, point out that it interferes with good scientific conduct and should be amended. If your university praises its productivity by paper counts and you feel that this promotes quantity over quality, say that you disapprove of such statements.
As a higher ed administrator, science policy maker, journal editor, representative of funding body:
  • Do your own thing: Don’t export decisions to others. Don’t judge scientists by how many grants they won or how popular their research is – these are judgements by others who themselves relied on others. Make up your own mind, carry responsibility. If you must use measures, create your own. Better still, ask scientists to come up with their own measures.
  • Use clear guidelines: If you have to rely on external reviewers, formulate recommendations for how to counteract biases to the extent possible. Reviewers should not base their judgment on the popularity of a research area or the person. If a reviewer’s continued funding depends on the well-being of a certain research area, they have a conflict of interest and should not review papers in their own area. That will be a problem because this conflict of interest is presently everywhere. See next 3 points to alleviate it.
  • Make commitments: You have to get over the idea that all science can be done by postdocs on 2-year fellowships. Tenure was institutionalized for a reason and that reason is still valid. If that means fewer people, then so be it. You can either produce loads of papers that nobody will care about 10 years from now, or you can be the seed of ideas that will still be talked about in 1000 years. Take your pick. Short-term funding means short-term thinking.
  • Encourage a change of field: Scientists have a natural tendency to stick to what they know already. If the promise of a research area declines, they need a way to get out, otherwise you’ll end up investing money into dying fields. Therefore, offer reeducation support, 1-2 year grants that allow scientists to learn the basics of a new field and to establish contacts. During that period they should not be expected to produce papers or give conference talks.
  • Hire full-time reviewers: Create safe positions for scientists specialized in providing objective reviews in certain fields. These reviewers should not themselves work in the field and have no personal incentive to take sides. Try to reach agreements with other institutions on the number of such positions.
  • Support the publication of criticism and negative results: Criticism of other people’s work or negative results are presently underappreciated. But these contributions are absolutely essential for the scientific method to work. Find ways to encourage the publication of such communication, for example by dedicated special issues.
  • Offer courses on social and cognitive biases: This should be mandatory for anybody who works in academic research. We are part of communities and we have to learn about the associated pitfalls. Sit together with people from the social sciences, psychology, and the philosophy of science, and come up with proposals for lectures on the topic.
  • Allow a division of labor by specialization in task: Nobody is good at everything, so don’t expect scientists to be. Some are good reviewers, some are good mentors, some are good leaders, and some are skilled at science communication. Allow them to shine in what they’re good at and make best use of it, but don’t require the person who spends their evenings in student Q&A to also bring in loads of grant money. Offer them specific titles, degrees, or honors.
As a science writer or member of the public, ask questions:
  • You’re used to asking about conflicts of interest due to funding from industry. But you should also ask about conflicts of interest due to short-term grants or employment. Does the scientists’ future funding depend on producing the results they just told you about?
  • Likewise, you should ask if the scientists’ chance of continuing their research depends on their work being popular among their colleagues. Does their present position offer adequate protection from peer pressure?
  • And finally, just as you are used to scrutinizing statistics, you should also ask whether the scientists have taken measures to address their cognitive biases. Have they provided a balanced account of pros and cons or have they just advertised their own research?
You will find that for almost all research in the foundations of physics the answer to at least one of these questions is no. This means you can’t trust these scientists’ conclusions. Sad but true.


Reprinted from Lost In Math by Sabine Hossenfelder. Copyright © 2018. Available from Basic Books, an imprint of Perseus Books, a division of PBG Publishing, LLC, a subsidiary of Hachette Book Group, Inc.

Saturday, March 16, 2019

Particle physicists continue to spread misinformation about prospects of new collider

Physics Today has published an essay by Gordon Kane about “The collider question.”

Gordon Kane is Professor of Physics at the University of Michigan. He is well known in the field, both for his research and his engagement in science communication. Kane has written several well-received popular science books about particle physics in general, and supersymmetry in particular.

In his new essay for Physics Today, Kane mentions “economic considerations” and the possibility of spin-offs in favor of a larger collider. But these are arguments that could be made about any experiment of similar size.

His key point is that a next larger collider is needed to answer some of the currently open big questions in the foundations of physics:
“For our next colliders the goal is to provide data for a more comprehensive theory, hopefully one that incorporates dark matter, quantum gravity, and neutrino masses and solves the hierarchy problem. But what does that mean in practice?”
He claims that:
“It’s been known since the 1980s that a mathematically consistent quantum theory of gravity has to be formulated in 9 or 10 spatial dimensions.”
This statement is wrong. It is known that string theory requires additional dimensions of space, but physicists do not presently know that string theory is the correct theory for quantum gravity. They also have several other, mathematically consistent, approaches to quantum gravity that do not require additional dimensions, such as asymptotically safe gravity, or loop quantum gravity.

Kane then refers to an earlier article he wrote about his own models for Physics Today and claims:
“They predict or describe the Higgs boson mass. We can now study the masses that new particles have in such models to get guidance for what colliders to build.”
Note the odd phrase “predict or describe the Higgs boson mass.” The story here is that in 2011, a few days before the CERN collaborations released the first results of the Higgs measurement, Kane and collaborators published a paper claiming they could predict the correct Higgs mass. Kane later wrote a Comment about this for Nature Magazine. All particle physicists I ever spoke with about Kane’s prediction suspect it was informed by rumors about the Higgs mass and then, consciously or unconsciously, backward constructed.

In his Physics Today essay, Kane then goes on to write that his models “generically have some observable superpartners with masses between about 1500 GeV and 5000 GeV” and argues that:
“Such theoretical work provides quantitative predictions to help set goals for collider construction, similar to how theorists helped zero in on the mass of the Higgs boson.”
Gordon Kane has made predictions for the appearance of new particles at colliders for 20+ years. Every time an experiment fails to see those new particles, he adjusts the masses so that the theory is still compatible with data. For references, please check out Will Kinney’s recent twitter thread.

Among particle physicists, Kane is somewhat exceptional by his public presence and the boldness of his assertions. But his method of making predictions is typical practice in the field. Indeed, by the current standard in particle physics, his research is of high quality. Kane’s models are well-motivated by beautiful ideas and, together with his collaborators, he has amassed a lot of impressive looking equations, not to mention citations.

This does not change the fact that those predictions are worthless.

Allow me an analogy. Forget for a moment that we are talking about particle physics, and think of climate science. That’s the stuff with global warming and melting ice sheets and so on, I’m sure you’ve heard. Now imagine that those models could predict literally any possible future trend. Those would be pretty useless predictions, wouldn’t you agree? It wouldn’t be much of a science, really. It would be pretty ridiculous, indeed.

Well, that’s how predictions for new particles currently work. The methods of theory-development used by particle physicists can predict anything and everything.

You do not have to take my word for it, you only have to look at this paper about “ambulance chasing”. Ambulance chasing is the practice by which particle physicists cook up models to explain statistical fluctuations that they hope will turn out to be real particles. With the currently accepted methods of model-building, they can produce hundreds of explanations within months, regardless of whether the signal was actually real.

You do not even need to understand a single word written in those papers to see that one cannot trust predictions made this way.

I don’t want to pick on Kane too much, because he just does what he has learned, and he does an excellent job at this. But Gordon Kane is to particle physics what Amy Cuddy is to psychology: A very public example of scientific methodology gone badly wrong.

As with Cuddy, the blame is not on Kane in particular; it is on a community that is not correcting a methodology they all know is not working. The difference is that while psychologists have recognized the problems in their community and have taken steps towards improvement, particle physicists still refuse to acknowledge that their field even has a problem.

The methods of theory-development used in particle physics to predict new physics are not proper science. This research must be discontinued. And it should certainly not be used to argue we need a next larger collider.



Correction, March 18: I have been informed that Physics Today is not the membership magazine of the American Physical Society, it is just that members of the American Physical Society receive the magazine. I therefore rewrote the first sentence.

Thursday, March 14, 2019

Particle physicists excited over discovery of nothing in particular

Logo of Moriond meeting.
“Rencontres de Moriond” is one of the most important annual conferences in particle physics. This year’s meeting will start in two days, on March 16th. Usually, experimental collaborations try to have at least preliminary results to present at the conference, so we have an interesting couple of weeks ahead.

The ATLAS collaboration at CERN has already released a few results from its searches for “exotic” particles in last year’s run 2 data. So far, they have seen nothing new. More results will likely appear online soon.

One of the key questions to be addressed by the new data analysis is whether the “lepton flavor anomalies” persist. These anomalies are several small differences between rates of processes that, according to the standard model, should be identical. Separately, each deviation from the standard model has a low statistical significance, not exceeding 3 σ. However, in 2017 a group of particle physicists claimed that the combined significance exceeds 5 σ.

You should take such combined analyses with several grains of salt. Choosing some parts of the data while disregarding others makes the conclusion unreliable. This does not mean the result is wrong, just that it’s impossible to know if it is a real effect or a statistical fluctuation. Really this question can only be resolved with more data. CMS, another one of the LHC experiments, recently tested a specific explanation for the anomaly but found nothing.
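
To make the arithmetic behind such combinations concrete, here is a minimal sketch in Python. The numbers are made up, and Stouffer’s z-score combination is only a toy stand-in for the actual global fits, which adjust model parameters against many observables at once.

    import numpy as np

    # Hypothetical, made-up deviations from the standard model, in units of sigma.
    deviations = np.array([2.5, 2.6, 2.2, 3.0])

    # Stouffer's method for combining independent z-scores:
    # individually mild deviations can add up to a formally large significance.
    z_combined = deviations.sum() / np.sqrt(len(deviations))
    print(f"combined significance: {z_combined:.1f} sigma")  # about 5 sigma

    # Caveat from the text: if these deviations were selected from a much larger
    # set of measured observables, the combined number overstates the evidence.

Whether such a combined number means anything depends entirely on how the observables entering it were chosen, which is the point made above.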

Meanwhile it must have dawned on particle physicists that the non-discovery of fundamentally new particles besides the Higgs is a problem for their field, and especially for the prospects of financing that bigger collider which they want. For two decades they told the public that the LHC would help answer some “big questions,” for example by finding dark matter or supersymmetric particles, illustrated well by this LHC outreach website:


Screenshot of the LHC Outreach website.


However, the predictions for new particles besides the Higgs were all wrong. And now, rather than owning up to their mistakes, particle physicists want you to think it’s exciting they have found neither dark matter, nor extra dimensions, nor supersymmetry, nor anything else that is not in the standard model. In a recent online article at Scientific American, James Beacham is quoted saying:
“We’re right on the cusp of a revolution but we don’t really know where that revolution is going to be coming from. It’s so exciting and enticing. I would argue there’s never been a better time to be a particle physicist.”
The particle physicist Jon Butterworth says likewise:
“It’s more exciting and more uncertain now than I think it’s ever been in my career.”
And Nima Arkani-Hamed, in an interview with the CERN Courier begins his answer to the question “How do you view the status of particle physics?” with:
“There has never been a better time to be a physicist.”
The logic here seems to be this: First, mass-produce empty predictions to raise the impression that a costly experiment will answer some big questions. Then, if the experiment fails to answer those questions, proclaim how exciting it is that your predictions were wrong. Finally, explain that you need money for a larger experiment to answer those big questions.

The most remarkable thing about this is that they actually seem to think this will work.

Needless to say, if the analysis of the recent data reveals a signal of new effects, then the next collider will be built for sure. If nothing new shows up, then particle physicists can either continue to excitedly deny anything went wrong, or realize they have to act against hype and group-think in their community. The next weeks will be interesting.

Saturday, March 09, 2019

Motte and Bailey, Particle Physics Style

“Motte and bailey” is a rhetorical maneuver in which someone switches between an argument that does not support their conclusion but is easy to defend (the “motte”), and an argument that supports their conclusion but is hard to defend (the “bailey”). The purpose of this switch is to trick the listener into believing that the easy-to-defend argument suffices to support the conclusion.

This rhetorical trick is omnipresent in arguments that particle physicists currently make for building the next larger collider.

There are good arguments to build a larger collider, but those don’t justify the investment. These arguments are measuring the properties of known particles to higher precision and keeping particle physicists occupied. Also, we could just look and see if we find something new. That’s the motte.

Then there is an argument which would justify the investment, but this is not based on sound reasoning. This argument is that a next larger collider would lead to progress in the foundations of physics, for example by finding new symmetries or solving the riddle of dark matter. This argument is indefensible because there is no reason to think the next larger collider would help answering those questions. That’s the bailey.

This maneuver is particularly amusing when you have both people who make the indefensible argument and others who insist no one makes it. In a recent interview with the CERN Courier, for example, Nima Arkani-Hamed says:
“Nobody who is making the case for future colliders is invoking, as a driving motivation, supersymmetry, extra dimensions…”
While his colleague, Lisa Randall, has defended the investment into the next larger collider by arguing:
“New dimensions or underlying structures might exist, but we won’t know unless we explore.”
I don’t think that particle physicists are consciously aware of what they are doing. Really, I get the impression they just throw around whatever arguments come to their mind and hope the other side doesn’t have a response. Most unfortunately, this tactic often works, just because there are few people competent enough to understand particle physicists’ arguments and also willing to point out when they go wrong.

For this reason I want to give you an explicit example of how motte and bailey is employed by particle physicists to make their case. I do this in the hope that it will help others notice when they encounter this flawed argument.

The example I will use is a recent interview I did for a podcast with the Guardian. The narrator is Ian Sample. Also on the show is particle physicist Brian Foster. I don’t personally know Foster and never spoke with him before. You can listen to the whole thing here, but I have transcribed the relevant parts below. (Please let me know in case I misheard something.)

At around 10:30 min the following exchange takes place.

Ian: “Are there particular things that physicists would like to look for, actual sort-of targets like the Higgs, that could be named like the Higgs?”

Brian: “The Higgs is really, I think, at the moment the thing that we are particularly interested in because it is the new particle on the block. And we know very little about it so far. And that will give us hopefully clues as to where to look for new phenomena beyond the standard model. Because the thing is that we know there must be physics beyond the standard model. If for no other reason than, as you mention, there’s very strong evidence that there is dark matter in the universe and that dark matter must be made of particles of some sort. We have no candidate for those particles at the moment.”

I then explain that this argument does not work because there is no reason to think the next larger collider would find dark matter particles, that, in fact, we are not even sure dark matter is made of particles.

After some more talk about the various proposals for new colliders that are currently on the table, the discussion returns to the question of what justifies the investment. At about 24:06 you can hear:

Ian: “Sabine, you’ve had a fair bit of flak for some of your criticisms for the FCC, haven’t you, from within the community?”

Sabine: “Sure, true, but I did expect it. Fact is, we have no reason to think that a next larger particle collider will actually tell us anything new about the fundamental laws of nature. There’s certainly some constants that you can always measure better, you can always say, well, I want to measure more precisely what the Higgs is doing, or how that particle decays, and so on and so forth. But if you want to make progress in our understanding of the foundations of physics that’s just not currently a promising thing to invest in. And I don’t think that’s so terribly controversial, but a lot of particle physicists clearly did not like me saying this publicly.”

Brian: “I beg to differ, I think it is very controversial, and I think it’s wrong, as I’ve tried to say several times. I mean the way in which you can make progress in particle physics is by making these precision measurements. You know very well that quantum mechanics is such that if you can make very high precision measurements that can tell you a lot of things about much higher energies than what you can reach in the laboratory. So that’s the purpose of doing very high precision physics at the LHC, it’s not like stamp collecting. You are trying to make measurements which will be sufficiently precise that they will give you a very strong indication of where there will be new physics at high energies.”

(Only tangentially relevant, but note that I was talking about the foundations of physics, whereas Brian’s reply is about progress in particle physics in particular.)

Sabine: “I totally agree with that. The more precisely you measure, the more sensitive you are to the high energy contributions. But still there is no good reason right now to think that there is anything to find, is what I’m saying.”

Brian: “But that’s not true. I mean, it’s quite clear, as you said yourself, that the standard model is incomplete. Therefore, if we can measure the absolutely outstanding particle in the standard model, the Higgs boson, which is completely unique, to very high precision, then the chances are very strong that we will find some indication for what this physics beyond the standard model is.”

Sabine: “So exactly what physics beyond the standard model are you referring to there?”

Brian: “I have no idea. That’s why I want to do the measurement.”

I then explain why there is no reason to think that the next larger collider will find evidence of new physical effects. I do this by pointing out that the only reliable indications we have for new physics merely tell us something new has to appear at latest at energies that are still about a billion times higher than what even the next larger collider could reach.

At this point Brian stops claiming the chances are “very strong” that a bigger machine would find something new, and switches to the just-look-argument:

Brian: “Look, it’s a grave mistake to be too strongly lead by theoretical models [...]”

The just-look-argument is of course well and fine. But, as I have pointed out many times before, the same just-look-argument can be made for any other new experiment in the foundations of physics. It therefore does not explain why a larger particle collider in particular is a good investment. Indeed, the opposite is the case: There are less costly experiments for which we have good reasons, such as measuring more precisely the properties of dark matter or probing the weak field regime of quantum gravity.

When I debunk the just-look-argument, a lot of particle physicists then bring up the no-zero-sum-argument. I just did another podcast a few days ago where the no-zero-sum-argument played a big role and if that appears online, I’ll comment on that in more detail.

The real tragedy is that there is absolutely no learning curve in this exchange. Doesn’t matter how often I point out that particle physicists’ arguments don’t hold water, they’ll still repeat them.

(Completely irrelevant aside: This is the first time I have heard a recording made in my basement studio next to other recordings. I am pleased to note all the effort I put into getting good sound quality paid off.)

Friday, March 08, 2019

Inflation: Status Update

Model of Inflation. Image source: umich.edu
The universe hasn’t always been this way. That the cosmos as a whole evolves, rather than being eternally unchanging, is without doubt one of the most remarkable scientific insights of the past century. It follows from Einstein’s theory of general relativity, which tells us that the universe must expand. As a consequence, matter in the early universe must have been compressed to high density.

But if you follow the equations back in time, general relativity eventually stops working. Therefore, no one presently knows how the universe began. Indeed, we may never know.

Since the days of Einstein, physicists have made much progress detailing the history of the universe. But the deeper they try to peer into our past, the more difficult their task becomes.

This difficulty arises partly because new data are harder and harder to come by. The dense matter in the early universe blocked light, so we cannot use light to look back to any time earlier than the formation of the cosmic microwave background. For even earlier times, we can make indirect inferences, or hope for new messengers, like gravitational waves or neutrinos. This is technologically and mathematically challenging, but these are challenges that can be overcome, at least in principle. (Says the theorist.)

The more serious difficulty is conceptual. When studying the universe as a whole, physicists face the limits of the scientific method: The further back in time they look, the simpler their explanations become. At some point, then, there will be nothing left to simplify, and so there will be no way to improve their explanations. The question isn’t whether this will happen, the question is when it will happen.

The miserable status of today’s theories for the early universe makes me wonder whether it has already happened. Cosmologists have hundreds of theories, and many of those theories come in several variants. It’s not quite as bad as in particle physics, but the situation is similar in that cosmologists, too, produce loads of ill-motivated models for no reason other than that they can get them published. (And they insist this is good scientific practice. Don’t get me started.)

The currently most popular theory for the early universe is called “inflation”. According to inflation, the universe once underwent a phase in which volumes of space increased exponentially in time. This rapid expansion then stopped in an event called “reheating,” at which the particles of the standard model were produced. After this, particle physics continues the familiar way.
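
For readers who want the one-line version of what “exponential expansion” means here (standard textbook material, nothing specific to the papers discussed below): during the inflationary phase the scale factor a(t) grows approximately as

    a(t) \approx a(t_i)\, e^{H (t - t_i)} ,

with the Hubble rate H roughly constant. The duration of this phase is usually quoted in e-folds, N = \ln(a_{\rm end}/a_{\rm start}), and typical models need N of order 60 for inflation to solve the problems it was designed to address.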

Inflation was originally invented to solve several finetuning problems. (I wrote about this previously, and don’t want to repeat it all over again, so if you are not familiar with the story, please check out this earlier post.) Y’all know that I think finetuning arguments are a waste of time, so naturally I think these motivations for inflation are no good. However, just because the original reason for the idea of inflation doesn’t make sense doesn’t mean the theory is wrong.

Ever since the results of the Planck mission in 2013, it hasn’t looked good for inflation. After the results appeared, Anna Ijjas, Paul Steinhardt, and Avi Loeb argued in a series of papers that the models of inflation which are compatible with the data themselves require finetuning, and therefore bring back the problem they were meant to solve. They popularized their argument in a 2017 article in Scientific American, provocatively titled “Pop Goes the Universe.”

The current models of inflation work not simply by assuming that the universe did undergo a phase of exponential inflation, but they moreover introduce a new field – the “inflaton” – that supposedly caused this rapid expansion. For this to work, it is not sufficient to just postulate the existence of this field, the field also must have a suitable potential. This potential is basically a function (of the field) and typically requires several parameters to be specified.
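
To make this concrete with the simplest (and by now disfavored) textbook example: a quadratic potential

    V(\varphi) = \tfrac{1}{2} m^2 \varphi^2

has a single parameter, the inflaton mass m. Planck data disfavor this particular choice because it predicts too large a tensor-to-scalar ratio, which is one reason the debate has shifted to the flatter, plateau-type potentials that appear below.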

Most of the papers published on inflation are then exercises in relating this inflaton potential to today’s cosmological observables, such as the properties of the cosmic microwave background.

Now, in the past week two long papers about all those inflationary models appeared on the arXiv.

The first paper, by Jerome Martin alone, is a general overview of the idea of inflation. It is well-written and a good introduction, but if you are familiar with the topic, nothing new to see here.

The second paper is more technical. It is a thorough re-analysis of the issue of finetuning in inflationary models and a response to the earlier papers by Ijjas, Steinhardt, and Loeb. The main claim of the new paper is that the argument by Ijjas et al, that inflation is “in trouble,” is wrong because it confuses two different types of models, the “plateau models” and the “hilltop models” (referring to different types of the inflaton potential).

According to the new analysis, the models most favored by the data are the plateau models, which do not suffer from finetuning problems, whereas the hilltop models do (in general) suffer from finetuning but are not favored by the data anyway. Hence, they conclude, inflation is doing just fine.
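
For orientation, here are representative textbook forms of the two classes; these are standard examples from the literature, not necessarily the exact parameterizations analyzed in the paper. A plateau potential flattens out at large field values, as in the Starobinsky form

    V(\varphi) = \Lambda^4 \left(1 - e^{-\sqrt{2/3}\,\varphi/M_{\rm Pl}}\right)^2 ,

whereas a hilltop potential has the field rolling off a local maximum,

    V(\varphi) \approx \Lambda^4 \left(1 - (\varphi/\mu)^p\right) , \qquad \varphi \ll \mu .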

The rest of the paper analyses different aspects of finetuning in inflation (such as quantum contributions to the potential), and discusses further problems with inflation, such as the trans-Planckian problem and the measurement problem (as pertaining to cosmological perturbations). It is a very balanced assessment of the situation.

The paper uses standard methods of analysis (Bayesian statistics), but I find this type of model-evaluation generally inconclusive. The problem with such analyses is that they do not take into account the prior probability for the models themselves but only for the initial values and the parameters of the model. Therefore, the results tend to favor models which shove unlikeliness from the initial condition into the model (eg the type of function for the potential).
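
To see where this criticism bites, recall schematically what such a Bayesian comparison computes. For each model M with parameters \theta one evaluates the evidence

    \mathcal{E}(M) = \int d\theta\; P({\rm data}\,|\,\theta, M)\, P(\theta\,|\,M) ,

and two models are then compared through their posterior odds

    \frac{P(M_1\,|\,{\rm data})}{P(M_2\,|\,{\rm data})} = \frac{\mathcal{E}(M_1)}{\mathcal{E}(M_2)} \times \frac{P(M_1)}{P(M_2)} .

The last factor is the prior over the models themselves, and in practice it is simply set to one for every pair. Any implausibility that sits in the choice of the function, rather than in its parameters or initial values, therefore never enters the comparison.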

This is most obvious when it comes to the so-called “curvature problem,” or the question why the universe today is spatially almost flat. You can get this outcome without inflation, but it requires you to start with an exponentially small value of the curvature already (curvature density, to be precise). If you only look at the initial conditions, then that strongly favors inflation.
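
To put rough numbers on this (a back-of-the-envelope sketch using the standard textbook scalings): the curvature density behaves as

    \Omega_k \propto \frac{1}{a^2 H^2} .

During inflation H is nearly constant, so N e-folds of expansion suppress |\Omega_k| by a factor e^{-2N}; for N \approx 60 that is a suppression of roughly 10^{-52}. Without inflation, |\Omega_k| grows during radiation and matter domination, so the observed near-flatness today requires starting with |\Omega_k| of order 10^{-60} near the Planck time. That is the sense in which looking only at the initial conditions strongly favors inflation.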

But of course inflation works by postulating an exponential suppression that comes from the dynamical law. And not only this, it furthermore introduces a field which is strictly speaking unnecessary to get the exponential expansion. I therefore do not buy into the conclusion that inflation is the better explanation. On the very contrary, it adds unnecessary structure.

This is not to say that I think inflation is a bad idea. It’s just that I think cosmologists are focusing on the wrong aspects of the model. Finetuning arguments will forever remain ambiguous because they eventually depend on unjustifiable assumptions. What’s the probability for getting any particular inflaton potential to begin with? Well, if you use the most common measure on the space of all possible functions, then all so-far considered potentials have probability zero. This type of reasoning just does not lead anywhere. So why waste time talking about finetuning?

Instead, let us talk about those predictions whose explanatory value does not depend on finetuning arguments, of which I suspect (but do not know) that the TE-correlations in the CMB power spectrum are an example. Since finetuning debates will remain unsolvable, it would be more fruitful to focus on those benefits of inflation that can be quantified unambiguously.

In any case, I am sure the new paper will make many cosmologists happy, and encourage them to invent many more models for inflation. Sigh.