I get asked a lot what I think about this or that report of an anomaly in particle physics, like the B-meson anomaly at the Large Hadron Collider which made headlines last month or the muon g-2 result that was just all over the news. But I thought instead of just giving you my opinion, which you may or may not trust, I will give you some background so you can gauge the relevance of such headlines yourself. Why are there so many anomalies in particle physics? And how seriously should you take them? That’s what we will talk about today.
The Higgs boson was discovered in 1984. I’m serious. The Crystal Ball Experiment at DESY in Germany saw a particle that fit the expectation already in 1984. It made it into the New York Times with the headline “Physicists report mystery particle”. But the supposed mystery particle turned out to be a data fluctuation. The Higgs boson was actually only discovered in 2012 at the Large Hadron Collider at CERN. And 1984 was quite a year, because supersymmetry, too, was observed and then disappeared again.
How can this happen? Particle physicists calculate what they expect to see in an experiment using the best theory they have at the time. Currently that’s the standard model of particle physics. In 1984, that would have been the standard model minus the particles that hadn’t yet been discovered.
But the theory alone doesn’t tell you what to expect in a measurement. For this you also have to take into account how the experiment is set up, for example what beam and what luminosity, and how the detector works and how sensitive it is. Together, theory, setup, and detector give you an expectation for your measurement. What you are then looking for are deviations from that expectation. Such deviations would be evidence for something new.
Here’s the problem. These expectations are always probabilistic. They don’t tell you exactly what you will see. They only tell you a distribution over possible outcomes. That’s partly due to quantum indeterminism but partly just classical uncertainty.
Therefore, it’s possible that you see a signal when there isn’t one. As an example, suppose I randomly distribute one hundred points on this square. If I divide the square into four quadrants of equal size, I expect about twenty-five points in each quadrant. And indeed that turns out to be about right for this random distribution. Here is another random distribution. Looks reasonable.
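If you want to try this yourself, here is a minimal sketch in Python (assuming numpy is installed) that scatters one hundred random points on a unit square and counts how many land in each quadrant:

```python
# A minimal sketch of the square example, assuming numpy is available:
# scatter 100 points uniformly over a unit square and count how many
# land in each of the four quadrants.
import numpy as np

rng = np.random.default_rng(seed=1)   # arbitrary seed, just for reproducibility
x, y = rng.uniform(size=(2, 100))

# Assign each point to one of the four quadrants and count.
quadrant = (x > 0.5).astype(int) + 2 * (y > 0.5).astype(int)
counts = np.bincount(quadrant, minlength=4)
print(counts)  # typically something close to 25 points per quadrant
```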
Now let’s do this a million times. No, actually, let’s not do this.
I let my computer do this a million times, and here is one of the outcomes. Whoa. That doesn’t look random! It looks like something’s attracting the points to that one square. Maybe it’s new physics!
No, there’s no new physics going on. Keep in mind, this distribution was randomly created. There’s no signal here, it’s all noise. It’s just that every once in a while noise happens to look like a signal.
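Here is the corresponding sketch for the million repetitions, again just an illustration with numpy. No signal is put in anywhere, yet some runs come out looking remarkably lopsided.

```python
# Repeating the random scatter a million times, with no signal put in
# anywhere. The four quadrant counts of each trial follow a multinomial
# distribution with equal probabilities of 1/4.
import numpy as np

rng = np.random.default_rng(seed=2)
n_trials, n_points = 1_000_000, 100

counts = rng.multinomial(n_points, [0.25] * 4, size=n_trials)
max_per_trial = counts.max(axis=1)   # the fullest quadrant in each trial

# The most lopsided outcome of the million trials: instead of ~25 points,
# the fullest quadrant typically holds somewhere in the mid-40s.
print(max_per_trial.max())

# Fraction of trials where some quadrant holds 40 or more points,
# i.e. pure noise that looks like "something attracting the points".
print((max_per_trial >= 40).mean())
```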
This is why particle physicists, like scientists in all other disciplines, give a “confidence level” to their observations, which tells you how “confident” they are that the observation was not a statistical fluctuation. They do this by calculating the probability that the supposed signal could have been created purely by chance. If fluctuations create a signature like the one you are looking for one in twenty times, then the confidence level is 95%. If fluctuations create it one in a hundred times, the confidence level is 99%, and so on. Loosely speaking, the higher the confidence level, the more remarkable the signal.
But exactly at which confidence level you declare a discovery is convention. Since the mid-1990s, particle physicists have used a confidence level of 99.99994 percent for discoveries. That’s roughly a one in two million chance for the signal to have been a random fluctuation. It’s also frequently referred to as 5 σ, where σ is one standard deviation. (Though that relation only holds for the normal distribution.)
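For the curious, this is roughly how the conversion between standard deviations and confidence levels works, assuming a normal distribution and using scipy. The exact numbers depend on whether you count fluctuations in one or both directions.

```python
# A sketch of the sigma-to-confidence-level conversion, assuming a normal
# distribution (scipy needed). These are two-sided probabilities; particle
# physicists often quote one-sided ones, which are half as large.
from scipy.stats import norm

for n_sigma in (3, 4, 5):
    p = 2 * norm.sf(n_sigma)   # chance of a fluctuation of at least n_sigma
    print(f"{n_sigma} sigma: confidence level {100 * (1 - p):.5f}%, "
          f"about 1 in {round(1 / p):,} by chance")
```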
But of course deviations from the expectation already attract attention below the discovery threshold. Here is a little more history. Quarks, for all we currently know, are elementary particles, meaning we haven’t seen any substructure. But a lot of physicists have speculated that quarks might be made up of even smaller things. These smaller particles are often called “preons”. They were found in 1996. The New York Times reported: “Tiniest Nuclear Building Block May Not Be the Quark”. The significance of the signal was about 3 σ, roughly a one in a thousand chance for it to be coincidence and about the same as the current B-meson anomaly. But the supposed quark substructure was a statistical fluctuation.
The same year, the Higgs was discovered again, this time at the Large Electron-Positron Collider at CERN. It was an excess of Higgs-like events that made it to almost 4 σ, which is about a one in sixteen thousand chance of being a random fluctuation. Guess what, that signal vanished too.
Then, in 2003, supersymmetry was “discovered” again, this time in the form of a supposed sbottom quark, the hypothetical supersymmetric partner particle of the bottom quark. That signal too was at about 3 σ but then disappeared.
And in 2015, we saw the di-photon anomaly that made it above 4 σ and disappeared again. There have even been some 6 σ signals that disappeared again, though these had no known interpretation in terms of new physics.
For example, in 1998 the Tevatron at Fermilab measured some events they dubbed “superjets” at 6 σ. They were never seen again. In 2004, HERA at DESY saw pentaquarks, which are particles made of five quarks, with 6 σ significance, but that signal also disappeared. And then there is the muon g-2 anomaly that recently increased from 3.7 to 4.2 σ but still hasn’t crossed the discovery threshold.
Of course not all discoveries that disappeared in particle physics were due to fluctuations. For example, in 1984, the UA1 experiment at CERN saw eleven particle decays of a certain type when they expected only about 3.5. The signature fit what was expected for the top quark. The physicists were quite optimistic they had found the top quark, and this news too made it into the New York Times.
It turned out, though, that they had misestimated the expected number of such events. Really there was nothing out of the ordinary. The top quark wasn’t actually discovered until 1995. A similar thing happened in 2011, when the CDF collaboration at Fermilab saw an excess of events at about 4 σ. That excess wasn’t a fluctuation either; it just required a better understanding of the background.
And then of course there are possible issues with the data analysis. For example, there are various tricks you can play to increase the supposed significance. This basically doesn’t happen in collaboration papers, but you sometimes see individual researchers who use very, erm, creative methods of analysis. And then there can also be systematic problems with the detection, the triggers, the filters, and so on.
In summary: Possible reasons why a discovery might disappear are (a) fluctuations, (b) miscalculations, (c) analysis screw-ups, and (d) systematics. The most common one, just by looking at history, is fluctuations. And why are there so many fluctuations in particle physics? It’s because they have a lot of data. The more data you have, the more likely you are to find fluctuations that look like signals. That, by the way, is why particle physicists introduced the five sigma standard in the first place. Because otherwise they’d constantly have “discoveries” that disappear.
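To see how quickly that happens, here is a small back-of-the-envelope calculation (again with scipy): if you run many independent analyses, the chance that at least one of them shows a 3 σ fluke purely by chance adds up fast.

```python
# A back-of-the-envelope illustration of why more searches mean more flukes,
# assuming the searches are independent (scipy needed).
from scipy.stats import norm

p_fluke = 2 * norm.sf(3)   # chance of a >= 3 sigma fluctuation in one analysis

for n_analyses in (1, 10, 100, 1000):
    p_at_least_one = 1 - (1 - p_fluke) ** n_analyses
    print(f"{n_analyses:4d} analyses: {100 * p_at_least_one:4.1f}% chance "
          f"of at least one 3 sigma fluke")
```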
So what’s with that B-meson anomaly at the LHC that recently made headlines? It’s actually been around since 2015, but recently a new analysis came out, so it was in the news again. It’s currently lingering at 3.1 σ. As we saw, signals of that strength go away all the time, but it’s interesting that this one has stuck around instead of going away. That makes me think it’s either a systematic problem or indeed a real signal.
Note: I have a longer comment about the recent muon g-2 measurement here.