Saturday, April 24, 2021

Particle Physics Discoveries That Disappeared

[This is a transcript of the video embedded below. Parts of the text will not make sense without the graphics in the video.]


I get asked a lot what I think about this or that report of an anomaly in particle physics, like the B-meson anomaly at the Large Hadron Collider, which made headlines last month, or the muon g-2 that was just all over the news. But instead of just giving you my opinion, which you may or may not trust, I thought I'd give you some background to gauge the relevance of such headlines yourself. Why are there so many anomalies in particle physics? And how seriously should you take them? That’s what we will talk about today.

The Higgs boson was discovered in 1984. I’m serious. The Crystal Ball Experiment at DESY in Germany saw a particle that fit the expectation already in 1984. It made it into the New York Times with the headline “Physicists report mystery particle”. But the supposed mystery particle turned out to be a data fluctuation. The Higgs boson was actually only discovered in 2012, at the Large Hadron Collider at CERN. And 1984 was quite a year, because supersymmetry was also “observed” and then disappeared again.

How can this happen? Particle physicists calculate what they expect to see in an experiment using the best theory they have at the time. Currently that’s the standard model of particle physics. In 1984, that’d have been the standard model minus the particles which hadn’t been discovered.

But the theory alone doesn’t tell you what to expect in a measurement. For this you also have to take into account how the experiment is set up, for example what beam and what luminosity, and how the detector works and how sensitive it is. Together – theory, setup, detector – this gives you an expectation for your measurement. What you are then looking for are deviations from that expectation. Such deviations would be evidence for something new.
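
To make this concrete, here is a minimal sketch in Python of how such an expectation comes together. All numbers are invented for illustration and do not correspond to any real analysis; the point is only that the cross section (theory), the integrated luminosity (setup), and the efficiency (detector) multiply into an expected event count.

```python
# Toy expectation: expected events = cross section x integrated luminosity x efficiency.
# All numbers below are hypothetical, chosen only to illustrate the bookkeeping.
cross_section_pb = 0.05       # theory: signal cross section, in picobarns
luminosity_inv_pb = 140.0     # setup: integrated luminosity, in inverse picobarns
efficiency = 0.6              # detector: fraction of signal events actually recorded

expected_events = cross_section_pb * luminosity_inv_pb * efficiency
print(f"Expected signal events: {expected_events:.1f}")  # 4.2 in this toy setup
```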

Here’s the problem. These expectations are always probabilistic. They don’t tell you exactly what you will see. They only tell you a distribution over possible outcomes. That’s partly due to quantum indeterminism but partly just classical uncertainty.

Therefore, it’s possible that you see a signal when there isn’t one. As an example, suppose I randomly distribute one hundred points on this square. If I divide the square into four pieces of equal size, I expect about twenty-five points in each piece. And indeed that turns out to be about correct for this random distribution. Here is another random distribution. Looks reasonable.

Now let’s do this a million times. No, actually, let’s not do this.

I let my computer do this a million times, and here is one of the outcomes. Whoa. That doesn’t look random! It looks like something’s attracting the points to that one square. Maybe it’s new physics!

No, there’s no new physics going on. Keep in mind, this distribution was randomly created. There’s no signal here, it’s all noise. It’s just that every once in a while noise happens to look like a signal.
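
If you want to play with this yourself, here is a minimal sketch of the same experiment in Python. It is not the code used for the video, just an illustration under the same assumptions: points scattered uniformly at random, counted per quadrant, and the whole thing repeated many times.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_quadrant_count(n_points=100):
    """Scatter n_points uniformly on the unit square; return the largest count in any quadrant."""
    x, y = rng.random(n_points), rng.random(n_points)
    quadrant = 2 * (y >= 0.5).astype(int) + (x >= 0.5).astype(int)  # quadrant labels 0..3
    return np.bincount(quadrant, minlength=4).max()

# With 100 points we expect about 25 per quadrant. Repeating the experiment many
# times shows how "signal-like" pure noise can occasionally look.
n_trials = 100_000  # a million trials work too, this just runs faster
most_extreme = max(max_quadrant_count() for _ in range(n_trials))
print("Most points ever seen in a single quadrant:", most_extreme)
# Typically well above 40, even though nothing attracts the points anywhere.
```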

This is why particle physicists, like scientists in all other disciplines, give a “confidence level” to their observation, which tells you how “confident” they are that the observation was not a statistical fluctuation. They do this by calculating the probability that the supposed signal could have been created purely by chance. If fluctuations create a signature like the one you are looking for one in twenty times, then the confidence level is 95%. If fluctuations create it one in a hundred times, the confidence level is 99%, and so on. Loosely speaking, the higher the confidence level, the more remarkable the signal.

But exactly at which confidence level you declare a discovery is convention. Since the mid-1990s, particle physicists have used a confidence level of 99.99994 percent for discoveries. That’s about a one in a million chance for the signal to have been a random fluctuation. It’s also frequently referred to as 5 σ, where σ is one standard deviation. (Though that relation only holds for the normal distribution.)
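
If you want to check these numbers yourself, here is a minimal sketch of the conversion in Python, assuming a normal distribution and two-sided tail probabilities; particle physicists often quote one-sided probabilities instead, which shifts the “one in N” figures somewhat.

```python
# Convert a significance in standard deviations into a confidence level,
# assuming a normal distribution and a two-sided tail probability.
from scipy.stats import norm

for n_sigma in (3, 4, 5):
    p_chance = 2 * norm.sf(n_sigma)   # probability of a fluctuation at least this large
    confidence = 100 * (1 - p_chance)
    print(f"{n_sigma} sigma: {confidence:.5f}% confidence, "
          f"roughly 1 in {round(1 / p_chance):,} from chance alone")
```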

But of course deviations from the expectation attract attention already below the discovery threshold. Here is a little more history. Quarks, for all we currently know, are elementary particles, meaning we haven’t seen any substructure. But a lot of physicists have speculated that quarks might be made up of even smaller things. These smaller particles are often called “preons”. They were found in 1996. The New York Times reported: “Tiniest Nuclear Building Block May Not Be the Quark”. The significance of the signal was about 3 σ, which is roughly a one in a thousand chance of being a coincidence, and about the same as the current B-meson anomaly. But the supposed quark substructure was a statistical fluctuation.

The same year, the Higgs was discovered again, this time at the Large Electron Positron collider at CERN. It was an excess of Higgs-like events that made it to almost 4 σ, which is about a one in sixteen thousand chance of being a random fluctuation. Guess what, that signal vanished too.

Then, in 2003, supersymmetry was “discovered” again, this time in the form of a supposed sbottom quark, which is the hypothetical supersymmetric partner particle of the bottom quark. That signal too was at about 3 σ but then disappeared.

And in 2015, we saw the di-photon anomaly, which made it above 4 σ and disappeared again. There have even been some 6 σ signals that vanished, though these had no known interpretation in terms of new physics.

For example, in 1998 the Tevatron at Fermilab measured some events they dubbed “superjets” at 6 σ. They were never seen again. In 2004, HERA at DESY saw pentaquarks – particles made of five quarks – with 6 σ significance, but that signal also disappeared. And then there is the muon g-2 anomaly, which recently increased from 3.7 to 4.2 σ but still hasn’t crossed the discovery threshold.

Of course not all discoveries that disappeared in particle physics were due to fluctuations. For example, in 1984, the UA1 experiment at CERN saw eleven particle decays of a certain type when they expected only about three and a half. The signature fit what was expected for the top quark. The physicists were quite optimistic they had found the top quark, and this news too made it into the New York Times.

It turned out, though, that they had misestimated the expected number of such events. Really there was nothing out of the ordinary. The top quark wasn’t actually discovered until 1995. A similar thing happened in 2011, when the CDF collaboration at Fermilab saw an excess of events at about 4 σ. These were not fluctuations, but they required a better understanding of the background.

And then of course there are possible issues with the data analysis. For example, there are various tricks you can play to increase the supposed significance. This basically doesn’t happen in collaboration papers, but you sometimes see individual researchers who use very, erm, creative methods of analysis. And then there can be systematic problems with the detection, the triggers, the filters, and so on.

In summary: possible reasons why a discovery might disappear are (a) fluctuations, (b) miscalculations, (c) analysis screw-ups, and (d) systematics. The most common one, just by looking at history, is fluctuations. And why are there so many fluctuations in particle physics? It’s because they have a lot of data. The more data you have, the more likely you are to find fluctuations that look like signals. That, by the way, is why particle physicists introduced the 5 σ standard in the first place: because otherwise they’d constantly have “discoveries” that disappear.
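
As a rough back-of-the-envelope illustration of that last point, assuming many independent measurements (real analyses are not exactly independent, so this only shows the trend): the chance that at least one of them fluctuates past a modest threshold grows quickly with the amount of data.

```python
# Chance that at least one of n independent measurements fluctuates past 3 sigma,
# assuming normal statistics. Purely illustrative.
from scipy.stats import norm

p_3sigma = 2 * norm.sf(3)  # ~0.0027 per measurement
for n_measurements in (10, 100, 1000):
    p_fake_signal = 1 - (1 - p_3sigma) ** n_measurements
    print(f"{n_measurements:5d} measurements: "
          f"{100 * p_fake_signal:.0f}% chance of at least one fake 3-sigma excess")
```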

So what’s with that B-meson anomaly at the LHC that recently made headlines? It’s actually been around since 2015, but recently a new analysis came out and so it was in the news again. It’s currently lingering at 3.1 σ. As we saw, signals of that strength go away all the time, but it’s interesting that this one has stuck around instead of fading. That makes me think it’s either a systematic problem or indeed a real signal.

Note: I have a longer comment about the recent muon g-2 measurement here.

32 comments:

  1. Hi! Nice video as always. What do you mean when you say a result goes away? It's not replicated in further experiments? I thought these kinds of phenomena were rare to begin with, so there is no guarantee that redoing an experiment will give the same result.

    1. When I say a result goes away I mean that it turns out it was a fluctuation in the end. You can find this out by collecting more data. Whether you do this with the same experiment or a different one doesn't matter, in principle. In practice, if you do a different experiment, you can rule out certain types of errors that may come from the method used.

    2. I should maybe add that particle colliders usually have several experiments, each with their own group of scientists. These experiments can (to some extent) act as checks on each other.

    3. YES, SURE... SURE... QUANTUM GRAVITY, SURE... DCN

    4. What do you mean when you say a result goes away?
      in the end all fluctuations collapse

  2. The annoying thing with the g-2 measurements that deviate from the standard model is the claims of another force. That might be the case, but other things need to be eliminated first. Of course, that this is at 4.2σ means it could be a fluctuation, though anything beyond 4σ is "eyebrow raising." If this does make it to 5σ it may be more likely due to fine details in radiative corrections or Feynman diagrams. Massive quarks or hadrons, which contribute to the vacuum as so-called virtual particles, may couple to the muon in ways not correctly computed.

    1. @Lawrence,

      You say

      "Massive quarks or hadrons, which contribute to the vacuum as so-called virtual particles, may couple to the muon in ways not correctly computed."

      Currently, the best estimates of hadronic contributions to the muon g-2 anomaly come from lattice computations. Do you have a better proposal in mind?

    2. It is possible that heavy quark or hadron contributions have not been computed right. The problem might be resolved by work such as https://www.nature.com/articles/s41586-021-03418-1 . These types of possibilities need to be eliminated first before we assume there are new forces and the like.

    3. Lawrence,

      Note that the paper you cite also relies on lattice QCD computations. The authors admit, however, that their lattice findings show some tension with the R-ratio results of other simulations.

      My question to you stands: Are there non-perturbative methods outside lattice QCD that enhance reliability of current hadronic estimates?

    4. “Massive quarks or hadrons, which contribute to the vacuum as so-called virtual particles, may couple to the muon in ways not correctly computed.”

      By “Massive quarks or hadrons” I assume that Lawrence is referring to the two higher-generation quark sets, the strange/charm and bottom/top, and the various hadrons of which they are constituents. That these particles in virtual form might be implicated in the yet-to-disappear muon g-2 anomaly is very intriguing to me. A quarter century ago I had a rather simple idea as to why only three generations of quarks and leptons have ever been detected, no more, no less. But the idea came with a consequence, as elucidated in the next few paragraphs.

      There is a dichotomy within the Standard Model with respect to the higher-to-lower generation decay pathways between the leptonic and hadronic sectors. In the former one sees that ‘flavor’ is always conserved by its transfer to the lightest particle carrying that ‘flavor’ namely a neutrino (prior to the neutrino oscillating to other flavors). For example, the end products of a muon or anti-muon decay always include an anti-muon neutrino or muon neutrino, respectively. In contrast, the decay of a 2nd or 3rd generation quark to a lower generation quark results in the “discarding” of the higher generation flavor. This is formally described as “an allowed diagonal change between quark generations”. Thus, one does not (apparently) see the ultimate transfer of the 2nd or 3rd generations of quark flavors to their respective generation neutrinos.

      But for my idea to make sense a higher quark ‘flavor’ must always be preserved (initially) in the form of a neutrino carrying that flavor away in the final decay products. But the allowed diagonal change between quark generations is in direct contradiction to this, and it scarcely seemed possible that the momentum contribution of an unexpected neutrino (actually neutrino pair) could be overlooked. This was discouraging until I happened to look at some old bubble chamber photographs in the 3rd edition of Donald H. Perkins “Introduction to High Energy Physics”. Back in 1994 I examined ten flavor changing hadronic decays from old bubble chamber photos in this book, in which one unit of strangeness was dissipated. Of these, six on initial inspection appeared as likely candidates for possible momentum disparity. Two more were suggestive of momentum discrepancies. One event was inconclusive since neutral particles were among the decay products complicating the analysis. A final event displayed no obvious evidence of momentum imbalance. The absence of evidence in this case may simply imply that the momentum vector of the proposed neutrinos closely coincided with the mean axis of momentum of the visible decay particles.

      Continued below….

    5. But the best candidate for (possible) momentum discrepancy was on page 115 of Perkins’ book. This illustrated the collision of a negative kaon with the proton nucleus of a hydrogen atom in the chamber, coming to rest at point A. The subsequent decay products of this interaction are a neutral pion, which quickly undergoes Dalitz decay, and a neutral lambda. The neutral lambda traverses a short distance away from point A, decaying at point B to a negative pion and proton, both tracks being of course visible. As the track of the neutral lambda can be readily inferred from the line between points A and B, it’s possible to get some idea of how the momentum is distributed among these particles. Using a 5X magnifying headset, protractor, and ruler, I made careful measurements of the radii of curvature of the final products – the proton and the negative pion – and concluded (with a bit of math) that the transverse momentum component for the negative pion (relative to the trajectory of the neutral lambda) was considerably greater than that for the proton.

      Assuming a neutrino pair (of opposite helicity) ends up in the lambda’s decay products along with the proton and negative pion, how could this come about with the vertex of a Feynman diagram limited to three inputs/outputs? The only way to get around this I realized was to suppose that the lambda (and all other 2nd generation decays) would have to proceed by two stages. In the case of the neutral lambda decay the only sequence that made sense was for the neutral lambda to first morph into a positive charmed lambda, which would then decay to a proton and a neutrino pair. However, the mass of the positive charmed lambda is double that of the neutral lambda, so that would seem impossible. But, perhaps invoking Heisenberg’s Uncertainty Principle, the lifespan of this intermediate charmed lambda could be so short that such a transition would be allowed.

      Now I know that these photos in Perkins’ book are from the ’50s and ’60s, and that much larger plates were carefully examined at the time they were made by trained specialists. And modern particle accelerators use enormous calorimeters in their detector arrays to automatically record the momentum of particles emanating from the collision point. So the idea that an amateur enthusiast could second-guess the original analyses with a small reproduction of the original plates would appear rather unlikely. However, should this idea turn out to be correct, it might just be the source of the persistent g-2 anomaly.

    6. In this paper, which I confess I have not fully read (I have only read a Nature review of it), there is a difference in how the lattice decimations are done so as to more accurately get the RG flow. At least this is my understanding. This accounts for the contributions of heavy hadrons to the correction terms better, or at least the authors claim so. Their results are closer to the FNAL result.

    7. @David: It sounds like what you were looking at was strong interaction physics. I am not sure what that has to do with this. Hadrons do have quantum flavor dynamics, aka the weak interaction, and the flavor of quarks can flip with a W^+ or W^-. The mass of the hadron factors in, and the controversy here may well be with calculating corrections from the vacuum by this process.

    8. I wrote something wrong, or in a confusing way. The main contribution from heavy hadrons to the g-2 is electromagnetic or QED.

    9. Lawrence, actually it was (speculative) weak interaction physics that involved flavor changes in quarks mediated by the Z0 (can’t do subscripts) that I was considering. Before racking last night I suddenly remembered that such transitions are designated “Flavor Changing Neutral Currents” or FCNCs. When I wrote the comments above I had completely forgotten about that facet of weak interaction physics due to not looking into this area of physics for some time (hard to keep everything at the fingertips). In the Standard Model, FCNCs are highly suppressed by the GIM mechanism (1970). Back in the 92-94 time frame when I came up with this idea it was pre-to-just-beginning internet, at least at the place where I was employed. So all the knowledge I gleaned about physics of any kind was from books, mostly popular expositions. Although the GIM mechanism is discussed in Perkins’ book, I somehow overlooked its significance to the idea I was working on until a few years later.

      On learning of the GIM mechanism and dedicated searches for FCNC interactions I realized that put a monkey wrench into the concept that the Z0 could mediate a quark flavor change (from S to U) in the scenario I described above. Still, I’m puzzled why so many of the bubble chamber photos of hadronic decays (where strangeness is lost) give the appearance of missing momentum, and the one that I scrutinized with measuring tools seemed definitely out of balance. This makes me wonder if such putative decays are flying under the radar, somehow being overlooked. The apparent uncorrelated momentum in these photos if carried off by a neutrino pair would of course not be visible. And the extremely short lifetime of the proposed intermediate stage charmed quark would be just as invisible on bubble tracks as the intermediate vector bosons W and Z that mediate the weak interactions. So it all boils down to just how precisely momentum is tallied in the analysis of particle decays. I assume it has to be far too good for any such non-standard interactions to escape notice.

  3. @Dr Hossenfelder: Hi, I'd been wondering what you'd make of the various announcements made about new pentaquarks and other particle things, so thanks for the answer via video.
    I tend to ask myself an atheist's version of 'What Would Jesus Do' – 'What would Sabine say?' – when it comes to articles announcing new particles, etc.

  4. What about physicists that disappeared? One famous case is that of Ettore Majorana, who disappeared on a boat trip from Palermo to Naples in 1938. There have been reported sightings since, but nothing confirmed...

  5. A favorite anomaly: the 17 keV neutrino. So many examples. Often tricky systematics. Or wishful thinking to support bad (and complicated!) analysis.

  6. Hi Sabine, your discussion about “confidence levels” reminds me of a recent post about “probable primes”... One could wonder what it means for an observation to be 99.99994% true and for an integer to be 99.99994% prime!

  7. Fluctuation and uncertainty in reality undercuts the trust that science engenders in its authority. Antiscience is a set of attitudes that involve a rejection of science and the scientific method. People holding antiscientific views do not accept science as an objective method that can generate universal knowledge.

    Antiscience is emerging as a pervasive and highly lethal force, and one that threatens worldwide climate change mitigation efforts, trust in medical science, and undercuts trust in experimentation in all branches of science. In many areas of science, fluctuation and uncertainty overwhelm the ability of experimentation to establish truth.

    Because of this inherent uncertainty in reality, antiscience rejects mainstream scientific views and methods and espouses replacement with unproven or deliberately misleading theories, often for nefarious and political gains. Antiscience targets prominent doctors, ecologists, and scientists and attempts to discredit them. Well established medical practices such as vaccination are called into doubt. Doubt becomes a predicate for inaction.

    Now antiscience is causing mass deaths once again in this COVID-19 pandemic. Beginning in the spring of 2020, the Trump White House launched a coordinated disinformation campaign that dismissed the severity of the epidemic in the United States to maintain tourism to support his business empire.

    Antiscience uses uncertainty, fluctuation, and randomness in one area of science to discredit reliable results in other areas. Mistrust within science becomes endemic, and research that promises to reveal new truths is branded as pseudoscience. Belief and faith replace method and become the criterion for truth. Unless we can cut through uncertainty and recognize the true nature of reality, humankind will be incapable of dealing with it.

    1. The word “truth” is nowhere mentioned in my physics books, only in my philosophy books.

    2. Antiscience is emerging as a pervasive and highly lethal force,

      Not emerging, it's always been with us -- see Sagan's The Demon-Haunted World.

    3. Hi Axil,
      I blame greed and conservatives - those who want to preserve the status quo, especially financial and religious conservatives who are too often both at once. Belief and faith are not to blame; misplaced belief and blind faith are the problems. Reality can be a tricky beast so we need multiple reliable sources of information to give us the correct perspectives.
      The videos and blog here are one such arrow in my quiver.

    4. @Henry Flessner:

      Good point ;-).

      @C Thompson:

      For neo-cons, science was good when it was pulling in profits from fossil fuels. When it rang the alarm bell for human-induced climate change, they became anti-science and climate denialists.

      It's laughable.

    5. @Mozibur

      Yes, and infuriating.
      I am also directing my wrath at what I think of as 'trad-cons', more commonly known as Tories, like the ones I ranted about under the previous post.
      Drat them all.

  8. A couple of questions:
    Is 'quantum uncertainty' another way of describing randomness, or is it something else?
    And, in the linked article, would the 'new physics' dimensions that may be found be like Kaluza-Klein curled-up dimensions, or something different?

  9. As the old stock market joke used to end: "Fluct again."

    (In 1961, stock prices on the NYSE mysteriously dropped quite seriously for the better part of a day and then just as mysteriously recovered. Various regulators investigated since this kind of correlated trading could have been a sign of market manipulation. Back then, the authorities were concerned about market manipulation. They found nothing. Their conclusion: it was just a fluctuation. It became known among investors as The Great Fluctuation, and so it passed into the popular culture of the time with the catch phrase noted above.)

  10. Thanks Sabine. Excellent thoughts as always.
    I was an experimental particle physicist for the first decade of my career. I was a graduate student on a competing experiment during the Zeta fiasco. We decided to modify our detector (even though it was for charged particles) and dedicate some run time to investigate (in vain). I still remember the excited cries of “Iron Clad Nobel Prize” during our group meeting, which was also crashed by some theorists, equally as excitable.

    I think you omitted several important points as to the causes of these transient “discoveries.” Fluctuations are inevitable (as are death, taxes and false positives), but it is going public with them that is the real issue. This is probably due to ego and the fact that only “discoveries” get Nobel prizes. Usually new effects appear weakly at first, and it is not always clear what to do. When money and prestige are involved it's not always a bad decision to be aggressive, especially since there is almost no penalty for being wrong. As an experimentalist, the golden rule should be to do the measurement correctly AND draw the right conclusion.

    During my postdoc I got involved personally in one such incident. We would measure effects at a resonance and then subtract data collected at an energy below resonance to attribute effects directly to “on” resonance physics. Competing experiment A saw an effect on resonance and not below. Experiment C saw the same. Both teams attributed this to the resonance even though it was kinematically forbidden by the on resonance physics. Experiment C2 (which was a better detector with more data) on which I did the analysis, saw the effect both on and off resonance in a way that it could all be explained by the below resonance data (nothing left after subtraction). It was not kinematically forbidden but just thought very unlikely. Of all this only the C analysis was actually ever published in the literature (the Zeta was only published in the NYTimes). The moral is that even effects seen by more than one experiment can have unfortunate fluctuations.

  11. The journal Nature has published new theoretical calculations that contrast sharply with the "consensus" theoretical value of 2020 and, in fact, nullify the discrepancy.

    Leading hadronic contribution to the muon magnetic moment from lattice QCD

    Abstract
    The standard model of particle physics describes the vast majority of experiments and observations involving elementary particles. Any deviation from its predictions would be a sign of new, fundamental physics. One long-standing discrepancy concerns the anomalous magnetic moment of the muon, a measure of the magnetic field surrounding that particle. Standard-model predictions [1] exhibit disagreement with measurements [2] that is tightly scattered around 3.7 standard deviations. Today, theoretical and measurement errors are comparable; however, ongoing and planned experiments aim to reduce the measurement error by a factor of four. Theoretically, the dominant source of error is the leading-order hadronic vacuum polarization (LO-HVP) contribution. For the upcoming measurements, it is essential to evaluate the prediction for this contribution with independent methods and to reduce its uncertainties. The most precise, model-independent determinations so far rely on dispersive techniques, combined with measurements of the cross-section of electron–positron annihilation into hadrons [3,4,5,6]. To eliminate our reliance on these experiments, here we use ab initio quantum chromodynamics (QCD) and quantum electrodynamics simulations to compute the LO-HVP contribution. We reach sufficient precision to discriminate between the measurement of the anomalous magnetic moment of the muon and the predictions of dispersive methods. Our result favours the experimentally measured value over those obtained using the dispersion relation. Moreover, the methods used and developed in this work will enable further increased precision as more powerful computers become available.

    Cite this article
    Borsanyi, S., Fodor, Z., Guenther, J.N. et al. Leading hadronic contribution to the muon magnetic moment from lattice QCD. Nature (2021).
    https://doi.org/10.1038/s41586-021-03418-1

    https://www.nature.com/articles/s41586-021-03418-1

  12. When money and prestige are involved it's not always a bad decision to be aggressive, especially since there is almost no penalty for being wrong. As an experimentalist, the golden rule should be to do the measurement correctly AND draw the right conclusion.

    There shouldn't be a penalty for being wrong -- all scientists get things wrong. It's only by opening up one's results to the community at large that one can best find out if they got something wrong, and if so, where.

  13. Large Hadron Collider: What's Next?

    Smashing more protons at one shot. Lots of fundamental particles of different energy would be emancipated and duly measured.

    Looking for new 'particles' again?

  14. Got up this morning and remembered that there is an outstanding neutrino physics anomaly – the excess of electron neutrino and electron anti-neutrino events seen in the MiniBooNE experiment. This signal has a (combined) significance of 4.7 sigma as detailed in the abstract of the arXiv paper linked below. When I google searched the MiniBooNE experiment this morning I could see that I last looked at this paper on 18 November 2019. At that time I was keenly interested if this anomaly might be explained by the model broached on 27 April 2021 upthread. In that model a neutrino pair is speculated to be released in the end products of 2nd generation hadronic decays above and beyond those that the Standard Model expects to be released. This pair would consist of a 1st generation electron neutrino (or anti-neutrino) and a 2nd generation muon neutrino (or anti-neutrino) for the various quantum numbers to work out. Going from memory 3rd generation to 2nd generation hadronic decays usually proceed by either the strong or electromagnetic interactions, both of which conserve flavor.

    Looking at Table 1 in the paper back in 2019, I could see that there were several types of kaons (2nd generation mesons) listed, whose decays comprise some of the neutrino sources anticipated in the experiment. So the excess electron-type neutrinos predicted in my model for 2nd generation hadronic decays might possibly be the origin of the excess of electron neutrino/anti-neutrino events seen in the MiniBooNE experiment. I’m pretty sure I wrote some notes on this. I’ll see if I can find them after doing some chores this afternoon.

    https://arxiv.org/abs/1805.12028

