
Friday, June 30, 2017

To understand the foundations of physics, study numerology

Numbers speak. [Img Src]
Once upon a time, we had problems in the foundations of physics. Then we solved them. That was 40 years ago. Today we spend most of our time discussing non-problems.

Here is one of these non-problems. Did you know that the universe is spatially almost flat? There is a number in the cosmological concordance model called the “curvature parameter” that, according to current observation, has a value of 0.000 plus-minus 0.005.

Why is that a problem? I don’t know. But here is the story that cosmologists tell.

From the equations of General Relativity you can calculate the dynamics of the universe. This means you get relations between the values of observable quantities today and the values they must have had in the early universe.

The contribution of curvature to the dynamics, it turns out, increases relative to that of matter and radiation as the universe expands. This means for the curvature parameter to be smaller than 0.005 today, it must have been smaller than 10^-60 or so briefly after the Big Bang.
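If you want to see where the sixty orders of magnitude come from, here is a rough sketch using nothing but the standard Friedmann equations (textbook cosmology, no claim to the precise exponent). The curvature density parameter is

\Omega_k(a) \equiv -\frac{k}{a^2 H^2}\,, \qquad |\Omega_k| \propto a^2 \ \text{(radiation domination)}\,, \qquad |\Omega_k| \propto a \ \text{(matter domination)}\,,

and since the scale factor has grown by dozens of orders of magnitude since the earliest times, mostly during radiation domination, extrapolating today’s bound backwards shrinks |\Omega_k| by roughly the sixty orders of magnitude quoted above.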

That, so the story goes, is bad, because where would you get such a small number from?

Well, let me ask in return, where do we get any number from anyway? Why is 10^-60 any worse than, say, 1.778, or exp(67π)?

That the curvature must have had a small value in the early universe is called the “flatness problem,” and since it’s on Wikipedia it’s officially more real than me. And it’s an important problem. It’s important because it justifies the many attempts to solve it.

The presently most popular solution to the flatness problem is inflation – a rapid period of expansion briefly after the Big Bang. Because inflation decreases the relevance of curvature contributions dramatically – by something like 200 orders of magnitude or so – you no longer have to start with some tiny value. Instead, if you start with any curvature parameter smaller than 10^197, the value today will be compatible with observation.

Ah, you might say, but clearly there are more numbers smaller than 10^197 than there are numbers smaller than 10^-60, so isn’t that an improvement?

Unfortunately, no. There are infinitely many numbers in both cases. Besides that, it’s totally irrelevant. Whatever the curvature parameter, the probability to get that specific number is zero regardless of its value. So the argument is bunk. Logical mush. Plainly wrong. Why do I keep hearing it?

Worse, if you want to pick parameters for our theories according to a uniform probability distribution on the real axis, then all parameters would come out infinitely large with probability one. Sucks. Also, doesn’t describe observations*.

And there is another problem with that argument, namely, what probability distribution are we even talking about? Where did it come from? Certainly not from General Relativity because a theory can’t predict a distribution on its own theory space. More logical mush.

If you have trouble seeing the trouble, let me ask the question differently. Suppose we managed to measure the curvature parameter today to a precision of 60 digits after the decimal point. Yeah, it’s not going to happen, but bear with me. Now you’d have to explain all these 60 digits – but that is as fine-tuned as a zero followed by 60 zeroes would have been!

Here is a different example of this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.

Now consider for a moment that the mass of the Higgs was actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a finetuning problem. So, erm, what was the problem again?

The cosmological constant problem is another such confusion. If you don’t know how to calculate that constant – and we don’t, because we don’t have a theory for Planck scale physics – then it’s a free parameter. You go and measure it and that’s all there is to say about it.

And there are more numerological arguments in the foundations of physics, all of which are wrong, wrong, wrong for the same reasons. The unification of the gauge couplings. The so-called WIMP-miracle (RIP). The strong CP problem. All these are numerical coincidences that supposedly need an explanation. But you can’t speak about coincidence without quantifying a probability!

Do my colleagues deliberately lie when they claim these coincidences are problems, or do they actually believe what they say? I’m not sure what’s worse, but suspect most of them actually believe it.

Many of my readers like to jump to conclusions about my opinions. But you are not one of them. You and I, therefore, both know that I did not say that inflation is bunk. Rather I said that the most common arguments for inflation are bunk. There are good arguments for inflation, but that’s a different story and shall be told another time.

And since you are among the few who actually read what I wrote, you also understand I didn’t say the cosmological constant is not a problem. I just said its value isn’t the problem. What actually needs an explanation is why it doesn’t fluctuate. Which is what vacuum fluctuations should do, and what gives rise to what Niayesh called the cosmological non-constant problem.

Enlightened as you are, you would also never think I said we shouldn’t try to explain the value of some parameter. It is always good to look for better explanations for the assumptions underlying current theories – where by “better” I mean either simpler or able to explain more.

No, what draws my ire is that most of the explanations my colleagues put forward aren’t any better than just fixing a parameter through measurement – they are worse. The reason is that the problem they are trying to solve – the smallness of some numbers – isn’t a problem. It’s merely a property they perceive as inelegant.

I therefore have a lot of sympathy for philosopher Tim Maudlin who recently complained that “attention to conceptual clarity (as opposed to calculational technique) is not part of the physics curriculum” which results in inevitable confusion – not to mention waste of time.

In response, a pseudonymous commenter remarked that a discussion between a physicist and a philosopher of physics is “like a debate between an experienced car mechanic and someone who has read (or perhaps skimmed) a book about cars.”

Trouble is, in the foundations of physics today most of the car mechanics are repairing cars that run just fine – and then bill you for it.

I am not opposed to using aesthetic arguments as research motivations. We all have to get our inspiration from somewhere. But I do think it’s bad science to pretend numerological arguments are anything more than appeals to beauty. That very small or very large numbers require an explanation is a belief – and it’s a belief that has been adopted by the vast majority of the community. That shouldn’t happen in any scientific discipline.

As a consequence, high energy physics and cosmology are now populated with people who don’t understand that finetuning arguments have no logical basis. The flatness “problem” is preached in textbooks. The naturalness “problem” is all over the literature. The cosmological constant “problem” is on every popular science page. And so the myths live on.

If you break down the numbers, it’s me against ten-thousand of the most intelligent people on the planet. Am I crazy? I surely am.


*Though that’s exactly what happens with bare values.

Wednesday, June 14, 2017

What’s new in high energy physics? Clockworks.

Clockworks. [Img via dwan1509].
High energy physics has phases. I don’t mean phases like matter has – solid, liquid, gaseous and so on. I mean phases like cranky toddlers have: One week they eat nothing but noodles, the next week anything as long as it’s white, then toast with butter but it must be cut into triangles.

High energy physics is like this. Twenty years ago, it was extra dimensions, then we had micro black holes, unparticles, little Higgses – and the list goes on.

But there hasn’t been a big, new trend since the LHC falsified everything that was falsifiable. It’s like particle physics stepped over the edge of a cliff but hasn’t looked down and now just walks on nothing.

The best candidate for a new trend that I saw in the past years is the “clockwork mechanism,” though the idea just took a blow and I’m not sure it’ll go much farther.

The origins of the model go back to late 2015, when the term “clockwork mechanism” was coined by Kaplan and Rattazzi, though Cho and Im pursued a similar idea and published it at almost the same time. In August 2016, clockworks were picked up by Giudice and McCullough, who advertised the model as “a useful tool for model-building applications” that “offers a solution to the Higgs naturalness problem.”

Gears. Img Src: Giphy.
The Higgs naturalness problem, to remind you, is that the mass of the Higgs receives large quantum corrections. The Higgs is the only particle in the standard model that suffers from this problem because it’s the only scalar. These quantum corrections can be cancelled by subtracting a constant so that the remainder fits the observed value, but then the constant would have to be very finely tuned. Most particle physicists think that this is too much of a coincidence and hence search for other explanations.
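Schematically, and with all details suppressed, the worry is that the observed mass arises as a difference of two independent terms,

m_H^2\big|_{\rm observed} = m_H^2\big|_{\rm bare} + \delta m_H^2\,, \qquad \delta m_H^2 \sim \frac{\lambda}{16\pi^2}\,\Lambda^2\,,

where \Lambda is the scale up to which the standard model is assumed to hold and \lambda stands for the relevant couplings. If \Lambda is far above the electroweak scale, the two terms on the right have to cancel to many digits to leave the observed 125 GeV.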

Before the LHC turned on, the most popular solution to the Higgs naturalness issue was that some new physics would show up in the energy range comparable to the Higgs mass. We now know, however, that there’s no new physics nearby, and so the Higgs mass has remained unnatural.

Clockworks are a mechanism to create very small numbers in a “natural” way, that is from numbers that are close to 1. This can be done by copying a field multiple times and then coupling each copy to two neighbors so that they form a closed chain. This is the “clockwork” and it is assumed to have couplings with values close to 1 which are, however, asymmetric among the chain neighbors.

The clockwork’s chain of fields has eigenmodes that can be obtained by diagonalizing the mass matrix. These modes are the “gears” of the clockwork and they contain one massless particle.

The important feature of the clockwork is now that this massless particle’s mode has a coupling that scales with the clockwork’s coupling taken to the N-th power, where N is the number of clockwork gears. This means even if the original clockwork coupling was only a little smaller than 1, the coupling of the lightest clockwork mode becomes small very fast when the clockwork grows.

Thus, clockworks are basically a complicated way to make a number of order 1 small by exponentiating it.
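To make the exponential suppression concrete, here is a minimal numerical sketch in Python of a scalar clockwork chain. The conventions are my own toy choice – nearest-neighbor terms of the form (m^2/2)(φ_j − q φ_{j+1})^2 – and differ in details from the papers mentioned above:

import numpy as np

def clockwork_mass_matrix(N, q, m=1.0):
    # N+1 fields phi_0 ... phi_N with potential (m^2/2) * sum_j (phi_j - q*phi_{j+1})^2
    M2 = np.zeros((N + 1, N + 1))
    for j in range(N):
        M2[j, j] += m**2
        M2[j + 1, j + 1] += (q * m)**2
        M2[j, j + 1] -= q * m**2
        M2[j + 1, j] -= q * m**2
    return M2

N, q = 10, 3.0
vals, vecs = np.linalg.eigh(clockwork_mass_matrix(N, q))
zero_mode = vecs[:, 0]  # eigenvector of the smallest (near-zero) eigenvalue
print("lightest mass^2 eigenvalue:", vals[0])          # numerically ~ 0: the massless gear
print("zero-mode component on the last site:", abs(zero_mode[-1]))
print("naive expectation q**(-N):", q**(-N))

For N = 10 and q = 3 the zero mode’s overlap with the field at the end of the chain comes out at roughly 3^-10, i.e. a number of order 10^-5 produced from order-one inputs – which is the whole point of the construction.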

I’m an outspoken critic of arguments from naturalness (and have been long before we had the LHC data) so it won’t surprise you to hear that I am not impressed. I fail to see how choosing one constant to match observation is supposedly worse than introducing not only a new constant, but also N copies of some new field with a particular coupling pattern.

Either way, by March 2017, Ben Allanach reported from the Rencontres de Moriond – the most important annual conference in particle physics – that clockworks are “getting quite a bit of attention” and are “new fertile ground.”

Ben is right. Clockworks contain one light and weakly coupled mode – difficult to detect because of the weak coupling – and a spectrum of strongly coupled but massive modes – difficult to detect because they’re massive. That makes the model appealing because it will remain impossible to rule it out for a while. It is, therefore, a perfect playground for phenomenologists.

And sure enough, the arXiv has since seen further papers on the topic. There’s clockwork inflation and clockwork dark matter, a clockwork axion and clockwork composite Higgses – you get the picture.

But then, in April 2017, a criticism of the clockwork mechanism appears on the arXiv. Its authors Craig, Garcia Garcia, and Sutherland point out that the clockwork mechanism can only be used if the fields in the clockwork’s chain have abelian symmetry groups. If the group isn’t abelian the generators will mix together in the zero mode, and maintaining gauge symmetry then demands that all couplings be equal to one. This severely limits the application range of the model.

A month later, Giudice and McCullough reply to this criticism essentially by saying “we know this.” I have no reason to doubt it, but I still found the Craig et al criticism useful for clarifying what clockworks can and can’t do. This means in particular that the supposed solution to the hierarchy problem does not work as desired because to maintain general covariance one is forced to put a hierarchy of scales into the coupling already.

I am not sure whether this will discourage particle physicists from pursuing the idea further or whether more complicated versions of clockworks will be invented to save naturalness. But I’m confident that – like a toddler’s phase – this too shall pass.

Wednesday, June 07, 2017

Dear Dr B: What are the chances of the universe ending out of nowhere due to vacuum decay?

    “Dear Sabine,

    my names [-------]. I'm an anxiety sufferer of the unknown and have been for 4 years. I've recently came across some articles saying that the universe could just end out of no where either through false vacuum/vacuum bubbles or just ending and I'm just wondering what the chances of this are occurring anytime soon. I know it sounds silly but I'd be dearly greatful for your reply and hopefully look forward to that

    Many thanks

    [--------]”


Dear Anonymous,

We can’t predict anything.

You see, we make predictions by seeking explanations for available data, and then extrapolating the best explanation into the future. It’s called “abductive reasoning,” or “inference to the best explanation” and it sounds reasonable until you ask why it works. To which the answer is “Nobody knows.”

We know that it works. But we can’t justify inference with inference, hence there’s no telling whether the universe will continue to be predictable. Consequently, there is also no way to exclude that tomorrow the laws of nature will stop and planet Earth will fall apart. But do not despair.

Francis Bacon – widely acclaimed as the first to formulate the scientific method – might have reasoned his way out by noting there are only two possibilities. Either the laws of nature will break down unpredictably or they won’t. If they do, there’s nothing we can do about it. If they don’t, it would be stupid not to use predictions to improve our lives.

It’s better to prepare for a future that you don’t have than to not prepare for a future you do have. And science is based on this reasoning: We don’t know why the universe is comprehensible and why the laws of nature are predictive. But we cannot do anything about unknown unknowns anyway, so we ignore them. And if we do that, we can benefit from our extrapolations.

Just how well scientific predictions work depends on what you try to predict. Physics is the currently most predictive discipline because it deals with the simplest of systems, those whose properties we can measure to high precision and whose behavior we can describe with mathematics. This enables physicists to make quantitatively accurate predictions – if they have sufficient data to extrapolate.

The articles that you read about vacuum decay, however, are unreliable extrapolations of incomplete evidence.

Existing data in particle physics are well-described by a field – the Higgs-field – that fills the universe and gives masses to elementary particles. This works because the value of the Higgs-field is different from zero even in vacuum. We say it has a “non-vanishing vacuum expectation value.” The vacuum expectation value can be calculated from the masses of the known particles.
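For concreteness – these are standard numbers, not specific to any of the articles you read – the vacuum expectation value follows from the measured strength of the weak interaction,

v = \left(\sqrt{2}\, G_F\right)^{-1/2} \approx 246\ \text{GeV}\,,

and the masses of the W and Z bosons and of the fermions are, up to numerical factors, given by v multiplied by their respective couplings.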

In the currently most widely used theory for the Higgs and its properties, the vacuum expectation value is non-zero because it has a potential with a local minimum whose value is not at zero.

We do not, however, know that the minimum which the Higgs currently occupies is the only minimum of the potential and – if the potential has another minimum – whether the other minimum would be at a smaller energy. If that was so, then the present state of the vacuum would not be stable, it would merely be “meta-stable” and would eventually decay to the lowest minimum. In this case, we would live today in what is called a “false vacuum.”

Image Credits: Gary Scott Watson.


If our vacuum decays, the world will end – I don’t know a more appropriate expression. Such a decay, once triggered, releases an enormous amount of energy – and it spreads at the speed of light, tearing apart all matter it comes in contact with, until all vacuum has decayed.

How can we tell whether this is going to happen?

Well, we can try to measure the properties of the Higgs’ potential and then extrapolate it away from the minimum. This works much like Taylor series expansions, and it has the same pitfalls. Indeed, making predictions about the minima of a function based on a polynomial expansion is generally a bad idea.

Just look for example at the Taylor series of the sine function. The full function has an infinite number of minima at exactly the same value but you’d never guess from the first terms in the series expansion. First it has one minimum, then it has two minima of different value, then again it has only one – and the higher the order of the expansion the more minima you get.
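If you want to see this for yourself, here is a small Python sketch (my own illustration; it counts minima in a finite window rather than globally):

import math
import numpy as np

# Truncated Taylor polynomials of sin(x) around 0: x - x^3/3! + x^5/5! - ...
def sin_taylor(x, order):
    total = np.zeros_like(x)
    for k in range(1, order + 1, 2):
        total += (-1) ** ((k - 1) // 2) * x**k / math.factorial(k)
    return total

x = np.linspace(-10, 10, 20001)
for order in (3, 5, 7, 9, 11):
    y = sin_taylor(x, order)
    # count strict local minima of the truncated polynomial in this window
    n_min = np.sum((y[1:-1] < y[:-2]) & (y[1:-1] < y[2:]))
    print(f"order {order:2d}: {n_min} local minima on [-10, 10]")

The number of minima the truncation shows keeps changing as you add terms, even though the full sine function has infinitely many minima of exactly the same depth.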

The situation for the Higgs’ potential is more complicated because the coefficients are not constant, but the argument is similar. If you extract the best-fit potential from the available data and extrapolate it to other values of the Higgs-field, then you find that our present vacuum is meta-stable.

The figure below shows the situation for the current data (figure from this paper). The horizontal axis is the Higgs mass, the vertical axis the mass of the top-quark. The current best-fit is the upper left red point in the white region labeled “Metastability.”
Figure 2 from Bednyakov et al, Phys. Rev. Lett. 115, 201802 (2015).


This meta-stable vacuum has, however, a ridiculously long lifetime of about 10^600 times the current age of the universe, give or take a few billion billion billion years. This means that the vacuum will almost certainly not decay until all stars have burnt out.

However, this extrapolation of the potential assumes that there aren’t any unknown particles at energies higher than what we have probed, and no other changes to physics as we know it either. And there is simply no telling whether this assumption is correct.

The analysis of vacuum stability is not merely an extrapolation of the presently known laws into the future – which would be justified – it is also an extrapolation of the presently known laws into an untested energy regime – which is not justified. This stability debate is therefore little more than a mathematical exercise, a funny way to quantify what we already know about the Higgs’ potential.

Besides, from all the ways I can think of humanity going extinct, this one worries me least: It would happen without warning, it would happen quickly, and nobody would be left behind to mourn. I worry much more about events that may cause much suffering, like asteroid impacts, global epidemics, nuclear war – and my worry-list goes on.

Not all worries can be cured by rational thought, but since I double-checked you want facts and not comfort, fact is that current data indicates our vacuum is meta-stable. But its decay is an unreliable prediction based on the unfounded assumption that there either are no changes to physics at energies beyond the ones we have tested, or that such changes don’t matter. And even if you buy this, the vacuum almost certainly wouldn’t decay as long as the universe is hospitable for life.

Particle physics is good for many things, but generating potent worries isn’t one of them. The biggest killer in physics is still the 2nd law of thermodynamics. It will get us all, eventually. But keep in mind that the only reason we play the prediction game is to get the best out of the limited time that we have.

Thanks for an interesting question!

Monday, March 27, 2017

Book review: “Anomaly!” by Tommaso Dorigo

Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab
Tommaso Dorigo
World Scientific Publishing Europe Ltd (November 17, 2016)

Tommaso Dorigo is a familiar name in the blogosphere. Over at “A Quantum Diaries Survivor”, he reliably comments on everything going on in particle physics. Located in Venice, Tommaso is a member of the CMS collaboration at CERN and was part of the CDF collaboration at Tevatron – a US particle collider that ceased operation in 2011.

Anomaly! is Tommaso’s first book and it chronicles his time in the CDF collaboration from the late 1980s until 2000. This covers the measurement of the mass of the Z-boson, the discovery of the top-quark and the – eventually unsuccessful – search for supersymmetric particles. In his book, Tommaso weaves together the scientific background about particle physics with brief stories of the people involved and their – often conflict-laden – discussions.

The first chapters of the book contain a brief summary of the standard model and quantum field theory and can be skipped by those familiar with these topics. The book is mostly self-contained in that Tommaso provides all the knowledge necessary to understand what’s going on (with a few omissions that I believe don’t matter much). But the pace is swift. I sincerely doubt a reader without background in particle physics will be able to get through the book without re-reading some passages many times.

It is worth emphasizing that Tommaso is an experimentalist. I think I hadn’t previously realized how much the popular science literature in particle physics has, so far, been dominated by theorists. This makes Anomaly! a unique resource. Here, the reader can learn how particle physics is really done! From the various detectors and their designs, to parton distribution functions, to triggers and Monte Carlo simulations, Tommaso doesn’t shy away from going into all the details. At the same time, his anecdotes showcase how a large collaboration like CDF – with more than 500 members – works.

That having been said, the book is also somewhat odd in that it simply ends without summary, or conclusion, or outlook. Given that the events Tommaso writes about date back 30 years, I’d have been interested to hear whether something has changed since. Is the software development now better managed? Is there still so much competition between collaborations? Is the relation to the media still as fraught? I got the impression an editor pulled the manuscript out from under Tommaso’s still-typing fingers because no end was in sight 😉

Besides this, I have little to complain about. Tommaso’s writing style is clear and clean, and also in terms of structure – mostly chronological – nothing seems amiss. My major criticism is that the book doesn’t have any references, meaning the reader is stuck there without any guide for how to proceed in case he or she wants to find out more.

So should you, or should you not buy the book? If you’re considering becoming a particle physicist, I strongly recommend you read this book to find out if you fit the bill. And if you’re a science writer who regularly reports on particle physics, I also recommend you read this book to get an idea of what’s really going on. All the rest of you I have to warn that while the book is packed with information, it’s for the lovers. It’s about how the author tracked down a factor of 1.25^2 to explain why his data analysis came up with 588 rather than 497 Z → bb̄ decays. And you’re expected to understand why that’s exciting.

On a personal note, the book brought back a lot of memories. All the talk of Herwig and Pythia, of Bjorken-x, rapidity and pseudorapidity, missing transverse energy, the CTEQ tables, hadronization, lost log-files, missed back-ups, and various fudge-factors reminded me of my PhD thesis – and of all the reasons I decided that particle physics isn’t for me.

[Disclaimer: Free review copy.]

Thursday, February 23, 2017

Book Review: “The Particle Zoo” by Gavin Hesketh

The Particle Zoo: The Search for the Fundamental Nature of Reality
By Gavin Hesketh
Quercus (1 Sept. 2016)

The first word in Gavin Hesketh’s book The Particle Zoo is “Beauty.” I read the word, closed the book, and didn’t reopen it for several months. Having just finished writing a book about the role of beauty in theoretical physics myself, I found it was absolutely the last thing I wanted to hear about.

I finally gave Hesketh’s book a second chance and took it along on a recent flight. It turned out that once I got past the somewhat nauseating sales pitch at the beginning, the content improved considerably.

Hesketh provides a readable and accessible no-nonsense introduction to the standard model and quantum field theory. He explains everything as well as possible without using equations.

The author is an experimentalist and part of the LHC’s ATLAS collaboration. The Particle Zoo also has a few paragraphs about what it is like to work in such large collaborations. Personally, I found this the most interesting part of the book. Hesketh also does a great job describing how the various types of particle detectors work.

Had the book ended here, it would have been a well-done job. But Hesketh goes on to elaborate on physics beyond the standard model. And there he’s clearly out of his depth.

Problems start when he begins laying out the shortcomings of the standard model, leaving the reader with the impression that it’s non-renormalizable. I suspect (or hope) he wasn’t referring to non-renormalizability but maybe Landau poles or the non-convergence of the perturbative expansion, but the explanation is murky.

Murky is bad, but wrong is worse. And wrong follows. For example, to generate excitement for new physics, Hesketh writes:
“Some theories suggest that antimatter responds to gravity in a different way: matter and antimatter may repel each other… [W]hile this is a strange idea, so far it is one that we cannot rule out.”
I do not know of any consistent theory that suggests antimatter responds differently to gravity than matter, and I say that as one of the three theorists on the planet who have worked on antigravity. I have no idea what Hesketh is referring to in this paragraph.

It does not help that “The Particle Zoo” does not have any references. I understand that a popular science book isn’t a review article, but I would expect that a scientist at least quotes sources for historical facts and quotations, which isn’t the case.

He then confuses a “Theory of Everything” with quantum gravity, and about supersymmetry (SuSy) he writes:
“[I]f SuSy is possible and it makes everything much neater, it really should exist. Otherwise it seems that nature has apparently gone out of its way to avoid it, making the equations uglier at the same time, and we would have to explain why that is.”
Which is a statement that should be embarrassing for any scientist to make.

Hesketh’s attitude to supersymmetry is however somewhat schizophrenic because he later writes that:
“[T]his is really why SuSy has lived for so long: whenever an experiment finds no signs of the super-particles, it is possible merely to adjust some of these free parameters so that these super-particles must be just a little bit heavier, just a little bit further out of reach. By never being specific, it is never wrong.”
Only to then reassure the reader
“SuSy may end up as another beautiful theory destroyed by an ugly fact, and we should find out in the next years.”
I am left to wonder which fact he thinks will destroy a theory that he just told us is never wrong.

Up to this point I might have blamed the inaccuracies on an editor, but then Hesketh goes on to explain the (ADD model of) large extra dimensions and claims that it solves the hierarchy problem. This isn’t so – the model reformulates one hierarchy (the weakness of gravity) as another hierarchy (extra dimensions much larger than the Planck length) and hence doesn’t solve the problem. I am not sure whether he is being intentionally misleading or really didn’t understand this, but either way, it’s wrong.

Hesketh furthermore states that if there were such large extra dimensions the LHC might produce microscopic black holes – but he doesn’t mention with a single word that not the faintest evidence for this has been found.

When it comes to dark matter, he waves away the possibility that the observations are due to a modification of gravity with the magic word “Bullet Cluster” – a distortion of facts about which I have previously complained. I am afraid he actually might not know any better since this myth has been so widely spread, but if he doesn’t care to look at the subject he shouldn’t write a book about it. To round things up, Hesketh misspells “Noether” as “Nöther,” though I am willing to believe that this egg was laid by someone else.

In summary, the first two thirds of the book about the standard model, quantum field theory, and particle detectors are recommendable. But when it comes to new physics the author doesn’t know what he’s talking about.

Update April 7th 2017: Most of these bummers have been fixed in the paperback edition.

Thursday, January 19, 2017

Dark matter’s hideout just got smaller, thanks to supercomputers.

Lattice QCD. Artist’s impression.
Physicists know they are missing something. Evidence that something’s wrong has piled up for decades: Galaxies and galaxy clusters don’t behave like Einstein’s theory of general relativity predicts. The observed discrepancies can be explained either by modifying general relativity, or by the gravitational pull of some, so-far unknown type of “dark matter.”

Theoretical physicists have proposed many particles which could make up dark matter. The most popular candidates are a class called “Weakly Interacting Massive Particles” or WIMPs. They are popular because they appear in supersymmetric extensions of the standard model, and also because they have a mass and interaction strength in just the right ballpark for dark matter. There have been many experiments, however, trying to detect the elusive WIMPs, and one after the other reported negative results.

The second popular dark matter candidate is a particle called the “axion,” and the worse the situation looks for WIMPs the more popular axions are becoming. Like WIMPs, axions weren’t originally invented as dark matter candidates.

The strong nuclear force, described by Quantum ChromoDynamics (QCD), could violate a symmetry called “CP symmetry,” but it doesn’t. An interaction term that could give rise to this symmetry-violation therefore has a pre-factor – the “theta-parameter” (θ) – that is either zero or at least very, very small. That nobody knows just why the theta-parameter should be so small is known as the “strong CP problem.” It can be solved by promoting the theta-parameter to a field which relaxes to the minimum of a potential, thereby setting the coupling to the troublesome term to zero, an idea that dates back to Peccei and Quinn in 1977.
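For reference – this is standard notation, nothing specific to the paper discussed below – the term in question is

\mathcal{L}_\theta = \theta\, \frac{g_s^2}{32\pi^2}\, G^a_{\mu\nu}\tilde G^{a\,\mu\nu}\,,

and the non-observation of a neutron electric dipole moment requires |\theta| to be smaller than roughly 10^-10.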

Much like the Higgs-field, the theta-field is then accompanied by a particle – the axion – as was pointed out by Steven Weinberg and Frank Wilczek in 1978.

The original axion was ruled out within a few years after being proposed. But theoretical physicists quickly put forward more complicated models for what they called the “hidden axion.” It’s a variant of the original axion that is more weakly interacting and hence more difficult to detect. Indeed it hasn’t been detected. But it also hasn’t been ruled out as a dark matter candidate.

Normally models with axions have two free parameters: one is the mass of the axion, the other one is called the axion decay constant (usually denoted f_a). But these two parameters aren’t actually independent of each other. The axion gets its mass by the breaking of a postulated new symmetry. A potential, generated by non-perturbative QCD effects, then determines the value of the mass.

If that sounds complicated, all you need to know about it to understand the following is that it’s indeed complicated. Non-perturbative QCD is hideously difficult. Consequently, nobody can calculate what the relation is between the axion mass and the decay constant. At least so far.

The potential which determines the particle’s mass depends on the temperature of the surrounding medium. This is generally the case, not only for the axion, it’s just a complication often omitted in the discussion of mass-generation by symmetry breaking. Using the potential, it can be shown that the mass of the axion is inversely proportional to the decay constant. The whole difficulty then lies in calculating the factor of proportionality, which is a complicated, temperature-dependent function, known as the topological susceptibility of the gluon field. So, if you could calculate the topological susceptibility, you’d know the relation between the axion mass and the coupling.
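In formulas, the relation that has to be computed is, in standard axion conventions,

m_a^2(T)\, f_a^2 = \chi(T)\,,

where \chi(T) is the temperature-dependent topological susceptibility of the gluon field. Knowing \chi(T) therefore fixes the axion mass for any given decay constant.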

This isn’t a calculation anybody presently knows how to do analytically because the strong interaction at low temperatures is, well, strong. The best chance is to do it numerically by putting the quarks on a simulated lattice and then sending the job to a supercomputer.

And even that wasn’t possible until now because the problem was too computationally intensive. But in a new paper, recently published in Nature, a group of researchers reports they have come up with a new method of simplifying the numerical calculation. This way, they succeeded in calculating the relation between the axion mass and the coupling constant.

    Calculation of the axion mass based on high-temperature lattice quantum chromodynamics
    S. Borsanyi et al
    Nature 539, 69–71 (2016)

(If you don’t have journal access, it’s not the exact same paper as this but pretty close).

This result is a great step forward in understanding the physics of the early universe. It’s a new relation which can now be included in cosmological models. As a consequence, I expect that the parameter-space in which the axion can hide will be much reduced in the coming months.

I also have to admit, however, that for a pen-on-paper physicist like me this work has a bittersweet aftertaste. It’s a remarkable achievement which wouldn’t have been possible without a clever formulation of the problem. But in the end, it’s progress fueled by technological power, by bigger and better computers. And maybe that’s where the future of our field lies, in finding better ways to feed problems to supercomputers.

Friday, December 16, 2016

Cosmic rays hint at new physics just beyond the reach of the LHC

Cosmic ray shower. Artist’s impression.
[Img Src]
The Large Hadron Collider (LHC) – the world’s presently most powerful particle accelerator – reaches a maximum collision energy of 14 TeV. But cosmic rays that collide with atoms in the upper atmosphere have been measured with collision energies about ten times as high.

The two types of observations complement each other. At the LHC, energies are smaller, but collisions happen in a closely controlled experimental environment, directly surrounded by detectors. This is not the case for cosmic rays – their collisions reach higher energies, but the experimental uncertainties are higher.

Recent results from the Pierre Auger Cosmic Ray observatory at center-of-mass energies of approximately 100 TeV are incompatible with the Standard Model of particle physics and hint at unexplained new phenomena. The statistical significance is not high, currently at 2.1 sigma (or 2.9 for a more optimistic simulation). This corresponds to roughly a one-in-100 probability that the result is due to random fluctuations.

Cosmic rays are created either by protons or light atomic nuclei which come from outer space. These particles are accelerated in galactic magnetic fields, though exactly how they get their high speeds is often unknown. When they enter the atmosphere of planet Earth, they sooner or later hit an air molecule. This destroys the initial particle and creates a primary shower of new particles. This shower has an electromagnetic part and a part of quarks and gluons that quickly form bound states known as hadrons. These particles undergo further decays and collisions, leading to a secondary shower.

The particles of the secondary shower can be detected on Earth in large detector arrays like Pierre Auger, which is located in Argentina. Pierre Auger has two types of detectors: 1) detectors that directly collect the particles which make it to the ground, and 2) fluorescence detectors which capture the light emitted by ionized air molecules.

The hadronic component of the shower is dominated by pions, which are the lightest mesons and composed of a quark and an anti-quark. The neutral pions decay quickly, mostly into photons; the charged pions create muons which make it into the ground-based detectors.
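In symbols, the dominant decay channels are the textbook ones,

\pi^0 \to \gamma\gamma\,, \qquad \pi^+ \to \mu^+ \nu_\mu\,, \qquad \pi^- \to \mu^- \bar\nu_\mu\,,

which is why energy fed into neutral pions ends up in the electromagnetic part of the shower, while the charged pions feed the muon signal.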

It has been known for several years that the muon signal seems too large compared to the electromagnetic signal – the balance between them is off. This, however, was not based on very solid data analysis, because it depended on an estimate of the total energy, and that is very hard to obtain if you don’t measure all particles of the shower and have to extrapolate from what you do measure.

In the new paper – just published in PRL – the Pierre Auger collaboration used a different analysis method for the data, one that does not depend on the total energy calibration. They individually fit the results of detected showers by comparing them to computer-simulated events. From a previously generated sample, they pick the simulated event that best matches the fluorescence result.

Then they add two parameters to also fit the hadronic result: one parameter adjusts the energy calibration of the fluorescence signal, the other rescales the number of particles in the hadronic component. Then they look for the best-fit values and find that these are systematically off the standard model prediction. As an aside, their analysis also shows that the energy does not need to be recalibrated.
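To illustrate the logic – and only the logic; this is a cartoon with invented numbers, not the collaboration’s actual likelihood – a two-parameter rescaling fit of this kind looks schematically like this in Python:

import numpy as np
from scipy.optimize import minimize

# Toy version of a two-parameter rescaling fit. S_em and S_had are the simulated
# electromagnetic and hadronic ground signals of the best-matching shower,
# D_em and D_had the "measured" ones (all numbers made up).
S_em, S_had = 100.0, 40.0
D_em, D_had = 102.0, 55.0          # more muons than simulated, as reported
sigma_em, sigma_had = 5.0, 5.0

def chi2(params):
    R_E, R_had = params            # energy rescaling and hadronic rescaling
    return ((R_E * S_em - D_em) / sigma_em) ** 2 + \
           ((R_E * R_had * S_had - D_had) / sigma_had) ** 2

best = minimize(chi2, x0=[1.0, 1.0])
print("best-fit R_E, R_had:", best.x)   # R_E close to 1, R_had above 1 in this toy

A best-fit R_E close to 1 together with an R_had above 1 is the toy analogue of the result described above: the energy calibration is fine, but the hadronic component needs to be scaled up.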

The main reason for the mismatch with the standard model predictions is that the detectors measure more muons than expected. What’s up with those muons? Nobody knows, but the origin of the mystery seems not in the muons themselves, but in the pions from whose decay they come.

Since the neutral pions have a very short lifetime and decay almost immediately into photons, essentially all energy that goes into neutral pions is lost for the production of muons. Besides the neutral pion there are two charged pions, and the more energy is left for these and other hadrons, the more muons are produced in the end. So the result by Pierre Auger indicates that the total energy in neutral pions is smaller than what the present simulations predict.

One possible explanation for this, which has been proposed by Farrar and Allen, is that we misunderstand chiral symmetry breaking. It is the breaking of chiral symmetry that accounts for the biggest part of the masses of nucleons (not the Higgs!). The pions are the (pseudo) Goldstone bosons of that broken symmetry, which is why they are so light and ultimately why they are produced so abundantly. Pions are not exactly massless, and thus “pseudo”, because chiral symmetry is only approximate. The chiral phase transition is believed to be close to the confinement transition, which is the transition from a medium of quarks and gluons to color-neutral hadrons. For all we know, it takes place at a temperature of approximately 150 MeV. Above that temperature chiral symmetry is “restored”.

Chiral symmetry restoration almost certainly plays a role in the cosmic ray collisions, and a more important role than it does at the LHC. So, quite possibly this is the culprit here. But it might be something more exotic, new short-lived particles that become important at high energies and which make interaction probabilities deviate from the standard model extrapolation. Or maybe it’s just a measurement fluke that will go away with more data.

If the signal remains, however, that’s a strong motivation to build the next larger particle collider which could reach 100 TeV. Our accelerators would then be as good as the heavens.


[This post previously appeared on Forbes.]

Wednesday, November 16, 2016

A new theory SMASHes problems

Most of my school nightmares are history exams. But I also have physics nightmares, mostly about not being able to recall Newton’s laws. Really, I didn’t like physics in school. The way we were taught the subject, it was mostly dead people’s ideas. On the rare occasion our teacher spoke about contemporary research, I took a mental note every time I heard “nobody knows.” Unsolved problems were what fascinated me, not laws I knew had long been replaced by better ones.

Today, mental noting is no longer necessary – Wikipedia helpfully lists the unsolved problems in physics. And indeed, in my field pretty much every paper starts with a motivation that names at least one of these problems, preferably several.

A recent paper which excels on this count is that of Guillermo Ballesteros and collaborators, who propose a new phenomenological model named SM*A*S*H.
    Unifying inflation with the axion, dark matter, baryogenesis and the seesaw mechanism
    Guillermo Ballesteros, Javier Redondo, Andreas Ringwald, Carlos Tamarit
    arXiv:1608.05414 [hep-ph]

A phenomenological model in high energy particle physics is an extension of the Standard Model by additional particles (or fields, respectively) for which observable, and potentially testable, consequences can be derived. There are infinitely many such models, so to grab the reader’s attention, you need a good motivation why your model in particular is worth the attention. Ballesteros et al do this by tackling not one but five different problems! The name SM*A*S*H stands for Standard Model*Axion*Seesaw*Higgs portal inflation.

First, there are the neutrino oscillations. Neutrinos can oscillate into each other if at least two of them have small but nonzero masses. But neutrinos are fermions and fermions usually acquire masses by a coupling between left-handed and right-handed versions of the particle. Trouble is, nobody has ever seen a right-handed neutrino. We have measured only left-handed neutrinos (or right-handed anti-neutrinos).

So to explain neutrino oscillations, there either must be right-handed neutrinos so heavy we haven’t yet seen them. Or the neutrinos differ from the other fermions – they could be so-called Majorana neutrinos, which can couple to themselves and that way create masses. Nobody knows which is the right explanation.

Ballesteros et al in their paper assume heavy right-handed neutrinos. These create small masses for the left-handed neutrinos by a process called see-saw. This is an old idea, but the authors then try to use these heavy neutrinos also for other purposes.
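The see-saw formula itself is short – this is the standard type-I see-saw, written schematically:

m_\nu \simeq \frac{m_D^2}{M_R}\,,

so a Dirac mass m_D of roughly the size of the other fermion masses, combined with a very heavy right-handed mass M_R, yields tiny masses for the left-handed neutrinos.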

The second problem they take on is the baryon asymmetry, or the question why matter was left over from the Big Bang but no anti-matter. If matter and anti-matter had existed in equal amounts – as the symmetry between them would suggest – then they would have annihilated to radiation. Or, if some of the stuff failed to annihilate, the leftovers should be equal amounts of both matter and anti-matter. We have not, however, seen any large amounts of anti-matter in the universe. These would be surrounded by tell-tale signs of matter-antimatter annihilation, and none have been observed. So, presently, nobody knows what tilted the balance in the early universe.

In the SM*A*S*H model, the right-handed neutrinos give rise to the baryon asymmetry by a process called thermal leptogenesis. This works basically because the most general way to add right-handed neutrinos to the standard model already offers an option to violate this symmetry. One just has to get the parameters right. That too isn’t a new idea. What’s interesting is that Ballesteros et al point out it’s possible to choose the parameters so that the neutrinos also solve a third problem.

The third problem is dark matter. The universe seems to contain more matter than we can see at any wavelength we have looked at. The known particles of the standard model do not fit the data – they either interact too strongly or don’t form structures efficiently enough. Nobody knows what dark matter is made of. (If it is made of something. Alternatively, it could be a modification of gravity. Regardless of what xkcd says.)

In the model proposed by Ballesteros, the right-handed neutrinos could make up the dark matter. That too is an old idea and it’s not working very well: the more massive of the right-handed neutrinos can decay into lighter ones by emitting a photon, and this hasn’t been seen. The problem here is getting the mass range of the neutrinos to work for both dark matter and the baryon asymmetry. Ballesteros et al solve this problem by making up dark matter mostly from something else, a particle called the axion. This particle has the benefit of also being good to solve a fourth problem.

Fourth, the strong CP problem. The standard model is lacking a possible interaction term which would cause the strong nuclear force to violate CP symmetry. We know this term is either absent or very tiny because otherwise the neutron would have an electric dipole moment, which hasn’t been observed.

This problem can be fixed by promoting the constant in front of this term (the theta parameter) to a field. The field then will move towards the minimum of the potential, explaining the smallness of the parameter. The field however is accompanied by a particle (dubbed the “axion” by Frank Wilczek) which hasn’t been observed. Nobody knows whether the axion exists.

In the SMASH model, the axion gives rise to dark matter by leaving behind a condensate and particles that are created in the early universe from the decay of topological defects (strings and domain walls). The axion gets its mass from an additional quark-like field (denoted with Q in the paper), and also solves the strong CP problem.

Fifth, inflation, the phase of rapid expansion in the early universe. Inflation was invented to explain several observational puzzles, notably why the temperature of the cosmic microwave background seems to be almost the same in every direction we look (up to small fluctuations). That’s surprising because in a universe without inflation the different parts of the hot plasma in the early universe which created this radiation had never been in contact before. They thus had no chance to exchange energy and come to a common temperature. Inflation solves this problem by blowing up an initially small patch to gigantic size. Nobody knows, however, what causes inflation. It’s normally assumed to be some scalar field. But where that field came from or what happened to it is unclear.

Ballesteros and his collaborators assume that the scalar field which gives rise to inflation is the Higgs – the only fundamental scalar which we have so far observed. This too is an old idea, and one that works badly. To make Higgs inflation work, one needs to introduce an unconventional coupling of the Higgs field to gravity, and this leads to a breakdown of the theory (loss of unitarity) in ranges where one needs it to work (ie the breakdown can’t be blamed on quantum gravity).

The SM*A*S*H model contains an additional scalar field which gives rise to a more complicated coupling, and the authors claim that in this case the breakdown doesn’t happen until the Planck scale (where it can be blamed on quantum gravity).

So, in summary, we have three right-handed neutrinos with their masses and mixing matrix, a new quark-like field and its mass, the axion field, a scalar field, the coupling between the scalar and the Higgs, the self-coupling of the scalar, the coupling of the quark to the scalar, the axion decay constant, the coupling of the Higgs to gravity, and the coupling of the new scalar to gravity. Though I might have missed something.

In case you just scrolled down to see if I think this model might be correct. The answer is almost certainly no. It’s a great model according to the current quality standard in the field. But when you combine several speculative ideas without observational evidence, you don’t get a model that is less speculative and has more evidence speaking for it.

Monday, August 29, 2016

Dear Dr. B: How come we never hear of a force that the Higgs boson carries?

    “Dear Dr. Hossenfelder,

    First, I love your blog. You provide a great insight into the world of physics for us laymen. I have read in popular science books that the bosons are the ‘force carriers.’ For example the photon carries the electromagnetic force, the gluon, the strong force, etc. How come we never hear of a force that the Higgs boson carries?

    Ramiro Rodriguez
Dear Ramiro,

The short answer is that you never hear of a force that the Higgs boson carries because it doesn’t carry one. The longer answer is that not all bosons are alike. This of course begs the question just how the Higgs-boson is different, so let me explain.

The standard model of particle physics is based on gauge symmetries. This basically means that the laws of nature have to remain invariant under transformations in certain internal spaces, and these transformations can change from one place to the next and one moment to the next. They are what physicists call “local” symmetries, as opposed to “global” symmetries whose transformations don’t change in space or time.

Amazingly enough, the requirement of gauge symmetry automatically explains how particles interact. It works like this. You start with fermions, which are particles of half-integer spin, like electrons, muons, quarks and so on. And you require that the fermions’ behavior must respect a gauge symmetry, which is classified by a symmetry group. Then you ask what equations you can possibly get that do this.

Since the fermions can move around, the equations that describe what they do must contain derivatives both in space and in time. This causes a problem, because if you want to know how the fermions’ motion changes from one place to the next you’d also have to know what the gauge transformation does from one place to the next, otherwise you can’t tell apart the change in the fermions from the change in the gauge transformation. But if you’d need to know that transformation, then the equations wouldn’t be invariant.

From this you learn that the only way the fermions can respect the gauge symmetry is if you introduce additional fields – the gauge fields – which exactly cancel the contribution from the space-time dependence of the gauge transformation. In the standard model the gauge fields all have spin 1, which means they are bosons. That's because to cancel the terms that came from the space-time derivative, the fields need to have the same transformation behavior as the derivative, which is that of a vector, hence spin 1.
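The standard way to write this down – textbook gauge theory, shown here for the simplest case of a U(1) symmetry – is to replace the ordinary derivative by a covariant one,

D_\mu \psi = \left(\partial_\mu - i g A_\mu\right)\psi\,, \qquad \psi \to e^{i\alpha(x)}\psi\,, \qquad A_\mu \to A_\mu + \frac{1}{g}\,\partial_\mu \alpha(x)\,,

so that the shift of the gauge field A_\mu exactly cancels the unwanted term that the space-time dependence of \alpha(x) produces.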

To really follow this chain of arguments – from the assumption of gauge symmetry to the presence of gauge-bosons – requires several years’ worth of lectures, but the upshot is that the bosons which exchange the forces aren’t added by hand to the standard model, they are a consequence of symmetry requirements. You don’t get to pick the gauge-bosons, neither their number nor their behavior – their properties are determined by the symmetry.

In the standard model, there are 12 such force-carrying bosons: the photon (γ), the W+, W-, Z, and 8 gluons. They belong to three gauge symmetries, U(1), SU(2) and SU(3). Whether a fermion does or doesn’t interact with a gauge-boson depends on whether the fermion is “gauged” under the respective symmetry, ie transforms under it. Only the quarks, for example, are gauged under the SU(3) symmetry of the strong interaction, hence only the quarks couple to gluons and participate in that interaction. The bosons introduced this way are sometimes specifically referred to as “gauge-bosons” to indicate their origin.

The Higgs-boson in contrast is not introduced by a symmetry requirement. It has an entirely different function, which is to break a symmetry (the electroweak one) and thereby give mass to particles. The Higgs doesn’t have spin 1 (like the gauge-bosons) but spin 0. Indeed, it is the only presently known elementary particle with spin zero. Sheldon Glashow has charmingly referred to the Higgs as the “flush toilet” of the standard model – it’s there for a purpose, not because we like the smell.

The distinction between fermions and bosons can be removed by postulating an exchange symmetry between these two types of particles, known as supersymmetry. It works basically by generalizing the concept of a space-time direction to not merely be bosonic, but also fermionic, so that there is now a derivative that behaves like a fermion.

In the supersymmetric extension of the standard model there are then partner particles to all already known particles, denoted either by adding an “s” before the particle’s name if it’s a boson (selectron, stop quark, and so on) or adding “ino” after the particle’s name if it’s a fermion (Wino, photino, and so on). There is then also the Higgsino, which is the partner particle of the Higgs and has spin 1/2. It is gauged under the standard model symmetries, hence participates in the interactions, but still is not itself a consequence of a gauge symmetry.

In the standard model most of the bosons are also force-carriers, but bosons and force-carriers just aren’t the same category. To use a crude analogy, just because most of the men you know (most of the bosons in the standard model) have short hair (are force-carriers) doesn’t mean that to be a man (to be a boson) you must have short hair (exchange a force). Bosons are defined by having integer spin, as opposed to the half-integer spin that fermions have, and not by their ability to exchange interactions.

In summary the answer to your question is that certain types of bosons – the gauge bosons – are a consequence of symmetry requirements from which it follows that these bosons do exchange forces. The Higgs isn’t one of them.

Thanks for an interesting question!

Peter Higgs receiving the Nobel Prize from the King of Sweden.
[Img Credits: REUTERS/Claudio Bresciani/TT News Agency]




Saturday, August 06, 2016

The LHC “nightmare scenario” has come true.

The recently deceased diphoton
bump. Img Src: Matt Strassler.

I finished high school in 1995. It was the year the top quark was discovered, a prediction dating back to 1973. As I read the articles in the news, I was fascinated by the mathematics that allowed physicists to reconstruct the structure of elementary matter. It wouldn’t have been difficult to predict in 1995 that I’d go on to do a PhD in theoretical high energy physics.

Little did I realize that for more than 20 years the so provisional-looking standard model would remain undefeated world-champion of accuracy, irritatingly successful in its arbitrariness and yet impossible to surpass. We added neutrino masses in the late 1990s, but this idea dates back to the 1950s. The prediction of the Higgs, discovered in 2012, originated in the early 1960s. And while the poor standard model has been discounted as “ugly” by everyone from Stephen Hawking to Michio Kaku to Paul Davies, it’s still the best we can do.

Since I entered physics, I’ve seen grand unified models proposed and falsified. I’ve seen loads of dark matter candidates not being found, followed by a ritual parameter adjustment to explain the lack of detection. I’ve seen supersymmetric particles being “predicted” with constantly increasing masses, from some GeV to some 100 GeV to LHC energies of some TeV. And now that the LHC hasn’t seen any superpartners either, particle physicists are more than willing to once again move the goalposts.

During my professional career, all I have seen is failure. A failure of particle physicists to uncover a more powerful mathematical framework to improve upon the theories we already have. Yes, failure is part of science – it’s frustrating, but not worrisome. What worries me much more is our failure to learn from failure. Rather than trying something new, we’ve been trying the same thing over and over again, expecting different results.

When I look at the data what I see is that our reliance on gauge-symmetry and the attempt at unification, the use of naturalness as guidance, and the trust in beauty and simplicity aren’t working. The cosmological constant isn’t natural. The Higgs mass isn’t natural. The standard model isn’t pretty, and the concordance model isn’t simple. Grand unification failed. It failed again. And yet we haven’t drawn any consequences from this: Particle physicists are still playing today by the same rules as in 1973.

For the last ten years you’ve been told that the LHC must see some new physics besides the Higgs because otherwise nature isn’t “natural” – a technical term invented to describe the degree of numerical coincidence of a theory. I’ve been laughed at when I explained that I don’t buy into naturalness because it’s a philosophical criterion, not a scientific one. But on that matter I got the last laugh: Nature, it turns out, doesn’t like to be told what’s presumably natural.

The idea of naturalness that has been preached for so long is plainly not compatible with the LHC data, regardless of what else will be found in the data yet to come. And now that naturalness is in the way of moving predictions for so-far undiscovered particles – yet again! – to higher energies, particle physicists, opportunistic as always, are suddenly more than willing to discard naturalness to justify the next larger collider.

Now that the diphoton bump is gone, we’ve entered what has become known as the “nightmare scenario” for the LHC: The Higgs and nothing else. Many particle physicists thought of this as the worst possible outcome. It has left them without guidance, lost in a thicket of rapidly multiplying models. Without some new physics, they have nothing to work with that they haven’t already had for 50 years, no new input that can tell them in which direction to look for the ultimate goal of unification and/or quantum gravity.

That the LHC hasn’t seen evidence for new physics is to me a clear signal that we’ve been doing something wrong, that our experience from constructing the standard model is no longer a promising direction to continue. We’ve maneuvered ourselves into a dead end by relying on aesthetic guidance to decide which experiments are the most promising. I hope that this latest null result will send a clear message that you can’t trust the judgement of scientists whose future funding depends on their continued optimism.

Things can only get better.

[This post previously appeared in a longer version on Starts With A Bang.]

Monday, July 04, 2016

Why the LHC is such a disappointment: A delusion by name “naturalness”

Naturalness, according to physicists.

Before the LHC turned on, theoretical physicists had high hopes the collisions would reveal new physics besides the Higgs. The chances of that happening get smaller by the day. The possibility still exists, but the absence of new physics so far has already taught us an important lesson: Nature isn’t natural. At least not according to theoretical physicists.

The reason that many in the community expected new physics at the LHC was the criterion of naturalness. Naturalness, in general, is the requirement that a theory should not contain dimensionless numbers that are either very large or very small. If a theory does contain such numbers, theorists will complain they are “finetuned” and regard the theory as contrived and hand-made, not to say ugly.

Technical naturalness (originally proposed by ‘t Hooft) is a formalized version of naturalness which is applied in the context of effective field theories in particular. Since you can convert any number much larger than one into a number much smaller than one by taking its inverse, it’s sufficient to consider small numbers in the following. A theory is technically natural if all suspiciously small numbers are protected by a symmetry. The standard model is technically natural, except for the mass of the Higgs.

The Higgs is the only (fundamental) scalar we know and, unlike all the other particles, its mass receives quantum corrections of the order of the cutoff of the theory. The cutoff is assumed to be close to the Planck energy – that means the estimated mass is 15 orders of magnitude larger than the observed mass. This too-large mass of the Higgs could be remedied simply by subtracting a similarly large term. This term however would have to be delicately chosen so that it almost, but not exactly, cancels the huge Planck-scale contribution. It would hence require finetuning.
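
To get a feeling for the numbers, here is a minimal back-of-envelope sketch in Python. It puts the cutoff at 10^19 GeV and assumes order-one coefficients; this illustrates the arithmetic of the cancellation, not the actual loop corrections.

```python
# Back-of-envelope arithmetic for the required cancellation; cutoff put at
# 1e19 GeV, order-one coefficients assumed. Exact integers are used because
# floating point cannot even resolve a cancellation this delicate.
LAMBDA_SQ  = 10**38        # (Lambda ~ 1e19 GeV)^2, in GeV^2
M_HIGGS_SQ = 125**2        # observed Higgs mass squared, in GeV^2

# The corrections enter the mass squared, so the bare term has to cancel the
# cutoff-sized contribution to about (m_H / Lambda)^2 of its own size.
m_bare_sq = M_HIGGS_SQ + LAMBDA_SQ
print(f"relative cancellation required: {M_HIGGS_SQ / LAMBDA_SQ:.0e}")              # ~2e-34
print(f"m_H^2 left over after the cancellation: {m_bare_sq - LAMBDA_SQ} GeV^2")     # 15625
```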

In the framework of effective field theories, a theory that is not natural is one that requires a lot of finetuning at high energies to get the theory at low energies to work out correctly. The degree of finetuning can, and has been, quantified in various measures of naturalness. Finetuning is thought of as unacceptable because the theory at high energy is presumed to be more fundamental. The physics we find at low energies, so the argument, should not be highly sensitive to the choice we make for that more fundamental theory.
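
One commonly used quantification is the Barbieri-Giudice-style sensitivity measure: the logarithmic derivative of the low-energy observable with respect to a high-energy parameter. The sketch below applies it to the toy relation m_H² = p − Λ², with p the bare mass squared from the numbers above; it is an illustration, not a full effective-field-theory computation.

```python
# One common measure (Barbieri-Giudice style): the logarithmic sensitivity
#   Delta = | d ln(m_H^2) / d ln(p) |
# of the low-energy mass to a high-energy parameter p, for the toy relation
# m_H^2(p) = p - Lambda^2.
def bg_sensitivity(p, lambda_sq):
    m_h_sq = p - lambda_sq
    return abs(p / m_h_sq)          # d ln(m_H^2)/d ln(p) = p / (p - Lambda^2)

LAMBDA_SQ = 10**38                  # cutoff near the Planck scale, in GeV^2
P_TUNED = LAMBDA_SQ + 125**2        # bare value that reproduces m_H = 125 GeV
print(f"Delta ~ {bg_sensitivity(P_TUNED, LAMBDA_SQ):.0e}")   # ~6e33, i.e. enormously finetuned
```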

Until a few years ago, most high energy particle theorists therefore would have told you that the apparent need to finetune the Higgs mass means that new physics must appear near the energy scale where the Higgs is produced. The new physics, for example supersymmetry, would avoid the finetuning.

There’s a standard tale they have about the use of naturalness arguments, which goes somewhat like this:

1) The electron mass isn’t natural in classical electrodynamics, and if one wants to avoid finetuning this means new physics has to appear at around 70 MeV. Indeed, new physics appears even earlier in the form of the positron, rendering the electron mass technically natural.

2) The difference between the masses of the neutral and charged pion is not natural because it’s suspiciously small. To prevent fine-tuning one estimates new physics must appear around 700 MeV, and indeed it shows up in the form of the rho meson. (A back-of-envelope version of the first two estimates is sketched after this list.)

3) The lack of flavor changing neutral currents in the standard model means that a parameter which could a priori have been anything must be very small. To avoid fine-tuning, the existence of the charm quark is required. And indeed, the charm quark shows up in the estimated energy range.
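
For illustration, here is a back-of-envelope version of estimates 1) and 2), using standard input values and the usual order-of-magnitude formulas; don’t expect better than factor-of-a-few accuracy.

```python
# Back-of-envelope naturalness estimates for the electron and pion examples.
import math

ALPHA = 1 / 137.036            # fine-structure constant
M_E = 0.511                    # electron mass, MeV
M_PI_CHARGED = 139.57          # charged pion mass, MeV
M_PI_NEUTRAL = 134.98          # neutral pion mass, MeV

# 1) Electron: the classical self-energy grows like alpha * Lambda, so demanding
#    it not exceed the electron mass gives Lambda <~ m_e / alpha.
print(f"electron: new physics expected below ~{M_E / ALPHA:.0f} MeV")          # ~70 MeV

# 2) Pion: the electromagnetic splitting m_pi+^2 - m_pi0^2 ~ (3 alpha / 4 pi) * Lambda^2,
#    so Lambda ~ sqrt(4 pi * (m_pi+^2 - m_pi0^2) / (3 alpha)).
delta_m2 = M_PI_CHARGED**2 - M_PI_NEUTRAL**2
lambda_pion = math.sqrt(4 * math.pi * delta_m2 / (3 * ALPHA))
print(f"pion: new physics expected below ~{lambda_pion:.0f} MeV")              # ~850 MeV, cf. the rho at 775 MeV
```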

From these three examples only the last one was an actual prediction (Glashow, Iliopoulos, and Maiani, 1970). To my knowledge this is the only prediction that technical naturalness has ever given rise to – the other two examples are post-dictions.

Not exactly a great score card.

But well, given that the standard model – in hindsight – obeys this principle, it seems reasonable enough to extrapolate it to the Higgs mass. Or does it? Seeing that the cosmological constant, the only other known example where the Planck mass comes in, isn’t natural either, I am not very convinced.

A much larger problem with naturalness is that it’s a circular argument and thus a merely aesthetic criterion. Or, if you prefer, a philosophic criterion. You cannot make a statement about the likeliness of an occurrence without a probability distribution. And that distribution already necessitates a choice.

In the currently used naturalness arguments, the probability distribution is assumed to be uniform (or at least approximately uniform) in a range that can be normalized to one by dividing by suitable powers of the cutoff. Any other type of distribution, say, one that is sharply peaked around small values, would require the introduction of such a small value in the distribution already. But such a small value justifies itself by the probability distribution just like a number close to one justifies itself by its probability distribution.

Naturalness, hence, becomes a chicken-and-egg problem: Put in the number one, get out the number one. Put in 0.00004, get out 0.00004. The only way to break that circle is to just postulate that some number is somehow better than all other numbers.
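
To see the circularity in numbers, here is a toy Monte Carlo (nothing more than that): how “unlikely” a small parameter looks depends entirely on the prior one puts in by hand.

```python
# Toy Monte Carlo: the probability of a "finetuned" small parameter is fixed
# entirely by the hand-picked prior. Compare a uniform prior on [0, 1] with a
# log-uniform prior on [1e-6, 1] for the chance of landing below 1e-4.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
threshold = 1e-4

uniform_draws = rng.uniform(0.0, 1.0, N)
loguniform_draws = 10.0 ** rng.uniform(-6.0, 0.0, N)   # log-uniform between 1e-6 and 1

print(f"P(x < {threshold}) with a uniform prior:     {np.mean(uniform_draws < threshold):.4f}")    # ~0.0001
print(f"P(x < {threshold}) with a log-uniform prior: {np.mean(loguniform_draws < threshold):.4f}")  # ~0.33
```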

The number one is indeed a special number in that it’s the unit element of the multiplication group. One can try to exploit this to come up with a mechanism that prefers a uniform distribution with an approximate width of one by introducing a probability distribution on the space of probability distributions, leading to a recursion relation. But that just leaves one with the task of explaining why that particular mechanism.

Another way to see that this can’t solve the problem is that any such mechanism will depend on the basis in the space of functions. E.g., you could try to single out a probability distribution by asking that it’s the same as its Fourier transform. But the Fourier transform is just one of infinitely many basis transformations in the space of functions. So again, why exactly this one?

Or you could try to introduce a probability distribution on the space of transformations among bases of probability distributions, and so on. Indeed I’ve played around with this for some while. But in the end you are always left with an ambiguity, either you have to choose the distribution, or the basis, or the transformation. It’s just pushing around the bump under the carpet.

The basic reason there’s no solution to this conundrum is that you’d need another theory for the probability distribution, and that theory per assumption isn’t part of the theory for which you want the distribution. (It’s similar to the issue with the meta-law for time-varying fundamental constants, in case you’re familiar with this argument.)

In any case, whether you buy my conclusion or not, it should give you pause that high energy theorists don’t ever address the question where the probability distribution comes from. Suppose there indeed was a UV-complete theory of everything that predicted all the parameters in the standard model. Why then would you expect the parameters to be stochastically distributed to begin with?

This lacking probability distribution, however, isn’t my main issue with naturalness. Let’s just postulate that the distribution is uniform and admit it’s an aesthetic criterion, alrighty then. My main issue with naturalness is that it’s a fundamentally nonsensical criterion.

Any theory that we can conceive of which describes nature correctly must necessarily contain hand-picked assumptions which we have chosen “just” to fit observations. If that weren’t so, all we’d have left to pick assumptions would be mathematical consistency, and we’d end up in Tegmark’s mathematical universe. In the mathematical universe then, we’d no longer have to choose a consistent theory, ok. But we’d instead have to figure out where we are, and that’s the same question in a different guise.

All our theories contain lots of assumptions like Hilbert spaces and Lie algebras and Hausdorff measures and so on. For none of these is there any explanation other than “it works.” In the space of all possible mathematics, the selection of this particular math is infinitely fine-tuned already – and it has to be, for otherwise we’d be lost again in Tegmark space.

The mere idea that we can justify the choice of assumptions for our theories in any other way than requiring them to reproduce observations is logical mush. The existing naturalness arguments single out a particular type of assumption – parameters that take on numerical values – but what’s worse about this hand-selected assumption than any other hand-selected assumption?

This is not to say that naturalness is always a useless criterion. It can be applied in cases where one knows the probability distribution, for example for the typical distances between stars or the typical quantum fluctuations in the early universe, etc. I also suspect that it is possible to find an argument for the naturalness of the standard model that does not necessitate postulating a probability distribution, but I am not aware of one.

It’s somewhat of a mystery to me why naturalness has become so popular in theoretical high energy physics. I’m happy to see it go out of the window now. Keep your eyes open in the next couple of years and you’ll witness that turning point in the history of science when theoretical physicists stopped dictating to nature what’s supposedly natural.

Friday, June 24, 2016

Where can new physics hide?

Also an acronym for “Not Even Wrong.”

The year is 2016, and physicists are restless. Four years ago, the LHC confirmed the Higgs-boson, the last outstanding prediction of the standard model. The chances were good, so they thought, that the LHC would also discover other new particles – naturalness seemed to demand it. But their hopes were disappointed.

The standard model and general relativity do a great job, but physicists know this can’t be it. Or at least they think they know: The theories are incomplete, not only disagreeable and staring each other in the face without talking, but inadmissibly wrong, giving rise to paradoxes with no known cure. There has to be more to find, somewhere. But where?

The hiding places for novel phenomena are getting smaller. But physicists haven’t yet exhausted their options. Here are the most promising areas where they currently search:

1. Weak Coupling

Particle collisions at high energies, like those reached at the LHC, can produce all existing particles up to the energy that the colliding particles had. How often new particles are produced, however, depends on the strength with which they couple to the particles that are brought to collision (for the LHC that’s protons, or rather their constituents, quarks and gluons). A particle that couples very weakly might be produced so rarely that it could have gone unnoticed so far.

Physicists have proposed many new particles which fall into this category because weakly interacting stuff generally looks a lot like dark matter. Most notably there are the weakly interacting massive particles (WIMPs), sterile neutrinos (neutrinos which don’t couple to the known leptons), and axions (proposed to solve the strong CP problem and also a dark matter candidate).

These particles are being looked for both by direct detection measurements – monitoring large tanks in underground mines for rare interactions – and by looking out for unexplained astrophysical processes that could make for an indirect signal.

2. High Energies

If the particles are not of the weakly interacting type, we would have noticed them already, unless their mass is beyond the energy that we have reached so far with particle colliders. In this category we find all the supersymmetric partner particles, which are much heavier than the standard model particles because supersymmetry is broken. Excitations of particles that exist in models with compactified extra dimensions could also hide at high energies. These excitations are similar to higher harmonics of a string and show up at certain discrete energy levels which depend on the size of the extra dimension.
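
For orientation, the scale of these excitations is set by the inverse size of the extra dimension, roughly E_n ~ n ħc/R for a massless field (geometry and 2π factors ignored). A quick sketch:

```python
# Rough Kaluza-Klein scale for one compactified extra dimension of radius R:
# the excitations appear at about E_n ~ n * hbar * c / R for a massless field.
HBARC_GEV_M = 1.973e-16       # hbar * c in GeV * meters

def kk_scale_GeV(radius_m, n=1):
    """Energy of the n-th Kaluza-Klein excitation for compactification radius R in meters."""
    return n * HBARC_GEV_M / radius_m

for R in (1e-18, 1e-19, 1e-20):
    print(f"R = {R:.0e} m  ->  first excitation at ~{kk_scale_GeV(R):.0f} GeV")
```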

Strictly speaking, it isn’t the mass that is relevant to the question whether a particle can be discovered, but the energy necessary to produce the particles, which includes binding energy. An interaction like the strong nuclear force, for example, displays “confinement,” which means that it takes a lot of energy to tear quarks apart even though their masses are not all that large. Hence, quarks could have constituents – often called “preons” – that have an interaction – dubbed “technicolor” – similar to the strong nuclear force. The most obvious models of technicolor however ran into conflict with data decades ago. The idea isn’t entirely dead though, and while the surviving models aren’t presently particularly popular, some variants are still viable.

These phenomena are being looked for at the LHC and also in highly energetic cosmic ray showers.

3. High Precision

High precision tests of standard model processes are complementary to high energy measurements. They can be sensitive to the tiniest effects stemming from virtual particles with energies too high to be produced at colliders, but which still make a contribution at lower energies due to quantum effects. Examples for this are proton decay, neutron-antineutron oscillation, the muon g-2, the neutron electric dipole moment, or Kaon oscillations. There are existing experiments for all of these, searching for deviations from the standard model, and the precision of these measurements is constantly increasing.

A somewhat different high precision test is the search for neutrinoless double-beta decay which would demonstrate that neutrinos are Majorana-particles, an entirely new type of particle. (When it comes to fundamental particles that is. Majorana particles have recently been produced as emergent excitations in condensed matter systems.)

4. Long ago

In the early universe, matter was much denser and hotter than we can hope to ever achieve in our particle colliders. Hence, signatures left over from this time can deliver a bounty of new insights. The temperature fluctuations in the cosmic microwave background (B-modes and non-Gaussianities) may be able to test scenarios of inflation or its alternatives (like phase transitions from a non-geometric phase), whether our universe had a big bounce instead of a big bang, and – with some optimism – even whether gravity was quantized back then.

5. Far away

Some signatures of new physics appear at long distances rather than at short ones. An outstanding question, for example, is the shape of the universe: Is it really infinitely large, or does it close back onto itself? And if it does, then how? One can study these questions by looking for repeating patterns in the temperature fluctuations of the cosmic microwave background (CMB). If we live in a multiverse, it might occasionally happen that two universes collide, and this too would leave a signal in the CMB.

New insights might also hide in some of the well-known problems with the cosmological concordance model, such as the too-pronounced galaxy cusps or the too-numerous dwarf galaxies, which don’t fit well with observations. It is widely believed that these problems are numerical issues or due to a lack of understanding of astrophysical processes and not pointers to something fundamentally new. But who knows?

Another novel phenomenon that would become noticeable at long distances is a fifth force, which would lead to subtle deviations from general relativity. This might have all kinds of effects, from violations of the equivalence principle to a time-dependence of dark energy. Hence, there are experiments testing the equivalence principle and the constancy of dark energy to ever higher precision.

6. Right here

Not all experiments are huge and expensive. While tabletop discoveries have become increasingly unlikely simply because we’ve pretty much tried all that could be done, there are still areas where small-scale lab experiments reach into unknown territory. This is the case notably in the foundations of quantum mechanics, where nanoscale devices, single-photon sources and detectors, and increasingly sophisticated noise-control techniques have enabled previously impossible experiments. Maybe one day we’ll be able to solve the dispute over the “correct” interpretation of quantum mechanics simply by measuring which one is right.

So, physics isn’t over yet. It has become more difficult to test new fundamental theories, but we are pushing the limits in many currently running experiments.

[This post previously appeared on Starts With a Bang.]

Monday, February 22, 2016

Too many anti-neutrinos: Evidence builds for new anomaly

Bump ahead.
Tl;dr: A third experiment has reported an unexplained bump in the spectrum of reactor-produced anti-neutrinos. Speculations for the cause of the signal so far focus on incomplete nuclear fission models.


Neutrinos are the least understood of the known elementary particles, and they just presented physicists with a new puzzle. While monitoring the neutrino flux from nearby nuclear power plants, three different experiments have measured an unexpected bump around 5 MeV. First reported by the Double Chooz experiment in 2014, the excess was originally not statistically significant.
5 MeV bump as seen by Double Chooz. Image source: arXiv:1406.7763
Last year, a second experiment, RENO, reported an excess but did not assign a measure of significance. However, the bump is clearly visible in their data.
5 MeV bump as seen by RENO. Image source: arXiv:1511.05849
The newest bump is from the Daya Bay collaboration and was just published in PRL.

5 MeV bump as seen by Daya Bay. Image source: arXiv:1508.04233

They give the excess a local significance of 4.1 σ – a probability of less than one in ten thousand for the signal being due to pure chance.
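
(For reference, here is the standard conversion from standard deviations to a tail probability, assuming the usual one-sided Gaussian convention; the collaboration’s exact definition may differ.)

```python
# Standard conversion from a significance in standard deviations to a
# one-sided Gaussian tail probability.
import math

def one_sided_p(z):
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(f"4.1 sigma  ->  p ~ {one_sided_p(4.1):.1e}")   # ~2e-5, i.e. less than 1 in 10,000
```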

This is a remarkable significance for a particle that interacts so feebly, and an impressive illustration of how much detector technology has improved. Originally, the neutrino’s interaction was thought to be so weak that to measure it at all it seemed necessary to place detectors next to the most potent neutrino source known – a nuclear bomb explosion.

And this is exactly what Frederick Reines and Clyde Cowan set out to do. In 1951, they devised “Project Poltergeist” to detect the neutrino emission from a nuclear bomb: “Anyone untutored in the effects of nuclear explosions would be deterred by the challenge of conducting an experiment so close to the bomb,” wrote Reines, “but we knew otherwise from experience and pressed on.” And their audacious proposal was approved swiftly: “Life was much simpler in those days—no lengthy proposals or complex review committees,” recalls Reines.

Shortly after their proposal was approved, however, the two men found a better experimental design and instead placed a larger detector close to a nuclear power plant. But the controlled splitting of nuclei in a power plant takes much longer to produce the same number of neutrinos as a nuclear bomb blast, and patience was required of Reines and Cowan. Their patience eventually paid off: They were awarded the 1995 Nobel Prize in physics for the first successful detection of neutrinos – a full 65 years after the particles were first predicted.

Another Nobel Prize for neutrinos was handed out just last year, this one commemorating the neutrino’s ability to “oscillate,” that is to change between different neutrino types as they travel. But, as the recent measurements demonstrate, neutrinos still have surprises in store.

Good news first, the new experiments have confirmed the neutrino oscillations. On short baselines like that of Daya Bay – a few kilometers – the electron-anti-neutrinos that are emitted during nuclear fission change into a mix of muon- and tau-anti-neutrinos and arrive at the detector in reduced numbers. The wavelength of the oscillation depends on the energy – higher energy means a longer wavelength. Thus, a detector placed at a fixed distance from the emission point will see a different energy-distribution of particles than that at emission.

The emitted energy spectrum can be deduced from the composition of the reactor core – a known mixture of Uranium and Plutonium, each in two different isotopes. After the initial split, these isotopes leave behind a bunch of radioactive nuclei which then decay further. The math is messy, but not hugely complicated. With nuclear fission and decay models as input, the experimentalists can then extract from their data the change in the energy-distribution due to neutrino oscillation. And the parameters of the oscillation that they have observed fit those of other experiments.
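
As a sketch of how the oscillation distorts the spectrum, here is the two-flavor survival probability with approximate published parameter values plugged in; real analyses use the full three-flavor formula plus reactor and detector response models.

```python
# Two-flavor survival probability at a short baseline,
#   P(nu_e-bar -> nu_e-bar) ~ 1 - sin^2(2*theta_13) * sin^2(1.267 * dm^2[eV^2] * L[m] / E[MeV]),
# with approximate oscillation parameters and a rough baseline.
import math

SIN2_2THETA13 = 0.084       # approximate mixing amplitude
DM2_EE = 2.5e-3             # effective mass-squared splitting, eV^2
BASELINE_M = 1600.0         # rough far-detector baseline, meters

def survival(E_MeV):
    phase = 1.267 * DM2_EE * BASELINE_M / E_MeV
    return 1.0 - SIN2_2THETA13 * math.sin(phase) ** 2

for E in (2.0, 4.0, 6.0, 8.0):
    print(f"E = {E:.0f} MeV : survival probability ~ {survival(E):.3f}")
```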

Now to the bad news. The fits of the oscillation parameters to the energy spectrum do not take into account the overall number of particles. And when they look at the overall number, the Daya Bay experiment, like other reactor neutrino experiments before, falls about 6% short of expectation. And then there is the other oddity: the energy spectrum has a marked bump that does not agree with the predictions based on nuclear models. There are too many neutrinos in the energy range of 5 MeV.

There are four possible origins for this discrepancy: Detection, travel, production, and misunderstood background. Let us look at them one after the other.

Detection: The three experiments all use the same type of detector, a liquid scintillator with a Gadolinium target. Neutrino-nucleus cross-sections are badly understood because neutrinos interact so weakly and very little data is available. However, the experimentalists calibrate their detectors with other radioactive sources placed nearby, and no bumps have been seen in these reference measurements. This strongly speaks against detector shortcomings as an explanation.

Travel: An overall lack of particles could be explained with oscillation into a so-far undiscovered new type of ‘sterile’ neutrino. However, such an oscillation cannot account for a bump in the spectrum. This could thus at best be a partial explanation, though an intriguing one.

Production: The missing neutrinos and the bump in the spectrum are inferred relative to the expected neutrino flux from the power plant. To calculate the emission spectrum, the physicists rely on nuclear models. The isotopes in the power plant’s core are among the best studied nuclei ever, but still this is a likely source of error. Most research studies of radioactive nuclei investigate them in small numbers, whereas in a reactor a huge number of different nuclei are able to interact with each other. A few proposals have been put forward that mostly focus on the decay of Rubidium and Yttrium isotopes because these make the main contribution to the high energy tail of the spectrum. But so far none of the proposed explanations has been entirely convincing.

Background: Daya Bay and RENO both state that the signal is correlated with the reactor power which makes it implausible that it’s a background effect. There aren’t many details in the paper about the time-dependence of the emission though. It would seem possible to me that reactor power depends on the time of the day or on the season, both of which could also be correlated with background. But this admittedly seems like a long shot.

Thus, at the moment the most conservative explanation is a lacking understanding of processes taking place in the nuclear power plant. It presently seems very unlikely to me that there is fundamentally new physics involved in this – if the signal is real to begin with. It looks convincing to me, but I asked fellow blogger Tommaso Dorigo for his thoughts: “Their signal looks a bit shaky to me - it is very dependent on the modeling of the spectrum and the p-value is unimpressive, given that there is no reason to single out the 5 MeV region a priori. I bet it's a modeling issue.”

Whatever the origin of the reactor antineutrino anomaly, it will require further experiments. As Anna Hayes, a nuclear theorist at Los Alamos National Laboratory, told Fermilab’s Symmetry Magazine: “Nobody expected that from neutrino physics. They uncovered something that nuclear physics was unaware of for 40 years.”