Sunday, February 03, 2019

A philosopher's take on “naturalness” in particle physics

Square watermelons. Natural?
Porter Williams is a philosopher at the University of Pittsburgh. He has a new paper about “naturalness,” an idea that has become a prominent doctrine in particle physics. In brief, naturalness requires that a theory’s dimensionless parameters should be close to 1, unless there is an explanation why they are not.

Naturalness arguments were the reason so many particle physicists expected (still expect) the Large Hadron Collider (LHC) to see fundamentally new particles besides the Higgs-boson.

In his new paper, titled “Two Notions of Naturalness,” Williams argues that, in recent years, naturalness arguments have split into two different categories.

The first category of naturalness is the older one, based on quantum field theory. It quantifies, roughly speaking, how sensitive the parameters of a theory at low energies are to changes of the parameters at high energies. Assuming a probability distribution for the parameters at high energies, you can then quantify the likelihood of finding a theory with the parameters we do observe. If the likelihood is small, the theory is said to be “unnatural” or “fine-tuned”. The mass of the Higgs-boson is unnatural in this sense, as are the cosmological constant and the theta-parameter.
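
To make the notion of sensitivity concrete, here is a minimal numerical sketch in Python. The cutoff of 10^16 GeV is an arbitrary choice and all loop coefficients are dropped, so the numbers are purely schematic; the point is only that the bare parameter has to be tuned to many decimal places to reproduce the observed Higgs mass.

```python
# Toy version of quantum-field-theory naturalness: how sensitive is a
# low-energy parameter to the high-energy input? Caricature: the observed
# Higgs mass^2 results from a near-cancellation between a bare mass^2 and
# a correction of order the cutoff squared.

m_obs   = 125.0              # GeV, observed Higgs mass (rounded)
Lam     = 1.0e16             # GeV, hypothetical cutoff (arbitrary choice)
delta   = Lam**2             # schematic correction ~ Lam^2, loop factors dropped
m2_bare = m_obs**2 + delta   # bare mass^2 tuned to reproduce the observed value

# Barbieri-Giudice-style sensitivity: fractional change of the output
# per fractional change of the input.
eps = 1e-6
m2_out = m2_bare * (1 + eps) - delta
sensitivity = abs(m2_out - m_obs**2) / m_obs**2 / eps
print(f"fine-tuning measure ~ {sensitivity:.1e}")   # ~6e+27 for these inputs
```

The measure grows quadratically with the assumed cutoff, which is exactly the sensitivity that this first notion of naturalness objects to.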

The second, and newer, type of naturalness, is based on the idea that our universe is one of infinitely many that together make up a “multiverse.” In this case, if you assume a probability distribution over the universes, you can calculate the likelihood of finding the parameters we observe. Again, if that comes out to be unlikely, the theory is called “unnatural.” This approach has so far not been pursued much. Particle physicists therefore hope that the standard model may turn out to be natural in this new way.

I wrote about this drift of naturalness arguments last year (it is also briefly mentioned in my book). I think Williams correctly identifies a current trend in the community.

But his paper is valuable beyond identifying a new trend, because Williams lays out the arguments from naturalness very clearly. I have given quite a few talks about the topic, and in doing so noticed that even particle physicists are sometimes confused about what exactly the argument is. Some erroneously think that naturalness is a necessary consequence of effective field theory. This is not so. Naturalness is an optional requirement that the theory may or may not fulfill.

As Williams points out: “Requiring that a quantum field theory be natural demands a more stringent autonomy of scales than we are strictly licensed to expect by [the] structural features of effective field theory.” In this he disagrees with a claim by the theoretical physicist Gian Francesco Giudice, according to whom naturalness “can be justified by renormalization group methods and the decoupling theorem.” I side with Williams.

Nevertheless, Williams comes out in defense of naturalness arguments. He thinks that these arguments are well-motivated. I cannot, however, quite follow his rationale for coming to this conclusion.

It is correct that the sensitivity to high-energy parameters is peculiar and something that we see in the standard model only for the mass of the Higgs-boson*. But we know why that is: The Higgs-boson is different from all the other particles in being a scalar particle. The expectation that its quantum corrections should enjoy a protection similar to that of the other particles is therefore not justified.

Williams offers one argument that I had not heard before, which is that you need naturalness to get reliable order-of-magnitude estimates. But this argument assumes that you have only one constant for each dimension of units, so it's circular. The best example is cosmology: The cosmological constant is not natural, and GR has another, very different mass-scale, namely the Planck mass. Still, you can perfectly well make order-of-magnitude estimates as long as you know which mass-scales to use. In other words, making order-of-magnitude estimates in an unnatural theory is only problematic if you assume the theory really should be natural.
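
For a concrete version of this point, here is a small sketch with rounded textbook values for G, c, ħ, the observed cosmological constant, and the Hubble rate: the vacuum energy density is an “unnatural” ~10^-123 of the Planck density, yet the order-of-magnitude estimate of the de Sitter radius comes out fine once you use the cosmological constant itself as the relevant scale.

```python
# Order-of-magnitude estimates in an "unnatural" theory: a sketch with
# rounded values. The cosmological constant is tiny in Planck units,
# yet picking the right scale still gives a sensible estimate.
from math import sqrt, pi

G    = 6.674e-11     # m^3 kg^-1 s^-2
c    = 2.998e8       # m/s
hbar = 1.055e-34     # J s
Lam  = 1.1e-52       # observed cosmological constant, 1/m^2
H0   = 2.2e-18       # Hubble rate, 1/s

rho_Lambda = Lam * c**2 / (8 * pi * G)   # vacuum energy density, kg/m^3
rho_Planck = c**5 / (hbar * G**2)        # Planck density, kg/m^3
print(f"rho_Lambda / rho_Planck ~ {rho_Lambda / rho_Planck:.0e}")   # ~1e-123

r_dS = sqrt(3 / Lam)                     # de Sitter radius from Lam alone
print(f"de Sitter radius ~ {r_dS:.1e} m, Hubble radius ~ {c / H0:.1e} m")  # both ~1e26 m
```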

The biggest problem, however, is the same for both types of naturalness: You don't have the probability distribution, and you have no way of obtaining it, because it's a distribution over an experimentally inaccessible space. To quantify naturalness, you therefore have to postulate a distribution, but that has the consequence that you merely get out what you put in. Naturalness arguments can therefore always be amended to give whatever result you want.
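
A toy Monte Carlo makes the prior-dependence explicit. The “observed” value of 10^-4 below is arbitrary, chosen only for illustration: under a flat prior it looks badly fine-tuned, under a log-flat prior it looks entirely ordinary.

```python
# Sketch of the prior-dependence problem: whether an observed parameter
# counts as "unlikely" depends entirely on the distribution one postulates
# over the experimentally inaccessible space of high-energy parameters.
import random
random.seed(0)

observed = 1e-4        # toy dimensionless parameter
N = 1_000_000

def prob_at_least_as_small(sampler):
    return sum(sampler() <= observed for _ in range(N)) / N

# Flat prior on [0, 1]: the observed value looks finely tuned.
p_flat = prob_at_least_as_small(lambda: random.uniform(0.0, 1.0))

# Log-flat prior between 1e-8 and 1: the same value looks unremarkable.
p_log = prob_at_least_as_small(lambda: 10 ** random.uniform(-8.0, 0.0))

print(f"flat prior:     P(x <= observed) ~ {p_flat:.1e}")   # ~1e-4
print(f"log-flat prior: P(x <= observed) ~ {p_log:.2f}")    # ~0.5
```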

And that really is the gist of the current trend. The LHC data has shown that the naturalness arguments that particle physicists relied on did not work. But instead of changing their methods of theory-development, they adjust their criteria of naturalness to accommodate the data. This will not lead to better predictions.


*The strong CP-problem (that’s the thing with the theta-parameter) is usually assumed to be solved by the Peccei-Quinn mechanism, never mind that we still haven’t seen axions. The cosmological constant has something to do with gravity, and therefore particle physicists think it’s none of their business.

18 comments:

  1. Dear Sabine, I have not read Williams's paper yet, but I will make a similar point re: naturalness in my seminar in Tübingen on Tuesday. If the probability is defined in the epistemological, not ontological sense (that is, the probability quantifies a human's knowledge in light of experimental data), then the intuitive notion of naturalness is closely related, if not identical, to the requirement of mathematical smoothness of the functions describing the dependence of theoretical predictions on theoretical parameters. Then it is very easy to show, using Bayes' theorem, that unnatural models are not predictive. That is, if the prior and likelihood probabilities have discontinuities or large cancellations between the parameters, it becomes impossible to numerically compare the probabilities for different theories. Not only can one not decide which unnatural model is more consistent with the empirical data; in the particularly unnatural (irregular) cases a variety of plausible predictions can be harvested from a fertile patch of the theory landscape.

    Many discussions of naturalness are cavalier with the definitions of "probability" and "likelihood". If we follow the common epistemological definition of the probability, the origin of the intuition about naturalness is transparent.

    Replies
    1. Pavel,

      I cannot edit comments, I can only entirely delete them, sorry.

      I get the impression that your notion of naturalness is different from the notion of technical naturalness that Porter is discussing. The standard model is clearly both technically unnatural and yet predictive. Your notion of naturalness seems to be what I would loosely speaking call explanatory power.

    2. Dear Sabine, very helpful, then is it more conventional to say "Unnatural models are not explanatory", and that "not predictive" is a subcategory of "not explanatory"? I regretfully admit that I have not had a chance to get familiar with all relevant literature and terminology, including the interesting new paper by Williams.

      While indeed the Standard Model is very predictive in general, it fails to make certain numerical predictions. I can ask: "What happens to an LHC observable if I change the QCD coupling constant by one part out of a quadrillion?" Nothing will happen, of course, the QCD coupling constant must change by a percent or two of its value to produce a measurable effect. [The derivative of the likelihood P(D|T) with respect to alpha_s is not too large.] I can similarly ask: "What happens if the UV cutoff on the Higgs running mass changes by one part in a quadrillion?" With the rest of parameters fixed, the Universe disappears.
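
      A back-of-the-envelope version of the second question, assuming (purely for illustration) a cutoff at 10^19 GeV and dropping all loop factors:

      ```python
      m_obs = 125.0     # GeV, observed Higgs mass (rounded)
      Lam   = 1.0e19    # GeV, hypothetical cutoff, for illustration only

      # Schematically, m_obs^2 = m_bare^2 - Lam^2: a difference of two numbers
      # of order 1e38 GeV^2. Shift Lam by one part in a quadrillion and the
      # cancellation misses by roughly 2 * Lam^2 * eps:
      eps = 1e-15
      mismatch = 2 * Lam**2 * eps
      print(f"target:   {m_obs**2:.1e} GeV^2")   # ~1.6e+04
      print(f"mismatch: {mismatch:.1e} GeV^2")   # ~2.0e+23, nineteen orders of magnitude too big
      ```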

    3. Pavel,

      That's just not how the word "natural" is used in the literature. In the common terminology, unnatural models can be both predictive and explanatory. General Relativity and the Standard Model are both unnatural, yet both are predictive and have high explanatory power.

      The UV-cutoff of the Higgs mass is not an observable. It's by assumption renormalized so that the remaining observable is the Higgs mass which is a free parameter of the theory that has to be fixed by measurement. (And cut-off renormalization has not actually been in use for decades.)

      Look, if it was necessary to know the cut-off to high precision, you would be right: the theory would not be very predictive. But you do not need to know it to make predictions. It's a pseudo-problem because it doesn't concern observables. (Same thing, btw, with the cosmological constant.)

      I don't know what your comment about the measurement uncertainty for the QCD coupling constant is supposed to say.

    4. Sabine, sure, the SM is highly explanatory and predictive, except for the Higgs mass, which receives a huge cancellation between physically and structurally distinct radiative contributions. [A clever renormalization may hide away the cancellation; still, I am pretty confident there will be large derivatives of the likelihood with respect to the parameters for some (quasi-)observables, in some energy range, in any renormalization scheme.]

    5. Pavel,

      The SM does not predict any of the particle masses. In that respect the Higgs is not any different from the other particles. That the cancellation is huge has absolutely no consequences for the predictability, exactly because it cancels qua assumption. It does not enter any predictions.

  2. It's obvious that a big problem in this situation is that people don't really pay attention to what naturalness really means, or what each author means by it. It's always a vague notion, and people just assume other people know what they mean by it. But I think if we pay close attention, we can actually abandon useless notions of naturalness and find one useful meaning for it.
    One useless meaning is when we assume a multiverse and define a natural universe as one with high probability. But this is clearly useless because we could live in a universe with a low probability. Low probability still means possible!
    Another meaning is when we don't assume a multiverse and define a natural set of laws as one with high probability. This meaning is useless too, because the space of all possible physical laws is only an abstraction that may have nothing to do with our universe.
    But there is one meaning that is not at all equivalent to the above two and may actually be useful: the idea that dimensionless constants need to be close to 1 or, more generally, close to each other. Yes, it may sound stupid, but there is a good reason for it. If we have a theory with constants a and b and a-b=c, and c is significantly different from a and b, then c should actually be considered a new constant of the theory. Why is this a problem? Because if a and b are close to each other, we can just use a new unit and put them close to 1. This means that our theory didn't really choose any energy (or whatever) scale; there is nothing to be explained, the scale has to be somewhere and we don't really care where. But if we have significantly different energy scales in our theory, then rescaling one of them to 1 still leaves the others much different. Which means we still have a few constants in the theory to explain. People can ask why those constants have those values and not any other values. So I think naturalness, with this meaning, is a condition on our theories to objectively identify whether we can call a theory fundamental or whether we still need to dig deeper.

    Replies
    1. I should add that even though the last notion described in my comment could be a useful notion, it doesn't mean that our current theories should satisfy it. Maybe the fundamental theory can't be described using the same semantics we have in QFT or string theory or any other currently known theory. So it could well be that currently known theories are not natural and yet they're correct, but the underlying theory is actually natural.

  3. Sabine,
    I find your latest blog entries quite interesting. However, I am not sure what people expect to find in particle physics nowadays, especially with you talking a lot about possible findings only achievable at much higher energy than we can currently reach. I am not a physicist (yet) but want to become one. I searched a bit but was unable to find what particle physicists actually believe they would find by building a larger accelerator.
    Do you have some sources that could help me understand this? (I don't mind technical sources because I understand the math far enough to get through the first 2 semesters of physics, although "layman's" terms would be fine too; maybe you could write about it, if you haven't?)
    From my limited understanding I also prefer not to have naturalness as a criterion for theories. Shouldn't theories predict data or get proven by data? There should be no reason to believe that numbers need to be close to some arbitrary value, at least that's what I think.

    Replies
    1. Hi Daniel,

      I don't want to speculate about what they believe to find because really I don't know what they believe. Let me instead tell you what they hope to find.

      They hope to find some new particles, notably some that would give them clues about new symmetries and/or a unification of the forces, or about dark matter/dark energy, or just something entirely new and unforeseen.

      The most popular theory for such new particles has for a long time been supersymmetry. A lot of books have been written about that, eg by Gordon Kane and Dan Hooper. I've read them both and they are good books, in a sense, though they are overselling their case. Another book about new things to find is Lisa Randall's "Warped Passages".

      If you want something with more substance, you can have a look at some of the reports of the working groups for BSM pheno, eg this recent one. That will give you a pretty good idea of the stuff that they look for. (Together with more references than you'll be able to read in a year ;))

    2. Thanks Sabine, will be checking that out for sure. Also yes, "believe" was badly worded by me.

  4. I started thinking, and as usual confused myself. If, in abstract QFT space, a large range for a parameter in the low energy theory flows towards a fixed point for that parameter in the high energy theory, then if we sit a little off the fixed point and consider the flows down to the low energy theory, a very small change in the high energy theory parameter will move the low energy parameter a lot. Why is that a bad thing? It is supposed to be a bad thing, but I can't remember why.

    Thanks in advance!

    Replies
    1. The argument goes that that's a bad thing because if you assume a probability distribution on the initial parameters (at high energies) that has a typical (normalized) width of 1 you are unlikely to end up near the measured values.

      The problem with this argument is that you put in the number 1 to justify that parameters of order 1 are natural. It's a circular argument. You can put in any number to justify that that number is natural.

  5. I find the whole concept of a multiverse to be an admission of defeat, not worthy of science to use as a basis for grandiose experiments. It is not concern about the outlay of money that informs my objections, just the idea that grown-up physicists would waste their brains on such silly ideas.

  6. I think this is tricky. Coupling constants and running parameters have been argued to converge to unity at quantum gravity, the Hagedorn temperature, etc. The fine structure constant α = e^2/(4πε_0 ħc) is experimentally known to be around 1/128 at the TeV scale, and as a running parameter it is thought to converge to unity. For an elementary particle of mass m the gravitational coupling constant is α_g(m) = Gm^2/ħc. For a Planck mass α_g(m) = 1, though in some ways this is how things are defined; it is a sort of tautology. If I were to use a known particle mass, my temptation would be to use the mass of the Higgs particle, which gives α_g(m_Higgs) = 1.6×10^{-35}. This is pretty far from being unity.

    The oddity about the world we live in is that elementary particle masses are very small compared to the Planck scale. The masses of the W^± and Z are conferred by the degrees of freedom of Goldstone bosons, and these in the Higgs field have a comparatively small mass. The quartic potential of the Higgs field for significantly larger mass exhibits huge mass renormalization up to the Planck mass. This is why the gravitational field appears so weak and we have such a tiny value for α_g(m_Higgs). The gravitational coupling constant, being quadratic in energy, can of course scale as G(γm)^2/ħc, where γ is the Lorentz factor. For black holes this could equally be G(Γm)^2/ħc for Γ = 1/sqrt{1 - 2m/r}, which becomes huge on the stretched horizon as r → 2m.

    This then gets to the thesis of this paper, that RG flows to effective theories must not have a sensitive dependency that can radically change the low energy coupling parameters. With what I wrote above I can think of a way this happens, where particles approach the stretched horizon with α_g(m) → 1, so the asymptotic observer witnesses pure quantum gravity oscillators on the stretched horizon. Then particles on the stretched horizon are forced to have small masses due to the scale of the stretched horizon. This might then impose some tighter constraints on RG flows and take some bite out of the fine-tuning argument.

    On the other hand this might be a bit contrived, and RG flows are similar to fluid dynamics, which has some chaos analogues. The sensitivity to conditions has, by way of contrast, a possible “naturalness” of its own. This would of course require that probabilities be summed over a multiverse.

  7. Bee,

    couldn't naturalness, specifically the UV-cutoff of the Higgs mass, together with the null results from the LHC on favored BSM physics and with QFT, be used to

    1- suggest no new physics from the Fermi to the Planck scale, and
    2- single out QG theories that do not affect the Higgs mass at the Fermi scale?

    And I have in mind asymptotic safety, which was used to predict the Higgs mass at 126 GeV, which was observed.

    In this case naturalness is being used not to predict new physics at the Fermi scale but to identify those QG theories at the Planck scale that allow for a 126 GeV Higgs.

  8. According to a well-known joke, there are only three natural numbers: zero, one, and infinity. From this perspective an infinitely fine-tuned SM with a Higgs mass at 125.x GeV is natural. I find it quite intriguing that the Higgs mass seems to sit exactly at the boundary of vacuum stability, rather than a few GeV away from it in either direction.

    If there is an underlying reason for this coincidence (a "principle of living dangerously") it can be used to make predictions: any kind of BSM physics that moves the SM away from criticality is ruled out. BSM physics that remains on the brink of vacuum instability is of course not affected.

    Replies
    1. Which BSM physics moves the SM away from criticality and is thereby ruled out? For example, do SUSY or strings do this?

