Naturalness, according to physicists.
Before the LHC turned on, theoretical physicists had high hopes the collisions would reveal new physics besides the Higgs. The chances of that happening get smaller by the day. The possibility still exists, but the absence of new physics so far has already taught us an important lesson: Nature isn’t natural. At least not according to theoretical physicists.
The reason that many in the community expected new physics at the LHC was the criterion of naturalness. Naturalness, in general, is the requirement that a theory should not contain dimensionless numbers that are either very large or very small. If a theory does contain such numbers, theorists will complain that they are “fine-tuned” and regard the theory as contrived and hand-made, not to say ugly.
Technical naturalness (originally proposed by ‘t Hooft) is a formalized version of naturalness, applied in particular in the context of effective field theories. Since you can convert any number much larger than one into a number much smaller than one by taking its inverse, it’s sufficient to consider small numbers in the following. A theory is technically natural if all suspiciously small numbers are protected by a symmetry. The standard model is technically natural, except for the mass of the Higgs.
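The textbook example of such a protection is a fermion mass like the electron’s: chiral symmetry becomes exact when the mass vanishes, so the quantum corrections are proportional to the mass itself and grow only logarithmically with the cutoff. Schematically, with numerical factors dropped:

```latex
% Correction to the electron mass: protected by chiral symmetry, hence
% proportional to m_e itself and only logarithmically cutoff-dependent.
\delta m_e \;\sim\; \frac{\alpha}{\pi}\, m_e \,\ln\frac{\Lambda}{m_e}
```

A small electron mass therefore stays small, no matter how high the cutoff Λ is pushed.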
The Higgs is the only (fundamental) scalar we know and, unlike all the other particles, its mass receives quantum corrections of the order of the cutoff of the theory. The cutoff is assumed to be close to the Planck energy, which means the estimated mass is some 15 orders of magnitude larger than the observed mass. This too-large mass of the Higgs could be remedied simply by subtracting a similarly large term. This term, however, would have to be delicately chosen so that it almost, but not exactly, cancels the huge Planck-scale contribution. It would hence require fine-tuning.
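Schematically, and with all numerical factors dropped, the statement is this (g stands for a generic coupling of the Higgs to heavy degrees of freedom, and the numbers are for a cutoff at the Planck energy):

```latex
% One-loop correction to the Higgs mass squared, quadratic in the cutoff:
\delta m_H^2 \;\sim\; \frac{g^2}{16\pi^2}\,\Lambda^2 ,
\qquad
m_H^2 \;=\; m_{H,\mathrm{bare}}^2 + \delta m_H^2 .
% With \Lambda \sim 10^{19}\,\mathrm{GeV} and the observed
% m_H \approx 125\,\mathrm{GeV}, the bare term must cancel the correction
% to roughly one part in (m_H/\Lambda)^2 \sim 10^{-34}.
```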
In the framework of effective field theories, a theory that is not natural is one that requires a lot of fine-tuning at high energies to get the theory at low energies to work out correctly. The degree of fine-tuning can be, and has been, quantified in various measures of naturalness. Fine-tuning is thought of as unacceptable because the theory at high energies is presumed to be more fundamental. The physics we find at low energies, so the argument goes, should not be highly sensitive to the choices we make for that more fundamental theory.
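A commonly used measure of this kind, usually attributed to Barbieri and Giudice, takes the logarithmic sensitivity of a low-energy observable O (the Z mass squared, say) to the parameters p_i of the high-energy theory:

```latex
% Fine-tuning measure: large \Delta means the low-energy observable reacts
% strongly to small changes of the underlying parameters; values up to
% \Delta \sim 10 are conventionally called "natural".
\Delta \;=\; \max_i \left| \frac{\partial \ln O}{\partial \ln p_i} \right|
```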
Until a few years ago, most high-energy particle theorists therefore would have told you that the apparent need to fine-tune the Higgs mass means that new physics must appear near the energy scale where the Higgs is produced. The new physics, for example supersymmetry, would avoid the fine-tuning.
There’s a standard tale they tell about the use of naturalness arguments, which goes something like this:
1) The electron mass isn’t natural in classical electrodynamics, and if one wants to avoid fine-tuning, this means new physics has to appear at around 70 MeV (a back-of-the-envelope version of this estimate follows the list). Indeed, new physics appears even earlier, in the form of the positron, rendering the electron mass technically natural.
2) The difference between the masses of the neutral and charged pion is not natural because it’s suspiciously small. To prevent fine-tuning, one estimates new physics must appear at around 700 MeV, and indeed it shows up in the form of the rho meson.
3) The lack of flavor changing neutral currents in the standard model means that a parameter which could a priori have been anything must be very small. To avoid fine-tuning, the existence of the charm quark is required. And indeed, the charm quark shows up in the estimated energy range.
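For the first item on this list, the back-of-the-envelope estimate goes roughly as follows (order-one factors dropped): the classical electromagnetic self-energy of an electron confined to a radius r is of order αħc/r, and demanding that this self-energy not exceed the observed electron mass tells you the scale at which something new has to happen.

```latex
% Classical self-energy of a charge of size r, required not to exceed m_e:
E_{\rm self} \sim \frac{\alpha \hbar c}{r} \;\lesssim\; m_e c^2
\quad\Longrightarrow\quad
\Lambda \sim \frac{\hbar c}{r} \;\lesssim\; \frac{m_e c^2}{\alpha}
\;\approx\; 137 \times 0.5\,\mathrm{MeV} \;\approx\; 70\,\mathrm{MeV} .
```

The positron enters well below this scale and, as noted above, turns the linear cutoff dependence into a harmless logarithmic one.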
Of these three examples, only the last one was an actual prediction (Glashow, Iliopoulos, and Maiani, 1970). To my knowledge this is the only prediction that technical naturalness has ever given rise to – the other two examples are post-dictions.
Not exactly a great score card.
But well, given that the standard model – in hindsight – obeys this principle, it seems reasonable enough to extrapolate it to the Higgs mass. Or does it? Seeing that the cosmological constant, the only other known example where the Planck mass comes in, isn’t natural either, I am not very convinced.
A much larger problem with naturalness is that it’s a circular argument and thus a merely aesthetic criterion. Or, if you prefer, a philosophical criterion. You cannot make a statement about the likelihood of an occurrence without a probability distribution. And that distribution already necessitates a choice.
In the currently used naturalness arguments, the probability distribution is assumed to be uniform (or at least approximately uniform) over a range that can be normalized to one by dividing by suitable powers of the cutoff. Any other type of distribution, say, one that is sharply peaked around small values, would already require the introduction of such a small value into the distribution itself. But then that small value is justified only by the probability distribution, just as a number close to one is justified only by its probability distribution.
Naturalness, hence, becomes a chicken-and-egg problem: Put in the number one, get out the number one. Put in 0.00004, get out 0.00004. The only way to break that circle is to just postulate that some number is somehow better than all other numbers.
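To make the chicken-and-egg problem concrete, here is a toy simulation. The setup is entirely my own illustration, not a standard-model calculation: an observable is the difference of two contributions measured in units of the cutoff, and we ask how “improbable” a small outcome is. Under the uniform prior the small outcome is rare; under a prior peaked around small differences it is typical, but only because the small number epsilon was put into that prior by hand.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
epsilon = 1e-3  # the "suspiciously small" number whose likelihood we want to judge

# Two contributions to an observable, expressed in units of the cutoff.
a = rng.uniform(0.0, 1.0, n)

# Prior 1: second contribution uniform on [0, 1] -- the implicit choice
# behind the usual naturalness arguments.
b_uniform = rng.uniform(0.0, 1.0, n)

# Prior 2: second contribution peaked around the first, with a width that
# already contains the small number epsilon.
b_peaked = a + rng.normal(0.0, epsilon, n)

p_uniform = np.mean(np.abs(a - b_uniform) < epsilon)
p_peaked = np.mean(np.abs(a - b_peaked) < epsilon)

print(f"P(|a - b| < {epsilon}), uniform prior: {p_uniform:.1e}")  # ~ 2e-3
print(f"P(|a - b| < {epsilon}), peaked prior : {p_peaked:.2f}")   # ~ 0.68
```

The verdict on whether the small number needs explaining is only as good as the prior, and the prior is exactly what nobody derives.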
The number one is indeed a special number in that it’s the unit element of multiplication. One can try to exploit this to come up with a mechanism that prefers a uniform distribution with an approximate width of one, for instance by introducing a probability distribution on the space of probability distributions, leading to a recursion relation. But that just leaves one having to explain why that mechanism.
Another way to see that this can’t solve the problem is that any such mechanism will depend on the choice of basis in the space of functions. For example, you could try to single out a probability distribution by asking that it be the same as its Fourier transform. But the Fourier transform is just one of infinitely many basis transformations in the space of functions. So again, why exactly this one?
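As an aside, that particular demand does single out a distribution: with the symmetric convention for the Fourier transform, the unit-width Gaussian is mapped onto itself, so its width indeed comes out of order one in whatever units were chosen. But this only relocates the question to why this transform, and why these units.

```latex
% With the symmetric convention
%   \hat f(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx ,
% the unit-width Gaussian is its own Fourier transform:
f(x) = e^{-x^2/2}
\quad\Longrightarrow\quad
\hat f(k) = e^{-k^2/2} .
```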
Or you could try to introduce a probability distribution on the space of transformations among bases of probability distributions, and so on. Indeed, I’ve played around with this for a while. But in the end you are always left with an ambiguity: either you have to choose the distribution, or the basis, or the transformation. It’s just pushing around the bump under the carpet.
The basic reason there’s no solution to this conundrum is that you’d need another theory for the probability distribution, and that theory, by assumption, isn’t part of the theory for which you want the distribution. (It’s similar to the issue with the meta-law for time-varying fundamental constants, in case you’re familiar with that argument.)
In any case, whether or not you buy my conclusion, it should give you pause that high-energy theorists don’t ever address the question of where the probability distribution comes from. Suppose there indeed was a UV-complete theory of everything that predicted all the parameters in the standard model. Why then would you expect the parameters to be stochastically distributed to begin with?
This missing probability distribution, however, isn’t my main issue with naturalness. Let’s just postulate that the distribution is uniform and admit it’s an aesthetic criterion, alrighty then. My main issue with naturalness is that it’s a fundamentally nonsensical criterion.
Any theory that we can conceive of which describes nature correctly must necessarily contain hand-picked assumptions which we have chosen “just” to fit observations. If that weren’t so, all we’d have left to pick assumptions by would be mathematical consistency, and we’d end up in Tegmark’s mathematical universe. In the mathematical universe, then, we’d no longer have to choose a consistent theory, ok. But we’d instead have to figure out where we are, and that’s the same question in a different guise.
All our theories contain lots of assumptions like Hilbert spaces and Lie algebras and Hausdorff measures and so on. For none of these is there any explanation other than “it works.” In the space of all possible mathematics, the selection of this particular math is infinitely fine-tuned already – and it has to be, for otherwise we’d be lost again in Tegmark space.
The mere idea that we can justify the choice of assumptions for our theories in any other way than requiring them to reproduce observations is logical mush. The existing naturalness arguments single out a particular type of assumption – parameters that take on numerical values – but what’s worse about this hand-selected assumption than any other hand-selected assumption?
This is not to say that naturalness is always a useless criterion. It can be applied in cases where one knows the probability distribution, for example for the typical distances between stars or the typical quantum fluctuations in the early universe. I also suspect that it is possible to find an argument for the naturalness of the standard model that does not necessitate postulating a probability distribution, but I am not aware of one.
It’s somewhat of a mystery to me why naturalness has become so popular in theoretical high-energy physics. I’m happy to see it go out the window now. Keep your eyes open in the next couple of years and you’ll witness that turning point in the history of science when theoretical physicists stopped dictating to nature what’s supposedly natural.