The reason that particle physicists were confident the LHC should see more than the Higgs was that their best current theory – the Standard Model of particle physics – is not “natural.”
The argument roughly goes like this: The Higgs-boson is different from all the other particles in the standard model. The other particles have either spin 1 or spin ½, but the Higgs has spin 0. Because of this, quantum fluctuations make a much larger contribution to the mass of the Higgs. This large contribution is ugly, so particle physicists think that once collision energies are high enough to create the Higgs, something else must appear along with it so that the theory is beautiful again.
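Schematically, with λ standing for whatever coupling ties the Higgs to heavy physics at some scale Λ, the troublesome contribution goes like

    \delta m_H^2 \;\sim\; \frac{\lambda}{16\pi^2}\,\Lambda^2 ,

so if Λ is anywhere near the Planck scale, the correction exceeds the observed mass of about 125 GeV by many orders of magnitude, unless different terms cancel to absurd precision. It is this seemingly required cancellation that is deemed unnatural.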
However, that the Standard Model is not technically natural has no practical relevance because these supposedly troublesome quantum contributions are not observable. The lack of naturalness is therefore an entirely philosophical conundrum, and the whole debate about it stems from a fundamental confusion about what requires explanation in science to begin with. If you can’t observe it, there’s nothing to explain. If it’s ugly, that’s too bad. Why are we even talking about this?
It has remained a mystery to me why anyone buys naturalness arguments. You may say, well, that’s just Dr Hossenfelder’s opinion and other people have other opinions. But it is no longer just an opinion: The LHC predictions based on naturalness arguments did, as a matter of fact, not work. So you might think that particle physicists would finally stop using them.
But particle physicists have been mostly quiet about the evident failure of their predictions. Except for a 2017 essay by Gian Francesco Giudice, head of the CERN theory group and one of the strongest advocates of naturalness arguments, no one wants to admit something went badly wrong. Maybe they just hope no one will notice their blunder, never mind that it’s all over the published literature. Some are busy inventing new types of naturalness that predict new particles would show up only at the next larger collider.
This may be an unfortunate situation for particle physicists, but it’s a wonderful situation for philosophers, who love nothing quite as much as going on about someone else’s problem. A few weeks ago, we discussed Porter Williams’s analysis of the naturalness crisis in particle physics. Today I want to draw your attention to a preprint by the philosopher David Wallace, titled “Naturalness and Emergence.”
About the failure of particle physicists’ predictions, Wallace writes:
“I argue that any such naturalness failure threatens to undermine the entire structure of our understanding of inter-theoretic reduction, and so risks a much larger crisis in physics than is sometimes suggested.”

With “inter-theoretic reduction” he refers to the derivation of a low-energy theory from a high-energy theory, which is the root of reductionism. In the text he further writes:
“If Naturalness arguments just brutely fail in particle physics and cosmology, then there can be no reliable methodological argument for assuming them elsewhere (say, in statistical mechanics).”

Sadly enough, Wallace makes the very mistake that I have pointed out repeatedly, that I frequently hear particle physicists make, and that Porter Williams also spells out in his recent paper. Wallace conflates the dynamical degrees of freedom of a theory (say, the momenta of particles) with the parameters of a theory (e.g., the strength of the interactions). For the former you can collect statistical data and meaningfully talk about probabilities. For the latter you cannot: We have only one universe with one particular set of constants. Speaking about these constants’ probabilities is scientifically meaningless.
In his paper, Wallace argues that in statistical mechanics one needs to assume an initial probability distribution that is “simple” in a way that he admits is not well-defined. It is correct, of course, that in practice you start with such an assumption to make a calculation that results in a prediction. However, this being science, you can justify this assumption by the mere fact that the predictions work.
The situation is entirely different, however, for distributions over parameters. Not only are these distributions unobservable by construction; worse, starting with the supposedly simple assumptions for such a distribution demonstrably works badly. That it works badly is, after all, the whole reason we are having this discussion to begin with.
Ironically enough, Wallace complains in his paper that I conflate different types of naturalness in mine, which makes me think he cannot have read what I wrote very carefully. The exact opposite is the case: I am not commenting on the naturalness of statistical distributions of degrees of freedom because that’s just a different story. As I say explicitly, naturalness arguments are well-defined if you can sample the probability distribution. If, on the other hand, you have no way to ever determine a distribution, they are ill-defined.
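If the difference seems abstract, here is a minimal sketch of it in code (toy numbers, nothing more):

    import numpy as np

    rng = np.random.default_rng(0)

    # Dynamical degrees of freedom: repeated measurements give an empirical
    # distribution, so "this momentum is unusually large" is a testable statement.
    momenta = rng.exponential(scale=5.0, size=100_000)   # toy detector data, in GeV
    print("P(p > 30 GeV) estimated from data:", (momenta > 30).mean())

    # A parameter of the theory: one universe, one value, no ensemble.
    observed_ratio = 125.0 / 1.22e19        # Higgs mass over Planck mass, roughly
    constants = np.array([observed_ratio])  # this is the entire "data set"
    # A histogram with a single entry tells you nothing about a distribution;
    # any prior placed on the constant is an assumption, not an inference.
    print("the only value we will ever measure:", constants[0])

The first probability can be checked against more data; the second number cannot be assigned a probability without putting a distribution in by hand.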
While I am at it, a note for the folks with their Bayesian arguments. Look, if you think that a supersymmetric extension of the standard model is more predictive than the standard model itself, then maybe try to figure out just what it predicts. Besides things we have not seen, I mean. And keep in mind that if you assume that the mass of the Higgs-boson is in the energy range of the LHC, you cannot count that as a prediction. Good luck.
Back to Wallace. His argument is logically equivalent to crossing the Atlantic in a 747 while claiming that heavier-than-air machines can’t fly. The standard model has so far successfully explained every single piece of data that’s come out of the LHC. If you are using a criterion for theory-evaluation according to which that’s not a good theory, the problem is with you, not with the theory.
Good article! Friendly neighborhood copy editor:
Missing word; "So you would think that particle physicists finally stop using them." == "would finally stop using them"
Missing word; " predict new particles show up only at the next larger collider." == "new particles that would show up only at the next larger collider."
Castaldo,
Thanks for pointing that out; I have fixed it. I better not blog while jet-lagged :/
If different theories predict different probability distributions for a given measurement (for different values of some parameter, say), each of which can be compared with experiment, then any convex sum of those probabilities can also be compared with experiment. One could, for an outrageous toy example, consider a convex integral over different values of Planck's constant (effectively taking Planck's constant to have a small "width"), and see whether it models experimental results better than any absolutely narrow value for Planck's constant. Such an approach is not, I think, David Wallace's intention, but perhaps it's meaningful enough as a counter to your "Speaking about these constants’ probabilities is scientifically meaningless."
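As a toy sketch of what I mean (entirely made-up numbers, and my own framing rather than anything in Wallace's paper), one can compare the likelihood of data under a sharply fixed constant with the likelihood under a narrowly smeared one:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Toy data: measurements whose mean is set by a "constant" c, with known noise.
    true_c, noise = 1.00, 0.05
    data = rng.normal(true_c, noise, size=200)

    def log_like_fixed(c):
        # Likelihood of the data if the constant has exactly the value c.
        return stats.norm(c, noise).logpdf(data).sum()

    def log_like_smeared(c0, width, n_grid=201):
        # Convex mixture over values of the constant: average the likelihood
        # against a narrow Gaussian of the given width, computed stably in log space.
        cs = np.linspace(c0 - 5 * width, c0 + 5 * width, n_grid)
        w = stats.norm(c0, width).pdf(cs)
        w /= w.sum()
        per_c = np.array([stats.norm(c, noise).logpdf(data).sum() for c in cs])
        m = per_c.max()
        return m + np.log(np.sum(w * np.exp(per_c - m)))

    print("sharp constant  :", log_like_fixed(1.00))
    print("smeared constant:", log_like_smeared(1.00, width=0.01))

Whether the smeared version ever fits real data better is then an empirical question, which is all I am suggesting.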
Peter: But I believe the constants in question are actually measured to some error-range in order to agree with experimental or observational results. So something like G (the gravitational constant) is set to 4 or 5 significant digits; and because of how we measure it, it has an error of 46 ppm.
So there is no way to say one G_1 in that range is 'better' than another G_2 in that range, without reducing the error in our original measurements!
Likewise, the Planck length is derived using G. So without improving the accuracy of G with more accurate measurements, we can't improve the uncertainty in the Planck length, either.
Even if we do find some clever method to narrow the range of G (or the Planck length), that only emphasizes the problem Dr. Hossenfelder is pointing out: the value of G may be an arbitrary constant; and there is no good scientific reason to believe it is not a constant, or can be different in other universes, or is in any way a distribution from which a new universe can 'choose' some unique G by random or environmental influences.
It is a constant; we have no reason to believe it can be anything else but what it is. The equations won't change, they just use the letter 'G'. If we find that some value of G is more predictive than another, the best we can do with that information is narrow the range of our estimate of the constant.
Well put. It blows me away that seemingly intelligent people can't understand this dirt simple point.
Hi Sabine,
The SM does not explain anything about its free parameters - by definition, you might say. Still, this is a lot of unexplained data. And even if you classify that as an uninteresting or unimportant problem, it may well be all that remains to solve.
Best,
J.
akidbelle,
Explaining the values of some parameters from an underlying principle is neither uninteresting nor unimportant. I am merely saying it's not a promising problem to work on, given that it might well be that there just isn't any answer. Better focus on solving problems that actually require a solution - it's a more reliable route to progress.
Everything in the world happens naturally,
only my pants, naturally, won't close.
Heinz Erhardt
Sabine - If it’s ugly, that’s too bad. Why are we even talking about this?
Because virtually the whole history of physics is "beautiful" theories replacing jerry-built ugly ones. I'm not trying to judge whether "naturalness" is either beautiful or correct, but throwing away the beauty criteria leaves not much to go on.
CIP,
The history of physics is also full of "beautiful" ideas that unfortunately did not work, and ideals of "beauty" that changed over time. It follows from this that insisting on using very specific ideals of beauty from the past to construct new theories is a bad idea.
Unfortunately, many physicists have a very, erm, selective recall of history.
In your reply to akidbelle you said,
"I am merely saying it's not a promising problem to work on, given that it might well be that there just isn't any answer. Better focus on solving problems that actually require a solution - it's a more reliable route to progress."
I certainly don't criticize this view from a pragmatic standpoint, but I'm not at all convinced that focusing on resolving inconsistencies between QM and GR is more likely to lead to a successful fundamental theory than trying to explain/derive the 'free' parameters of the standard model. Let me explain what I mean.
Inconsistencies between GR and QM have been known for many decades now, and clearly many theorists have been disturbed by them and have tried hard to resolve them. You may or may not agree that none of the efforts at 'quantum gravity' thus far are especially compelling (my guess is you'd agree), but I think it's uncontroversial that none proposed thus far is predictive in the sense that the SM is predictive, and that all suffer from important shortcomings.
So applying the same standard to efforts at 'quantum gravity' as you apply to 'naturalness,' a reasonable conclusion is that those efforts have a poor track record, and thus pursuing a theory of quantum gravity by trying to resolve known inconsistencies between QM and GR is not promising. Sure, one can excuse the poor track record by saying the problems are "hard" and are hampered by "not enough empirical guidance," so that it is way too early (or "not fair") to draw that conclusion. But one can also argue that a key reason quantum gravity is so hard is that, by focusing on the inconsistencies between QM and GR (e.g., the fundamental role of time in QM but not GR, and the problem of incorporating quantum fluctuations and superposition into the metric), theorists implicitly assume that many/most postulates of QM and GR will be recognizable or immediately derivable in the fundamental theory, as opposed to QM and GR emerging less directly from the fundamental theory. If that implicit assumption is wrong, how can one hope to arrive at 'quantum gravity' by primarily focusing on inconsistencies between QM and GR?
Continuing this line of thought, one might instead focus on the parameters and properties of the SM itself, and assume that those attributes are not arbitrary, i.e., that they are fixed by the dynamics of the fundamental theory. Then many, many questions legitimately arise: why do so few different internal symmetries underlie the SM; why is the particle mass spectrum discrete; what role does the Higgs field play in quantization of mass; why does our universe have 3 spatial dimensions, and is that fact related to the internal symmetries of SM particles; why is the metric signature Lorentzian rather than Euclidean; what is the elementary causal process by which an electron moves when it is in the neighborhood of another electron (a much deeper question than computing an electromagnetic potential and invoking the electron's equation of motion)? Certainly there are many, many other questions beyond these.
And why shouldn't an elementary theory be able to answer questions like those above, once one lets go of the 'pragmatic' world view that "that's just the way things are; our job is to just describe what we see"? Sure, maybe Nature really is arbitrary or unexplainable, so that asking those questions leads to endless dead ends. But there is no empirical evidence--only philosophical predispositions--for believing that. And looking for explanatory answers that are empirically consistent with QM, GR and all experimental observations might lead us to a fundamental theory more quickly than trying to reconcile QM and GR more directly.
Marty,
Well, first, let me say that this is, roughly speaking, the discussion that people in the field should be having. As I keep emphasizing, the focus on resolving inconsistencies is the conclusion I myself have come to, and I have explained why. But if other people come to other conclusions, fine, we can talk about that.
Having said that, of course I am not saying that focusing on resolving a mathematical inconsistency is sufficient and we can stop there. As I spell out clearly in my book, you still need to have contact with experiment. Physics isn't math. And that's exactly what has gone wrong with quantum gravity - they basically assumed all along that it would be experimentally inaccessible and therefore never even thought about how to test it.
Lucky for us, experimentalists were not deterred by this, so maybe in 20 years or so we will actually have data.
As to explaining the properties in the standard model. Well, it only counts as an explanation if it makes things simpler (concretely, you may want to think about computational complexity). It is easy enough to "explain" parameters by some other mathematical structures, if those structures in one way or the other merely calculate those parameters. (For a similar reason it is nonsense to introduce multiple fields and their constants to "explain" the value of one particular constant. Occam would weep.)
The difficulty with quantum gravitation might be summed up as a problem in how one defines a propagator of the gravitational field. A quantum field theory defines a Green's function that propagates a field from one point to another. However, with gravitation we have a problem in that the field being propagated is spacetime itself. So how does one propagate a field on itself? Further, there is a difference in the meaning of time. We may think of a particle moving through space as having a proper time according to its particle path length, and a path integral as a sum over lengths and average over such lengths. With spacetime one is concerned with the dynamics of a spatial manifold where time at best is related to Tr(K) or some transverse form of this. Here K is the extrinsic curvature defined by normal vectors on the spatial manifold that in some sense is “time.” However we can choose spatial slices any way we want by local coordinate transformations, and so this normal is really a manifestation of such freely chosen coordinates. We however need physics as “fields modulo diffeomorphisms or gauge freedoms by group operations,” and so this has troubles.
The Wheeler-DeWitt equation defines a superspace where the point in this 6-dimensional space is a spatial manifold. So we might then be able to do quantum gravitation. This is given by the Hamiltonian constraint NH = 0, which in a quantum form is ĤΨ[g] = 0. This removes the problem of time as diffeomorphism by eliminating time altogether. This is a way of saying that a general spatial manifold is one where a Gaussian surface can't be generally defined if mass-energy is distributed everywhere, and further that spacetime curvature can have by itself mass-energy content. One might in fact get a bit of a Mach principle flavor here! Yet this WDW equation has troubles, and the loop quantum gravitation effort that follows has proven to be not so workable. While this seems odd, one has to realize this WDW equation comes from a constraint condition with Lagrangian L = π^{ij}ġ_{ij} - NH + … , which means we might not expect this at all to define all of quantized geometro-dynamics.
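For reference, and suppressing factors of 16πG as well as matter terms, the constraint takes the standard schematic form

    \hat{H}\,\Psi[h] \;=\; \Big( -\hbar^{2}\, G_{ijkl}\, \frac{\delta^{2}}{\delta h_{ij}\,\delta h_{kl}} \;-\; \sqrt{h}\,\big({}^{(3)}R - 2\Lambda\big) \Big)\,\Psi[h] \;=\; 0,
    \qquad
    G_{ijkl} \;=\; \frac{1}{2\sqrt{h}}\big(h_{ik}h_{jl} + h_{il}h_{jk} - h_{ij}h_{kl}\big),

with h_{ij} the 3-metric on the spatial slice, {}^{(3)}R its curvature scalar, and G_{ijkl} the DeWitt supermetric. Written this way it is explicit that time has been eliminated rather than solved for.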
People looking at this who try to resolve these problems do focus in on the obstruction. This obviously can't work if the very setup is the obstruction itself. LQG is a system of constraints that defines a contact manifold or solutions with zero content. This might be important, but not in the way often thought by those in LQG. String theory with quantum gravitation in one sense works a bit better, but it has a patchwork aspect to it. A more “Zen Buddhist” perspective of unanswering the question is maybe advised. In some ways this means turning away from the large trends in theoretical physics, which unfortunately means with your backside exposed it gets kicked.
I think the ultimate foundations of physics are really just plain vanilla quantum mechanics. Spacetime is then a form of emergent epiphenomenology from large N entanglements of quantum states. I use the term because spacetime is really something our minds construct that we perceive through this epiphenomenology we call consciousness.
I sort of followed your reasoning until you seem to define time functionally by a variable which seems defined by time. To escape trapping myself into circular reasoning, I scanned the last paragraph. Ah. Epiphenomenology.
Good luck.
Bert Kortegaard
@Lawrence Crowell
Delete"Spacetime is then a form of emergent epiphenomenology from large N entanglements of quantum states"
Didn't Leonard Susskind and his collaboration put some stuff out to this effect and slap ER==EPR on it, claiming that using some sort of black hole thought experiment they could show gravity == entanglement?
@Kortegaard
"Ah. Epiphenomenology.
Good luck."
If Lawrence means reality is a consequence of consciousness then I agree with your snicker; if he meant that 'classical world' == 'an averaging of the states of many many quantum things looks very classical' then I'm with Lawrence. The term is well defined but I've seen it misapplied before; idk, hard to say.
I think Raamsdonk has a better perspective on how spacetime is emergent from entanglement. Susskind illustrates a similarity between nontraversable wormholes and bipartite entanglements. The argument is a bit more convoluted IMO, but it is not unreasonable.
I did a paper in graduate school where I worked out how a Galois field representation of Schild's ladder, a ruler-compass discrete method of parallel translation in spacetime, was isomorphic to a Galois representation of a Dirac field. I started thinking back then, 30 years ago, that QM and GR were aspects of the same thing or fundamentally isomorphic. I was called crazy.
Spacetime is classical physics, and it is reasonable to think it emerges from large N entanglements. Spacetime is really just a system of relationships, and fundamentally this is a manifestation of large N entanglements and wave interference that gives a form of locality. That, though, is really how we process this with our minds.
Thank you once again...
What happened to you that you have become so bitter? I feel like I should offer you a hug
This is condescending and sexist. You should delete this crap, Sabine.
Excellent collection of thoughts, grateful for the perspective. Thank you. Imo a good definition of naturalness in a model would include a measure of how many constants and parameters are required to be input by hand. Ideally the number of fundamental constants would be minimal, say 4 or 5, and NO free parameters. Unzicker's choice of the four that define alpha seems the best possible to me.
ReplyDeleteOK, thanks for answering.
But then on what grounds do you define 'promising'?
Secondly, do you believe that you may solve 'some parameters from an underlying principle', without understanding a large part of the rest of it? Meaning the 'underlying principle' would impact everything.
Last, assume a solution exists that solves all free parameters (or just 80% of those). Would it not imply that we miss so much that all we know is either superficial or emergent?
akidbelle,
I define promising by extrapolating scientific history. One could quantify it, but it would be overkill. Numbers really are not necessary here.
If you look at the history of science, relying on beauty has sometimes led to useful theories, sometimes not. Beauty has not been a reliable guide. Some beautiful theories were wrong, and other theories that were thought of as "ugly" were correct. Later, people's conception of beauty changed in some cases (that's McAllister's point). In other cases, successful theories are still considered "ugly". Either way you put it, beauty is an unreliable guide and, to state the obvious, there is also no reason to think it should be a reliable guide to the development of better theories.
In stark contrast to this, we have never witnessed an actual inconsistency in natural phenomena. If you have a theory that has an internal inconsistency, therefore, all available evidence says that this cannot be what is really going on in nature. Indeed, if you look at the cases where theory-led arguments from beauty seemed to work in physics, you will find that these all had an underlying inconsistency that required solution.
I don't know what you mean by "solve some parameters from an underlying principle".
"If you look at the history of science, relying on beauty has sometimes led to useful theories, sometimes not. Beauty has not been a reliable guide."
We find a theory that works, and then we reformulate it to look beautiful, i.e. intuitive to human cognition. Perhaps the latter step is the problem.
Basic question: what is the "scale" used to determine the relative size of free parameters? On a ratio scale (like the Kelvin scale), which has a well-defined origin, we can talk about one quantity being twice another (the scale is closed under division, so a geometric mean can be defined): 300 K really is twice 150 K. On an interval scale (like the Celsius scale), which has no well-defined origin, we have an "affine" space in which only differences, and ratios of differences, are meaningful, so only first differences can be compared and an arithmetic mean may be used: 30 °C is not in any useful sense twice 15 °C.
The parameters in question are all dimensionless, i.e., no units. In case you want to point out that you can make large numbers small by taking a logarithm, that's right; the ratio of the Planck mass to the Higgs mass is about 10^17, but its natural logarithm is an unremarkable-looking 39 or so. That's indeed how some of the supposed "explanations" for a lack of "naturalness" work. Not that this actually explains anything.
DW is a reductionist but dislikes the designation. So he develops a rhetoric in which 'reductionism' is replaced by naturalness; accordingly there is a paragraph (5), Emergence vs naturalness, not vs reductionism. The main issue is seen when he states (8)
[Failure of Naturalness] would make the Standard Model, and general relativity, unNaturally emergent theories, the only known cases of such.
Actually this is a trivial statement: as long as we lack a reasonable theory of quantum gravity (that is, we do not have a good idea what mass is), neither the SM nor GR can be reduced to something more fundamental.
But the bulk of the paper is devoted to comments that reduction to a peculiar case, an unnatural one, should be avoided. Sadly, DW has nothing much to offer here except to state that inventing a general case and disguising it as a probability distribution does not really work. However, at this juncture 'natural' is used as a synonym for 'general'. Reducing everything to a most general case might look like the aim of science, but its current state strongly suggests that is not feasible. Or at least that expecting to get rid of emergence and/or unnaturalness is not realistic.
Not sure what you are getting at here. The Standard Model and General Relativity are both technically unnatural. Wallace is trying to link technical naturalness to reductionism itself (which, as I have explained, is just not justified because it's really an entirely different story). You do not need to actually know the underlying theory to make a statement about technical naturalness (that's the whole point of using the criterion to begin with).
Hi Sabine,
Not sure if you noticed this interesting but off-topic article in Quanta:
https://www.quantamagazine.org/mathematicians-discover-the-perfect-way-to-multiply-20190411/
I bring it up because of the quotation discussing the motivation for their research:
"Their conjecture was based on a hunch that an operation as fundamental as multiplication must have a limit more elegant than n × log n × log(log n).
'It was kind of a general consensus that multiplication is such an important basic operation that, just from an aesthetic point of view, such an important operation requires a nice complexity bound,' Fürer said. 'From general experience the mathematics of basic things at the end always turns out to be elegant.' "
This has all the dogmatic keywords:
elegance, general consensus, aesthetics.
I stress the quote from Fürer: "the mathematics of basic things...ALWAYS turns out to be elegant."
RK,
Physics isn't math. I cannot stress this enough. The task of physicists is to figure out which math describes observations, not to prove theorems.
I agree. But since you don't prove theorems, nothing is ever certain, e.g. that emergent properties are nonexistent.
Wallace the philosopher,
This one from Richard Feynman:
The philosophy of science is as important to scientists as ornithology is to birds.
This one from Michio Kaku:
Experimental physicists need gigantic particle accelerators. Theoretical physicists just need pen and paper. Philosophers don't even need a trash can.
You ruined the joke a bit, and I'm fairly sure that the joke pre-dates Kaku. It goes like this:
Mathematics is the second-cheapest department at any college because all you need is a pencil, paper and a wastebasket. Philosophy is the cheapest because you don’t need the wastebasket.
Except ornithology isn't a philosophy. Not an exact analogy. The philosophy of a field of study usually is relevant to those studying that field.
I am a bit undecided on this. This argument by Wallace is that all is lost if there is no natural way the standard model fits in some larger, more general scheme. The problem I see is that this statement is one of motivation, just as the call for beauty in physics is meant to give some meaning to the pursuit. Hossenfelder is right in pointing out that this by itself leads to little, and no matter how beautiful a theory is it can still be wrong.
I might compare this to the issue of quantum gravity. Why quantum gravity? Is it possible the world is filled with quantum fields and spacetime is purely classical? The question one might raise is how a field that is classical can emit quanta of radiation when classical systems can't enter into entanglements. This is a reason it is suspected that the Hawking-Unruh theory of semiclassical radiation emitted by black holes, with the spacetime metric being rather artificially adjusted by back reaction, is an effective theory. We think that, for the world to be more elegant, gravitation should have some sort of quantum mechanical aspect to it. This is of course not a proof of this, but a motivating statement. It could of course be wrong.
The state of affairs with physics is that at some ultimate foundation we find “what is” just simply IS, and since that may be quantum mechanics we have a bit of a Bill Clinton problem with knowing “what is is.” With the standard model and the Higgs field of scalars it is always possible these are just how nature is configured and there is nothing underlying it. Whether this state of affairs is real is still unknown and it means people might ponder whether there is something else.
another pedant here:
ReplyDelete"blow to" OR "slap in the face"
Thanks! I fixed that.