Friday, November 10, 2017

Naturalness is dead. Long live naturalness.

I was elated when I saw that Gian Francesco Giudice announced the “Dawn of the Post-Naturalness Era,” as the title of his recent paper promises. The naturalness craze in particle physics, I thought, might finally come to an end; data had brought reason back to Earth after all.

But disillusionment followed swiftly when I read the paper.

Gian Francesco Giudice is a theoretical physicist at CERN. He is maybe not the most prominent member of his species, but he has been extremely influential in establishing “naturalness” as a criterion to select worthwhile theories of particle physics. Together with Riccardo Barbieri, Giudice wrote one of the pioneering papers on how to quantify naturalness, thereby significantly contributing to the belief that it is a scientific criterion. To date the paper has been cited more than 1000 times.

Giudice was also the first person I interviewed for my upcoming book about the relevance of arguments from beauty in particle physics. It became clear to me quickly, however, that he does not think naturalness is an argument from beauty. Instead, Giudice, like many in the field, believes the criterion is mathematically well-defined. When I saw his new paper, I hoped he’d come around to see the mistake. But I was overly optimistic.

As Giudice makes pretty clear in the paper, he still thinks that “naturalness is a well-defined concept.” I have previously explained why that is wrong, or rather why, if you make naturalness well-defined, it becomes meaningless. A quick walk through the argument goes as follows.

Naturalness in quantum field theories – ie, theories of the type of the standard model of particle physics – means that a theory at low energies does not sensitively depend on the choice of parameters at high energies. I often hear people say this means that “the high-energy physics decouples.” But note that changing the parameters of a theory is not a physical process. The parameters are whatever they are.

The processes that are physically possible at high energies decouple whenever effective field theories work, pretty much by definition of what it means to have an effective theory. But this is not the decoupling that naturalness relies on. To quantify naturalness you move around between theories in an abstract theory space. This is very similar to moving around in the landscape of the multiverse. Indeed, it is probably not a coincidence that both ideas became popular around the same time, in the mid 1990s.

If you now want to quantify how sensitively a theory at low energy depends on the choice of parameters at high energies, you first have to define the probability for making such choices. This means you need a probability distribution on theory space. Yes, it’s the exact same problem you also have for inflation and in the multiverse.

In most papers on naturalness, however, the probability distribution is left unspecified, which implicitly means one chooses a uniform distribution over an interval of length of order 1. The typical justification for this is that once you factor out all dimensionful parameters, you should only have numbers of order 1 left. It is with this assumption that naturalness becomes meaningless, because you have now simply postulated that numbers of order 1 are better than other numbers.
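
To see what this looks like in practice, here is a toy example - my own made-up numbers, not a calculation from Giudice's paper - of a low-energy mass that comes about by a near-cancellation between a UV parameter and a large fixed scale, together with the usual Barbieri-Giudice-type sensitivity:

# Toy example (not from Giudice's paper): a low-energy mass that arises from
# a near-cancellation between a UV parameter and a large fixed scale.
Lambda = 1.0e16                   # some large "UV" scale in GeV (made-up number)
m_low = 125.0                     # the observed low-energy mass in GeV

mu2 = m_low**2 + Lambda**2        # the UV parameter that fits: m_low^2 = mu2 - Lambda^2

# Barbieri-Giudice-type sensitivity, Delta = |d ln(m_low^2) / d ln(mu2)|
Delta = mu2 / m_low**2
print(f"Delta = {Delta:.1e}")     # about 6e+27

# Calling Delta >> 1 "unnatural" presupposes that order-1 changes of mu2 were the
# relevant yardstick, ie a probability distribution on theory space.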

You wanted to avoid arbitrary choices, but in the end you had to make an arbitrary choice. This turns the whole idea ad absurdum.

That you have to hand-select a probability distribution to make naturalness well-defined used to be well-known. One of the early papers on the topic clearly states
“The “theoretical license” at one’s discretion when making this choice [for the probability distribution] necessarily introduces an element of arbitrariness to the construction.” 
Anderson and Castano, Phys. Lett. B 347:300-308 (1995)

Giudice too mentions “statistical comparisons” on theory space, so I am sure he is aware of the need to define the distribution. He also writes, however, that “naturalness is an inescapable consequence of the ingredients generally used to construct effective field theories.” But of course it is not. If it was, why make it an additional requirement?

(At this point usually someone starts quoting the decoupling theorem. In case you are that person let me say that a) no one has used mass-dependent regularization schemes since the 1980s for good reasons, and b) not only is it questionable to assume perturbative renormalizability, we actually know that gravity isn’t perturbatively renormalizable. In other words, it’s an irrelevant objection, so please let me go on.)

In his paper, Giudice further claims that “naturalness has been a good guiding principle” which is a strange thing to say about a principle that has led to merely one successful prediction but at least three failed predictions, more if you count other numerical coincidences that physicists obsess about like the WIMP miracle or gauge coupling unification. The tale of the “good guiding principle” is one of the peculiar myths that gets passed around in communities until everyone believes it.

Having said that, Giudice’s paper also contains some good points. He suggests, for example, that symmetry principles in the foundations of physics might have outlived their usefulness. Symmetries might just be emergent at low energies. This is a fairly old idea which goes back at least to the 1980s, but it’s still considered outlandish by most particle physicists. (I discuss it in my book, too.)

Giudice furthermore points out that in case your high energy physics mixes with the low energy physics (commonly referred to as “UV/IR mixing”) it’s not clear what naturalness even means. Since this mixing is believed to be a common feature of non-commutative geometries and quite possibly quantum gravity in general, I have picked people’s brains on this for some years. But I only got shoulder shrugs, and I am none the wiser today. Giudice in his paper also doesn’t have much to say about the consequences other than that it is “a big source of confusion,” on which I totally agree.

But the conclusion that Giudice comes to at the end of his paper seems to be the exact opposite of mine.

I believe what is needed for progress in the foundations of physics is more mathematical rigor. Obsessing about ill-defined criteria like naturalness that don’t even make good working hypotheses isn’t helpful. And it would serve particle physicists well to identify their previous mistakes in order to avoid repeating them. I dearly hope they will not just replace one beauty-criterion by another.

Giudice on the other hand thinks that “we need pure unbridled speculation, driven by imagination and vision.” Which sounds great, except that theoretical particle physics has not exactly suffered from a dearth of speculation. Instead, it has suffered from a lack of sound logic.

Be that as it may, I found the paper insightful in many regards. I certainly agree that this is a time of crisis, but also that this crisis is an opportunity for a change for the better. Giudice’s paper is very timely. It is also only moderately technical, so I encourage you to give it a read yourself.

66 comments:

akidbelle said...

Hi Sabine,

I still fail to see why a coupling of order 1 would be natural. As far as I know, none of those beasts comes in this range - not a single one - so I think nature tells us the opposite.

One question to which I do not find any answer in what I read (which is not much, actually): what would it mean if there is nothing more to find in any energy range?

Best,
J.

Kevin Van Horn said...

Your previous comments on naturalness I found unconvincing because you seemed to be saying that an appropriate probability distribution can not, even in principle, be defined, as there is no ensemble from which you could extract frequencies. But as a Bayesian statistician / computer scientist I work mostly with *epistemic* probabilities. As a non-physicist I'm sure I can't appreciate the difficulties of creating an appropriate epistemic probability distribution, but I balk at a claim that it can't even be done *in principle*.

This blog post, however, makes a point that I can agree with 100%: it is the responsibility of naturalness proponents to define and defend the prior distribution they have in mind.

Uncle Al said...

Theory defining experiment "naturally" excludes falsifying observations. Success is empirical failure demanding better equipment. The 500 + 500 = 1000 GeV International Linear Collider rebudgets to 125 + 125 = 250 GeV, bonused bureaucrats, and no offending outputs. Enormous volumes of liquid water, argon, xenon; 3/4 tonne of TeO2 single crystals at 0.01 kelvin punctiliously see nothing.

http://bigthink.com/videos/eric-weinstein-after-einstein-we-stopped-believing-in-lone-genius-is-it-time-to-believe-again
...”Transcript” lower left, pale.

CVs fatten with citations as answers remain obscure for not being within theory.

John Anderson said...

“The Character of Physical Law” and the corresponding Messenger Lectures by Feynman at Cornell in 1964 could really use an update. But I gather that such an undertaking might not be able to produce material that is comprehensible by the interested public.

Esoterica like naturalness may play a part or not but it seems a popular working principle. If our universe is just one of many, then perhaps there is no fundamental explanation for our physics. In that case perhaps the best we could do is “The Local Character of Physical Law.”

The chapters in Feynman’s book:
1 ) The Law of Gravitation as Example: No quantum gravity yet to supplant the Newtonian example shown. Einstein’s GR has received enormous confirmation from data since this book was published.

2 ) The Relation of Math to Physics: A lot more sophisticated math is in use now but the experimental side hasn’t progressed as much. (Group theory, knots, conformal whatever.)

3 ) The Great Conservation Principles: Not much new here (is there?) except R-parity, maybe.

4 ) Symmetry in Physical Law. More symmetries (10^500 ha, ha) are possible with M-theory, Lisi’s E8 model. But where is the experimental confirmation?

5 ) The Distinction of Past and Future: Sean Carroll’s Past Hypothesis could be added, and Eternalism. (This is one of my favorite topics.)

6 ) Probability and Uncertainty: A lot has been done in the area of interpreting quantum mechanics. Is there any more agreement that is supported by experiment than in 1964?

7 ) Seeking New Laws: Physicists have proposed many new laws since 1964. How many have been confirmed? Are physicists' methods better? They appear to be more prolific.

qsa said...

"I believe what is needed for progress in the foundations of physics is more mathematical rigor"

What do you mean by that? The foundations of physics are by their nature speculative at this moment. There is a whole journal dedicated to it.
An example paper:

Multi-Time Wave Functions Versus Multiple Timelike Dimensions
Matthias Lienert, Sören Petrat, Roderich Tumulka

Euphonium said...

"... theoretical particle physics has not exactly suffered from a dearth of speculation. Instead, it has suffered from a lack of sound logic."

Ka-BLAM!

Sabine Hossenfelder said...

qsa,

What's your problem with the paper? Do you think it's wrong? Funny you would pick this particular paper, as Roderich happens to be the person who taught me how to construct watertight proofs. I'm sure he knows what he's doing.

Of course theoretical physics is speculative in its nature. Physics isn't math and creativity is needed. But if you look at the history of physics then progress came from hitting on actual mathematical problems, not aesthetic itches. Eg, Newtonian gravity is actually incompatible with special relativity. Special relativity is actually incompatible with the non-relativistic Schroedinger equation. The standard model without the Higgs actually violates unitarity somewhere at LHC energies. These are actual problems. Naturalness problems aren't. They are aesthetic misgivings.

You cannot, of course, derive new physical theories from logic alone, because logic only gets you so far. Conclusions always depend on the assumptions, and those you can't prove; therefore you always need an experimental check. You can see what happens if you forget that in the quantum gravity communities.

But at least by using math you can make sure that you are trying to solve an actual problem. Naturalness problems aren't actually problems. That the perturbative expansion doesn't converge, otoh, is an actual problem. So is, eg, Haag's theorem. Then there is of course the need to quantize gravity, also an actual problem. Maybe you know of others?

See, what happened with naturalness is that people erroneously came to believe it is a problem that requires a solution. It's a mistake that could have been prevented had they paid more attention to the math. Best,

B.

Sabine Hossenfelder said...

Kevin,

That's right, the problem is that you are dealing with a "virtual" space (both here and in the multiverse) so there is no way to ever learn anything about the probability distribution - you have only one event in the sample. There is likewise no way to argue for one or the other prior (on that space) if you want to do a Bayesian assessment for what a 'probable' theory is. There are actually a few papers in the literature on Bayesian naturalness, typically assuming a uniform log prior, presumably because that is 'natural'. But that just sweeps the issue of 'making a choice' under the carpet.

What you *can* do of course is just Bayesian inference on the space to find out which one best explains present data (as opposed to being probable by fiat) which would give you back the standard model in the IR with high confidence, but then we already knew that. I have no problem with this type of inference, but it does not prefer "natural" theories and is hence a different story altogether. Best,

B.

Dave Miller said...

Bee,

I remember quite clearly, back around 1980, listening to Helen Quinn of SLAC complaining to another SLAC theorist that the younger generation were in danger of believing that musing about the correct definition of "naturalness" was actually doing physics.

I have not talked with Helen since on the issue and cannot attest to her views almost forty years later. But, I thought that you might like to know that the obsession with "naturalness" and criticism of that obsession goes back that far.

Dave Miller in Sacramento

CapitalistImperialistPig said...

It seems to me that a lot of progress in physics has come from picking an appealing heuristic and trying to see where it leads (Faraday, Einstein, Dirac, Feynman). I have more trouble seeing where emphasis on mathematical rigor has made a difference. Do you have some examples?

Sabine Hossenfelder said...

CIP,

What motivates an individual researcher is one thing. The actual reason why their hunch worked out is another thing entirely. I already listed examples above. Special and General relativity, relativistic quantum mechanics, QED, they all solve actual mathematical problems (which of course require assumptions that are based on evidence). That this might not have been the reason why someone worked on it to begin with is psychologically interesting but factually irrelevant. Best,

B.

Tevong said...

"Indeed, it is probably not a coincidence that both ideas became popular around the same time, in the mid 1990s."

I guess you meant the idea of quantifying naturalness, as opposed to naturalness itself?

Sabine Hossenfelder said...

Tevong,

I meant the idea of moving around in a theory-landscape.

johnduffieldblog said...

I thought Gian Giudice's essay sounded like a cry for help myself. But I'm afraid to say that if Guido Altarelli was still around, I'm not sure Gian would be listening. Read Particle headache: Why the Higgs could spell disaster by Matthew Chalmers. Note this: "It's a nice story, but one that some find a little contrived. "The minimal standard model Higgs is like a fairy tale", says Guido Altarelli of CERN near Geneva, Switzerland. "It is a toy model to make the theory match the data, a crutch to allow the standard model to walk a bit further until something better comes along". His problem is that the standard model is manifestly incomplete". The problem for particle physics is that people like Gian Giudice will not admit it. Because they've painted themselves into a corner with discoveries which have been cemented into place by Nobel prizes.

Ervin Goldfain said...

Sabine,

Let me offer a different view on naturalness.

Naturalness cannot be easily discarded and it's not necessarily driven by aesthetic considerations. Look at the clustering principle of relativistic QFT, which ensures that phenomena on widely separated energy scales decouple. It is the basis of the Renormalization Group flow, which assumes that the low and high momenta are fully separable and that the high momenta can be sequentially integrated out. Naturalness lies at the heart of the fine-tuning problem in the Standard Model, where perturbative corrections drive the Higgs mass up to the Planck scale, in direct violation of the clustering principle. Naturalness also underlies the cosmological constant (CC) problem, where quantum contributions bring the CC to abnormally high values and break the clustering principle.

Having said that, whether naturalness should be the ultimate guide for model building beyond the Standard Model remains an open issue. You are probably right that other foundational principles may very well come into play. Only time will tell.

Sabine Hossenfelder said...

Ervin,

I explain in my blogpost why what you say is irrelevant. I suggest you read what I wrote about effective field theories and naturalness before making further comments. Best,

B.

Ervin Goldfain said...

Sabine,

But how can one avoid making assumptions about initial conditions near the UV limit of the Renormalization Group flow?

All experimental observations on the effective theories we have today confirm the decoupling theorem. Therefore they confirm that the high and the low energy scales are decoupled. And so they implicitly confirm the choice of initial conditions.

Sabine Hossenfelder said...

Ervin,

Physical phenomena also decouple in a finetuned theory, that's what effective field theory does for you, by construction. The sensitivity that you probe with naturalness is a dependence of the parameters at low energy on the parameters at high energy. This is an additional requirement. But changing the parameters is not a physical process. What observations tell you today (at least in the most straight-forward interpretation) is that the theory *is* finetuned. That being the very problem that Giudice's paper is about. Best,

B.

Ervin Goldfain said...

"What observations tell you today (at least in the most straight-forward interpretation) is that the theory *is* finetuned."

Yes, I agree. And the issue will continue to haunt us as long as the root cause of this apparent fine-tuning is not understood. We know now that some solutions are likely to fail. For example, low-scale SUSY is probably not the mechanism protecting the electroweak scale.

But this (and other) failures should not deter the search for novel answers to the fine-tuning problem. In my opinion, revisiting Wilson's Renormalization Group using the universal transition to chaos in nonlinear dynamics of generic flows should be a viable starting point.

Anatol Sevashko said...

Results of any real physical experiments contain elements of the synthesis of the observed properties.

We see an increase of the proportion of artificially synthesized properties with the increase of induced energy. We measure the nature of the filter to a greater extent and we measure the nature of the analyzed signal to a lesser extent - in this case. The nature of the observation changes the result.

The proportion of artificially synthesized properties increases with the degree of rationalization of the experiment.

We can not identify the "natural" background statistics with the help of monotonous physical experiments. We must use experiments of a various nature to construct a spectrum of "natural" properties. The experimental conditions must contradict each other to some extent. We will be able to build "natural" statistics using a system of irrational experiments only.

"Natural" statistics are the result of irrational experiments only, because reality is irrational (contradictory).

Sabine Hossenfelder said...

Ervin,

I just told you why there is no "fine-tuning problem". What about my explanation did you not understand?

Ervin Goldfain said...

Sabine,

It's quite obvious that we are not on the same page.

Let's agree to disagree.

Sabine Hossenfelder said...

Ervin,

This is science, not opinion. "Agree to disagree" is not an option. I told you why you are wrong. That's all we can agree on.

Ervin Goldfain said...

Sabine,

The dismissive tone of your reply is rude and uncalled for. You are obviously unwilling to listen to counterarguments and this attitude precludes any meaningful conversation.

Good bye.

Andrew said...

Kevin,

Interesting to hear your thoughts. I am aware of your careful treatment of Cox's theorem and thorough reading of Jaynes' works. I agree with your remark.

Sabine,

It's important to emphasise that Kevin asked about an (epistemic) prior for an unknown parameter - a prior that describes our knowledge or ignorance about it. Such priors are ubiquitous in Bayesian statistics. Our prior isn't a physical mechanism by which the parameter was chosen and the parameter isn't determined by sampling from our prior; rather, the prior reflects our state of knowledge.

You argued in your reply that priors for parameters in fundamental physics are problematic because we observe a single n = 1 draw from the prior. We need to remember the meaning of our priors - they aren't distributions from which our Universe was sampled - and that priors can never be determined from experiment. This is true of every prior in a Bayesian analysis.

When I formulate a prior for Kevin's age, for example a flat distribution between 30 and 50 years with some tails, I am not suggesting that Kevin's age was determined by a random draw from such a distribution! I am describing my knowledge. Since there is only one actual age, of which I am incognizant, you could say, in a similar vein, that there is only an n = 1 sample with which to verify my prior. We quickly see that there are no conceptual differences in applying priors to someone's age and to a parameter in fundamental physics, and no 'n = 1' problems that only appear in the latter.

Lastly let me clarify a detail about works on this topic in the literature. Log priors, p(log x) = const, are indeed often chosen. The motivation isn't that anyone believes them to be 'natural'; they are discussed in introductory textbooks and are completely unrelated to any notions of naturalness in particle physics. They are chosen because they represent our ignorance of the magnitude of an unknown scale parameter.

Sabine Hossenfelder said...

Ervin,

You have not provided any counterargument. You have not provided any argument whatsoever.

Sabine Hossenfelder said...

Andrew,

As I said, you can of course do a Bayesian analysis to infer the best-fit parameters of the theory. But this has nothing to do with naturalness. The "most likely" parameters that fit present observations are unnatural, which means that if you believe in naturalness they are supposedly unlikely. Best,

B.

Sabine Hossenfelder said...

Andrew,

Forgot to say something about the log priors. Even those, needless to say, require an assumption. You say people don't use them because they think they are "natural", but this is just picking on words. They believe they are a good starting point. It doesn't matter which way you put it, you have to choose them. You're the Bayesian, so now tell me how choosing a prior - a function in an infinite space - is any less of an assumption than choosing a constant. How do you want to do that? Introducing priors for priors?

As I have said elsewhere, you are trying to solve a problem that has no solution. If you want to find a mathematical theory that describes nature you always have to choose assumptions "just because" they explain what we see. These will be infinitely finetuned in the sense that they are one specific set of axioms among uncountably many. The whole idea of naturalness is a logical non-starter.

Best,

B.

Andrew said...

Sabine,

'As I said, you can of course do a Bayesian analysis to infer the best-fit parameters of the theory. But this has nothing to do with naturalness. The "most likely" parameters that fit present observations are unnatural, which means that if you believe in naturalness they are supposedly unlikely.'

That's not how any of this works.

'You're the Bayesian, so now tell me how choosing a prior - a function in an infinite space - is any less of an assumption than choosing a constant.'

I'll gladly try to explain. The prior for an unknown parameter reflects our state of knowledge.  If we don't know the parameter, it would be contradictory to use a prior that selected a single permissible value i.e. a constant.

Anatol Sevashko said...

"Natural" properties is a properties of infinite phenomena.

You use rational approximations of infinite phenomena.

You are trying to think in terms of infinite phenomena, but you are using finite tools in fact. You do not have tools with infinite properties. You must recognize the finite nature of your mathematical logic. You must evaluate the consequences of translating your knowledge system on rational rails.

You use postulates (starting conditions) instead of causes.

We can solve this task. We can get away from the postulates at the start of the model. We must solve the task for any conditions at the start. We must find a universal invariant at the level of the abstract form of the solution. We can build a universal theory. But we must solve a task with infinite conditions - we must solve a not rational or irrational task.

The solution of an irrational task is not a problem. The solution of the irrational task is possible. We must overcome of the Faith in the monopoly of rational methodology at this stage of the evolution of science. We will continue to use the rational methodology, it has specific effectiveness, but it will become part of the universal theory.

Sabine Hossenfelder said...

Andrew,

"I'll gladly try to explain. The prior for an unknown parameter reflects our state of knowledge. If we don't know the parameter, it would be contradictory to use a prior that selected a single permissible value i.e. a constant."

No one ever uses a single value for anything because determining that would require infinite measurement precision. You are always assuming some prior on some space and assigning special relevance to a certain choice of basis in that space, etc. A log-distribution in one basis on that space is not a log-distribution in some other basis. Either way you put it, you make a choice, you add assumptions.
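
To make the basis-dependence concrete, here is a toy numerical sketch - mine, with made-up ranges - showing that the same "log-uniform ignorance" assigns very different probabilities to the same region, depending on whether you express it in terms of the UV parameter or the IR combination:

import numpy as np

rng = np.random.default_rng(0)
Lambda2, N = 1.0, 1_000_000            # large scale set to 1 in these units

# choice 1: log-uniform prior on the UV parameter mu2 in [Lambda2, 100*Lambda2]
mu2 = np.exp(rng.uniform(np.log(Lambda2), np.log(100 * Lambda2), N))
m2_from_uv = mu2 - Lambda2

# choice 2: log-uniform prior directly on the IR parameter m2 in [1e-6, 99*Lambda2]
m2_direct = np.exp(rng.uniform(np.log(1e-6), np.log(99 * Lambda2), N))

# prior probability of the "finetuned" region m2 < 0.01*Lambda2 in each case
print(np.mean(m2_from_uv < 0.01), np.mean(m2_direct < 0.01))
# roughly 0.002 versus roughly 0.5 -- same professed ignorance, different answers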

Having said that, I don't understand what you are arguing about to begin with. Are you just saying the standard model is natural because it's among the most likely fits? In which case we don't disagree to begin with.

Anatol Sevashko said...

Mathematics for any modern dynamical theories is finite by definition. Modern mathematics has no tools for describing phenomena with infinite properties.

You are obliged to use boundaries - postulates - initial conditions. You do not have a universal invariant for any initial conditions.

I see the reason for the ban on solving the problem of an infinite in a fundamental error. I see the reason in the ban of contradictions at the level of rational methodology.

Andrew said...

Sabine,

I mentioned constants because you expressly asked me about them. I don't understand your subsequent text.

'Having said that, I don't understand what you are arguing about to begin with. Are you just saying the standard model is natural because it's among the most likeliest fits? In which case we don't disagree to begin with.'

To begin with, I wanted to clarify a conceptual misunderstanding of epistemic probability in your writing, as already mentioned by Kevin, and explain the motivation for log priors, which you wrongly presumed were motivated by naturalness.

I'm certainly not saying that and in fact politely disagree with almost everything non-trivial you've written on this topic.

Sabine Hossenfelder said...

Andrew,

Clearly something is going wrong in this exchange because I can't figure out what you are even disagreeing with. Let me repeat that I have no problem whatsoever with using Bayesian inference to extract the best-fit parameters (at any energy).

Maybe you could start by explaining what you refer to by "naturalness problem". If you disagree with what I said, then please tell me what it is that you disagree with.

Since you say you don't understand my above text, let me rephrase this as follows. For all we presently know about the SM, the low-energy parameters sensitively depend on the UV parameters. That's not natural according to the present definition, but that's just a fact and not a problem. The relevant argument for why that is supposedly a problem comes from assuming that such a set of parameters is unlikely in a quantifiable way. My argument is simply that if you want to say this is "unlikely" you have to choose a probability distribution, and there is no justification for that. It requires the exact choice that you didn't want. You could say that the probability distribution is itself unlikely.

For all I can tell, what you want to do by adding Bayesianism is that you say you had some prior for the parameters at high energies, log-normal or what have you. That prior expresses your lack of knowledge. Fine. Then you go and use the existing data to update that prior. Awesome. Upon which you'll end up with a distribution in the UV that's highly peaked around some values. That restates the above mentioned sensitivity. The parameters that are likely to fit present data in the IR correspond to a very focused distribution in the UV. Am I making sense so far?

But there is nothing problematic with this and the whole naturalness problem is about quantifying the supposed trouble of the SM. If you now want to claim that there is any problem here, you'll have to make a case that your initial prior should have been a good prior (in the sense of being close to the updated ones). That is the assumption I am saying is unjustified. As you say, the priors in this game express ignorance, not a property of the theory. You can start with whatever you want, but that starting guess has no fundamental relevance.

My previous comment explained why, if that is what you claim - that the log prior should have been close to what you get after taking into account data - this is just wrong. If this is not what you say, then I have nothing to disagree with, but then you're not saying the SM has a naturalness problem to begin with. So please clarify. Best,

B.

Anatol Sevashko said...

Bayes' formula allows us to swap the positions of cause and effect: we can calculate the probability of some cause given the known fact of the event.

The Bayes formula is based on the reversibility of cause and effect. The Bayes formula uses the properties of rational logic. Rational logic is reversible.

But we know the difference between real logic and rational logic. Between the real effect and the real reason lies the space-time interval. The interval of real space-time obeys the second law of thermodynamics. The second law of thermodynamics describes an asymmetrical phenomenon. We can not use rational logic to describe the relationship between real cause and real effect.

If we can not use rational logic, then we must use a model of non-rational relations to describe the second law of thermodynamics.

Path to the creation of a universal theory is complex, but it lies through creation and use of non-rational mathematics.

Anatol Sevashko said...

Statistical mathematics filters probabilistic phenomena. A universal model must filter deterministic phenomena, for example - among others. We cannot build a universal model on the basis of laws of one (statistical) type.

Nobody said...

The conclusion of this thread is the following equation: naturalness=crisis

My view on this subject is that naturalness is an irresponsible and disrespectful (to the human intellect) way to accept a theory while simultaneously discarding classical logical consistency for the sake of aesthetics (e.g. prestige). Unfortunately, Max Planck's quote still haunts the evolution of science: "Science evolves one funeral at a time".

All problems in physics, major or not, can be solved using fundamental classical logical consistency, something that has long been discarded (for the last 100 years at least).

What I mean by logical consistency:

An example: the Casimir effect presupposes uncharged and conductive plates, where the effect is justified via quantum vacuum fluctuations.

What argument decides to follow such a notion? Is there a counterargument? I would say "yes, there is and it can be demonstrated and proved classically without referring to the harmonic quantum oscillator or generally to quantum vacuum fluctuations".

There are countless such examples.

Joe Carmignani said...

"..now want to quantify how sensitively a theory at low energy depends on the choice of parameters at high energies, you first have to define the probability for making such choices. This means you need a probability distribution on theory space."

Hi Sabine,
Is such a statistical approach to quantum field theories already established, or are you proposing the invention of new mathematics?

All the best for your lecture at PI...

Andrew said...

I'm afraid that your understanding of Bayesian model selection is somewhat garbled. You first described inference of parameters by updating a prior with data resulting in a posterior. You then guessed that one evaluates a model by comparing the posterior and prior. Under this guess, a model would be considered unnatural or implausible if the posterior and prior were markedly different by eye.

This is indeed complete baloney. It shouldn't be particularly surprising that one in fact judges a model by directly calculating its plausibility relative to a different model. In practice this amounts to calculating Bayes factors,

p(data | first model) / p(data | second model)

The numerator and denominator are individually referred to as evidences. This is treated in introductory material.

If one calculates a Bayes factor for the SM with Planck-scale quadratic corrections versus a supersymmetric model, one finds that the latter is favoured by a colossal factor by measurements of the weak scale. This conclusion holds so long as priors are constructed honestly, i.e. without reference to measurements of the weak scale, and permit sparticles at scales much less than the Planck scale. This is naturalness reborn in the language of Bayesian statistics - models traditionally regarded as unnatural are indeed relatively implausible in a coherent logical framework.
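
For readers following along, here is a toy sketch of how such evidences, and hence a Bayes factor, are computed. The two "models" and all numbers are invented for illustration; this is not the SM versus supersymmetry calculation I refer to above:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data, sigma, N = 1.0, 0.1, 1_000_000          # one measurement with uncertainty sigma

def evidence(prior_lo, prior_hi):
    theta = rng.uniform(prior_lo, prior_hi, N)            # flat prior on theta
    return stats.norm.pdf(data, loc=theta, scale=sigma).mean()

Z_rigid = evidence(0.0, 2.0)        # model whose parameter must lie near the observed value
Z_floppy = evidence(0.0, 1000.0)    # model whose parameter could have been almost anywhere
print(Z_rigid / Z_floppy)           # roughly 500: the more predictive model is favoured

The mechanism is that a model which concentrates its prior where the data turn out to lie is rewarded over one that spreads its prior over a vast range; that is what produces the colossal factors mentioned above.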

I will conclude by noting that the spirit of Bayesian statistics is that one updates beliefs in light of new information. I hope that in light of my taking time to correct elementary misapprehensions in your understanding of Bayesian statistics and naturalness, you might update your opinion.

Pavel Nadolsky said...

Dear Sabine,

"But there is nothing problematic with this and the whole naturalness problem is about quantifying the supposed trouble of the SM. If you now want to claim that there is any problem here, you'll have to make a case that your initial prior should have been a good prior (in the sense of being close to the updated ones)."

the problem then is not with the choice of the prior; the problem is with the structure of your theory model, which requires large cancellations between the model parameters in order to explain the observable effects. The statistical analysis needed to define confidence regions on the parameters is then inhibited by large derivatives with respect to the model parameters that reduce the predictive power of an "unnatural" model. That's why an "unnatural" model is "ugly" -- it may describe nature successfully, but it is inferior in its predictive power because its probability distribution is poorly behaved.

Thus effective phenomenological models built to describe complex systems often impose a special constraint to discourage large cancellations between the model's parameters. As nature is presumably described well by several dual theories, if one of these descriptions has no cancellations, it would be preferred all other factors being equal.

Liralen said...

I'm not a fan of the multiverse. Its flaws are obvious to non-physicists, like me. However, I want to point out that comments like this: "This is very similar to moving around in the landscape of the multiverse. Indeed, it is probably not a coincidence that both ideas became popular around the same time, in the mid 1990s." aren't very logical either.

It's kind of the opposite to the "appeal to authority" logical fallacy.

TransparencyCNP said...

"That the perturbative expansion doesn't converge, otoh, is an actual problem. So is, eg, Haag's theorem."
These issues are well understood in 1+1d toy models. If you have locally correct dynamics you can patch together a global theory, but it is represented in a different Hilbert space. That's (one of the reasons) why the expansion doesn't converge. This is just math. The physics problem is to define the locally correct dynamics (not just a perturbation approximation to an ill-defined theory).

Phillip Helbig said...

This is science, not opinion. "Agree to disagree" is not an option. I told you why you are wrong. That's all we can agree on.

There is a story that, after a talk, a young researcher was destroyed by a comment from a senior scientist. Whether the latter was correct or not is beside the point. The junior scientist tried to avoid more embarrassment by uttering a line which is often used by more senior people to get the chairman to move on to the next question: "Yes, we should discuss that". The senior scientist: "I thought we just did". :-(

Phillip Helbig said...

“naturalness is a well-defined concept.”

My maths professor (yes, she even has her own (German) Wikipedia page) used to say that "well defined" is not well defined. :-)

Phillip Helbig said...

For the record, let me say that I completely agree with you in your criticism of the naturalist argument in particle physics.

Naturalism claims "dimensionless parameters should be of order 1". This is opposite from fine-tuning arguments which, in contrast, claim that dimensionless parameters of order 1 (such as the ratio of the critical cosmological density to the observed density) need an explanation. (Whether the argument is bogus, misleading, or wrong---that is, whether someone has overlooked something which makes the ratio "natural"---or whether there is really some "additional" explanation needed (i.e. something which goes beyond the science in which the observation was made) is a different question. I would argue that in the case of the flatness problem, the traditional formulation is misleading and the answer is completely within the context of the science in which the problem was posed while in the case of fine-tuning for life some explanation is needed. Again, the puzzle is not "why this number and not some other" since all are a priori equally likely, but rather "why this number, which we knew was interesting even before we had any data".)

Sabine Hossenfelder said...

Andrew,

I asked you several times to state your argument. You didn't, but instead left me guessing. Then you complain that my guess is "baloney." That's a transparently stupid mode of argumentation, though I will give it some creativity points since I haven't encountered it often.

For all I can tell, the measure you suggest just rephrases the usual sensitivity of the IR to the UV parameters by weighting different models against each other. But no one doubts that, say, susy models are less sensitive to the UV than the vanilla SM, and they are in that sense more predictive. Or maybe one could say more forgiving. The question is, why do you think this means there is anything wrong with the SM? To state the obvious, renaming "natural" into "honest" doesn't make much of a difference. Like everyone else who is in favor of naturalness, you have an idea how a fundamental theory "should" be and you believe that this bears relevance.

Yes, I appreciate your comments and I am updating my information. I get the impression, though, that you aren't actually thinking about what I say; otherwise you wouldn't attempt what rather basic logical reasoning shows to be impossible. Best,

B.

Sabine Hossenfelder said...

Liralen,

This comment is not a "logical conclusion," it's a historical side remark. I have no idea what you think is wrong with pointing out that two similar ideas became popular around the same time.

Sabine Hossenfelder said...

TransparencyCNP,

To state the obvious, we don't live in 1+1 dimensions. But I'm glad that somewhere someone is thinking about it ;) More seriously, every once in a while I do see a paper about these topics, so I'm certainly not saying no one works on that. Clearly though it doesn't get as much attention as naturalness "problems".

Sabine Hossenfelder said...

Joe,

That's how the usual measures work. Except that people don't usually specify the probability distribution. They implicitly assume it's an almost uniform distribution over an interval of length of order 1. Ie, put in 1, get out 1. No, I am not saying one should develop this approach better because it's futile. You are just replacing one guess (what's the parameter) by another guess (what's the probability distribution).

Paul Hayes said...

Sabine,

I think Andrew's (and Kevin's) argument is this:

Obviously, a claim that there is a naturalness problem based on a posterior / prior comparison for one model - the SM - would be baloney. No matter what the prior. But there are 'Bayesian' ways of choosing priors that would not be appeals to naturalness in disguise, and model comparison - SM versus whatever - based on such a prior would be justifiable. So if it's been done in that principled way* the preference for one model over the other would not involve an appeal to naturalness or what anyone thinks a fundamental theory "should" look like.

* You say "There are actually a few papers in the literature on Bayesian naturalness, typically assuming a uniform log prior, presumably because that is 'natural'", suggesting that it has not been.

Sabine Hossenfelder said...

Paul,

I was guessing they wanted to compare different sets of parameters to find out what is a preferred set of parameters. That's a reasonable thing to do.

Comparing different models in this fashion is, in contrast to what you say, not justifiable because this doesn't quantify the probability of the model's assumptions themselves.

Let me give you a simple example for how this goes wrong. The reason that supersymmetric models come out ahead in the model-comparison that Andrew sketches above is that susy models allow a larger range of UV parameters to be compatible with the IR data than models which are not-supersymmetric. The reason for that is of course that the symmetry enforces the cancellation of certain contributions to certain IR values, that being an echo of 't Hooft's original motivation to introduce the notion of 'technical naturalness' to begin with.

But if you wanted to actually compare both models you should take into account that the supersymmetric model has the additional assumption. Which is, well, supersymmetry. And what's the probability for that? Well, with any "honest" prior (in Andrews's words) the probability for this additional assumption is zero, because there are infinitely many assumptions you could have made. And even after you've assumed susy, there are more assumptions about having to break this symmetry and having to add R-symmetry in order to avoid conflict with data. If you'd do a Bayesian analysis between different models (as opposed to different parameters in an EFT expansion) you should take the additional assumptions into account. If you don't, then supersymmetry will come out ahead, rather unsurprisingly.

There is a different way to see what I am saying which is to simply ask yourself why no one who makes SM predictions uses a supersymmetric model instead. Well, that's because to correctly calculate what we observe you don't need susy. It's not supported by data. If you've managed to construe a measure which says susy is actually preferred by the data, you have screwed yourself over in your model assessment.

(I don't understand your *-remark.)

Best,

B.

Paul Hayes said...

Sabine,

Thanks - that clarifies matters.

My (now redundant) *-remark was just noting that what you said suggested that even the (uniform log) prior choice hadn't been made on principled, 'Bayesian' grounds.

Pavel Nadolsky said...

Sabine,

your reply to Paul is enlightening. I would use the Bayesian analysis the other way around, though. I would not compare the Bayesian priors/probabilities to decide which model is ultimately "true" -- that is a question which only an entity with an infinite amount of information ("a deity") can answer. Such a question is in the realm of metaphysics.

Instead, let us ask which model is consistent with the existing data and gives robust testable (non-trivial falsifiable) predictions for future measurements. Which model can systematically guide new experiments in order to incorporate new information into our shared picture of the world?

Say, we replace the model-building of the 20th century, driven by human speculations, by a neural net trained to make predictions for future colliders based on the existing data. The neural net learns by updating the probability distribution in the parametric space via recursively applying the Bayes theorem:

do when [new data]
{
p(theory posterior) = const*p(new data | theory)*p(theory prior).
p(theory prior) = p(theory posterior).
}
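
(For concreteness, a minimal runnable version of this loop - the discrete "theory space" and the likelihood values below are placeholders, not real collider data:)

import numpy as np

n_theories = 5
prior = np.full(n_theories, 1.0 / n_theories)     # p(theory prior)

def incoming_data():
    # stand-in for real measurements: each item plays the role of p(new data | theory)
    rng = np.random.default_rng(0)
    for _ in range(10):
        yield rng.uniform(0.1, 1.0, n_theories)

for likelihood in incoming_data():                # "do when [new data]"
    posterior = likelihood * prior                # p(new data | theory) * p(theory prior)
    prior = posterior / posterior.sum()           # normalise (the "const"); posterior becomes the new prior

print(prior)                                      # the net's current state of knowledge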


The probability reflects the knowledge state of the neural net, not the ultimate likelihood of the neural net model. Which models will facilitate relevant, stable, and converging learning process? They should be comprehensive and self-consistent, and it is preferable that they have a regularly behaving likelihood p(data|theory). This can be used to define a "natural" model. An 'unnatural' likelihood that is not smooth, e.g., that has 'landscape' features with haphazard variations of probability, or that requires large cancellations between unrelated parameters to describe the data, generally inhibits the learning process. Large unexplained variations of the likelihood disrupt convergence of the recursive learning. Unnatural models are less predictive, because the global structure of the landscape cannot be easily inferred from the local structure.

This gives a very intuitive definition of naturalness. The unnatural features of SM are therefore not desirable -- they may be what they are, but they limit progress in systematically expanding our knowledge of the microworld. In the absence of a confirmed natural BSM theory, a recourse to random testing of alternatives to SM mentioned at the end of your blog may be a reasonable strategy.

Andrew said...

I did not leave you guessing - the rudiments of Bayesian statistics exist in textbooks and undergraduate classrooms. Before embarking on writing blogs, books and a lecture series criticising the foundations of naturalness, you should have familiarised yourself with it.

'The question is, why do you think this means there is anything wrong with the SM?'

We have concluded that theories traditionally regarded as natural are, in light of data, vastly more plausible than the Standard Model (i.e., they are favored by enormous Bayes factors). This means that we should consider the SM relatively implausible; that is what is wrong with it.

'To state the obvious, renaming "natural" into "honest" doesn't make much of a difference.'

I have not renamed natural into honest. I introduced the word honest when remarking that we must pick priors that honestly reflect our state of knowledge prior to seeing experimental data. This was unconnected to ideas about naturalness in high-energy physics.

Your responses to Paul Hayes revealed further confusion. You argue that the prior probability of any model, including a supersymmetric one, is zero. As mentioned already, we, however, consider relative plausibility. Relative plausibility is updated by the aforementioned Bayes factor

Posterior odds = Bayes factor * prior odds

where prior odds is a ratio of priors for two models. The Bayes factor updates our prior beliefs with data. It is conventional to calculate and report only a Bayes factor and permit a reader to supply their prior odds. It remains the case, then, that data increase the plausibility of traditionally natural models relative to the SM. You could, nevertheless, insist that you a priori disfavoured supersymmetric models by a colossal factor that overpowers the Bayes factor. That is your prerogative. I don't think it is justified by our state of knowledge, though, and in any case wish to focus upon the factor containing experimental data - the Bayes factor. This is, once again, introductory material on this topic.

'If you've managed to construe a measure which says susy is actually preferred by the data, you have screwed yourself over in your model assessment.'

Flattering though it is to be credited with construing the measure (the Bayes factor etc), I must clarify that it was pioneered by Harold Jeffreys in the 1930s and has grown in popularity especially in the last 30 years. May I earnestly beg that before lecturing or writing any more criticisms of the logical foundations of naturalness, you learn the relevant basic material.

Emmette Davidson said...

Would not truth always require mathematical rigor as a point of departure, as anything less just won't get you there? Besides, the failing (of string theory, dark "matter," etc.) is not lack of rigor but unexperiential physicality being (arrogantly) imposed on the world to serve the overly complex maths (so of course the world won).

Now, in order to light the darks and resolve residual mysteries besides, it’s good to wildly speculate as to ways that might explain everything. It’d be better to derive truest holistic model expecting to find deficiencies rather than bending to provisional--considering said outstanding dark mysteries--measures of a world so imperfectly understood.

That the world is ugly is not settled science. So best that simplicity yet find beauty.

Mozibur said...

I'm trained as a mathematician and I find physics papers hard to read - which is strange because mathematics is said to be the language of physics. I like the Anthropic principle because it's easy to understand and obviously true. Whereas with naturalness, I'm still struggling to understand what this means.

Sabine Hossenfelder said...

Andrew,

You are now being outright condescending even though you either do not understand or willfully ignore what I say.

First, needless to say I didn't say you left me guessing about what Bayesian inference is, but about what you believe it has to say about naturalness. You are still being vague about it. You now say that the Bayes factor (assuming the one you referred to earlier) tells you which model is more "plausible". Please explain what makes such a model more plausible. Do you just use the word "plausible" to replace "probably a correct description of observations"?

Second, the foundations of naturalness have nothing to do with Bayes, but that's only tangentially relevant.

Third, you seem to have not understood my reply to Paul, so let me say this once again. Yes, the prior for any model is zero, but that doesn't prevent you from comparing different models provided their priors are "equally zero" in the sense of factoring out - and I never said it does, so stop putting words into my mouth. You can indeed do that if you have two models that rest on the same assumptions. But you cannot do that if you are comparing models for which this isn't the case.

Forget about Bayes for a moment: Occam already tells you that a model with additional assumptions that are unnecessary to describe observations should be strongly disfavored.
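
Just to illustrate the Occam point with a toy calculation - my own made-up numbers, not tied to any particular BSM model - giving a model an extra contribution that the data does not need dilutes its evidence, the very quantity that enters your Bayes factors:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data, sigma, N = 5.0, 0.5, 2_000_000      # a measured value and its uncertainty

def like(pred):                            # p(data | prediction)
    return stats.norm.pdf(data, loc=pred, scale=sigma)

a = rng.uniform(0, 10, N)                  # flat prior on the needed parameter a
b = rng.uniform(-50, 50, N)                # flat prior on the superfluous parameter b

Z1 = like(a).mean()                        # evidence of the model with observable = a
Z2 = like(a + b).mean()                    # evidence of the model with observable = a + b
print(Z1 / Z2)                             # roughly 10: the extra parameter costs you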

"You could, nevertheless, insist that you a priori disfavoured supersymmetric models by a colossal factor that overpowers the Bayes factor. That is your prerogative. I don't think it is justified by our state of knowledge"

The reason why eg, supersymmetric models come out ahead in such an analysis is that the parameter cancellations are enforced by the symmetry requirement. If you do a Bayesian analysis comparing the predictive power of susy vs non-susy for the IR couplings, ignoring the symmetry assumption, this tells you nothing about whether susy actually describes nature, it merely tells you that it's a more rigid framework. We already knew that. You haven't told us anything new. More importantly, this bears no relevance for the question whether supersymmetry is or isn't correct.

If you have any other "natural" theories in mind, you'd have to give me examples.

"Flattering though it is to be credited with construing the measure (the Bayes factor etc), I must clarify that it was pioneered by Harold Jeffreys in the 1930s, and grown in popularity especially in the last 30 years. May I earnestly beg that before lecturing or writing any more criticisms of the logical foundations of naturalness, you learn the relevant basic material."

Bayes would turn over in his grave if he knew what nonsense you are construing out of his theorem. You seem to be seriously saying that a model with additional assumptions that are entirely superfluous to explain currently available data is preferred by a Bayesian analysis. Look, just because you can calculate something doesn't mean it's relevant. As they say, garbage in, garbage out.

Best,

B.

Sabine Hossenfelder said...

Pavel,

This is an interesting comment, which I am guessing probably captures formally what many people in the community mean informally with naturalness, or why they dislike unnaturalness. As you say, this tells us nothing about whether natural theories are more likely to be correct but it quantifies the reason why unnatural theories are undesirable - they prevent us from making progress because learning (about high energies) becomes more difficult.

Best,

B.

Andrew said...

Alas we must agree to disagree. Please accept my apologies if my manner was rude. I share your passion for this topic (though not your views). Best wishes.

Paul Hayes said...

Pavel,

I would not compare the Bayesian priors/probabilities to decide which model is ultimately "true" -- which only an entity with an infinite amount of information ("a deity") can answer. Such question is in the realm of metaphysics.


No Bayesian would.

Arun said...

The Standard Model is "unnatural" if one starts with uniform log priors.
The Standard Model may be unnatural if one starts with the larger class of priors upon which neural networks work well. So what?

Has anyone tried feeding a neural net with collider data to see what predictions it makes? My guess is that neural nets, fed with data of collisions of energy 0-E will make excellent predictions of what is considered to be the background in collisions in the energy range E - E+ΔE. Predictions of new physics - not so much.

Arun said...

Observation 1: From the discussion here, it seems that a theory of low energy physics that is completely decoupled from every single detail of high energy physics is, Bayesianly speaking, the most plausible theory of all.

Observation 2:
...by a neural net trained to make predictions for future colliders based on the existing data. The neural net learns by updating the probability distribution in the parametric space via recursively applying the Bayes theorem:

We already have such a neural net. E.g., see Gordon Kane and his successive predictions for BSM physics, updated each time colliders fail to find supersymmetry.

Sabine Hossenfelder said...

Arun,

There's a lot of data analysis with neural networks, and it's not even a recent thing. Not only is the LHC data analyzed that way, but I also know that it's quite common for many small-scale colliders.

Anatol Sevashko said...

Naturalism claims "dimensionless parameters should be of order 1".

Natural "dimensionless parameters" are not rational parameters. Any rational relationship has dimensions. Irrational relations are dimensionless only.

Fredrik said...

"But note that changing the parameters of a theory is not a physical process. The parameters are whatever they are"

Thanks for declaring this so clearly. This is true when you follow what Smolin and Unger call the Newtonian schema. If we also accept this as being right, most of what you say here is easy to agree with.

I however think this thinking is exactly what leads to fallacious reasoning about "non-physical landscapes" or "non-physical" probability spaces.

This is where an evolution abstraction that learns has something to add. Here the random walk in theory space IS a physical process - but not one that can be captured by a timeless metalaw.

So I think the core of this discussion is - can we find a way to make sense of, and make somewhat rigorous, the idea of unifying dynamics as per law with the idea of evolving law?

If you say no here, then I at least understand most of your critique.

/Fredrik