
Friday, June 30, 2017

To understand the foundations of physics, study numerology

Numbers speak.
Once upon a time, we had problems in the foundations of physics. Then we solved them. That was 40 years ago. Today we spend most of our time discussing non-problems.

Here is one of these non-problems. Did you know that the universe is spatially almost flat? There is a number in the cosmological concordance model called the “curvature parameter” that, according to current observation, has a value of 0.000 plus-minus 0.005.

Why is that a problem? I don’t know. But here is the story that cosmologists tell.

From the equations of General Relativity you can calculate the dynamics of the universe. This means you get relations between the values of observable quantities today and the values they must have had in the early universe.

The contribution of curvature to the dynamics, it turns out, increases relative to that of matter and radiation as the universe expands. This means for the curvature parameter to be smaller than 0.005 today, it must have been smaller than 10^-60 or so briefly after the Big Bang.
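
(For concreteness, here is a rough back-of-the-envelope version of that statement. It assumes radiation domination all the way back to an early temperature of 10^16 GeV, a number picked purely for illustration, and it ignores the matter era, so only the order of magnitude means anything; the 0.005 bound is the only input taken from above.)

# Rough sketch, not a proper calculation: in the radiation era the curvature
# parameter Omega_k grows proportional to a^2 (in the matter era only like a),
# and the scale factor a scales as 1/T. The early temperature is an assumption.
OMEGA_K_TODAY = 0.005       # observational upper bound quoted above
T_TODAY_GEV   = 2.3e-13     # CMB temperature today (~2.7 K) expressed in GeV
T_EARLY_GEV   = 1e16        # assumed temperature "briefly after the Big Bang"

a_ratio = T_TODAY_GEV / T_EARLY_GEV          # a_early / a_today
omega_k_early = OMEGA_K_TODAY * a_ratio**2   # Omega_k scales as a^2

print(f"Omega_k back then had to be below ~{omega_k_early:.0e}")   # ~3e-60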

That, so the story goes, is bad, because where would you get such a small number from?

Well, let me ask in return, where do we get any number from anyway? Why is 10^-60 any worse than, say, 1.778, or exp(67π)?

That the curvature must have had a small value in the early universe is called the “flatness problem,” and since it’s on Wikipedia it’s officially more real than me. And it’s an important problem. It’s important because it justifies the many attempts to solve it.

The presently most popular solution to the flatness problem is inflation – a rapid period of expansion briefly after the Big Bang. Because inflation decreases the relevance of curvature contributions dramatically – by something like 200 orders of magnitude or so – you no longer have to start with some tiny value. Instead, if you start with any curvature parameter smaller than 10^197, the value today will be compatible with observation.
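
(Again just a sketch: during inflation the Hubble rate is roughly constant, so the curvature contribution falls off like exp(-2N) with the number of e-folds N. The values of N below are picked for illustration only, not taken from any particular model.)

from math import e, log10

# During inflation H ~ const, so Omega_k is proportional to 1/(a*H)^2,
# i.e. to exp(-2N) after N e-folds of expansion.
for N in (60, 115, 230):
    orders = 2 * N * log10(e)
    print(f"N = {N:3d} e-folds suppresses Omega_k by ~10^{orders:.0f}")
# N = 60 already buys ~52 orders of magnitude; a few hundred e-folds gives
# the "200 orders of magnitude or so" ballpark mentioned above.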

Ah, you might say, but clearly there are more numbers smaller than 10^197 than there are numbers smaller than 10^-60, so isn’t that an improvement?

Unfortunately, no. There are infinitely many numbers in both cases. Besides that, it’s totally irrelevant. Whatever the curvature parameter, the probability to get that specific number is zero regardless of its value. So the argument is bunk. Logical mush. Plainly wrong. Why do I keep hearing it?

Worse, if you want to pick parameters for our theories according to a uniform probability distribution on the real axis, then all parameters would come out infinitely large with probability one. Sucks. Also, doesn’t describe observations*.
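
(To spell that out: treat "uniform on the real axis" as the limit of a uniform distribution on [-L, L] as L grows. The limit is not itself a probability distribution, which is rather the point; the numbers below are arbitrary.)

# Uniform distribution on [-L, L]: P(|X| < c) = c/L for any fixed c < L.
c = 1.0                          # any fixed, finite threshold
for L in (1e3, 1e30, 1e300):
    print(f"half-width {L:.0e}:  P(|X| < {c}) = {c / L:.0e}")
# As L grows, the probability of landing in any finite range goes to zero,
# i.e. "all parameters come out infinitely large with probability one."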

And there is another problem with that argument, namely, what probability distribution are we even talking about? Where did it come from? Certainly not from General Relativity because a theory can’t predict a distribution on its own theory space. More logical mush.

If you have trouble seeing the trouble, let me ask the question differently. Suppose we’d managed to measure the curvature parameter today to a precision of 60 digits after the decimal point. Yeah, it’s not going to happen, but bear with me. Now you’d have to explain all these 60 digits – but that is as fine-tuned as a zero followed by 60 zeroes would have been!

Here is a different example for this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.

Now consider for a moment that the mass of the Higgs was actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a finetuning problem. So, erm, what was the problem again?

The cosmological constant problem is another such confusion. If you don’t know how to calculate that constant – and we don’t, because we don’t have a theory for Planck scale physics – then it’s a free parameter. You go and measure it and that’s all there is to say about it.

And there are more numerological arguments in the foundations of physics, all of which are wrong, wrong, wrong for the same reasons. The unification of the gauge couplings. The so-called WIMP-miracle (RIP). The strong CP problem. All these are numerical coincidences that supposedly need an explanation. But you can’t speak about coincidence without quantifying a probability!

Do my colleagues deliberately lie when they claim these coincidences are problems, or do they actually believe what they say? I’m not sure what’s worse, but suspect most of them actually believe it.

Many of my readers like to jump to conclusions about my opinions. But you are not one of them. You and I, therefore, both know that I did not say that inflation is bunk. Rather I said that the most common arguments for inflation are bunk. There are good arguments for inflation, but that’s a different story and shall be told another time.

And since you are among the few who actually read what I wrote, you also understand I didn’t say the cosmological constant is not a problem. I just said its value isn’t the problem. What actually needs an explanation is why it doesn’t fluctuate. Which is what vacuum fluctuations should do, and what gives rise to what Niayesh called the cosmological non-constant problem.

Enlightened as you are, you would also never think I said we shouldn’t try to explain the value of some parameter. It is always good to look for better explanations for the assumptions underlying current theories – where by “better” I mean either simpler or able to explain more.

No, what draws my ire is that most of the explanations my colleagues put forward aren’t any better than just fixing a parameter through measurement – they are worse. The reason is that the problem they are trying to solve – the smallness of some numbers – isn’t a problem. It’s merely a property they perceive as inelegant.

I therefore have a lot of sympathy for philosopher Tim Maudlin who recently complained that “attention to conceptual clarity (as opposed to calculational technique) is not part of the physics curriculum” which results in inevitable confusion – not to mention waste of time.

In response, a pseudoanonymous commenter remarked that a discussion between a physicist and a philosopher of physics is “like a debate between an experienced car mechanic and someone who has read (or perhaps skimmed) a book about cars.”

Trouble is, in the foundations of physics today most of the car mechanics are repairing cars that run just fine – and then bill you for it.

I am not opposed to using aesthetic arguments as research motivations. We all have to get our inspiration from somewhere. But I do think it’s bad science to pretend numerological arguments are anything more than appeals to beauty. That very small or very large numbers require an explanation is a belief – and it’s a belief that has been adopted by the vast majority of the community. That shouldn’t happen in any scientific discipline.

As a consequence, high energy physics and cosmology are now populated with people who don’t understand that finetuning arguments have no logical basis. The flatness “problem” is preached in textbooks. The naturalness “problem” is all over the literature. The cosmological constant “problem” is on every popular science page. And so the myths live on.

If you break down the numbers, it’s me against ten-thousand of the most intelligent people on the planet. Am I crazy? I surely am.


*Though that’s exactly what happens with bare values.

118 comments:

  1. Bee: Yikes. So, when will you post your explanation of the "better arguments for inflation"?



  2. Well what if you had a physical constant that was equal to the square root of two to a hundred digits. Would you strongly suspect that there was a reason for this strange coincidence? I'm not sure we could call it a problem but it would certainly draw attention.

    Is a constant very close to zero for no apparent reason any different? I'm not sure how to answer such questions.

  3. Thank you for this fascinating post, Dr. Hossenfelder. I have also wondered why certain combinations of fundamental constants (like Planck energy, Planck time and Planck mass) are themselves considered fundamental or meaningful in some sense. Is Planck energy somehow a measure of the energy of the Big Bang? And why is the Planck mass so huge? As you imply, one could take the Planck mass and multiply it by some tiny dimensionless factor, and then no one would then question the relevance of the resulting number. Are these problems, or are they just numbers that look strange to us? It all sounds a lot like biblical numerology, Kabbalah and all that.

    Much has been made by physicists concerning the relevance of certain odd quantities that may not have any real significance. For example, string theorists take the sum S = 1 + 2 + 3 + 4 ... to infinity, arriving at S = -1/12. They then assign great meaning to that straggling fraction. Is this a numbers game, or am I really off the mark here?

    Again, much thanks.

  4. To me, the only plausible justification for seeking 'naturalness' - seeking theories where the dimensionless constants aren't huge or tiny - would be if it led to theories that seem to work. There are many aspects of science that are hard to justify a priori, which we justify by saying "hey, it seems to work". For example, physicists use a lot of mathematical methods that are hard to justify rigorously - which, however, seem to work.

    However, when I look for the successes of 'naturalness', I'm not finding any. There must be some, or this idea wouldn't have caught on - right?

    Wikipedia only lists 3 non-successes:

    Seeking naturalness for the QCD "theta parameter" leads to the strong CP problem, because it is very small (experimentally consistent with "zero") rather than of order of magnitude unity.

    Seeking naturalness for the Higgs mass leads to the hierarchy problem, because it is 17 orders of magnitude smaller than the Planck mass that characterizes gravity. (Equivalently, the Fermi constant characterizing the strength of the Weak Force is very very large compared to the gravitational constant characterizing the strength of gravity.)

    Seeking naturalness for the cosmological constant leads to the cosmological constant problem because it is at least 40 and perhaps as much as 100 or more orders of magnitude smaller than naively expected.

    Attempts to 'solve' all three of these 'problems' have led to many famous theories that - so far - don't seem to work. What are some examples where seeking naturalness did something good?

  5. Leibniz,

    Depends on how interesting next week's conference is ;)

  6. ppnl,

    No, I wouldn't suspect that. I cannot see any objective reason for this hence I don't think it's sound scientific argumentation.

  7. Another nice post, but I'm a little fuzzy on the conceptual clarity question. Most of us know physicists who set out to achieve conceptual clarity in quantum mechanics and were never seen again.

  8. Your argument should get more of a hearing given, at least what seems to me, the powerful historical precedent of biological fine-tuning arguments (which were accepted as conclusive by most of the brightest minds around for centuries if not millennia) turning out to be a baseless result of lack of knowledge of the underlying theory. Seems like a general principle that fine-tuning arguments can never be valid in the absence of a theory that provides a probability distribution. "Hossenfelder's Law"?

    You have to name stuff after yourself in science, that's how you make the big bucks. ;)

  9. I know a little about probability, less about physics.

    Anyway, when I hear discussions of the fine-tuning issues with the cosmological constant, the Bayesian in me thinks about the prior on the parameters. For instance, we know that atoms and chemical bonds and humans and planets and stars and galaxies and clusters exist. So what does that say about our priors on the parameters? Values of parameters such that none of those things could exist shouldn't be included in the prior. So the prior shouldn't be a uniform distribution over everything. It should be constrained to some more reasonable area.

    And really not just one parameter, but the multivariate distribution on parameters. So for instance, if tuning parameter1 means atoms exist and tuning parameter2 means molecules exist, then you also need to consider them jointly so that atoms and molecules exist.

  10. Well, obvs, Eddington would tell you that 10^{-60} is very close to exp{-\alpha}. Yay again for the fine structure constant.

  11. "Whatever the curvature parameter, the probability to get that specific number is zero regardless of its value. So the argument is bunk. Logical mush. Plainly wrong. Why do I keep hearing it?"

    Because you are misunderstanding it. This does not mean that I agree with most people who make the argument that there is a flatness problem; quite the opposite, I even wrote a paper saying why the traditional arguments are wrong. (It was published in one of the handful of leading cosmology journals, so at least in this case I'm not just some pundit expounding his ideas on the web.) In addition to the references in that paper, one should read the paper by Adler and Overduin.

    Your argument is equivalent to saying that although it is improbable to win the lottery, somebody wins every week, but we don't need any explanation as to why that particular winning combination was drawn. That is correct, in the case of the lottery.

    The situation is more akin to the following problem: there is only one lottery draw (just like there is one universe), and the numbers are (drumroll, please): 1, 2, 3, 4, 5, 6, and 7, in that order. This is a sequence which was deemed to be special before the numbers were drawn. That is the important point which you seem to be missing. Yes, you could say that this sequence is just as probable as any other 7-out-of-49 sequence, which is true, but you would be missing the point.

    As noted in the papers I provided links to in the other thread (all except Adler and Overduin---which I had somehow missed before I wrote my paper---are cited in my paper), this is not a problem, partly because the conventional formulation is unclear about what is meant by "unlikely", but mainly as a result of arguing too far from analogy. Assuming a Friedmann universe (which is the context), it is not unlikely that we observe the universe to be nearly flat. Do the maths; don't argue from analogy.

    It is difficult to explain this in blog comments, though if you search the web you can probably find enough from me on this topic to explain it in detail. None of the papers I have cited has been shown to be wrong (neither in the refereed literature nor anywhere else), so the answer is to read and understand these papers. If you think that they get something wrong, publish a paper pointing out what is wrong. Until this happens, we should assume that the problem has been solved.

    In summary: yes, your statement that all precise values are infinitely unlikely is true, but it misses the point, for two reasons. First, the problem is not to explain a value of exactly 1 for the sum of lambda and Omega (corresponding to a zero-curvature, flat universe), but to explain a value near 1. Second, the value 1 is not just some random value, but has a particular physical significance.

    Red herring: The arguments in the papers I cited completely explain the classical flatness problem. If the sum of lambda and Omega is found to be 1 to within one per mil or better, then an additional explanation is needed. Inflation does indeed provide this, and these days there is good observational evidence that inflation did indeed occur. If you don't know what the "classical flatness problem" is, which has nothing to do with inflation (though inflation trivially solves it), then the first step would be to read my paper and find that out.

    Some of the suggestions (imperative mode above) are aimed at the quote from Bee, some are aimed at people who still claim that there is a flatness problem in classical cosmology.

  12. That would be exp(-1/alpha), whoops.

  13. Do you count the problem of the arrow of time (i.e., why we don't see a lot of white holes around) as numerology as well? This can also be described by a number - the entropy of the early universe. Penrose has a distribution in mind when writing about it, but this is not universally accepted. To me at least, this problem seems to have a different character than the ones you talk about.

  14. Hi Sabine,

    I hope that's not about our previous discussion on critical density, because then you really did not understand what I meant.

    Best,
    J.



  15. Hmmm, I don't think I can go there with you. It is true that a physical constant could be anything. But having it so close to a simple math constant triggers my bayesian thinking. I see it as a pattern that needs an explanation.


  16. Another way to look at it is to think of the Kolmogorov complexity of the physical constant. If the constant is just random then its Kolmogorov complexity should be very high with a very high probability. If it is equal to 0, pi or the square root of two to hundreds of decimal places then it has a very low Kolmogorov complexity. That low complexity implies an underlying mechanism that limits the pool of values that the physical constant can be.

    In short if the constant is structured it implies a structured mechanism that produced it.
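
    For instance, as a toy illustration and nothing more: the short program below reproduces the square root of two to 300 digits, so a constant that agreed with it would have a description far shorter than its digit string, whereas a generic 300-digit number has no description much shorter than the digits themselves.

    from decimal import Decimal, getcontext

    # A short program that outputs sqrt(2) to 300 digits - low Kolmogorov
    # complexity despite the long, random-looking digit string.
    getcontext().prec = 300
    print(Decimal(2).sqrt())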

  17. I would dispute your argument about probability distribution of constants. If you have just two numbers then it's true, they can be anywhere on the real axis and we can't assign a probability to them.

    But we have more than two numbers and for some reason most of them are clustered fairly close to each other compared to outliers. So there's still a question of "why?"

  18. Sure, "naturalness problems" are essentially philosophical. But physics (at least particle physics & cosmology) is reductionist. Or at least one could say the history of science is full of examples of reductions being surprisingly successful. So we might expect to explain and extend our current theories in simpler and more concise ways. And if we can do so, might one indication of this be that many/most/all dimensionless constants work out to simple forms or at least have an order of magnitude of 1? But now that I think about it, there isn't really any historical tradition to support this and any reasoning behind that is probably gloopey and imprecise.

    I've often pondered how qualities of theories like simplicity and naturalness, and our expectation that "correct" theories (whatever that really means) will have these qualities, is a kind of religious faith. Well, it's not an absolute faith but an expectation that's surprisingly strong and ubiquitous among physicists. But equally uncanny is that reductionism has worked as well as it has.

    If naturalness arguments are to be discarded, what other intangibles are to be used to sell a theory? Ultimately one has to convince people to spend time and money to test any given theory. What arguments are good ones? Why did so many scientists believe General Relativity well in advance of any strong evidence favoring it?

  19. My understanding of the flatness problem is that you've got two numbers: a theoretical number from your theory which tells you the value of the critical density, and then you've got a physical amount of matter and energy, which presumably could be different from the theoretical critical density. And the fact that the two apparently unrelated numbers appear to be extremely close therefore appears coincidental, but it gives you that very small curvature. So it's not really a question of "Where do you get such a small number from?", it's more "two completely different things coincidentally taking almost exactly the same value". No?
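
    (For reference, a minimal sketch of the theoretical number in question, assuming H0 = 70 km/s/Mpc; the exact value doesn't matter for the point.)

    from math import pi

    G  = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2
    H0 = 70 * 1000 / 3.086e22            # 70 km/s/Mpc converted to 1/s
    rho_crit = 3 * H0**2 / (8 * pi * G)  # critical density from the Friedmann equation
    print(f"critical density ~ {rho_crit:.1e} kg/m^3")   # ~9.2e-27 kg/m^3
    # Omega is the measured density divided by this number; the observation
    # quoted in the post is that the ratio is 1 to within half a percent.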

  20. Bee,
    No - you are not crazy..
    I am not a theoretical physicist, nor a cosmologist, but reading all those books about those "problems" - I don't see the point. If you can't find a more basic theory that explains the constants - any number can be just fine....

  21. I don't even make it to the level of someone who had read part of a manual about car repair, but I have been wondering about this kind of argument for years, so this is very reassuring.

  22. "... there are more numbers smaller than 10 to the 197 than there are numbers smaller than 10 to the -60, so isn’t that an improvement? Unfortunately, no. There are infinitely many numbers in both cases."
    Dr. B, I hope you'll be the one to finally help me understand how probabilities are supposed to work with real numbers. If there are infinitely many of them that are between zero and one, and infinitely many that aren't, does that make it 50/50 that a random real number will be between zero and one? And also 50/50 that a random real number will be between 0 and 1000? (And wouldn't it take an infinite amount of time to generate a random real number?)
    So when you say "if you want to pick parameters for our theories according to a uniform probability distribution on the real axis, then all parameters would come out infinitely large with probability one," wouldn't they also come out infinitely small with the same probability? "Logical mush" sounds about right to me.

  23. Nice rant!

    How do you teach a smart person how to think?

  24. "and we don’t, because we have a theory for Planck scale physics"

    I suspect you meant to say "because we *don't* have a theory for Planck scale physics"?

    Although I understand your point, it seems to me that inflation is at least sort of consistent with the flatness value we have, i.e. that it would tend to be small (although I can't quantify that, as you say). If we could rule out very large values by the anthropic principle maybe we might be able to quantify it roughly.

    I agree that fine-tuning arguments in general don't make sense, though.

  25. Dear Sabine,

    Long time (very satisfied!) reader, first time commenter here.

    You are absolutely right that the field and the literature is full of sloppy arguments about naturalness. The field is (for the most part) totally muddled and unclear, unable to state exactly what assumptions are being made, or even what is physically meaningful! This is further confounded by inconsistent and imprecise definitions for e.g. naturalness (global or local? O(1) parameters or fine-tuning?), the hierarchy problem (naturalness for mass parameters? or a failure of dimensional analysis?), and the cutoff scale (the cutoff regulator? or the loose scale of new physics? or a physical lattice-type momentum cutoff?).

    I was recently compelled to write a comprehensive essay about all this to try to shake up the topic, provoke some discussion, and see what falls out. It's overdue in my opinion. I won't link to that here, out of respect, but for those who are interested it can be found on my blog. It's directly relevant to the topic at hand but much more rigorous than a comment could ever be.

    I've come to find a Bayesian framework with prior probability densities on input parameters as the nicest way to think about naturalness. It captures assumptions clearly and rigorously. When you carry through the mathematics of this framework you do find Bayesian preference essentially for models without sensitive dependence on initial conditions. But that is totally dependent on your priors, which is in my opinion fine and expected, naturalness will always be an argument based on subjective assumptions. One is free to make those assumptions as a guiding principle to constrain theory space or research programs, as you said, but we should all be able to openly acknowledge that nature is also free to choose a theory with inputs which go completely against these assumptions. It is a totally unobjectionable logical possibility. Naturalness is not gospel.
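
    As a toy illustration of how prior-dependent this is (every number below is made up for the example): take two input parameters a and b whose difference plays the role of a small observed quantity.

    import random

    random.seed(0)
    N, target = 1_000_000, 1e-3        # 'observed' smallness of a - b

    # Prior 1: a and b independent and uniform on [-1, 1]
    flat = sum(abs(random.uniform(-1, 1) - random.uniform(-1, 1)) < target
               for _ in range(N)) / N

    # Prior 2: the combination a - b itself drawn with width 'target'
    narrow = sum(abs(random.gauss(0.0, target)) < target for _ in range(N)) / N

    print(f"P(|a-b| < {target}) under prior 1: ~{flat:.0e}")    # ~1e-3, looks 'finetuned'
    print(f"P(|a-b| < {target}) under prior 2: ~{narrow:.2f}")  # ~0.68, nothing to explain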

  26. I can hardly wait for your rant. Using the field equations, the cosmological constant, added in for the sake of completeness, can be explained in classical terms as the difference between inertial and gravitational mass. Bloody good read. Have fun in Europe.

  27. Very nice description of the modern physics culture. I’ve been influenced by this thinking through reading Greene, Lincoln, Hartle, Carroll, Randall, etc. It would be horrible if the appeal of mathematical elegance is leading physics astray.

    Numerology and aesthetics have deep roots. Dirac said our equations must be beautiful and he wrote an article in Nature arguing that the universe is characterized by several numbers that seem to be connected in a simple way. (The Strangest Man – p. 289)

    Did Sommerfeld first notice that 1/α, the inverse of the dimensionless fine structure constant, is almost a prime number? It fascinated Eddington, Pauli, and Feynman.

    You didn’t allude to arguments based on the Copernican principle, which seems to have a lot of appeal to cosmologists who want to extend it so that even our universe is not a special universe and our existence in time is also not unusual. Multiverse theories (Brian Greene describes eight of them) undercut the uniqueness of human life. The aggressive atheists seem to like this. (I lean pacifist on atheism.)

    In addition to the Intensity Frontier, the Energy Frontier, and the Cosmic Frontier, (https://www.fnal.gov/directorate/plan_for_discovery/index.shtml), the experimentalists seem to be operating at the “sensitivity frontier”. Example: detecting 0νββ, which requires insanely delicate equipment, it seems to me. Proving anything new is very hard.

    Fermilab: “The Muon g-2 experimenters will examine the precession of muons that are subjected to a magnetic field. The main goal is to test the Standard Model's predictions of this value by measuring the precession rate experimentally to a precision of 0.14 parts per million. If there is an inconsistency, it could indicate the Standard Model is incomplete and in need of revision.” (0.0011659209 versus 0.0011659180 calculated)

    Kamioka: “Grand Unification Dream Kept at Bay - Physicists have failed to find disintegrating protons, throwing into limbo the beloved theory that the forces of nature were unified at the beginning of time.” The proton must live at least 16 billion trillion trillion years.

    There are many elegant theories beyond the standard model, and a lot of speculation about cosmological questions. It looks to this layperson that scientists are “overdriving their headlights” based on the theories we are confident in.

  28. TYPO: "we have a theory for Planck scale physics"

  29. Jim V, TransparencyCNP:

    Thanks, I have fixed that typo.

  30. There is literally no possible probability distribution that would make it so that any number below 10^-60 is more likely than any number below 10^261. One of these ranges is contained within the other, so regardless of an initial probability distribution, a hypothesis that can explain the same observations with the larger range of values has more explanatory power than the notion that it's down to coincidence unless P(x>10^-60) = 0.

    Not saying I disagree in general (mathematics is my field and physics decidedly isn't), but increasing the space of initial parameters that can lead to current observations does make something, in the vaguest possible sense, 'more likely' unless the probability distribution makes that expanded space of parameters impossible (p values of 0).

    Basically, if you know the probability distribution of some parameter makes it unnecessary, you can ignore additional 'explanations,' but that doesn't seem prudent in the case of unknown probabilities.

    It's Friday and I'm drunk, so I probably explained this poorly (and have no business responding at all). Maybe I'll give this another read tomorrow.

  31. Ethereal235,

    Take a Gaussian peaked around zero with width 10^-60, done. Yeah, maybe give this another read tomorrow.

    Replies
    1. Sorry to be pedantic, but this is incorrect, and Ethereal235 is right. Even with the Gaussian probability distribution you suggest, it is an indisputable mathematical fact that P(X<10^261) > P(X<10^-60). And it is also true that - if the > is replaced by >= - this statement necessarily holds independent of the choice of the probability distribution.

      This is only tangentially related to the main point of your argument. But it was still an incorrect statement.

  32. John Baez,

    The three most commonly named examples in favor of naturalness are the self-energy of the electron, the rho-meson (the charged-neutral pion mass difference), and the prediction of the charm quark (absence of FCNC). Only the latter was an actual prediction. Which basically means: sometimes it works, sometimes it doesn't - and nobody understands why. It is therefore not a criterion physicists should be using, though I suspect that it has an element that can be made more rigorous in certain circumstances. Time will tell. Best,

    B.

  33. Jim,

    It all depends on what probability distribution you use! If you insist on using a uniform probability distribution on the real axis (which I do not encourage you to do), then the result is that the probability to get a number in any finite interval is zero and in any infinite interval it's one. You can then try to introduce some measures to compare infinities and so on, but really you'd be better off having a normalizable distribution to begin with.

  34. Andrew,

    Yeah, so there are two values "coincidentally" close to each other. I am asking you why that's something worthy of your attention. Please quantify "coincidental."

  35. ppnl,

    Indeed, there is a way to make sense of the number game, which would be to study the complexity of the number. But for all I know nobody is doing this (not even using this argument) and also, I don't know how you would even do that. Note that for most of these constants we only have a few digits.

  36. akidbelle,

    No, this wasn't triggered by our exchange on the other post. Resemblance is coincidental.

  37. Cyberax,

    But that isn't so. Take the values of masses in the SM - they span a range of 11 orders of magnitude or so. And the CC is actually *in* that range.

  38. Phillip,

    Please note that I never spoke of any value being exact this or exact that (which would require infinitely many digits). Your point seems to be that some particular combination of constants is more important because it was given special significance before being measured, even though you agree with me that actually it's not special in and of itself. Sorry but I can't make sense of this. My best interpretation is that this is a sociological argument of the sort that you should trust a prediction more if it was actually made before the measurement.

  39. an excellent column (as usual). and John Baez's comment that "the only plausible justification for seeking 'naturalness' would be if it led to theories that seem to work" seems correct IMO (the alternative non-scientific explanation is "because God likes that number"). as John von Neumann said "The sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work."

  40. Feeling "crazy"? It's only natural! Basically it's a linguistic problem, or how to properly express the dissonance caused by something not included in the original package of this whole reductionist program. The human mind is so made by nature as to feel satisfaction only when the so-called "Kolmogorov complexity" gets dropped somehow. That is to say, we always feel that we see something more "meaningful" or less random when we can compress it. When we no longer have to deal with such monstrosities as those lurking in a Cantorian continuum. It seems to me this whole story with numbers is just due to the program having failed to map the original empirical data to a set less complex.

  41. "That's a different story and shall be told another time" A quote from the neverending story?
    Also:
    I found it really interesting because, even though i'm still an undergraduate "physicist", i already hear this quite often. But normally i wouldn't believe it's really that simple and everybody is just acting irrational. But i probably have to see for myself.

  42. PhysicsDude,

    Yes, indeed, from the Neverending Story :)

    Look and see for yourself. I spent a decade or so thinking there must be something more here that I just don't understand but there isn't.

  43. Which of the arguments pro or contra the specialness of certain numbers continue to hold in other numeral systems? 10^197 doesn't look spectacular in base 10^196. Would that be a route to explain away some specialness or other?

  44. nordover,

    It has nothing to do with the numeral system - that just changes how you represent numbers. Though maybe you could argue that if we had 10^60 fingers then that number would appear pretty natural to us ;)

  45. I'm wondering why none of the "naturalness" people wanted to give you the probability distribution. It is pretty obvious when you listen to them for a while :) "Natural" probability distribution of log(dimensionless quantity) is a Gaussian with mean 1 and sigma 1-2. This way 1 is absolutely natural, pi is natural, electromagnetic coupling constant is 1-2 sigma, so OK, but 10^-60 is at least 30 sigma away, so it is absolutely unnatural and demands immediate explanation :)
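
    Concretely, centering log10 of the constant on 0 and taking sigma = 2 (both numbers are assumptions made just for this sketch):

    from math import log10

    sigma = 2.0
    for x in (1.0, 3.14159, 1 / 137.0, 1e-60):
        print(f"x = {x:10.3g} -> {abs(log10(x)) / sigma:5.1f} sigma")
    # 1 and pi come out 'natural', 1/137 sits at about one sigma, and
    # 10^-60 is 30 sigma out - hence 'demands immediate explanation'.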

  46. Jerzy,

    Oh, yeah, of course that's what they say. And, as you, they don't seem to realize that's a circular argument. You put in "natural" numbers to define what's a "natural" number. If you put in a width of one (or sqrt pi, etc), you get out a width of one. Likewise, if you put in a width of 10^-60, you get out a width of 10^-60. What will you do now, walk off or continue to talk down to me?

  47. It was a joke :) I'm on your side, really...

  48. Newton could have obsessed on why his gravitational constant "G" represented such a weak force that demonstrating gravitational attraction between small bodies in the laboratory should be rather difficult. He wouldn't have had the theoretical equipment to handle that. (Newton did obsess about other unproductive issues.)

    Doing scientific research well seems to involve identifying the questions at the boundaries of scientific knowledge that can be fruitfully attacked, and can potentially be answered. Otherwise one would spend one's scientific career rather barrenly, like Albert Einstein spent his last decades.

    I'm not advocating putting off difficult questions. One must use reason, intuition and good taste to see what to spend the majority of one's time on, and what to occasionally revisit and reassess if progress is plausible.

    ---
    IMO, particle physicists today are in the state that the calculus and infinite series and such were in before the notions of limits, convergence of series, real numbers, etc., were formalized. The current seeming drought in experimental results might be a good time to shore up the foundations. I'm struck by this remark of Thomas Thiemann's from 2001:

    "Ashamingly, the only quantum fields that we fully understand to date in four dimensions are free quantum fields on four-dimensional Minkowski space. Formulated more provocatively: In four dimensions we only understand an (infinite) collection of uncoupled harmonic oscillators on Minkowski space.""

    Perhaps the next great era of theoretical physics is the formalization of the intuitions that physicists have built up over the past many decades. Without this, and in the absence of guidance from experiment, there is no way to know fully the limits of the frameworks of our theories. It is the inadequacy of the framework that leads to adding of hidden dimensions, additional broken symmetries, multiverses and the like willy-nilly to solve what, as Bee points out, may be scientifically meaningless problems.






  49. Thanks for the reply. About finite/infinite intervals, I was hoping to use the Anthropic Principle to rule out infinite intervals. Then with a finite possible range (-a to b) the probability measure could be something like 2|c|/(a+b), where c is the current curvature, -a is the minimum curvature and b is the maximum curvature the universe could have at this time and support life (if there are such values). Then smaller values of c would be more likely under inflation compared to the naive value above - at least for specific models of inflation.

    This argument even if valid would only apply as minor evidence for inflation, not to any "natural" or "unnatural" number itself, of course. The point probability of c within [-a,b] would still be zero, as you say.

  50. Jerzy,

    Sorry for the misunderstanding.

  51. "Naturalness" is an interesting subject. It shows up in mathematics as well as physics. For a famous example, recall the “Basel number”, i.e., the sum of the reciprocal square integers 1/1 + 1/4 + 1/9 + 1/16 + … = 1.644934… Several mathematicians (starting in the 17th century) worked hard to show that the sum was finite, and then to evaluate the value, digit by digit, which was not easy because it converges so slowly. But they were dis-satisfied with simply accepting this as an independent real constant – not because it was extremely small or big, but because it is somehow bothersome to accept it as another independent real number. Then Euler showed that it equals pi^2/6, and everyone was delighted. In fact, Euler found simple expressions (in powers of pi) for the sum of any reciprocal even powers. Great. What about the sum of reciprocal cubes? No one has ever been able to find a similar simple expression for that, after much effort was spent searching. The intuition of the Bernoulli brothers and Euler, et al, that such a simply defined sum must be expressible in terms of some standard transcendental numbers like pi was brilliantly confirmed for zeta(2), but failed miserably for zeta(3). Should we call zeta(3) an unnatural number?

    Similarly, we all know that Eddington convinced himself that 1/alpha must equal exactly the integer 137 (based on a weird chain of reasoning), but more precise measurements showed that it actually equals 137.035999139… This number isn’t especially huge or tiny, but it may be bothersome to some people that it is (apparently) just an arbitrary real number, with infinitely many decimal digits having no simple rule of formation. Some numerologists dream of finding a simple expression for alpha, similar to Euler’s spectacular success with the Basel number. Since an arbitrary real number may have infinite complexity (although we can never prove it for any given number), some physicists may have the intuition that no physical constant can have infinite complexity, in the sense that it must be definable in some “finitistic” way (in Hilbert’s sense). But the inability to ever prove (see Chaitin) the complexity of arbitrarily complex numbers makes this intuition difficult to deploy.

    By the way, I see people here referring to a “uniform distribution over the real numbers”, but mathematically there is no such thing, is there? (There is a famous mathematical riddle based on this fact.) Is this referring to some kind of “non-normalizable” distribution? If so, isn’t that just another way of saying “non-well-defined”?

    Replies
    1. I really enjoy that math and physics people have these kinds of thoughts, I actually find it charming and hilarious.

      It's so indignant isn't it? Like the universe owes them an explanation.

  52. "If you break down the numbers, it’s me against ten-thousand of the most intelligent people on the planet. Am I crazy? I surely am."

    Not that I necessarily subscribe to all your points of view, but fortunately for me, I have identified you as pretty much the smartest person on this globe, so I am 100% with you on this.

    It helps that your argument is straightforward enough that even an 8th grader freshly introduced to probability theory can see the ironclad logic.

  53. Topher wrote: "I've often pondered how qualities of theories like simplicity and naturalness, and our expectation that "correct" theories (whatever that really means) will have these qualities, is a kind of religious faith. Well, it's not an absolute faith but an expectation that's surprisingly strong and ubiquitous among physicists. But equally uncanny is that reductionism has worked as well as it has."

    The idea that simple theories are good is not a "religious faith" - at least not for the reasonable among us. It's just an empirical fact that simple theories often work pretty well. When they stop working in some situation, then we try more complicated theories.

    I believe the reason physicists are more enamored of simplicity than other scientists is that physics is, more or less by definition, about the simple stuff. If some situation gets really complicated, we often stop calling it physics: we call it chemistry, or biology, or something.

    As Sabine points out, the evidence that "natural" theories works well is much weaker.

  54. John, Topher,

    Naturalness (and its opposite, finetuning) arguments aren't the same as simplicity arguments. Simplicity is an argument, roughly, about the number of axioms, while naturalness is an argument about the type of axioms. As I pointed out in my post, all too often naturalness arguments result in a theory that is actually *less* simple. Typically because people introduce various fields with various 'natural' parameters and some dynamics for these fields just to give rise to one 'unnatural' parameter.

  55. Amos,

    Interesting point. Maybe one could argue that from a math pov the order of the monster group is "natural" which will make all our naturalness problems in physics disappear with a poof.

    As I alluded to above, a uniform distribution over the real axis is a bad idea, meaning that mathematicians will pop up in every corner and start yelling at you ;) I mentioned this only because a constant function is the first thing that comes to people's mind as "natural". Having been reminded it can't be normalized, the next thing they'll try is a Gaussian with width 1.

    It's an interesting question, psychologically, why people think of some functions as 'natural' and not others. One oddity, for example, is that monomials (starting with the constant) are 'natural' while pretty much any other basis of functions isn't. Though arguably it would be much nicer to deal with an orthogonal and normalizable basis. In any case, the basis of functions is a good way to hide 'human choice'.

  56. JimV,

    Yes, you can put some anthropic constraints on the curvature. That surely must have been done somewhere in the literature.

  57. Your position is better, IMHO, than that of the strong proponents of naturalness (most other theoretical physicists). But perhaps you're rejecting it too decisively. My thesis advisor, a topologist, used to say "there are only three numbers: 0, 1 and infinity". Obviously he wasn't entirely serious! But it's a fact that in (at least some branches of) mathematics, 1 is indeed a very "natural" number. Seems possible to apply that intuition to physics as well.

    SH >> ... what probability distribution are we even talking about? Where did it come from? Certainly not from General Relativity ...

    Surely the distribution in question came from the unknown physical process which produced the Big Bang. Whatever it was, wouldn't you guess, provisionally, that it would produce curvatures of limited magnitude? Although not directly relevant, note there are many verified, observed instances of local spacetime curvature exceeding 10^-60, but none anywhere near 10^197.

    If we must choose between considering "naturalness" 1) a strong physical principle, capable of validating a theory, or 2) illogical and meaningless, I'll go with 2). But I prefer a middle road. It's an intuitive guide when you've got nothing else to go on, but proves nothing. Similar to Occam's razor, or aesthetic arguments, but quite a bit weaker. If you disagree, well, you know more about it than I do.

    SH >> Do my colleagues deliberately lie when they claim these coincidences are problems, or do they actually believe what they say?

    I bet many of the "big names" believe it. Others are just following the herd.

    SH >> Many of my readers like [to] jump to conclusions about my opinions. But you are not one of them ... And since you are among the few who actually read what I wrote ... Enlightened as you are ...

    Thank you!

    SH >> ... it’s me against ten-thousand of the most intelligent people on the planet.

    Similarly, Tim Maudlin once said: "... how did so many prominent and brilliant physicists manage to get so confused?" This reflects a very common fallacy: that IQ and "common sense" are positively correlated. But surprisingly, the correlation is zero. Why should the ability to mentally manipulate symbols, numbers and letters go hand in hand with the ability to make good value judgments? It doesn't. So don't be surprised when a bunch of smart people, for instance physicists, fall for a fad like naturalness or infloss, dogmatically holding up progress for 40 years. Humans often do this sort of thing, be they physicists, priests, sports fans, ditch-diggers, whatever.

    SH >> Am I crazy?

    Only your psychologist knows for sure :-) Consider the vital role Mordehai Milgrom played for decades, advocating MOND against all those Dark Matter-ist's. One lone person thinking for him or herself, putting forth unpopular but interesting and valid views, is of great value - whether right, wrong or neither. Please keep up the good work.

  58. You may try to think of it "Cosmographically" as well!
    "The idea that some of today's multitude of 'physical' constants cannot be calculated from first principles of physics but have to be attributed to a basically accidental structure of a cosmographic vacuum in our universe..."
    https://arxiv.org/pdf/hep-ph/0702168.pdf

  59. Without arguing about naturalness (yeah, it is cool when it is real), one way to compare probability distributions is based on their entropies.

  60. Well, I would say a natural theory *is* simpler because a simple geometric relationship will lead to a number of the order of one. For example, the formula for electric field having the 4*pi term because it spreads like the surface of a sphere.
    Also, like I said earlier, the 1.0 value for omega really represents an equality between the density of the universe and the theoretical critical density: that number 1 isn't just a number, it's what the number represents that is the important thing. If you get a number close to 1 in your fundamental theory then it shows a fundamental connection between two quantities which you never knew was there.
    The number 1 is special in that respect. If you get a 1, you've eliminated a crazy arbitrary large number and replaced that with an interesting connection. Anyway, that's what I think.

  61. The point I am trying to make is that saying that constants of order one are just "aesthetic judgements" doesn't really express that there are solid reasons in physics why a constant should take the value 1 or be of order 1, and why constants with those values are telling you something deeply fundamental about what is going on.

  62. Physicists are people and like all people have their blind spots: read "Precision Tests of Quantum Mechanics," PRL 30 January 1989, and then read "Weinberg's Nonlinear Quantum Mechanics and the Einstein-Podolky-Rosen Paradox," PRL 28 January 1991.


  63. Meh, I still disagree. I think my Kolmogorov complexity argument shows the right way to think about this but it is missing an important element. It isn't the complexity of the number so much as the a priori nature of the number. Complexity is simply a stand-in for that.

    To see this imagine I'm on a low gravity asteroid throwing a baseball up. I notice that sometimes I throw the baseball and it falls back. Other times I throw hard enough that it has escape velocity and keeps going. It should quickly be apparent which will happen with any throw. So I start a game where I try to throw the ball so that it goes as high as possible but ultimately still falls back.

    Now say I throw the ball and after 60 seconds I still can't tell if it's going to fall back. Pretty good. After an hour I still can't tell. That's a little creepy. After a year I still can't tell. That's deeply disturbing.

    I hope we all can agree that that is highly unlikely and we would start looking for answers.

    But why? A number equal to the escape velocity to hundreds of decimal places isn't any more or less likely than any other number. And you can't appeal to Kolmogorov complexity since the escape velocity is likely a very complex number.

    I think it is the a priori nature of the number. The number was well defined before the ball was thrown. The chance of hitting a previously defined number is very small.

    At the big bang the universe threw a ball. Thirteen billion years later we still can't tell if that ball will fall back. That seems unnatural.

  64. ... a uniform distribution over the real axis is a bad idea, meaning that mathematicians will pop up in every corner and start yelling at you ;)

    Yes. I suppose people sometimes talk about a uniform distribution on the reals in the same spirit that they talk about the Dirac delta “function”, which of course isn’t a function. Since the Fourier transform of a delta function (at the origin) is a uniform “distribution”, these two notions go together. But this would work only for “explaining” a constant x=0. In general the Fourier transform of a delta function at arbitrary x is f(n) = e^(-2pi i n x), which is the simple example that Tim and BHG are discussing in the other thread! :)

  65. Here's a math pov wrt Amos comment. TD Lee's "laws of physicists" say that without the empirical, theorists tend to drift; while without the theoretical, experimentalists tend to falter. Perhaps the big issue is the lack of the theoretical.

    Physical quantities have units, but even in physics, no one is surprised when 2pi or 4pi shows up. We think of pi and e as universal constants whose appearance is justified by mathematical theory. We think nothing of assuming a ratio of 3.14159265358979323846264... between two physical quantities and don't think of any of the digits as being unreliable.

    Likewise, the Euler-Mascheroni constant and the Feigenbaum constants are dimensionless numerical quantities that pop purely out of mathematical theory. But perhaps there is an issue here with us as mathematicians. All our universal constants tend to be between 0 and 10, but it is all but certain that there are others that are orders of magnitude larger -- we just don't look for them. The ratio 2pi is that of the circumference of a 2d circle to its radius, so what about the hypersurface area of an n-dimensional sphere to its radius?

    Would the naturalness and fine tuning issues not be ameliorated if it turned out that there are some large magnitude analogues of 2pi that appear naturally in the correct theory?
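
    One can check that last question directly: the dimensionless factor in front of r^(n-1) in the surface area of an n-ball is 2*pi^(n/2)/Gamma(n/2), and the values of n below are arbitrary.

    from math import pi, gamma

    for n in (2, 3, 4, 7, 10, 20):
        factor = 2 * pi ** (n / 2) / gamma(n / 2)
        print(f"n = {n:2d}: surface area / r^(n-1) = {factor:7.3f}")
    # The factor peaks near n = 7 (about 33) and then falls off again,
    # so it never strays all that far from order one.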



  66. Hi Sabine,
    Thanks for the post. You have commented later about three examples where naturalness "works". Could you elaborate on the rho case? How do you relate the rho with the mass difference for pions? I am familiar with the other two, but not with the explanation of the difference between pion masses in terms of the rho. Usually it is explained by quark mass difference and EM effects. Thanks.

  67. Sorry Bee, but this post shows a misunderstanding of what the problem of naturalness is in high energy physics. It isn't that you need to explain a number and then argue why the number isn't order one. There isn't any normal distribution over the reals as you said. Indeed a number by itself would be fine (this is why unimodular gravity partially helps the CC problem).

    The real problem in the case of the hierarchy problem is that it's NOT just a number. Rather it's a quantity that is linked to physics associated with a scale. The Higgs mass squared (measured at the electroweak scale) is the result of many equations where each one is being dragged towards the Planck scale. This is still not a problem at the level of the effective field theory (renormalization parameters must match experiment). It only becomes a real problem when you write down a new theory that explains the mass of the Higgs. Then those cancellations that were just some UV parameters in the effective theory become real physical values that must sensitively conspire with quantities in the deep infrared.

    Needless to say this has never been seen before in the history of physics and would invalidate all calculations ever made based on dimensional analysis. It would be like if you needed to know tiny details about how a set of quarks were scattering in order to explain the macroscopic paths of baseballs on Alpha Centauri. This UV sensitivity is the heart of the naturalness problem, and is something that is genuinely unheard of...

  68. Bee, I'm not a scientist or a mathematician, so it's taken me a while to grok your argument... And though I lack the vocabulary to express it (or understand it) without stepping all over my feet, I'd like to try anyway: Say we measure two values. The first is 1 and the second is 2. The naturalness argument, as I understand it, might assert that those values are close enough that their proximity is suggestive of...something. Is it your argument that it's not possible to judge whether these values are near enough to each other to be "meaningful" because there are infinitely many real numbers between them--and that a value could fall anywhere in that continuum? And that there is no obvious scale that would tell us whether they are, in fact, close enough that their supposed proximity requires explanation?

    Anyway thank you for what you do!

  69. Sabine, there's no such thing as a Gaussian distribution with a 'width' of 10^-60. Normally distributed variables have a mean and a variance, but there's no value x at which P(X > x) = 0. It can become vanishingly small, but never actually hits zero; if 10^-60 is a bit past 37 standard deviations away, then the probability of exceeding that value can no longer be represented using double precision numbers on a computer, and the computer will tell you the probability is 0, but it still isn't actually 0. Maybe you meant the uniform distribution?

  70. ppnl,

    You've thrown a lot of balls but we have only one universe. If you can collect samples, you can create probability estimates from them. That's why, e.g., we know there is nothing "unnatural" about the distance between the sun and the other stars in the galaxy (as was once thought). Note that chalking this up to initial conditions does zero to help you in terms of an explanation.

    ReplyDelete
  71. Sesh, Ethereal,

    You are right, I thought it was referring to two different probability distributions. My answer was meant to say, easy enough to find a probability distribution in which P(X<10^-60) ~ 1. I see now what you say and that the relevant paragraph in my post is misleading.

    ReplyDelete
  72. JRPS,

    From what I recall, the question is why the mass-squared difference between the charged and neutral pions is small (compared to their absolute mass), which is "unnatural" and signals there's an effective theory that must break down at a certain energy. And it does, because at that energy you start producing rho mesons. I don't have a reference at hand, but will look up whether I find something useful. My point above was to say that this wasn't a pre-diction, it was a post-diction. Best,

    B.

    ReplyDelete
  73. Haelfix,

    I am not saying naturalness is generally wrong which wouldn't make any sense seeing that the SM is natural (besides the Higgs mass). I'm saying we don't understand under which circumstances it works. "Sensitively conspire" is just another way to talk about a probability distribution.

    If you want to argue that, unless your parameters in the UV were very precisely chosen, you won't get what we observe in the IR, then you have to explain what you mean by 'precisely chosen.' All I am saying is that there's always a probability distribution according to which the initial values were focused on trajectories close enough together that they all come out somewhere near the SM - even if the trajectories diverge. The usually unstated assumption is that the distribution is not focused. I'm asking: how do you know?

    I understand that a separation of scales is the holy grail of effective field theory, not sure why you think you have to tell me that.

    ReplyDelete
  74. Similarly, Tim Maudlin once said: "... how did so many prominent and brilliant physicists manage to get so confused?"

    Ironic. Probability has confused many prominent and brilliant philosophers too. (As at least some physicists/philosophers - QBists, relationalists and other (neo-)Copenhagenists - have understood, the proper conclusion is not that "actual physics is non-local" but that "[quantum] probability is local".)

    ReplyDelete
  75. George Rush's thesis advisor: "there are only three numbers: 0, 1 and infinity":

    This is something that I find appealing as well. From this point of view, the fact that the Higgs mass at 125.1 GeV seems to sit exactly on top of the boundary of the stability region (i.e. is infinitely finetuned) is much more natural than if it were finetuned merely to a factor of 0.001. That the standard model seems to be infinitely finetuned seems to me one of the major results from the LHC.

    ReplyDelete
  76. Related to the point above, I think you are right to frame the problems of "naturalness" in terms of prior probability distributions, because this enables clear thinking on the topic. But I feel your argument slightly misses the point: standard naturalness objections *are* based on thinking about the probability distribution!

    For example, let's consider a parameter X which could in principle take any value up to say, 10^120 or some similarly large value, and imagine that we know of no physical symmetry or mechanism to restrict its value to some smaller range. This means our prior on X should be something agnostic over the entire available range up to 10^120. One can argue whether the best such prior is uniform, log-uniform, lognormal or whatever, but it doesn't matter much: given the enormous disparity in the scales, for any prior which does not already restrict X~10^-60, we will always find that the value X~10^-60 is vanishingly unlikely compared to X~10^120. That is the naturalness problem for X.

    Your argument is that if the prior were instead a Gaussian with a width of 10^-60, there wouldn't be a problem, which is correct, but misses the point. We knew of no physical mechanism to limit the value of X, so why should such a Gaussian prior be suitable? Simply assuming such a prior does not make the naturalness problem disappear, it just expresses it in a different way.

    In fact, explaining why such a prior is suitable is precisely the function of proposed solutions to the naturalness problem, such as inflation. In other words, inflation provides a physical mechanism to explain why the prior on the curvature *should* be a Gaussian with width ~10^-60. Modulo other concerns about inflationary models, this provides an excellent resolution to the flatness problem. Ditto with the cosmological constant – if we knew a good argument for why the prior on Lambda should be tightly peaked at small values rather than ranging broadly over 120 orders of magnitude, we'd say we had solved the cosmological constant problem.
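
    For what it's worth, the scaling behind that last statement is easy to check numerically. Here is a minimal sketch (the e-fold numbers are illustrative choices of mine, not fits to data) of how fast |Omega_k| ~ 1/(aH)^2 shrinks during near-exponential expansion, where H is roughly constant and the scale factor grows like e^N:

      import math

      def curvature_suppression(n_efolds):
          # Factor by which |Omega_k| ~ 1/(a*H)^2 shrinks after n_efolds
          # of near-exponential expansion (H ~ const, a growing like e^N).
          return math.exp(-2.0 * n_efolds)

      for n in (30, 60, 70):
          print(f"N = {n} e-folds: |Omega_k| suppressed by roughly 10^{math.log10(curvature_suppression(n)):.0f}")

    Sixty to seventy e-folds suppress an initially O(1) curvature down to the 10^-52 to 10^-61 range, which is the sense in which inflation supplies the narrow prior.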

    ReplyDelete
  77. @Blogger George Rush "Consider the vital role Mordehai Milgrom played for decades, advocating MOND against all those Dark Matter-ists." Derivation cannot discover a weak postulate. The empirically right answer must be an obviously - but testably - theoretically wrong answer.

    Spiral galaxy excess angular momentum, Tully-Fisher relation via Milgrom acceleration, is uniform within non-communicating volumes across all redshifts. Inflation created it. Noether's theorems couple angular momentum conservation with spatial isotropy. Physics bleeds chiral anisotropies, baryogenesis through Chern-Simons correction of Einstein-Hilbert action. Inflation’s false vacuum scalar decay had a pseudoscalar component. Achiral spacetime curvature (Einstein) has part-per-billion chiral spacetime torsion (Einstein-Cartan). Angular momentum conservation leaks ppb Milgrom acceleration.

    Consider atoms tightly packed in maximally enantiomorphic self-similar configurations, a pair of maximally divergent shoes embedded within a ppb spacetime left-foot. Opposite shoes differentially embed within a ppb spacetime left foot, obtaining ppb non-identical minimum action trajectories. Two solid spheres of single crystal quartz, enantiomorphic space groups P3(1)21 (right shoe anomaly) versus P3(2)21 (left-shoe commercial product), ppb violate the Equivalence Principle in a geometric Eötvös experiment. Look.

    ReplyDelete
  78. "Please note that I never spoke of any value being exact this or exact that (which would require infinitely many digits)."

    OK; I don't think that this was completely clear, at least to me.

    "Your point seems to be that some particular combination of constants is more important because it was given special significance before being measured, even though you agree with me that actually it's not special in an by itself."

    There are two questions: Is the observed range (value including uncertainty/error) improbable in some sense? I think the answer to this is "no", which is what my paper is about. Is it significant that the observed range contains some "special" value? Yes, it is.

    By your argument, nothing needs any explanation. Find a working electronic calculator in 200-million-year-old stone, next to some dinosaur fossils? No problem; this combination of atoms is just as probable as any other. Some elementary particle is pi times as massive as some other particles, to several digits of precision (as many as we can measure; perhaps it is exactly pi times as heavy)---would you dismiss that as well, saying that "all numbers are equally probable"?

    My explanation (and that of those whose papers I cite) is essentially that because this is a special value, it is physically more likely. Or, perhaps more clearly, the fact that this value is special and the fact that it is more likely than it seems via a naive estimate have a common origin.

    "Sorry but I can't make sense of this. My best interpretation is that this is a sociological argument of the sort that you should trust a prediction more if it was actually made before the measurement."

    I'm not sure what your point is here. Certainly the flat universe was deemed to be special before it was measured to be as flat as it is today, though even a long time ago it was clear that Omega and lambda are not 1,000,000,000 or whatever.

    Scientifically, it is often an accident whether a value was observed before or after the prediction (or retrodiction). Logically it shouldn't matter, though psychologically the prediction is more important---otherwise one has to be really sure that the known answer didn't somehow influence the theory.

    Let's assume that it doesn't matter whether prediction or observation came first. (If this assumption invalidates your argument, let me know.)

    You are essentially saying that essentially everyone who has thought about this problem, including some people much smarter than I am, incorrectly assumes that the flatness problem is a fine-tuning problem because they don't realize that any number is just as probable as any other. That is a pretty strong accusation.

    To be fair, I also claim that most people have misunderstood this, but I literally thought about it for 20 years before writing my paper. So, even if Bob Dicke was smarter than I am, I think that 20 years of my thoughts might be better than some essentially off-the-cuff remarks that he made. :-)

    I am sure that many readers here, especially professional cosmologists, disagree with you on this. Maybe they could chime in. They might not agree with me yet, but there is hope. :-)

    ReplyDelete
  79. Those interested in fine-tuning might want to check out the book by Geraint Lewis and Luke Barnes on this topic. Apart from one aspect, I think it makes a clear case for the existence of fine-tuning and for it needing an explanation.

    ReplyDelete
  80. Bee, again there is no probability distribution. The probability distribution always involves an effective field theory argument concerning a degree of expectation of what such and such a parameter ought to be. The real problem is when you have to write down the UV completion concretely. At that point there are no more tentative guesses based on renormalization group flow arguments. You have a high-energy theory that has to explain, say, GUT-scale Bohr atoms, and we now have tiny Higgs mass parameters circulating in the theory. The issue is that those putative Planckian atoms must be written down in a way such that they take part in the dynamics and contribute relevant corrections. The point is that it's ridiculously hard to write such a theory down - so difficult, in fact, that in the 40 years since naturalness has been understood, there are basically only three or four theories ever written down that succeed.

    The first guess is that you need your high energy theory to involve exponential terms to push up the low scale. But well, try it and you will quickly see the problem...

    ReplyDelete
  81. Sesh, Haelfix,

    Well, since one of you claims naturalness arguments have no probability distribution and the other claims they do, yet I am the one missing the point, I think you two could have an interesting discussion.

    Sesh is right in that you could reformulate my argument in terms of the prior probability if you want to do Bayesian reasoning. He is wrong to think that there is any prior that is "natural" in the sense that it doesn't already imply a human-made choice, which is exactly what naturalness arguments should avoid. There is no theory for this distribution and hence no rationale for choosing any prior over any other. Maybe you then want to speak about priors for priors for priors (indeed my thoughts went this way), but if you try it you'll see that it doesn't remove the problem. (Seriously, try it! Maybe you get farther than I did.)

    One can of course come up with such a theory! As I explicitly said, I am not saying no one should look for explanations. Just that I wish, when in search of such explanations, people were more careful to distinguish between mathematical rigor and hypotheses (about probability distributions, or priors thereof, or the absence of tuning, or naturalness).

    ReplyDelete
  82. The cosmological constant doesn't fluctuate because spatial or field energy isn't the same thing as vacuum fluctuations. See Svend Rugh and Henrik Zinkernagel’s 2002 paper on the quantum vacuum and the cosmological constant problem. They point out that photons do not scatter on the vacuum fluctuations in QED. If they did, "astronomy based on the observation of electromagnetic light from distant astrophysical objects would be impossible". Hence when they say the QED vacuum energy concept "might be an artefact of the formalism with no physical existence independent of material systems", IMHO they’re right.

    ReplyDelete
  83. Insofar as the arguments are really only numerology, you are surely right to dismiss them.
    And indeed probability distributions need some theoretical basis.
    Any real number is as good as any other - except 0 and 1. But small is not 0, and 1 only makes sense if you have some unit or reference (the Planck scale?).
    On the other hand, I regard the flatness as a structure - even if it is not really exact, it holds for all references at hand. It is not a problem, but I think it is physics to try to understand and explain such an obvious structure - though we sometimes see structure where there is none, and the smallness may turn out to be a chance coincidence.
    So even if there are no real problems, there might perhaps be indications of structure to be explained.

    ReplyDelete
  84. Maybe it was just in 'Gravitation', but don't physicists just set the important constants like c and G to one and have done with it? I never understood the naturalness problem given that so much is contingent. Once a die has been rolled, we're stuck with its outcome. Arguing for naturalness seems to be arguing that there was only one possible outcome. Granted, this is sometimes true, but it should be something one looks for, not the working assumption.

    ReplyDelete
  85. Phillip,

    "By your argument, nothing needs any explanation. Find a working electronic calculator in 200-million-year-old stone, next to some dinosaur fossils? No problem; this combination of atoms is just as probable as any other."

    We have a lot of data on atom combinations, hence reliable statistics for what is and what isn't a plausible combination. Moreover, we have a dynamical law to explain this - not just an initial condition, which is a very different story. Having a theory that connects different moments in time explains many data points by one law. Pushing back one initial condition to an earlier time merely replaces one initial condition with another - plus the law you need to get from one to the other.

    If you think my argument is "nothing needs explanation" you entirely misunderstood the point.

    "You are essentially saying that essentially everyone who has thought about this problem, including some people much smarter than I am, incorrectly assume that the flatness problem is a fine-tuning problem because they don't realize that any number is just as probable as any other. That is a pretty strong accusation."

    I actually think that most of them never thought about it. It has simply become a standard argument that theories are "good" if they contain only parameters of order one, and "bad" if they contain numbers very much larger or smaller than one and this is not ever discussed or questioned in any paper (except for some few) - just go and look at the arxiv. Or, if you want to have fun, pick yourself any high energy physicist or cosmologist and ask them to explain the rationale behind their arguments.

    ReplyDelete
  86. Since one theory of electrodynamics describes dynamical phenomena ranging from say, 3 Hz to 300 * 10^18 Hz (using this Wiki table: https://en.wikipedia.org/wiki/Electromagnetic_spectrum), I'd say that it is plausible that other theories might have an even wider dynamical range.

    ReplyDelete
  87. Sesh wrote: Our prior on X should be something agnostic over the entire available range up to 10^120. One can argue whether the best such prior is uniform, log-uniform, lognormal or whatever, but it doesn't matter much: given the enormous disparity in the scales, for any prior which does not already restrict X~10^-60, we will always find that the value X~10^-60 is vanishingly unlikely compared to X~10^120.

    We can easily write down a distribution for which that's not true. For example, we could just posit a uniform distribution of exponents from -60 to +120, in which case +120 would be no more likely than -60.
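
    To put rough numbers on that (with two toy priors of my own choosing, not anything derived from a theory):

      from math import log10

      EXP_LO, EXP_HI = -60.0, 120.0

      def p_log_uniform(x_low, x_high):
          # P(x_low < X < x_high) if log10(X) is uniform on [EXP_LO, EXP_HI].
          return (log10(x_high) - log10(x_low)) / (EXP_HI - EXP_LO)

      def p_uniform(x_low, x_high, x_max=1e120):
          # P(x_low < X < x_high) if X itself is uniform on [0, x_max].
          return (x_high - x_low) / x_max

      # Probability that X lies within a factor of 10 of 1e-60:
      print(p_log_uniform(1e-61, 1e-59))  # ~0.011 -- not vanishingly small
      print(p_uniform(1e-61, 1e-59))      # ~1e-179 -- vanishingly small

    So whether X~10^-60 is "vanishingly unlikely" depends entirely on whether the prior is taken to be uniform in X or uniform in its exponent.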

    Sesh wrote: Consider a parameter X which could in principle take any value up to say, 10^120 or some similarly large value, and imagine that we know of no physical symmetry or mechanism to restrict its value to some smaller range.

    Is that really the situation? In other words, do we really have an upper bound (“in principle”) on the magnitude of X? If so, do we also have a lower bound? For example, can we say the exponent must be between -60 and +120? Or should the range be -120 to +120, or perhaps -1000 to +1000?

    Sesh wrote: for any prior which does not already restrict X~10^-60…

    Hmmm… Doesn’t a prior “already restrict”, or at least determine the most likely outcomes, by definition? We can have a prior that favors exponent -60 or a prior that favors exponent +120. But which of these priors is correct? Do we want to consider a population of priors, each with their own probability, and so on? But, as others have commented, even this wouldn’t help unless we know the prior distribution of the prior distributions.

    ReplyDelete
  88. Sabine, somewhere in the post you wrote

    "I just said its value isn’t the problem. What actually needs an explanation is why it doesn’t fluctuate."

    Isn't the second problem just another "naturalness" problem (either the amplitude of the fluctuations being very small or its intrinsic scale being much larger than the observable universe)?

    ReplyDelete
  89. The cosmological constant doesn't fluctuate because spatial or field energy isn't the same thing as vacuum fluctuations. [...] "an artefact of the formalism with no physical existence".

    Psiontology's version of this, yes.

    ReplyDelete
  90. Suppose somebody constructed a very, very accurate mini robot arm to throw a coin in the air, with the air held at constant conditions. This device could then be capable of flipping the coin to the same side over and over again. But suppose we couldn't see the device, only the flying coin landing. We would give both possible results a chance of 50%. This constant identical outcome would be regarded as 'highly problematic'. So let's safely and humbly assume that there is more than meets the eye, and trust in learning more in the future to grasp the causality of the many phenomena currently misunderstood or poorly understood.

    Best , Koenraad.

    ReplyDelete
  91. "I actually think that most of them never thought about it. It has simply become a standard argument that theories are "good" if they contain only parameters of order one, and "bad" if they contain numbers very much larger or smaller than one and this is not ever discussed or questioned in any paper (except for some few) - just go and look at the arxiv. Or, if you want to have fun, pick yourself any high energy physicist or cosmologist and ask them to explain the rationale behind their arguments"

    I completely agree with this. Most people think that there is a flatness problem because they read that there was. This essentially goes back to Dicke and Peebles (the ultimate source of most people's impressions on this topic), although Dicke himself actually claimed the opposite a few years previously.

    A related question is whether small numbers or numbers of order 1 need to be explained, where "number" is the ratio of the smaller to the larger one. Order 1 means that they are almost equal, and this requires some sort of explanation. If they are completely unrelated, then chances are that the ratio will be small.

    Nevertheless, the claim that a nearly flat universe is somehow improbable and should surprise us is, as I point out in my paper, wrong.

    ReplyDelete
  92. MvdM,

    I'm not sure why you would say that - can you explain? You can get rid of the average value pretty easily by just adding a cosmological constant term to GR. I have no idea how you'd get rid of the fluctuations.

    ReplyDelete
  93. Bee, Amos,

    Er, yes, sorry I got a bit carried away. The strong statement I was attempting to make, that all plausible choices of prior would render the very small values of X unlikely, is not true. There are in some cases specific restrictions that can be placed on the prior by other considerations which would disfavour smaller values, but this is not generally the case.

    The weak version of the statement - that a theory providing a mechanism which explains why the prior should explicitly favour very small values of X is itself favoured - is, I think, still true.

    ReplyDelete
  94. I agree with Sesh that this is best stated as a problem of priors, and I think that naturalness could probably be formulated exactly in terms of information gain (Kullback-Leibler divergence). Or perhaps in terms of its derivative. What you are basically saying is "let's be Bayesian about this"?

    A problem is there are very few cases where the prior is well-defined. One example where it is is the case of a Goldstone boson, where the variable is angular and the prior is uniform on the circle. This is actually the case for the strong CP problem and the Peccei-Quinn mechanism. I think it is also true of QCD + theta term as well (since the theta term defines theta vacua that are also angular variables). The issue for PQ is that you now have to explain the value of the decay constant instead of the theta angle. But the free lunch is that you get dark matter out of it.

    If a single measurement gives a huge amount of information gain, we are in general "surprised", and if we think we have fundamental theories, we expect no surprises. I think "surprise" actually has a technical definition, too (but it may not be exactly what I was thinking of).
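
    As a rough illustration of the Goldstone/theta-angle case above, here is a toy sketch of my own: take the prior to be uniform on the circle and the posterior after a measurement to be a narrow Gaussian of width sigma, and compute the information gain.

      import math

      def info_gain_uniform_circle(sigma):
          # KL divergence D(posterior || prior), in nats, for an angular variable
          # with uniform prior 1/(2*pi) on the circle and a Gaussian posterior
          # of width sigma (valid for sigma << 2*pi).
          return math.log(2.0 * math.pi) - 0.5 * math.log(2.0 * math.pi * math.e * sigma**2)

      for sigma in (0.1, 1e-3, 1e-10, 1e-60):
          print(f"posterior width {sigma:7.0e}: information gain ~ {info_gain_uniform_circle(sigma):6.1f} nats")

    With a uniform prior the gain depends only on how sharply the measurement narrows the angle (it grows like log(1/sigma)), not on which value is found, which is one reason this is a case where the prior is actually well defined.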

    Sorry, this comment is not well logically organised.

    ReplyDelete
  95. Hey Bee, have you by any chance read that paper? → https://journals.aps.org/prd/pdf/10.1103/PhysRevD.90.015010

    Here, naturalness is formalized using Bayesian statistics. A "natural" model is then a model where the Bayesian evidence (likelihood function times the probability distribution for parameter x, integrated over x) as a function of the data (e.g. the Z boson mass) is narrowly concentrated around the data actually measured, versus an "unnatural" model which has low evidence for any particular value (see fig. 1 in the paper). A bad model would then be even worse: the maximum of the evidence function is way off the actually measured value.

    The problem for me is exactly how you stated it: how can we assign a certain probability distribution (prior) to a parameter x? E.g. in the paper the authors restrict the Higgs couplings to a certain range, which seems quite arbitrary to me without further clarification. In my opinion the probability distribution has to be an essential ingredient of the theory itself - meaning that the theory effectively breaks down if x is too 'unlikely' - and not an ad hoc assumption. Only then does it make sense to flag a theory as 'natural'.
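
    For concreteness, here is a toy version of that evidence integral (a sketch of my own, not the model from the paper): a parameter x that directly predicts the observable, a Gaussian likelihood, and a flat prior whose range we vary.

      import math

      def normal_cdf(z):
          return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

      def toy_evidence(d, sigma, x_max):
          # Bayesian evidence Z = integral of L(d|x) p(x) dx for a toy model where
          # the parameter x directly predicts the observable, the measurement d has
          # Gaussian error sigma, and the prior p(x) is uniform on [0, x_max].
          likelihood_integral = normal_cdf((x_max - d) / sigma) - normal_cdf(-d / sigma)
          return likelihood_integral / x_max  # the 1/x_max is the flat prior density

      d, sigma = 91.2, 0.1  # a made-up 'measurement' and its error, arbitrary units
      for x_max in (1e2, 1e4, 1e8, 1e16):
          print(f"prior range [0, {x_max:.0e}]: evidence = {toy_evidence(d, sigma, x_max):.2e}")

    The wider (less predictive) the prior, the smaller the evidence for the same data, which is the behaviour such measures are built on - and also exactly where the choice of prior range enters.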

    PS: one more thing that bugs me in the paper - they equate Bayesian evidence with Occam's razor, which is something totally different. It's more like a Popper thing: the more precise the predictions, the better the theory.

    ReplyDelete
  96. Nicholas,

    Thanks for pointing it out, I will have a look at this. I might have seen it flashing by but clearly didn't pay much attention at the time.

    Bayesian evidence is something like Occam's razor in the following sense. Saying that a theory should be as simple as possible sounds simple but it isn't. That's because "as simple as possible" hides the complication that it should still describe data well. But you can almost always find a more complicated theory that describes data better. This brings up the question where is the sweet spot?

    There are various statistical measures for that, and Bayesian reasoning is one of them.

    Personally I think one can rightfully question whether that's always a useful assessment. The problem is that humans aren't computers. A theory might actually be less simple in terms of some statistical measure, but if it's one that's easier to use because of some quirk of the human brain that would give it a huge bonus. For that reason I am somewhat skeptical that we're on the right path with this kind of reasoning. Otoh, if you believe the singularists it might well be that theorists will become entirely redundant in the next couple of decades, so who knows ;)

    Another big problem with a lot of statistical arguments is that they tend to entirely ignore certain "complicated" assumptions. Take, eg, some inflationary potential that is a family of functions parameterized by two constants. Now you can say, well, there's two parameters. But where did the function come from to begin with?

    Naturalness is a criterion that can't be addressed with Bayesian reasoning because it's *not* about simplicity. Simplicity is a criterion about the number of assumptions, while naturalness is about the property of those assumptions themselves.

    Best,

    B.

    ReplyDelete
  97. Hello Sabine,

    If I may, I would like to add a few parallel insights from my field of expertise: product design. This is also, in a very fundamental way, about a problem-solving process, and about improving existing concepts, with a specific know-how attached to it.
    Creativity here is not 'action painting', but efficient creativity, obtained through well-tested methods.

    In theoretical physics, there are arguments of naturalness, simplicity, conformity to experimental evidence, logic, mathematical consistency, philosophical considerations (important for some), mathematical elegance, etc. etc. - I forget many others.

    In product design, you have comparable parallel conditions to comply with, in order to obtain the best result, the best improvement.

    What decades of 'experimental results' in product design have taught us is that all of these factors are important to obtain the most durable result. The main point learned is that all of these assessment factors are equally important, and the more factors you respect, the more lasting and durable your product solution will be. This is called Integrated Product Design (integrating all assessment factors instead of ranking them and, consequently, dismissing some or many).

    So I believe that the same thing applies to physics research:
    Instead of fighting over which criterion is the best one, it would be most desirable to respect as many of them as possible. Obviously this does not make the task at hand easier, but the resulting theory will be more durable and will stand the test of time longer.

    The problem here is that researchers tend to assign more importance to the criterion which best suits their personal talents, or their specific working environment or specialisation, or..

    The point here is to realise that the approach of generalists is as important as that of specialists, for example. In fact they should work together in a complementary way.

    Remember Faraday-Maxwell and Einstein-Grossmann as a parallel in physics history: this complementarity of talents created two of the most important and durable theories in physics history.

    Cross-over, hybrid design, genetic diversity in angle of approach, and diversity in criteria: these are important, and have been proven in product design for decades.

    Physics education - in my opinion - could benefit from these insights, if implemented to some extent.


    Best, Koenraad

    ReplyDelete
  98. One of the reasons I follow your blog and think you’re a brilliant physicist is rationale like this, “Why is 10^-60 any worse than, say, 1.778, or exp(67π)?”

    While Einstein showed us things about the Universe that I don’t think will ever change - that there is only space-time, that displacement is relative, the constancy of light speed - to a layman like myself there often appear to be many reasons to suspect that the curvature of space by gravity is a brilliant piece of math, and problematic. Most of what it reveals must stay, yet it is incompatible with other important physics. While I again applaud your rationale here, I do wonder sometimes if some of your attachment to GR is for reasons of human behavior that you are unaware of. There are times I’ve seen posts on your blog where one could point to an argument for space-time curvature being the culprit standing in the way of unification with other physics.

    ReplyDelete
  99. I think a lot of arguments about naturalness come from assumptions about statistical independence and a "reasonable" (read: on the same order as the value, at least in some sense) spread of parameters.

    Take the flatness problem, for instance. Suppose we start with the assumption that the density of the universe is independent of the rate of expansion (and thus of the critical density). Under the assumption of statistical independence and some "reasonable" (sufficiently broad) spread of parameters, the difference between those two quantities will typically not be much, much less than the latter; therefore, the flatness problem is an actual problem.

    Similarly with the cosmological constant problem: treat the actual (bare) cosmological constant and the QFT effects as statistically independent, and they should almost never "almost exactly cancel" - it's just not something that often happens assuming statistical independence.

    With this logic, not all small/large unitless parameters are unnatural; only those which emerge from a cancellation of two pieces of seemingly independent physics. (Of course, coincidences happen, but I think that's a cop-out answer for these cases, and a better explanation is desirable.)
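
    Here is a minimal Monte Carlo sketch of that independence intuition (the uniform O(1) inputs are an arbitrary choice of mine; a different assumed spread gives a different answer, which is of course the point under debate):

      import random

      def cancellation_fraction(threshold, n_samples=1_000_000):
          # Fraction of draws in which two independent O(1) numbers cancel to
          # better than 'threshold' relative to their size.
          hits = 0
          for _ in range(n_samples):
              a = random.uniform(-1.0, 1.0)
              b = random.uniform(-1.0, 1.0)
              if abs(a + b) < threshold * max(abs(a), abs(b)):
                  hits += 1
          return hits / n_samples

      for k in (1, 2, 3):
          print(f"cancellation to 1 part in 10^{k}: fraction ~ {cancellation_fraction(10.0**-k):.1e}")

    With these assumptions the fraction falls off roughly like the threshold itself, so a cancellation to sixty digits would essentially never happen by chance - but change the assumed distribution and the conclusion changes with it.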

    ReplyDelete
  100. APDunbrack,

    Yes, that's right, in many cases the "coincidence" is an almost-cancellation. The problem with your argument is the "reasonable" spread of parameters. Clearly there are spreads by which a small difference is very likely. As I said above, you get out what you put in, hence such an argument doesn't explain anything. And it's not reasonable either.

    ReplyDelete
  101. Numerology is not always that bad, imho.
    One can play for example with electron mass and the fine structure constant:
    if you put beta = m_e / (2*alpha), you have muon = 3*beta, pion = 4*beta, kaon = 14*beta, and so on.
    Then you can take gamma = beta / (2*alpha).
    You have approximately, as multiples of gamma:
    W = 34, Z = 38, Top = 72, Higgs = 53, hypothesized Madala boson = 115.
    And the hypothesized electrophobic scalar boson is roughly beta/2.
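
    For anyone who wants to check the arithmetic, here is a quick sketch (using rough PDG-style mass values in MeV; checking the arithmetic is of course not an endorsement of the pattern):

      m_e, alpha = 0.5110, 1.0 / 137.036
      beta = m_e / (2.0 * alpha)    # ~35 MeV
      gamma = beta / (2.0 * alpha)  # ~2.4 GeV

      claims = {
          # name: (claimed multiple, base unit, approximate measured mass in MeV)
          "muon":  (3,  beta,  105.66),
          "pi+":   (4,  beta,  139.57),
          "K+":    (14, beta,  493.68),
          "W":     (34, gamma, 80379.0),
          "Z":     (38, gamma, 91188.0),
          "top":   (72, gamma, 172760.0),
          "Higgs": (53, gamma, 125100.0),
      }
      for name, (n, unit, measured) in claims.items():
          predicted = n * unit
          print(f"{name:5s}: predicted {predicted:9.1f} MeV, measured {measured:9.1f} MeV "
                f"({100.0 * (predicted - measured) / measured:+.1f}%)")

    The multiples reproduce the measured masses to within about a percent or two.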

    ReplyDelete
  102. "Saying that a theory should be as simple as possible sounds simple but it isn't."

    Einstein said that everything should be as simple as possible---but not simpler. :-)

    ReplyDelete
    Replies
      1. The road to simplicity is paved with complexity ;)

      Delete
  103. "... what draws my ire is that most of the explanations my colleagues put forward aren’t any better than just fixing a parameter through measurement – they are worse. The reason is the problem they are trying to solve – the smallness of some numbers – isn’t a problem. It’s merely a property they perceive as inelegant." Do the empirical successes of Milgrom's MOND suggest that Einstein's field equations need to be replaced by somewhat less elegant equations? Google "einstein field equations 3 criticisms" and "kroupa milgrom".

    ReplyDelete
  104. @Thomas Larsson >> "That the standard model seems to be infinitely finetuned seems to me one of the major results from the LHC."

    ... from the LHC, and other modern physics, especially cosmology. The problem is that finetuning tends to entail a finetuner. Naturalness, anthropic principle, multiverses, and so on, try to account for it, so far with little success.

    @Uncle Al, thanks for commenting on my comment! I might agree with you, if I understood you. Unfortunately I don't.

    @Phillip Helbig, thanks for this interesting paper "Is there a flatness problem in classical cosmology?", which tends to lessen the need for inflation. I wonder what pro-inflationists think of it.

    ReplyDelete
  105. I guess we need to change the name of your discipline. Theoretical Physics should be called Theoretical Theology/Astrology.

    ReplyDelete
  106. @Jeff Knisley. Not sure why you think that pure math does not provide some very large constants; e.g. the order of the largest sporadic simple group comes to mind (~8e53).

    ReplyDelete
  107. George Rush - "... from the LHC, and other modern physics, especially cosmology. The problem is that finetuning tends to entail a finetuner. Naturalness, anthropic principle, multiverses, and so on, try to account for it, so far with little success."

    There are two separate issues here. That the Higgs mass, and thus the SM in general, seems to be infinitely finetuned is IMO a remarkable observation in itself. Then it is obvious that this fact begs a deeper explanation. The AP is perhaps one possibility, but from my viewpoint not the most likely one.

    My worldview was in many ways shaped many years ago, when I studied phase transitions and critical phenomena, and there the situation is somewhat analogous. It was discovered in the 1960s that critical exponents have to satisfy certain inequalities for consistency reasons. But then it was also found that these inequalities are in fact equalities, i.e. critical exponents are infinitely finetuned, and the underlying reason is scale symmetry and the renormalization group.

    So by analogy, I expect that there is some symmetry principle forcing the Higgs mass to be infinitely finetuned. And since I spent most of my non-career studying algebras of gauge transformations and diffeomorphisms, it is not difficult to guess what I believe this symmetry to be.

    ReplyDelete
  108. @ Thomas Larsson >> I expect that there is some symmetry principle forcing the Higgs mass to be infinitely finetuned.

    The symmetry principle is "simplicity", but the road to it - the underlying mechanism, the finetuning - is complex. This is very common in modern physics. Most famous example is special relativity. Lorentz and Poincare showed how length contraction "magically" combined with time dilation to produce principle of relativity and constant speed of light. Einstein said, let's assume those simple principles must be true. (Common physics term for this: we "demand" them.) Then the finetuning of length and time adjustments due to velocity is inevitable. Much cleaner way to present it, completely bypassing the messy question how length contraction actually happens. Many other examples. Maxwell "demanded" his equations, thus deduced the displacement current. Dirac's equation, derived from fundamental principles, "forced" anti-particles. When we "demand" invariance of the action under local gauge transformations, EM field arises naturally. Neutrinos, spin, quark model, charmed and strange, were all "forced" by "demanding" that some simple principle must be upheld.

    No doubt I'm stretching a bit here, but it seems most progress (last 150 years) has come from "demanding" some simple model, thus revealing unexpected complexity. The precise word for this approach is "teleological". Traditional physics started with the complex underlying dynamics and figured out the emergent phenomenon. Teleological reasoning "demands" or assumes the simple result, then figures out what's needed to get there. This is precisely the difference between Lorentzian (/Poincare/Fitzgerald/etc) relativity, and Einsteinian.

    Why does teleological reasoning work so well in modern science? It's almost as if somebody wanted the simple principle to work, and built a complex machine to accomplish it. Don't blame me! My role is to point out facts. It's up to someone else to put the right politically-correct spin on it. I'm no good at that sort of thing.

    ReplyDelete
  109. Reading what APDunbrack wrote makes me think the argument for naturalness may be rephrased as follows:

    We have an "almost cancellation" that could be a coincidence, except that, since we don't know anything about the initial probability distribution (we have no theory), the concept of coincidence is meaningless.

    On the other hand, instead of being a coincidence, the two seemingly unrelated quantities could be related, but we have no clue how. In the example being discussed, the amount of matter and energy could be related to the amount of space that contains it, and a theory that would explain how would also explain the critical density, and why empty space is impossible.

    Without more alternatives, contemplating these two explanations, one realizes that one has never seen in one's career a probability distribution, in any theory, similar to what would be needed for the "almost cancellation". So, which explanation is the simplest: the extraordinary probability distribution, or a connection between matter/energy and space that seems so unlikely?

    Perhaps people do not believe in an extraordinary probability distribution, nor in a connection between matter/energy and space, and for them the whole affair is a "problem".

    ReplyDelete
  110. Via a comment on Woit's blog:
    http://syymmetries.blogspot.in/2017/06/naturalness-pragmatists-guide.html

    Naturalness: A Pragmatist's Guide

    Quote:

    The descriptor "natural" is commonly used in two distinct senses in the literature.

    1. A theory may be called natural if the required dimensionless input parameters are of O(1). (If there are input parameters with mass dimension, then this criterion requires all those mass scales to be similar).
    2. A theory may be called natural if the required input parameters do not need to be very precisely specified.

    These senses are quite different; a theory can satisfy the first condition without satisfying the second, and vice versa. It is the second sense, and naturalness of the Higgs mass in particular, which the remainder of this post is dedicated to, and henceforth what we take as the qualitative definition of what it means to be natural.

    ReplyDelete
  111. ...or just skip to best explanation: simplest to explain everything

    ReplyDelete
  112. I'm somewhat confused by this: are they saying that 10^-60 is so close to zero that it ought to be zero, *but* it's not?

    ReplyDelete
  113. "I'm somewhat confused by this, are they saying that 10^-60 is so close to zero, it ought to be zero *but* it's not?"

    That's the basic idea. Not everyone agrees, though.

    ReplyDelete
  114. When the UK introduced its National Lottery, I got caught up in a discussion in the local pub about which numbers would be good to choose. I proposed 1111111 as a) it was as likely as any other seven-digit number to come up (statistically), and b) if it did come up, you would be least likely to find yourself sharing the pot with anyone else (sociologically). My suggestion caused such genuine anger on the part of almost everyone else that I seriously thought a fist fight was going to break out. I guess the sociology won.

    ReplyDelete

COMMENTS ON THIS BLOG ARE PERMANENTLY CLOSED. You can join the discussion on Patreon.
