Tuesday, October 17, 2017

I totally mean it: Inflation never solved the flatness problem.

I’ve had many interesting reactions to my recent post about inflation, this idea that the early universe expanded exponentially and thereby flattened and smoothed itself. Maybe the most interesting response to my pointing out that inflation doesn’t solve the problems it was invented to solve is a flabbergasted: “But everyone else says it does.”

Not like I don’t know that. But, yes, most people who work on inflation don’t even get the basics right.

Inflation flattens the universe like photoshop flattens wrinkles. Impressive! [Img Src]


I’m not sure why that is so. Those I speak with personally pretty quickly agree that what I say is correct. The math isn’t all that difficult and the situation is pretty clear. The puzzle is, why then do so many of them tell a story that is nonsense? And why do they keep teaching it to students, print it in textbooks, and repeat it in popular science books?

I am fascinated by this for the same reason I’m fascinated by the widely-spread and yet utterly wrong idea that the Bullet-cluster rules out modified gravity. As I explained in an earlier blogpost, it doesn’t. Never did. The Bullet-cluster can be explained just fine with modified gravity. It’s difficult to explain with particle dark matter. But, eh, just the other day I met a postdoc who told me the Bullet-cluster rules out modified gravity. Did he ever look at the literature? No.

One reason these stories survive – despite my best efforts to the contrary – is certainly that they are simple and sound superficially plausible. But it doesn’t take much to tear them down. And that it’s so simple to pull away the carpet under what motivates research of thousands of people makes me very distrustful of my colleagues.

Let us return to the claim that inflation solves the flatness problem. Concretely, the problem is that in cosmology there’s a dynamical variable (i.e., one that depends on time) called the curvature density parameter. It’s by construction dimensionless (doesn’t have units) and its value today is smaller than 0.1 or so. The exact digits don’t matter all that much.

What’s important is that this variable increases in value over time, meaning it must have been smaller in the past. Indeed, if you roll it back to the Planck epoch or so, it must have been something like 10^-60, give or take some orders of magnitude. That’s what they call the flatness problem.
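
For the record, here is the scaling behind these numbers, in standard FLRW notation (a rough sketch for those who want it):

    \Omega_k \equiv \frac{-k}{a^2 H^2}\,, \qquad |\Omega_k| \propto a^2 \ \text{(radiation domination)}\,, \qquad |\Omega_k| \propto a \ \text{(matter domination)}\,.

Run today’s bound of roughly 0.1 backwards through matter and radiation domination to the Planck epoch, and the accumulated expansion factor brings it down to something of order 10^-60.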

Now you may wonder, what’s problematic about this. How is it surprising that the value of something which increases in time was smaller in the past? It’s an initial value that’s constrained by observation and that’s really all there is to say about it.

It’s here where things get interesting: The reason that cosmologists believe it’s a problem is that they think a likely value for the curvature density at early times should have been close to 1. Not exactly one, but not much smaller and not much larger. Why? I have no idea.

Each time I explain this obsession with numbers close to 1 to someone who is not a physicist, they stare at me like I just showed off my tin foil hat. But, yeah, that’s what they preach down here. Numbers close to 1 are good. Small or large numbers are bad. Therefore, cosmologists and high-energy physicists believe that numbers close to 1 are more likely initial conditions. It’s like a bizarre cult that you’re not allowed to question.

But if you take away one thing from this blogpost it’s that whenever someone talks about likelihood or probability you should ask “What’s the probability distribution and where does it come from?”

The probability distribution is what you need to define just how likely each possible outcome is. For a fair die, for example, it’s 1/6 for each outcome. For a not-so-fair die it could be any combination of numbers, so long as the probabilities all add to 1. There are infinitely many probability distributions and without defining one it is not clear what “likely” means.
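
To make this concrete, here is a tiny numerical sketch (a toy example, nothing more): the same question, “how likely is a tiny initial value?”, gets opposite answers from two equally simple priors.

    # Toy example: how "likely" a tiny initial value is depends entirely on
    # the assumed prior. Neither prior below is derived from anything.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    uniform = rng.uniform(0.0, 1.0, n)             # flat prior on the value
    log_uniform = 10.0 ** rng.uniform(-120, 0, n)  # flat prior on the exponent

    for name, sample in [("uniform", uniform), ("log-uniform", log_uniform)]:
        print(f"P(x < 1e-60 | {name} prior) = {np.mean(sample < 1e-60):.2f}")
    # Prints ~0.00 for the uniform prior ("fine-tuned!") and ~0.50 for the
    # log-uniform prior ("nothing special"): same observation, different prior.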

If you ask physicists, you will quickly notice that neither for inflation nor for theories beyond the standard model does anyone have a probability distribution for the supposedly likely values, or ever even mention one.

How does it matter?

The theories that we currently have work with differential equations and inflation is no exception. But the systems that we observe are not described by the differential equations themselves; they are described by solutions of the equations. To select the right solution, we need an initial condition (or several, depending on the type of equation). You know the drill from Newton’s law: You have an equation, but you can only tell where the arrow will fly if you also know the arrow’s starting position and velocity.
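
If you want to see this in a few lines of code (a generic textbook example, nothing specific to cosmology): the same equation, two initial conditions, two different arrows.

    # Same differential equation (projectile motion under gravity), two
    # different initial velocities: the equation alone does not tell you
    # where the arrow is after one second; the initial condition does.
    from scipy.integrate import solve_ivp

    def arrow(t, y):                 # y = [x, z, vx, vz]
        x, z, vx, vz = y
        return [vx, vz, 0.0, -9.81]

    for v0 in [(20.0, 10.0), (10.0, 20.0)]:   # two choices of initial velocity
        sol = solve_ivp(arrow, (0.0, 2.0), [0.0, 0.0, *v0], t_eval=[1.0])
        print(f"initial velocity {v0}: height after 1 s = {sol.y[1][0]:.2f} m")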

The initial conditions are either designed by the experimenter or inferred from observation. Either way, they’re not predictions. They cannot be predicted. That would be a logical absurdity. You can’t use a differential equation to predict its own initial conditions. If you want to speak about the probability of initial conditions you need another theory.

What happens if you ignore this and go with the belief that the likely initial value for the curvature density should be about 1? Well, then you do have a problem indeed, because that’s incompatible with data to a high level of significance.

Inflation then “solves” this supposed problem by taking the initial value and shrinking it by, I dunno, 100 or so orders of magnitude. This has the consequence that if you start with something of order 1 and add inflation, the result today is compatible with observation. But of course if you start with some very large value, say 10^60, then the result will still be incompatible with data. That is, you really need the assumption that the initial values are likely to be of order 1. Or, to put it differently, you are not allowed to ask why the initial value was not larger than some other number.
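
For what it’s worth, the exponential factor in the standard story comes out like this (back-of-the-envelope numbers, nothing more): during inflation H is roughly constant while the scale factor grows by e^N, so

    |\Omega_k| \propto \frac{1}{a^2 H^2} \quad\Rightarrow\quad |\Omega_k|_{\text{end}} \approx e^{-2N}\, |\Omega_k|_{\text{start}}\,.

The often-quoted N ≈ 60 e-folds gives a suppression of about 10^-52, and more e-folds give correspondingly more: plenty if you start at order 1, still nowhere near enough if you start at 10^60.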

This fine print, that there are still initial values incompatible with data, often gets lost. A typical example is what Jim Baggott writes in his book “Origins” about inflation:
“when inflation was done, flat spacetime was the only result.”
Well, that’s wrong. I checked with Jim and he totally knows the math. It’s not like he doesn’t understand it. He just oversimplifies it maybe a little too much.

But it’s unfair to pick on Jim because this oversimplification is so common. Ethan Siegel, for example, is another offender. He writes:
“if the Universe had any intrinsic curvature to it, it was stretched by inflation to be indistinguishable from “flat” today.”
That’s wrong too. It is not the case for “any” intrinsic curvature that the outcome will be almost flat. It’s correct only for initial values smaller than something. He too, after some back and forth, agreed with me. Will he change his narrative? We will see.

You might say then, but doesn’t inflation at least greatly improve the situation? Isn’t it better because with it more initial values are compatible with observation? No. Because you have to pay a price for this “explanation”: You have to introduce a new field and a potential for that field and then a way to get rid of this field once it’s done its duty.
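
For concreteness, the minimal price list looks something like this (the textbook single-field setup): an action for the new field and two slow-roll conditions on its potential,

    S = \int d^4x\, \sqrt{-g}\, \Big[ \tfrac{1}{2}\, \partial_\mu \phi\, \partial^\mu \phi - V(\phi) \Big]\,, \qquad \epsilon \equiv \frac{M_{\rm Pl}^2}{2} \Big( \frac{V'}{V} \Big)^2 \ll 1\,, \qquad |\eta| \equiv M_{\rm Pl}^2\, \Big| \frac{V''}{V} \Big| \ll 1\,,

plus a reheating mechanism that converts the field’s energy into ordinary matter once inflation is over.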

I am pretty sure if you’d make a Bayesian estimate to quantify the complexity of these assumptions, then inflation would turn out to be more complicated than just picking some initial parameter. Is there really any simpler assumption than just some number?

Some people have accused me of not understanding that science is about explaining things. But I do not say we should not try to find better explanations. I say that inflation is not a better explanation for the present almost-flatness of the universe than just saying the initial value was small.

Shrinking the value of some number by pulling exponential factors out of thin air is not a particularly impressive gimmick. And if you invent exponential factors already, why not put them into the probability distribution instead?

Let me give you an example for why the distinction matters. Suppose you just hatched from an egg and don’t know anything about astrophysics. You brush off a loose feather and look at our solar system for the first time. You notice immediately that the planetary orbits almost lie in the same plane.

Now, if you assume a uniform probability distribution for the initial values of the orbits, that’s an incredibly unlikely thing to happen. You would think, well, that needs explaining. Wouldn’t you?

The inflationary approach to solving this problem would be to say the orbits started with random values but then some so-far unobserved field pulled them all into the same plane. Then the field decayed so we can’t measure it. “Problem solved!” you yell and wait for the Nobel Prize.

But the right explanation is that due to the way the solar system formed, the initial values are likely to lie in a plane to begin with! You got the initial probability distribution wrong. There’s no fancy new field.

In the case of the solar system you could learn to distinguish dynamics from initial conditions by observing more solar systems. You’d find that aligned orbits are the rule not the exception. You’d then conclude that you should look for a mechanism that explains the initial probability distribution and not a dynamical mechanism to change the uniform distribution later.

In the case of inflation, unfortunately, we can’t do such an observation since this would require measuring the initial value of the curvature density in other universes.

While I am at it, it’s interesting to note that the erroneous argument against the heliocentric solar system, that the stars would have to be “unnaturally” far away, was based on the same mistake that the just-hatched chick made. Astronomers back then implicitly assumed a probability distribution for distances between stellar objects that was just wrong. (And, yes, I know they also wrongly estimated the size of the stars.)

In the hope that you’re still with me, let me emphasize that nevertheless I think inflation is a good theory. Even though it does not solve the flatness problem (or the monopole problem or the horizon problem), it explains certain correlations in the cosmic microwave background (TE anticorrelations at certain scales, shown in the figure below).
Figure 3.9 from Daniel Baumann’s highly recommendable lecture notes.


In the case of these correlations, adding inflation greatly simplifies the initial condition that gives rise to the observation. I am not aware that someone actually has quantified this simplification but I’m sure it could be done (and it should be done). Therefore, inflation actually is the better explanation. For the curvature, however, that isn’t so because replacing one number with another number times some exponential factor doesn’t explain anything.

I hope that suffices to convince you that it’s not me who is nuts.

I have a lot of sympathy for the need to sometimes oversimplify scientific explanations to make them accessible to non-experts. I really do. But the narrative that inflation solves the flatness problem can be found even in papers and textbooks. In fact, you can find it in the above-mentioned lecture notes! It’s about time this myth vanishes from the academic literature.

87 comments:

  1. As an ex-statistician, I can only say: more power to your elbow! It's not just cosmologists who are guilty of discussing probabilities without even considering the underlying probability distribution -- happens all over the place. :-(

  2. MLA,

    Do you have examples on your mind?

  3. If spacetime is intrinsically fractal, changing magnification will neither smooth it nor roughen it.

    "You can’t use a differential equation to predict its own initial conditions" You can't trust a theory protecting postulates by excluding facile contradictory observations. Baryogenesis? Sure! Testing for a broken fundamental symmetry allowing baryogenesis but soiling elegant maths? Don't be silly.

  4. MLA's quip: the fields of finance and economics come to mind--the soft sciences, generally.

  5. Hi Sabine,

    thanks for the explanation; I also really liked the tone...

    It seems to me that the real problem is that physics theories only have differential equations. The rest of it is parameters free from constraints. Right?

    Best,
    J.

  6. I’m looking forward to reading your book when it’s done; I so enjoy the systematic, logical way you argue a point.

    Regarding when you said, “…most people who work on inflation don’t even get the basics right. I’m not sure why that is so.” I’ve read you enough to think the question is rhetorical? You’ve written enough about human behavior influencing scientific research to realize once a concept or theory has traction it becomes self-perpetuating, likely slowing scientific progress.

  7. Thank you Dr H. Always appreciate your clarity.

  8. I don't like your solar system analogy much. There is every reason to suspect that stellar systems start out as more or less spherical structures. It's the presence of angular momentum, and the dynamics of gas and dust clouds that flattens them. Similar arguments apply to galaxies.

  9. Sorry, can't quote chapter and verse, but two examples spring to mind... (1) It took a looong time for meteorological predictions to get significantly better than the baseline of simply predicting "no change". (2) I used to work in a large R&D outfit and this problem of baseline kept rearing its ugly head -- e.g. when it turned out that much lauded computational chemistry models of protein folding were (at the time!) no better than random guesses. I don't know about meteorologists, but I do know that computational chemists simply took it for granted that the probability of random (well, lightly educated :-)) guesswork being right simply *had* to be much lower than what their models were achieving. Same mistake in both cases. And I would put a lot of money on the same problem featuring in a number of supposedly respectable economical models.

  10. Yeah but no but yeah but no but yeah...

    * At first sight, the flatness problem is even more bogus than you suggest. After all, we don't directly measure the curvature parameter in the very early universe; we extrapolate it backwards using FLRW models, which all have the property that the curvature tends to zero as the time coordinate tends to zero. So ultra-low curvature isn't even a bona fide initial condition, it is actually an assumption of the model. (Of course, observations do go back to t = 380 ky, at which point the curvature was already very low).

    * What physicists really like to do is replace arbitrary parameters with integers: even if there is no way to define a probability distribution on the index, maybe even you would agree that an inverse-square law seems much less in need of further explanation than one that varies r^-2.03747. Of course the best integer of all is zero, and in fact the "standard" LCDM cosmology has zero curvature. Moreover, the current 95% probability range on present-day curvature (Omega_K) is -0.0031 to 0.0048, (Planck 2015 value, including BAO and other constraints), which is impressively closer to zero than the ~0.1 that you quote. So apparently there is even less need for an inflationary explanation.

    * I think the most illuminating re-formulation of the flatness problem is as the oldness problem, which essentially asks: how come the universe lasted so much longer than the Planck time without either re-collapsing or going into de Sitter expansion? In this case, your point about inflation failing to solve the problem translates as claiming that you could have started the universe in such a way that it collapsed/blew up on timescales orders of magnitude shorter than the Planck time. As an expert in quantum gravity, do you really think that would be credible, or even meaningful?

    * Evidently, the cosmological oldness problem is not nearly so hard as the "proton decay time" problem, since that is even more discrepant from the Planck time. So I guess particle physicists have an even worse fine tuning problem than cosmologists, if that's any comfort.

    * The horizon problem also has an illuminating reformulation, which goes like this: In a metric-dependent but still somewhat meaningful sense, the big bang happened at every point in space at the same time! How weird is that?

    * If you are thinking "not nearly as weird as if it happened at different times in different places", the thing is, it did happen at different times at different places, as you can tell by looking out of the window. Our universe is not actually homogeneous and isotropic. And that is why there actually is some content to the inflationary solution to the flatness/oldness/horizon problems, which are more closely tied to the successful prediction of the cosmic power spectrum than you allow. In other words, the problem is not "why is the universe essentially flat?" but "why does it have curvature fluctuations apparently randomly drawn from this very particular probability distribution?" As you rightly say, this is the genuine success of the inflationary model, but the low horizon-scale curvature is just the lowest-order special case of that.

  11. Misconceptions about and misapplications of probability theory. The gift that keeps on giving.

  12. I always found the flatness problem and its supposed resolution doubly confusing for the following reason:
    Apparently, if you start off with a universe which is *exactly* flat, so that the curvature parameter is exactly 0, then there is no problem because it always stays at 0.
    So, the "problem" arises only if the parameter is "small but not exactly zero" today because then "it must have been really close to 0 in the past".
    Then they bring in inflation to "explain" how the curvature was very close to 0 in the past.
    But the next thing you hear is "Inflation predicts that the universe must be completely flat today" and any evidence for that is taken as a victory for inflation.
    But hang on, couldn't one just say instead that "the universe started out completely flat" and just be done with it ?


  13. Your struggle against confusion is admirable and yet, one more reminder of the need for simplicity is due here. For why is this obsession with '1' different than other well known obsessions with numerology in general?
    https://en.wikipedia.org/wiki/The_Number_23

    And yet we do know and have a name for such obsessions when culturally and collectively promoted, and they usually come under the title of 'Ideology'. So what if this is all a 'social' trend more than anything else? Let's remind ourselves of the past when more often than not, waves of ideological obsession coincided with the inner panic of people or at the least, their fear in front of revelation of a truth counter to what they thought of constituting their most inner core of beliefs -hence, of their
    (supposedly) very existence. Just my 2 cents to the argument...

  14. Thanks for this. I look forward to more. I confess to being a fan of both modified gravity and no inflation. Looking forward to more proof one way or the other

  15. Thank you very much!!! Finally an explanation.

  16. One of those typos that a spelling-checker will never catch: "Even though it does not solving the flatness problem ..." - "solving" should be "solve".

    Off-topic but maybe a future topic: https://phys.org/news/2017-10-neutron-star-smash-up-discovery-lifetime.html

    (Looks like LIGO's gravity waves are confirmed?) (Now with gamma-rays included.) (And I was wondering, why couldn't those other, suspiciously-correlated noises be weaker gravity waves from the same region?)

  17. So experts are spouting nonsense based on what they learned by osmosis. It's even worse in the political landscape. As soon as you mention the possibility of even the most limited forms of direct democracy, you get an automated non thoughtful response of, 'DD leads to tyranny of the majority', "why?", 'it just does, everyone knows that'. :|

    I guess some ideas aren't allowed to be investigated, and we all just need to keep voting for benevolent oligarchs and dictators forever and ever and ever because.. Because why? Just because.

  18. There is nothing magical about the TE spectrum and inflation. All the cosmological observations can be explained by an early universe that is on average spatially flat, but has curvature fluctuations with a spectrum of A*k^{-1/30}. As long as you are happy to fix the mean curvature, you should be ok to pick its variance as initial condition, and you won't need inflation.

  19. The emperor has no clothes. That was the thought that occurred to me when I first heard Enron preaching its ideas. Then once again several years later when reading up on physics, which I have strayed away from since I was in college, over 30 years ago. Since I'm an electrical engineer, I was on much firmer ground when rejecting Enron's ideas, and so had no problem calling it BS. Not so much with physics, but I recognized the same pattern.

    It's the preaching that gives it away, along with the hand wave while suggesting it's your fault that you can't see the invisible clothes.

    I am so glad I ran into your blog, after several years of failing to see the invisible clothes, but absolutely convinced that this emperor has no clothes, too.

  20. Niayesh,

    You'll have to explain that to me next month.

  21. JimV,

    Thanks for pointing out, I fixed that. The topic you mention has been widely covered, I see no use in writing about it.

  22. Louis,

    Yes, that's my interpretation. But it's a guess, not a research result. Which is why I write, I'm not sure.

    Also, let me add that while I've been thinking about the issues in particle physics for ten years or so and I am pretty sure I understand the ins and outs, I've only looked at inflationary things for a few months. So maybe I'm indeed missing something? The two situations are not exactly the same: In high-energy physics they don't talk about initial conditions. Part of my reason for writing about this here is that I hope if I get something wrong, someone will point it out.

  23. CIP,

    Well, yes. You're a particularly enlightened chicken ;) I should have used the example with the heliocentric model in more detail.

    Replies
    1. Even I was thinking the same on the Solar System example. From my "extensive" reading on this subject (all Wikipedia!) I believe Jupiter, as the planet with the largest mass, shapes the solar system and thus determines the ecliptic. So the initial condition that matters here is Jupiter's initial plane. The rest should "fall in line" over the course of time...

  24. Paddy,

    I don't think there is anything you can say about the Planck epoch that is credible.

    Having said that, you failed to convince me that "this very particular probability distribution" for the initial conditions is a worse explanation than inflation.

  25. senanindya,

    Yes, you could just say the universe started out flat and be done with that. As I wrote, any initial parameter smaller than something will be compatible with data (so far).

  26. What if the universe is so large that it extends very, very far beyond what we can observe? Couldn't space NOT be flat and it still look like it to us? And if that were the case, would there be an inflation problem?

  27. picayuneditor,

    I have no idea how you want space to not be flat but still look flat. And if you'd manage to get that done, I suspect your explanation would be even more rabbit-out-of-the-hat than inflation already is.

  28. Dear Dr B.

    I understood that the curvature of the universe is related to the density. If the universe is flat, omega is 1, and vice versa. I also understood that Omega = 1 is an unstable equilibrium state: If omega = 1 exactly, it stays 1. If it is not exactly equal to 1, it will diverge over time. This would make curvature = 0 and omega = 1 a peculiar value, indeed.
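
    (The standard way to write this, for a single-fluid FLRW universe with equation of state w, is dOmega/dln(a) = (1+3w)*Omega*(Omega-1): Omega = 1 is a fixed point, unstable for matter or radiation (w >= 0) and attracting for w < -1/3.)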

    I can understand physicists wondering what is going on when they see all measurements are within the error range of the only meta-stable point.

  29. Rob,

    As I said, I am not against seeking explanations. I am against so-called explanations that don't explain anything.

  30. Mars,

    I dunno what you mean. I'm not an astrophysicist, so quite possible that I remember this wrongly, but my understanding was that the planets are formed in a disk to begin with, hence their orbits are aligned all along.

  31. Here's a simple probability question I keep asking scientists and philosophers of my acquaintance... You are a doctor looking at the blood test results of one of your patients and the test indicates prostate cancer; the test is known to be 99% accurate -- what are (very roughly speaking) the chances of the patient having prostate cancer?
    Naive answer of "very high" is, of course wrong, but still quite frequent. People with more statistical sophistication start muttering about false positives and false negatives and their respective balance, which is more to the point but is still wrong. The correct answer is: how the heck should I know when you didn't give me anywhere near enough facts to go on?
    Firstly, I did not say whether the test was specifically for prostate cancer, or whether it was more on the lines of "I have no idea what's wrong, so let's test for all possibilities, however unlikely" -- i.e. a fishing expedition.
    More fundamentally, I did not tell you anything about the patient. If it is an elderly gentleman, chances are probably better than 50/50 that the test is correct. OTOH, if the patient is a pregnant woman, the test result is simply wrong -- period.
    Here's the lesson: probabilities only make sense in the context of particular populations.
    Amazingly, having this explained to them, quite a few people start complaining that it is therefore misleading to claim the test to be 99% accurate. We are just not equipped by evolution for probabilistic thinking. :-(
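
    (To put purely illustrative numbers on this: by Bayes' theorem, with 1% false-positive and false-negative rates, P(cancer | positive) = 0.99p / (0.99p + 0.01(1-p)), where p is the base rate in the relevant population. That is about 0.99 for p = 0.5, about 0.09 for p = 0.001, and exactly 0 for p = 0. Same test result, three very different answers.)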

    Replies
    1. Seems like your initial statement "imagine a test that is 99% accurate" causes a lot of implicit bias -- it's a "magic" machine that you're using to convey a basic Bayesian statistic riddle. However, I think it's generally understood that in real-life situations the accuracy of the machine does carry unequal results in different demographics, which does make your initial statement sound misleading :)

  32. Sabine

    Here is an account of inflation and flatness from a popular video series (PBS Space Time)

    https://www.youtube.com/watch?v=blSTTFS8Uco and this from Physicsgirl (https://www.youtube.com/watch?v=MTUsOWtxKKA). Both are good accounts of cosmic inflation.

    Together they last 15 minutes, and there is also this article in Forbes about inflation and its 6 tests:

    https://www.forbes.com/sites/startswithabang/2016/01/07/why-cosmic-inflations-last-great-prediction-may-fail/#c45da9872279

    only one of which is yet to be demonstrated to be correct (gravitational waves).

    So it looks like inflation ticks a lot of boxes for cosmologists even if it is not accepted beyond doubt as yet, which is why I found your initial article fascinating and your subsequent one just as gripping.

    Excellent stuff Sabine.

  33. If an observation shows that an elephant is balanced precariously on a needle, you may devise two distinct theories.

    1) The theory doesn't involve any modification of the setup, and instead relies upon delicate cancellations between frictional forces, air flow and so on. The problem being that each physical quantity must be finetuned to an extraordinary precision in order to match observation.
    2) There is a hidden or invisible rope holding the elephant in place, or perhaps some other visual trickery.

    The latter is not minimal but is clearly the explanation any sane physicist would give for the observation. You may choose to explain theory 1 away with a stability analysis, but the stability analysis also depends on the assumption of no finetuning.

    Anyway, the desire to avoid finetuning in theories of nature is not some simple theoretician's game, it is very much one of the assumptions of the whole business to begin with.

  34. Although the two are logically independent*, apart from the question whether inflation can solve the flatness problem is the question whether the flatness problem, as originally formulated by Dicke and Peebles (interestingly, in a conference-proceedings contribution; I'm not aware of any refereed-journal article making this claim), actually is a problem at all. I actually wrote an entire paper about this, which appeared in one of the handful of leading journals in the field. It is a similar rant to Bee's here, but a bit more restrained in tone. True, it doesn't have many citations. On the other hand, no-one has refuted it. Usually when rubbish appears on arXiv (as did this paper), within a couple of weeks several people demonstrate that it is rubbish. Hasn't happened. My theory is that no-one has actually thought much about this (except, as often, of course, Paddy), and just repeats what they have heard. Those who think that there is a flatness problem in classical cosmology are either more intimidating than I am and/or can offer people jobs, so they are cited and I am not. If anyone has a better theory, I would like to hear it. Of course, if you agree with the points I make in the paper, feel free to cite it. Especially if you want to avoid the chicken-and-egg problem that people might think it is not important because it is not highly cited but it is not highly cited because people think it is unimportant (since otherwise it would be highly cited). (Another possible reason is that there is little reason to cite a paper which shows that something is wrong. Several of my papers are in this category.)

    ------
    *By independent I mean that the fact that there is no classical flatness problem for inflation to solve (even assuming that inflation could) does not imply that inflation could not have occurred. As I have mentioned in other comments, I am no longer an inflation sceptic. It is possible that a correct idea is originally supported for the wrong reasons. While reality is objective, our path to discovering it is contingent and full of false starts, some of which are dead ends but some of which aren't.

  35. "But of course if you start with some very large value, say 10**60, then the result will still be incompatible with data. That is, you really need the assumption that the initial values are likely to be of order 1. Or, to put it differently, you are not allowed to ask why the initial value was not larger than some other number."

    Couldn't you use the Anthropic Principle to answer that question?

  36. "What’s important is that this variable increases in value over time"

    Or decrease, depending on how it is defined. Thinking in terms of the density parameter Omega and the cosmological constant lambda, if their sum is 1 then we have a flat universe. If the universe is not exactly flat, in some cases this sum can deviate markedly from 1, perhaps even become infinite. It can also become arbitrarily small. This is, in a nutshell, the flatness problem.

    As I point out in my paper, there are actually two questions: Should we be surprised that this sum was so close to 1 in the early universe? Given that it was arbitrarily close to 1 early on, should we be surprised that it is still close to 1 today?

  37. "The reason that cosmologists believe it’s a problem is that they think a likely value for the curvature density at early times should have been close to 1. Not exactly one, but not much smaller and not much larger. Why? I have no idea."

    While I agree with most of your rant, I think that you are missing the mark here.
    As I point out in my paper, this is the "quantitative flatness problem". The reason that some think that it had to be close to 1 early on is that, if it were not, it would not still be near 1 today. So we know why people believe this. As I point out in my paper, though, this line of reasoning is flawed.

    The "qualitative flatness problem", on the other hand, is the question as to why it is near 1 early on. In other words, the assumption is actually that it could have "any value", not a belief that it must be near 1, but we know that it didn't. So, after grudgingly accepting that it is near 1 early on, people wonder why it is still not far from 1 today.

    A red herring is that if it is exactly one, then it is exactly 1 always, whereas if it is not exactly 1, it evolves away from 1, either to infinity (in a finite time) or to 0 (in an infinite time). This is a red herring for three reasons. First, this applies only if the universe is exactly homogeneous and isotropic, but we know that this is just an approximation, though in many situations a very good and useful one. Second, there are cosmological models---including the one which describes our universe---in which the opposite of the conventional wisdom holds: fine-tuning is needed to get arbitrarily large values of the curvature parameter (while the conventional argument is that fine-tuning in the early universe is needed to avoid them). This was pointed out in a wonderful paper by Kayll Lake (though it was implicit in the work of others, including the Paddy of this comment thread). Third, this argument was made back when the cosmological constant was thought to be 0, so this special case is the Einstein-de Sitter universe. The parameter has to be 1 exactly because, in the language of dynamical systems, it is an unstable fixed point. However, the static Einstein universe is also an unstable fixed point, but in that case it was deemed to be an argument against that model; probably Eddington was the first to point this out.

  38. "Shrinking the value of some number by pulling exponential factors out of thin air is not a particularly impressive gimmick."

    This is an unfair characterization. It is not just pulled out of thin air. Guth was originally thinking about the monopole problem, and realized that his inflationary solution would also solve the horizon and flatness problems. As discussed here and elsewhere, whether these are problems and whether inflation can solve them satisfactorily are other questions, but it is not just some sort of ad-hoc explanation.

  39. I always found the flatness problem and its supposed resolution doubly confusing for the following reason:
    Apparently, if you start off with a universe which is *exactly* flat, so that the curvature parameter is exactly 0, then there is no problem because it always stays at 0.


    Right. (Note that your "curvature parameter" is apparently the reciprocal of the rate of curvature or perhaps something else; I was using Omega+lambda, which is 1 when flat, but the same arguments apply.) This is not a viable solution, though. See my paper on the flatness problem for more details.

    So, the "problem" arises only if the parameter is "small but not exactly zero" today because then "it must have been really close to 0 in the past".

    Right. But, again as pointed out in my paper, this line of argument is wrong. First, yes, it really was close to 0 in the past, but that follows from the fact that we are talking about a Friedmann-Lemaitre-Robertson-Walker universe. (Why the universe is of this type is a valid, but completely separate, question.) (Note that this is different from the horizon problem. The horizon problem can be solved, though unsatisfactorily, by appealing to initial conditions, namely that the universe is a Friedmann-Lemaitre-Robertson-Walker universe. But the flatness problem claims that there is something strange even given this assumption.)

    Then they bring in inflation to "explain" how the curvature was very close to 0 in the past.

    Right.

    But the next thing you hear is "Inflation predicts that the universe must be completely flat today" and any evidence for that is taken as a victory for inflation.

    No, this is wrong. Show me just one refereed-journal paper which makes this claim. This is probably the result of bad popular-science reporting. To my knowledge, no-one has ever made this claim, ever, at least not any real scientist.

    But hang on, couldn't one just say instead that "the universe started out completely flat" and just be done with it ?


    Yes, this solves the problem by appealing to initial conditions. This is seen as unsatisfactory, correctly, in my view. One can explain anything this way.

    The idea of inflation is to get a flat universe regardless of what the initial conditions were. This is, in principle, a good idea. There are some problems with it, though. First, it doesn't work for all initial conditions, which is the main topic of Sabine's rant. Second, the conditions required for inflation to occur are more improbable than the improbable initial conditions one wants it to eradicate, which is the main topic of Roger Penrose's rant.

  40. "I understood that the curvature of the universe is related to the density. If the universe is flat, omega is 1, and vice versa."

    Yes, if you include the cosmological constant in the density and in the Omega.

    "I also understood that Omega = 1 is an unstable equilibrium state: If omega = 1 exactly, it stays 1. If it is not exactly equal to 1, it will diverge over time. This would make curvature = 0 and omega = 1 a peculiar value, indeed."

    First, it is an unstable fixed point only if the cosmological constant is zero. If not, but if the sum of the matter density and the cosmological constant is 1, we still have a flat universe which stays exactly flat with time, though the values of Omega and lambda change (though the sum is always exactly 1).

    However, if you think of the total density, then the flat universe, in general, is not an unstable fixed point (or line). If the cosmological constant is positive (as in our universe), then, as pointed out by Lake (see link to his wonderful paper in an earlier comment), the universe never gets far from flatness (unless there is fine-tuning). (In the case of a negative or zero cosmological constant, both the density parameter and the normalized cosmological constant go to infinity in a finite time, unless the latter is 0, of course.)

  41. "I can understand physicists wondering what is going on when they see all measurements are within the error range of the only meta-stable point."

    Right and wrong. Right in the sense that this does need an explanation. I think that Sabine is a bit shy of the mark here. Wrong in that it is not really a problem, as I demonstrate in my paper (http://www.astro.multivax.de:8000/helbig/research/publications/abstracts/flatness.html).

  42. Thanks to Paddy for his insightful comments. Paddy is like a Zen master in the blogosphere: he doesn't write as much as many other people, but when he does, it is always worth reading.

    "What physicists really like to do is replace arbitrary parameters with integers: even if there is no way to define a probability distribution on the index, maybe even you would agree that an inverse-square law seems much less in need of further explanation than one that varies r^-2.03747."

    Yes. I think that Sabine misses this point somewhat. All numbers are not equally "probable".

    "Of course the best integer of all is zero, and in fact the "standard" LCDM cosmology has zero curvature. Moreover, the current 95% probability range on present-day curvature (Omega_K) is -0.0031 to 0.0048, (Planck 2015 value, including BAO and other constraints), which is impressively closer to zero than the ~0.1 that you quote. So apparently there is even less need for an inflationary explanation."

    Although I'm sure that we agree, your last sentence seems like a non sequitur. I'm sure that your thoughts were correct, but it came out a bit confusing.

    "I think the most illuminating re-formulation of the flatness problem is as the oldness problem, which essentially asks: how come the universe lasted so much longer than the Planck time without either re-collapsing or going into de Sitter expansion? In this case, your point about inflation failing to solve the problem translates as claiming that you could have started the universe in such a way that it collapsed/blew up on timescales orders of magnitude shorter than the Planck time. As an expert in quantum gravity, do you really think that would be credible, or even meaningful?"

    Indeed. When we talk of age, we at least implicitly bring in the Hubble constant. Note that almost all discussions of the flatness problem talk about Omega and lambda but not H. This is a different, but related, and much more interesting problem. My best guess is that there is an anthropic explanation for this.

    "Evidently, the cosmological oldness problem is not nearly so hard as the "proton decay time" problem, since that is even more discrepant from the Planck time. So I guess particle physicists have an even worse fine tuning problem than cosmologists, if that's any comfort."

    Yes. My view is that anyone who is off by 120 orders of magnitude should get his act together before claiming that there is a problem. (This is a different particle-physics problem.)

    And now for something completely different: One often hears that discussing times shorter than the Planck time, or lengths shorter than the Planck length, is not meaningful, yet we routinely deal with objects much lighter than the Planck mass. Extra points if the reason is obvious to you, and even more if you post a concise explanation shortly after this comment appears. :-)

    "* The horizon problem also has an illuminating reformulation, which goes like this: In a metric-dependent but still somewhat meaningful sense, the big bang happened at every point in space at the same time! How weird is that?"

    Indeed. Note that even if the universe is infinite, at the big bang it was also infinite, so in this case it is not sufficient to somehow thermalize the early universe.

    "As you rightly say, this is the genuine success of the inflationary model, but the low horizon-scale curvature just the lowest-order special case of that."

    Indeed. Paddy, I think this is another example where you make an off-the-cuff remark about something which is obvious to you, but will get no credit, because someone else will turn it into a paper. :-)

  43. Assuming that all my, errm, inflationary comments on this topic appear, let me thank Sabine for the opportunity to blow my own horn here. As is often the case, it is better to have someone else blow it, but blowing it oneself is better than not having it blown at all.

    I seriously doubt that anyone has contemplated the flatness problem in classical cosmology more than I have. I think that my paper on this topic is really worth reading. (It is possible, of course, that Paddy has achieved just as much in his thoughts but, due to the greater speed of his brain, actually didn't need that much time to do it.)

    We shouldn't get too off-topic though. If someone finds something wrong with my paper, send me an email. (My email address is easy enough to find.)

  44. "only one of which is yet to be demonstrated to be correct (gravitational waves)."

    Not sure what you mean here. The recent detections of gravitational waves which have been in the news have nothing to do with inflation. Inflation does make some predictions regarding primordial gravitational waves, but these have not yet been observed.

  45. Sabine said, “Part of my reason for writing about this here is that I hope if I get something wrong, someone will point it out.”

    I laughed when I read this because combined with what I see in a lot of your writing, it’s eerie how similar our thought process is, albeit my IQ is probably a hundred or so points lower than yours. It’s why I think I enjoy reading your blog so much; it’s a window to what I might say if I were a lot smarter.

  46. As I recall off the top of my head [could be wrong on this recollection; I'm more certain it's true for galaxies, but think it's also true for the solar system], the initial state of the solar system was spherically symmetric, but flattened out before forming planets (in the "gas-and-dust" phase). So "planets started in the plane" falls into the category of "technically correct."

    However, the intuition "why are things so flat, something must have pulled things flat" is almost exactly what happened with angular momentum and thermalization forming the "initial conditions."

    The analogy, therefore, would be: there was a process which led to the unusual initial conditions [inflation would be just such a process], and then from there the stuff we see happened. That particular example seems to justify inflation more than not, if you set your analogy to different timescales...

  47. Certainly sounds like a solid argument. What does Alan Guth have to say about it? If this were Twitter I'd @ him to try and get a response...

  48. Phillip,

    As I've said, I am not claiming that we shouldn't look for explanations, I am saying inflation is not a *good* explanation (at least not for the flatness problem). It's as contrived, if not more contrived, than just assuming the initial value was some number, or the initial distribution was sharply peaked etc. And the idea that the initial distribution was uniform on an interval of width 1 is highly questionable.

    As to the inverse-square law example

    "an inverse-square law seems much less in need of further explanation than one that varies r^-2.03747."

    I have ignored that because I simply don't understand it. I would think if you ask around physicists, they would say that a power of "2.000000" is in much MORE need of explanation. And of course it is explained by GR (or Maxwell's equations, depending on which law you had in mind).

  49. JimV,

    To some extent, yes. You can use anthropic reasoning to bound the curvature. (That's convenient because it solves the normalization problem of the probability distribution.) But it's not a particularly precise constraint. (If it was, we'd have better predictions!) Best,

    B.

  50. What is the current status of the horizon 'problem'? I recall that's another claim of inflation...

  51. Phillip

    Bicep2 is what I mean. Inflation makes predictions about the presence of gravitational waves but as yet they have not been found. I am not talking about relativity gravity waves (LIGO) but the ones predicted by inflation.

    This is covered in the article I have posted in my previous post.

    And finally, there should be a set of primordial gravitational waves, with a particular spectrum. Just as we had an almost perfectly scale-invariant spectrum of density fluctuations, inflation predicts a spectrum of tensor fluctuations in General Relativity, which translate into gravitational waves. The magnitude of these fluctuations are model-dependent on inflation, but the spectrum has a set of unique predictions. This sixth prediction is the only one that has not been verified observationally.

  52. The easier part first: Yes, the planets formed from a disk, but why a disk? Well the disk probably started out from an almost spherical shape of dust (with all initial conditions of the dust particles equally likely) but by friction "cooled down" to the configuration which minimises energy (as that gets radiated off) but maintains angular momentum (which cannot so easily be radiated off) and that is a disk.

    But now to the flatness problem. If the choice of initial conditions were simply due to drawing from an (unknown!) probability distribution I would totally agree with you. But I guess the underlying belief is that it is not strictly a random process but that what is viewed as the initial condition for the curvature fluctuations are themselves the end product of some dynamical process the inner workings of which are yet to be discovered.

    And it is an empirical fact that dynamical processes produce results with characteristic scales (where the scales are predicted by characteristic ingredients). This statement is of course the same as saying that dimensionless parameters (the realised scale measured in terms of the intrinsic scale) are mostly O(1). The standard example would be atoms: Even if you don't know anything about Schroedinger's equation, you could guess the typical size of atoms by arguing that atoms are formed mainly by electromagnetic force of electrons and nuclei (measured by e the unit of charge), they are held together by quantum processes (i.e. h-bar plays some role) and electrons are the dynamical thing and thus their mass should play a role. Just based on that, you would guess some "natural" length scale and, surprise, the Bohr radius is only off by a factor of 2.
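
    (For the record, the dimensional estimate here is the combination hbar^2/(m_e e^2) in Gaussian units, roughly 0.5 * 10^-10 m, which is indeed the right ballpark for atomic sizes.)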

    Or, you hear about a new animal which happens to be a vertebrate. Then the natural guess would be it is the size of a cat. Yes, there are mice and there are elephants and both are not exactly the size of a cat, their size can still be viewed as O(1) * size of cat. And that has a dynamical reason: You cannot make bones that do their job for statics that are much much smaller or much much larger. So, just knowing that bones play a role tells you something about the size (irrespective of that animal's ecological niche and history).

    Yes, there is the danger that such an argument misses something important (and thus gets the scale terribly wrong) but that I would then say is something that need explanation.

  53. "As I've said, I am not claiming that we shouldn't look for explanations, I am saying inflation is not a *good* explanation (at least not for the flatness problem). It's as contrived, if not more contrived, than just assuming the initial value was some number, or the initial distribution was sharply peaked etc. And the idea that the initial distribution was uniform on an interval of width highly questionable."

    Yes, I agree. I also cite the paper by Evrard and Coles. :-)

  54. "What is the current status of the horizon 'problem'? I recall that's another claim of inflation..."

    Hasn't changed much in a long time. Whether inflation is the answer is another question, but this is, in my view, a real problem. Yes, it can be explained by an appeal to initial conditions, but then so can anything.

    It's easy to understand: look at the CMB in two different places on the sky. Unless they are really close together, the two areas have never been in causal contact, yet temperature and other properties are the same. Why? This is true if there is not something, inflation or something else, which allows these areas to have been in causal contact early on.

    Other explanations, such as a variable speed of light, seem too ad hoc to me to take seriously.

  55. Amazing post, always a great pleasure to read you!

    To answer your question "MLA, Do you have examples on your mind?", you can take a look at finance. The Gaussian distribution is still used to create security models for banks and everything else, even though we have known since Mandelbrot (he first published about Bachelier's hypothesis of Gaussian distributions in 1962, 10 years before the explosion of finance in 1970) that prices do not vary following a Gaussian distribution, but rather follow power-law distributions. Nassim Taleb is the one who now tries to fight against that, but well, it's hard...

  56. Robert,

    The problem with an argument along the lines that you suggest is that it both assumes we know the scales of the underlying dynamics, and that the dynamics isn't too complex. My example with the heliocentric model sheds light on what can go wrong. The distances between stars in our galaxy don't come about by any simple process that you could estimate easily. Your example with the atom works because you know the relevant scales and, if I may, because you assume that shorter scales decouple.

    Now, look, what you can do is make a hypothesis that this is so. This would be all well and fine by me. Then one could discuss whether it's compatible with observation or whether one can test it. My big problem with the current state of affairs is that the problems, as they are used, are ill-defined. We don't even know exactly what's the problem. You only have to look at this thread to see what I mean, there's a lot of: "But maybe one can think of that way" or, "maybe the underlying belief is." Yes, maybe one can. But maybe scientists should be clearer about what they mean to begin with. I really only pick on the issue with the probability distribution because at least that is clear: You shouldn't speak about probabilities without defining the distribution. (Or priors, if you're a Bayesian.) Best,

    B.

  57. The problem I have here is that if you drop 'no finetuning' as a requirement of science, you could setup the ultimate cosmic conspiracy and drop almost all assumptions.

    Why use GR at all? Newtonian cosmology works just fine. Just setup everything just so from the beginning. Make it so that each possible measurement by humans is subtly influenced a priori (think superdeterminism) to give the desired result and voila, a minimal theory of the universe that explains everything.

    Inflation essentially trades multiple dynamical finetuning problems (many causally distinct parts of the universe must match multiple physical quantities including temperature and curvature) to a single more manageable problem (starting inflation may require a very improbable temperature or quantum fluctuation to set off the inflaton field). I would call that progress, but I suppose it depends on whether you agree about what the axioms of science are, and if no finetuning should be part of them or not.

  58. Haelfix,

    What you say is just wrong. I had a longer discussion about this the other week and maybe I'll write more about this some other time, so let me make this brief. You are confusing the requirement of less finetuning with more simplicity. These are *NOT* the same requirements. Simplicity, loosely speaking, is a statement about the number of axioms. No finetuning ("naturalness") is a statement about the type of axioms.

    The examples that you name all lead to an actual increase in simplicity. Replacing one parameter with another parameter doesn't. The examples you have in mind are thus not finetuning problems. At least not in the way I use the word. (I think it's pointless to fight about the use of words. If you don't like to use the word that way, fine by me, just take it as a definition to understand what I say.)

    A dynamical law is a good explanation if it allows you to simplify the initial conditions. In the case of inflation, it's not clear that this indeed is so. It is clear, eg, when it comes to the concordance model. So no problem with that.

  59. I am aware of that. Minimality and finetuning are indeed distinct. One is about how many additional assumptions we ask of theories, the other is in this case about (Dirac) naturalness.

    So the statement is the following. In the concordance model (without inflation) you have an extremely finetuned, but relatively minimal solution. Newtonian cosmology would be even more minimal, and exponentially more finetuned (as it would need to account for all observations that rule out Newtonian cosmology). Inflation has (I claim) the least amount of finetuning but is also the least minimal (it requires an additional unobserved matter component).

    If minimizing the amount of assumptions is all you cared about in science, then my point is the Newtonian case would make the most sense, and inflation the least. However since that doesn't seem like a viable position, it should be the case that finetuning should matter.

    Now if you agree with the reductio ad absurdum argument, we can quibble about the rest (like the premise that it's not simply replacing one parameter with another).

  60. Consider the electron (muon, tauon) self-energy: a zero-dimensional particle with measured rest mass, charge, and spin 1/2. This "problem" is "successfully" resolved by renormalization, exactly compensating infinities to obtain observables.

    Remove an electron’s charge and self-energy...leaving only spin? An electron neutrino? Might deeply woven renormalization be curve fitting?

    Perhaps a trained AI insensitive to canon and politics would assemble a very different physics re gravitation, particle theory, and quantum mechanics. GR being exact and quantum field theory being reassembled would be ironic.

  61. Haelfix,

    I have no idea why you think that Newtonian gravity - by which I suppose you mean post-post-Newtonian or whatever is necessary to fit current measurement precision - is axiomatically simpler than GR.

    Besides, you are misunderstanding the use of simplicity. Simplicity is not an absolute criterion. It doesn't make any sense to require a description of nature to be simple, period. As Einstein put it very aptly, it should be as simple as possible but not any simpler, meaning the theory must describe observations, and must do so well.

    Now, simplicity isn't presently used as an entirely quantitative criterion. Much of what we call "simple" comes from the math being familiar. But in principle, at least, I think we could use something like computational complexity. In that case you'd still have to find a "simplest" solution that fits the data as well as possible. Introducing more parameters usually gives better fits, but makes the theory more cluttered. There are various statistical methods to find the "simplest" model, then, so let us just assume you use one of these (a toy sketch follows after this comment). On that account, general relativity would be preferable to PPPN or whatever other fudges you need to get the data right without GR. I don't know why you think otherwise.

    Best,

    B.
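
    A toy sketch of what such a chi^2-plus-penalty comparison could look like, in Python. The data, the two models and all numbers below are invented purely for illustration and are not taken from the discussion; the AIC is used as just one of the possible criteria.

    import numpy as np

    # Toy data: a noisy constant signal. Everything here is made up for
    # illustration; nothing comes from actual observations.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    sigma = 0.5
    y = 3.0 + rng.normal(0, sigma, size=x.size)

    def chi2(residuals, sigma):
        return np.sum((residuals / sigma) ** 2)

    # Model A: one parameter (a constant), fitted by least squares.
    p_a = np.polyfit(x, y, 0)
    chi2_a, k_a = chi2(y - np.polyval(p_a, x), sigma), 1

    # Model B: five parameters (a quartic polynomial).
    p_b = np.polyfit(x, y, 4)
    chi2_b, k_b = chi2(y - np.polyval(p_b, x), sigma), 5

    # The Akaike information criterion penalizes extra parameters: the quartic
    # always lowers chi^2 a little, but need not lower the AIC.
    for name, c, k in [("constant", chi2_a, k_a), ("quartic", chi2_b, k_b)]:
        print(f"{name}: chi2 = {c:.1f}, AIC = {c + 2 * k:.1f}")

    The point is only that "fits better" and "is preferable" come apart once you penalize parameters; which penalty to use is itself a convention.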

  62. Hi Bee,

    Distinguish between the simplicity of a theory and the simplicity of a solution of a theory. QCD is a simple theory, but its solutions are a complicated mess of bound states. Likewise, Newtonian physics is simple (indeed a limiting case of SR), but the solution necessary to describe the real world must be ugly or contrived (PPN cosmology etc).

    For our purposes it doesn't really matter much what we pick, b/c if you are allowed to postselect arbitrarily finely tuned initial conditions based upon a desired result, you can make almost any classical mechanics mimic the real world. People are actually trying to do this to explain away Bell's inequalities (see 't Hooft's recent nonsense).

    The point is, there has to be some sort of reasonableness condition placed upon how you select initial conditions in any viable physical theory. Otherwise you end up with elephants balanced on needles, and so forth.

    Regarding computational complexity... I agree in principle as a sort of heuristic guess, although I do not understand how it would work in detail. In the particle physics pheno community, many such measures of 'reasonableness' have been proposed that combine 'technical naturalness' and minimality, but it always seems a little 'tuned' to the author's pet theory, if you know what I mean.

  63. Haelfix,

    No, what you say is not correct, as I already said above. There is nothing whatsoever wrong with "finely-tuned" conditions, which I put in scare-quotes because it's ill-defined without a probability distribution. You can also not rule out 't Hooft's approach with that, and discarding it baselessly as "nonsense" won't help you either.

    I still don't get your point about Newtonian gravity. You can of course take the present state of the universe and roll it back in time with Newtonian gravity, but the initial conditions will then not be simple - it will be a mess. Look, it doesn't even work if you get the ratio of dark/baryonic matter wrong, and you want to throw out expansion! Besides that, you can probe dynamical laws on short time-scales directly just by seeing if they correctly connect conditions at different times. No initial condition will fix if that doesn't work. You cannot of course use that to probe processes on very long times as you have in cosmology. In that case you have only simplicity left. And I really mean simplicity, not finetuning.

    Yes, I know what you mean about computational complexity. But it's a problem we can worry about when physics is taken over by AI. Until then, simplicity will continue to have a very human component based on whether or not we can make sense of the math. In any case, there are situations where a quantification is pretty straightforward, such as doing a chi^2 evaluation of different models' ability to fit data, etc. Best,

    B.

  64. bee:

    I previously asked you (but you didn't post the question or answer it): what distinction do you make between a theory and a model? You seem to conflate them or use them interchangeably. Do they differ, and if so, how does one determine whether a theory or a model 'works', and what does that mean?

    richard

  65. Phillip, your quote: "It's easy to understand: look at the CMB in two different places on the sky. Unless they are really close together, the two areas have never been in causal contact...".
    This is valid for points located no more than 2 degrees apart, ca. 4 times the diameter of the Moon, or for the Andromeda galaxy, which subtends a similar angle, although our eye only sees the central bright core.
    CMB radiation is regarded as ancestral or relic radiation which retains an equilibrium profile since it decoupled from a much higher-temperature bath >10^12 Kelvin some 14.7 billion years ago. Our existence in the here and now is then regarded as lying upon a CMB radiation cooling curve, presently at 2.725 Kelvin, the most accurate equilibrium curve ever matched in the laboratory. All this was postulated before the neutrino was thought to have rest mass. At CERN many particle physicists still considered these neutral uncharged leptons to be massless. Particle pair-production temperatures were known for all massive fermions and bosons, apart from the particle associated with the four Higgs fields.
    With the realisation that neutrinos with rest mass would solve the solar neutrino problem, the measurement of neutrino mass became important and still remains so. A 30 eV neutrino (mass eigenstates for 3 generations) would close the universe.
    Today the expectation is that the electron neutrino rest mass will come in at ca. 1 milli-eV (1 meV). A photon-neutrino equilibrium bath at this mass would create, alongside the extant CMB, a true equilibrium at 10% of this recorded temperature. Particle pairs of chirality L and R would be created at rest in this thermal bath and would gravitate. Their mass-energy density is very low, with a critical density some 1/2000 of the ca. 25% known baryon and dark matter density, the mass equivalent of 1 electron per cubic metre. This gravitational contraction of neutrinos would happen in an anti-de Sitter space with a negative cosmological constant rather than a de Sitter model with a positive sign. The Hubble constant, presently ca. 70 km/sec/megaparsec +- ca. 7% for a homogeneous isotropic universe, could be balanced by a much greater overall mass of the CMB that exhibits a counter Hubble flow, a countercurrent of cold neutrinos at ca. 0.8 km/sec/megaparsec.

    The occurrence of such a countercurrent would be a consequence of neutrino degeneracy pressure. In order to operate, a mass of some 2*10^54 kg is envisaged. An alloy of two such universes would explain the flatness problem, or rather render it redundant: it was always flat, and for a true equilibrium there was no past, no future, no direction for time's arrow. With gravity, contraction is inevitable.

  66. naivetheorist,

    I do not recall any such comment. In any case, please note that I sometimes do not approve off-topic comments, questions that are repetitions of earlier questions, or comments that contain any kind of link to webpages that look fishy. And that's leaving aside that some comments simply land in the junk folder and I never see them in the first place.

    I wrote about what is a theory and what is a model here. In practice there are no strict rules for how to use them. Roughly speaking, a model is more specific than a theory which is more specific than a "framework" or "paradigm" which you should use with caution because they make you sound very philosophical.

  67. Phillip-
    On Isotropy and homogeneity
    The reason the critical density and the Hubble parameter at time t are derived from Birkhoff's geometric proof etc. is that they vastly simplify the calculations. The galaxies, groups, clusters and superclusters we observe, and their structures, appear similar in all directions of the sky: the cosmological principle.

    https://www.youtube.com/watch?feature=player_embedded&v=08LBltePDZw&noredirect=1
    for a flight through the universe, ca. 2 million galaxies to z~0.3. Hubble deep fields reveal many more, up to the recently reported revision to 10 billion in total, and the conventional relic CMB radiation comes from z~1000! Note there are no individual stars resolved in these images. There are filaments, great walls and voids not easily seen in this video. Nevertheless, conventional opinion holds that these filaments and voids extend to the deeper fields and become more isotropic and homogeneous with distance. Many of the experts would agree that the universe isn't actually isotropic or homogeneous; still, considering a spherical cow of radius r remains the best approach to teaching such a complex topic as cosmology or climate science.

  68. The initial low entropy of the universe seems to be a similar problem. It seems like the same logic applies.

  69. Bee,

    The fact that it would be a 'mess', as you put it, is the point. It tells you very clearly that the theory is sick. Again, you can arrange your initial data so that every terrestrial observation ever made just so happens to be a classical cosmic ray hitting a detector, in a sort of wonderful conspiracy such that it doesn't matter what you think you see (this is right alongside 't Hooft's superdeterminism defense or, I don't know, the universe as a simulation). That's why I don't care too much about the details of which Newtonian cosmology currently looks the closest to the real world for the purpose of the argument.

    More importantly, the concordance model shares the same problem. It is sick b/c it is the entropic equivalent of finding all the air molecules on one side of the room. It screams for a mechanism.

    Now, you can be rigorous and pick a measure to make such an 'air on one side of the room' thing look less crazy, but then it's the human theorist picking by hand what he wants to get, and you can be sure that none has ever been proposed that wouldn't badly break something else.

    Anyway, I don't think we're going to agree about finetuning in physics at this point so perhaps best to move on.

  70. Haelfix,

    Indeed it seems like we're talking past each other. Please re-read my first reply to you. You are conflating simplicity with finetuning. It's *not* the same thing. Difficult initial conditions are quantifiably problematic. Finetuning isn't. It's an aesthetic problem. GR simplifies initial conditions over Newtonian gravity. Inflation doesn't make the initial conditions any simpler. (Not in the case of curvature and the horizon problem, but note my remark about ET correlations.)

  71. This whole business with some initial conditions being "difficult" or "fine-tuned" or whatever starts being unnervingly incomprehensible. It'd be much more understandable if someone were to talk of their "computability" or "compressibility" and such. Then again, there is this terrible problem with what allows passage to dissipative chaotic structures where bits get rapidly eaten one by one. (Not to mention the recent revival of Ellis's "top-down" causation (Auletta - Tognoni) on top of all that as a prerequisite for self-aware dissipative structures!)

  72. Sabine,
    I'm glad that you referred to your earlier post about the "widely-spread and yet utterly wrong idea that the Bullet-cluster rules out modified gravity". I recently noted another article relevant to that discussion.
    You cited two articles estimating tiny probabilities for observing such an extremely high energy collision of galaxy clusters. I noted a 2015 article by Kraljica & Sarkar who suggest that such probabilities can be misleading, and instead estimate the absolute number of such observations to be expected. I understand their conclusion to be this:
    If we assume that dark matter exists, so that we can use the relative collision velocity of 3000 km/s from hydrodynamic simulations based on dark matter, then we conclude that about 0.1 Bullet-Cluster observations should have been seen, which is OK.
    On the other hand, if we use the relative collision velocity of 4500 km/s from the bow shock front deduced from X-ray observations, then we conclude that the observation is very unlikely.

    The latter assumption suggests the falsification of General Relativity. (STVG explains 4500 km/s.)
    Now let us recall the response of the particle physics community to every hint in new data of a possible falsification of the Standard Model of particle physics: Elation! Ecstasy!! New Physics!!!
    Why is the response of the cosmology community exactly the opposite to new data that might undermine ΛCDM? Studied indifference, if not hostility?

    Finally, Sabine, I seem to recall that you sometimes cite Stacy McGaugh's "living review" of theories of modified gravity. If so, I think it is inadequate, as it doesn't even mention STVG. I get the impression that Stacy is somehow allergic to STVG. I sent him Moffat's book as an amazon gift. I hope it will help.

  73. "A 30eV neutrino(mass eigenstates for 3 generations) would close the universe."

    But a 30 eV neutrino is ruled out by other observations.

  74. "The reason the critical density and the Hubble parameter at time t are derived from Birkhoff's geometric proof etc is that they vastly simplify the calculations. The galaxies, groups,clusters and superclusters we observe and their structures appear similar in all directions of the sky, the cosmological Principle."

    The question is why does the cosmological principle (that the universe is homogeneous) hold? It can't justify itself.

    In the old days, it was indeed an assumption to simplify calculations. These days, it follows from observed isotropy and the assumption that we aren't in a special place (Copernican principle). There are also other indications that it is true. So, now it is not an assumption, but an observation. The question is, why is the universe like this?

  75. Phillip-
    The reason a ~30 eV neutrino was raised was that it would explain the dark matter component of the universe. In hindsight it was untenable.

    Why is the Earth like it is? From a chemistry viewpoint, the Earth is vastly reducing. The atmosphere is highly oxidising. As a consequence of life, it has been transformed and maintained in a far-from-equilibrium thermodynamic state. Of course we can always ask why, but currently that's too ambitious a question to raise in the context of a whole universe.

  76. "But of course if you start with some very large value, say 10^60, then the result will still be incompatible with data. That is, you really need the assumption that the initial values are likely to be of order 1."

    Not in a universe of closed geometry, in which the initial absolute value of the curvature density parameter is infinite. In a simplified model of exponential expansion due to a cosmological constant L, the scale factor is:

    a(t) = a0 cosh(Ha t)

    where:
    Ha = asymptotic Hubble parameter = c sqrt(L / 3)
    a0 = sqrt(3 / L)

    The Hubble parameter is: H(t) = Ha tanh(Ha t)

    The absolute value of the negative curvature density parameter is:

    -Omega_k(t) = 1 / [sinh(Ha t)]^2

    At t = 0, -Omega_k(t) is infinite.

  77. Johannes,

    All you learn from that is that you better not use initial conditions at t=0.

  78. Sorry, but since initial conditions are just the conditions at t=0, I don't get your last point.

    My point was that the geometry of the universe is just another initial condition, which a priori can be closed or open, and that, if it was closed, then the initial absolute value of the curvature density parameter was infinite. Thus, your statement "that there are still initial values [of the curvature density parameter] incompatible with data" applies only to the case of open geometry. As is well known, observations of flat geometry such as the latest from Planck are compatible with the 3 cases (https://arxiv.org/abs/0901.3354).

    Summarizing the point for the benefit of casual readers: the beginning of slow roll inflation can be approximated by a lambdavacuum FLRW model with curvature, which would be "no roll". The 2 versions of that model have opposite initial conditions:

    Closed geometry
    a(t) = a0 cosh(Ha t)
    H(t) = Ha tanh(Ha t); H(0) = 0, starts at rest.
    -Omega_k(t) = 1 / sinh^2(Ha t); -Omega_k(0) = infinite.

    Open geometry
    a(t) = a0 sinh(Ha t)
    H(t) = Ha coth(Ha t); H(0) = infinite, starts with a bang.
    Omega_k(t) = 1 / cosh^2(Ha t); Omega_k(0) = 1.
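
    A quick numerical check of the two cases just listed, as a sketch in Python. The normalization is my own choice for illustration (units with c = 1 and Ha = 1, so t is measured in units of 1/Ha) and is not part of the comment above.

    import numpy as np

    t = np.array([0.1, 1.0, 5.0, 10.0])   # t = 0 itself is excluded

    # Closed geometry: -Omega_k(t) = 1 / sinh^2(Ha t), so Omega_k is negative.
    omega_k_closed = -1.0 / np.sinh(t) ** 2

    # Open geometry: Omega_k(t) = 1 / cosh^2(Ha t), which starts at 1.
    omega_k_open = 1.0 / np.cosh(t) ** 2

    for ti, oc, oo in zip(t, omega_k_closed, omega_k_open):
        print(f"t = {ti:5.1f}:  closed {oc: .3e}   open {oo: .3e}")

    Both magnitudes fall off roughly as exp(-2 Ha t) once t >> 1/Ha; the closed value blows up as t -> 0 because H(0) = 0 ("starts at rest"), while the open one starts at 1.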

  79. Initial conditions are chosen at an initial time, not necessarily at t=0. You never put initial conditions on a singularity.

  80. OK, but the closed lambdavacuum model does not have a singularity at t=0! Note that the value of the scale factor at t=0 is a0 = sqrt(3 / L), i.e. nonzero and finite.

    The reason why both density parameters, for curvature and for lambda, become infinite at t=0 in that model is just because H(0) = 0, i.e. because the universe starts at rest. Let's recall that the density parameter for each component is defined as:

    Omega_i(t) = Rho_i(t) / Rho_crit(t)

    where the critical density is defined as:

    Rho_crit(t) = 3 H(t)^2 / (8 pi G)

    Thus we have:

    Omega_k(t) = -k c^2 / [a(t)^2 H(t)^2]

    Omega_L(t) = c^2 L / [3 H(t)^2]
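
    To see these definitions at work, here is a minimal consistency check for the closed case, again with a normalization of my own for illustration (c = 1, Ha = 1, a0 = 1, k = +1):

    import numpy as np

    t = np.linspace(0.2, 6.0, 5)   # avoid t = 0, where H vanishes
    a = np.cosh(t)                 # a(t) = a0 cosh(Ha t)
    H = np.tanh(t)                 # H(t) = Ha tanh(Ha t)

    omega_k = -1.0 / (a**2 * H**2)   # Omega_k = -k c^2 / (a^2 H^2), k = +1
    omega_L = 1.0 / H**2             # Omega_L = c^2 L / (3 H^2) = (Ha / H)^2

    # The Friedmann equation for a lambdavacuum universe forces the sum to be 1
    # at every time, even though each term diverges as t -> 0.
    print(np.allclose(omega_k + omega_L, 1.0))   # True

    So the infinities at t = 0 come purely from dividing by H(0) = 0 in the definitions, as stated above.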

  81. Johannes,

    Well, you began your comment referring to an infinite initial value. I merely told you that's nonsense. If that's not what you mean, then maybe explain what you mean.

  82. OK, let's round it up. My point is that your statement...

    "This has the consequence that if you start with something of order 1 and add inflation, the result today is compatible with observation. But of course if you start with some very large value, say 10^60, then the result will still be incompatible with data. That is, you really need the assumption that the initial values are likely to be of order 1.
    [...]
    It is not the case for “any” intrinsic curvature that the outcome will be almost flat. It’s correct only for initial values smaller than something."

    ... when viewed in the context of the simplest inflationary models, lambda-vacuum with curvature, applies ONLY to the case with open geometry, where:

    Omega_k(t) = 1 / cosh^2(Ha t), and therefore Omega_k(0) = 1,

    and NOT to the case with closed geometry, where:

    -Omega_k(t) = 1 / sinh^2(Ha t), and therefore -Omega_k(0) = infinite.

    In the latter case, the infinite initial value of -Omega_k is NOT due to an initial singularity (which there is not in this model), but to an initial state of rest.

    Even though Omega_k starts at 1 in one model and at minus infinity in the other, with sufficient elapsed time its absolute value becomes arbitrarily small in both models.

  83. Johannes,

    I think you should do a dimensional analysis. You seem to be speaking of the curvature term k/a^2, which is dimensionful, rather than of the curvature density, which is dimensionless.

  84. Sabine,
    I believe you are not alone, by any means, in your healthy skepticism of the motivations for Inflation. At least in writing, one can find R. Penrose suggesting similar arguments in his 2006 book "Road to Reality", section 28.5 "Are the motivations for inflation valid?". I suppose that he makes a longer argument for that in his most recent book. I believe there are many of us around that agree with you and him.
    Best

  85. Alejandro,

    I feel really bad for never having read Penrose's book. They look so scary... But one day, I promise, I'll give them a read.


COMMENTS ON THIS BLOG ARE PERMANENTLY CLOSED. You can join the discussion on Patreon.
