Physicists have spent much brainpower on the questions of where these numbers come from, whether they could have taken values other than the ones we observe, and whether exploring their origin is even in the realm of science.
One of the key questions about these parameters is whether they are really constant, or whether they are time-dependent. If they vary, their time-dependence would have to be determined by yet another equation, and that would change the whole story we currently tell about our universe.
The best known of the fundamental parameters that dictate how the universe behaves is the cosmological constant. It is what causes the universe’s expansion to accelerate. The cosmological constant is usually assumed to be, well, constant. If it isn’t, it is more generally referred to as ‘dark energy.’ If our current theories for the cosmos are correct, our universe will expand forever into a cold and dark future.
The value of the cosmological constant is infamously the worst prediction ever made using quantum field theory; the math says it should be 120 orders of magnitude larger than what we observe. But that the cosmological constant has a small non-zero value is extremely well established by measurement, well enough that a Nobel Prize was awarded for its discovery in 2011.
The Nobel Prize winners Perlmutter, Schmidt, and Riess measured the expansion rate of the universe, encoded in the Hubble parameter, by looking at supernovae distributed over various distances. They concluded that the universe is not only expanding, but is expanding at an increasing rate – a behavior that can only be explained by a nonzero cosmological constant.
It is controversial, though, exactly how fast the expansion is today, that is, how large the current value of the Hubble constant, H0, is. There are different ways to measure this constant, and physicists have known for a few years that the different measurements give different results. This tension in the data is difficult to explain, and it has so far remained unresolved.
One way to determine the Hubble constant is by using the cosmic microwave background (CMB). The small temperature fluctuations in the CMB spectrum encode the distribution of plasma in the early universe and the changes the radiation has undergone since. From fitting the spectrum with the parameters that determine the expansion of the universe, physicists get a value for the Hubble constant. The most accurate such measurement currently comes from the Planck satellite.
Another way to determine the Hubble constant is to deduce the expansion of the universe from the redshift of the light from distant sources. This is the way the Nobel Prize winners made their discovery, and the precision of this method has since been improved. These two ways of determining the Hubble constant give results that differ with a statistical significance of 3.4 σ. That’s a probability of less than one in a thousand that the difference is due to random data fluctuations.
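As a sanity check on that number: the two-sided tail probability of a Gaussian fluctuation at 3.4σ follows from the complementary error function. A quick sketch (the 3.4 comes from the text above; everything else is standard statistics):

```python
from math import erfc, sqrt

def two_sided_p(n_sigma):
    # Probability that a Gaussian fluctuation exceeds n_sigma in either direction
    return erfc(n_sigma / sqrt(2))

print(two_sided_p(3.4))  # roughly 7e-4, i.e. less than one in a thousand
```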
Various explanations for this have since been proposed. One possibility is that it’s a systematic error in the measurement, most likely in the CMB measurement from the Planck mission. There are reasons to be skeptical, because the tension goes away when the finer structures (the large multipole moments) of the data are omitted. For many astrophysicists, this is an indicator that something’s amiss either with the Planck measurement or the data analysis.
Or maybe it’s a real effect. In this case, several modifications of the standard cosmological model have been put forward. They range from additional neutrinos to massive gravitons to changes in the cosmological constant.
That the cosmological constant changes from one place to the next is not an appealing option because this tends to screw up the CMB spectrum too much. But the currently most popular explanation for the data tension seems to be that the cosmological constant changes in time.
A group of researchers from Spain, for example, claims a stunning 4.1 σ preference for a time-dependent cosmological constant over an actually constant one.
This claim seems to have been widely ignored, and indeed one should be cautious. They test for a very specific time-dependence, and their statistical analysis does not account for other parameterizations they might have previously tried. (The theoretical physicist’s variant of post-selection bias.)
Moreover, they fit their model not only to the two datasets mentioned above, but to a whole bunch of others at the same time. This makes it hard to tell why their model seems to work better. A couple of cosmologists whom I asked why this group’s remarkable results have been ignored complained that the data analysis is opaque.
Be that as it may, just when I put the Spaniards’ paper away, I saw another paper that supported their claim with an entirely independent study based on weak gravitational lensing.
Weak gravitational lensing happens when a foreground galaxy distorts the images of farther away galaxies. The qualifier ‘weak’ sets this effect apart from strong lensing, which is caused by massive nearby objects – such as black holes – and deforms point-like sources into partial rings. Weak gravitational lensing, on the other hand, is not as easily recognizable and must be inferred from the statistical distribution of the shapes of galaxies.
The Kilo Degree Survey (KiDS) has gathered and analyzed weak lensing data from about 15 million distant galaxies. While their measurements are not sensitive to the expansion of the universe, they are sensitive to the density of dark energy, which affects the way light travels from the galaxies towards us. This density is encoded in a cosmological parameter imaginatively named σ8. Their data, too, is in conflict with the CMB data from the Planck satellite.
The members of the KiDS collaboration have tested which changes to the cosmological standard model work best to ease the tension in the data. Intriguingly, it turns out that the explanation that works best, ahead of all others, is a cosmological constant that changes with time. The change is such that the effects of accelerated expansion are becoming more pronounced, not less.
In summary, it seems increasingly unlikely the tension in the cosmological data is due to chance. Cosmologists are cautious and most of them bet on a systematic problem with the Planck data. However, if the Planck measurement receives independent confirmation, the next best bet is on time-dependent dark energy. It wouldn’t make our future any brighter though. The universe would still expand forever into cold darkness.
[This article previously appeared on Starts With A Bang.]
Update June 21: Corrected several sentences to address comments below.
Thank you B. I hope I will still be alive when some of this fog clears.
Thank you for the article.
So if the CC can vary over time:
1. Is there another force constraining it, or can it vary by itself?
2. Does this mean the Big Rip is still a possibility?
Your last sentence. Not necessarily. Perhaps, if the constant changes, it oscillates...
...Tunable in "natural" ways
Cosmic expansion is morphology development progressively minimizing interfacial free energy (e.g., of a gyroid phase). The "foam" large scale structure of the universe has dirt (matter) collecting at phase discontinuities. Model in conflicting ways to generate a universe of publications.
Fourier transform the universe's mass distribution seeking periodicities.
Fascinating. I concur with Michael Fisher.
I am also a little suspicious of the accuracy of microlensing.
Suggested word substitution: "Intransparent" ==> opaque or arcane. Although with intransparent you have a claim of invention. ;-)
I've followed some of your posts before, and usually find them informative and well considered. However, now you're treading on territory that's much more familiar to me, and I find myself not being so impressed - sorry!
First of all you seem to confuse the "cosmological constant" (where there's no real tension discussed in the literature) with the "Hubble constant" (where there are ~3sigma differences between some CMB and classical measurements).
Secondly you say that most cosmologists would bet on a systematic problem with the Planck data - but give no basis for making such a statement. This certainly isn't the opinion of most cosmologists that I talk to!
There are always apparent "tensions" in data, particularly when one is free to search over all the different directions in a multi-dimensional parameter space. And it's always easy to find models with extra degrees of freedom that fit some of these small differences - this has been fully discussed in many papers.
The issue right now is trying to decide whether any of the "tensions" are big enough to get excited by. What we need are "tensions" that grow into genuinely significant differences (which we haven't seen yet) and theoretical ideas that in some simple way explain more than one thing (which we also haven't really seen yet). The trick I suppose (and hence the reason why so many people are engaged in this endeavour) is to try to pick the thing that grows to be a big deal, before everyone else does!
My own guess is that there's nothing here to get excited about, and that it will all resolve itself slowly, into a series of small systematic corrections and unsurprising differences. But I'm paid to be skeptical!
They measure the expansion rate, no? In my understanding the expansion is currently dominated by the cosmological constant, provided it's constant, so I didn't see the need to distinguish the two. Sorry about the sloppiness and thanks for the correction.
As to my statement about what most cosmologists bet on, etc: I didn't make a survey. I merely write things like this because one of the reasons people read blogs is that they hope to get a sense of what's being discussed. You are more than welcome to add your impression!
Thanks. Will keep in mind that the word "intransparent" is rather, erm, "intransparent" :D
1) if it varies it means it's a field and there's a force associated with it. It's sometimes referred to as 'the fifth force'
2) in principle yes, but personally I doubt this question will be settled in my lifetime
Wasn't there just this week a paper suggesting that the tension could be resolved if we (and the entire volume probed by supernova measurements) live within a slightly underdense region ("void") of the universe?
If the CC varies over time, does that rule it out as being the "simple" CC Einstein added to his field equations and mean it must be something contributing to the stress-energy tensor instead? IIRC the lambda term multiplying the metric has to be constant otherwise it's not a covariant term.
When reading this post, I immediately thought the same thing as Douglas Scott (whose comment I read after reading the post), who is a really famous CMB guy and certainly knows his stuff. Yes, at least in part of your post you are confusing the cosmological constant with the Hubble constant.
"They measure the expansion rate, no? In my understanding the expansion is currently dominated by the cosmological constant, provided it's constant, so I didn't see the need to distinguish the two."
I agree with Douglas Scott's comments. Your reply is just plain wrong. Sorry, but there is no other way to put it.
The Hubble constant is the rate of expansion divided by the scale factor, so it is the "speed" of the expansion. The presence of matter slows this down---deceleration---and the cosmological constant speeds it up---acceleration. However, the cosmological constant is constant in time, while matter dilutes with the expansion, so in the early universe there was deceleration while now there has been acceleration for a few billion years. This is a higher-order effect. There are various ways to measure it. They all agree. There is no tension between various measurements nor is there any real evidence that the cosmological constant is not constant.
The Hubble constant is different. There is real tension. What the source of this is, I don't know. But I doubt it is new physics. (By the way, the Hubble constant in general changes with time. It is not called the Hubble constant because it is constant in time, but because it is a constant of proportionality.)
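The parenthetical point above – that H changes with time even though it is called a constant – is easy to illustrate. A minimal sketch for a flat Friedmann model, assuming the illustrative values Ωm = 0.3, λ = 0.7 (not a fit to any dataset): H falls as the universe expands, even while the cosmological constant makes the expansion accelerate.

```python
from math import sqrt

def hubble(a, H0=70.0, Om=0.3, OL=0.7):
    # H(a) in km/s/Mpc for a flat Friedmann model:
    # matter dilutes as a^-3, the cosmological-constant term stays fixed
    return H0 * sqrt(Om / a**3 + OL)

# H decreases as the scale factor a grows, approaching H0*sqrt(OL) asymptotically
for a in (0.5, 1.0, 2.0):
    print(a, round(hubble(a), 1))
```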
Yes, the Hubble constant is the expansion rate. But this is not "dominated by the cosmological constant" in any meaningful sense. Even if it were, one could still measure the Hubble constant independent of any knowledge of the cosmological constant.
"Wasn't there just this week a paper suggesting that the tension could be resolved if we (and the entire volume probed by supernova measurements) live within a slightly underdense region ("void") of the universe?"
Probably. This is an old idea. The question is whether there is independent evidence for such a void.
"If the CC varies over time, does that rule it out as being the "simple" CC Einstein added to his field equations and mean it must be something contributing to the stress-energy tensor instead? IIRC the lambda term multiplying the metric has to be constant otherwise it's not a covariant term."
Right on all counts. Even in the case of the "simple" CC, some people put it on the right side as part of the stress-energy tensor rather than on the left as a purely geometric term. Weinberg famously explained the smallness of the cosmological constant by having both: a negative geometrical term which almost, but not quite, balances the large particle-physics-inspired term, with the degree of cancellation being explained via the weak anthropic principle.
"I am also a little suspicious of the accuracy of microlensing."
Note that there is no microlensing in the post or in the sites linked to in the post.
I hear what you say. My understanding was that the value we are talking about is H_0, and not H(t). What is wrong with my statement that the current expansion is dominated by the cosmological constant? I am terribly sorry of course for getting this wrong, but please explain.
Hi again Phillip, Douglas,
I've changed some sentences, but I'd still appreciate an explanation.
OK, I'll give it a try.
Yes, we are talking about H_0, not H(t). In general, H does change with time, not only in the presence of a cosmological constant. Exactly how H changes with time does depend on the cosmological constant. But in practice no-one measures H(t).
Think of a diagonal curve which is almost, but not quite, a straight line. The Hubble constant is like the slope of the line and deviations from this depend on the values of the density parameter and cosmological constant, but these show up only far up the curve (i.e. at high redshift). One can measure the slope of the curve, in particular that part of it close to you, with no knowledge of slight deviations farther away. Also, one can measure these deviations even if the slope is not known (basically because the slope cancels out).
A Friedmann model is characterized by three free parameters. A common choice is the Hubble constant, the density parameter, and the normalized cosmological constant (essentially the constant cosmological constant divided by the square of the Hubble constant---up to a constant factor---just as the density parameter is the physical density divided by the square of the Hubble constant---up to a (different) constant factor). In general, all of these change with time (the normalized cosmological constant only if the Hubble constant changes (which it in general does), the density parameter in addition because of dilution due to the expansion). One measures Omega_0, lambda_0 (by convention, lower-case lambda is the normalized version) and H_0. One then knows their values at all times. (This is because trajectories in the lambda-Omega plane do not cross, so knowing the values at any one time gives them at all times, with the Hubble constant just a scaling factor. The classic paper on this topic, and probably my all-time favourite cosmology paper, is a wonderful paper by Rolf Stabell and Sjur Refsdal.)
Classical cosmology works out how some observed quantity as a function of redshift depends on these three parameters. One then observes this quantity as a function of redshift and fits for the three parameters. This is what the 2011 Nobel Prize was awarded for, where the observed quantity was essentially the observed brightness of supernovae. With supernovae, one can measure the Hubble constant by the low-redshift slope of brightness as a function of redshift; higher-order effects don't matter here. One can also---independently---measure the curvature at higher redshifts, which depends on lambda and Omega.
With the CMB, one also works out how the angular power spectrum depends on these three parameters (and some additional, non-classical parameters) and does the fit. No redshift dependence here. Even though the CMB is at high redshift, what one fits for is H_0.
To be continued.
There is tension between this value of H_0 and that measured by the supernovae at low redshift. This has nothing to do with the cosmological constant. Also, don't think that the fact that one measurement is at low redshift and one at high redshift means that the Hubble constant changes with time or with redshift. It does, but in both cases the derived quantity is H_0. And in general H changes with time, whether or not there is a cosmological constant.
There is no tension between the values of the other parameters as measured by the CMB and as measured by the supernovae or by any other method.
The universe is expanding. It is also accelerating. I don't know what it means to say that the expansion is dominated by the cosmological constant. If other effects are negligible, then one has exponential expansion, which our universe will approach asymptotically (de Sitter space). But the measurements of the Hubble constant and other parameters don't depend at all on how close we are to pure exponential expansion.
Imagine your job is to determine how much money one needs to live a reasonably comfortable life as a student. Different people will have different estimates, based on what they think are essential things. Of course, if one is responsible for this, one needs to adjust it for inflation. Imagine that inflation is the same for all products (not true in practice, of course). Then various people could agree on the amount of inflation---one per cent per year, say---even if they don't agree on how much money a student needs. Similarly, there is agreement between various measurements of lambda and Omega even if there is some tension between measurements of the Hubble constant. Note that people could still disagree on how much money a student needs even if there were no inflation.
Check out the papers by the supernova groups where they plot the observed brightness as a function of redshift. The asymptotic slope at low redshift depends on the Hubble constant while the curvature at high redshift depends on the other parameters. (Of course, in a logarithmic plot, the Hubble constant can be an offset.)
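The separation described in the paragraphs above can be seen numerically. Below is a sketch (not the supernova teams' actual pipeline) that integrates the flat-universe luminosity distance for two models with the same H0 but different (Omega, lambda), using the illustrative values 0.3/0.7 versus 1.0/0.0: at z = 0.01 the distances agree to better than a percent (the low-redshift slope only knows about H0), while at z = 1 they differ by tens of percent (the curvature knows about the other parameters).

```python
from math import sqrt

C_KMS = 299792.458  # speed of light in km/s

def lum_dist(z, H0=70.0, Om=0.3, OL=0.7, steps=1000):
    # Luminosity distance (Mpc) in a flat universe: midpoint-rule
    # integration of dz / E(z) with E(z) = sqrt(Om*(1+z)^3 + OL)
    dz = z / steps
    comoving = sum(dz / sqrt(Om * (1 + (i + 0.5) * dz)**3 + OL)
                   for i in range(steps))
    return (1 + z) * (C_KMS / H0) * comoving

for z in (0.01, 1.0):
    d_lcdm = lum_dist(z, Om=0.3, OL=0.7)  # accelerating model
    d_eds = lum_dist(z, Om=1.0, OL=0.0)   # matter-only model, same H0
    print(z, round(d_lcdm), round(d_eds))
```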
More detail is not practical in a blog comment; that would need reading some books or a personal lecture. :-)
"The Nobel Prize winners Perlmutter, Schmidt, and Riess, measured the expansion rate of the universe, encoded in the Hubble parameter, by looking at supernovae distributed over various distances. They concluded that the universe is not only expanding, but is expanding at an increasing rate – a behavior that can only be explained by a nonzero cosmological constant."
Almost. Yes, the expansion rate is encoded in the Hubble parameter. But one could measure the acceleration even if the Hubble parameter were completely unknown.
What is confusing is that the tension in the measurements of the Hubble constant (which does exist) does not imply tension in the measurements of the higher-order parameters (which does not exist). Yes, it might be possible to resolve this tension by some sort of time-variable cosmological "constant". This obviously moves beyond a standard Friedmann cosmology. But it is probably more important to determine if the tension is real first.
Yes, an acceleration implies a change in speed with time, and the Hubble constant is "speed", but the acceleration is not measured by measuring the Hubble constant at low redshift and comparing to a measurement at high redshift, at least not in any meaningful sense. In all cases, one fits parameters to observations. No-one measures the acceleration "directly". The acceleration is a derived quantity, predicted by the assumed theory (GR) given the parameters obtained from the fit.
@Phillip Helbig & Sabine
"The Hubble constant is the rate of expansion divided by the scale factor, so it is the "speed" of the expansion. The presence of matter slows this down---deceleration---and the cosmological constant speeds it up---acceleration."
On a conceptual level I would like to understand this: How does the presence of matter slow expansion down, deceleration ? I mean matter attracts matter, but matter does not attract 'space' and we are talking about the expansion of space here.
Could you elaborate on that please ?
H_0 is about 2 thirds explained by the cosmological constant. But it is not what the papers disagree on here.
H_0 is the measurement of the current expansion speed. That the two experiments measure different values for the current expansion speed does not by itself imply that the cosmological constant changes over time.
H_0 should have a unique value independent of how dark energy changes over time.
But then it is true that a time-dependent cosmological constant could explain the difference, since it would impact the experiments by different amounts. But it is only one of many possibilities, which I hope will be constrained in the near future.
I understand the Hubble constant H_0 does not a priori have a fixed relation to the cosmological constant. All I mean is that in the first Friedmann equation, if the cc dominates, the one is the square of the other. And I thought that the cc does dominate.
You write "I don't know what it means to say that the expansion is dominated by the cosmological constant."
That's all I meant. Until today I thought that, as a rule-of-thumb estimate, (H_0)^2 = \Lambda – given what we already know about the energy/matter content, of course. Now I'm very confused and will have to think about this. Thanks for your explanation,
I have a related question about cosmological expansion and there seems to be some know-how clustered here:
What was driving the cosmological expansion during the first couple of billion years of the universe? I.e., before the break-even point when matter was diluted enough that the dark-energy-driven acceleration began dominating the matter-driven deceleration.
Or why did cosmological expansion start off with a strongly positive value after the big bang? Is that some kind of "space inertia from the bang"? Probably not.
Of course, without this initial strong expansion the high matter density should have instantly recollapsed the universe, no?
kind regards and thanks in advance, Tobias
In the last paragraph you say that the data from weak lensing was compared to the Planck data and some tension was found between the two. Was it also compared to the supernova data?
Thanks for the post!
"On a conceptual level I would like to understand this: How does the presence of matter slow expansion down, deceleration ? I mean matter attracts matter, but matter does not attract 'space' and we are talking about the expansion of space here."
The correct answer is that this is what GR says.
Some insight can be gained by so-called Newtonian cosmology. Yes, in this case the deceleration is due to matter attracting matter, but the duality to relativistic cosmology is well defined.
There is some debate about whether the expansion-of-space picture is "real". In the end, what matters is that the numbers are correct. Personally, I think it is useful, especially since there are empty (neither matter nor radiation) universes which have a finite volume that changes with time (they do contain a cosmological constant, though); this is hard to visualize if space isn't "real".
"Matter doesn't attract space" is true in day-to-day life, but our intuition based on day-to-day life doesn't necessarily apply in other regimes.
"H_0 is about 2 thirds explained by the cosmological constant. But it is not what the papers disagree on here."
I'm not sure what this means. About 2/3 of the energy density is explained by the cosmological constant, but that is something rather different.
"H_0 is the measurement of the current expansion speed. That the two experiments measure different values for the current expansion speed does not by itself imply that the cosmological constant changes over time."
"H_0 should have a unique value independent of how dark energy changes over time."
"But then it is true that a time dependent cosmological constant could explain the difference since it will impact the experiments different amounts. But it is only one out of many possibilities which I hope is constrained in the near future."
In other words, there is only one true value of H_0, at least if there is homogeneous expansion, but if the assumptions behind the measurements are invalid, it might appear to have a different value depending on the type of measurement.
"I mean matter attracts matter, but matter does not attract 'space' and we are talking about the expansion of space here."
Layman's attempt at an answer: but we can only make measurements related to matter (photons from distant stars and galaxies), and these measurements are affected by matter attracting matter.
"All I mean is that in the 1st Friedmann equation, if the cc dominates, the one is the square of the other. And I thought that the cc does dominate"
This is the asymptotic case, where lambda=1, the de Sitter model. So, the cosmological constant dominates in the sense that it is the largest component of the mass-energy budget (even having an absolute majority). It does not dominate in the sense that the de Sitter model is a good approximation.
What threw me was saying that the expansion is dominated by the cosmological constant. I'm not sure that this is well defined. (My maths professor Ina Kersten used to say that "well defined" is not well defined. :-) ) The acceleration is indeed dominated by the cosmological constant, since it is positive rather than negative (as would be the case with matter but no cosmological constant).
Since H_0 is measurable, what is clear is the present rate of expansion. It is what it is, whatever the value of the cosmological constant. A cosmological constant means that the rate of expansion was slower in the past than it would have been without a cosmological constant.
The fact that the universe is expanding is primarily due to initial conditions. Since the cosmological constant causes acceleration, one might think that it would make the current expansion faster than it would otherwise be. True in some sense, but since the current rate of expansion is measurable, H_0, it is better to say that because of the cosmological constant, the expansion was slower in the past than it otherwise would have been.
Think of the function R(t), where R is the scale factor. The cosmological constant tends to increase the second derivative. However, the first derivative is normalized at the present time.
By (as far as we know) coincidence, the age of the universe now (and only now) is approximately the same as that of an empty universe with the same value of H_0. In other words, the early deceleration has been made up for---just now---by more recent acceleration. (This coincidence, equivalent to saying that the average value of the deceleration parameter is 0, is good to at least a few per cent. That the energy densities due to matter and the cosmological constant are "nearly equal" is more an order-of-magnitude coincidence (the former is a bit less than half the latter).)
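That coincidence is easy to check numerically. A sketch, assuming the illustrative values Omega_m = 0.3, lambda = 0.7; an empty universe with the same H_0 would give exactly t0*H0 = 1, and the flat model comes out within a few per cent of that.

```python
from math import sqrt

def age_times_H0(Om=0.3, OL=0.7, steps=100000):
    # Dimensionless age of a flat universe: t0*H0 = integral_0^1 da / (a * E(a))
    # with E(a) = sqrt(Om/a^3 + OL), so the integrand is 1/sqrt(Om/a + OL*a^2)
    da = 1.0 / steps
    return sum(da / sqrt(Om / ((i + 0.5) * da) + OL * ((i + 0.5) * da)**2)
               for i in range(steps))

print(age_times_H0())  # close to 1: early deceleration roughly cancels later acceleration
```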
"That's all I meant. Until today I thought that as a rule-of-thumb estimate (H_0)^2 = \Lambda - that is, given what we already know about the energy/matter content of course."
The usual definition is lambda = Lambda/3H^2. We measure lambda at about 0.7. The de Sitter universe (cosmological constant, no matter, asymptotic state of our universe in the infinite future) has lambda = 1. So, there, H^2 = Lambda/3, while today it is about Lambda/2. Whether this is a good rule of thumb (50% error) depends on the degree of approximation needed. (This definition of lambda is by analogy with that of Omega = (8 pi G rho)/(3H^2). In both cases there is a quantity with dimension time^(-2) divided by 3 times the square of the Hubble constant. It is sometimes easier to work with dimensionless quantities. The fraction 8/3 ultimately comes from the volume of a sphere and a factor of 1/2 in kinetic energy.)
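The normalization described in the comment above is just a rescaling, which a few lines of code make explicit. A sketch in arbitrary units with H0 = 1; the value 0.7 is the measured lambda quoted in the text:

```python
def normalized_lambda(Lambda, H):
    # lambda = Lambda / (3 H^2), dimensionless by construction
    return Lambda / (3.0 * H**2)

H0 = 1.0                  # work in units where H0 = 1
Lambda = 3 * 0.7 * H0**2  # chosen so that lambda_0 = 0.7 today, i.e. Lambda = 2.1
# de Sitter limit: lambda = 1 means H^2 = Lambda/3; today H0^2 = Lambda/2.1
print(normalized_lambda(Lambda, H0))
```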
Ok, good. I think at least I got the math right. As far as I'm concerned, 0.7 = 1.
Regarding that statement that "threw" you, note that I explicitly wrote "the expansion is currently dominated by the cosmological constant" (emphasis added), where by "expansion" I mean H, hence "current expansion" = H_0.
Supernovae data is included here. (SNe = supernovae).
"H_0 is about 2 thirds explained by the cosmological constant. But it is not what the papers disagree on here."
I'm not sure what this means. About 2/3 of the energy density is explained by the cosmological constant, but that is something rather different."
And the square of H_0 is proportional to the energy density so my statement is still correct. I guess I could say the square of H_0 is about 2 thirds explained by the cosmological constant.
"But then it is true that a time dependent cosmological constant could explain the difference since it will impact the experiments different amounts. But it is only one out of many possibilities which I hope is constrained in the near future."
In other words, there is only one true value of H_0, at least if there is homogeneous expansion, but if the assumptions behind the measurements are invalid, it might appear to have a different value depending on the type of measurement."
If H_0 is not a constant (in space, I assume) we are not in an FLRW metric, so several assumptions will be wrong. It is possible, and I did mention there being several possibilities. I personally do not think we are far from an FLRW metric, given how homogeneous the CMB is.
The expansion rate is related to the energy density and the curvature of space. Our universe is very close to flat, so curvature can be ignored. A flat universe expands or contracts depending on how much energy it contains. GR does not care what kind of energy it is.
The difference between dark energy and matter is that matter gets diluted when the universe expands, thereby slowing the expansion, while dark energy/the cosmological constant does not lose energy density with expansion.
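The dilution argument above can be made concrete. A sketch with the illustrative present-day fractions Omega_m = 0.3, Omega_Lambda = 0.7, in units of today's critical density: matter dominated in the past, dark energy dominates today, with the crossover at a scale factor of roughly 0.75 (redshift z ~ 0.3).

```python
def densities(a, Om0=0.3, OL0=0.7):
    # Matter density scales as a^-3; the cosmological-constant term is fixed
    return Om0 / a**3, OL0

# Scale factor at which the two contributions were equal:
a_eq = (0.3 / 0.7) ** (1.0 / 3.0)
print(a_eq)                  # ~0.75

rho_m_then, rho_L_then = densities(0.5)  # matter-dominated in the past
rho_m_now, rho_L_now = densities(1.0)    # dark-energy-dominated today
```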
"Regarding that statement that "threw" you, note that I explicitly wrote "the expansion is currently dominated by the cosmological constant" (emphasis added), where by "expansion" I mean H, hence "current expansion" = H_0."
OK, sure, the cosmological constant is important now but not in the early universe. However, I always think of the acceleration being determined now by the cosmological constant, not the expansion. Two reasons. First, as I said, the current value is what is fixed, it is what it is; other parameters affect its value at earlier and later times. Second, the expansion is not caused by the cosmological constant. (Another way of looking at it: it seems strange to say that the first derivative (of the scale factor) is dominated by something which affects the second derivative.)
It's been a good discussion, though. Now time for some sunshine and swimming!
The cosmological constant varying over time reminds me of the idea once proposed that the speed of light was faster in the early Universe, put forward as a counter-argument to inflationary theory and the issues it claims to resolve.
Matti: Thanks for your reply. Unfortunately it didn't really help me understand and I'll explain why. I hope I am not missing something extremely fundamental. But if so please let me know and have some patience ;)
My state of mind was that regular matter or energy content leads to contracting universes. Hence DE was needed to stabilize the universe against collapse (see the Einstein anecdotes about the cosmological constant). As DE density is constant in an expanding universe while matter/energy is diluted, DE will eventually dominate and drive expansion at an ever-increasing rate. I thought this was the scenario we find ourselves in. In this scenario I don't understand what was driving expansion before DE began dominating, i.e. before expansion reached the point where the outward pressure of DE became larger than the inward pressure of the energy/matter content.
Unfortunately, the only way to interpret your post in a way that might advance my understanding is this: not only DE, but also regular energy/matter expands space. This would soundly explain the fast and decelerating initial expansion. After that, though, expansion should have continuously slowed down, asymptotically approaching the non-diluting expansion imposed by DE. This would be in conflict with the observed accelerating expansion. Therefore I conclude that this interpretation of your post is wrong.
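The crossover this comment worries about can be made quantitative. In flat ΛCDM (a sketch of mine with illustrative parameter values, not from the thread), the expansion decelerates while matter dominates the source term ρ + 3p and starts accelerating once the lambda density exceeds half the diluted matter density:

```python
def accel_scale_factor(omega_m=0.3, omega_l=0.7):
    # Acceleration condition in flat LCDM: the second derivative of the
    # scale factor turns positive once omega_m * a**-3 / 2 < omega_l
    # (matter has p = 0, the cosmological constant has p = -rho), so the
    # turnover sits at a = (omega_m / (2 * omega_l))**(1/3).
    return (omega_m / (2.0 * omega_l)) ** (1.0 / 3.0)

a = accel_scale_factor()
print(f"acceleration began at a = {a:.3f}, i.e. redshift z = {1/a - 1:.2f}")
```

With these numbers the turnover lands around a ≈ 0.6 (z ≈ 0.7). Before that the expansion was decelerating but still expanding; deceleration slows expansion, it does not require contraction, which is the point other comments make about the expansion being inherited from initial conditions.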
Just a reminder, that we don't understand why the universe expands (since Einstein's equations are symmetric with respect to time). See this nice paper.
Paddy's papers are some of the best in the business, and he has done much good work trying to answer the Big Questions. Within traditional cosmology, the universe expands because of initial conditions. The equations also describe a contracting universe. This might be an unsatisfactory answer, like the one to the question why the universe exists at all, but this can't be answered within the framework of traditional cosmology (which doesn't mean that it can't be answered scientifically).
Thank you for several explanations, I appreciate your elaborations.
But I am still puzzled about how gravity could make the universe collapse, or how matter could slow down an expansion.
I copied this short explanation from somewhere:
"The Einstein field equations predict that the universe can expand, not that it is expanding. The fact that the universe is expanding was first observationally discovered by Edwin Hubble in 1929. Before this discovery, Einstein had introduced the cosmological constant to keep the universe from collapsing under the influence of gravity. When he heard of Hubble's discovery, he removed the cosmological constant from the equation, as expansion could explain why the universe wasn't collapsing due to gravity. The inflationary universe theory gives us clues on how the expansion first originated/occurred after the big bang"
I can understand how all the matter would end up being attracted to one center, due to matter attracting matter. But logically this would leave space unaffected in terms of contraction or expansion.
In fact this huge amount of concentrated matter would then simply generate a gravitational field, no?
Is there any description of a mechanism or principle along which this 'space' would contract, or slow down in expansion?
Shantanu/Phillip: So just to make sure that this was addressing my question: The initial expansion does not unambiguously follow from GR but is merely the expected strong initial contraction inverted in sign? That is indeed quite frustrating.
How far back in time can measurements properly determine the Hubble parameter?
@Koenraad: Einstein thought that the universe was static, but in GR this is in general not the case. Thus Einstein introduced the cosmological constant with a specific value which allows for a static (though, as Eddington pointed out, unstable) universe.* When he learned that expansion had been observed, he saw no reason to keep the cosmological constant. Gamow claims that Einstein said that the cosmological constant was his "biggest blunder" ("größte Eselei"), but there is no independent evidence of this. Weinberg said that his blunder was thinking it a blunder. Some say that his blunder was that if he had not included Lambda to make the universe static, he could have predicted an expanding (or contracting!) universe. Maybe this is what Einstein meant, if he actually said this.
According to GR, gravity does indeed cause space to expand more slowly than it is expanding now. This doesn't necessarily mean collapse if expansion is there as an initial condition. Without lambda, there is a critical density above which the universe will collapse in the future; if the universe is less dense, it will expand forever. Without lambda, this density corresponds to a spatially flat universe (denser means positive curvature and less dense means negative curvature). With lambda, things are more complicated. It is the sum of lambda and Omega which determines the geometry. Roughly speaking, if lambda is large enough, the universe will expand forever, even if it is spatially closed (which can't happen for vanishing lambda). (If lambda is negative, the universe will always collapse.)
* Is instability a problem? For an idealized universe, no, since there is nothing to perturb it. If the solution is an approximation to the real world, then it is a problem. Ironically, the Einstein-de Sitter universe is unstable in exactly the same mathematical sense (a repulsor in the lambda-Omega parameter space when these parameters are used to construct a dynamical system), though this didn't prevent it from being the "standard model" for a number of years. Of course, expansion directly contradicts a static model, so the two cases are not directly comparable.
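The collapse-versus-eternal-expansion cases above can be checked numerically rather than memorized. A rough sketch of mine (not from the comments): scan the Friedmann function (da/dt)²/H₀² = Ω_m/a + Ω_Λ a² + (1 − Ω_m − Ω_Λ) forward from today; if it ever drops to zero, the expansion halts and the universe recollapses:

```python
def recollapses(omega_m, omega_l, a_max=1000.0, steps=100000):
    """Does the expansion ever halt in the future? Scan
    (da/dt)^2 / H0^2 = omega_m / a + omega_l * a**2 + (1 - omega_m - omega_l)
    for a >= 1 (a = 1 today); a zero means expansion stops and
    gravity then pulls the universe back together."""
    omega_k = 1.0 - omega_m - omega_l  # curvature term
    for i in range(steps):
        a = 1.0 + (a_max - 1.0) * i / (steps - 1)
        if omega_m / a + omega_l * a * a + omega_k <= 0.0:
            return True
    return False

print(recollapses(0.3, 0.7))   # roughly our universe: expands forever
print(recollapses(2.0, 0.0))   # dense closed matter universe: recollapses
print(recollapses(0.3, -0.1))  # negative lambda: always recollapses
```

It also reproduces the "large enough lambda" case from the comment: recollapses(1.5, 0.6) comes out False even though Omega_m + Omega_Lambda > 1, i.e. a spatially closed universe that nonetheless expands forever.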
@Tobias: Like all differential equations, the Friedmann equations describe how something changes. Initial conditions are extra. I'm not sure what the "strong initial contraction" is nor whether it is expected. (I learned yesterday that "universal nocturnal expansion" is actually something discussed in serious philosophy journals! Extra points if you can figure out what it is without looking it up.)
No-one directly measures the Hubble parameter at any time other than the present in any meaningful sense. What is done is to derive the cosmological parameters from observations, which can then predict the Hubble parameter (and other things) at any time. Of course this assumes some model (e.g. the Friedmann models), but one of course also tests whether the model is consistent. Thus, we know the Hubble parameter back to very shortly after the big bang.
It might seem strange that we can go back this far, but consider the following: We understand nuclear matter very well. Imagine the entire visible universe compressed to a ball with the density of nuclear matter (ignoring gravity for the moment). How large would it be? Remember the Hubble Deep Field has an angular size about the same as a grain of rice at arm's length. Remember that in deep observations the sky is practically covered with galaxies. What do you think? It would be smaller than the solar system.
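The "derive the parameters, then predict the Hubble parameter at any time" procedure described above reduces, in the simplest flat model, to one line. A sketch with illustrative parameter values (mine, not from the thread; radiation is neglected, so it is not valid arbitrarily close to the big bang):

```python
import math

def hubble(z, h0=70.0, omega_m=0.3, omega_l=0.7):
    """H(z) in km/s/Mpc for flat LCDM, radiation neglected:
    H(z) = H0 * sqrt(omega_m * (1 + z)**3 + omega_l)."""
    return h0 * math.sqrt(omega_m * (1.0 + z) ** 3 + omega_l)

print(round(hubble(0.0)))   # today: 70
print(round(hubble(1.0)))   # at z = 1 the expansion was faster: 123
print(round(hubble(10.0)))  # deep in matter domination: 1400
```

The model assumption does the heavy lifting here; as the comment says, one also has to test that the model is consistent with the observations before trusting the extrapolation.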
Cosmology as a science is perhaps more involved with its own history than other physical sciences. The history of cosmology in the first half of the 20th century or so is particularly interesting. I recommend Discovering the Expanding Universe by Nussbaumer and Bieri and an excellent volume of conference proceedings from a conference honoring Vesto Slipher (who performed early measurements of the redshifts of galaxies). A book which is mainly about something else, but contains an excellent summary of 20th-century cosmology, I reviewed for The Observatory a few years ago. The cosmologist and popular-science writer (very successful in both fields, in addition to being a nice guy and having a life outside of science as well) has written an excellent history of cosmology, as usual taking in a broader scope but with an emphasis on 20th-century cosmology; I reviewed this for The Observatory as well.
Everyone interested in cosmology should own a copy of (and of course read more than once) Edward Harrison's textbook Cosmology: The Science of the Universe.
Oh, the joys of serendipitous discovery on the internet (which, alas, can lead to confusing me with a BMX guy, even though my name is about as unique as Bee's)! While searching for "CUP" (Cambridge University Press) and "cosmology" I came across A Cup of Cosmology, which is a blog written by a cosmology student and might be of interest to readers here.
Phillip: thanks for your answers! With "strong initial contraction" (not a technical term) I meant the following: Shortly after the big bang, with dark energy the same as it is now but with matter and energy much less diluted 'AND' without an expanding initial condition (positive Hubble parameter), would GR then not give rise to an accelerating contraction?
So it is just the deliberate choice of an initially sufficiently positive Hubble parameter that kicked the universe onto its path of almost perfectly balanced slow expansion?
As for the universal nocturnal expansion ;) I'm taking a guess: Sounds like it has to do with the differences in cognition at night, such as sleeping/dreaming/self reflecting and expansion could be related to either the magnitude of brain activity in certain regions or a physical expansion of certain brain regions even. Or most likely I am completely off :)
Another meaningless layman's comment:
In GR, mass does not attract mass; rather mass acts on space - it curves space. If it can curve space, why should it be surprising that it can cause spatial volume to contract?
The whole notion of 'space volume' is, in GR, ill-defined. It depends on the choice of coordinates. To see this, just imagine you'd decide to choose coordinates based on mysteriously shrinking yardsticks and define volume in terms of them. Suddenly, Brooklyn will expand! And that's a perfectly allowed definition.
It is indeed possible to choose a coordinate system in cosmology in which the space-volume just doesn't increase. But nobody uses it because it has some other awkward properties. The right statement is therefore that in the usually chosen spatial coordinates, space-volume does increase. (And even then, just how it increases depends on what time coordinate you choose.)
For this reason I consider the question of whether or not volumes increase entirely pointless. You can solve the conundrum by sticking to actually observable quantities, such as redshift or time-delay or what have you. Also, four-distances are actually good quantities to use.
"The cosmologist and popular-science writer (very successful in both fields, in addition to being a nice guy and having a life outside of science as well)"ReplyDelete
While this description is probably enough to identify him, I didn't intend to leave him anonymous. Of course, the author is John D. Barrow.
"Shortly after the big bang, with dark energy the same as it is now but with matter and energy much less diluted 'AND' without an expanding initial condition (positive hubble parameter), would GR then not give rise to an accelerating contraction ?"ReplyDelete
Yes, it would.
"So it is just the deliberate choice of an initially sufficiently positive hubble parameter that kicked the universe on its path of almost perfectly balanced slow expansion ?"
I wrote a whole paper on this very subtle question for Monthly Notices of the Royal Astronomical Society.
"As for the universal nocturnal expansion ;) I'm taking a guess"
Way off. Hint: it has something to do with Einstein, although indirectly.
That's not much of a hint :-) It's hard to think of any aspect of modern physics that doesn't have something to do, at least indirectly, with Einstein.
A better hint: just google it!
I'm gratified that you've deduced whether the expansion can really be said to be currently dominated by the cc, or rather the acceleration is now determined by the cc, or perhaps the current value is simply what it is, and whether indeed the cc can be said to have any causative effect on the expansion at all. Terminology, let's face it, is of the utmost importance. But if it were up to me, I'd just do the math and not worry about the precise terminology. It's so much easier.
Yes, googling it turns up the correct answer.
I'm sure that there are other forms of universal nocturnal expansion. :-)
If the cosmological constant comes out not to be actually constant, would that mean that the laws of physics are not time-invariant? Which in turn would mean (by virtue of Noether's theorem) that energy is not conserved? (Excuse me if I ask, I am not a physicist)
The connection with Einstein might be a bit unclear. I'm reading Paul Nahin's Time Machine Tales (with Einstein and Gödel as the frontispiece), which is and is not a new edition of his classic Time Machines; the current volume has fewer technical notes but is expanded to include new developments, especially from philosophers, in the last 20 years. I was surprised that time travel is a subject of many articles in leading philosophy journals, as is universal nocturnal expansion.
I've just started the newer book. I'm sure I'll end up recommending it, though. The older book (which went through two editions) contains discussion of science fiction (the new one contains two science-fiction short stories by Nahin as well), philosophy, and physics in relation to time travel. Nahin has an encyclopedic knowledge of (at least) time-travel science fiction, and also knows his GR.
"If the cosmological constant comes out not to be actually constant, would that mean that the laws of physics are not time-invariant?"ReplyDelete
"Which in turn would mean (by virtue of Noerther's theorem) that energy is not conserved ? (Excuse me if I ask, I am not a physicist)"
Energy isn't conserved anyway in cosmology; even in GR the concept of energy is not clearly defined.
Imagine a universe consisting only of radiation. (To avoid problems with infinity, make the density large enough that the universe is spatially closed and hence has a finite volume.) The number of photons is constant, but they are redshifted and hence lose energy.
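The bookkeeping in that thought experiment is simple enough to write down. A sketch of mine (arbitrary units): the photon count stays fixed while each photon's energy falls as 1/a, so the total radiation energy goes as 1/a and the energy density as 1/a⁴:

```python
def radiation_totals(a, n_photons=1.0e6, e_per_photon_today=1.0):
    """Fixed photon population in an expanding universe (a = 1 today).
    Each photon redshifts as 1/a; the volume grows as a^3."""
    energy = n_photons * e_per_photon_today / a  # total energy ~ 1/a
    density = energy / a**3                      # energy density ~ 1/a^4
    return energy, density

e1, d1 = radiation_totals(1.0)
e2, d2 = radiation_totals(2.0)
print(e1 / e2)  # doubling a halves the total energy: 2.0
print(d1 / d2)  # ...and cuts the energy density by a factor 16: 16.0
```

Same photon number, less total energy: exactly the sense in which energy is not conserved in an expanding universe.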
As always, Edward Harrison's textbook Cosmology: The Science of the Universe has the best discussion of this topic.
Phillip: energy is conserved, full stop. The ascending photon doesn't lose energy. The descending photon doesn't gain it. If you send a 511 keV photon into a black hole, the black hole mass increases by 511 keV/c², no more. In a similar vein I think conservation of energy applies to dark energy. Hence I think the density is reducing and that the authors of https://arxiv.org/abs/1610.08965 are right.
Koenraad: gravity doesn't make the universe contract. A gravitational field alters the motion of light and matter through space, but it doesn't make space fall down. Despite the waterfall analogy, we do not live in some Chicken Little world where the sky is falling in. Sabine: thanks for this, it's very timely.
Phillip is correct as far as energy conservation is concerned. The example you give has a static background, so it's not relevant for the discussion.
I think the main issue in cosmological data is that the universe seems to have critical density. In theory, chances for this are... zero.
So, is it that GR equations are incomplete and need dark fields (as a poor patch) to explain facts? Or is it some real energy? Or is it something else?
"In theory, chances for this are... zero."
Please explain this statement.
"I think the main issue in cosmological data is that the universe seems to have critical density. In theory, chances for this are... zero."ReplyDelete
I think this might mean "The universe is flat if the sum of lambda and Omega is exactly one, and this would require infinitely fine tuning". In other words, flat universes are a set of measure zero in the lambda-Omega parameter space.
But consider the following: The universe is (arbitrarily close to) flat if the radius of curvature is (arbitrarily close to) infinity. Pick a number at random between 0 and infinity. Chances are that it will be large. Thus, flat universes are infinitely more probable.
The problem with both of these statements is that the quantities involved change with time. Kayll Lake pointed out a) that there is a combination of lambda and Omega which is conserved (i.e. it is a constant of motion if one thinks of the evolution of the universe as trajectories in the lambda-Omega plane; see the classic paper by Stabell and Refsdal linked to above) and b) nearly flat universes are much more probable based on this measure.
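The time dependence that undermines the naive measure arguments can be made concrete. For a matter-plus-curvature universe (no lambda; my sketch, using the standard textbook relation), the total Omega at scale factor a follows from the Friedmann equation as Omega(a) = Omega_0 / (Omega_0 + (1 − Omega_0) a), so any Omega_0 ≠ 1 today was driven toward 1 in the past:

```python
def omega_at(a, omega0):
    # Matter + curvature only: E^2(a) = omega0 / a**3 + (1 - omega0) / a**2,
    # so Omega(a) = (omega0 / a**3) / E^2(a)
    #             = omega0 / (omega0 + (1 - omega0) * a).
    return omega0 / (omega0 + (1.0 - omega0) * a)

# A universe 10% away from flat today was extraordinarily flat early on:
for a in (1.0, 1e-4, 1e-8):
    print(f"a = {a:g}: Omega = {omega_at(a, 0.9):.10f}")
```

This is the flatness problem in its classic form: a flat probability distribution over Omega at one epoch is wildly non-flat at another, which is why a conserved combination like Lake's is the better-behaved quantity to reason with.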
Adler and Overduin wrote an interesting paper quantifying flatness which readers here might want to check out.
I mean any quantity of energy (mass, light, etc..), stress, and "lambda" can be used in the Einstein field equations; it will make a theoretically valid universe. Even a mass-empty universe can be valid "in theory" (there are a number of toy-universes in GR textbooks - nice exercises). Those "parameters" being free at the origin, the probability for measuring critical density is zero (actually close enough because it may still be measured at one epoch). So, either we live at a very interesting place, and time, or there is something more to critical density.
Taken as a law of nature, it implies that the universe's birth and evolution have no freedom. I guess you know what that gives (see the paper I mentioned to you 2 or 3 weeks ago: MOND where a_0 evolves like H, no dark matter particles, a discrepancy between the local H=1/T and H_0, definite and calculable ratios between matter, dark matter and dark energy, and so on... I think 7 or 8 "predictions", all already measured).
PS: W.r.t. your previous answer on the choice of metric where the rulers "shrink": if the ruler is an electron wavelength, Brooklyn will not expand... but the universe will.
"the probability for measuring critical density is zero"
Please explain that.
PS: Sure, there are good and bad ways to define rulers.
I'm pretty sure that my penultimate comment above captures what akidbelle is trying to say.
I think the assumption that the parameters are "free at the origin" is wrong. If by this it means that there is a flat probability distribution, then this distribution won't be flat at other times. In classical cosmology (Friedmann equation etc), there is no special time, so saying "I assume a flat distribution of likely values of cosmological parameters at some arbitrarily chosen time" doesn't make sense.
Read my paper. Read Lake's paper. Read the paper by Adler and Overduin.
this is the question I tried to answer.
The Planck mission results give a total universe density pretty close to the critical density (you know the figures and the equation). There is no theoretical reason in GR for this, except sheer luck - probability ~ 0. This is what I mean.
Is that clear now?
PS: I do not know what is good or bad. Do you?
There is no theoretical reason in GR or any other theory for the value of any parameter, hence I cannot follow your argument. If the total density was any other value, that too would have probability of zero.
True, but I think there is a difference, in that zero curvature is in some sense a special value. If we find that lambda+Omega=2.35254646437578686, no-one would say "hey, that's pretty improbable" (well, Ramanujan might, if he were still alive: don't forget 1729!). True, lambda+Omega=1 is just as improbable, but since this has some significance, I think it would mean something different (just as lambda+Omega=3.141592653589793238462643383279 would be interesting and probably need an explanation).
But the point is moot since "the probability is zero to measure lambda+Omega=1" doesn't mean anything in practice since at best we can measure it to be near 1 to some accuracy. But also "the probability to measure it arbitrarily close to 1 is arbitrarily small" is wrong. It is based on an invalid assumption. See Adler and Overduin, my paper, Lake's paper (all mentioned above) and also Evrard and Coles.
Sabine, yes of course.
What you say is just the definition of a free parameter.
The argument is that this is not just any other value. Among infinite possibilities it is the only "special" density in GR. So why reject the possibility that it could be a law of nature (of event horizons), and then analyze the consequences? The emerging concept is much more productive than putting free parameters in the equations, or model building in general - the first calculations are simplistic and give a natural match.
Also, there is no reason for this in GR, but 150 years ago there was no reason for SR, GR, or QM. The next generation was to catch the next decimal... and this situation was the exact opposite of what happens right now with the SM and GR. It is nature that tells you nothing - unless of course you are blinded or trained to disregard data. How long could you afford this? (No offense meant of course.)
admittedly. But in the paper I talk about, just look at "predictions" :)
This is supposed to be the way science works...
If not, this is not science.
I am not "rejecting the possibility" I am merely pointing out that your belief that there is something in need of being explained has no rational basis. Neither SR nor QM nor GR were invented to explain a numerical coincidence, so these examples work against your case.
I am not trying to defend a "need to be explained" but rather a strong opportunity - in my opinion even a huge one according to the results in the paper. Sorry if I misworded.
the paper is in Progress in Physics, current issue, 3rd paper:
Best to both of you,
I've had a look at the paper. A typical crackpot paper. Progress in Physics appears to be a crackpot journal. Probably, the name was chosen to sound like Progress of Theoretical Physics or Reports on Progress in Physics. (There is even a crackpot journal called Nature and Science so that one can say "I have published in Nature and Science" and have it sound like "I have published in Nature and Science".)
I think you are wrong on GR, on this account.
GR was invented precisely to explain a numerical coincidence: the fact that the gravitational and inertial masses have the same value.
It could have been a "coincidence" but it wasn't. Same thing for the universe density being close to the critical value: could be a coincidence ... or not. And you have to admit that the latter possibility is more exciting for physicists.
GR resolves the inconsistency between SR and Newtonian gravity. That has nothing to do with coincidence. Besides this, you seem to be confused about the equivalence principle. It is not a parameter equality that the inertial mass of every body is equal to its gravitational mass. If you think it is, why don't you let me know which is the parameter that you think ensures this?
I agree with your first sentence but not with the rest.
I put myself in an historical perspective -- which is indeed what one should do when trying to understand what is now a numerical coincidence but may become something more significant in the future. So I was talking about the Newtonian "weak" EP (universality of free fall).
Within Newtonian gravity, the equivalence of inertial and gravitational mass was just that - an unexplained coincidence. And it did puzzle people, Einstein included. Of course, since GR has been developed, we understand a lot more about this, and what was before a coincidence has now been incorporated into a complex theory in a very nontrivial way.
So when you say "[The EP] is not a parameter equality that the inertial mass of every body is equal to its gravitational mass" you are of course right, but you are reasoning in post-GR terms. Before GR, there was a coincidence problem; after GR, we have a totally different and deeper view on this issue.
The same might be true for the fact that the universe density is very very close to the critical value. Today a coincidence, tomorrow perhaps a consequence of a new theory (whether as significant as GR, nobody knows).
"you are of course right"
Of course. And that's really all there is to say about that.
But since you insist on history, Einstein did emphatically not succeed by attempting to cook up a mechanism that would tune the inertial mass to be equal the gravitational mass. Instead he succeeded in *postulating* that this be so - based on observation. Consequently, GR does not explain this coincidence because it's an assumption to derive the theory.
Using this analogy, we should just postulate that the spatial curvature is zero. Fine by me, but good luck convincing akidbelle.
I did not talk about "cooking up mechanisms" or "postulating" something. You know the former was wrong and the latter right (for GR), because you have the benefit of hindsight.
Look, what I mean is very simple, and to me uncontroversial: before Einstein, people noticed that m_i=m_g, remarked it was a funny coincidence, but did not make anything out of it. Einstein thought it deserved more attention and used this "coincidence" (among other things) as a stepping stone to construct GR, by postulating it is a fundamental property of nature. That's all I mean: that if we say "move along, nothing to explain here" too quickly, we might pass by some important discovery.
Apart from this, I have a genuine question (I am a physicist from a different subfield, in case you had not noticed...). Is it really true that the critical density is just a value among others, and thus as probable or improbable as any? I have my doubts, because if you pick a random value of the density, it will yield either a closed or an open universe (with scale factor a(t) \propto t). Only the critical case yields a(t) \propto t^(2/3). In that sense, it seems to me that the critical universe really has zero probability.
In the language of statistical physics, I'm thinking of the 3 possible macro-states: [closed, open, critical] that you can construct from the micro-states (defined by the value of the density \rho). Now, the macro-states [closed] and [open] correspond to an infinite number of micro-states (i.e., densities), whereas the macro-state [critical] corresponds to only ONE micro-state. Doesn't this make it special?
In other words: the critical DENSITY (micro-state) is as probable as any other, but the critical UNIVERSE (macro-state) is not.
Sorry for the long comment! I'm really curious to hear your opinion on the second part.
Well thanks for agreeing that GR does not explain the coincidence which is what I said to begin with.
Regarding curvature, in the language of statistical physics, if you want to speak of probabilities you had better have a probability distribution and a space over which it's defined. Without that, quite plausibly the probability that there is any universe is zero, so excuse me for not finding such considerations all that enlightening.
opamanfred, above I've posted links to four or so papers, in the leading cosmology journals, which investigate this coincidence problem in relation to the flatness problem. It is not a puzzle anymore. We won't make any progress if people ignore papers (which have not been rebutted) in the leading journals in the field.
Actually, I think few people have actually personally investigated this in any detail; they read somewhere "there is a flatness problem" or "there is a coincidence problem" and repeat it, without actually investigating it themselves nor checking to see if anything has happened in the literature in the last 35 or 40 years.
While I'm at it, let me also mention that the cosmological-constant problem is not really a problem either, citing a paper by Bianchi and Rovelli (at least the latter sometimes reads this blog). Yes, this is "just" on the arXiv, but it is an extended version which appeared in Nature.
Thanks for directing me to this post. In the article I read about this issue I'd gotten the impression the measurement discrepancy problem became known only recently.