[Image caption: An oscillator too.]
A lot of people asked for my opinion about a paper by Wang, Zhu, and Unruh that recently got published in Physical Review D, one of the top journals in the field.
How the huge energy of quantum vacuum gravitates to drive the slow accelerating expansion of the Universe
Qingdi Wang, Zhen Zhu, William G. Unruh
Phys. Rev. D 95, 103504 (2017)
Following a press release from UBC, the paper has attracted quite some attention in the pop-science media, which is remarkable for such a long and technically heavy work. My summary of the coverage so far is “bla-bla-bla parametric resonance.”
I tried to ignore the media buzz a) because it’s a long paper, b) because it’s a long paper, and c) because I’m not your public community debugger. I actually have my own research that I’m more interested in. Sulk.
But of course I eventually came around and read it. Because I toyed with a similar idea a while ago and it worked badly. So, clearly, these folks outscored me, and after some introspection I thought that instead of being annoyed by the attention they got, I should figure out why they succeeded where I failed.
Turns out that once you see through the math, the paper is not so difficult to understand. Here’s the quick summary.
One of the major problems in modern cosmology is that vacuum fluctuations of quantum fields should gravitate. Unfortunately, if one calculates the energy density and pressure contained in these fluctuations, the values are much too large to be compatible with the expansion history of the universe.
This vacuum energy gravitates the same way as a cosmological constant. Such a large cosmological constant, however, should lead to a collapse of the universe long before the formation of galactic structures. If you switch the sign, the universe doesn’t collapse but expands so rapidly that structures can’t form because they are ripped apart. Evidently, since we are here today, neither happened. Instead, we observe a small positive cosmological constant, and where did that come from? That’s the cosmological constant problem.
The problem can be solved by introducing an additional cosmological constant that cancels the vacuum energy from quantum field theory, leaving behind the observed value. This solution is both simple and consistent. It is, however, unpopular because it requires fine-tuning the additional term so that the two contributions almost – but not exactly – cancel. (I believe this argument to be flawed, but that’s a different story and shall be told another time.) Physicists therefore have tried for a long time to explain why the vacuum energy isn’t large or doesn’t couple to gravity as expected.
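In textbook form (my notation, not the paper’s), the fine-tuning in this solution looks like this:

```latex
% Observed cosmological constant as the sum of a bare term and the
% quantum field theory vacuum contribution (schematic, my notation):
\Lambda_{\rm obs} \;=\; \Lambda_{\rm bare} + \Lambda_{\rm vac},
\qquad
\rho_{\rm vac} \sim \Lambda_{\rm cutoff}^{4} .
% For a Planck-scale cutoff, \Lambda_{\rm bare} must cancel \Lambda_{\rm vac}
% to roughly 120 decimal places to leave the tiny observed value.
```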
Strictly speaking, however, the vacuum energy density is not constant, but – as you expect of fluctuations – it fluctuates. It is merely the average value that acts like a cosmological constant, but the local value should change rapidly both in space and in time. (These fluctuations are why I’ve never bought the “degravitation” idea according to which the vacuum energy decouples because gravity has a built-in high-pass filter. In that case, you could decouple a cosmological constant, but you’d still be stuck with the high-frequency fluctuations.)
In the new paper, the authors make the audacious attempt to calculate how gravity reacts to the fluctuations of the vacuum energy. I say it’s audacious because this is not a weak-field approximation and solving the equations for gravity without a weak-field approximation and without symmetry assumptions (as you would have for the homogeneous and isotropic case) is hard, really hard, even numerically.
The vacuum fluctuations are dominated by very high frequencies corresponding to a usually rather arbitrarily chosen ‘cutoff’ – denoted Λ – where the effective theory for the fluctuations should break down. One commonly assumes that this frequency roughly corresponds to the Planck mass, mp. The key to understanding the new paper is that the authors do not assume this cutoff, Λ, to be at the Planck mass, but at a much higher energy, Λ >> mp.
As they demonstrate in the paper, massaged into a suitable form, one of the field equations for gravity takes the form of an oscillator equation with a time- and space-dependent coupling term. This means, essentially, space-time at each place has the properties of a driven oscillator.
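Schematically (my notation; the paper’s actual variables and coefficients differ), the relevant equation has the form of a parametrically driven oscillator:

```latex
% The local scale factor a(t,x) obeys an oscillator equation whose
% frequency term fluctuates in space and time (schematic):
\ddot a(t,\vec{x}) \;+\; \Omega^{2}(t,\vec{x})\, a(t,\vec{x}) \;=\; 0 ,
% where \Omega^2 is set by the local vacuum energy density.
```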
The important observation that solves the cosmological constant problem is then that the typical resonance frequency of this oscillator is Λ²/mp, which is by assumption much larger than the main frequency of the fluctuations the oscillator is driven by, which is Λ. This means that space-time resonates with the frequency of the vacuum fluctuations – leading to an exponential expansion like that from a cosmological constant – but it resonates only with higher harmonics, so that the resonance is very weak.
The result is that the amplitude of the oscillations grows exponentially, but it grows slowly. The effective cosmological constant they get by averaging over space is therefore not, as one would naively expect, Λ, but (omitting factors that are hopefully of order one) Λ·exp(−Λ²/mp). One hence uses a trick quite common in high-energy physics, that one can create a large hierarchy of numbers by having a small hierarchy of numbers in an exponent.
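To make the de-tuned resonance mechanism concrete, here is a toy numerical sketch. It integrates a simple Mathieu-type oscillator, not the paper’s actual equation, and all parameter values (the coupling eps, the frequencies, the integration time) are illustrative choices of mine. The point: when the oscillator’s eigenfrequency is half the driving frequency, the amplitude grows exponentially; when the eigenfrequency is far above the drive, the response stays bounded.

```python
import math

def simulate(omega0, omega_drive, eps=0.2, t_end=300.0, dt=0.002):
    """Integrate x'' + omega0^2 * (1 + eps*cos(omega_drive*t)) * x = 0
    with a classic RK4 stepper, starting from x=1, x'=0.
    Returns the maximum |x| reached up to t_end."""
    def acc(t, x):
        # restoring force with a parametrically modulated frequency
        return -omega0**2 * (1.0 + eps * math.cos(omega_drive * t)) * x

    x, v, t = 1.0, 0.0, 0.0
    max_amp = abs(x)
    for _ in range(int(t_end / dt)):
        # RK4 for the first-order system (x' = v, v' = acc)
        k1x, k1v = v, acc(t, x)
        k2x, k2v = v + 0.5*dt*k1v, acc(t + 0.5*dt, x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, acc(t + 0.5*dt, x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v,      acc(t + dt,      x + dt*k3x)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6.0
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        t += dt
        max_amp = max(max_amp, abs(x))
    return max_amp

# Same driving frequency in both runs; only the eigenfrequency differs.
resonant = simulate(omega0=0.5, omega_drive=1.0)  # drive at twice the eigenfrequency
detuned  = simulate(omega0=5.0, omega_drive=1.0)  # eigenfrequency far above the drive

print(f"resonant max amplitude: {resonant:.1f}")
print(f"detuned  max amplitude: {detuned:.2f}")
```

In this toy run the resonant case grows by orders of magnitude while the de-tuned case stays near its initial amplitude, which is the qualitative effect the paper exploits with Λ²/mp >> Λ.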
In conclusion, by pushing the cutoff above the Planck mass, they suppress the resonance and slow down the resulting acceleration.
But I know you didn’t come for the nice words, so here’s the main course. The idea has several problems. Let me start with the most basic one, which is also the reason I once discarded a (related but somewhat different) project. It’s that their solution doesn’t actually solve the field equations of gravity.
It’s not difficult to see. Forget all the stuff about parametric resonance for a moment. Their result doesn’t solve the field equations if you set all the fluctuations to zero, so that you get back the case with a cosmological constant. That’s because if you integrate the second Friedmann equation for a negative cosmological constant, you can only solve the first Friedmann equation if you have negative curvature. You then get Anti-de Sitter space. They have not introduced a curvature term, hence the first Friedmann equation just doesn’t have a (real-valued) solution.
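To spell out the obstruction, here are the vacuum Friedmann equations with a cosmological constant (standard form, my notation, units with 8πG = c = 1):

```latex
% First and second Friedmann equations for vacuum with cosmological
% constant \Lambda and spatial curvature k:
\left(\frac{\dot a}{a}\right)^{2} \;=\; \frac{\Lambda}{3} - \frac{k}{a^{2}},
\qquad
\frac{\ddot a}{a} \;=\; \frac{\Lambda}{3} .
% For \Lambda < 0 the second equation is a harmonic oscillator for a(t),
% but the first has real solutions only if k < 0 (Anti-de Sitter space).
```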
Now, if you turn back on the fluctuations, their solution should reduce to the homogeneous and isotropic case on short distances and short times, but it doesn’t. It would take a very good reason for why that isn’t so, and no reason is given in the paper. It might be possible, but I don’t see how.
I further find it perplexing that they rest their argument on results that were derived in the literature on parametric resonance under the assumption that the equations are linear, so that solutions can be combined. General relativity, however, is non-linear. Therefore, one generally isn’t free to combine solutions arbitrarily.
So far that’s not very convincing. To make matters worse, if you don’t have homogeneity, you have even more equations that come from the position-dependence and they don’t solve these equations either. Let me add, however, that this doesn’t worry me all that much because I think it might be possible to deal with it by exploiting the stochastic properties of the local oscillators (which are homogeneous again, in some sense).
Another troublesome feature of their idea is that the scale-factor of the oscillating space-time crosses zero in each cycle so that the space-time volume also goes to zero and the metric structure breaks down. I have no idea what that even means. I’d be willing to ignore this issue if the rest was working fine, but seeing that it doesn’t, it just adds to my misgivings.
The other major problem with their approach is that the limit they work in doesn’t make sense to begin with. They are using classical gravity coupled to the expectation values of the quantum field theory, a mixture known as ‘semi-classical gravity’ in which gravity is not quantized. This approximation, however, is known to break down when the fluctuations in the energy-momentum tensor get large compared to its absolute value, which is the very case they study.
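For reference (standard textbook form, my notation), the semiclassical setup and the condition for its validity read:

```latex
% Semiclassical Einstein equation: classical geometry sourced by the
% expectation value of the quantum stress-energy tensor,
G_{\mu\nu} \;=\; 8\pi G \,\langle \hat T_{\mu\nu} \rangle .
% The approximation requires small relative fluctuations,
\langle \hat T_{\mu\nu}\hat T_{\alpha\beta}\rangle
 - \langle \hat T_{\mu\nu}\rangle\langle \hat T_{\alpha\beta}\rangle
\;\ll\;
\langle \hat T_{\mu\nu}\rangle\langle \hat T_{\alpha\beta}\rangle ,
% which is the opposite of the regime the paper considers.
```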
In conclusion, “bla-bla-bla parametric resonance” is a pretty accurate summary.
How serious are these problems? Is there something in the paper that might be interesting after all?
Maybe. But the assumption (see below Eq. (42)) that the fields that source the fluctuations satisfy normal energy conditions is, I believe, a non-starter if you want to get an exponential expansion. Even if you introduce a curvature term so that you can solve the equations, I can’t for the hell of it see how you average over locally approximately Anti-de Sitter spaces to get an approximately de Sitter space. You could of course just flip the sign, but then the second Friedmann equation no longer describes an oscillator.
Maybe allowing complex-valued solutions is a way out. Complex numbers are great. Unfortunately, nature’s a bitch and it seems we don’t live in a complex manifold. Hence, you’d then have to find a way to get rid of the imaginary numbers again. In any case, that’s not discussed in the paper either.
I admit that the idea of using a de-tuned parametric resonance to decouple vacuum fluctuations and limit their impact on the expansion of the universe is nice. Maybe I just lack vision and further work will solve the above-mentioned problems. More generally, I think numerically solving the field equations with stochastic initial conditions is of general interest, and it would be great if their paper inspires follow-up studies. So, give it ten years, and then ask me again. Maybe something will have come out of it.
In other news, I have also written a paper that explains the cosmological constant, and I have not only solved the equations that I derived, I also wrote a Maple worksheet that you can download to check the calculation for yourself. The paper was just accepted for publication in PRD.
As far as my self-reflection is concerned, I concluded I might be too ambitious. It’s much easier to solve equations if you don’t actually solve them.
I gratefully acknowledge helpful conversation with two of this paper’s authors who have been very, very patient with me. Sorry I didn’t have anything nicer to say.
Thank you, that does help clear up some questions.
As one of the people asking for your input, thank you. So many complain about popular science journalism, but without actual experts commenting on these papers it is easy for us lay folk to be led to believe a great breakthrough has happened. Sober analysis by experts such as yourself goes a long way toward combating this. Again, thank you.
You are great, Dr. H. Diplomatic and thorough. Nice review here, thank you, and congratulations on your own paper.
Verlinde got 5 million in total to get that equation. You did it better by solving it. Perhaps you should also be awarded a bit more! :-)
Great explanations have the advantage that people get them easily - and the disadvantage that people may not realise they are fiddling with the data to get to the explanation. Usually observation can be used to test the different approaches to see which one is matched best by data, but in highly theoretical fields, even the choice of that data may be a problem.
I think your own approach to let go of the interpretation and re-derive the math is the better one here. (And such an approach might be successful in a few other spots as well)
I was curious concerning your Maple worksheet. Checking it out, I noted that you are using the old tensor package. Are you aware of the much more powerful GR capabilities of the Physics package? See, for instance, http://www.maplesoft.com/support/help/Maple/view.aspx?path=physics.
Yes, I am aware of it because the damn software tells me about this every single time I execute my worksheet. So thanks for adding to my pain. I have used it a few times, but it tends to get confused about coordinate transformations in rather non-transparent ways that have cost me a lot of time. Hence, I've been sticking with the old package.
I cannot say that I do not understand your sentiment, for the learning curve of the Physics package is quite steep, something I am fighting with as well.
Without any desire to intrude, I think, though, that you miss out on some nice software. For instance, most of your first three execution groups for the Schwarzschild case can be initialized/calculated by the following few lines:
signature = `-+++`,
metric = Schwarzschild
The Christoffel symbols, the Riemann tensor, the Ricci tensor, etc., are all calculated on the fly when the above few lines are loaded. All Christoffel symbols of the first and second kind, respectively, can be accessed as Christoffel[mu,nu,rho,array] and Christoffel[~mu,nu,rho,array] (note the tilde). A specific component is readily accessed as, say, Christoffel[1,2,1]. Similar comments apply, of course, to all the other tensors.
Analogously, most of the first three execution groups of the FRW case may be initialized/calculated by the following few lines:
ga := exp(2*t*sqrt(Lambda)):
signature = `-+++`,
metric = -dt^2 + ga*(dr^2 + r^2*dtheta^2 + r^2*sin(theta)^2*dphi^2)
As I said, I have used the package and I know that. But to me the points you mention are not an advantage. I often want to change one or the other component of the metric, so loading a default is pointless. And I can access the Christoffels just fine using this package. In any case, as I said, I am aware of the package. Thanks for mentioning. Honestly, your comment just makes me think I shouldn't share my worksheets.
"Honestly, your comment just makes me think I shouldn't share my worksheets."
Perhaps it was imprudent of me to pry in your worksheet. If that is the case, then I apologize.
It's there for the benefit of the reader.
Worksheet sharing needs to become the norm in science, IMO.
E.g., see slide 12 onwards here:
"Data and Code Sharing in Bioinformatics: From Bermuda to Toronto to Your Laptop
Victoria Stodden Department of Statistics, Columbia University UC Berkeley Statistics and Genomics Seminar March 13, 2014"
Thanks for looking at this article. I like the paper more than you do, it seems.
You say: They are using classical gravity coupled to the expectation values of the quantum field theory, a mixture known as ‘semi-classical gravity’ in which gravity is not quantized.
The paper states:
The key difference from the usual semiclassical gravity is that we go one more step—instead of assuming the semiclassical Einstein equation, where the curvature of the spacetime is sourced by the expectation value of the quantum field stress energy tensor, we also take the huge fluctuations of the stress energy tensor into account.
These two statements are at odds with one another it would seem.
They use the expectation value of the stress-energy plus fluctuations of the stress-energy, but of course the expectation value thereof, so that it's a classical source again. That is still semi-classical because gravity is still classical. It's not the 'usual semiclassical gravity,' as they write, because they use a different source. But in my opinion, the statement you quote is a funny way to say the limit is inconsistent. Either way you put it, gravity isn't quantized.