
Friday, June 30, 2017

To understand the foundations of physics, study numerology

Numbers speak.
Once upon a time, we had problems in the foundations of physics. Then we solved them. That was 40 years ago. Today we spend most of our time discussing non-problems.

Here is one of these non-problems. Did you know that the universe is spatially almost flat? There is a number in the cosmological concordance model called the “curvature parameter” that, according to current observation, has a value of 0.000 plus-minus 0.005.

Why is that a problem? I don’t know. But here is the story that cosmologists tell.

From the equations of General Relativity you can calculate the dynamics of the universe. This means you get relations between the values of observable quantities today and the values they must have had in the early universe.

The contribution of curvature to the dynamics, it turns out, increases relative to that of matter and radiation as the universe expands. This means for the curvature parameter to be smaller than 0.005 today, it must have been smaller than 10^-60 or so briefly after the Big Bang.
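
For those who want to see where that number comes from, here is a back-of-the-envelope sketch. The scalings are the standard ones – the curvature parameter grows roughly proportional to the scale factor during matter domination and proportional to its square during radiation domination – but the specific scale factors are illustrative round numbers, so the exact exponent shifts by a few depending on where you start the clock.

```python
# Rough estimate: how small must |Omega_k| have been early on for it to be
# below 0.005 today? Standard scalings: Omega_k grows ~ a in the matter era
# and ~ a^2 in the radiation era. All numbers are illustrative.

omega_k_today = 0.005    # observational upper bound on |Omega_k| today
a_equality    = 3e-4     # scale factor at matter-radiation equality (approx.)
a_early       = 1e-30    # scale factor at a very early time (roughly GUT-era
                         # temperatures; an illustrative choice)

omega_k_eq    = omega_k_today * a_equality                 # back through the matter era
omega_k_early = omega_k_eq * (a_early / a_equality) ** 2   # back through the radiation era

print(f"|Omega_k| at equality: {omega_k_eq:.1e}")     # ~1.5e-06
print(f"|Omega_k| early on:    {omega_k_early:.1e}")  # ~1.7e-59, the '1e-60 or so' ballpark
```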

That, so the story goes, is bad, because where would you get such a small number from?

Well, let me ask in return, where do we get any number from anyway? Why is 10^-60 any worse than, say, 1.778, or exp(67π)?

That the curvature must have had a small value in the early universe is called the “flatness problem,” and since it’s on Wikipedia it’s officially more real than me. And it’s an important problem. It’s important because it justifies the many attempts to solve it.

The presently most popular solution to the flatness problem is inflation – a rapid period of expansion briefly after the Big Bang. Because inflation decreases the relevance of curvature contributions dramatically – by something like 200 orders of magnitude or so – you no longer have to start with some tiny value. Instead, if you start with any curvature parameter smaller than 10^197, the value today will be compatible with observation.

Ah, you might say, but clearly there are more numbers smaller than 10^197 than there are numbers smaller than 10^-60, so isn’t that an improvement?

Unfortunately, no. There are infinitely many numbers in both cases. Besides that, it’s totally irrelevant. Whatever the curvature parameter, the probability to get that specific number is zero regardless of its value. So the argument is bunk. Logical mush. Plainly wrong. Why do I keep hearing it?

Worse, if you want to pick parameters for our theories according to a uniform probability distribution on the real axis, then all parameters would come out infinitely large with probability one. Sucks. Also, doesn’t describe observations*.

And there is another problem with that argument, namely, what probability distribution are we even talking about? Where did it come from? Certainly not from General Relativity because a theory can’t predict a distribution on its own theory space. More logical mush.

If you have trouble seeing the trouble, let me ask the question differently. Suppose we’d managed to measure the curvature parameter today to a precision of 60 digits after the decimal point. Yeah, it’s not going to happen, but bear with me. Now you’d have to explain all these 60 digits – but that is as fine-tuned as a zero followed by 60 zeroes would have been!

Here is a different example of this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.

Now consider for a moment that the mass of the Higgs was actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a finetuning problem. So, erm, what was the problem again?

The cosmological constant problem is another such confusion. If you don’t know how to calculate that constant – and we don’t, because we don’t have a theory for Planck scale physics – then it’s a free parameter. You go and measure it and that’s all there is to say about it.

And there are more numerological arguments in the foundations of physics, all of which are wrong, wrong, wrong for the same reasons. The unification of the gauge couplings. The so-called WIMP-miracle (RIP). The strong CP problem. All these are numerical coincidences that supposedly need an explanation. But you can’t speak about coincidence without quantifying a probability!

Do my colleagues deliberately lie when they claim these coincidences are problems, or do they actually believe what they say? I’m not sure what’s worse, but suspect most of them actually believe it.

Many of my readers like to jump to conclusions about my opinions. But you are not one of them. You and I, therefore, both know that I did not say that inflation is bunk. Rather I said that the most common arguments for inflation are bunk. There are good arguments for inflation, but that’s a different story and shall be told another time.

And since you are among the few who actually read what I wrote, you also understand I didn’t say the cosmological constant is not a problem. I just said its value isn’t the problem. What actually needs an explanation is why it doesn’t fluctuate. Which is what vacuum fluctuations should do, and what gives rise to what Niayesh called the cosmological non-constant problem.

Enlightened as you are, you would also never think I said we shouldn’t try to explain the value of some parameter. It is always good to look for better explanations for the assumptions underlying current theories – where by “better” I mean either simpler or able to explain more.

No, what draws my ire is that most of the explanations my colleagues put forward aren’t any better than just fixing a parameter through measurement – they are worse. The reason is that the problem they are trying to solve – the smallness of some numbers – isn’t a problem. It’s merely a property they perceive as inelegant.

I therefore have a lot of sympathy for philosopher Tim Maudlin who recently complained that “attention to conceptual clarity (as opposed to calculational technique) is not part of the physics curriculum” which results in inevitable confusion – not to mention waste of time.

In response, a pseudoanonymous commenter remarked that a discussion between a physicist and a philosopher of physics is “like a debate between an experienced car mechanic and someone who has read (or perhaps skimmed) a book about cars.”

Trouble is, in the foundations of physics today most of the car mechanics are repairing cars that run just fine – and then bill you for it.

I am not opposed to using aesthetic arguments as research motivations. We all have to get our inspiration from somewhere. But I do think it’s bad science to pretend numerological arguments are anything more than appeals to beauty. That very small or very large numbers require an explanation is a belief – and it’s a belief that has been adopted by the vast majority of the community. That shouldn’t happen in any scientific discipline.

As a consequence, high energy physics and cosmology are now populated with people who don’t understand that finetuning arguments have no logical basis. The flatness “problem” is preached in textbooks. The naturalness “problem” is all over the literature. The cosmological constant “problem” is on every popular science page. And so the myths live on.

If you break down the numbers, it’s me against ten-thousand of the most intelligent people on the planet. Am I crazy? I surely am.


*Though that’s exactly what happens with bare values.

Tuesday, June 20, 2017

If tensions in cosmological data are not measurement problems, they probably mean dark energy changes

Galaxy pumpkin.
Src: The Swell Designer
According to physics, the universe and everything in it can be explained by but a handful of equations. They’re difficult equations, all right, but their simplest feature is also the most mysterious one. The equations contain a few dozen parameters that are – for all we presently know – unchanging, and yet these numbers determine everything about the world we inhabit.

Physicists have spent much brain-power on the question where these numbers come from, whether they could have taken any other values than the ones we observe, and whether exploring their origin is even in the realm of science.

One of the key questions when it comes to the parameters is whether they are really constant, or whether they are time-dependent. If they vary, then their time-dependence would have to be determined by yet another equation, and that would change the whole story that we currently tell about our universe.

The best known of the fundamental parameters that dictate how the universe behaves is the cosmological constant. It is what causes the universe’s expansion to accelerate. The cosmological constant is usually assumed to be, well, constant. If it isn’t, it is more generally referred to as ‘dark energy.’ If our current theories for the cosmos are correct, our universe will expand forever into a cold and dark future.

The value of the cosmological constant is infamously the worst prediction ever made using quantum field theory; the math says it should be 120 orders of magnitude larger than what we observe. But that the cosmological constant has a small non-zero value is extremely well established by measurement, well enough that a Nobel Prize was awarded for its discovery in 2011.

The Nobel Prize winners Perlmutter, Schmidt, and Riess measured the expansion rate of the universe, encoded in the Hubble parameter, by looking at supernovae distributed over various distances. They concluded that the universe is not only expanding, but is expanding at an increasing rate – a behavior that can only be explained by a nonzero cosmological constant.

It is controversial, though, exactly how fast the expansion is today – that is, how large the current value of the Hubble constant, H0, is. There are different ways to measure this constant, and physicists have known for a few years that the different measurements give different results. This tension in the data is difficult to explain, and it has so far remained unresolved.

One way to determine the Hubble constant is by using the cosmic microwave background (CMB). The small temperature fluctuations in the CMB spectrum encode the distribution of plasma in the early universe and the changes of the radiation since. From fitting the spectrum with the parameters that determine the expansion of the universe, physicists get a value for the Hubble constant. The most accurate of such measurements is currently that from the Planck satellite.

Another way to determine the Hubble constant is to deduce the expansion of the universe from the redshift of the light from distant sources. This is the way the Nobel-Prize winners made their discovery, and the precision of this method has since been improved. These two ways to determine the Hubble constant give results that differ with a statistical significance of 3.4 σ. That’s a probability of less than one in a thousand to be due to random data fluctuations.
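
For illustration, here is how such a significance comes about, assuming Gaussian uncertainties and using roughly the values published around 2016; the numbers are approximate, and the exact significance depends on which analyses one compares.

```python
# Tension between two measurements of H0, assuming Gaussian errors.
# Values are approximately the 2016-era published ones and only illustrative.
from math import erf, sqrt

h0_cmb,   err_cmb   = 66.93, 0.62   # CMB fit (Planck), km/s/Mpc, approximate
h0_local, err_local = 73.24, 1.74   # local distance ladder, km/s/Mpc, approximate

# Difference in units of the combined uncertainty:
sigma = abs(h0_local - h0_cmb) / sqrt(err_cmb**2 + err_local**2)

# Two-sided probability that a fluctuation at least this large occurs by chance:
p_value = 1 - erf(sigma / sqrt(2))

print(f"tension: {sigma:.1f} sigma, chance probability ~ {p_value:.0e}")
# ~3.4 sigma, i.e. well below one in a thousand
```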

Various explanations for this have since been proposed. One possibility is that it’s a systematic error in the measurement, most likely in the CMB measurement from the Planck mission. There are reasons to be skeptical because the tension goes away when the finer structures (the large multipole moments) of the data are omitted. For many astrophysicists, this is an indicator that something’s amiss either with the Planck measurement or the data analysis.

Or maybe it’s a real effect. In this case, several modifications of the standard cosmological model have been put forward. They range from additional neutrinos to massive gravitons to changes in the cosmological constant.

That the cosmological constant changes from one place to the next is not an appealing option because this tends to screw up the CMB spectrum too much. But the currently most popular explanation for the data tension seems to be that the cosmological constant changes in time.

A group of researchers from Spain, for example, claims that they have a stunning 4.1 σ preference for a time-dependent cosmological constant over an actually constant one.

This claim seems to have been widely ignored, and indeed one should be cautious. They test for a very specific time-dependence, and their statistical analysis does not account for other parameterizations they might have previously tried. (The theoretical physicist’s variant of post-selection bias.)

Moreover, they fit their model not only to the two above mentioned datasets, but to a whole bunch of others at the same time. This makes it hard to tell why their model seems to work better. A couple of cosmologists who I asked why this group’s remarkable results have been ignored complained that the data analysis is opaque.

Be that as it may, just when I put the Spaniards’ paper away, I saw another paper that supported their claim with an entirely independent study based on weak gravitational lensing.

Weak gravitational lensing happens when a foreground galaxy distorts the images of farther away galaxies. The qualifier ‘weak’ sets this effect apart from strong lensing which is caused by massive nearby objects – such as black holes – and deforms point-like sources to partial rings. Weak gravitational lensing, on the other hand, is not as easily recognizable and must be inferred from the statistical distribution of the shapes of galaxies.

The Kilo Degree Survey (KiDS) has gathered and analyzed weak lensing data from about 15 million distant galaxies. While their measurements are not sensitive to the expansion of the universe, they are sensitive to the density of dark energy, which affects the way light travels from the galaxies towards us. This density is encoded in a cosmological parameter imaginatively named σ8. Their data, too, is in conflict with the CMB data from the Planck satellite.

The members of the KiDS collaboration have tried out which changes to the cosmological standard model work best to ease the tension in the data. Intriguingly, it turns out that ahead of all explanations the one that works best is that the cosmological constant changes with time. The change is such that the effects of accelerated expansion are becoming more pronounced, not less.

In summary, it seems increasingly unlikely the tension in the cosmological data is due to chance. Cosmologists are cautious and most of them bet on a systematic problem with the Planck data. However, if the Planck measurement receives independent confirmation, the next best bet is on time-dependent dark energy. It wouldn’t make our future any brighter though. The universe would still expand forever into cold darkness.


[This article previously appeared on Starts With A Bang.]

Update June 21: Corrected several sentences to address comments below.

Wednesday, May 31, 2017

Does parametric resonance solve the cosmological constant problem?

An oscillator too.
Source: Giphy.
Tl;dr: Ask me again in ten years.

A lot of people asked for my opinion about a paper by Wang, Zhu, and Unruh that recently got published in Physical Review D, one of the top journals in the field.


Following a press-release from UBC, the paper has attracted quite some attention in the pop science media which is remarkable for such a long and technically heavy work. My summary of the coverage so far is “bla-bla-bla parametric resonance.”

I tried to ignore the media buzz a) because it’s a long paper, b) because it’s a long paper, and c) because I’m not your public community debugger. I actually have my own research that I’m more interested in. Sulk.

But of course I eventually came around and read it. Because I toyed with a similar idea a while ago and it worked badly. So, clearly, these folks outscored me, and after some introspection I thought that instead of being annoyed by the attention they got, I should figure out why they succeeded where I failed.

Turns out that once you see through the math, the paper is not so difficult to understand. Here’s the quick summary.

One of the major problems in modern cosmology is that vacuum fluctuations of quantum fields should gravitate. Unfortunately, if one calculates the energy density and pressure contained in these fluctuations, the values are much too large to be compatible with the expansion history of the universe.

This vacuum energy gravitates the same way as the cosmological constant. Such a large cosmological constant, however, should lead to a collapse of the universe long before the formation of galactic structures. If you switch the sign, the universe doesn’t collapse but expands so rapidly that structures can’t form because they are ripped apart. Evidently, since we are here today, that didn’t happen. Instead, we observe a small positive cosmological constant and where did that come from? That’s the cosmological constant problem.
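
For orientation, the mismatch comes from a crude piece of dimensional analysis – energy densities scale as the fourth power of the relevant energy scale – with a Planck-scale cutoff put in by hand. The exact number of orders of magnitude depends on conventions, which is why quotes range from about 120 to 123.

```python
# Naive vacuum energy density with a Planck-scale cutoff, compared to the
# observed dark-energy density. Crude dimensional analysis in natural units.
import math

E_planck = 1.22e19    # Planck energy in GeV
E_lambda = 2.3e-12    # observed dark-energy scale in GeV (rho_Lambda^(1/4) ~ 2.3 meV)

ratio = (E_planck / E_lambda) ** 4   # energy densities scale as (energy scale)^4
print(f"naive estimate / observed value ~ 10^{math.log10(ratio):.0f}")   # ~10^123
```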

The problem can be solved by introducing an additional cosmological constant that cancels the vacuum energy from quantum field theory, leaving behind the observed value. This solution is both simple and consistent. It is, however, unpopular because it requires fine-tuning the additional term so that the two contributions almost – but not exactly – cancel. (I believe this argument to be flawed, but that’s a different story and shall be told another time.) Physicists therefore have tried for a long time to explain why the vacuum energy isn’t large or doesn’t couple to gravity as expected.

Strictly speaking, however, the vacuum energy density is not constant, but – as you expect of fluctuations – it fluctuates. It is merely the average value that acts like a cosmological constant, but the local value should change rapidly both in space and in time. (These fluctuations are why I’ve never bought the “degravitation” idea according to which the vacuum energy decouples because gravity has a built-in high-pass filter. In that case, you could decouple a cosmological constant, but you’d still be stuck with the high-frequency fluctuations.)

In the new paper, the authors make the audacious attempt to calculate how gravity reacts to the fluctuations of the vacuum energy. I say it’s audacious because this is not a weak-field approximation and solving the equations for gravity without a weak-field approximation and without symmetry assumptions (as you would have for the homogeneous and isotropic case) is hard, really hard, even numerically.

The vacuum fluctuations are dominated by very high frequencies corresponding to a usually rather arbitrarily chosen ‘cutoff’ – denoted Λ – where the effective theory for the fluctuations should break down. One commonly assumes that this frequency roughly corresponds to the Planck mass, mp. The key to understanding the new paper is that the authors do not assume this cutoff, Λ, to be at the Planck mass, but at a much higher energy, Λ >> mp.

As they demonstrate in the paper, massaged into a suitable form, one of the field equations for gravity takes the form of an oscillator equation with a time- and space-dependent coupling term. This means, essentially, space-time at each place has the properties of a driven oscillator.

The important observation that solves the cosmological constant problem is then that the typical resonance frequency of this oscillator is Λ^2/mp which is by assumption much larger than the main frequency of fluctuations the oscillator is driven by, which is Λ. This means that space-time resonates with the frequency of the vacuum fluctuations – leading to an exponential expansion like that from a cosmological constant – but it resonates only with higher harmonics, so that the resonance is very weak.

The result is that the amplitude of the oscillations grows exponentially, but it grows slowly. The effective cosmological constant they get by averaging over space is therefore not, as one would naively expect, Λ, but (omitting factors that are hopefully of order one) Λ exp(-Λ^2/mp). One hence uses a trick quite common in high-energy physics, that one can create a large hierarchy of numbers by having a small hierarchy of numbers in an exponent.
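
If you want to play with the mechanism, here is a toy version of parametric resonance – just a classical oscillator with a periodically modulated frequency, not the actual field equations of the paper. On resonance the amplitude grows exponentially; with a strongly detuned drive it barely responds.

```python
# Toy parametric oscillator: x'' + w0^2 (1 + eps*cos(wd*t)) x = 0.
# Near wd = 2*w0 the amplitude grows exponentially (parametric resonance);
# far from resonance the growth is strongly suppressed. This is only a
# cartoon of the mechanism discussed in the text.
import numpy as np
from scipy.integrate import solve_ivp

def max_amplitude(w0, wd, eps=0.2, t_max=200.0):
    rhs = lambda t, y: [y[1], -w0**2 * (1.0 + eps * np.cos(wd * t)) * y[0]]
    sol = solve_ivp(rhs, (0.0, t_max), [1.0, 0.0], max_step=0.01)
    return np.max(np.abs(sol.y[0]))   # largest amplitude reached

w0 = 1.0
print("resonant drive (wd = 2 w0):  ", max_amplitude(w0, 2.0 * w0))   # grows by orders of magnitude
print("detuned drive  (wd = 0.1 w0):", max_amplitude(w0, 0.1 * w0))   # stays of order one
```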

In conclusion, by pushing the cutoff above the Planck mass, they suppress the resonance and slow down the resulting acceleration.

Neat, yes.

But I know you didn’t come for the nice words, so here’s the main course. The idea has several problems. Let me start with the most basic one, which is also the reason I once discarded a (related but somewhat different) project. It’s that their solution doesn’t actually solve the field equations of gravity.

It’s not difficult to see. Forget all the stuff about parametric resonance for a moment. Their result doesn’t solve the field equations if you set all the fluctuations to zero, so that you get back the case with a cosmological constant. That’s because if you integrate the second Friedmann-equation for a negative cosmological constant you can only solve the first Friedmann-equation if you have negative curvature. You then get Anti-de Sitter space. They have not introduced a curvature term, hence the first Friedmann-equation just doesn’t have a (real valued) solution.
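
For reference, these are the two Friedmann equations in the form relevant for this argument, in standard textbook notation with c = 1:

```latex
% First and second Friedmann equation:
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2} + \frac{\Lambda}{3},
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) + \frac{\Lambda}{3}.
% With the fluctuations switched off (rho = p = 0) and Lambda < 0, the first
% equation has a real solution only if k < 0, i.e. for negative curvature:
% that is Anti-de Sitter space.
```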

Now, if you turn back on the fluctuations, their solution should reduce to the homogeneous and isotropic case on short distances and short times, but it doesn’t. It would take a very good reason for why that isn’t so, and no reason is given in the paper. It might be possible, but I don’t see how.

I further find it perplexing that they rest their argument on results that were derived in the literature for parametric resonance on the assumption that solutions are linearly independent. General relativity, however, is non-linear. Therefore, one generally isn’t free to combine solutions arbitrarily.

So far that’s not very convincing. To make matters worse, if you don’t have homogeneity, you have even more equations that come from the position-dependence and they don’t solve these equations either. Let me add, however, that this doesn’t worry me all that much because I think it might be possible to deal with it by exploiting the stochastic properties of the local oscillators (which are homogeneous again, in some sense).

Another troublesome feature of their idea is that the scale-factor of the oscillating space-time crosses zero in each cycle so that the space-time volume also goes to zero and the metric structure breaks down. I have no idea what that even means. I’d be willing to ignore this issue if the rest was working fine, but seeing that it doesn’t, it just adds to my misgivings.

The other major problem with their approach is that the limit they work in doesn’t make sense to begin with. They are using classical gravity coupled to the expectation values of the quantum field theory, a mixture known as ‘semi-classical gravity’ in which gravity is not quantized. This approximation, however, is known to break down when the fluctuations in the energy-momentum tensor get large compared to its absolute value, which is the very case they study.

In conclusion, “bla-bla-bla parametric resonance” is a pretty accurate summary.

How serious are these problems? Is there something in the paper that might be interesting after all?

Maybe. But the assumption (see below Eq (42)) that the fields that source the fluctuations satisfy normal energy conditions is, I believe, a non-starter if you want to get an exponential expansion. Even if you introduce a curvature term so that you can solve the equations, I can’t for the hell of it see how you average over locally approximately Anti-de Sitter spaces to get an approximate de Sitter space. You could of course just flip the sign, but then the second Friedmann equation no longer describes an oscillator.

Maybe allowing complex-valued solutions is a way out. Complex numbers are great. Unfortunately, nature’s a bitch and it seems we don’t live in a complex manifold. Hence, you’d then have to find a way to get rid of the imaginary numbers again. In any case, that’s not discussed in the paper either.

I admit that the idea of using a de-tuned parametric resonance to decouple vacuum fluctuations and limit their impact on the expansion of the universe is nice. Maybe I just lack vision and further work will solve the above mentioned problems. More generally, I think numerically solving the field equations with stochastic initial conditions is of general interest and it would be great if their paper inspires follow-up studies. So, give it ten years, and then ask me again. Maybe something will have come out of it.

In other news, I have also written a paper that explains the cosmological constant, and I have not only solved the equations that I derived, I also wrote a Maple worksheet that you can download to check the calculation for yourself. The paper was just accepted for publication in PRD.

For what my self-reflection is concerned, I concluded I might be too ambitious. It’s much easier to solve equations if you don’t actually solve them.


I gratefully acknowledge helpful conversation with two of this paper’s authors who have been very, very patient with me. Sorry I didn’t have anything nicer to say.

Monday, April 17, 2017

Book review: “A Big Bang in a Little Room” by Zeeya Merali

A Big Bang in a Little Room: The Quest to Create New Universes
Zeeya Merali
Basic Books (February 14, 2017)

When I heard that Zeeya Merali had written a book, I expected something like a Worst Of New Scientist compilation. But A Big Bang in A Little Room turned out to be both interesting and enjoyable, if maybe not for the reason the author intended.

If you follow the popular science news on physics foundations, you almost certainly have come across Zeeya’s writing before. She was the one to break news about the surfer dude’s theory of everything and brought black hole echoes to Nature News. She also does much of the outreach work for the Foundational Questions Institute (FQXi).

Judged by the comments I get when sharing Zeeya’s articles, for some of my colleagues she embodies the decline of science journalism to bottomless speculation. Personally, I think what’s decaying to speculation is my colleagues’ research, and if so then Nature’s readership deserves to know about this. But, yes, Zeeya is frequently to be found on the wild side of physics. So, a book about creating universes in the lab seems in line.

To get it out of the way, the idea that we might grow a baby universe has, to date, no scientific basis. It’s an interesting speculation but the papers that have been written about it are little more than math-enriched fiction. To create a universe, we’d first have to understand how our universe began, and we don’t. The theories necessary for this – inflation and quantum gravity – are not anywhere close to being settled. Nobody has a clue how to create a universe, and for what I am concerned that’s really all there is to say about it.

But baby universes are a great excuse to feed real science to the reader, and if that’s the sugar-coating to get medicine down, I approve. And indeed, Zeeya’s book is quite nutritious: From entanglement to general relativity, structure formation, and inflation, to loop quantum cosmology and string theory, it’s all part of her story.

The narrative of A Big Bang in A Little Room starts with the question whether there might be a message encoded in the cosmic microwave background, and then moves on to bubble- and baby-universes, the multiverse, mini-black holes at the LHC, and eventually – my pet peeve! – the hypothesis that we might be living in a computer simulation.

Thankfully, on the latter issue Zeeya spoke to Seth Lloyd who – like me – doesn’t buy Bostrom’s estimate that we likely live in a computer simulation:
“Arguments such as Bostrom’s that hinge on the assumption that in the future physically evolved cosmoses will be outnumbered by a plethora of simulated universes, making it vastly more likely that we are artificial intelligences rather than biological beings, also fail to take into account the immense resources needed to create even basic simulations, says Lloyd.”
So, I’ve found nothing to complain even about the simulation argument!

Zeeya has a PhD in physics, cosmology more specifically, so she has all the necessary background to understand the topics she writes about. Her explanations are both elegant and, for all I can tell, almost entirely correct. I’d have some quibbles on one or the other point, eg her explanation of entanglement doesn’t make clear what’s the difference between classical and quantum correlations, but then it doesn’t matter for the rest of the book. Zeeya is also careful to state that neither inflation nor string theory are established theories, and the book is both well-referenced and has useful endnotes for the reader who wants more details.

Overall, however, Zeeya doesn’t offer the reader much guidance, but rather presents one thought-provoking idea after the other – like that there are infinitely many copies of each of us in the multiverse, making every possible decision – and then hurries on.

Furthermore, between the chapters there are various loose ends that she never ties together. For example, if the creator of our universe could write a message into the cosmic microwave background, then why do we need inflation to solve the horizon problem? How do baby universes fit together with string theory, or AdS/CFT more specifically, and why was the idea mostly abandoned? It’s funny also that Lee Smolin’s cosmological natural selection – an idea according to which we should live in a universe that amply procreates and is hence hugely supportive of the whole universe-creation issue – is mentioned merely as an aside, and when it comes to loop quantum gravity, both Smolin and Rovelli are bypassed as Ashtekar’s “collaborators” (which I’m sure the two gentlemen will just love to hear).

For what I am concerned, the most interesting aspect of Zeeya’s book is that she spoke to various scientists about their creation beliefs: Anthony Zee, Stephen Hsu, Abhay Ashtekar, Joe Polchinski, Alan Guth, Eduardo Guendelman, Alexander Vilenkin, Don Page, Greg Landsberg, and Seth Lloyd are familiar names that appear on the pages. (The majority of these people are FQXi members.)

What we believe to be true is a topic physicists rarely talk about, and I think this is unfortunate. We all believe in something – most scientists, for example, believe in an external reality – but fessing up to the limits of our rationality isn’t something we like to get caught with. For this reason I find Zeeya’s book very valuable.

About the value of discussing baby universes I’m not so sure. As Zeeya notes towards the end of her book, of the physicists she spoke to, besides Don Page no one seems to have thought about the ethics of creating new universes. Let me offer a simple explanation for this: It’s that besides Page no one believes the idea has scientific merit.

In summary: It’s a great book if you don’t take the idea of universe-creation too seriously. I liked the book as much as you can possibly like a book whose topic you think is nonsense.

[Disclaimer: Free review copy.]

Friday, March 31, 2017

Book rant: “Universal” by Brian Cox and Jeff Forshaw

Universal: A Guide to the Cosmos
Brian Cox and Jeff Forshaw
Da Capo Press (March 28, 2017)
(UK Edition, Allen Lane (22 Sept. 2016))

I was meant to love this book.

In “Universal” Cox and Forshaw take on astrophysics and cosmology, but rather than using the well-trodden historic path, they offer do-it-yourself instructions.

The first chapters of the book start with every-day observations and simple calculations, by help of which the reader can estimate eg the radius of Earth and its mass, or – if you let a backyard telescope with a 300mm lens and equatorial mount count as every-day items – the distance to other planets in the solar system.

Then, the authors move on to distances beyond the solar system. With that, self-made observations understandably fade out, but are replaced with publicly available data. Cox and Forshaw continue to explain the “cosmic distance ladder,” variable stars, supernovae, redshift, solar emission spectra, Hubble’s law, and the Hertzsprung-Russell diagram.

Set apart from the main text, the book has “boxes” (actually pages printed white on black) with details of the example calculations and the science behind them. The first half of the book reads quickly and fluidly and reminds me in style of school textbooks: They make an effort to illuminate the logic of scientific reasoning, with some historical asides, and concrete numbers. Along the way, Cox and Forshaw emphasize that the great power of science lies in the consistency of its explanations, and they highlight the necessity of taking into account uncertainty both in the data and in the theories.

The only thing I found wanting in the first half of the book is that they use the speed of light without explaining why it’s constant or where to get it from, even though that too could have been done with every-day items. But then maybe that’s explained in their first book (which I haven’t read).

For me, the fascinating aspect of astrophysics and cosmology is that it connects the physics of the very small scales with that of the very large scales, and allows us to extrapolate both into the distant past and future of our universe. Even though I’m familiar with the research, it still amazes me just how much information about the universe we have been able to extract from the data in the last two decades.

So, yes, I was meant to love this book. I would have been an easy catch.

Then the book continues to explain the dark matter hypothesis as a settled fact, without so much as mentioning any shortcomings of LambdaCDM, and not a single word on modified gravity. The Bullet Cluster is, once again, used as a shut-up argument – a gross misrepresentation of the actual situation, which I previously complained about here.

Inflation gets the same treatment: It’s presented as if it’s a generally accepted model, with no discussion given to the problem of under-determination, or whether inflation actually solves problems that need a solution (or solves the problems period).

To round things off, the authors close the final chapter with some words on eternal inflation and bubble universes, making a vague reference to string theory (because that’s also got something to do with multiverses you see), and then they suggest this might mean we live in a computer simulation:

“Today, the cosmologists responsible for those simulations are hampered by insufficient computing power, which means that they can only produce a small number of simulations, each with different values for a few key parameters, like the amount of dark matter and the nature of the primordial perturbations delivered at the end of inflation. But imagine that there are super-cosmologists who know the String Theory that describes the inflationary Multiverse. Imagine that they run a simulation in their mighty computers – would the simulated creatures living within one of the simulated bubble universes be able to tell that they were in a simulation of cosmic proportions?”
Wow. After all the talk about how important it is to keep track of uncertainty in scientific reasoning, this idea is thrown at the reader with little more than a sentence which mentions that, btw, “evidence for inflation” is “not yet absolutely compelling” and there is “no firm evidence for the validity of String Theory or the Multiverse.” But, hey, maybe we live in a computer simulation, how cool is that?

Worse than demonstrating slippery logic, their careless portrayal of speculative hypotheses as almost settled is dumb. Most of the readers who buy the book will have heard of modified gravity as dark matter’s competitor, and will know the controversies around inflation, string theory, and the multiverse: It’s been all over the popular science news for several years. That Cox and Forshaw don’t give space to discussing the pros and cons in a manner that at least pretends to be objective will merely convince the scientifically-minded reader that the authors can’t be trusted.

The last time I thought of Brian Cox – before receiving the review copy of this book – it was because a colleague confided to me that his wife thinks Brian is sexy. I managed to maneuver around the obviously implied question, but I’ll answer this one straight: The book is distinctly unsexy. It’s not worthy of a scientist.

I might have been meant to love the book, but I ended up disappointed about what science communication has become.

[Disclaimer: Free review copy.]

Saturday, March 11, 2017

Is Verlinde’s Emergent Gravity compatible with General Relativity?

Dark matter filaments, Millennium Simulation
Image: Volker Springel
A few months ago, Erik Verlinde published an update of his 2010 idea that gravity might originate in the entropy of so-far undetected microscopic constituents of space-time. Gravity, then, would not be fundamental but emergent.

With the new formalism, he derived an equation for a modified gravitational law that, on galactic scales, results in an effect similar to dark matter.

Verlinde’s emergent gravity builds on the idea that gravity can be reformulated as a thermodynamic theory, that is as if it was caused by the dynamics of a large number of small entities whose exact identity is unknown and also unnecessary to describe their bulk behavior.

If one wants to get back usual general relativity from the thermodynamic approach, one uses an entropy that scales with the surface area of a volume. Verlinde postulates there is another contribution to the entropy which scales with the volume itself. It’s this additional entropy that causes the deviations from general relativity.

However, in the vicinity of matter the volume-scaling entropy decreases until it’s entirely gone. Then, one is left with only the area-scaling part and gets normal general relativity. That’s why on scales where the average density is high – high compared to galaxies or galaxy clusters – the equation which Verlinde derives doesn’t apply. This would be the case, for example, near stars.

The idea quickly attracted attention in the astrophysics community, where a number of papers have since appeared which confront said equation with data. Not all of these papers are correct. Two of them seemed to have missed entirely that the equation which they are using doesn’t apply on solar-system scales. Of the remaining papers, three are fairly neutral in the conclusions, while one – by Lelli et al – is critical. The authors find that Verlinde’s equation – which assumes spherical symmetry – is a worse fit to the data than particle dark matter.

There has not, however, so far been much response from theoretical physicists. I’m not sure why that is. I spoke with science writer Anil Ananthaswamy some weeks ago and he told me he didn’t have an easy time finding a theorist willing to do as much as comment on Verlinde’s paper. In a recent Nautilus article, Anil speculates on why that might be:
“A handful of theorists that I contacted declined to comment, saying they hadn’t read the paper; in physics, this silent treatment can sometimes be a polite way to reject an idea, although, in fairness, Verlinde’s paper is not an easy read even for physicists.”
Verlinde’s paper is indeed not an easy read. I spent some time trying to make sense of it and originally didn’t get very far. The whole framework that he uses – dealing with an elastic medium and a strain-tensor and all that – isn’t only unfamiliar but also doesn’t fit together with general relativity.

The basic tenet of general relativity is coordinate invariance, and it’s absolutely not clear how it’s respected in Verlinde’s framework. So, I tried to see whether there is a way to make Verlinde’s approach generally covariant. The answer is yes, it’s possible. And it actually works better than I expected. I’ve written up my findings in a paper which just appeared on the arxiv:


It took some trying around, but I finally managed to guess a covariant Lagrangian that would produce the equations in Verlinde’s paper when one makes the same approximations. Without these approximations, the equations are fully compatible with general relativity. They are however – as so often in general relativity – hideously difficult to solve.

Making some simplifying assumptions allows one to at least find an approximate solution. It turns out however, that even if one makes the same approximations as in Verlinde’s paper, the equation one obtains is not exactly the same that he has – it has an additional integration constant.

My first impulse was to set that constant to zero, but upon closer inspection that didn’t make sense: The constant has to be determined by a boundary condition that ensures the gravitational field of a galaxy (or galaxy cluster) asymptotes to Friedmann-Robertson-Walker space filled with normal matter and a cosmological constant. Unfortunately, I haven’t been able to find the solution that one should get in the asymptotic limit, hence wasn’t able to fix the integration constant.

This means, importantly, that the data fits which assume the additional constant is zero do not actually constrain Verlinde’s model.

With the Lagrangian approach that I have tried, the interpretation of Verlinde’s model is very different – I dare to say far less outlandish. There’s an additional vector-field which permeates space-time and which interacts with normal matter. It’s a strange vector field, both because it’s not – like the other vector-fields we know of – a gauge-boson, and because it has a different kinetic energy term. The kinetic term, moreover, appears in a way one doesn’t commonly have in particle physics but instead in condensed matter physics.

Interestingly, if you look at what this field would do if there was no other matter, it would behave exactly like a cosmological constant.

This, however, isn’t to say I’m sold on the idea. What I am missing is, most importantly, some clue that would tell me the additional field actually behaves like matter on cosmological scales, or at least sufficiently similar to reproduce other observables, like eg baryon acoustic oscillation. This should be possible to find out with the equations in my paper – if one manages to actually solve them.

Finding solutions to Einstein’s field equations is a specialized discipline and I’m not familiar with all the relevant techniques. I will admit that my primary method of solving the equations – to the big frustration of my reviewers – is to guess solutions. It works until it doesn’t. In the case of Friedmann-Robertson-Walker with two coupled fluids, one of which is the new vector field, it hasn’t worked. At least not so far. But the equations are in the paper and maybe someone else will be able to find a solution.

In summary, Verlinde’s emergent gravity has withstood the first-line bullshit test. Yes, it’s compatible with general relativity.

Thursday, March 02, 2017

Yes, a violation of energy conservation can explain the cosmological constant

Chad Orzel recently pointed me towards an article in Physics World according to which “Dark energy emerges when energy conservation is violated.” Quoted in the Physics World article are George Ellis, who enthusiastically notes that the idea is “no more fanciful than many other ideas being explored in theoretical physics at present,” and Lee Smolin, according to whom it’s “speculative, but in the best way.” Chad clearly found this somewhat too polite to be convincing and asked me for some open words:



I had seen the headline flashing by earlier but ignored it because – forgive me – it’s obvious energy non-conservation can mimic a cosmological constant.

Reason is that usually, in General Relativity, the expansion of space-time is described by two equations, known as the Friedmann-equations. They relate the velocity and acceleration of the universe’s normalized distance measures – called the ‘scale factor’ – with the average energy density and pressure of matter and radiation in the universe. If you put in energy-density and pressure, you can calculate how the universe expands. That, basically, is what cosmologists do for a living.

The two Friedmann-equations, however, are not independent of each other because General Relativity presumes that the various forms of energy-densities are locally conserved. That means if you take only the first Friedmann-equation and use energy-conservation, you get the second Friedmann-equation, which contains the cosmological constant. If you turn this statement around it means that if you throw out energy conservation, you can produce an accelerated expansion.
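
In equations, with scale factor a, Hubble rate H = ȧ/a, and standard notation:

```latex
% The two Friedmann equations and the local conservation (continuity) equation:
H^2 \equiv \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2} + \frac{\Lambda}{3},
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) + \frac{\Lambda}{3},
\qquad
\dot\rho + 3H\,(\rho + p) = 0 .
% Differentiating the first equation in time and inserting the continuity
% equation yields the second. If the continuity equation is violated, the
% mismatch can mimic a cosmological-constant term.
```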

It’s an idea I’ve toyed with years ago, but it’s not a particularly appealing solution to the cosmological constant problem. The issue is you can’t just selectively throw out some equations from a theory because you don’t like them. You have to make everything work in a mathematically consistent way. In particular, it doesn’t make sense to throw out local energy-conservation if you used this assumption to derive the theory to begin with.

Upon closer inspection, the Physics World piece summarizes a paper which got published in PRL a few weeks ago, but has been on the arxiv for almost a year. Indeed, when I looked at it, I recalled I had read the paper and found it very interesting. I didn’t write about it here because the point they make is quite technical. But since Chad asked, here we go.

Modifying General Relativity is chronically hard because the derivation of the theory is so straight-forward that much violence is needed to avoid Einstein’s Field Equations. It took Einstein a decade to get the equations right, but if you know your differential geometry it’s a three-liner really. This isn’t to belittle Einstein’s achievement – the mathematical apparatus wasn’t then fully developed and he was guessing his way around underived theorems – but merely to emphasize that General Relativity is easy to get but hard to amend.

One of the few known ways to consistently amend General Relativity is ‘unimodular gravity,’ which works as follows.

In General Relativity the central dynamical quantity is the metric tensor (or just “metric”) which you need to measure the ratio of distances relative to each other. From the metric tensor and its first and second derivative you can calculate the curvature of space-time.

General Relativity can be derived from an optimization principle by asking: “From all the possible metrics, which is the one that minimizes curvature given certain sources of energy?” This leads you to Einstein’s Field Equations. In unimodular gravity in contrast, you don’t look at all possible metrics but only those with a fixed metric determinant, which means you don’t allow a rescaling of volumes. (A very readable introduction to unimodular gravity by George Ellis can be found here.)

Unimodular gravity does not result in Einstein’s Field Equations, but only in a reduced version thereof because the variation of the metric is limited. The result is that in unimodular gravity, energy is not automatically locally conserved. Because of the limited variation of the metric that is allowed in unimodular gravity, the theory has fewer symmetries. And, as Emmy Noether taught us, symmetries give rise to conservation laws. Therefore, unimodular gravity has fewer conservation laws.

I must emphasize that this is not the ‘usual’ non-conservation of total energy one already has in General Relativity, but a new violation of the local conservation of energy-densities that does not occur in General Relativity.

If, however, you then add energy-conservation to unimodular gravity, you get back Einstein’s field equations, though this re-derivation comes with a twist: The cosmological constant now appears as an integration constant. For some people this solves a problem, but personally I don’t see what difference it makes just where the constant comes from – its value is unexplained either way. Therefore, I’ve never found unimodular gravity particularly interesting, thinking, if you get back General Relativity you could as well have used General Relativity to begin with.
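
Schematically, the chain of reasoning is the standard one for unimodular gravity – a sketch in the usual notation, not a full derivation:

```latex
% Unimodular gravity yields only the trace-free part of Einstein's equations:
R_{\mu\nu} - \tfrac{1}{4} R\, g_{\mu\nu} = 8\pi G \left( T_{\mu\nu} - \tfrac{1}{4} T\, g_{\mu\nu} \right).
% Imposing local energy conservation, \nabla^\mu T_{\mu\nu} = 0, together with
% the Bianchi identities gives \nabla_\nu \left( R + 8\pi G\, T \right) = 0, hence
R + 8\pi G\, T = 4\Lambda = \text{const.}
% Reinserting this into the trace-free equation restores the full Einstein
% equations, R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu},
% with \Lambda entering as an integration constant.
```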

But in the new paper the authors correctly point out that you don’t necessarily have to add energy conservation to the equations you get in unimodular gravity. And if you don’t, you don’t get back general relativity, but a modification of general relativity in which energy conservation is violated – in a mathematically consistent way.

Now, the authors don’t look at all allowed violations of energy-conservation in their paper and I think smartly so, because most of them will probably result in a complete mess, by which I mean be crudely in conflict with observation. They instead look at a particularly simple type of energy-conservation violation and show that this effectively mimics a cosmological constant.

They then argue that on the average such a type of energy-violation might arise from certain quantum gravitational effects, which is not entirely implausible. If space-time isn’t fundamental, but is an emergent description that arises from an underlying discrete structure, it isn’t a priori obvious what happens to conservation laws.

The framework proposed in the new paper, therefore, could be useful to quantify the observable effects that arise from this. To demonstrate this, the authors look at the example of 1) diffusion from causal sets and 2) spontaneous collapse models in quantum mechanics. In both cases, they show, one can use the general description derived in the paper to find constraints on the parameters in this model. I find this very useful because it is a simple, new way to test approaches to quantum gravity using cosmological data.

Of course this leaves many open questions. Most importantly, while the authors offer some general arguments for why such violations of energy conservation would be too small to be noticeable in any other way than from the accelerated expansion of the universe, they have no actual proof for this. In addition, they have only looked at this modification from the side of General Relativity, but I would like to also know what happens to Quantum Field Theory when waving good-bye to energy conservation. We want to make sure this doesn’t ruin the standard model’s fit of any high-precision data. Also, their predictions crucially depend on their assumption about when energy violation begins, which strikes me as quite arbitrary and lacking a physical motivation.

In summary, I think it’s a so-far very theoretical but also interesting idea. I don’t even find it all that speculative. It is also clear, however, that it will require much more work to convince anybody this doesn’t lead to conflicts with observation.

Friday, February 17, 2017

Black Hole Information - Still Lost

[Illustration of black hole.
Image: NASA]
According to Google, Stephen Hawking is the most famous physicist alive, and his most famous work is the black hole information paradox. If you know one thing about physics, therefore, that’s what you should know.

Before Hawking, black holes weren’t paradoxical. Yes, if you throw a book into a black hole you can’t read it anymore. That’s because what has crossed a black hole’s event horizon can no longer be reached from the outside. The event horizon is a closed surface inside of which everything, even light, is trapped. So there’s no way information can get out of the black hole; the book’s gone. That’s unfortunate, but nothing physicists sweat over. The information in the book might be out of sight, but nothing paradoxical about that.

Then came Stephen Hawking. In 1974, he showed that black holes emit radiation and this radiation doesn’t carry information. It’s entirely random, except for the distribution of particles as a function of energy, which is a Planck spectrum with temperature inversely proportional to the black hole’s mass. If the black hole emits particles, it loses mass, shrinks, and gets hotter. After enough time and enough emission, the black hole will be entirely gone, with no return of the information you put into it. The black hole has evaporated; the book can no longer be inside. So, where did the information go?
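
To get a feeling for the numbers, here is the standard estimate for the Hawking temperature and the evaporation time of a black hole. The prefactor in the lifetime formula depends on which particle species are emitted and is only good to an order of magnitude.

```python
# Hawking temperature T = hbar c^3 / (8 pi G M k_B) and the order-of-magnitude
# evaporation time t ~ 5120 pi G^2 M^3 / (hbar c^4) for a black hole of mass M.
import math

hbar, c, G, kB = 1.055e-34, 3.0e8, 6.674e-11, 1.381e-23   # SI units
M_sun = 1.989e30                                          # kg

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * kB)       # Kelvin

def evaporation_time(M):
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)   # seconds

M = M_sun
print(f"T_Hawking for a solar-mass black hole: {hawking_temperature(M):.1e} K")   # ~6e-8 K
print(f"evaporation time: {evaporation_time(M) / 3.15e7:.1e} years")              # ~2e67 years
```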

You might shrug and say, “Well, it’s gone, so what? Don’t we lose information all the time?” No, we don’t. At least, not in principle. We lose information in practice all the time, yes. If you burn the book, you aren’t able any longer to read what’s inside. However, fundamentally, all the information about what constituted the book is still contained in the smoke and ashes.

This is because the laws of nature, to our best current understanding, can be run both forwards and backwards – every unique initial-state corresponds to a unique end-state. There are never two initial-states that end in the same final state. The story of your burning book looks very different backwards. If you were able to very, very carefully assemble smoke and ashes in just the right way, you could unburn the book and reassemble it. It’s an exceedingly unlikely process, and you’ll never see it happening in practice. But, in principle, it could happen.

Not so with black holes. Whatever formed the black hole doesn't make a difference when you look at what you wind up with. In the end you only have this thermal radiation, which – in honor of its discoverer – is now called ‘Hawking radiation.’ That’s the paradox: Black hole evaporation is a process that cannot be run backwards. It is, as we say, not reversible. And that makes physicists sweat because it demonstrates they don’t understand the laws of nature.

Black hole information loss is paradoxical because it signals an internal inconsistency of our theories. When we combine – as Hawking did in his calculation – general relativity with the quantum field theories of the standard model, the result is no longer compatible with quantum theory. At a fundamental level, every interaction involving particle processes has to be reversible. Because of the non-reversibility of black hole evaporation, Hawking showed that the two theories don’t fit together.

The seemingly obvious origin of this contradiction is that the irreversible evaporation was derived without taking into account the quantum properties of space and time. For that, we would need a theory of quantum gravity, and we still don’t have one. Most physicists therefore believe that quantum gravity would remove the paradox – just how that works they still don’t know.

The difficulty with blaming quantum gravity, however, is that there isn’t anything interesting happening at the horizon – it's in a regime where general relativity should work just fine. That’s because the strength of quantum gravity should depend on the curvature of space-time, but the curvature at a black hole horizon depends inversely on the mass of the black hole. This means the larger the black hole, the smaller the expected quantum gravitational effects at the horizon.

Quantum gravitational effects would become noticeable only when the black hole has reached the Planck mass, about 10 micrograms. When the black hole has shrunken to that size, information could be released thanks to quantum gravity. But, depending on what the black hole formed from, an arbitrarily large amount of information might be stuck in the black hole until then. And when a Planck mass is all that’s left, it’s difficult to get so much information out with such little energy left to encode it.
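
The scaling behind the last two paragraphs can be made concrete with the Kretschmann scalar, a standard curvature invariant of the Schwarzschild metric. The sketch below is only meant to show the orders of magnitude; the criterion for when quantum gravity kicks in (curvature radius near the Planck length) is itself an estimate.

```python
# Curvature radius at the horizon of a Schwarzschild black hole, from the
# Kretschmann scalar K = 48 (GM/c^2)^2 / r^6 evaluated at r = 2GM/c^2.
# The larger the black hole, the larger this radius, and the farther the
# horizon is from the Planckian regime.
import math

G, c, hbar = 6.674e-11, 3.0e8, 1.055e-34
l_planck = math.sqrt(hbar * G / c**3)    # ~1.6e-35 m
m_planck = math.sqrt(hbar * c / G)       # ~2.2e-8 kg

def curvature_radius_at_horizon(M):
    r_s = 2 * G * M / c**2
    kretschmann = 48 * (G * M / c**2) ** 2 / r_s**6
    return kretschmann ** -0.25          # length scale associated with the curvature

M_sun = 1.989e30
for label, M in [("solar-mass black hole", M_sun), ("Planck-mass black hole", m_planck)]:
    print(f"{label}: curvature radius at horizon ~ {curvature_radius_at_horizon(M):.1e} m"
          f"  (Planck length ~ {l_planck:.1e} m)")
```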

For the last 40 years, some of the brightest minds on the planet have tried to solve this conundrum. It might seem bizarre that such an outlandish problem commands so much attention, but physicists have good reasons for this. The evaporation of black holes is the best-understood case for the interplay of quantum theory and gravity, and therefore might be the key to finding the right theory of quantum gravity. Solving the paradox would be a breakthrough and, without doubt, result in a conceptually new understanding of nature.

So far, most solution attempts for black hole information loss fall into one of four large categories, each of which has its pros and cons.

  • 1. Information is released early.

    The information starts leaking out long before the black hole has reached Planck mass. This is the presently most popular option. It is still unclear, however, how the information should be encoded in the radiation, and just how the conclusion of Hawking’s calculation is circumvented.

    The benefit of this solution is its compatibility with what we know about black hole thermodynamics. The disadvantage is that, for this to work, some kind of non-locality – a spooky action at a distance – seems inevitable. Worse still, it has recently been claimed that if information is released early, then black holes are surrounded by a highly-energetic barrier: a “firewall.” If a firewall exists, it would imply that the principle of equivalence, which underlies general relativity, is violated. Very unappealing.

  • 2. Information is kept, or it is released late.

    In this case, the information stays in the black hole until quantum gravitational effects become strong, when the black hole has reached the Planck mass. Information is then either released with the remaining energy or just kept forever in a remnant.

    The benefit of this option is that it does not require modifying either general relativity or quantum theory in regimes where we expect them to hold. They break down exactly where they are expected to break down: when space-time curvature becomes very large. The disadvantage is that some have argued it leads to another paradox: the possibility of producing an infinite number of black hole pairs in a weak background field – that is, all around us. The theoretical support for this argument is thin, but it’s still widely used.

  • 3. Information is destroyed.

    Supporters of this approach just accept that information is lost when it falls into a black hole. This option was long believed to imply violations of energy conservation and hence cause another inconsistency. In recent years, however, new arguments have surfaced according to which energy might still be conserved with information loss, and this option has therefore seen a little revival. Still, by my estimate it’s the least popular solution.

    However, much like the first option, just saying that’s what one believes doesn’t make for a solution. And making this work would require a modification of quantum theory. This would have to be a modification that doesn’t lead to conflict with any of our experiments testing quantum mechanics. It’s hard to do.

  • 4. There’s no black hole.

    A black hole is never formed or information never crosses the horizon. This solution attempt pops up every now and then, but has never caught on. The advantage is that it’s obvious how to circumvent the conclusion of Hawking’s calculation. The downside is that this requires large deviations from general relativity in small curvature regimes, and it is therefore difficult to make compatible with precision tests of gravity.

There are a few other proposed solutions that don’t fall into any of these categories, but I will not – cannot! – attempt to review all of them here. In fact, there isn’t any good review on the topic – probably because the mere thought of compiling one is dreadful. The literature is vast. Black hole information loss is without doubt the most-debated paradox ever.

And it’s bound to remain so. The temperature of the black holes we can observe today is far too low for their Hawking radiation to be measurable. Hence, in the foreseeable future nobody is going to measure what happens to the information that crosses the horizon. Let me therefore make a prediction: Ten years from now, the problem will still be unsolved.

Hawking just celebrated his 75th birthday, which is a remarkable achievement by itself. Fifty years ago, his doctors predicted he would die soon, but he has stubbornly hung onto life. The black hole information paradox may prove to be even more stubborn. Unless a revolutionary breakthrough comes along, it may outlive us all.

(I wish to apologize for not including references. If I’d start with this, I wouldn’t be done by 2020.)

[This post previously appeared on Starts With A Bang.]

Thursday, February 09, 2017

New Data from the Early Universe Does Not Rule Out Holography

[img src: entdeckungen.net]
It’s string theorists’ most celebrated insight: The world is a hologram. Like everything else string theorists have come up with, it’s an untested hypothesis. But now, it’s been put to test with a new analysis that compares a holographic early universe with its non-holographic counterpart.

Tl;dr: Results are inconclusive.

When string theorists say we live in a hologram, they don’t mean we are shadows in Plato’s cave. They mean their math says that all information about what’s inside a box can be encoded on the boundary of that box – albeit in entirely different form.

The holographic principle – if correct – means there are two different ways to describe the same reality. Unlike in Plato’s cave, however, where the shadows lack information about what caused them, with holography both descriptions are equally good.

Holography would imply that the three dimensions of space which we experience are merely one way to think of the world. If you can describe what happens in our universe by equations that use only two-dimensional surfaces, you might as well say we live in two dimensions – just that these are dimensions we don’t normally experience.

It’s a nice idea but hard to test. That’s because the two-dimensional interpretation of today’s universe isn’t normally very workable. Holography identifies two different theories with each other by a relation called “duality.” The two theories in question here are one for gravity in three dimensions of space, and a quantum field theory without gravity in one dimension less. However, whenever one of the theories is weakly coupled, the other one is strongly coupled – and computations in strongly coupled theories are hard, if not impossible.

The gravitational force in our universe is presently weakly coupled. For this reason General Relativity is the easier side of the duality. However, the situation might have been different in the early universe. Inflation – the rapid phase of expansion briefly after the big bang – is usually assumed to take place in gravity’s weakly coupled regime. But that might not be correct. If instead gravity at that early stage was strongly coupled, then a description in terms of a weakly coupled quantum field theory might be more appropriate.

This idea has been pursued by Kostas Skenderis and collaborators for several years. These researchers have developed a holographic model in which inflation is described by a lower-dimensional non-gravitational theory. In a recent paper, their predictions have been put to test with new data from the Planck mission, a high-precision measurement of the temperature fluctuations of the cosmic microwave background.


In this new study, the authors compare the way that holographic inflation and standard inflation in the concordance model – also known as ΛCDM – fit the data. The concordance model is described by six parameters. Holographic inflation has a closer connection to the underlying theory, and its power spectrum brings in one additional parameter, which makes a total of seven. After adjusting for the number of parameters, the authors find that the concordance model fits the data better.

However, the biggest discrepancy between the predictions of holographic inflation and the concordance model arises at large scales, i.e. at low multipole moments. In this regime, the predictions from holographic inflation cannot really be trusted. Therefore, the authors repeat the analysis with the low multipole moments omitted from the data. Then the two models fit the data equally well. In some cases (depending on the choice of prior for one of the parameters) holographic inflation is indeed a better fit, but the difference is not statistically significant.
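
If you are unfamiliar with how one “adjusts for the number of parameters,” here is a toy example in Python using the Akaike information criterion on synthetic data. The actual analysis is of course far more sophisticated than this; take it only as an illustration of the logic that an extra parameter must earn its keep:

    # Toy model comparison: a fit with more parameters gets penalized,
    # here via the Akaike information criterion, AIC = 2k - 2 ln L_max.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 200)
    y = 2.0 + 3.0 * x + rng.normal(0, 0.2, x.size)   # synthetic "data" from a linear law

    def aic(residuals, n_params):
        # Gaussian likelihood with fitted variance: -2 ln L_max = n ln(RSS/n) + const
        n = residuals.size
        return 2 * n_params + n * np.log(np.sum(residuals**2) / n)

    for k in (2, 3):                                  # linear (2 params) vs quadratic (3 params)
        coeffs = np.polyfit(x, y, deg=k - 1)
        resid = y - np.polyval(coeffs, x)
        print(f"{k} parameters: AIC = {aic(resid, k):.1f}")
    # The extra parameter must improve the fit enough to offset its penalty,
    # otherwise the simpler model wins -- roughly the logic behind comparing
    # a six-parameter model with a seven-parameter one.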

To put this result into context it must be added that the best-understood cases of holography work in space-times with a negative cosmological constant, the Anti-de Sitter spaces. Our own universe, however, is not of this type. It has instead a positive cosmological constant, described by de-Sitter space. The use of the holographic principle in our universe is hence not strongly supported by string theory, at least not presently.

The model for holographic inflation can therefore best be understood as one that is motivated by, but not derived from, string theory. It is a phenomenological model, developed to quantify predictions and test them against data.

While the differences between the concordance model and holographic inflation found in this study are insignificant, it is interesting that a prediction based on such an entirely different framework is able to fit the data at all. I should also add that there is a long-standing debate in the community as to whether the low multipole moments are well described by the concordance model, or whether any of the large-scale anomalies are to be taken seriously.

In summary, I find this an interesting result because it’s an entirely different way to think of the early universe, and yet it describes the data. For the same reason, however, it’s also somewhat depressing. Clearly, we don’t presently have a good way to test all the many ideas that theorists have come up with.

Thursday, January 19, 2017

Dark matter’s hideout just got smaller, thanks to supercomputers.

Lattice QCD. Artist’s impression.
Physicists know they are missing something. Evidence that something’s wrong has piled up for decades: Galaxies and galaxy clusters don’t behave like Einstein’s theory of general relativity predicts. The observed discrepancies can be explained either by modifying general relativity, or by the gravitational pull of some, so-far unknown type of “dark matter.”

Theoretical physicists have proposed many particles which could make up dark matter. The most popular candidates are a class called “Weakly Interacting Massive Particles” or WIMPs. They are popular because they appear in supersymmetric extensions of the standard model, and also because they have a mass and interaction strength in just the right ballpark for dark matter. There have been many experiments, however, trying to detect the elusive WIMPs, and one after the other reported negative results.

The second popular dark matter candidate is a particle called the “axion,” and the worse the situation looks for WIMPs the more popular axions are becoming. Like WIMPs, axions weren’t originally invented as dark matter candidates.

The strong nuclear force, described by Quantum ChromoDynamics (QCD), could violate a symmetry called “CP symmetry,” but it doesn’t. An interaction term that could give rise to this symmetry-violation therefore has a pre-factor – the “theta-parameter” (θ) – that is either zero or at least very, very small. That nobody knows just why the theta-parameter should be so small is known as the “strong CP problem.” It can be solved by promoting the theta-parameter to a field which relaxes to the minimum of a potential, thereby setting the coupling to the troublesome term to zero, an idea that dates back to Peccei and Quinn in 1977.

Much like the Higgs-field, the theta-field is then accompanied by a particle – the axion – as was pointed out by Steven Weinberg and Frank Wilczek in 1978.

The original axion was ruled out within a few years after being proposed. But theoretical physicists quickly put forward more complicated models for what they called the “hidden axion.” It’s a variant of the original axion that is more weakly interacting and hence more difficult to detect. Indeed it hasn’t been detected. But it also hasn’t been ruled out as a dark matter candidate.

Normally models with axions have two free parameters: one is the mass of the axion, the other one is called the axion decay constant (usually denoted f_a). But these two parameters aren’t actually independent of each other. The axion gets its mass by the breaking of a postulated new symmetry. A potential, generated by non-perturbative QCD effects, then determines the value of the mass.

If that sounds complicated, all you need to know about it to understand the following is that it’s indeed complicated. Non-perturbative QCD is hideously difficult. Consequently, nobody can calculate what the relation is between the axion mass and the decay constant. At least so far.

The potential which determines the particle’s mass depends on the temperature of the surrounding medium. This is generally the case, not only for the axion, it’s just a complication often omitted in the discussion of mass-generation by symmetry breaking. Using the potential, it can be shown that the mass of the axion is inversely proportional to the decay constant. The whole difficulty then lies in calculating the factor of proportionality, which is a complicated, temperature-dependent function, known as the topological susceptibility of the gluon field. So, if you could calculate the topological susceptibility, you’d know the relation between the axion mass and the coupling.
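
In formulas, the relation reads m_a(T)^2 f_a^2 = χ(T), where χ is the topological susceptibility. Here is a minimal numerical sketch of the zero-temperature case – my own illustration, using χ(0)^(1/4) ≈ 75.5 MeV from the literature and an assumed decay constant, so treat the numbers as ballpark only:

    # Sketch of m_a^2 * f_a^2 = chi_top at T = 0; illustrative numbers only.
    chi_quarter_GeV = 75.5e-3                 # chi(0)^(1/4) in GeV, literature value
    f_a_GeV = 1e12                            # an assumed axion decay constant in GeV

    m_a_GeV = chi_quarter_GeV**2 / f_a_GeV    # m_a = sqrt(chi) / f_a
    print(f"f_a = {f_a_GeV:.0e} GeV  ->  m_a = {m_a_GeV * 1e9:.1e} eV")
    # About 6e-6 eV: double the decay constant and you halve the axion mass.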

This isn’t a calculation anybody presently knows how to do analytically because the strong interaction at low temperatures is, well, strong. The best chance is to do it numerically by putting the quarks on a simulated lattice and then sending the job to a supercomputer.

And even that wasn’t possible until now because the problem was too computationally intensive. But in a new paper, recently published in Nature, a group of researchers reports they have come up with a new method of simplifying the numerical calculation. This way, they succeeded in calculating the relation between the axion mass and the coupling constant.

    Calculation of the axion mass based on high-temperature lattice quantum chromodynamics
    S. Borsanyi et al
    Nature 539, 69–71 (2016)

(If you don’t have journal access, it’s not the exact same paper as this but pretty close).

This result is a great step forward in understanding the physics of the early universe. It’s a new relation which can now be included in cosmological models. As a consequence, I expect that the parameter-space in which the axion can hide will be much reduced in the coming months.

I also have to admit, however, that for a pen-on-paper physicist like me this work has a bittersweet aftertaste. It’s a remarkable achievement which wouldn’t have been possible without a clever formulation of the problem. But in the end, it’s progress fueled by technological power, by bigger and better computers. And maybe that’s where the future of our field lies, in finding better ways to feed problems to supercomputers.

Friday, December 02, 2016

Can dark energy and dark matter emerge together with gravity?

A macaroni pie? Elephants blowing balloons?
No, it’s Verlinde’s entangled universe.
In a recent paper, the Dutch physicist Erik Verlinde explains how dark energy and dark matter arise in emergent gravity as deviations from general relativity.

It’s taken me a while to get through the paper. Vaguely titled “Emergent Gravity and the Dark Universe,” it’s a 51-page catalog of ideas patched together from general relativity, quantum information, quantum gravity, condensed matter physics, and astrophysics. It is clearly still research in progress and not anywhere close to completion.

The new paper substantially expands on Verlinde’s earlier idea that the gravitational force is some type of entropic force. If that was so, it would mean gravity is not due to the curvature of space-time – as Einstein taught us – but instead caused by the interaction of the fundamental elements which make up space-time. Gravity, hence, would be emergent.

I find it an appealing idea because it allows one to derive consequences without having to specify exactly what the fundamental constituents of space-time are. Just as you can work out the behavior of gases under pressure without a model for atoms, you can work out the emergence of gravity without a model for whatever builds up space-time. The details would become relevant only at very high energies.

As I noted in a comment on the first paper, Verlinde’s original idea was merely a reinterpretation of gravity in thermodynamic quantities. What one really wants from emergent gravity, however, is not merely to get back general relativity. One wants to know which deviations from general relativity come with it, deviations that are specific predictions of the model and which can be tested.

Importantly, in emergent gravity such deviations from general relativity could make themselves noticeable at long distances. The reason is that the criterion for what it means for two points to be close by each other emerges with space-time itself. Hence, in emergent gravity there isn’t a priori any reason why new physics must be at very short distances.

In the new paper, Verlinde argues that his variant of emergent gravity gives rise to deviations from general relativity at long distances, and that these deviations correspond to dark energy and dark matter. He doesn’t explain dark energy itself. Instead, he starts with a universe that by assumption contains dark energy like we observe, i.e. one that has a positive cosmological constant. Such a universe is described approximately by what theoretical physicists call a de-Sitter space.

Verlinde then argues that when one interprets this cosmological constant as the effect of long-distance entanglement between the conjectured fundamental elements, then one gets a modification of the gravitational law which mimics dark matter.

The reason it works is that to get normal gravity one assigns an entropy to a volume of space which scales with the area of the surface that encloses the volume. This is known as the “holographic scaling” of entropy, and is at the core of Verlinde’s first paper (and earlier work by Jacobson, Padmanabhan, and others). To get deviations from normal gravity, one has to do something else. For this, Verlinde argues that de Sitter space is permeated by long-distance entanglement which gives rise to an entropy that scales, not with the surface area of a volume, but with the volume itself. It consequently leads to a different force-law. And this force-law, so he argues, has an effect very similar to dark matter.

Not only does this modified force-law from the volume-scaling of the entropy mimic dark matter, it more specifically reproduces some of the achievements of modified gravity.

In his paper, Verlinde derives the observed relation between the luminosity of spiral galaxies and the rotation velocity of their outermost stars, known as the Tully-Fisher relation. The Tully-Fisher relation can also be found in certain modifications of gravity, such as Moffat Gravity (MOG), and more generally in every modification that approximates Milgrom’s Modified Newtonian Dynamics (MOND). Verlinde, however, does more than that. He also derives the parameter which quantifies the acceleration at which the modification of general relativity becomes important, and gets a value that fits well with observations.

It was known before that this parameter is related to the cosmological constant. There have been various attempts to exploit this relation, most recently by Lee Smolin. In Verlinde’s approach, the relation between the acceleration scale and the cosmological constant comes out naturally, because dark matter has the same origin as dark energy. Verlinde further offers expressions for the apparent density of dark matter in galaxies and clusters, something that, with some more work, can probably be checked observationally.
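
The numerical coincidence behind this relation is easy to check. Below is a small order-of-magnitude sketch in Python; the values of H0, Λ, and the acceleration scale a0 are standard ballpark figures I plugged in myself, not numbers taken from Verlinde’s paper:

    # Order-of-magnitude check: the MOND acceleration scale vs. the
    # acceleration one can build from the cosmological constant (or c*H0).
    import math

    c = 2.998e8                      # m/s
    H0 = 2.2e-18                     # 1/s  (roughly 68 km/s/Mpc)
    Lambda = 1.1e-52                 # 1/m^2
    a0_observed = 1.2e-10            # m/s^2, inferred from galactic rotation curves

    a_from_H0 = c * H0 / (2 * math.pi)
    a_from_Lambda = c**2 * math.sqrt(Lambda / 3) / (2 * math.pi)

    print(f"c H0 / 2pi           = {a_from_H0:.1e} m/s^2")
    print(f"c^2 sqrt(L/3) / 2pi  = {a_from_Lambda:.1e} m/s^2")
    print(f"observed a0          = {a0_observed:.1e} m/s^2")
    # All three agree to within a factor of order one -- the coincidence that
    # relates the acceleration scale to the cosmological constant.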

I find this an intriguing link which suggests that Verlinde is onto something. However, I also find the model sketchy and unsatisfactory in many regards. General Relativity is a rigorously tested theory with many achievements. To do any better than general relativity is hard, and thus for any new theory of gravity the most important thing is to have a controlled limit in which General Relativity is reproduced to good precision. How this might work in Verlinde’s approach isn’t clear to me because he doesn’t even attempt to deal with the general case. He starts right away with cosmology.

Now in cosmology we have a preferred frame which is given by the distribution of matter (or by the restframe of the CMB if you wish). In general relativity this preferred frame does not originate in the structure of space-time itself but is generated by the stuff in it. In emergent gravity models, in contrast, the fundamental structure of space-time tends to have an imprint of the preferred frame. This fundamental frame can lead to violations of the symmetries of general relativity and the effects aren’t necessarily small. Indeed, there are many experiments that have looked for such effects and haven’t found anything. It is hence a challenge for any emergent gravity approach to demonstrate just how to avoid such violations of symmetries.

Another potential problem with the idea is the long-distance entanglement which is sprinkled over the universe. The physics which we know so far works “locally,” meaning stuff can’t interact over long distances without a messenger that travels through space and time from one to the other point. It’s the reason my brain can’t make spontaneous visits to the Andromeda nebula, and most days I think that benefits both of us. But like that or not, the laws of nature we presently have are local, and any theory of emergent gravity has to reproduce that.

I have worked for some years on non-local space-time defects, and based on what I learned from that I don’t think the non-locality of Verlinde’s model is going to be a problem. My non-local defects aren’t the same as Verlinde’s entanglement, but guessing that the observational consequences scale similarly, the amount of entanglement needed to get something like a cosmological constant is too small to leave any other noticeable effect on particle physics. I am more worried about the recovery of local Lorentz-invariance. I went to great pains in my models to make sure I wouldn’t get such violations, and I can’t see how Verlinde addresses the issue.

The more general problem I have with Verlinde’s paper is the same I had with his 2010 paper, which is that it’s fuzzy. It remained unclear to me exactly what the necessary assumptions are. I hence don’t know whether it’s really necessary to have this interpretation with the entanglement, the volume-scaling of the entropy, and the elasticity assigned to the dark energy component that pushes in on galaxies. Maybe it would be sufficient already to add a non-local modification to the sources of general relativity. Having toyed with that idea for a while, I doubt it. But I think Verlinde’s approach would benefit from a more axiomatic treatment.

In summary, Verlinde’s recent paper offers the most convincing argument I have seen so far that dark matter and dark energy are related. However, it is presently unclear whether this approach has unwanted side-effects that are already in conflict with observation.

Monday, October 31, 2016

Modified Gravity vs Particle Dark Matter. The Plot Thickens.

They sit in caves, deep underground. Surrounded by lead, protected from noise, shielded from the warmth of the Sun, they wait. They wait for weakly interacting massive particles, short WIMPs, the elusive stuff that many physicists believe makes up 80% of the matter in the universe. They have been waiting for 30 years, but the detectors haven’t caught a single WIMP.

Even though the sensitivity of dark matter detectors has improved by more than five orders of magnitude since the early 1980s, all results so far are compatible with zero events. The searches for axions, another popular dark matter candidate, haven’t fared any better. Coming generations of dark matter experiments will cross into the regime where the neutrino background becomes comparable to the expected signal. But, as a colleague recently pointed out to me, this merely means that the experimentalists have to understand the background better.

Maybe in 100 years they’ll still sit in caves, deep underground. And wait.

Meanwhile others are running out of patience. Particle dark matter is a great explanation for all the cosmological observations that general relativity sourced by normal matter cannot explain. But maybe it isn’t right after all. The alternative to keeping general relativity and adding particle dark matter is to modify general relativity so that space-time curves differently in response to the matter we already know.

Already in the early 1980s, Mordehai Milgrom showed that modifying gravity has the potential to explain observations commonly attributed to particle dark matter. He proposed Modified Newtonian Dynamics – short MOND – to explain the galactic rotation curves instead of adding particle dark matter. Intriguingly, MOND, despite having only one free parameter, fits a large number of galaxies. It doesn’t work well for galaxy clusters, but its success with galaxies clearly shows that many galaxies are similar in very distinct ways, ways that the concordance model (also known as LambdaCDM) hasn’t been able to account for.
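
To see how a single acceleration scale manages to produce flat rotation curves, here is a rough sketch in Python of a point-mass rotation curve in the deep-MOND limit – a toy of my own with an illustrative galaxy mass, not a fit to any real galaxy:

    # Toy MOND rotation curve for a point mass: in the low-acceleration regime
    # the effective acceleration is a = sqrt(a_N * a0), so the circular velocity
    # approaches (G M a0)^(1/4), independent of radius.
    import math

    G, a0 = 6.674e-11, 1.2e-10           # SI units
    M = 6e10 * 2e30                      # an illustrative baryonic mass, ~6e10 solar masses

    def v_circular(r):
        a_newton = G * M / r**2
        a_mond = math.sqrt(a_newton * a0)       # deep-MOND limit
        a = max(a_newton, a_mond)               # crude interpolation, good enough for a sketch
        return math.sqrt(a * r)

    for r_kpc in (5, 20, 80):
        r = r_kpc * 3.086e19                    # kpc -> m
        print(f"r = {r_kpc:3d} kpc: v = {v_circular(r) / 1e3:.0f} km/s")
    # Beyond a few kpc the velocity stops falling and settles near (G M a0)^(1/4),
    # which is the flat rotation curve and, at the same time, the Tully-Fisher scaling.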

In its simplest form the concordance model has sources which are collectively described as homogeneous throughout the universe – an approximation known as the cosmological principle. In this form, the concordance model doesn’t predict how galaxies rotate – it merely describes the dynamics on supergalactic scales.

To get galaxies right, physicists have to also take into account astrophysical processes within the galaxies: how stars form, which stars form, where they form, how they interact with the gas, how long they live, when and how they go supernova, which magnetic fields permeate the galaxies, how those fields affect the intergalactic medium, and so on. It’s a mess, and it requires intricate numerical simulations to figure out just how galaxies come to look the way they look.

And so, physicists today are divided in two camps. In the larger camp are those who think that the observed galactic regularities will eventually be accounted for by the concordance model. It’s just that it’s a complicated question that needs to be answered with numerical simulations, and the current simulations aren’t good enough. In the smaller camp are those who think there’s no way these regularities will be accounted for by the concordance model, and modified gravity is the way to go.

In a recent paper, McGaugh et al report a correlation in the rotation curves of 153 observed galaxies. They plot the gravitational pull from the visible matter in the galaxies (gbar) against the gravitational pull inferred from the observations (gobs), and find that the two are closely related.
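
The correlation can be captured by a fitting function with a single free parameter, g_obs = g_bar / (1 - exp(-sqrt(g_bar/g_dagger))) with g_dagger ≈ 1.2 x 10^-10 m/s^2 – this is, to my understanding, the form used in the paper. A quick sketch of its two limiting regimes, with illustrative input values:

    # The one-parameter relation between baryonic and observed acceleration.
    import math

    g_dagger = 1.2e-10                   # m/s^2, the fitted acceleration scale

    def g_obs(g_bar):
        return g_bar / (1 - math.exp(-math.sqrt(g_bar / g_dagger)))

    for g_bar in (1e-12, 1e-10, 1e-8):   # from the deep-MOND to the Newtonian regime
        print(f"g_bar = {g_bar:.0e} m/s^2  ->  g_obs = {g_obs(g_bar):.1e} m/s^2")
    # At high accelerations g_obs approaches g_bar (Newtonian behavior); at low
    # accelerations g_obs approaches sqrt(g_bar * g_dagger), which is what makes
    # the outer rotation curves flat.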

Figure from arXiv:1609.05917 [astro-ph.GA] 

This correlation – the mass-discrepancy-acceleration relation (MDAR) – so they emphasize, is not itself new, it’s just a new way to present previously known correlations. As they write in the paper:
“[This Figure] combines and generalizes four well-established properties of rotating galaxies: flat rotation curves in the outer parts of spiral galaxies; the “conspiracy” that spiral rotation curves show no indication of the transition from the baryon-dominated inner regions to the outer parts that are dark matter-dominated in the standard model; the Tully-Fisher relation between the outer velocity and the inner stellar mass, later generalized to the stellar plus atomic hydrogen mass; and the relation between the central surface brightness of galaxies and their inner rotation curve gradient.”
But this was only act 1.

In act 2, another group of researchers responds to the McGaugh et al paper. They present results of a numerical simulation for galaxy formation and claim that particle dark matter can account for the MDAR. The end of MOND, so they think, is near.

Figure from arXiv:1610.06183 [astro-ph.GA]

McGaugh, hero of act 1, points out that the sample size for this simulation is tiny and also pre-selected to reproduce galaxies like we observe. Hence, he thinks the results are inconclusive.

In act 3, Mordehai Milgrom – the original inventor of MOND – posts a comment on the arXiv. He also complains about the sample size of the numerical simulation and further explains that there is much more to MOND than the MDAR correlation. Numerical simulations with particle dark matter have been developed to fit observations, he writes, so it’s not surprising they now fit observations.

“The simulation in question attempt to treat very complicated, haphazard, and unknowable events and processes taking place during the formation and evolution histories of these galaxies. The crucial baryonic processes, in particular, are impossible to tackle by actual, true-to-nature, simulation. So they are represented in the simulations by various effective prescriptions, which have many controls and parameters, and which leave much freedom to adjust the outcome of these simulations [...]

The exact strategies involved are practically impossible to pinpoint by an outsider, and they probably differ among simulations. But, one will not be amiss to suppose that over the years, the many available handles have been turned so as to get galaxies as close as possible to observed ones.”
In act 4, another paper with results of a numerical simulation for galaxy structures with particle dark matter appears.

This one uses a code with the acronym EAGLE, for Evolution and Assembly of GaLaxies and their Environments. This code has “quite a few” parameters, as Aaron Ludlow, the paper’s first author, told me, and these parameters have been optimized to reproduce realistic galaxies. In this simulation, however, the authors didn’t use the optimized parameter configuration but let several parameters (3-4) vary to produce a larger set of galaxies. These galaxies in general do not look like those we observe. Nevertheless, the researchers find that all of them display the MDAR correlation.

This would indicate that particle dark matter is enough to describe the observations.


Figure from arXiv:1610.07663 [astro-ph.GA] 


However, even when varying some parameters, the EAGLE code still contains parameters that have been fixed previously to reproduce observations. Ludlow calls them “subgrid parameters,” meaning they quantify physics on scales smaller than what the simulation can presently resolve. One sees for example in Figure 1 of their paper (shown below) that all those galaxies have a pronounced correlation between the velocities of the outer stars (Vmax) and the luminosity (M*) already.
Figure from arXiv:1610.07663 [astro-ph.GA]
Note that the plotted quantities are correlated in all data sets,
though the off-sets differ somewhat.

One shouldn’t hold this against the model. Such numerical simulations are done for the purpose of generating and understanding realistic galaxies. Runs are time-consuming and costly. From the point of view of an astrophysicist, the question just how unrealistic galaxies can get in these simulations is entirely nonsensical. And yet that’s exactly what the modified-gravity/dark-matter showdown now asks for.

In act 5, John Moffat shows that modified gravity – the general relativistic completion of MOND – reproduces the MDAR correlation, but also predicts a distinct deviation for the very outside stars of galaxies.

Figure from arXiv:1610.06909 [astro-ph.GA] 
The green curve is the prediction from modified gravity.


The crucial question here is, I think, which correlations are independent of each other. I don’t know. But I’m sure there will be further acts in this drama.