
Wednesday, July 19, 2017

Penrose claims LIGO noise is evidence for Cyclic Cosmology

Noise is the physicists’ biggest enemy. Unless you are a theorist whose pet idea masquerades as noise. Then you are best friends with noise. Like Roger Penrose.
    Correlated "noise" in LIGO gravitational wave signals: an implication of Conformal Cyclic Cosmology
    Roger Penrose
    arXiv:1707.04169 [gr-qc]

Roger Penrose made his name with the Penrose-Hawking theorems and twistor theory. He is also well-known for writing books with very many pages, most recently “Fashion, Faith, and Fantasy in the New Physics of the Universe.”

One man’s noise is another man’s signal.
Penrose doesn’t like most of what’s currently in fashion, but believes that human consciousness can’t be explained by known physics and that the universe is cyclically reborn. This cyclic cosmology, so goes his recent claim, gives rise to correlations in the LIGO noise – just like what’s been observed.

The LIGO experiment consists of two interferometers in the USA, separated by about 3,000 km. A gravitational wave signal should pass through both detectors with a delay determined by the time it takes the gravitational wave to sweep from one US-coast to the other. This delay is typically of the order of 10ms, but its exact value depends on where the waves came from.
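
For orientation, the quoted delay is just the light travel time between the two sites. A minimal back-of-the-envelope estimate, assuming a wave that sweeps along the line connecting the detectors:

```latex
\[
\Delta t_{\max} \;\approx\; \frac{d}{c}
\;\approx\; \frac{3\times 10^{6}\,\mathrm{m}}{3\times 10^{8}\,\mathrm{m/s}}
\;=\; 10\,\mathrm{ms}
\]
```

Waves arriving from other directions give shorter delays, which is why the exact value encodes information about the source position.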

The correlation between the two LIGO detectors is one of the most important criteria used by the collaboration to tell noise from signal. The noise itself, however, isn’t entirely uncorrelated. Some sources of the correlations are known, but some are not. This is not unusual – understanding the detector is as much part of a new experiment as is the measurement itself. The LIGO collaboration, needless to say, thinks everything is under control and the correlations are adequately taken care of in their signal analysis.

A Danish group of researchers begs to differ. They recently published a criticism on the arXiv in which they complain that after subtracting the signal of the first gravitational wave event, correlations remain at the same time-delay as the signal. That clearly shouldn’t happen. First and foremost it would demonstrate a sloppy signal extraction by the LIGO collaboration.

A reply to the Danes’ criticism by Ian Harry from the LIGO collaboration quickly appeared on Sean Carroll’s blog. Ian pointed out some supposed mistakes in the Danish group’s paper. Turns out, though, the mistake was on his side. Once corrected, Harry’s analysis reproduces the correlations which shouldn’t be there. Bummer.

Ian Harry did not respond to my requests for comment. Neither did Alessandra Buonanno from the LIGO collaboration, who was also acknowledged by the Danish group. David Shoemaker, the current LIGO spokesperson, let me know he has “full confidence” in the results, and also, the collaboration is working on a reply, which might however take several months to appear. In other words, go away, there’s nothing to see here.

But while we wait for the LIGO response, speculations abound as to what might cause the supposed correlation. Penrose beat everyone to it with an explanation – even Craig Hogan, who has run his own experiment looking for correlated noise in interferometers, and whom I was counting on.

Penrose’s cyclic cosmology works by gluing the big bang together with what we usually think of as the end of the universe – an infinite accelerated expansion into nothingness. Penrose conjectures that both phases – the beginning and the end – are conformally invariant, which means they possess a symmetry under a stretching of distance scales. Then he identifies the end of the universe with the beginning of a new one, creating a cycle that repeats indefinitely. In his theory, what we think of as inflation – the accelerated expansion in the early universe – becomes the final phase of acceleration in the cycle preceding our own.

Problem is, the universe as we presently see it is not conformally invariant. What screws up conformal invariance is that particles have masses, and these masses also set a scale. Hence, Penrose has to assume that eventually all particle masses fade away so that conformal invariance is restored.

There’s another problem. Since Penrose’s conformal cyclic cosmology has no inflation, it also lacks a mechanism to create temperature fluctuations in the cosmic microwave background (CMB). Luckily, however, the theory also gives rise to a new scalar particle that couples only gravitationally and brings new phenomenology with it. Penrose named it “erebon” after Erebos, the ancient Greek god of darkness.

Erebos, the God of Darkness,
according to YouTube.
The erebons have a mass of about 10^-5 gram because “what else could it be,” and they have a lifetime determined by the cosmological constant, presumably also because what else could it be. (Aside: Note that these are naturalness arguments.) The erebons make up dark matter and their decay causes gravitational waves that seed the CMB temperature fluctuations.
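
For reference, 10^-5 gram is indeed roughly the Planck mass, which is presumably the “what else could it be” behind the number:

```latex
\[
m_{\mathrm{P}} \;=\; \sqrt{\frac{\hbar c}{G}}
\;\approx\; 2.2\times 10^{-8}\,\mathrm{kg}
\;\approx\; 2.2\times 10^{-5}\,\mathrm{g}
\]
```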

Since erebons are created at the beginning of each cycle and decay away through it, they also create a gravitational wave background. Penrose then argues that a gravitational wave signal from a binary black hole merger – like the ones LIGO has observed – should be accompanied by noise-like signals from erebons that decayed at the same time in the same galaxy. Just that this noise-like contribution would be correlated with the same time-difference as the merger signal.

In his paper, Penrose does not analyze the details of his proposal. He merely writes:
“Clearly the proposal that I am putting forward here makes many testable predictions, and it should not be hard to disprove it if it is wrong.”
In my impression, this is a sketchy idea and I doubt it will work. I don’t have a major problem with inventing some particle to make up dark matter, but I have a hard time seeing how the decay of a Planck-mass particle can give rise to a signal comparable in strength to a black hole merger (or why several of them would add up exactly for a larger signal).

Even taking this at face value, the decay signals wouldn’t only come from one galaxy but from all galaxies, so the noise should be correlated all over and at pretty much all time-scales – not just at the 12 ms the Danish group has claimed. Worst of all, the dominant part of the signal would come from our own galaxy – so why haven’t we seen this already?

In summary, one can’t blame Penrose for being fashionable. But I don’t think that erebons will be added to the list of LIGO’s discoveries.

Friday, June 30, 2017

To understand the foundations of physics, study numerology

Numbers speak. [Img Src]
Once upon a time, we had problems in the foundations of physics. Then we solved them. That was 40 years ago. Today we spend most of our time discussing non-problems.

Here is one of these non-problems. Did you know that the universe is spatially almost flat? There is a number in the cosmological concordance model called the “curvature parameter” that, according to current observation, has a value of 0.000 plus-minus 0.005.

Why is that a problem? I don’t know. But here is the story that cosmologists tell.

From the equations of General Relativity you can calculate the dynamics of the universe. This means you get relations between the values of observable quantities today and the values they must have had in the early universe.

The contribution of curvature to the dynamics, it turns out, increases relative to that of matter and radiation as the universe expands. This means for the curvature-parameter to be smaller than 0.005 today, it must have been smaller than 10^-60 or so briefly after the Big Bang.
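
For the record, the backwards extrapolation behind that number is the standard textbook one; as a sketch, with factors of order one omitted:

```latex
\[
\Omega_k(a) \;\equiv\; -\frac{k\,c^2}{a^2 H^2},
\qquad
|\Omega_k| \propto a^2 \ \text{(radiation domination)},
\qquad
|\Omega_k| \propto a \ \text{(matter domination)}.
\]
```

Because the scale factor has grown by many orders of magnitude since the Big Bang, evolving |Ω_k| ≲ 0.005 today backwards lands you at the quoted 10^-60 or so at very early times.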

That, so the story goes, is bad, because where would you get such a small number from?

Well, let me ask in return, where do we get any number from anyway? Why is 10^-60 any worse than, say, 1.778, or exp(67π)?

That the curvature must have had a small value in the early universe is called the “flatness problem,” and since it’s on Wikipedia it’s officially more real than me. And it’s an important problem. It’s important because it justifies the many attempts to solve it.

The presently most popular solution to the flatness problem is inflation – a rapid period of expansion briefly after the Big Bang. Because inflation decreases the relevance of curvature contributions dramatically – by something like 200 orders of magnitude or so – you no longer have to start with some tiny value. Instead, if you start with any curvature parameter smaller than 10^197, the value today will be compatible with observation.

Ah, you might say, but clearly there are more numbers smaller than 10^197 than there are numbers smaller than 10^-60, so isn’t that an improvement?

Unfortunately, no. There are infinitely many numbers in both cases. Besides that, it’s totally irrelevant. Whatever the curvature parameter, the probability to get that specific number is zero regardless of its value. So the argument is bunk. Logical mush. Plainly wrong. Why do I keep hearing it?

Worse, if you want to pick parameters for our theories according to a uniform probability distribution on the real axis, then all parameters would come out infinitely large with probability one. Sucks. Also, doesn’t describe observations*.

And there is another problem with that argument, namely, what probability distribution are we even talking about? Where did it come from? Certainly not from General Relativity because a theory can’t predict a distribution on its own theory space. More logical mush.

If you have trouble seeing the trouble, let me ask the question differently. Suppose we’d manage to measure the curvature parameter today to a precision of 60 digits after the point. Yeah, it’s not going to happen, but bear with me. Now you’d have to explain all these 60 digits – but that is as fine-tuned as a zero followed by 60 zeroes would have been!

Here is a different example for this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.

Now consider for a moment that the mass of the Higgs was actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a finetuning problem. So, erm, what was the problem again?

The cosmological constant problem is another such confusion. If you don’t know how to calculate that constant – and we don’t, because we don’t have a theory for Planck scale physics – then it’s a free parameter. You go and measure it and that’s all there is to say about it.

And there are more numerological arguments in the foundations of physics, all of which are wrong, wrong, wrong for the same reasons. The unification of the gauge couplings. The so-called WIMP-miracle (RIP). The strong CP problem. All these are numerical coincidences that supposedly need an explanation. But you can’t speak about coincidence without quantifying a probability!

Do my colleagues deliberately lie when they claim these coincidences are problems, or do they actually believe what they say? I’m not sure what’s worse, but suspect most of them actually believe it.

Many of my readers like to jump to conclusions about my opinions. But you are not one of them. You and I, therefore, both know that I did not say that inflation is bunk. Rather, I said that the most common arguments for inflation are bunk. There are good arguments for inflation, but that’s a different story and shall be told another time.

And since you are among the few who actually read what I wrote, you also understand I didn’t say the cosmological constant is not a problem. I just said its value isn’t the problem. What actually needs an explanation is why it doesn’t fluctuate. Which is what vacuum fluctuations should do, and what gives rise to what Niayesh called the cosmological non-constant problem.

Enlightened as you are, you would also never think I said we shouldn’t try to explain the value of some parameter. It is always good to look for better explanations for the assumptions underlying current theories – where by “better” I mean either simpler or able to explain more.

No, what draws my ire is that most of the explanations my colleagues put forward aren’t any better than just fixing a parameter through measurement – they are worse. The reason is that the problem they are trying to solve – the smallness of some numbers – isn’t a problem. It’s merely a property they perceive as inelegant.

I therefore have a lot of sympathy for philosopher Tim Maudlin who recently complained that “attention to conceptual clarity (as opposed to calculational technique) is not part of the physics curriculum” which results in inevitable confusion – not to mention waste of time.

In response, a pseudoanonymous commenter remarked that a discussion between a physicist and a philosopher of physics is “like a debate between an experienced car mechanic and someone who has read (or perhaps skimmed) a book about cars.”

Trouble is, in the foundations of physics today most of the car mechanics are repairing cars that run just fine – and then bill you for it.

I am not opposed to using aesthetic arguments as research motivations. We all have to get our inspiration from somewhere. But I do think it’s bad science to pretend numerological arguments are anything more than appeals to beauty. That very small or very large numbers require an explanation is a belief – and it’s a belief that has been adopted by the vast majority of the community. That shouldn’t happen in any scientific discipline.

As a consequence, high energy physics and cosmology are now populated with people who don’t understand that finetuning arguments have no logical basis. The flatness “problem” is preached in textbooks. The naturalness “problem” is all over the literature. The cosmological constant “problem” is on every popular science page. And so the myths live on.

If you break down the numbers, it’s me against ten-thousand of the most intelligent people on the planet. Am I crazy? I surely am.


*Though that’s exactly what happens with bare values.

Tuesday, June 20, 2017

If tensions in cosmological data are not measurement problems, they probably mean dark energy changes

Galaxy pumpkin.
Src: The Swell Designer
According to physics, the universe and everything in it can be explained by but a handful of equations. They’re difficult equations, all right, but their simplest feature is also the most mysterious one. The equations contain a few dozen parameters that are – for all we presently know – unchanging, and yet these numbers determine everything about the world we inhabit.

Physicists have spent much brain-power on the question where these numbers come from, whether they could have taken any other values than the ones we observe, and whether exploring their origin is even in the realm of science.

One of the key questions when it comes to the parameters is whether they are really constant, or whether they are time-dependent. If they vary, then their time-dependence would have to be determined by yet another equation, and that would change the whole story that we currently tell about our universe.

The best known of the fundamental parameters that dictate how the universe behaves is the cosmological constant. It is what causes the universe’s expansion to accelerate. The cosmological constant is usually assumed to be, well, constant. If it isn’t, it is more generally referred to as ‘dark energy.’ If our current theories for the cosmos are correct, our universe will expand forever into a cold and dark future.

The value of the cosmological constant is infamously the worst prediction ever made using quantum field theory; the math says it should be 120 orders of magnitude larger than what we observe. But that the cosmological constant has a small non-zero value is extremely well established by measurement, well enough that a Nobel Prize was awarded for its discovery in 2011.
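
The “120 orders of magnitude” is shorthand for an order-of-magnitude comparison along these lines (a sketch; the precise number depends on where one puts the cutoff):

```latex
\[
\rho_{\mathrm{vac}}^{\mathrm{theory}} \;\sim\; M_{\mathrm{P}}^{4} \;\sim\; 10^{76}\,\mathrm{GeV}^{4},
\qquad
\rho_{\Lambda}^{\mathrm{obs}} \;\sim\; \left(2\times 10^{-3}\,\mathrm{eV}\right)^{4} \;\sim\; 10^{-47}\,\mathrm{GeV}^{4},
\qquad
\frac{\rho_{\mathrm{vac}}^{\mathrm{theory}}}{\rho_{\Lambda}^{\mathrm{obs}}} \;\sim\; 10^{123}.
\]
```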

The Nobel Prize winners Perlmutter, Schmidt, and Riess, measured the expansion rate of the universe, encoded in the Hubble parameter, by looking at supernovae distributed over various distances. They concluded that the universe is not only expanding, but is expanding at an increasing rate – a behavior that can only be explained by a nonzero cosmological constant.

It is controversial, though, exactly how fast the expansion is today – that is, how large the current value of the Hubble constant, H0, is. There are different ways to measure this constant, and physicists have known for a few years that the different measurements give different results. This tension in the data is difficult to explain, and it has so far remained unresolved.

One way to determine the Hubble constant is by using the cosmic microwave background (CMB). The small temperature fluctuations in the CMB spectrum encode the distribution of plasma in the early universe and the changes of the radiation since. From fitting the spectrum with the parameters that determine the expansion of the universe, physicists get a value for the Hubble constant. The most accurate of such measurements is currently that from the Planck satellite.

Another way to determine the Hubble constant is to deduce the expansion of the universe from the redshift of the light from distant sources. This is the way the Nobel Prize winners made their discovery, and the precision of this method has since been improved. These two ways to determine the Hubble constant give results that differ with a statistical significance of 3.4 σ. That’s a probability of less than one in a thousand for the difference to be due to random data fluctuations.
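
If you want to convert that significance into a probability yourself, it is a one-liner; a minimal sketch (whether one quotes the one-sided or two-sided value is a matter of convention):

```python
from scipy.stats import norm

sigma = 3.4
# Probability that a pure Gaussian fluctuation is at least this large:
print(2 * norm.sf(sigma))  # two-sided: ~6.7e-4
print(norm.sf(sigma))      # one-sided: ~3.4e-4
```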

Various explanations for this have since been proposed. One possibility is that it’s a systematic error in the measurement, most likely in the CMB measurement from the Planck mission. There are reasons to be skeptical because the tension goes away when the finer structures (the large multipole moments) of the data are omitted. For many astrophysicists, this is an indicator that something’s amiss either with the Planck measurement or the data analysis.

Or maybe it’s a real effect. In this case, several modifications of the standard cosmological model have been put forward. They range from additional neutrinos to massive gravitons to changes in the cosmological constant.

That the cosmological constant changes from one place to the next is not an appealing option because this tends to screw up the CMB spectrum too much. But the currently most popular explanation for the data tension seems to be that the cosmological constant changes in time.

A group of researchers from Spain, for example, claims that they have a stunning 4.1 σ preference for a time-dependent cosmological constant over an actually constant one.

This claim seems to have been widely ignored, and indeed one should be cautious. They test for a very specific time-dependence, and their statistical analysis does not account for other parameterizations they might have previously tried. (The theoretical physicist’s variant of post-selection bias.)

Moreover, they fit their model not only to the two above-mentioned datasets, but to a whole bunch of others at the same time. This makes it hard to tell what is the reason their model seems to work better. A couple of cosmologists whom I asked why this group’s remarkable results have been ignored complained that the data analysis is opaque.

Be that as it may, just when I put the Spaniards’ paper away, I saw another paper that supported their claim with an entirely independent study based on weak gravitational lensing.

Weak gravitational lensing happens when a foreground galaxy distorts the images of farther away galaxies. The qualifier ‘weak’ sets this effect apart from strong lensing, which is caused by massive nearby objects – such as black holes – and deforms point-like sources to partial rings. Weak gravitational lensing, on the other hand, is not as easily recognizable and must be inferred from the statistical distribution of the shapes of galaxies.

The Kilo Degree Survey (KiDS) has gathered and analyzed weak lensing data from about 15 million distant galaxies. While their measurements are not sensitive to the expansion of the universe, they are sensitive to the density of dark energy, which affects the way light travels from the galaxies towards us. This density is encoded in a cosmological parameter imaginatively named σ8. Their data, too, is in conflict with the CMB data from the Planck satellite.

The members of the KiDS collaboration have tried out which changes to the cosmological standard model work best to ease the tension in the data. Intriguingly, it turns out that ahead of all explanations the one that works best is that the cosmological constant changes with time. The change is such that the effects of accelerated expansion are becoming more pronounced, not less.

In summary, it seems increasingly unlikely the tension in the cosmological data is due to chance. Cosmologists are cautious and most of them bet on a systematic problem with the Planck data. However, if the Planck measurement receives independent confirmation, the next best bet is on time-dependent dark energy. It wouldn’t make our future any brighter though. The universe would still expand forever into cold darkness.


[This article previously appeared on Starts With A Bang.]

Update June 21: Corrected several sentences to address comments below.

Friday, May 26, 2017

Can we probe the quantization of the black hole horizon with gravitational waves?


Tl;dr: Yes, but the testable cases aren’t the most plausible ones.

It’s the year 2017, but we still don’t know how space and time get along with quantum mechanics. The best clue so far comes from Stephen Hawking and Jacob Bekenstein. They made one of the most surprising finds that theoretical physics saw in the 20th century: Black holes have entropy.

It was a surprise because entropy is a measure for unresolved microscopic details, but in general relativity black holes don’t have details. They are almost featureless balls. That they nevertheless seem to have an entropy – and a gigantically large one in addition – indicates strongly that black holes can be understood only by taking into account quantum effects of gravity. The large entropy, so the idea, quantifies all the ways the quantum structure of black holes can differ.

The Bekenstein-Hawking entropy scales with the horizon area of the black hole and is usually interpreted as a measure for the number of elementary areas of size Planck-length squared. A Planck-length is a tiny 10^-35 meters. This area-scaling is also the basis of the holographic principle which has dominated research in quantum gravity for some decades now. If anything is important in quantum gravity, this is.
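
In formulas, the entropy in question reads

```latex
\[
S_{\mathrm{BH}} \;=\; \frac{k_B\,c^{3}A}{4\,\hbar\,G} \;=\; k_B\,\frac{A}{4\,\ell_{\mathrm{P}}^{2}},
\qquad
\ell_{\mathrm{P}} \;=\; \sqrt{\frac{\hbar G}{c^{3}}} \;\approx\; 1.6\times 10^{-35}\,\mathrm{m}.
\]
```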

It comes with the above interpretation that the area of the black hole horizon always has to be a multiple of the elementary Planck area. However, since the Planck area is so small compared to the size of astrophysical black holes – ranging from some kilometers to some billion kilometers – you’d never notice the quantization just by looking at a black hole. If you got to look at it to begin with. So it seems like a safely untestable idea.

A few months ago, however, I noticed an interesting short note on the arXiv in which the authors claim that one can probe the black hole quantization with gravitational waves emitted from a black hole, for example in the ringdown after a merger event like the one seen by LIGO:
    Testing Quantum Black Holes with Gravitational Waves
    Valentino F. Foit, Matthew Kleban
    arXiv:1611.07009 [hep-th]

The basic idea is simple. Assume it is correct that the black hole area is always a multiple of the Planck area and that gravity is quantized so that it has a particle – the graviton – associated with it. If the only way for a black hole to emit a graviton is to change its horizon area in multiples of the Planck area, then this dictates the energy that the black hole loses when the area shrinks because the black hole’s area depends on the black hole’s mass. The Planck-area quantization hence sets the frequency of the graviton that is emitted.
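
Here is a rough version of that argument in equations; a sketch for the Schwarzschild case, with α standing for the free proportionality constant mentioned below and order-one factors not to be taken too seriously:

```latex
\[
A \;=\; \frac{16\pi G^{2}M^{2}}{c^{4}},
\qquad
\delta A \;=\; \frac{32\pi G^{2}M}{c^{4}}\,\delta M \;=\; \alpha\,\ell_{\mathrm{P}}^{2}
\;\;\Rightarrow\;\;
\delta M\,c^{2} \;=\; \frac{\alpha\,\hbar\,c^{3}}{32\pi\,G M} \;\sim\; \frac{\hbar c}{r_s},
\]
```

so the emitted quantum has a wavelength of the order of the Schwarzschild radius r_s = 2GM/c², fixed up to the free parameter.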

A gravitational wave is nothing but a large number of gravitons. According to the area quantization, the wavelengths of the emitted gravitons are of the order of the black hole radius, which is what one expects to dominate the emission during the ringdown. However, so the authors argue, the spectrum of the gravitational wave should be much narrower in the quantum case.

Since the model that quantizes the black hole horizon in Planck-area chunks depends on a free parameter, it would take two measurements of black hole ringdowns to rule out the scenario: The first to fix the parameter, the second to check whether the same parameter works for all measurements.

It’s a simple idea but it may be too simple. The authors are careful to list the possible reasons for why their argument might not apply. I think it doesn’t apply for a reason that’s a combination of what is on their list.

A classical perturbation of the horizon leads to a simultaneous emission of a huge number of gravitons, and for those there is no good reason why every single one of them must fit the exact emission frequency that belongs to an increase of one Planck area as long as the total energy adds up properly.

I am not aware, however, of a good theoretical treatment of this classical limit from the area-quantization. It might indeed not work in some of the more audacious proposals we have recently seen, like Gia Dvali’s idea that black holes are condensates of gravitons. Scenarios such as Dvali’s might indeed be testable with the ringdown characteristics. I’m sure we will hear more about this in the coming years as LIGO accumulates data.

What this proposed test would do, therefore, is to probe the failure of reproducing general relativity for large oscillations of the black hole horizon. Clearly, it’s something that we should look for in the data. But I don’t think black holes will release their secrets quite as easily.

Friday, May 19, 2017

Can we use gravitational waves to rule out extra dimensions – and string theory with it?

Gravitational Waves,
Computer simulation.

Credits: Henze, NASA
Tl;dr: Probably not.

Last week I learned from New Scientist that “Gravitational waves could show hints of extra dimensions.” The article is about a paper which recently appeared on the arxiv:

The claim in this paper is nothing but stunning. Authors Andriot and Gómez argue that if our universe has additional dimensions, no matter how small, then we could find out using gravitational waves in the frequency regime accessible by LIGO.

While LIGO alone cannot do it because the measurement requires three independent detectors, soon upcoming experiments could either confirm or forever rule out extra dimensions – and kill string theory along the way. That, ladies and gentlemen, would be the discovery of the millennium. And, almost equally stunning, you heard it first from New Scientist.

Additional dimensions are today primarily associated with string theory, but the idea is much older. In the context of general relativity, it dates back to the work of Kaluza and Klein in the 1920s. I came across their papers as an undergraduate and was fascinated. Kaluza and Klein showed that if you add a fourth space-like coordinate to our universe and curl it up to a tiny circle, you don’t get back general relativity – you get back general relativity plus electrodynamics.

In the presently most widely used variants of string theory one has not one, but six additional dimensions and they can be curled up – or ‘compactified,’ as they say – to complicated shapes. But a key feature of the original idea survives: Waves which extend into the extra dimension must have wavelengths in integer fractions of the extra dimension’s radius. This gives rise to an infinite number of higher harmonics – the “Kaluza-Klein tower” – that appear like massive excitations of any particle that can travel into the extra dimensions.

The mass of these excitations is inversely proportional to the radius (in natural units). This means if the radius is small, one needs a lot of energy to create an excitation, and this explains why we haven’t yet noticed the additional dimensions.
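
For the simplest case, a single extra dimension curled up to a circle of radius R, the tower of masses is (a sketch)

```latex
\[
m_n \;=\; \frac{n\,\hbar}{R\,c}, \qquad n = 1, 2, 3, \dots
\]
```

which makes explicit why a small radius means heavy, hard-to-produce excitations.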

In the most commonly used model, one further assumes that the only particle that experiences the extra-dimensions is the graviton – the hypothetical quantum of the gravitational interaction. Since we have not measured the gravitational interaction on short distances as precisely as the other interactions, such gravity-only extra-dimensions allow for larger radii than all-particle extra-dimensions (known as “universal extra-dimensions”.) In the new paper, the authors deal with gravity-only extra-dimensions.

From the current lack of observation, one can then derive bounds on the size of the extra-dimension. These bounds depend on the number of extra-dimensions and on their intrinsic curvature. For the simplest case – the flat extra-dimensions used in the paper – the bounds range from a few micrometers (for two extra-dimensions) to a few inverse MeV for six extra dimensions (natural units again).

Such extra-dimensions do more, however, than giving rise to a tower of massive graviton excitations. Gravitational waves have spin two regardless of the number of spacelike dimensions, but the number of possible polarizations depends on the number of dimensions. More dimensions, more possible polarizations. And the number of polarizations, importantly, doesn’t depend on the size of the extra-dimensions at all.
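
The counting behind “more dimensions, more possible polarizations” is the standard little-group count for a massless graviton in D space-time dimensions:

```latex
\[
N_{\mathrm{pol}}(D) \;=\; \frac{D(D-3)}{2}
\qquad\Rightarrow\qquad
N_{\mathrm{pol}}(4)=2,\quad N_{\mathrm{pol}}(5)=5,\quad N_{\mathrm{pol}}(10)=35.
\]
```

Note that, as stated, this count knows nothing about the size of the extra dimensions.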

In the new paper, the authors point out that the additional polarization of the graviton affects the propagation even of the non-excited gravitational waves, i.e. the ones that we can measure. The modified geometry of general relativity gives rise to a “breathing mode,” that is, a gravitational wave which expands and contracts synchronously in the two (large) dimensions perpendicular to the direction of the wave. Such a breathing mode does not exist in normal general relativity, but it is not specific to extra-dimensions; other modifications of general relativity also have a breathing mode. Still, its non-observation would indicate no extra-dimensions.

But an old problem of Kaluza-Klein theories stands in the way of drawing this conclusion. The radii of the additional dimensions (also known as “moduli”) are unstable. You can assume that they have particular initial values, but there is no reason for the radii to stay at these values. If you shake an extra-dimension, its radius tends to run away. That’s a problem because then it becomes very difficult to explain why we haven’t yet noticed the extra-dimensions.

To deal with the unstable radius of an extra-dimension, theoretical physicists hence introduce a potential with a minimum at which the value of the radius is stuck. This isn’t optional – it’s necessary to prevent conflict with observation. One can debate how well-motivated that is, but it’s certainly possible, and it removes the stability problem.

Fixing the radius of an extra-dimension, however, will also make it more difficult to wiggle it – after all, that’s exactly what the potential was made to do. Unfortunately, in the above mentioned paper the authors don’t have stabilizing potentials.

I do not know for sure what stabilizing the extra-dimensions would do to their analysis. This would depend not only on the type and number of extra-dimension but also on the potential. Maybe there is a range in parameter-space where the effect they speak of survives. But from the analysis provided so far it’s not clear, and I am – as always – skeptical.

In summary: I don’t think we’ll rule out string theory any time soon.

[Updated to clarify breathing mode also appears in other modifications of general relativity.]

Saturday, March 11, 2017

Is Verlinde’s Emergent Gravity compatible with General Relativity?

Dark matter filaments, Millennium Simulation
Image: Volker Springel
A few months ago, Erik Verlinde published an update of his 2010 idea that gravity might originate in the entropy of so-far undetected microscopic constituents of space-time. Gravity, then, would not be fundamental but emergent.

With the new formalism, he derived an equation for a modified gravitational law that, on galactic scales, results in an effect similar to dark matter.

Verlinde’s emergent gravity builds on the idea that gravity can be reformulated as a thermodynamic theory, that is as if it was caused by the dynamics of a large number of small entities whose exact identity is unknown and also unnecessary to describe their bulk behavior.

If one wants to get back usual general relativity from the thermodynamic approach, one uses an entropy that scales with the surface area of a volume. Verlinde postulates there is another contribution to the entropy which scales with the volume itself. It’s this additional entropy that causes the deviations from general relativity.

However, in the vicinity of matter the volume-scaling entropy decreases until it’s entirely gone. Then, one is left with only the area-scaling part and gets normal general relativity. That’s why on scales where the average density is high – high compared to galaxies or galaxy clusters – the equation which Verlinde derives doesn’t apply. This would be the case, for example, near stars.

The idea quickly attracted attention in the astrophysics community, where a number of papers have since appeared which confront said equation with data. Not all of these papers are correct. Two of them seem to have missed entirely that the equation they are using doesn’t apply on solar-system scales. Of the remaining papers, three are fairly neutral in their conclusions, while one – by Lelli et al – is critical. The authors find that Verlinde’s equation – which assumes spherical symmetry – is a worse fit to the data than particle dark matter.

There has not, however, so far been much response from theoretical physicists. I’m not sure why that is. I spoke with science writer Anil Ananthaswamy some weeks ago and he told me he didn’t have an easy time finding a theorist willing to do as much as comment on Verlinde’s paper. In a recent Nautilus article, Anil speculates on why that might be:
“A handful of theorists that I contacted declined to comment, saying they hadn’t read the paper; in physics, this silent treatment can sometimes be a polite way to reject an idea, although, in fairness, Verlinde’s paper is not an easy read even for physicists.”
Verlinde’s paper is indeed not an easy read. I spent some time trying to make sense of it and originally didn’t get very far. The whole framework that he uses – dealing with an elastic medium and a strain-tensor and all that – isn’t only unfamiliar but also doesn’t fit together with general relativity.

The basic tenet of general relativity is coordinate invariance, and it’s absolutely not clear how it’s respected in Verlinde’s framework. So, I tried to see whether there is a way to make Verlinde’s approach generally covariant. The answer is yes, it’s possible. And it actually works better than I expected. I’ve written up my findings in a paper which just appeared on the arxiv:


It took some trying around, but I finally managed to guess a covariant Lagrangian that would produce the equations in Verlinde’s paper when one makes the same approximations. Without these approximations, the equations are fully compatible with general relativity. They are however – as so often in general relativity – hideously difficult to solve.

Making some simplifying assumptions allows one to at least find an approximate solution. It turns out, however, that even if one makes the same approximations as in Verlinde’s paper, the equation one obtains is not exactly the same as his – it has an additional integration constant.

My first impulse was to set that constant to zero, but upon closer inspection that didn’t make sense: The constant has to be determined by a boundary condition that ensures the gravitational field of a galaxy (or galaxy cluster) asymptotes to Friedmann-Robertson-Walker space filled with normal matter and a cosmological constant. Unfortunately, I haven’t been able to find the solution that one should get in the asymptotic limit, hence wasn’t able to fix the integration constant.

This means, importantly, that the data fits which assume the additional constant is zero do not actually constrain Verlinde’s model.

With the Lagrangian approach that I have tried, the interpretation of Verlinde’s model is very different – I dare say far less outlandish. There’s an additional vector-field which permeates space-time and which interacts with normal matter. It’s a strange vector field, both because it’s not a gauge-boson – unlike the other vector-fields we know of – and because it has a different kinetic energy term. In addition, the kinetic term appears in a way one doesn’t commonly have in particle physics but instead in condensed matter physics.

Interestingly, if you look at what this field would do if there was no other matter, it would behave exactly like a cosmological constant.

This, however, isn’t to say I’m sold on the idea. What I am missing is, most importantly, some clue that would tell me the additional field actually behaves like matter on cosmological scales, or at least sufficiently similarly to reproduce other observables, like, e.g., baryon acoustic oscillations. This should be possible to find out with the equations in my paper – if one manages to actually solve them.

Finding solutions to Einstein’s field equations is a specialized discipline and I’m not familiar with all the relevant techniques. I will admit that my primary method of solving the equations – to the big frustration of my reviewers – is to guess solutions. It works until it doesn’t. In the case of Friedmann-Robertson-Walker with two coupled fluids, one of which is the new vector field, it hasn’t worked. At least not so far. But the equations are in the paper and maybe someone else will be able to find a solution.

In summary, Verlinde’s emergent gravity has withstood the first-line bullshit test. Yes, it’s compatible with general relativity.

Friday, February 17, 2017

Black Hole Information - Still Lost

[Illustration of black hole.
Image: NASA]
According to Google, Stephen Hawking is the most famous physicist alive, and his most famous work is the black hole information paradox. If you know one thing about physics, therefore, that’s what you should know.

Before Hawking, black holes weren’t paradoxical. Yes, if you throw a book into a black hole you can’t read it anymore. That’s because what has crossed a black hole’s event horizon can no longer be reached from the outside. The event horizon is a closed surface inside of which everything, even light, is trapped. So there’s no way information can get out of the black hole; the book’s gone. That’s unfortunate, but nothing physicists sweat over. The information in the book might be out of sight, but nothing paradoxical about that.

Then came Stephen Hawking. In 1974, he showed that black holes emit radiation and this radiation doesn’t carry information. It’s entirely random, except for the distribution of particles as a function of energy, which is a Planck spectrum with temperature inversely proportional to the black hole’s mass. If the black hole emits particles, it loses mass, shrinks, and gets hotter. After enough time and enough emission, the black hole will be entirely gone, with no return of the information you put into it. The black hole has evaporated; the book can no longer be inside. So, where did the information go?
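
For concreteness, the temperature and the resulting evaporation time are, roughly,

```latex
\[
T_{\mathrm{H}} \;=\; \frac{\hbar c^{3}}{8\pi\,G M\,k_B}
\;\approx\; 6\times 10^{-8}\,\mathrm{K}\;\frac{M_\odot}{M},
\qquad
t_{\mathrm{evap}} \;\approx\; \frac{5120\,\pi\,G^{2}M^{3}}{\hbar\,c^{4}}
\;\approx\; 10^{67}\,\mathrm{yr}\left(\frac{M}{M_\odot}\right)^{3},
\]
```

which is why nobody expects to watch an astrophysical black hole finish evaporating.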

You might shrug and say, “Well, it’s gone, so what? Don’t we lose information all the time?” No, we don’t. At least, not in principle. We lose information in practice all the time, yes. If you burn the book, you aren’t able any longer to read what’s inside. However, fundamentally, all the information about what constituted the book is still contained in the smoke and ashes.

This is because the laws of nature, to our best current understanding, can be run both forwards and backwards – every unique initial-state corresponds to a unique end-state. There are never two initial-states that end in the same final state. The story of your burning book looks very different backwards. If you were able to very, very carefully assemble smoke and ashes in just the right way, you could unburn the book and reassemble it. It’s an exceedingly unlikely process, and you’ll never see it happening in practice. But, in principle, it could happen.

Not so with black holes. Whatever formed the black hole doesn't make a difference when you look at what you wind up with. In the end you only have this thermal radiation, which – in honor of its discoverer – is now called ‘Hawking radiation.’ That’s the paradox: Black hole evaporation is a process that cannot be run backwards. It is, as we say, not reversible. And that makes physicists sweat because it demonstrates they don’t understand the laws of nature.

Black hole information loss is paradoxical because it signals an internal inconsistency of our theories. When we combine – as Hawking did in his calculation – general relativity with the quantum field theories of the standard model, the result is no longer compatible with quantum theory. At a fundamental level, every interaction involving particle processes has to be reversible. Because of the non-reversibility of black hole evaporation, Hawking showed that the two theories don’t fit together.

The seemingly obvious origin of this contradiction is that the irreversible evaporation was derived without taking into account the quantum properties of space and time. For that, we would need a theory of quantum gravity, and we still don’t have one. Most physicists therefore believe that quantum gravity would remove the paradox – just how that works they still don’t know.

The difficulty with blaming quantum gravity, however, is that there isn’t anything interesting happening at the horizon – it's in a regime where general relativity should work just fine. That’s because the strength of quantum gravity should depend on the curvature of space-time, but the curvature at a black hole horizon depends inversely on the mass of the black hole. This means the larger the black hole, the smaller the expected quantum gravitational effects at the horizon.
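
To put a formula behind “nothing interesting happening at the horizon”: for a Schwarzschild black hole the curvature there, measured for example by the Kretschmann scalar, falls off rapidly with the mass,

```latex
\[
K \;=\; \frac{48\,G^{2}M^{2}}{c^{4}\,r^{6}}
\qquad\Rightarrow\qquad
K\Big|_{r = 2GM/c^{2}} \;=\; \frac{3}{4}\,\frac{c^{8}}{G^{4}M^{4}} \;\propto\; \frac{1}{M^{4}}.
\]
```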

Quantum gravitational effects would become noticeable only when the black hole has reached the Planck mass, about 10 micrograms. When the black hole has shrunken to that size, information could be released thanks to quantum gravity. But, depending on what the black hole formed from, an arbitrarily large amount of information might be stuck in the black hole until then. And when a Planck mass is all that’s left, it’s difficult to get so much information out with such little energy left to encode it.

For the last 40 years, some of the brightest minds on the planet have tried to solve this conundrum. It might seem bizarre that such an outlandish problem commands so much attention, but physicists have good reasons for this. The evaporation of black holes is the best-understood case for the interplay of quantum theory and gravity, and therefore might be the key to finding the right theory of quantum gravity. Solving the paradox would be a breakthrough and, without doubt, result in a conceptually new understanding of nature.

So far, most solution attempts for black hole information loss fall into one of four large categories, each of which has its pros and cons.

  • 1. Information is released early.

    The information starts leaking out long before the black hole has reached Planck mass. This is the presently most popular option. It is still unclear, however, how the information should be encoded in the radiation, and just how the conclusion of Hawking’s calculation is circumvented.

    The benefit of this solution is its compatibility with what we know about black hole thermodynamics. The disadvantage is that, for this to work, some kind of non-locality – a spooky action at a distance – seems inevitable. Worse still, it has recently been claimed that if information is released early, then black holes are surrounded by a highly-energetic barrier: a “firewall.” If a firewall exists, it would imply that the principle of equivalence, which underlies general relativity, is violated. Very unappealing.

  • 2. Information is kept, or it is released late.

    In this case, the information stays in the black hole until quantum gravitational effects become strong, when the black hole has reached the Planck mass. Information is then either released with the remaining energy or just kept forever in a remnant.

    The benefit of this option is that it does not require modifying either general relativity or quantum theory in regimes where we expect them to hold. They break down exactly where they are expected to break down: when space-time curvature becomes very large. The disadvantage is that some have argued it leads to another paradox, that of the possibility to infinitely produce black hole pairs in a weak background field: i.e., all around us. The theoretical support for this argument is thin, but it’s still widely used.

  • 3. Information is destroyed.

    Supporters of this approach just accept that information is lost when it falls into a black hole. This option was long believed to imply violations of energy conservation and hence cause another inconsistency. In recent years, however, new arguments have surfaced according to which energy might still be conserved with information loss, and this option has therefore seen a little revival. Still, by my estimate it’s the least popular solution.

    However, much like the first option, just saying that’s what one believes doesn’t make for a solution. And making this work would require a modification of quantum theory. This would have to be a modification that doesn’t lead to conflict with any of our experiments testing quantum mechanics. It’s hard to do.

  • 4. There’s no black hole.

    A black hole is never formed or information never crosses the horizon. This solution attempt pops up every now and then, but has never caught on. The advantage is that it’s obvious how to circumvent the conclusion of Hawking’s calculation. The downside is that this requires large deviations from general relativity in small curvature regimes, and it is therefore difficult to make compatible with precision tests of gravity.
There are a few other proposed solutions that don’t fall into any of these categories, but I will not – cannot! – attempt to review all of them here. In fact, there isn’t any good review on the topic – probably because the mere thought of compiling one is dreadful. The literature is vast. Black hole information loss is without doubt the most-debated paradox ever.

And it’s bound to remain so. The temperature of the black holes which we can observe today is far too small to be measurable. Hence, in the foreseeable future nobody is going to measure what happens to the information which crosses the horizon. Let me therefore make a prediction: Ten years from now, the problem will still be unsolved.

Hawking just celebrated his 75th birthday, which is a remarkable achievement by itself. 50 years ago, his doctors predicted he would soon be dead, but he has stubbornly hung onto life. The black hole information paradox may prove to be even more stubborn. Unless a revolutionary breakthrough comes, it may outlive us all.

(I wish to apologize for not including references. If I’d start with this, I wouldn’t be done by 2020.)

[This post previously appeared on Starts With A Bang.]

Friday, January 13, 2017

What a burst! A fresh attempt to see space-time foam with gamma ray bursts.

It’s an old story: Quantum fluctuations of space-time might change the travel-time of light. Light of higher frequencies would be a little faster than that of lower frequencies. Or slower, depending on the sign of an unknown constant. Either way, the spectral colors of light would run apart, or ‘disperse’ as they say if they don’t want you to understand what they say.

Such quantum gravitational effects are minuscule, but added up over long distances they can become observable. Gamma ray bursts are therefore ideal to search for evidence of such an energy-dependent speed of light. Indeed, the energy-dependent speed of light has been sought for and not been found, and that could have been the end of the story.
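
The usual parameterization of such an effect, and the delay it accumulates over a travel distance D, looks roughly like this (a sketch that ignores the cosmological expansion, which in the real analyses enters through a redshift integral):

```latex
\[
v(E) \;\approx\; c\left[1 \pm \left(\frac{E}{E_{\mathrm{QG}}}\right)^{\!n}\right],
\qquad
\Delta t \;\approx\; \mp\,\frac{D}{c}\left(\frac{E}{E_{\mathrm{QG}}}\right)^{\!n},
\qquad n = 1\ \text{or}\ 2,
\]
```

with E_QG expected somewhere near the Planck energy. For GeV photons from cosmological distances the linear case then gives delays in the range of tens of milliseconds.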

Of course it wasn’t because rather than giving up on the idea, the researchers who’d been working on it made their models for the spectral dispersion increasingly difficult and became more inventive when fitting them to unwilling data. Last thing I saw on the topic was a linear regression with multiple curves of freely chosen offset – sure way to fit any kind of data on straight lines of any slope – and various ad-hoc assumptions to discard data that just didn’t want to fit, such as energy cuts or changes in the slope.

These attempts were so desperate I didn’t even mention them previously because my grandma taught me if you have nothing nice to say, say nothing.

But here’s a new twist to the story, so now I have something to say, and something nice in addition.

On June 25, 2016, the Fermi Telescope recorded a truly remarkable burst. The event, GRB160625, lasted 770 s in total and consisted of three separate sub-bursts, with the second, and largest, sub-burst lasting 35 seconds (!). This has to be contrasted with the typical burst lasting a few seconds in total.

This gamma ray burst for the first time allowed researchers to clearly quantify the relative delay of the different energy channels. The analysis can be found in this paper
    A New Test of Lorentz Invariance Violation: the Spectral Lag Transition of GRB 160625B
    Jun-Jie Wei, Bin-Bin Zhang, Lang Shao, Xue-Feng Wu, Peter Mészáros
    arXiv:1612.09425 [astro-ph.HE]

Unlike type Ia supernovae, which have very regular profiles, gamma ray bursts are one of a kind and can therefore be compared only to themselves. This makes it very difficult to tell whether or not highly energetic parts of the emission are systematically delayed, because one doesn’t know when they were emitted. Until now, the analysis relied on some way of guessing the peaks in three different energy channels and (basically) assuming they were emitted simultaneously. This procedure sometimes relied on as little as one or two photons per peak. Not an analysis you should put a lot of trust in.

But the second sub-burst of GRB160625 was so bright that the researchers could break it down into 38 energy channels – and the counts were still high enough to calculate the cross-correlation from which the (most likely) time-lag can be extracted.
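
To illustrate what “calculate the cross-correlation” means in practice, here is a minimal sketch in Python. The array names and bin width are hypothetical, and the actual analysis in the paper uses a more careful discrete cross-correlation function with Monte Carlo error estimates:

```python
import numpy as np

def most_likely_lag(counts_low, counts_high, bin_width):
    """Estimate the lag of the high-energy light curve relative to the
    low-energy one from the peak of their cross-correlation.
    A positive result means the high-energy photons arrive earlier,
    matching the sign convention described below."""
    a = (counts_low - counts_low.mean()) / counts_low.std()
    b = (counts_high - counts_high.mean()) / counts_high.std()
    cc = np.correlate(a, b, mode="full")       # correlation at every shift
    shifts = np.arange(-len(a) + 1, len(a))    # shift in number of bins
    return shifts[np.argmax(cc)] * bin_width   # most likely lag in seconds

# Hypothetical usage with two light curves binned on the same 0.1 s grid:
# lag = most_likely_lag(lightcurve_low, lightcurve_high, bin_width=0.1)
```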

Here are the 38 energy channels for the second sub-burst

Fig 1 from arXiv:1612.09425


For the 38 energy channels they calculate 37 delay-times relative to the lowest energy channel, shown in the figure below. I find it a somewhat confusing convention, but in their nomenclature a positive time-lag corresponds to an earlier arrival time. The figure therefore shows that the photons of higher energy arrive earlier. The trend, however, isn’t monotonically increasing. Instead, it turns around at a few GeV.

Fig 2 from arXiv:1612.09425


The authors then discuss a simple model to fit the data. First, they assume that the emission has an intrinsic energy-dependence due to astrophysical effects which cause a positive lag. They model this with a power-law that has two free parameters: an exponent and an overall pre-factor.

Second, they assume that the effect during propagation – presumably from the space-time foam – causes a negative lag. For the propagation-delay they also make a power-law ansatz which is either linear or quadratic. This ansatz has one free parameter which is an energy scale (expected to be somewhere at the Planck energy).

In total they then have three free parameters, for which they calculate the best-fit values. The fitted curves are also shown in the image above, labeled n=1 (linear) and n=2 (quadratic). At some energy, the propagation-delay becomes more relevant than the intrinsic delay, which leads to the turn-around of the curve.
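
A minimal sketch of what such a three-parameter fit could look like in code. The function below is my own toy version rather than the exact parameterization of the paper: the distance/redshift dependence is lumped into a single hypothetical constant K_DIST, and all names are made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

K_DIST = 1.0e17  # hypothetical distance factor in seconds, stands in for the redshift integral

def lag_model(E, tau, alpha, log10_EQG, n=1):
    """Observed spectral lag (s) versus photon energy E (GeV): an intrinsic,
    astrophysical power law (positive lag) plus a propagation term
    (negative lag) that grows as (E/E_QG)^n."""
    intrinsic = tau * E**alpha
    propagation = -K_DIST * (E / 10**log10_EQG)**n
    return intrinsic + propagation

# Hypothetical usage with measured energies E (GeV), lags (s) and errors (s):
# popt, pcov = curve_fit(lag_model, E, lags, sigma=errors, p0=[1.0, 0.5, 16.0])
# With three entries in p0, only tau, alpha and log10_EQG are fitted; n stays fixed at 1 or 2.
```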

The best-fit value of the quantum gravity energy is 10^q GeV with q=15.66 for the linear and q=7.17 for the quadratic case. From this they extract a lower limit on the quantum gravity scale at the 1 sigma confidence level, which is 0.5 × 10^16 GeV for the linear and 1.4 × 10^7 GeV for the quadratic case. As you can see in the above figure, the data in the high energy bins has large error-bars owing to the low total count, so the evidence that there even is a drop isn’t all that great.

I still don’t buy there’s some evidence for space-time foam to find here, but I have to admit that this data finally convinces me that at least there is a systematic lag in the spectrum. That’s the nice thing I have to say.

Now to the not-so-nice. If you want to convince me that some part of the spectral distortion is due to a propagation-effect, you’ll have to show me evidence that its strength depends on the distance to the source. That is, in my opinion, the only way to make sure one doesn’t merely look at delays present already at emission. And even if you’d done that, I still wouldn’t be convinced that it has anything to do with space-time foam.

I’m skeptical of this because the theoretical backing is sketchy. Quantum fluctuations of space-time in any candidate theory for quantum gravity do not lead to this effect. One can work with phenomenological models, in which such effects are parameterized and incorporated as new physics into the known theories. This is all well and fine. Unfortunately, in this case existing data already constrains the parameters so that the effect on the propagation of light is unmeasurably small. It’s already ruled out. Such models introduce a preferred frame and break Lorentz-invariance, and there are loads of data speaking against that.

It has been claimed that the already existing constraints from Lorentz-invariance violation can be circumvented if Lorentz-invariance is not broken but instead deformed. In this case the effective field theory limit supposedly doesn’t apply. This claim is also quoted in the paper above (see end of section 3.) However, if you look at the references in question, you will not find any argument for how one manages to avoid this. Even if one can make such an argument though (I believe it’s possible, not sure why it hasn’t been done), the idea suffers from various other theoretical problems that, to make a very long story very short, make me think the quantum gravity-induced spectral lag is highly implausible.

However, leaving aside my theory-bias, this newly proposed model with two overlaid sources for the energy-dependent time-lag is simple and should be straightforward to test. Most likely we will soon see another paper evaluating how well the model fits other bursts on record. So stay tuned, something’s happening here.

Tuesday, January 03, 2017

The Bullet Cluster as Evidence against Dark Matter

Once upon a time, at the far end of the universe, two galaxy clusters collided. Their head-on encounter tore apart the galaxies and left behind two reconfigured heaps of stars and gas, separating again and moving apart from each other, destiny unknown.

Four billion years later, a curious group of water-based humanoid life-forms tries to make sense of the galaxies’ collision. They point their telescopes at the clusters’ relic and admire its odd shape. They call it the “Bullet Cluster.”

In the below image of the Bullet Cluster you see three types of data overlaid. First, there are the stars and galaxies in the optical regime. (Can you spot the two foreground objects?) Then there are the regions colored red which show the distribution of hot gas, inferred from X-ray measurements. And the blue-colored regions show the space-time curvature, inferred from gravitational lensing which deforms the shape of galaxies behind the cluster.

The Bullet Cluster.
[Img Src: APOD. Credits: NASA]


The Bullet Cluster comes to play an important role in the humanoids’ understanding of the universe. Already a generation earlier, they had noticed that their explanation for the gravitational pull of matter did not match observations. The outer stars of many galaxies, they saw, moved faster than expected, meaning that the gravitational pull was stronger than what their theories could account for. Galaxies bound together in clusters, too, were moving too fast, indicating more pull than expected. The humanoids concluded that their theory, according to which gravity was due to space-time curvature, had to be modified.

Some of them, however, argued it wasn’t gravity they had gotten wrong. They thought there was instead an additional type of unseen “dark matter” that was interacting so weakly it wouldn’t have any consequences besides the additional gravitational pull. They even tried to catch the elusive particles, but without success. Experiment after experiment reported null results. Decades passed. And yet, they claimed, the dark matter particles might just be even more weakly interacting. They built larger experiments to catch them.

Dark matter was a convenient invention. It could be distributed in just the right amounts wherever necessary and that way the data of every galaxy and galaxy cluster could be custom-fit. But while dark matter worked well to fit the data, it failed to explain how regular the modification of the gravitational pull seemed to be. On the other hand, a modification of gravity was difficult to work with, especially for handling the dynamics of the early universe, which was much easier to explain with particle dark matter.

To move on, the curious scientists had to tell apart their two hypotheses: Modified gravity or particle dark matter? They needed an observation able to rule out one of these ideas, a smoking gun signal – the Bullet Cluster.

The theory of particle dark matter had become known as the “concordance model” (also: ΛCDM). It heavily relied on computer simulations which were optimized so as to match the observed structures in the universe. From these simulations, the scientists could tell the frequency with which galaxy clusters should collide and the typical relative speed at which that should happen.

From the X-ray observations, the scientists inferred that the collision of the galaxies in the Bullet Cluster must have taken place at a relative velocity of approximately 3000 km/s. But such high collision speeds almost never occurred in the computer simulations based on particle dark matter. The scientists estimated the probability for a Bullet-Cluster-like collision to be about one in ten billion, and concluded: that we see such a collision is incompatible with the concordance model. And that’s how the Bullet Cluster became strong evidence in favor of modified gravity.

However, a few years later some inventive humanoids had optimized the dark-matter based computer simulations and arrived at a more optimistic estimate of a probability of 4.6×10^-4 for seeing something like the Bullet Cluster. Shortly thereafter they revised the probability again to 6.4×10^-6.

Either way, the Bullet Cluster remained a stunningly unlikely event to happen in the theory of particle dark matter. It was, in contrast, easy to accommodate in theories of modified gravity, in which collisions with high relative velocity occur much more frequently.

It might sound like a story from a parallel universe – but it’s true. The Bullet Cluster isn’t the incontrovertible evidence for particle dark matter that you have been told it is. It’s possible to explain the Bullet Cluster with models of modified gravity. And it’s difficult to explain it with particle dark matter.

How come we so rarely read about the difficulties the Bullet Cluster poses for particle dark matter? It’s because the pop-sci media likes nothing better than a simple explanation that comes with an image that has “scientific consensus” written all over it. Isn’t it obvious the visible stuff is separated from the center of the gravitational pull?

But modifying gravity works by introducing additional fields that are coupled to gravity. There’s no reason that, in a dynamical system, these fields have to be focused at the same place where the normal matter is. Indeed, one would expect that modified gravity too should have a path dependence that leads to such a delocalization as is observed in this, and other, cluster collisions. And never mind that when they pointed at the image of the Bullet Cluster nobody told you how rarely such an event occurs in models with particle dark matter.

No, the real challenge for modified gravity isn’t the Bullet Cluster. The real challenge is to get the early universe right, to explain the particle abundances and the temperature fluctuations in the cosmic microwave background. The Bullet Cluster is merely a red-blue herring that circulates on social media as a shut-up argument. It’s a simple explanation. But simple explanations are almost always wrong.

Friday, December 16, 2016

Cosmic rays hint at new physics just beyond the reach of the LHC

Cosmic ray shower. Artist’s impression.
[Img Src]
The Large Hadron Collider (LHC) – the world’s presently most powerful particle accelerator – reaches a maximum collision energy of 14 TeV. But cosmic rays that collide with atoms in the upper atmosphere have been measured with collision energies about ten times as high.

The two types of observations complement each other. At the LHC, energies are smaller, but collisions happen in a closely controlled experimental environment, directly surrounded by detectors. This is not the case for cosmic rays – their collisions reach higher energies, but the experimental uncertainties are higher.

Recent results from the Pierre Auger Cosmic Ray Observatory at center-of-mass energies of approximately 100 TeV are incompatible with the Standard Model of particle physics and hint at unexplained new phenomena. The statistical significance is not high, currently at 2.1 sigma (or 2.9 for a more optimistic simulation). This is approximately a one-in-100 probability of being due to random fluctuations.

Cosmic rays are created by protons or light atomic nuclei which come from outer space. These particles are accelerated in galactic magnetic fields, though exactly how they get their high speeds is often unknown. When they enter the atmosphere of planet Earth, they sooner or later hit an air molecule. This destroys the initial particle and creates a primary shower of new particles. This shower has an electromagnetic part and a part of quarks and gluons that quickly form bound states known as hadrons. These particles undergo further decays and collisions, leading to a secondary shower.

The particles of the secondary shower can be detected on Earth in large detector arrays like Pierre Auger, which is located in Argentina. Pierre Auger has two types of detectors: 1) detectors that directly collect the particles which make it to the ground, and 2) fluorescence detectors which capture the light emitted by air molecules ionized by the shower.

The hadronic component of the shower is dominated by pions, which are the lightest mesons and composed of a quark and an anti-quark. The neutral pions decay quickly, mostly into photons; the charged pions create muons which make it into the ground-based detectors.

It has been known for several years that the muon signal seems too large compared to the electromagnetic signal – the balance between the two is off. This, however, did not rest on a very solid data analysis, because it depended on an estimate of the total energy – and that’s very hard to do if you don’t measure all particles of the shower and have to extrapolate from what you do measure.

In the new paper – just published in PRL – the Pierre Auger collaboration used a different analysis method for the data, one that does not depend on the total energy calibration. They individually fit the results of detected showers by comparing them to computer-simulated events. From a previously generated sample, they pick the simulated event that best matches the fluorescence result.

Then they add two parameters to also fit the hadronic result: one parameter adjusts the energy calibration of the fluorescence signal, the other rescales the number of particles in the hadronic component. Then they look for the best-fit values and find that these are systematically off the standard model prediction. As an aside, their analysis also shows that the energy does not need to be recalibrated.
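
For illustration, here is a minimal sketch of what such a two-parameter rescaling fit looks like in principle. The toy data, the simple two-component model, and the parameter names (R_E, R_had) are my own assumptions for the example – the collaboration’s actual analysis is considerably more involved.

import numpy as np
from scipy.optimize import minimize

# Toy "simulated" ground signals for the best-matching shower, split into
# an electromagnetic and a hadronic (muonic) component, at four stations.
S_em_sim  = np.array([5.0, 3.2, 2.1, 1.4])
S_had_sim = np.array([2.0, 1.5, 1.1, 0.8])

# Pretend the measured hadronic component is 60% larger than simulated.
rng = np.random.default_rng(1)
S_meas = S_em_sim + 1.6 * S_had_sim + rng.normal(0.0, 0.1, 4)
sigma = 0.1 * np.ones(4)

def chi2(params):
    R_E, R_had = params
    # R_E rescales the overall energy (hence both components),
    # R_had additionally rescales only the hadronic component.
    model = R_E * (S_em_sim + R_had * S_had_sim)
    return np.sum(((S_meas - model) / sigma) ** 2)

best = minimize(chi2, x0=[1.0, 1.0])
print("best-fit R_E, R_had:", best.x)  # recovers roughly (1.0, 1.6)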

The main reason for the mismatch with the standard model predictions is that the detectors measure more muons than expected. What’s up with those muons? Nobody knows, but the origin of the mystery seems not in the muons themselves, but in the pions from whose decay they come.

Since the neutral pions have a very short lifetime and decay almost immediately into photons, essentially all energy that goes into neutral pions is lost for the production of muons. Besides the neutral pion there are two charged pions, and the more energy is left for these and other hadrons, the more muons are produced in the end. So the result by Pierre Auger indicates that the total energy in neutral pions is smaller than what the present simulations predict.
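
To see why, consider the Heitler-Matthews back-of-the-envelope model of a hadronic shower – a textbook toy picture, not the full simulation: in each interaction roughly a third of the energy goes into neutral pions and is lost to the electromagnetic part, while the rest keeps feeding charged pions that eventually decay into muons. The larger the surviving charged fraction, the more muons at the end. Here is a minimal sketch with illustrative numbers:

import math

def n_muons(E0_GeV, n_total=50, charged_fraction=2/3, E_crit_GeV=20.0):
    """Heitler-Matthews-style estimate of the muon number: charged pions
    re-interact until their energy drops to E_crit, then decay to muons.
    N_mu = (E0/E_crit)^beta with beta = ln(N_charged)/ln(N_total)."""
    beta = math.log(charged_fraction * n_total) / math.log(n_total)
    return (E0_GeV / E_crit_GeV) ** beta

E0 = 1.0e10  # a 10^19 eV primary, in GeV
print(n_muons(E0, charged_fraction=2/3))   # ~1/3 of the energy to neutral pions
print(n_muons(E0, charged_fraction=0.75))  # less energy to neutral pions -> more muons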

One possible explanation for this, which has been proposed by Farrar and Allen, is that we misunderstand chiral symmetry breaking. It is the breaking of chiral symmetry that accounts for the biggest part of the masses of nucleons (not the Higgs!). The pions are the (pseudo) Goldstone bosons of that broken symmetry, which is why they are so light and ultimately why they are produced so abundantly. Pions are not exactly massless, and thus “pseudo”, because chiral symmetry is only approximate. The chiral phase transition is believed to lie close to the confinement transition, the transition from a medium of quarks and gluons to color-neutral hadrons. For all we know, it takes place at a temperature of approximately 150 MeV. Above that temperature chiral symmetry is “restored”.

Chiral symmetry restoration almost certainly plays a role in the cosmic ray collisions, and a more important role than it does at the LHC. So, quite possibly this is the culprit here. But it might be something more exotic, new short-lived particles that become important at high energies and which make interaction probabilities deviate from the standard model extrapolation. Or maybe it’s just a measurement fluke that will go away with more data.

If the signal remains, however, that’s a strong motivation to build the next larger particle collider which could reach 100 TeV. Our accelerators would then be as good as the heavens.


[This post previously appeared on Forbes.]

Friday, December 02, 2016

Can dark energy and dark matter emerge together with gravity?

A macaroni pie? Elephants blowing balloons?
No, it’s Verlinde’s entangled universe.
In a recent paper, the Dutch physicist Erik Verlinde explains how dark energy and dark matter arise in emergent gravity as deviations from general relativity.

It’s taken me some while to get through the paper. Vaguely titled “Emergent Gravity and the Dark Universe,” it’s a 51-page catalog of ideas patched together from general relativity, quantum information, quantum gravity, condensed matter physics, and astrophysics. It is clearly still research in progress and not anywhere close to completion.

The new paper substantially expands on Verlinde’s earlier idea that the gravitational force is some type of entropic force. If that was so, it would mean gravity is not due to the curvature of space-time – as Einstein taught us – but instead caused by the interaction of the fundamental elements which make up space-time. Gravity, hence, would be emergent.

I find it an appealing idea because it allows one to derive consequences without having to specify exactly what the fundamental constituents of space-time are. Like you can work out the behavior of gases under pressure without having a model for atoms, you can work out the emergence of gravity without having a model for whatever builds up space-time. The details would become relevant only at very high energies.

As I noted in a comment on the first paper, Verlinde’s original idea was merely a reinterpretation of gravity in thermodynamic quantities. What one really wants from emergent gravity, however, is not merely to get back general relativity. One wants to know which deviations from general relativity come with it, deviations that are specific predictions of the model and which can be tested.

Importantly, in emergent gravity such deviations from general relativity could make themselves noticeable at long distances. The reason is that the criterion for what it means for two points to be close by each other emerges with space-time itself. Hence, in emergent gravity there isn’t a priori any reason why new physics must be at very short distances.

In the new paper, Verlinde argues that his variant of emergent gravity gives rise to deviations from general relativity on long distances, and these deviations correspond to dark energy and dark matter. He doesn’t explain dark energy itself. Instead, he starts with a universe that by assumption contains dark energy like we observe, i.e. one that has a positive cosmological constant. Such a universe is described approximately by what theoretical physicists call a de Sitter space.

Verlinde then argues that when one interprets this cosmological constant as the effect of long-distance entanglement between the conjectured fundamental elements, then one gets a modification of the gravitational law which mimics dark matter.

The reason it works is that to get normal gravity one assigns an entropy to a volume of space which scales with the area of the surface that encloses the volume. This is known as the “holographic scaling” of entropy, and is at the core of Verlinde’s first paper (and earlier work by Jacobson and Padmanabhan and others). To get deviations from normal gravity, one has to do something else. For this, Verlinde argues that de Sitter space is permeated by long-distance entanglement which gives rise to an entropy that scales, not with the surface area of a volume, but with the volume itself. It consequently leads to a different force-law. And this force-law, so he argues, has an effect very similar to dark matter.
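
For orientation, here is the back-of-the-envelope version of the area-scaling argument – essentially the standard entropic-gravity derivation of Newton’s law, not Verlinde’s new volume term:

% Unruh temperature associated with an acceleration a:
k_B T = \frac{\hbar a}{2\pi c},
% number of bits on a holographic screen of area A = 4\pi r^2:
N = \frac{A c^3}{G \hbar},
% equipartition of the enclosed energy E = M c^2 over these bits:
M c^2 = \tfrac{1}{2} N k_B T = \frac{A c^2 a}{4\pi G}
\quad\Longrightarrow\quad
a = \frac{G M}{r^2}.

Replacing the area-scaling entropy by a volume-scaling contribution changes the last step and hence the force-law, which is where the dark-matter-like behavior is supposed to come from.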

Not only does this modified force-law from the volume-scaling of the entropy mimic dark matter, it more specifically reproduces some of the achievements of modified gravity.

In his paper, Verlinde derives the observed relation between the luminosity of spiral galaxies and the rotation velocity of their outermost stars, known as the Tully-Fisher relation. The Tully-Fisher relation can also be found in certain modifications of gravity, such as Moffat Gravity (MOG), but more generally in every modification that approximates Milgrom’s Modified Newtonian Dynamics (MOND). Verlinde, however, does more than that. He also derives the parameter which quantifies the acceleration at which the modification of general relativity becomes important, and gets a value that fits well with observations.
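
As a reminder of what is being reproduced here, this is the standard back-of-the-envelope argument for the Tully-Fisher relation in the deep-MOND regime (not Verlinde’s actual derivation):

% In the deep-MOND regime the effective acceleration is the geometric mean
% of the Newtonian acceleration and the scale a_0:
g \simeq \sqrt{a_0\, g_{\rm N}} = \sqrt{\frac{a_0\, G M}{r^2}},
\qquad
\frac{v^2}{r} = g
\quad\Longrightarrow\quad
v^4 = a_0\, G\, M .

With the baryonic mass M tracking the luminosity, this is the Tully-Fisher relation, and the acceleration scale that fits the data, a_0 ≈ 10^-10 m/s^2, is numerically close (up to a factor of order one) to c times the Hubble rate – which is the relation to the cosmological constant discussed in the next paragraph.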

It was known before that this parameter is related to the cosmological constant. There have been various attempts to exploit this relation, most recently by Lee Smolin. In Verlinde’s approach the relation between the acceleration scale and the cosmological constant comes out naturally, because dark matter has the same origin as dark energy. Verlinde further offers expressions for the apparent density of dark matter in galaxies and clusters, something that, with some more work, can probably be checked observationally.

I find this an intriguing link which suggests that Verlinde is onto something. However, I also find the model sketchy and unsatisfactory in many regards. General Relativity is a rigorously tested theory with many achievements. To do any better than general relativity is hard, and thus for any new theory of gravity the most important thing is to have a controlled limit in which General Relativity is reproduced to good precision. How this might work in Verlinde’s approach isn’t clear to me because he doesn’t even attempt to deal with the general case. He starts right away with cosmology.

Now in cosmology we have a preferred frame which is given by the distribution of matter (or by the rest frame of the CMB if you wish). In general relativity this preferred frame does not originate in the structure of space-time itself but is generated by the stuff in it. In emergent gravity models, in contrast, the fundamental structure of space-time tends to have an imprint of the preferred frame. This fundamental frame can lead to violations of the symmetries of general relativity, and the effects aren’t necessarily small. Indeed, there are many experiments that have looked for such effects and haven’t found anything. It is hence a challenge for any emergent gravity approach to demonstrate just how it avoids such violations of symmetries.

Another potential problem with the idea is the long-distance entanglement which is sprinkled over the universe. The physics which we know so far works “locally,” meaning stuff can’t interact over long distances without a messenger that travels through space and time from one to the other point. It’s the reason my brain can’t make spontaneous visits to the Andromeda nebula, and most days I think that benefits both of us. But like that or not, the laws of nature we presently have are local, and any theory of emergent gravity has to reproduce that.

I have worked for some years on non-local space-time defects, and based on what I learned from that I don’t think the non-locality of Verlinde’s model is going to be a problem. My non-local defects aren’t the same as Verlinde’s entanglement, but guessing that the observational consequences scale similarly, the amount of entanglement that you need to get something like a cosmological constant is too small to leave any other noticeable effects on particle physics. I am therefore more worried about the recovery of local Lorentz-invariance. I went to great pains in my models to make sure I wouldn’t get such violations, and I can’t see how Verlinde addresses the issue.

The more general problem I have with Verlinde’s paper is the same I had with his 2010 paper, which is that it’s fuzzy. It remained unclear to me exactly what the necessary assumptions are. I hence don’t know whether it’s really necessary to have this interpretation with the entanglement and the volume-scaling of the entropy and with assigning elasticity to the dark energy component that pushes in on galaxies. Maybe it would already be sufficient to add a non-local modification to the sources of general relativity. Having toyed with that idea for a while, I doubt it. But I think Verlinde’s approach would benefit from a more axiomatic treatment.

In summary, Verlinde’s recent paper offers the most convincing argument I have seen so far that dark matter and dark energy are related. However, it is presently unclear whether this approach also has unwanted side-effects that are already in conflict with observation.

Monday, October 31, 2016

Modified Gravity vs Particle Dark Matter. The Plot Thickens.

They sit in caves, deep underground. Surrounded by lead, protected from noise, shielded from the warmth of the Sun, they wait. They wait for weakly interacting massive particles – WIMPs for short – the elusive stuff that many physicists believe makes up 80% of the matter in the universe. They have been waiting for 30 years, but the detectors haven’t caught a single WIMP.

Even though the sensitivity of dark matter detectors has improved by more than five orders of magnitude since the early 1980s, all results so far are compatible with zero events. The searches for axions, another popular dark matter candidate, haven’t fared any better. Coming generations of dark matter experiments will cross into the regime where the neutrino background becomes comparable to the expected signal. But, as a colleague recently pointed out to me, this merely means that the experimentalists have to understand the background better.

Maybe in 100 years they’ll still sit in caves, deep underground. And wait.

Meanwhile others are running out of patience. Particle dark matter is a great explanation for all the cosmological observations that general relativity sourced by normal matter cannot explain. But maybe it isn’t right after all. The alternative to using general relativity and adding particle dark matter is to modify general relativity so that space-time curves differently in response to the matter we already know.

Already in the mid-1980s, Mordehai Milgrom showed that modifying gravity has the potential to explain observations commonly attributed to particle dark matter. He proposed Modified Newtonian Dynamics – MOND for short – to explain the galactic rotation curves instead of adding particle dark matter. Intriguingly, MOND, despite having only one free parameter, fits a large number of galaxies. It doesn’t work well for galaxy clusters, but its success with galaxies clearly shows that many galaxies are similar in very distinct ways – ways that the concordance model (also known as LambdaCDM) hasn’t been able to account for.

In its simplest form the concordance model has sources which are collectively described as homogeneous throughout the universe – an approximation known as the cosmological principle. In this form, the concordance model doesn’t predict how galaxies rotate – it merely describes the dynamics on supergalactic scales.

To get galaxies right, physicists have to also take into account astrophysical processes within the galaxies: how stars form, which stars form, where they form, how they interact with the gas, how long they live, when and how they go supernova, what magnetic fields permeate the galaxies, how the fields affect the intergalactic medium, and so on. It’s a mess, and it requires intricate numerical simulations to figure out just exactly how galaxies come to look the way they look.

And so, physicists today are divided in two camps. In the larger camp are those who think that the observed galactic regularities will eventually be accounted for by the concordance model. It’s just that it’s a complicated question that needs to be answered with numerical simulations, and the current simulations aren’t good enough. In the smaller camp are those who think there’s no way these regularities will be accounted for by the concordance model, and modified gravity is the way to go.

In a recent paper, McGaugh et al reported a correlation among the rotation curves of 153 observed galaxies. They plotted the gravitational pull from the visible matter in the galaxies (gbar) against the gravitational pull inferred from the observations (gobs), and find that the two are closely related.

Figure from arXiv:1609.05917 [astro-ph.GA] 
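
For concreteness, the correlation is commonly summarized by a one-parameter fitting function. The form below is my recollection of the expression used by McGaugh et al – see arXiv:1609.05917 for the exact formula and best-fit value; the numbers here are only approximate.

import numpy as np

g_dagger = 1.2e-10  # m/s^2, approximate acceleration scale of the fit

def g_obs(g_bar):
    """Observed acceleration as a function of the baryonic (Newtonian) one."""
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / g_dagger)))

# High accelerations: g_obs -> g_bar (Newtonian regime).
print(g_obs(1.0e-9) / 1.0e-9)
# Low accelerations: g_obs -> sqrt(g_bar * g_dagger) (MOND-like regime).
print(g_obs(1.0e-12) / np.sqrt(1.0e-12 * g_dagger))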

This correlation – the mass-discrepancy-acceleration relation (MDAR) – so they emphasize, is not itself new; it’s just a new way to present previously known correlations. As they write in the paper:
“[This Figure] combines and generalizes four well-established properties of rotating galaxies: flat rotation curves in the outer parts of spiral galaxies; the “conspiracy” that spiral rotation curves show no indication of the transition from the baryon-dominated inner regions to the outer parts that are dark matter-dominated in the standard model; the Tully-Fisher relation between the outer velocity and the inner stellar mass, later generalized to the stellar plus atomic hydrogen mass; and the relation between the central surface brightness of galaxies and their inner rotation curve gradient.”
But this was only act 1.

In act 2, another group of researchers responds to the McGaugh et al paper. They present results of a numerical simulation for galaxy formation and claim that particle dark matter can account for the MDAR. The end of MOND, so they think, is near.

Figure from arXiv:1610.06183 [astro-ph.GA]

McGaugh, hero of act 1, points out that the sample size for this simulation is tiny and also pre-selected to reproduce galaxies like we observe. Hence, he thinks the results are inconclusive.

In act 3, Mordehai Milgrom – the original inventor of MOND – posts a comment on the arXiv. He also complains about the sample size of the numerical simulation and further explains that there is much more to MOND than the MDAR correlation. Numerical simulations with particle dark matter have been developed to fit observations, he writes, so it’s not surprising they now fit observations.

“The simulation in question attempt to treat very complicated, haphazard, and unknowable events and processes taking place during the formation and evolution histories of these galaxies. The crucial baryonic processes, in particular, are impossible to tackle by actual, true-to-nature, simulation. So they are represented in the simulations by various effective prescriptions, which have many controls and parameters, and which leave much freedom to adjust the outcome of these simulations [...]

The exact strategies involved are practically impossible to pinpoint by an outsider, and they probably differ among simulations. But, one will not be amiss to suppose that over the years, the many available handles have been turned so as to get galaxies as close as possible to observed ones.”
In act 4, another paper appears with results of a numerical simulation of galaxy structures with particle dark matter.

This one uses a code with the acronym EAGLE, for Evolution and Assembly of GaLaxies and their Environments. This code has “quite a few” parameters, as Aaron Ludlow, the paper’s first author, told me, and these parameters have been optimized to reproduce realistic galaxies. In this simulation, however, the authors didn’t use the optimized parameter configuration but let several parameters (3-4) vary to produce a larger set of galaxies. These galaxies in general do not look like those we observe. Nevertheless, the researchers find that all of their galaxies display the MDAR correlation.

This would indicate that particle dark matter is enough to describe the observations.


Figure from arXiv:1610.07663 [astro-ph.GA] 


However, even when varying some parameters, the EAGLE code still contains parameters that have been fixed previously to reproduce observations. Ludlow calls them “subgrid parameters,” meaning they quantify physics on scales smaller than what the simulation can presently resolve. One sees for example in Figure 1 of their paper (shown below) that all those galaxies already have a pronounced correlation between the velocities of the outer stars (Vmax) and the stellar mass (M*).
Figure from arXiv:1610.07663 [astro-ph.GA]
Note that the plotted quantities are correlated in all data sets,
though the offsets differ somewhat.

One shouldn’t hold this against the model. Such numerical simulations are done for the purpose of generating and understanding realistic galaxies. Runs are time-consuming and costly. From the point of view of an astrophysicist, the question of just how unrealistic galaxies can get in these simulations is entirely nonsensical. And yet that’s exactly what the modified-gravity vs dark-matter showdown now asks for.

In act 5, John Moffat shows that modified gravity – the general relativistic completion of MOND – reproduces the MDAR correlation, but also predicts a distinct deviation for the outermost stars of galaxies.

Figure from arXiv:1610.06909 [astro-ph.GA] 
The green curve is the prediction from modified gravity.


The crucial question here is, I think, which correlations are independent of each other. I don’t know. But I’m sure there will be further acts in this drama.