
Monday, October 31, 2016

Modified Gravity vs Particle Dark Matter. The Plot Thickens.

They sit in caves, deep underground. Surrounded by lead, protected from noise, shielded from the warmth of the Sun, they wait. They wait for weakly interacting massive particles – WIMPs for short – the elusive stuff that many physicists believe makes up 80% of the matter in the universe. They have been waiting for 30 years, but the detectors haven’t caught a single WIMP.

Even though the sensitivity of dark matter detectors has improved by more than five orders of magnitude since the early 1980s, all results so far are compatible with zero events. The searches for axions, another popular dark matter candidate, haven’t fared any better. Coming generations of dark matter experiments will cross into the regime where the neutrino background becomes comparable to the expected signal. But, as a colleague recently pointed out to me, this merely means that the experimentalists have to understand the background better.

Maybe in 100 years they’ll still sit in caves, deep underground. And wait.

Meanwhile, others are running out of patience. Particle dark matter is a great explanation for all the cosmological observations that general relativity sourced by normal matter cannot explain. But maybe it isn’t right after all. The alternative to keeping general relativity and adding particles is to modify general relativity so that space-time curves differently in response to the matter we already know.

Already in the mid-1980s, Mordehai Milgrom showed that modifying gravity has the potential to explain observations commonly attributed to particle dark matter. He proposed Modified Newtonian Dynamics – MOND for short – to explain galactic rotation curves instead of adding particle dark matter. Intriguingly, despite having only one free parameter, MOND fits a large number of galaxies. It doesn’t work well for galaxy clusters, but its success with galaxies clearly shows that many of them are similar in very distinct ways – ways that the concordance model (also known as LambdaCDM) hasn’t been able to account for.
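To make this concrete, here is a minimal sketch of how MOND flattens rotation curves. It is illustrative only: the so-called “simple” interpolating function and the rough Milky-Way-like baryonic mass are my choices for the example, not values from any particular paper.

```python
import math

G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
A0 = 1.2e-10    # Milgrom's acceleration scale, m/s^2

def g_newton(M, r):
    """Newtonian gravitational acceleration of a point mass M at radius r."""
    return G * M / r**2

def g_mond(M, r):
    """MOND acceleration with the "simple" interpolating function
    mu(x) = x/(1+x); solving mu(g/a0)*g = g_N for g gives this closed form."""
    y = g_newton(M, r) / A0
    return A0 * (y + math.sqrt(y * y + 4 * y)) / 2

kpc = 3.086e19              # meters per kiloparsec
M_gal = 5e10 * 1.989e30     # ~5e10 solar masses of visible matter (rough)

for r_kpc in (5, 20, 50, 100):
    r = r_kpc * kpc
    v_newt = math.sqrt(g_newton(M_gal, r) * r) / 1e3   # km/s
    v_mond = math.sqrt(g_mond(M_gal, r) * r) / 1e3
    print(f"r = {r_kpc:3d} kpc: Newtonian {v_newt:5.1f} km/s, MOND {v_mond:5.1f} km/s")

# In the deep-MOND regime v^4 -> G*M*a0: the rotation curve goes flat,
# which is the Tully-Fisher relation.
print(f"asymptotic velocity: {(G * M_gal * A0) ** 0.25 / 1e3:.1f} km/s")
```

The one free parameter is the acceleration scale a0; everything else is fixed by the visible mass.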

In its simplest form, the concordance model has sources which are collectively described as homogeneously distributed throughout the universe – an approximation known as the cosmological principle. In this form, the concordance model doesn’t predict how galaxies rotate – it merely describes the dynamics on supergalactic scales.

To get galaxies right, physicists have to also take into account astrophysical processes within the galaxies: how stars form, which stars form, where they form, how they interact with the gas, how long they live, when and how they go supernova, what magnetic fields permeate the galaxies, how the fields affect the intergalactic medium, and so on. It’s a mess, and it requires intricate numerical simulations to figure out just exactly how galaxies come to look the way they look.

And so, physicists today are divided in two camps. In the larger camp are those who think that the observed galactic regularities will eventually be accounted for by the concordance model. It’s just that it’s a complicated question that needs to be answered with numerical simulations, and the current simulations aren’t good enough. In the smaller camp are those who think there’s no way these regularities will be accounted for by the concordance model, and modified gravity is the way to go.

In a recent paper, McGaugh et al reported a correlation in the rotation curves of 153 observed galaxies. They plotted the gravitational pull from the visible matter in the galaxies (gbar) against the gravitational pull inferred from the observations (gobs), and found that the two are closely related.
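The relation they fit has a single free parameter, an acceleration scale g†. If I read their paper correctly, the fit function is g_obs = g_bar/(1 − exp(−√(g_bar/g†))); a quick check of its two limits:

```python
import math

G_DAGGER = 1.2e-10  # m/s^2, the fitted acceleration scale

def g_obs(g_bar):
    """MDAR fit function: approaches g_bar at high accelerations
    and sqrt(g_bar * G_DAGGER) at low ones."""
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / G_DAGGER)))

for g_bar in (1e-8, 1e-10, 1e-12):
    print(f"g_bar = {g_bar:.0e} m/s^2  ->  g_obs = {g_obs(g_bar):.3e} m/s^2")

# high-acceleration limit: g_obs ~ g_bar (no mass discrepancy)
# low-acceleration limit:  g_obs ~ sqrt(g_bar * G_DAGGER) (large discrepancy)
```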

Figure from arXiv:1609.05917 [astro-ph.GA] 

This correlation – the mass-discrepancy-acceleration relation (MDAR) – is not itself new, they emphasize; it’s a new way to present previously known correlations. As they write in the paper:
“[This Figure] combines and generalizes four well-established properties of rotating galaxies: flat rotation curves in the outer parts of spiral galaxies; the “conspiracy” that spiral rotation curves show no indication of the transition from the baryon-dominated inner regions to the outer parts that are dark matter-dominated in the standard model; the Tully-Fisher relation between the outer velocity and the inner stellar mass, later generalized to the stellar plus atomic hydrogen mass; and the relation between the central surface brightness of galaxies and their inner rotation curve gradient.”
But this was only act 1.

In act 2, another group of researchers responds to the McGaugh et al paper. They present results of a numerical simulation for galaxy formation and claim that particle dark matter can account for the MDAR. The end of MOND, so they think, is near.

Figure from arXiv:1610.06183 [astro-ph.GA]

McGaugh, hero of act 1, points out that the sample size for this simulation is tiny and also pre-selected to reproduce galaxies like we observe. Hence, he thinks the results are inconclusive.

In act 3, Mordehai Milgrom – the original inventor of MOND – posts a comment on the arXiv. He also complains about the sample size of the numerical simulation and further explains that there is much more to MOND than the MDAR correlation. Numerical simulations with particle dark matter have been developed to fit observations, he writes, so it’s not surprising they now fit observations.

“The simulation in question attempt to treat very complicated, haphazard, and unknowable events and processes taking place during the formation and evolution histories of these galaxies. The crucial baryonic processes, in particular, are impossible to tackle by actual, true-to-nature, simulation. So they are represented in the simulations by various effective prescriptions, which have many controls and parameters, and which leave much freedom to adjust the outcome of these simulations [...]

The exact strategies involved are practically impossible to pinpoint by an outsider, and they probably differ among simulations. But, one will not be amiss to suppose that over the years, the many available handles have been turned so as to get galaxies as close as possible to observed ones.”
In act 4, another paper with results of a numerical simulation for galaxy structures with particle dark matter appears.

This one uses a code with the acronym EAGLE, for Evolution and Assembly of GaLaxies and their Environments. This code has “quite a few” parameters, as Aaron Ludlow, the paper’s first author, told me, and these parameters have been optimized to reproduce realistic galaxies. In this simulation, however, the authors didn’t use the optimized parameter configuration but let several parameters (3-4) vary to produce a larger set of galaxies. These galaxies in general do not look like those we observe. Nevertheless, the researchers find that all of them display the MDAR correlation.

This would indicate that particle dark matter is sufficient to describe the observations.


Figure from arXiv:1610.07663 [astro-ph.GA] 


However, even when varying some parameters, the EAGLE code still contains parameters that have been fixed previously to reproduce observations. Ludlow calls them “subgrid parameters,” meaning they quantify physics on scales smaller than what the simulation can presently resolve. One sees for example in Figure 1 of their paper (shown below) that all those galaxies already have a pronounced correlation between the velocities of the outer stars (Vmax) and the stellar mass (M*).
Figure from arXiv:1610.07663 [astro-ph.GA]
Note that the plotted quantities are correlated in all data sets, though the offsets differ somewhat.

One shouldn’t hold this against the model. Such numerical simulations are done for the purpose of generating and understanding realistic galaxies. Runs are time-consuming and costly. From the point of view of an astrophysicist, the question just how unrealistic galaxies can get in these simulations is entirely nonsensical. And yet that’s exactly what the modified-gravity/dark-matter showdown now asks for.

In act 5, John Moffat shows that modified gravity – a general relativistic completion of MOND – reproduces the MDAR correlation, but also predicts a distinct deviation for the outermost stars of galaxies.

Figure from arXiv:1610.06909 [astro-ph.GA] 
The green curve is the prediction from modified gravity.


The crucial question here is, I think, which correlations are independent of each other. I don’t know. But I’m sure there will be further acts in this drama.

Sunday, October 23, 2016

The concordance model strikes back

Two weeks ago, I summarized a recent paper by McGaugh et al who reported a correlation in galactic structures. The researchers studied a data-set with the rotation curves of 153 galaxies and showed that the gravitational acceleration inferred from the rotational velocity (including dark matter), gobs, is strongly correlated to the gravitational acceleration from the normal matter (stars and gas), gbar.

Figure from arXiv:1609.05917 [astro-ph.GA] 

This isn’t actually new data or a new correlation, but a new way to look at correlations in previously available data.

The authors of the paper were very careful not to jump to conclusions from their results, but merely stated that this correlation requires some explanation. That galactic rotation curves have surprising regularities, however, has been evidence in favor of modified gravity for two decades, so the implication was clear: Here is something that the concordance model might have trouble explaining.

As I remarked in my previous blogpost, while the correlation does seem to be strong, it would be good to see the results of a simulation with the concordance model that describes dark matter, as usual, as a pressureless, cold fluid. In this case too one would expect there to be some relation. Normal matter forms galaxies in the gravitational potentials previously created by dark matter, so the two components should have some correlation with each other. The question is how much.

Just the other day, a new paper appeared on the arxiv, which looked at exactly this. The authors of the new paper analyzed the result of a specific numerical simulation within the concordance model. And they find that the correlation in this simulated sample is actually stronger than the observed one!

Figure from arXiv:1610.06183 [astro-ph.GA]


Moreover, they also demonstrate that in the concordance model the slope of the best-fit curve should depend on the galaxies’ redshift (z), i.e. the age of the galaxy. This would be a way to test which explanation is correct.

Figure from arXiv:1610.06183 [astro-ph.GA]

I am not familiar with the specific numerical code that the authors use and hence I am not sure what to make of this. It’s been known for a long time that the concordance model has difficulties getting structures of galactic size right, especially galactic cores, and so it isn’t clear to me just how many parameters this model needs to work right. If the parameters were previously chosen so as to match observations already, then this result is hardly surprising.

McGaugh, one of the authors of the first paper, has already offered some comments (ht Yves). He notes that the sample size of the galaxies in the simulation is small, which might at least partly account for the small scatter. He is also skeptical of the results: “It is true that a single model does something like this as a result of dissipative collapse. It is not true that an ensemble of such models are guaranteed to fall on the same relation.”

I am somewhat puzzled by this result because, as I mentioned above, the correlation in the McGaugh paper is based on previously known correlations, such as the brightness-velocity relation which, to my knowledge, hadn’t been explained by the concordance model. So I would find it surprising should the results of the new paper hold up. I’m sure we’ll hear more about this in the near future.

Wednesday, October 19, 2016

Dear Dr B: Where does dark energy come from and what’s it made of?

“As the universe expands and dark energy remains constant (negative pressure) then where does the ever increasing amount of dark energy come from? Is this genuinely creating something from nothing (bit of lay man’s hype here), do conservation laws not apply? Puzzled over this for ages now.”
-- pete best
“When speaking of the Einstein equation, is it the case that the contribution of dark matter is always included in the stress energy tensor (source term) and that dark energy is included in the cosmological constant term? If so, is this the main reason to distinguish between these two forms of ‘darkness’? I ask because I don’t normally read about dark energy being ‘composed of particles’ in the way dark matter is discussed phenomenologically.”
-- CGT

Dear Pete, CGT:

Dark energy is often portrayed as very mysterious. But when you look at the math, it’s really the simplest aspect of general relativity.

Before I start, allow me to clarify that your questions refer to “dark energy” but are specifically about the cosmological constant, which is a certain type of dark energy. So far, the cosmological constant fits all existing observations. Dark energy could be more complicated than that, but let’s start with the cosmological constant.

Einstein’s field equations can be derived from very few assumptions. First, there’s the equivalence principle, which can be formulated mathematically as the requirement that the equations be tensor-equations. Second, the equations should describe the curvature of space-time. Third, the source of gravity is the stress-energy tensor and it’s locally conserved.

If you write down the simplest equations which fulfill these criteria you get Einstein’s field equations with two free constants. One constant can be fixed by deriving the Newtonian limit and it turns out to be Newton’s constant, G. The other constant is the cosmological constant, usually denoted Λ. You can make the equations more complicated by adding higher order terms, but at low energies these two constants are the only relevant ones.
Einstein's field equations. [Image Source]
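Written out, with both constants in place, the equations read:

```latex
R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

The left side describes the curvature of space-time; the right side contains its sources.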
If the cosmological constant is not zero, then flat space-time is no longer a solution of the equations. If, in particular, the constant has a positive value, space will undergo accelerated expansion, provided other matter sources are absent or negligible in comparison to Λ. Our universe presently seems to be in a phase that is dominated by a positive cosmological constant – that’s the easiest way to explain the observations which were awarded the 2011 Nobel Prize in Physics.
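One can check this expansion numerically. With only a positive cosmological constant, the Friedmann equation reduces to (ȧ/a)² = Λ/3, so the scale factor grows exponentially. A toy integration, in units where Λ = 3 (chosen purely for illustration):

```python
import math

LAMBDA = 3.0                 # units chosen so that the Hubble rate is 1
H = math.sqrt(LAMBDA / 3.0)  # Hubble rate of the Lambda-only universe

# integrate da/dt = H * a with a simple Euler step from t=0 to t=1
a, t, dt = 1.0, 0.0, 1e-4
history = [a]
while t < 1.0 - 1e-12:
    a += H * a * dt
    t += dt
    history.append(a)

# exponential growth: a(t) = exp(H*t)
print(f"a(t=1) = {a:.4f}, exact exp(H) = {math.exp(H):.4f}")
```

The step-to-step growth of a keeps increasing, which is what “accelerated expansion” means here.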

Things get difficult if one tries to find an interpretation of the rather unambiguous mathematics. You can for example take the term with the cosmological constant and not think of it as geometrical, but instead move it to the other side of the equation and think of it as some stuff that causes curvature. If you do that, you might be tempted to read the entries of the cosmological constant term as if it was a kind of fluid. It would then correspond to a fluid with constant density and with constant, negative pressure. That’s something one can write down. But does this interpretation make any sense? I don’t know. There isn’t any known fluid with such behavior.

Since the cosmological constant is also present if matter sources are absent, it can be interpreted as the energy-density and pressure of the vacuum. Indeed, one can calculate such a term in quantum field theory – except that the result is infamously 120 orders of magnitude too large. But that’s a different story and shall be told another time. The cosmological constant term is therefore often referred to as the “vacuum energy,” but that’s sloppy. It’s an energy-density, not an energy, and that’s an important difference.

How can it possibly be that an energy density remains constant as the universe expands, you ask. Doesn’t this mean you need to create more energy from somewhere? No, you don’t need to create anything. This is a confusion which comes about because you interpret the density which has been assigned to the cosmological constant like a density of matter, but that’s not what it is. If it was some kind of stuff we know, then, yes, you would expect the density to dilute as space expands. But the cosmological constant is a property of space-time itself. As space expands, there’s more space, and that space still has the same vacuum energy density – it’s constant!

The cosmological constant term is indeed conserved in general relativity, and it’s conserved separately from the other energy and matter sources. It’s just that conservation of stress-energy in general relativity works differently than you might be used to from flat space.

According to Noether’s theorem there’s a conserved quantity for every (continuous) symmetry. A flat space-time is the same at every place and at every moment of time. We say it has a translational invariance in space and time. These are symmetries, and they come with conserved quantities: Translational invariance of space conserves momentum, translational invariance in time conserves energy.

In a curved space-time, generically neither symmetry is fulfilled, hence neither energy nor momentum is conserved. So if you take the vacuum energy density and integrate it over some volume to get an energy, then the total energy indeed grows with the volume. It’s just not conserved. How strange! But that makes perfect sense: it’s not conserved because space expands, so we have no invariance in time – and consequently no conserved quantity associated with it.

But general relativity has a more complicated type of symmetry to which Noether’s theorem can be applied. This gives rise to a local conservation law: the stress-energy tensor is covariantly conserved.

The conservation law for the density of a pressureless fluid, for example, works as you expect it to: as space expands, the density goes down with the volume. For radiation – which has pressure – the energy density falls even faster than that of matter, because wavelengths also redshift. And if you put the cosmological constant term with its negative pressure into the conservation law, both the energy density and the pressure remain the same. It’s all consistent: they are conserved because they are constant.
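All three cases fit in one formula. For a fluid with equation of state p = wρ, the continuity equation gives ρ ∝ a^(−3(1+w)): w = 0 for pressureless matter, w = 1/3 for radiation, w = −1 for the cosmological constant. A quick check of what doubling the scale factor does:

```python
def rho(a, w, rho0=1.0):
    """Density of a fluid with equation of state p = w * rho,
    as a function of the scale factor a: rho = rho0 * a**(-3*(1+w))."""
    return rho0 * a ** (-3 * (1 + w))

for name, w in (("matter (w=0)", 0.0),
                ("radiation (w=1/3)", 1.0 / 3.0),
                ("cosmological constant (w=-1)", -1.0)):
    print(f"{name:30s} rho(2)/rho(1) = {rho(2, w):.4f}")
```

Matter dilutes with the volume (factor 1/8), radiation faster still (1/16), and the cosmological constant not at all.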

Dark energy, now, is a generalization of the cosmological constant, in which one invents some fields which give rise to a similar term. There are various fields that theoretical physicists have played with: chameleon fields and phantom fields and quintessence and such. The difference from the cosmological constant is that these fields’ densities do change with time, albeit slowly. There is, however, presently no evidence that this is the case.

As to the question of which dark stuff to include in which term: Dark matter is usually assumed to be pressureless, which means that as far as its gravitational pull is concerned it behaves just like normal matter. Dark energy, in contrast, has negative pressure and does odd things. That’s why they are usually collected in different terms.

Why don’t you normally read about dark energy being made of particles? Because you need some really strange stuff to get something that behaves like dark energy. You can’t make it out of any kind of particle that we know – this would either give you a matter term or a radiation term, neither of which does what dark energy needs to do.

If dark energy was some kind of field, or some kind of condensate, then it would be made of something else. In that case its density might indeed also vary from one place to the next and we might be able to detect the presence of that field in some way. Again though, there isn’t presently any evidence for that.

Thanks for your interesting questions!

Wednesday, October 12, 2016

What if dark matter is not a particle? The second wind of modified gravity.

Another year has passed and Vera Rubin was not awarded the Nobel Prize. She’s 88 and the prize can’t be awarded posthumously, so I can’t shake the impression the Royal Academy is waiting for her to die while they work off a backlog of condensed-matter breakthroughs.

Sure, nobody knows whether galaxies actually contain the weakly interacting and non-luminous particles we have come to call dark matter. And Fritz Zwicky was the first to notice a cluster of galaxies which moved faster than the visible mass alone could account for – and the one to coin the term dark matter. But it was Rubin who pinned down the evidence that galaxies systematically misbehave, by showing that the rotational velocities of spiral galaxies don’t decline with distance from the galactic center as expected, but flatten out – as if there was unseen extra mass in the galaxies. And Zwicky is dead anyway, so the Nobel committee doesn’t have to worry about him.

After Rubin’s discovery, many other observations confirmed that we were missing matter, and not only a little bit, but 80% of all matter in the universe. It’s there, but it’s not some stuff that we know. The fluctuations in the cosmic microwave background, gravitational lensing, the formation of large-scale structures in the universe – none of these would fit with the predictions of general relativity if there wasn’t additional matter to curve space-time. And if you go through all the particles in the standard model, none of them fits the bill. They’re either too light or too heavy or too strongly interacting or too unstable.

But once physicists had the standard model, every problem began to look like a particle, and so, beginning in the mid-1980s, dozens of experiments started to search for dark matter particles. So far, they haven’t found anything. No WIMPs, no axions, no wimpzillas, neutralinos, sterile neutrinos, or other things that would be good candidates for the missing matter.

This might not mean much. It might mean merely that the dark matter particles are even more weakly interacting than expected. It might mean that the particle types we’ve dealt with so far were too simple. Or maybe it means dark matter isn’t made of particles.

It’s an old idea, though one that never rose to popularity, that rather than adding new sources for gravity we could instead keep the known sources but modify the way they gravitate. And the more time passes without a dark matter particle caught in a detector, the more appealing this alternative starts to become. Maybe gravity doesn’t work the way Einstein taught us.

Modified gravity had an unfortunate start because its best known variant – Modified Newtonian Dynamics or MOND – is extremely unappealing from a theoretical point of view. It’s in contradiction with general relativity and that makes it a non-starter for most theorists. Meanwhile, however, there are variants of modified gravity which are compatible with general relativity.

The benefit of modifying gravity is that it offers an explanation for observations that particle dark matter has nothing to say about: many galaxies show regularities in the way their stars’ motion is affected by the missing mass. Clouds of dark particles that collect in halos around galaxies can be flexibly adapted to match the observations of any individual galaxy. But precisely because dark matter particles are so flexible, it’s difficult to explain with them why galaxies display regularities.

The best known of these regularities is the Tully-Fisher relation, a correlation between the luminosity of a galaxy and the velocity of its outermost stars. Nobody has succeeded in explaining this with particle dark matter, but modified gravity can.

In a recent paper, a group of researchers from the United States offers a neat new way to quantify these regularities. They compare the gravitational acceleration that must be acting on stars in galaxies as inferred from observation (gobs) with the gravitational acceleration due to the observed stars and gas, i.e. baryonic matter (gbar). As expected, the observed gravitational acceleration is much larger than what the visible mass would lead one to expect. The two are, however, strongly correlated with each other (see figure below). It’s difficult to see how particle dark matter could cause this. (Though I would like to see how this plot looks for a ΛCDM simulation. I would still expect some correlation and would prefer not to judge its strength by gut feeling.)

Figure from arXiv:1609.05917 [astro-ph.GA] 


This isn’t so much new evidence as an improved way to quantify existing evidence for regularities in spiral galaxies. Lee Smolin, always quick on his feet, thinks he can explain this correlation with quantum gravity. I don’t quite share his optimism, but it’s arguably intriguing.

Modifying gravity, however, has its shortcomings. While it seems to work reasonably well on the level of galaxies, it’s hard to make it work for galaxy clusters too. Observations, for example of the Bullet Cluster (image below), seem to show that the visible mass can be at a different place than the gravitating mass. That’s straightforward to explain with particle dark matter but difficult to make sense of with modified gravity.

The bullet cluster.
In red: estimated distribution of baryonic mass.
In blue: estimated distribution of gravitating mass, extracted from gravitational lensing.
Source: APOD.

The explanation I presently find most appealing is that dark matter is a type of particle whose dynamical equations sometimes mimic those of modified gravity. This option, pursued, among others, by Stefano Liberati and Justin Khoury, combines the benefits of both approaches without the disadvantages of either. There is, however, a lot of data in cosmology and it will take a long time to find out whether this idea can fit the observations as well – or better – than particle dark matter.

But regardless of what dark matter turns out to be, Rubin’s observations have given rise to one of the most active research areas in physics today. I hope that the Royal Academy eventually wakes up and honors her achievement.

Wednesday, October 05, 2016

Demystifying Spin 1/2

Theoretical physics is the most math-heavy of disciplines. We don’t use all that math because we like to be intimidating, but because it’s the most useful and accurate description of nature we know.

I am often asked to please explain this or that mathematical description in layman terms – and I try to do my best. But truth is, it’s not possible. The mathematical description is the explanation. The best I can do is to summarize the conclusions we have drawn from all that math. And this is pretty much how popular science accounts of theoretical physics work: By summarizing the consequences of lots of math.

This, however, makes science communication in theoretical physics a victim of its own success. If readers get away thinking they can follow a verbal argument, they’re left to wonder why physicists use all that math to begin with. Sometimes I therefore wish articles reporting on recent progress in theoretical physics would on occasion have an asterisk that notes “It takes several years of lectures to understand how B follows from A.”

One of the best examples for the power of math in theoretical physics – if not the best – are spin 1/2 particles. They are usually introduced as particles that have to be rotated twice to return to their initial state. I don’t know if anybody who didn’t already know the math has ever been able to make sense of this explanation – certainly not me when I was a teenager.

But this isn’t the only thing you’ll stumble across if you don’t know the math. Your first question may be: Why have spin 1/2 to begin with?

Well, one answer to this is that we need spin 1/2 particles to describe observations. Such particles are fermionic and therefore won’t occupy the same quantum state. (It takes several years of lectures to understand how B follows from A.) This is why for example electrons – which have spin 1/2 – sit in shells around the atomic nucleus rather than clumping together.

But a better answer is “Why not?” (Why not?, it turns out, is also a good answer to most why-questions that Kindergartners come up with.)

Mathematics allows you to classify everything a quantum state can do under rotations. If you do that you not only find particles that return to their initial state after 1, 1/2, 1/3 and so on of a rotation – corresponding to spin 1, 2, 3... etc – you also find particles that return to their initial state after 2, 2/3, 2/5 and so on of a rotation – corresponding to spin 1/2, 3/2, 5/2 etc. The spin, generally, is the inverse of the fraction of rotations necessary to return the particle to itself. The one exception is spin 0 which doesn’t change at all.

So the math tells you that spin 1/2 is a thing, and it’s there in our theories already. It would be stranger if nature didn’t make use of it.

But how come that the math gives rise to such strange and non-intuitive particle behaviors? It comes from the way that rotations (or symmetry transformations more generally) act on quantum states, which is different from how they act on non-quantum states. A symmetry transformation acting on a quantum state must be described by a unitary transformation – this is a transformation which, most importantly, ensures that probabilities always add up to one. And the full set of all symmetry transformations must be described by a “unitary representation” of the group.

Symmetry groups, however, can be difficult to handle, and so physicists prefer to instead work with the algebra associated to the group. The algebra can be used to build up the group, much like you can build up a grid from right-left steps and forwards-backwards steps, repeated sufficiently often. But here’s where things get interesting: If you use the algebra of the rotation group to describe how particles transform, you don’t get back merely the rotation group. Instead you get what’s called a “double cover” of the rotation group. It means – guess! – you have to turn the state around twice to get back to the initial state.
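One can see the double cover at work explicitly. A rotation by angle θ about the z-axis acts on a spin-1/2 state through the SU(2) matrix exp(−iθσz/2), but on an ordinary vector through the familiar 3×3 rotation matrix. A sketch in plain Python:

```python
import cmath
import math

def u_spin_half(theta):
    """Rotation by theta about the z-axis on a spin-1/2 state:
    the SU(2) element exp(-i*theta*sigma_z/2), written out explicitly."""
    return [[cmath.exp(-1j * theta / 2), 0],
            [0, cmath.exp(1j * theta / 2)]]

def r_vector(theta):
    """The same rotation acting on an ordinary (spin-1) vector."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s, c, 0.0],
            [0.0, 0.0, 1.0]]

turn = 2 * math.pi
print("spin-1/2 phase after one full turn: ", u_spin_half(turn)[0][0])      # close to -1
print("spin-1/2 phase after two full turns:", u_spin_half(2 * turn)[0][0])  # close to +1
print("vector component after one full turn:", r_vector(turn)[0][0])
```

After one full turn the vector is back where it started, but the spin-1/2 state has picked up a minus sign; only after two full turns does it return to itself.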

I’ve been racking my brain trying to find a good metaphor for “double-cover” to use in the-damned-book I’m writing. Last year, I came across the perfect illustration in real life when we took the kids to a Christmas market. Here it is:



I made a sketch of this for my book:



The little trolley has to make two full rotations to get back to the starting point. And that’s pretty much how the double-cover of the rotation group gives rise to particles with spin 1/2. Though you might have to wrap your head around it twice to understand how it works.

I later decided not to use this illustration in favor of one easier to generalize to higher spin. But you’ll have to buy the-damned-book to see how this works :p