Hamster. Not to scale. Img src: Petopedia.

Dark matter, to remind you, refers to hypothetical clouds of particles that hover around galaxies. We can’t see them because they neither emit nor reflect light, but we do notice their gravitational pull because it affects the motion of the matter that we *can* observe. Modified gravity, on the other hand, posits that normal matter is all there is, but that the laws of gravity don’t work as Einstein taught us.

Which one is right? We still don’t know, though astrophysicists have been on the case for decades.

Ruling out modified gravity is hard because it was invented to fit observed correlations, and this achievement is difficult to improve on. The idea Milgrom came up with in 1983 is a simple model called Modified Newtonian Dynamics (MOND). It does a good job fitting the rotation curves of hundreds of observed galaxies, and in contrast to particle dark matter, it requires only one parameter as input. That parameter is an acceleration scale which determines where the gravitational pull begins to differ markedly from that predicted by Einstein’s theory of General Relativity. Based on his model, Milgrom also made some predictions which have held up so far.
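To make the one-parameter idea concrete, here is a minimal sketch (my own illustration, not the papers’ code) of how a single acceleration scale changes a rotation curve. It uses the so-called “simple” interpolating function, one common choice in the MOND literature; the constants are approximate:

```python
import math

A0 = 1.2e-10      # m/s^2, roughly the fitted acceleration scale
G = 6.674e-11     # m^3 kg^-1 s^-2
M_SUN = 1.989e30  # kg
KPC = 3.086e19    # m

def newtonian_acceleration(mass, r):
    """Plain Newtonian gravity of a point mass."""
    return G * mass / r**2

def mond_acceleration(g_newton, a0=A0):
    """Solve g * mu(g/a0) = g_N for the 'simple' interpolating
    function mu(x) = x / (1 + x); this is the positive root."""
    return 0.5 * (g_newton + math.sqrt(g_newton**2 + 4 * g_newton * a0))

def circular_velocity(g, r):
    """Orbital speed from centripetal balance v^2 / r = g."""
    return math.sqrt(g * r)

# A galaxy-sized point mass probed at 30 kiloparsecs, where the
# Newtonian pull has dropped well below a0:
r = 30 * KPC
g_newton = newtonian_acceleration(1e11 * M_SUN, r)
v_newton = circular_velocity(g_newton, r)
v_mond = circular_velocity(mond_acceleration(g_newton), r)
# v_mond comes out substantially larger than v_newton, which is how
# MOND flattens rotation curves without dark matter.
```

Below a0 the effective acceleration approaches √(g_N·a0), so the velocity stops falling with radius; well above a0 the formula reduces to plain Newton. That crossover is the single parameter the two papers are fighting over.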

In a 2016 paper, McGaugh, Lelli, and Schombert analyzed data from a set of about 150 disk galaxies. They identified the best-fitting acceleration scale for each of them and found that the distribution is clearly peaked around a mean value:

Histogram of best-fitting acceleration scales. Blue: only high-quality data. Via Stacy McGaugh.

McGaugh *et al* conclude that the data contain evidence for a universal acceleration scale, which is strong support for modified gravity.

Then, a month ago, Nature Astronomy published a paper titled “Absence of a fundamental acceleration scale in galaxies” by Rodrigues et al (arXiv version here). The authors claim to have ruled out modified gravity with at least 5σ significance, i.e., with high certainty.

That’s pretty amazing given that two months ago modified gravity worked just fine for galaxies. It’s even more amazing once you notice that they ruled out modified gravity using the same data from which McGaugh *et al* extracted the universal acceleration scale that’s evidence for modified gravity.

Here is the key figure from the Rodrigues *et al* paper:

Figure 1 from Rodrigues et al.

Shown on the vertical axis is their best-fit parameter for the (log of the) acceleration scale. On the horizontal axis are the individual galaxies. The authors have sorted the galaxies so that the best-fit value increases monotonically from left to right, so the increase itself carries no information. What is relevant: if you compare the error margins marked by the colors, the best-fit values for the galaxies on the far left of the plot are incompatible with the best-fit values for the galaxies on the far right.

So what the heck is going on?

A first observation is that the two studies don’t use the same data analysis. The main difference lies in the priors for the distribution of the parameters, which are the acceleration scale of modified gravity and the stellar mass-to-light ratio. Where McGaugh *et al* use Gaussian priors, Rodrigues *et al* use flat priors over a finite bin. The prior is the assumption you make about the likely distribution of a parameter, which you then feed into your model to find the best-fit parameters. A bad prior can give you misleading results.

Example: Suppose you have an artificially intelligent infrared camera. One night it issues an alert: something’s going on in the bushes of your garden. The AI tells you the best fit to the observation is a 300-pound hamster; the second-best fit is a pair of humans in what seems a peculiar kind of close combat. Which option do you think is more likely?

I’ll go out on a limb and guess the second. And why is that? Because you probably know that 300-pound hamsters are something of a rare occurrence, whereas pairs of humans are not. In other words, you have a different prior than your camera.
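In code, the hamster logic is just Bayes’ rule with two different priors (all numbers invented for the sake of the example):

```python
def best_fit(prior, likelihood):
    """Pick the hypothesis with the largest (unnormalized)
    posterior, i.e. prior times likelihood."""
    posterior = {h: prior[h] * likelihood[h] for h in likelihood}
    return max(posterior, key=posterior.get)

# How well each hypothesis fits the infrared blob on the image:
likelihood = {"300-pound hamster": 0.9, "pair of humans": 0.6}

# The camera's flat prior treats both hypotheses as equally plausible;
# your prior knows giant hamsters are rare and pairs of humans are not.
flat_prior = {"300-pound hamster": 0.5, "pair of humans": 0.5}
informed_prior = {"300-pound hamster": 1e-6, "pair of humans": 0.1}

camera_verdict = best_fit(flat_prior, likelihood)    # the hamster wins
your_verdict = best_fit(informed_prior, likelihood)  # the humans win
```

Same likelihoods, opposite verdicts; the only thing that changed is the prior.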

Back to the galaxies. As we’ve seen, if you start with an unmotivated prior, you can end up with a “best fit” (the 300-pound hamster) that’s unlikely for reasons your software didn’t account for. At the very least, therefore, you should check that the resulting best-fit distribution of your parameters doesn’t contradict other data. The Rodrigues *et al* analysis raises exactly this concern: their best-fit distribution for the stellar mass-to-light ratio doesn’t match commonly observed distributions. The McGaugh paper, on the other hand, starts with a Gaussian prior, which is a reasonable expectation, and hence their analysis makes more physical sense.

Having said this, though, it turns out the priors don’t make much of a difference for the results. Indeed, as far as the numbers are concerned, the results in both papers are pretty much the same. What differs is the conclusion the authors draw from them.

Let me tell you a story to illustrate what’s going on. Suppose you are Isaac Newton and an apple just banged on your head. “Eureka,” you shout and postulate that the gravitational potential fulfils the Poisson equation.^{*} Smart as you are, you assume that the Earth is approximately a homogeneous sphere, solve the equation, and find an inverse-square law. It contains one free parameter which you modestly call “Newton’s constant.”

You then travel around the globe, note down your altitude and measure the acceleration of a falling test-body. Back home you plot the results and extract Newton’s constant (times the mass of the Earth) from the measurements. You find that the measured values cluster around a mean. You declare that you have found evidence for a universal law of gravity.
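The story is easy to simulate. Here is a sketch (invented noise level; the true constants are hidden from “Newton” and only used to generate the fake measurements) of the field trip: measure g at assorted altitudes with a bit of noise, invert each measurement for Newton’s constant times the Earth’s mass, and watch the estimates cluster around a mean:

```python
import random
import statistics

random.seed(0)
GM_TRUE = 3.986e14  # m^3/s^2, Earth's GM, used only to fake the data
R_EARTH = 6.371e6   # m, mean radius of the (assumed spherical) Earth

# Measure the free-fall acceleration at 100 random altitudes,
# each with 1% Gaussian measurement noise.
estimates = []
for _ in range(100):
    h = random.uniform(0.0, 4000.0)  # altitude in meters
    g_true = GM_TRUE / (R_EARTH + h) ** 2
    g_measured = g_true * (1.0 + random.gauss(0.0, 0.01))
    # Invert the inverse-square law for GM:
    estimates.append(g_measured * (R_EARTH + h) ** 2)

mean_gm = statistics.mean(estimates)
relative_spread = statistics.stdev(estimates) / mean_gm
# The estimates cluster around GM_TRUE with roughly the 1% spread
# that the measurement noise alone predicts.
```

With a perfectly spherical, homogeneous Earth the spread is exactly the measurement noise; on the real Earth it would be larger, which is the point of the next paragraphs.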

Or have you?

A week later your good old friend Bob knocks on the door. He points out that if you look at the measurement errors (which you have of course recorded), then some of the measurement results are incompatible with each other at five sigma certainty. There, Bob declares, I have ruled out your law of gravity.

Same data, different conclusion. How does this make sense?

“Well,” Newton would say to Bob, “You have forgotten that besides the measurement uncertainty there is theoretical uncertainty. The Earth is neither homogeneous nor a sphere, so you should expect a spread in the data that exceeds the measurement uncertainty.” – “Ah,” Bob says triumphantly, “But in this case you can’t make predictions!” – “Sure I can,” Newton speaks and points to his inverse square law, “I did.” Bob frowns, but Newton has run out of patience. “Look,” he says and shoves Bob out of the door, “Come back when you have a better theory than mine.”
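Newton’s retort is easy to put in numbers. With made-up values for two best-fit results, the same pair of measurements is “incompatible at high significance” or perfectly unremarkable, depending on whether you add a theoretical scatter to the error budget:

```python
import math

def tension(x1, s1, x2, s2, sigma_theory=0.0):
    """Discrepancy between two measurements in units of sigma,
    optionally adding a theoretical scatter in quadrature to
    each error bar."""
    total = math.sqrt(s1**2 + s2**2 + 2.0 * sigma_theory**2)
    return abs(x1 - x2) / total

# Two hypothetical best-fit values of log10(a0), each with a small
# formal (measurement-only) error:
x1, s1 = -9.95, 0.02
x2, s2 = -10.15, 0.02

formal_only = tension(x1, s1, x2, s2)        # beyond 5 sigma: "ruled out"
with_scatter = tension(x1, s1, x2, s2, 0.1)  # below 2 sigma: no tension
```

The numbers here are invented, but the mechanism is the one in the dialogue: ignore the theoretical uncertainty and a universal constant looks excluded; include it and the same data are fine.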

Back to 2018 and modified gravity. Same difference. In the Rodrigues *et al* paper, the authors rule out that modified gravity’s one-parameter law fits all disk galaxies in the sample. This shouldn’t come as much of a surprise. Galaxies aren’t disks with bulges any more than the Earth is a homogeneous sphere. It’s such a crude oversimplification that it’s remarkable it works at all.

Indeed, it would be an interesting exercise to quantify how well modified gravity does for this set of galaxies compared to particle dark matter with the same number of parameters. Chances are you’d find that particle dark matter, too, is ruled out at 5σ. It’s just that no one is dumb enough to make such a claim. When it comes to particle dark matter, astrophysicists will be quick to tell you that galaxy dynamics involves loads of complicated astrophysics and that it’s rather unrealistic for one parameter to account for the variety in any sample.

Without the comparison to particle dark matter, therefore, the only thing I learn from the Rodrigues *et al* paper is that a non-universal acceleration scale fits the data better than a universal one. And that I could have told you without even looking at the data.

Summary: I’m not impressed.

It’s day 12,805 in the war between modified gravity and dark matter and dark matter enthusiasts still haven’t found the battle field.

^{*}Dude, I know that Newton isn’t Archimedes. I’m telling a story not giving a history lesson.