Sunday, December 25, 2016

Physics is good for your health

Book sandwich
[Img src: strangehistory.net]
Yes, physics is good for your health. And that’s not only because it’s good to know that peeing on high power lines is a bad idea. It’s also because, if they wheel you to the hospital, physics is your best friend. Without physics, there’d be no X-rays and no magnetic resonance imaging. There’d be no ultrasound and no spectroscopy, no optical fiber imaging and no laser surgery. There wouldn’t even be centrifuges.

But physics is good for your health in another way – as the resort of sanity.

Human society may have entered a post-factual era, but the laws of nature don’t give a shit. Planet Earth is a crazy place, full of crazy people, getting crazier by the minute. But the universe still expands, atoms still decay, electric currents still take the path of least resistance. Electrons don’t care if you believe in them and supernovae don’t want your money. And that’s the beauty of knowledge discovery: It’s always waiting for you. Stupid policy decisions can limit our collective benefit from science, but the individual benefit is up to each of us.

In recent years I’ve found it impossible to escape the “mindfulness” movement. Its followers preach that focusing on the present moment will ease your mental tension. I don’t know about you, but most days focusing on the present moment is the last thing I want. I’ve taken a lot of breaths and most of them were pretty unremarkable – I’d much rather think about something more interesting.

And physics is there for you: Find peace of mind in Hubble images of young nebulae or galaxy clusters billions of light years away. Gauge the importance of human affairs by contemplating the enormous energies released in black hole mergers. Remember how lucky we are that our planet is warmed but not roasted by the Sun, then watch some videos of recent solar eruptions. Reflect on the long history of our own galaxy, seeded by tiny density fluctuations whose imprint we still see today in the cosmic microwave background.

Or stretch your imagination and try to figure out what happens when you fall into a black hole, catch light like Einstein, or meditate over the big questions: Does time exist? Is the future determined? What, if anything, happened before the big bang? And if there are infinitely many copies of you in the multiverse, does that mean you are immortal?

This isn’t to say the here and now doesn’t matter. But if you need to recharge, physics can be a welcome break from human insanity.

And if everything else fails, there’s always the 2nd law of thermodynamics to remind us: All this will pass.

Wednesday, December 21, 2016

Reasoning in Physics

I’m just back from a workshop about “Reasoning in Physics” at the Center for Advanced Studies in Munich. I went because it seemed a good idea to improve my reasoning, but as I sat there, something entirely different was on my mind: How did I get there? How did I, with my avowed dislike of all things -ism and -ology, end up in a room full of philosophers – people who weren’t discussing physics but the philosophical underpinnings of physicists’ arguments? Or, as it were, the absence of such underpinnings.

The straightforward answer is that they invited me – or invited me back, I should say, since this was my third time visiting the Munich philosophers. Indeed, they invited me to stay somewhat longer for a collaborative project, but I’ve successfully blamed the kids for my inability to reply with either yes or no.

So I sat there, in one of these awkwardly quiet rooms where everyone will hear your stomach gurgle, trying to will my stomach not to gurgle while listening to the first talk. It was Jeremy Butterfield, speaking about a paper which I commented on here. Butterfield has been praised to me as one of the four good physics philosophers, but I’d never met him. The praise was deserved – he turned out to be very insightful and, dare I say, reasonable.

The talks of the first day focused on multiple multiverse measures (meta meta), inflation (still eternal), Bayesian inference (a priori plausible), anthropic reasoning (as observed), and arguments from mediocrity and typicality which were typically mediocre. Among other things, I noticed with consternation that the doomsday argument is still being discussed in certain circles. This consterns me because, as I explained a decade ago, it’s an unsound abuse of probability calculus. You can’t randomly distribute events that are causally related. It’s mathematical nonsense, end of story. But it’s hard to kill a story if people have fun discussing it. Should “constern” be a verb? Discuss.

In a talk by Mathias Frisch I learned of a claim by Huw Price that time-symmetry in quantum mechanics implies retro-causality. It seems the kind of thing that I should have known about but didn’t, so I put the paper on the reading list and hope that next week I’ll have read it last year.

The next day started with two talks about analogue systems of which I missed one because I went running in the morning without my glasses and, well, you know what they say about women and their orientation skills. But since analogue gravity is a topic I’ve been working on for a couple of years now, I’ve had some time to collect thoughts about it.

Analogue systems are physical systems whose observables can, in a mathematically precise way, be mapped to – usually very different – observables of another system. The best known example is sound-waves in certain kinds of fluids which behave exactly like light does in the vicinity of a black hole. The philosophers presented a logical scheme to transfer knowledge gained from observational tests of one system to the other system. But to me analogue systems are much more than a new way to test hypotheses. They’re fundamentally redefining what physicists mean by doing science.

Presently we develop a theory, express it in mathematical language, and compare the theory’s predictions with data. But if you can directly test whether observations on one system correctly correspond to that of another, why bother with a theory that predicts either? All you need is the map between the systems. This isn’t a speculation – it’s what physicists already do with quantum simulations: They specifically design one system to learn how another, entirely different system, will behave. This is usually done to circumvent mathematically intractable problems, but in extrapolation it might just make theories and theorists superfluous.

Then came a very interesting talk by Peter Mattig, who reported from the DFG research program “Epistemology of the LHC.” They have, now for the 3rd time, surveyed both theoretical and experimental particle physicists to track researchers’ attitudes toward physics beyond the standard model. The survey results, however, will only be published in January, so at present I can’t tell you more than that. But once the paper is available you’ll read about it on this blog.

The next talk was by Radin Dardashti, who warned us ahead of time that he’d be speaking about work in progress. I very much liked Radin’s talk at last year’s workshop, and this one didn’t disappoint either. In his new work, he is trying to make precise the notion of “theory space” (in the general sense, not restricted to QFTs).

I think it’s a brilliant idea because there are many things that we know about theories but that aren’t about any particular theory, i.e. we know something about theory space, but we never formalize this knowledge. The most obvious example may be that theories in physics tend to be nice and smooth and well-behaved. They can be extrapolated. They have differentiable potentials. They can be expanded. There isn’t a priori any reason why that should be so; it’s just a lesson we have learned through history. I believe that quantifying meta-theoretical knowledge like this could play a useful role in theory development. I also believe Radin has a bright future ahead.

The final session on Tuesday afternoon was the most physicsy one.

My own talk about the role of arguments from naturalness was followed by a rather puzzling contribution by two young philosophers. They claimed that quantum gravity doesn’t have to be UV-complete – that is, it need not be a consistent theory up to arbitrarily high energies.

It’s right of course that quantum gravity doesn’t have to be UV-complete, but it’s kinda like saying a plane doesn’t have to fly. If you don’t mind driving, then why put wings on it? If you don’t mind UV-incompleteness, then why quantize gravity?

This isn’t to say that there’s no use in thinking about approximations to quantum gravity which aren’t UV-complete and, in particular, trying to find ways to test them. But these are means to an end, and the end is still UV-completion. Now we can discuss whether it’s a good idea to start with the end rather than the means, but that’s a different story and shall be told another time.

I think this talk confused me because the argument wasn’t wrong, but for a practicing researcher in the field the consideration is remarkably irrelevant. Our first concern is to find a promising problem to work on, and that the combination of quantum field theory and general relativity isn’t UV complete is the most promising problem I know of.

The last talk was by Michael Krämer, about recent developments in modelling particle dark matter. In astrophysics – as in particle physics – the trend is to move away from top-down models and work with slimmer “simplified” models. I think it’s a good trend because the top-down constructions didn’t lead us anywhere. But removing the top-down guidance must be accompanied by new criteria, some new principle of non-empirical theory-selection, which I’m still waiting to see. Otherwise we’ll just endlessly produce models of questionable relevance.

I’m not sure whether a few days with a group of philosophers have improved my reasoning – you be the judge. But the workshop helped me see the reason I’ve recently drifted toward philosophy: I’m frustrated by the lack of self-reflection among theoretical physicists. In the foundations of physics, everybody’s running at high speed without getting anywhere, and yet they never stop to ask what might possibly be going wrong. Indeed, most of them will insist nothing’s wrong to begin with. The philosophers are offering the conceptual clarity that I find missing in my own field.

I guess I’ll be back.

Monday, December 19, 2016

Book Review, “Why Quark Rhymes With Pork” by David Mermin

Why Quark Rhymes with Pork: And Other Scientific Diversions
By N. David Mermin
Cambridge University Press (January 2016)

The content of many non-fiction books can be summarized as “the blurb spread thinly,” but that’s a craft of which David Mermin’s new essay collection Why Quark Rhymes With Pork cannot be accused. The best summary I could therefore come up with is “things David Mermin is interested in,” or at least was interested in at some time during the last 30 years.

This isn’t as undescriptive as it seems. Mermin is Horace White Professor of Physics Emeritus at Cornell University, and a well-known US-American condensed matter physicist, active in science communication, famous for his dissatisfaction with the Copenhagen interpretation and an obsession with properly punctuating equations. And that’s also what his essays are about: quantum mechanics, academia, condensed matter physicists, writing in general, and obsessive punctuation in particular. Why Quark Rhymes With Pork collects all of Mermin’s Reference Frame columns published in Physics Today from 1988 to 2009, updated with postscripts, plus 13 previously unpublished essays.

The earliest of Mermin’s Reference Frame columns stem from the age of handwritten transparencies and predate the arXiv, the Superconducting Superdisaster, and the “science wars” of the 1990s. I read these first essays with the same delighted horror evoked by my grandma’s tales of slide-rules and logarithmic tables, until I realized that we’re still discussing today the same questions as Mermin did 20 years ago: Why do we submit papers to journals for peer review instead of reviewing them independently of journal publication? Have we learned anything profound in the last half century? What do you do when you give a talk and have mustard on your ear? Why is the sociology of science so utterly disconnected from the practice of science? Does anybody actually read PRL? And, of course, the mother of all questions: How to properly pronounce “quark”?

The later essays in the book mostly focus on the quantum world, just what is and isn’t wrong with it, and include the most insightful (and yet brief) expositions of quantum computing that I have come across. The reader also hears again from Professor Mozart, a semi-fictional character that Mermin introduced in his Reference Frame columns. Several of the previously unpublished pieces are summaries of lectures, birthday speeches, and obituaries.

Even though some of Mermin’s essays are accessible to the uninitiated, most of them are likely incomprehensible without some background knowledge in physics, either because he presumes technical knowledge or because the subject of his writing must remain entirely obscure. The very first essay might make a good example. It channels Mermin’s outrage over “Lagrangeans,” and even though written with both humor and purpose, it’s a spelling that I doubt non-physicists will perceive as properly offensive. Likewise, a 12-verse poem on the standard model or elaborations on how to embed equations into text will find their audience mostly among physicists.

My only prior contact with Mermin’s writing was a Reference Frame column from 2009, in which Mermin laid out his favorite interpretation of quantum mechanics, QBism, a topic also pursued in several of this book’s chapters. Proposed by Carl Caves, Chris Fuchs, and Rüdiger Schack, QBism views quantum mechanics as the observers’ rule-book for updating information about the world. In his 2009 column, Mermin argues that it is a “bad habit” to believe in the reality of the quantum state. “I hope you will agree,” he writes, “that you are not a continuous field of operators on an infinite-dimensional Hilbert space.”

I left a comment on this column, lamenting that Mermin’s argument was “polemic” and “uninsightful,” an offhand complaint that Physics Today published a few months later. Mermin replied that his column was “an amateurish attempt” to contribute to the philosophy of science and quantum foundations. But while reading Why Quark Rhymes With Pork, I found his amateurism to be a benefit: In contrast to professional attempts to contribute to the philosophy of science (or linguistics, or sociology, or scholarly publishing) Mermin’s writing is mostly comprehensible. I’m thus happy to leave further complaints to philosophers (or linguists, or sociologists).

Why Quark Rhymes With Pork is a book I’d never have bought. But having read it, I think you should read it too. Because I’d rather not still discuss the same questions 20 years from now.

And the only correct way to pronounce quark is of course the German way as “qvark.”

[This book review appeared in the November 2016 issue of Physics Today.]

Friday, December 16, 2016

Cosmic rays hint at new physics just beyond the reach of the LHC

Cosmic ray shower. Artist’s impression.
[Img Src]
The Large Hadron Collider (LHC) – the world’s presently most powerful particle accelerator – reaches a maximum collision energy of 14 TeV. But cosmic rays that collide with atoms in the upper atmosphere have been measured with collision energies about ten times as high.

The two types of observations complement each other. At the LHC, energies are smaller, but collisions happen in a closely controlled experimental environment, directly surrounded by detectors. This is not the case for cosmic rays – their collisions reach higher energies, but the experimental uncertainties are higher.
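To see where the often-quoted “ten times as high” comes from: for a cosmic-ray proton hitting a nucleon at rest, the center-of-mass energy grows only with the square root of the lab-frame energy. Here’s a quick back-of-the-envelope check (the 10^19 eV benchmark energy is my choice of example, not a number from the paper):

```python
import math

M_P = 0.938  # proton rest energy in GeV (approximate)

def cms_energy(e_lab_gev):
    """Center-of-mass energy sqrt(s), in GeV, for a cosmic-ray proton
    of lab-frame energy e_lab_gev hitting a nucleon at rest:
    s = 2 * E_lab * m_p + 2 * m_p^2   (natural units, c = 1)."""
    return math.sqrt(2.0 * e_lab_gev * M_P + 2.0 * M_P**2)

# A 10^19 eV cosmic-ray proton (= 10^10 GeV):
print(cms_energy(1e10) / 1e3, "TeV")  # ~ 140 TeV, about ten times the LHC's 14 TeV
```

So a collision at roughly 100 TeV center-of-mass energy corresponds to a primary of around 10^19 eV in the lab frame – which is why cosmic rays can probe what no collider currently reaches.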

Recent results from the Pierre Auger Cosmic Ray Observatory at center-of-mass energies of approximately 100 TeV are incompatible with the Standard Model of particle physics and hint at unexplained new phenomena. The statistical significance is not high, currently at 2.1 sigma (or 2.9 sigma for a more optimistic simulation). That corresponds to roughly a one-in-100 probability for the signal to be due to random fluctuations.
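If you want to convert sigmas into probabilities yourself, the one-sided Gaussian tail follows from the complementary error function. A small sketch (whether one quotes one- or two-sided probabilities is a matter of convention):

```python
import math

def one_sided_p(sigma):
    """One-sided Gaussian tail probability for a deviation of
    `sigma` standard deviations: p = erfc(sigma / sqrt(2)) / 2."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

for s in (2.1, 2.9):
    print(f"{s} sigma -> one-sided p = {one_sided_p(s):.4f}")
```

That’s about 1.8 percent at 2.1 sigma and about 0.2 percent at 2.9 sigma – of the order of the one-in-100 figure, depending on convention.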

Cosmic rays are protons or light atomic nuclei that come from outer space. These particles are accelerated in galactic magnetic fields, though exactly how they reach their high speeds is often unknown. When they enter the atmosphere of planet Earth, they sooner or later hit an air molecule. This destroys the initial particle and creates a primary shower of new particles. This shower has an electromagnetic part and a part of quarks and gluons that quickly form bound states known as hadrons. These particles undergo further decays and collisions, leading to a secondary shower.

The particles of the secondary shower can be detected on Earth in large detector arrays like Pierre Auger, which is located in Argentina. Pierre Auger has two types of detectors: 1) surface detectors that directly collect the particles which make it to the ground, and 2) fluorescence detectors which capture the light emitted by air molecules ionized by the shower.

The hadronic component of the shower is dominated by pions, which are the lightest mesons, composed of a quark and an anti-quark. The neutral pions decay quickly, mostly into photons; the charged pions create muons which make it to the ground-based detectors.

It has been known for several years that the muon signal seems too large compared to the electromagnetic signal – the balance between the two is off. This conclusion, however, did not rest on very solid data analyses, because it depended on an estimate of the total energy, which is very hard to obtain if you don’t measure all particles of the shower and have to extrapolate from what you do measure.

In the new paper – just published in PRL – the Pierre Auger collaboration used a different analysis method for the data, one that does not depend on the total energy calibration. They individually fit the results of detected showers by comparing them to computer-simulated events. From a previously generated sample, they pick the simulated event that best matches the fluorescence result.

Then they add two parameters to also fit the hadronic result: One parameter adjusts the energy calibration of the fluorescence signal, the other rescales the number of particles in the hadronic component. Then they look for the best-fit values and find that these are systematically off the standard model prediction. As an aside, their analysis also shows that the energy does not need to be recalibrated.
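To illustrate the logic of this two-parameter fit, here is a toy version. All numbers and the four-station setup are invented for illustration; the actual Auger analysis fits full shower profiles:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "simulated" shower template: electromagnetic (EM) and hadronic
# signals at four detector stations. All numbers invented; this is
# an illustration of the fit logic, not the actual Auger analysis.
sim_em = np.array([10.0, 8.0, 5.0, 3.0])
sim_had = np.array([4.0, 3.5, 2.5, 1.5])

# Toy "measured" shower: the EM part matches the template, but the
# hadronic part is 1.6 times larger, plus a little detector noise.
data_em = sim_em + rng.normal(0.0, 0.1, 4)
data_had = 1.6 * sim_had + rng.normal(0.0, 0.1, 4)

# Grid search over the two rescaling parameters: R_E recalibrates
# the overall energy scale, R_had rescales only the hadronic part.
best = None
for R_E_try in np.linspace(0.5, 2.0, 151):
    for R_had_try in np.linspace(0.5, 2.0, 151):
        chi2 = np.sum((data_em - R_E_try * sim_em) ** 2) \
             + np.sum((data_had - R_E_try * R_had_try * sim_had) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, R_E_try, R_had_try)

_, R_E, R_had = best
print(f"best fit: R_E = {R_E:.2f}, R_had = {R_had:.2f}")
```

With these invented numbers, R_E comes out near 1 (no energy recalibration needed) while R_had lands well above 1 (more muons than simulated) – the same qualitative shape as the Auger result.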

The main reason for the mismatch with the standard model predictions is that the detectors measure more muons than expected. What’s up with those muons? Nobody knows, but the origin of the mystery seems to lie not in the muons themselves but in the pions from whose decay they come.

Since the neutral pions have a very short lifetime and decay almost immediately into photons, essentially all energy that goes into neutral pions is lost for the production of muons. Besides the neutral pion there are the two charged pions, and the more energy is left for these and other hadrons, the more muons are produced in the end. The Pierre Auger result thus indicates that the total energy going into neutral pions is smaller than present simulations predict.
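This argument can be made semi-quantitative with a Heitler-Matthews-style toy shower: each interaction shares the energy among n pions, the neutral fraction drops out of the hadronic cascade, and the charged pions decay into muons once their energy falls below a critical value. All parameter values below are illustrative, not fitted to data:

```python
import math

def muon_number(e0, e_c=20.0, n=50, f_neutral=1/3):
    """Muon number in a toy Heitler-Matthews shower. A primary of
    energy e0 (GeV) interacts, producing n pions of equal energy;
    the fraction f_neutral are neutral pions whose energy is lost
    to the electromagnetic component. Charged pions re-interact
    until their energy drops below e_c (GeV), then each decays
    into one muon. All parameter values are illustrative."""
    generations = math.log(e0 / e_c) / math.log(n)
    return ((1.0 - f_neutral) * n) ** generations

e0 = 1e8  # a 10^17 eV primary, in GeV
standard = muon_number(e0, f_neutral=1/3)
less_neutral = muon_number(e0, f_neutral=0.25)
print(less_neutral / standard)  # fewer neutral pions -> more muons
```

With these toy numbers, lowering the neutral-pion fraction from 1/3 to 1/4 raises the muon number by roughly 60 percent – which is the direction the Auger data point in.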

One possible explanation for this, which has been proposed by Farrar and Allen, is that we misunderstand chiral symmetry breaking. It is the breaking of chiral symmetry that accounts for the biggest part of the masses of nucleons (not the Higgs!). The pions are the (pseudo) Goldstone bosons of that broken symmetry, which is why they are so light and ultimately why they are produced so abundantly. Pions are not exactly massless, and thus “pseudo”, because chiral symmetry is only approximate. The chiral phase transition is believed to be close to the confinement transition, that being the transition from a medium of quarks and gluons to color-neutral hadrons. For all we know, it takes place at a temperature of approximately 150 MeV. Above that temperature chiral symmetry is “restored”.

Chiral symmetry restoration almost certainly plays a role in the cosmic ray collisions, and a more important role than it does at the LHC. So, quite possibly this is the culprit here. But it might be something more exotic, new short-lived particles that become important at high energies and which make interaction probabilities deviate from the standard model extrapolation. Or maybe it’s just a measurement fluke that will go away with more data.

If the signal remains, however, that’s a strong motivation to build the next larger particle collider which could reach 100 TeV. Our accelerators would then be as good as the heavens.


[This post previously appeared on Forbes.]

Saturday, December 10, 2016

Away Note

I'll be in Munich next week, attending a workshop at the Center for Advanced Studies on the topic "Reasoning in Physics." I'm giving a talk about "Naturalness: How religion turned to math," which has attracted criticism even before I've given it. I take that to mean I'm hitting a nerve ;)

Thursday, December 08, 2016

No, physicists have no fear of math. But they should have more respect.

Heart curve. [Img Src]
“Even physicists are ‘afraid’ of mathematics,” a recent phys.org headline screamed at me. This, I thought, is ridiculous. You can accuse physicists of many stupidities, but being afraid of math isn’t one of them.

But the headline was supposedly based on scientific research. Someone, somewhere, had written a paper claiming that physicists are more likely to cite papers which are light on math. So, I put aside my confirmation bias and read the paper. It was more interesting than expected.

The paper in question, it turned out, didn’t show that physicists are afraid of math. Instead, it was a reply to a criticism of an earlier paper which had claimed that biologists are afraid of math.

The original paper, “Heavy use of equations impedes communication among biologists,” was published in 2012 by Tim Fawcett and Andrew Higginson, both at the Centre for Research in Animal Behaviour at the University of Exeter. They analyzed a sample of 649 papers published in the top journals in ecology and evolution and looked for a correlation between the density of equations (equations per text) and the number of citations. They found a statistically significant negative correlation: Papers with a higher density of equations were less cited.

Unexpectedly, a group of physicists came to the defense of biologists. In a paper published last year under the title “Are physicists afraid of mathematics?” Jonathan Kollmer, Thorsten Pöschel, and Jason Gallas set out to demonstrate that the statistics underlying the conclusion that biologists are afraid of math were fundamentally flawed. With these methods, the authors claimed, you could show anything, even that physicists are afraid of math. Which is surely absurd. Right? They argued that Fawcett and Higginson had arrived at a wrong conclusion because they had sorted their data into peculiar and seemingly arbitrarily chosen bins.

It’s a good point to make. The chance that you find a correlation with any one binning is much higher than the chance that you find it with one particular binning. Therefore, you can easily mess up measures of statistical significance if you allow a search for a correlation over different binnings.

As an example, Kollmer et al. used a sample of papers from Physical Review Letters (PRL) and showed that, with the bins used by Fawcett and Higginson, physicists too could be said to be afraid of math. Alas, the correlation goes away with a finer binning and hence is meaningless.
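The effect is easy to demonstrate: take data with no correlation whatsoever, test several binnings, and keep the “best” one. The false-positive rate climbs well above the nominal 5 percent. This is a sketch of the statistical point, not a re-analysis of either paper:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_trials = 500
binnings = [4, 5, 6, 8, 10, 12]   # candidate numbers of bins

hits_single = 0   # "significant" correlations with one fixed binning
hits_any = 0      # "significant" with at least one of the binnings

for _ in range(n_trials):
    # Two completely unrelated quantities, standing in for, say,
    # equation density and citation count with no true correlation:
    x = np.sort(rng.uniform(0.0, 1.0, 200))
    y = rng.normal(0.0, 1.0, 200)

    significant = []
    for nb in binnings:
        # bin by x, then correlate the bin means
        xb = [chunk.mean() for chunk in np.array_split(x, nb)]
        yb = [chunk.mean() for chunk in np.array_split(y, nb)]
        _, p = pearsonr(xb, yb)
        significant.append(p < 0.05)

    hits_single += significant[0]
    hits_any += any(significant)

print(f"false-positive rate, fixed binning:    {hits_single / n_trials:.2f}")
print(f"false-positive rate, best-of-binnings: {hits_any / n_trials:.2f}")
```

In this sketch the scanned rate comes out markedly higher than the fixed-binning rate – which is exactly why one shouldn’t hunt for a binning that makes a correlation appear.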

PRL, for those not familiar with it, is one of the most highly ranked journals in physics generally. It publishes papers from all subfields that are of broad interest to the community. PRL also has a strictly enforced page limit: You have to squeeze everything on four pages – an imo completely idiotic policy that more often than not means the authors have to publish a longer, comprehensible, paper elsewhere.

The paper that now made headlines is a reply by the authors of the original study to the physicists who criticized it. Fawcett and Higginson explain that the physicists’ data analysis is too naïve. They point out that the citation rates have a pronounced rich-get-richer trend which amplifies any initial differences. This leads to an ‘overdispersed’ data set in which the standard errors are misleading. In that case, a more complicated statistical analysis is necessary, which is the type of analysis they had done in the original paper. The arbitrary-seeming bins were just chosen to visualize the results, they write, but their finding is independent of that.

Fawcett and Higginson then repeated the same analysis on the physics papers and revealed a clear trend: Physicists too are more likely to cite papers with a smaller density of equations!

I have to admit this doesn’t surprise me much. A paper with fewer verbal explanations per equation assumes the reader is more familiar with the particular formalism being used, and this means the target audience shrinks. The consequence is fewer citations.

But this doesn’t mean physicists are afraid of math; it merely means they have to decide which calculations are worth their time. If it’s a topic they might never have an application for, making their way through a paper heavy on math might not be so helpful for advancing their research. On the other hand, reading a more general introduction or short survey with fewer equations might be useful even on topics farther from one’s own research. These citation habits therefore show mostly that the more specialized a paper, the fewer people will read it.

I had a brief exchange with Andrew Higginson, one of the authors of the paper that’s been headlined as “Physicists are afraid of math.” He emphasizes that their point was that “busy scientists might not have time to digest lots of equations without accompanying text.” But I don’t think that’s the right conclusion to draw. Busy scientists who are familiar with the equations might not have the time to digest much text, and busy scientists might not have the time to digest long papers, period. (The corresponding author of the physicists’ study did not respond to my request for comment.)

In their recent reply, Fawcett and Higginson suggest that “an immediate, pragmatic solution to this apparent problem would be to reduce the density of equations and add explanatory text for non-specialised readers.”

I’m not sure, however, there is any problem here in need of being solved. Adding text for non-specialized readers might be cumbersome for the specialized readers. I understand the risk that the current practice exaggerates the already pronounced specialization, which can hinder communication. But this, I think, would be better taken care of by reviews and overview papers to be referenced in the, typically short, papers on recent research.

So, I don’t think physicists are afraid of math. Indeed, it sometimes worries me how much and how uncritically they love math.

Math can do a lot of things for you, but in the end it’s merely a device to derive consequences from assumptions. Physics isn’t math, however, and physics papers don’t work by theorems and proofs. Theoretical physicists pride themselves on their intuition and frequently take the liberty of shortcutting mathematical proofs by drawing on experience. This, however, amounts to making additional assumptions, for example that a certain relation holds or an expansion is well-defined.

That works well as long as these assumptions are used to arrive at testable predictions. In that case it matters only if the theory works, and the mathematical rigor can well be left to mathematical physicists for clean-up, which is how things went historically.

But today in the foundations of physics, theory-development proceeds largely without experimental feedback. In such cases, keeping track of assumptions is crucial – otherwise it becomes impossible to tell what really follows from what. Or rather, it would be crucial – but theoretical physicists are bad at this.

The result is that some research areas can amass loosely connected arguments that follow from a set of assumptions that aren’t written down anywhere. This might result in an entirely self-consistent construction and yet not have anything to do with reality. Without the underlying assumptions on the table, the result is conceptual mud in which we can’t tell philosophy from mathematics.

One such widely used unwritten assumption, for example, is the absence of finetuning, or that a physical theory be “natural.” This assumption isn’t supported by evidence and it can’t be mathematically derived. Hence, it should be treated as a hypothesis – but that isn’t happening because the assumption itself isn’t recognized for what it is.

Another unwritten assumption is that more fundamental theories should somehow be simpler. This is reflected for example in the belief that the gauge couplings of the standard model should meet in one point. That’s an assumption; it isn’t supported by evidence. And yet it’s not treated as a hypothesis but as a guide to theory-development.

And all presently existing research on the quantization of gravity rests on the assumption that quantum theory itself remains unmodified at short distance scales. This is another assumption that isn’t written down anywhere. Should that turn out to be not true, decades of research will have been useless.

Lacking experimental guidance, what we need in the foundations of physics is conceptual clarity. We need rigorous math, not appeals to experience, intuition, and aesthetic appeal. Don’t be afraid – we need more math.

Friday, December 02, 2016

Can dark energy and dark matter emerge together with gravity?

A macaroni pie? Elephants blowing balloons?
No, it’s Verlinde’s entangled universe.
In a recent paper, the Dutch physicist Erik Verlinde explains how dark energy and dark matter arise in emergent gravity as deviations from general relativity.

It’s taken me some while to get through the paper. Vaguely titled “Emergent Gravity and the Dark Universe,” it’s a 51-page catalog of ideas patched together from general relativity, quantum information, quantum gravity, condensed matter physics, and astrophysics. It is clearly still research in progress and not anywhere close to completion.

The new paper substantially expands on Verlinde’s earlier idea that the gravitational force is some type of entropic force. If that were so, it would mean gravity is not due to the curvature of space-time – as Einstein taught us – but instead caused by the interaction of the fundamental elements which make up space-time. Gravity, hence, would be emergent.

I find it an appealing idea because it allows one to derive consequences without having to specify exactly what the fundamental constituents of space-time are. Like you can work out the behavior of gases under pressure without having a model for atoms, you can work out the emergence of gravity without having a model for whatever builds up space-time. The details would become relevant only at very high energies.

As I noted in a comment on the first paper, Verlinde’s original idea was merely a reinterpretation of gravity in thermodynamic quantities. What one really wants from emergent gravity, however, is not merely to get back general relativity. One wants to know which deviations from general relativity come with it, deviations that are specific predictions of the model and which can be tested.

Importantly, in emergent gravity such deviations from general relativity could make themselves noticeable at long distances. The reason is that the criterion for what it means for two points to be close to each other emerges with space-time itself. Hence, in emergent gravity there isn’t a priori any reason why new physics must be at very short distances.

In the new paper, Verlinde argues that his variant of emergent gravity gives rise to deviations from general relativity at long distances, and these deviations correspond to dark energy and dark matter. He doesn’t explain dark energy itself. Instead, he starts with a universe that by assumption contains dark energy like we observe, i.e. one that has a positive cosmological constant. Such a universe is described approximately by what theoretical physicists call a de Sitter space.

Verlinde then argues that when one interprets this cosmological constant as the effect of long-distance entanglement between the conjectured fundamental elements, then one gets a modification of the gravitational law which mimics dark matter.

The reason this works is that to get normal gravity one assigns to a volume of space an entropy which scales with the area of the surface that encloses the volume. This is known as the “holographic scaling” of entropy, and is at the core of Verlinde’s first paper (and earlier work by Jacobson and Padmanabhan and others). To get deviations from normal gravity, one has to do something else. For this, Verlinde argues that de Sitter space is permeated by long-distance entanglement which gives rise to an entropy which scales, not with the surface area of a volume, but with the volume itself. It consequently leads to a different force-law. And this force-law, so he argues, has an effect very similar to dark matter.
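For readers who want the flavor of the area-scaling argument, the entropic derivation of Newton’s law from Verlinde’s 2010 paper can be sketched in a few lines. This is a heuristic outline of the standard argument, combining Bekenstein’s entropy estimate with the Unruh temperature; it is not the new paper’s volume-scaling derivation:

```latex
% Entropy change when a mass m moves a distance \Delta x toward a
% holographic screen (Bekenstein):
\Delta S = 2\pi k_B \,\frac{m c}{\hbar}\,\Delta x
% Unruh temperature associated with an acceleration a:
T = \frac{\hbar a}{2\pi k_B c}
% The entropic force F \Delta x = T \Delta S then gives F = m a.
% With N = A c^3 / (G \hbar) bits on a spherical screen of area
% A = 4\pi R^2 enclosing a mass M, and equipartition
% E = \tfrac{1}{2} N k_B T = M c^2, one recovers Newton's law:
F = \frac{G M m}{R^2}
```

Replacing the area-scaling entropy that feeds this argument with a volume-scaling contribution is what changes the force law at small accelerations.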

Not only does this modified force-law from the volume-scaling of the entropy mimic dark matter, it more specifically reproduces some of the achievements of modified gravity.

In his paper, Verlinde derives the observed relation between the luminosity of spiral galaxies and the rotational velocity of their outermost stars, known as the Tully-Fisher relation. The Tully-Fisher relation can also be found in certain modifications of gravity, such as Moffat Gravity (MOG), but more generally every modification that approximates Milgrom’s modified Newtonian Dynamics (MOND). Verlinde, however, does more than that. He also derives the parameter which quantifies the acceleration at which the modification of general relativity becomes important, and gets a value that fits well with observations.
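The scaling behind the (baryonic) Tully-Fisher relation is easy to check numerically. In the MOND limit the asymptotic rotation velocity obeys v⁴ = G M a₀, where M is the baryonic mass and a₀ the acceleration scale. The following back-of-the-envelope check is not from Verlinde’s paper; the galaxy mass is an illustrative round number, not a measurement:

```python
# Back-of-the-envelope check of the baryonic Tully-Fisher scaling
# v^4 = G * M_b * a0 (the MOND-limit relation referred to above).

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10           # empirical MOND acceleration scale, m s^-2
M_sun = 1.989e30       # solar mass, kg

def flat_rotation_velocity(baryonic_mass_kg):
    """Asymptotic rotation velocity predicted by v^4 = G * M * a0."""
    return (G * baryonic_mass_kg * a0) ** 0.25

# For an illustrative Milky-Way-like baryonic mass of ~6e10 solar masses:
v = flat_rotation_velocity(6e10 * M_sun)
print(f"{v / 1e3:.0f} km/s")   # -> 176 km/s, the right ballpark for
                               #    observed flat rotation curves
```

The quartic root is why the relation is so tight observationally: even an order-of-magnitude error in the baryonic mass only changes the predicted velocity by a factor of about 1.8.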

It was known before that this parameter is related to the cosmological constant. There have been various attempts to exploit this relation, most recently by Lee Smolin. In Verlinde’s approach the relation between the acceleration scale and the cosmological constant comes out naturally, because dark matter has the same origin as dark energy. Verlinde further offers expressions for the apparent density of dark matter in galaxies and clusters, something that, with some more work, can probably be checked observationally.
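The numerical coincidence between the acceleration scale and the cosmological constant is a one-liner to verify: a₀ is close to c·H₀ (equivalently c²√(Λ/3) in de Sitter space) up to a factor of order one. The factor 2π below is the conventional empirical choice, not a prediction of any particular model:

```python
# Order-of-magnitude check: the MOND acceleration scale a0 is numerically
# close to c * H0, the characteristic acceleration of the cosmic expansion.
import math

c = 2.998e8             # speed of light, m/s
H0 = 2.2e-18            # Hubble constant (~68 km/s/Mpc) in 1/s
a0_empirical = 1.2e-10  # MOND acceleration scale, m/s^2

a_cosmo = c * H0        # characteristic cosmological acceleration
print(f"c*H0          = {a_cosmo:.2e} m/s^2")        # ~6.6e-10
print(f"c*H0 / (2*pi) = {a_cosmo / (2*math.pi):.2e}")  # ~1.0e-10
print(f"empirical a0  = {a0_empirical:.2e}")
```

That these two numbers, obtained from entirely different observations, agree to within tens of percent is the coincidence that approaches like Verlinde’s and Smolin’s try to explain.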

I find this an intriguing link which suggests that Verlinde is onto something. However, I also find the model sketchy and unsatisfactory in many regards. General relativity is a rigorously tested theory with many achievements. To do any better than general relativity is hard, and thus for any new theory of gravity the most important thing is to have a controlled limit in which general relativity is reproduced to good precision. How this might work in Verlinde’s approach isn’t clear to me because he doesn’t even attempt to deal with the general case. He starts right away with cosmology.

Now in cosmology we have a preferred frame which is given by the distribution of matter (or by the rest frame of the CMB if you wish). In general relativity this preferred frame does not originate in the structure of space-time itself but is generated by the stuff in it. In emergent gravity models, in contrast, the fundamental structure of space-time tends to have an imprint of the preferred frame. This fundamental frame can lead to violations of the symmetries of general relativity, and the effects aren’t necessarily small. Indeed, there are many experiments that have looked for such effects and haven’t found anything. It is hence a challenge for any emergent gravity approach to demonstrate just how it avoids such violations of symmetries.

Another potential problem with the idea is the long-distance entanglement which is sprinkled over the universe. The physics which we know so far works “locally,” meaning stuff can’t interact over long distances without a messenger that travels through space and time from one point to the other. It’s the reason my brain can’t make spontaneous visits to the Andromeda nebula, and most days I think that benefits both of us. But like it or not, the laws of nature we presently have are local, and any theory of emergent gravity has to reproduce that.

I have worked for some years on non-local space-time defects, and based on what I learned from that I don’t think the non-locality of Verlinde’s model is going to be a problem. My non-local defects aren’t the same as Verlinde’s entanglement, but guessing that the observational consequences scale similarly, the amount of entanglement that you need to get something like a cosmological constant is too small to leave any other noticeable effects on particle physics. I am therefore more worried about the recovery of local Lorentz-invariance. I went to great pains in my models to make sure I wouldn’t get such violations, and I can’t see how Verlinde addresses the issue.

The more general problem I have with Verlinde’s paper is the same I had with his 2010 paper, which is that it’s fuzzy. It remained unclear to me exactly what the necessary assumptions are. I hence don’t know whether it’s really necessary to have this interpretation with the entanglement and the volume-scaling of the entropy and with assigning elasticity to the dark energy component that pushes in on galaxies. Maybe it would be sufficient already to add a non-local modification to the sources of general relativity. Having toyed with that idea for a while, I doubt it. But I think Verlinde’s approach would benefit from a more axiomatic treatment.

In summary, Verlinde’s recent paper offers the most convincing argument I have seen so far that dark matter and dark energy are related. However, it is presently unclear whether this approach doesn’t also have unwanted side-effects that are already in conflict with observation.