Sunday, December 25, 2016

Physics is good for your health

Book sandwich
[Img Src]
Yes, physics is good for your health. And that’s not only because it’s good to know that peeing on high power lines is a bad idea. It’s also because, if they wheel you to the hospital, physics is your best friend. Without physics, there’d be no X-rays and no magnetic resonance imaging. There’d be no ultrasound and no spectroscopy, no optical fiber imaging and no laser surgery. There wouldn’t even be centrifuges.

But physics is good for your health in another way – as a refuge of sanity.

Human society may have entered a post-factual era, but the laws of nature don’t give a shit. Planet Earth is a crazy place, full of crazy people, getting crazier by the minute. But the universe still expands, atoms still decay, electric currents still take the path of least resistance. Electrons don’t care if you believe in them and supernovae don’t want your money. And that’s the beauty of knowledge discovery: It’s always waiting for you. Stupid policy decisions can limit our collective benefit from science, but the individual benefit is up to each of us.

In recent years I’ve found it impossible to escape the “mindfulness” movement. Its followers preach that focusing on the present moment will ease your mental tension. I don’t know about you, but most days focusing on the present moment is the last thing I want. I’ve taken a lot of breaths and most of them were pretty unremarkable – I’d much rather think about something more interesting.

And physics is there for you: Find peace of mind in Hubble images of young nebulae or galaxy clusters billions of light years away. Gauge the importance of human affairs by contemplating the enormous energies released in black hole mergers. Remember how lucky we are that our planet is warmed but not roasted by the Sun, then watch some videos of recent solar eruptions. Reflect on the long history of our own galaxy, seeded by tiny density fluctuations whose imprint we still see today in the cosmic microwave background.

Or stretch your imagination and try to figure out what happens when you fall into a black hole, catch light like Einstein, or meditate over the big questions: Does time exist? Is the future determined? What, if anything, happened before the big bang? And if there are infinitely many copies of you in the multiverse, does that mean you are immortal?

This isn’t to say the here and now doesn’t matter. But if you need to recharge, physics can be a welcome break from human insanity.

And if everything else fails, there’s always the 2nd law of thermodynamics to remind us: All this will pass.

Wednesday, December 21, 2016

Reasoning in Physics

I’m just back from a workshop about “Reasoning in Physics” at the Center for Advanced Studies in Munich. I went because it seemed a good idea to improve my reasoning, but as I sat there, something entirely different was on my mind: How did I get there? How did I, with my avowed dislike of all things -ism and -ology, end up in a room full of philosophers, people who weren’t discussing physics but the philosophical underpinnings of physicists’ arguments? Or, as it were, the absence of such underpinnings.

The straightforward answer is that they invited me, or invited me back, I should say, since this was my third time visiting the Munich philosophers. Indeed, they invited me to stay somewhat longer for a collaborative project, but I’ve successfully blamed the kids for my inability to reply with either yes or no.

So I sat there, in one of these awkwardly quiet rooms where everyone will hear your stomach gurgle, trying to will my stomach not to gurgle and instead listen to the first talk. It was Jeremy Butterfield, speaking about a paper which I commented on here. Butterfield has been praised to me as one of the four good physics philosophers, but I’d never met him. The praise was deserved – he turned out to be very insightful and, dare I say, reasonable.

The talks of the first day focused on multiple multiverse measures (meta meta), inflation (still eternal), Bayesian inference (a priori plausible), anthropic reasoning (as observed), and arguments from mediocrity and typicality which were typically mediocre. Among other things, I noticed with consternation that the doomsday argument is still being discussed in certain circles. This consterns me because, as I explained a decade ago, it’s an unsound abuse of probability calculus. You can’t randomly distribute events that are causally related. It’s mathematical nonsense, end of story. But it’s hard to kill a story if people have fun discussing it. Should “constern” be a verb? Discuss.

In a talk by Mathias Frisch I learned of a claim by Huw Price that time-symmetry in quantum mechanics implies retro-causality. It seems the kind of thing that I should have known about but didn’t, so I put the paper on the reading list and hope that next week I’ll have read it last year.

The next day started with two talks about analogue systems of which I missed one because I went running in the morning without my glasses and, well, you know what they say about women and their orientation skills. But since analogue gravity is a topic I’ve been working on for a couple of years now, I’ve had some time to collect thoughts about it.

Analogue systems are physical systems whose observables can, in a mathematically precise way, be mapped to – usually very different – observables of another system. The best known example is sound-waves in certain kinds of fluids which behave exactly like light does in the vicinity of a black hole. The philosophers presented a logical scheme to transfer knowledge gained from observational tests of one system to the other system. But to me analogue systems are much more than a new way to test hypotheses. They’re fundamentally redefining what physicists mean by doing science.
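
To give you an idea of how precise this map is, take Unruh’s classic example (textbook material, nothing specific to the workshop): sound in an irrotational fluid with density ρ, flow velocity v, and sound speed c_s propagates like a massless field in the effective metric

    ds^2 = \frac{\rho}{c_s}\left[-(c_s^2 - v^2)\,dt^2 - 2\,\vec{v}\cdot d\vec{x}\,dt + d\vec{x}\cdot d\vec{x}\right],

and an acoustic horizon – the analogue of a black hole horizon – forms wherever the flow speed exceeds the speed of sound.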

Presently we develop a theory, express it in mathematical language, and compare the theory’s predictions with data. But if you can directly test whether observations on one system correctly correspond to that of another, why bother with a theory that predicts either? All you need is the map between the systems. This isn’t speculation – it’s what physicists already do with quantum simulations: They specifically design one system to learn how another, entirely different system, will behave. This is usually done to circumvent mathematically intractable problems, but in extrapolation it might just make theories and theorists superfluous.

Then came a very interesting talk by Peter Mattig, who reported from the DFG research program “Epistemology of the LHC.” They have, now for the 3rd time, surveyed both theoretical and experimental particle physicists to track researchers’ attitudes to physics beyond the standard model. The survey results, however, will only get published in January, so I presently can’t tell you more than that. But once the paper is available you’ll read about it on this blog.

The next talk was by Radin Dardashti who warned us ahead that he’d be speaking about work in progress. I very much liked Radin’s talk at last year’s workshop, and this one didn’t disappoint either. In his new work, he is trying to make precise the notion of “theory space” (in the general sense, not restricted to qfts).

I think it’s a brilliant idea because there are many things that we know about theories but that aren’t about any particular theory, ie we know something about theory space, but we never formalize this knowledge. The most obvious example may be that theories in physics tend to be nice and smooth and well-behaved. They can be extrapolated. They have differentiable potentials. They can be expanded. There isn’t a priori any reason why that should be so; it’s just a lesson we have learned through history. I believe that quantifying meta-theoretical knowledge like this could play a useful role in theory development. I also believe Radin has a bright future ahead.

The final session on Tuesday afternoon was the most physicsy one.

My own talk about the role of arguments from naturalness was followed by a rather puzzling contribution by two young philosophers. They claimed that quantum gravity doesn’t have to be UV-complete – that is, it wouldn’t have to be a consistent theory up to arbitrarily high energies.

It’s right of course that quantum gravity doesn’t have to be UV-complete, but it’s kinda like saying a plane doesn’t have to fly. If you don’t mind driving, then why put wings on it? If you don’t mind UV-incompleteness, then why quantize gravity?

This isn’t to say that there’s no use in thinking about approximations to quantum gravity which aren’t UV-complete and, in particular, trying to find ways to test them. But these are means to an end, and the end is still UV-completion. Now we can discuss whether it’s a good idea to start with the end rather than the means, but that’s a different story and shall be told another time.

I think this talk confused me because the argument wasn’t wrong, but for a practicing researcher in the field the consideration is remarkably irrelevant. Our first concern is to find a promising problem to work on, and that the combination of quantum field theory and general relativity isn’t UV complete is the most promising problem I know of.

The last talk was by Michael Krämer about recent developments in modelling particle dark matter. In astrophysics – like in particle physics – the trend is to move away from top-down models and work with slimmer “simplified” models. I think it’s a good trend because the top-down constructions didn’t lead us anywhere. But removing the top-down guidance must be accompanied by new criteria, some new principle of non-empirical theory-selection, which I’m still waiting to see. Otherwise we’ll just endlessly produce models of questionable relevance.

I’m not sure whether a few days with a group of philosophers have improved my reasoning – you be the judge. But the workshop helped me see the reason I’ve recently drifted towards philosophy: I’m frustrated by the lack of self-reflection among theoretical physicists. In the foundations of physics, everybody’s running at high speed without getting anywhere, and yet they never stop to ask what might possibly be going wrong. Indeed, most of them will insist nothing’s wrong to begin with. The philosophers are offering the conceptual clarity that I find missing in my own field.

I guess I’ll be back.

Monday, December 19, 2016

Book Review, “Why Quark Rhymes With Pork” by David Mermin

Why Quark Rhymes with Pork: And Other Scientific Diversions
By N. David Mermin
Cambridge University Press (January 2016)

The content of many non-fiction books can be summarized as “the blurb spread thinly,” but that’s a craft of which David Mermin’s new essay collection Why Quark Rhymes With Pork cannot be accused. The best summary I could therefore come up with is “things David Mermin is interested in,” or at least was interested in some time during the last 30 years.

This isn’t as undescriptive as it seems. Mermin is Horace White Professor of Physics Emeritus at Cornell University, and a well-known US-American condensed matter physicist, active in science communication, famous for his dissatisfaction with the Copenhagen interpretation and an obsession with properly punctuating equations. And that’s also what his essays are about: quantum mechanics, academia, condensed matter physicists, writing in general, and obsessive punctuation in particular. Why Quark Rhymes With Pork collects all of Mermin’s Reference Frame columns published in Physics Today from 1988 to 2009, updated with postscripts, plus 13 previously unpublished essays.

The earliest of Mermin’s Reference Frame columns stem from the age of handwritten transparencies and predate the arXiv, the Superconducting Superdisaster, and the “science wars” of the 1990s. I read these first essays with the same delighted horror evoked by my grandma’s tales of slide-rules and logarithmic tables, until I realized that we’re still discussing today the same questions as Mermin did 20 years ago: Why do we submit papers to journals for peer review instead of reviewing them independently of journal publication? Have we learned anything profound in the last half century? What do you do when you give a talk and have mustard on your ear? Why is the sociology of science so utterly disconnected from the practice of science? Does anybody actually read PRL? And, of course, the mother of all questions: How to properly pronounce “quark”?

The later essays in the book mostly focus on the quantum world, just what is and isn’t wrong with it, and include the most insightful (and yet brief) expositions of quantum computing that I have come across. The reader also hears again from Professor Mozart, a semi-fictional character that Mermin introduced in his Reference Frame columns. Several of the previously unpublished pieces are summaries of lectures, birthday speeches, and obituaries.

Even though some of Mermin’s essays are accessible to the uninitiated, most of them are likely incomprehensible without some background knowledge in physics, either because he presumes technical knowledge or because the subject of his writing must remain entirely obscure. The very first essay might make a good example. It channels Mermin’s outrage over “Lagrangeans,” and even though written with both humor and purpose, it’s a spelling that I doubt non-physicists will perceive as properly offensive. Likewise, a 12-verse poem on the standard model or elaborations on how to embed equations into text will find their audience mostly among physicists.

My only prior contact with Mermin’s writing was a Reference Frame in 2009, in which Mermin laid out his favorite interpretation of quantum mechanics, Qbism, a topic also pursued in several of this book’s chapters. Proposed by Carl Caves, Chris Fuchs, and Rüdiger Schack, Qbism views quantum mechanics as the observers’ rule-book for updating information about the world. In his 2009 column, Mermin argues that it is a “bad habit” to believe in the reality of the quantum state. “I hope you will agree,” he writes, “that you are not a continuous field of operators on an infinite-dimensional Hilbert space.”

I left a comment on this column, lamenting that Mermin’s argument is “polemic” and “uninsightful,” an offhand complaint that Physics Today published a few months later. Mermin replied that his column was “an amateurish attempt” to contribute to the philosophy of science and quantum foundations. But while reading Why Quark Rhymes With Pork, I found his amateurism to be a benefit: In contrast to professional attempts to contribute to the philosophy of science (or linguistics, or sociology, or scholarly publishing) Mermin’s writing is mostly comprehensible. I’m thus happy to leave further complaints to philosophers (or linguists, or sociologists).

Why Quark Rhymes With Pork is a book I’d never have bought. But having read it, I think you should read it too. Because I’d rather not still discuss the same questions 20 years from now.

And the only correct way to pronounce quark is of course the German way as “qvark.”

[This book review appeared in the November 2016 issue of Physics Today.]

Friday, December 16, 2016

Cosmic rays hint at new physics just beyond the reach of the LHC

Cosmic ray shower. Artist’s impression.
[Img Src]
The Large Hadron Collider (LHC) – the world’s presently most powerful particle accelerator – reaches a maximum collision energy of 14 TeV. But cosmic rays that collide with atoms in the upper atmosphere have been measured with collision energies about ten times as high.
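
For a rough comparison (my numbers, not the collaboration’s): a cosmic-ray proton with lab-frame energy E_lab that hits a proton at rest reaches a center-of-mass energy of about

    \sqrt{s} \approx \sqrt{2\,E_{\rm lab}\,m_p c^2} \approx \sqrt{2 \times (5\times 10^{18}\,\mathrm{eV}) \times (0.94\times 10^{9}\,\mathrm{eV})} \approx 10^{14}\,\mathrm{eV} = 100\,\mathrm{TeV}.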

The two types of observations complement each other. At the LHC, energies are smaller, but collisions happen in a closely controlled experimental environment, directly surrounded by detectors. This is not the case for cosmic rays – their collisions reach higher energies, but the experimental uncertainties are higher.

Recent results from the Pierre Auger Cosmic Ray Observatory at center-of-mass energies of approximately 100 TeV are incompatible with the Standard Model of particle physics and hint at unexplained new phenomena. The statistical significance is not high, currently at 2.1 sigma (or 2.9 for a more optimistic simulation). That’s approximately a one-in-100 probability for the mismatch to be due to random fluctuations.

Cosmic rays are protons or light atomic nuclei which come from outer space. These particles are accelerated in galactic magnetic fields, though exactly how they get their high speeds is often unknown. When they enter the atmosphere of planet Earth, they sooner or later hit an air molecule. This destroys the initial particle and creates a primary shower of new particles. This shower has an electromagnetic part and a part of quarks and gluons that quickly form bound states known as hadrons. These particles undergo further decays and collisions, leading to a secondary shower.

The particles of the secondary shower can be detected on Earth in large detector arrays like Pierre Auger, which is located in Argentina. Pierre Auger has two types of detectors: 1) surface detectors that directly collect the particles which make it to the ground, and 2) fluorescence detectors which capture the light emitted by ionized air molecules.

The hadronic component of the shower is dominated by pions, which are the lightest mesons, composed of a quark and an anti-quark. The neutral pions decay quickly, mostly into photons; the charged pions decay into muons which make it into the ground-based detectors.

It has been known for several years that the muon signal seems too large compared to the electromagnetic signal – the balance between them is off. This conclusion, however, did not rest on very solid data analysis because it depended on an estimate of the total energy, and that’s very hard to obtain if you don’t measure all particles of the shower and have to extrapolate from what you do measure.

In the new paper – just published in PRL – the Pierre Auger collaboration used a different analysis method for the data, one that does not depend on the total energy calibration. They individually fit the results of detected showers by comparing them to computer-simulated events. From a previously generated sample, they pick the simulated event that best matches the fluorescence result.

Then they add two parameters to also fit the hadronic result: One parameter adjusts the energy calibration of the fluorescence signal, the other rescales the number of particles in the hadronic component. Then they look for the best-fit values and find that these are systematically off the standard model prediction. As an aside, their analysis also shows that the energy does not need to be recalibrated.
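
To illustrate the logic of such a two-parameter fit – with made-up numbers, not the collaboration’s actual code or data – here is a minimal sketch:

    import numpy as np
    from scipy.optimize import minimize

    # Toy stand-ins for one observed shower and its best-matching simulated
    # event. All numbers here are hypothetical, purely for illustration.
    obs_em, obs_mu = 1.00, 1.45        # observed EM and muonic signals (arb. units)
    sim_em, sim_mu = 1.00, 1.00        # simulated signals for the matched event
    sigma_em, sigma_mu = 0.05, 0.10    # toy measurement uncertainties

    def chi2(params):
        R_E, R_had = params            # energy rescaling, hadronic rescaling
        em_pred = R_E * sim_em                # EM signal scales with energy
        mu_pred = R_E**0.9 * R_had * sim_mu   # muon number grows roughly as E^0.9
        return ((obs_em - em_pred) / sigma_em)**2 + ((obs_mu - mu_pred) / sigma_mu)**2

    best = minimize(chi2, x0=[1.0, 1.0])
    print(best.x)  # R_E close to 1 (no recalibration), R_had above 1 (muon excess)

In this toy version the fit pushes the hadronic rescaling above one while leaving the energy calibration alone – the same qualitative outcome the collaboration reports.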

The main reason for the mismatch with the standard model predictions is that the detectors measure more muons than expected. What’s up with those muons? Nobody knows, but the origin of the mystery seems not in the muons themselves, but in the pions from whose decay they come.

Since the neutral pions have a very short lifetime and decay almost immediately into photons, essentially all energy that goes into neutral pions is lost for the production of muons. Besides the neutral pion there are two charged pions, and the more energy is left for these and other hadrons, the more muons are produced in the end. So the result by Pierre Auger indicates that the total energy in neutral pions is smaller than what the present simulations predict.
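
How sensitively the muon count reacts to the fraction of energy that stays in charged pions can be seen in a toy version of the shower, the Heitler-Matthews cascade model (a standard back-of-the-envelope model; the parameter values below are my own toy choices):

    import math

    # Heitler-Matthews toy cascade: each interaction turns a hadron of energy E
    # into n_tot pions of energy E/n_tot. Neutral pions (fraction 1-f) decay to
    # photons and leave the hadronic cascade; charged pions (fraction f) interact
    # again until they drop below E_crit, where they decay to muons.
    E0 = 1e8       # primary energy in GeV (~10^17 eV), toy value
    E_crit = 20.0  # GeV, roughly where charged pions decay rather than interact
    n_tot = 10     # pions per interaction, toy value

    def n_muons(f):
        # after k generations the pion energy is E0/n_tot^k; the cascade stops
        # at E_crit, leaving N_mu = (f*n_tot)^k = (E0/E_crit)^beta muons
        beta = math.log(f * n_tot) / math.log(n_tot)
        return (E0 / E_crit) ** beta

    print(n_muons(2/3))   # standard assumption: 2/3 of pions are charged
    print(n_muons(0.75))  # less energy in neutral pions -> roughly twice the muons

A modest shift of energy away from the neutral pions is enough to change the muon number substantially – which is why the imbalance is such a sensitive probe.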

One possible explanation for this, which has been proposed by Farrar and Allen, is that we misunderstand chiral symmetry breaking. It is the breaking of chiral symmetry that accounts for the biggest part of the masses of nucleons (not the Higgs!). The pions are the (pseudo) Goldstone bosons of that broken symmetry, which is why they are so light and ultimately why they are produced so abundantly. Pions are not exactly massless, and thus “pseudo”, because chiral symmetry is only approximate. The chiral phase transition is believed to be close to the confinement transition, that being the transition from a medium of quarks and gluons to color-neutral hadrons. For all we know, it takes place at a temperature of approximately 150 MeV. Above that temperature chiral symmetry is “restored”.

Chiral symmetry restoration almost certainly plays a role in the cosmic ray collisions, and a more important role than it does at the LHC. So, quite possibly this is the culprit here. But it might be something more exotic, new short-lived particles that become important at high energies and which make interaction probabilities deviate from the standard model extrapolation. Or maybe it’s just a measurement fluke that will go away with more data.

If the signal remains, however, that’s a strong motivation to build the next larger particle collider which could reach 100 TeV. Our accelerators would then be as good as the heavens.

[This post previously appeared on Forbes.]

Saturday, December 10, 2016

Away Note

I'll be in Munich next week, attending a workshop at the Center for Advanced Studies on the topic "Reasoning in Physics." I'm giving a talk about "Naturalness: How religion turned to math" which has attracted criticism even before I've given it. I take that to mean I'm hitting a nerve ;)

Thursday, December 08, 2016

No, physicists have no fear of math. But they should have more respect.

Heart curve. [Img Src]
“Even physicists are ‘afraid’ of mathematics,” a recent headline screamed at me. This, I thought, is ridiculous. You can accuse physicists of many stupidities, but being afraid of math isn’t one of them.

But the headline was supposedly based on scientific research. Someone, somewhere, had written a paper claiming that physicists are more likely to cite papers which are light on math. So, I put aside my confirmation bias and read the paper. It was more interesting than expected.

The paper in question, it turned out, didn’t show that physicists are afraid of math. Instead, it was a reply to a comment on an earlier paper which had claimed that biologists are afraid of math.

The original paper, “Heavy use of equations impedes communication among biologists,” was published in 2012 by Tim Fawcett and Andrew Higginson, both at the Centre for Research in Animal Behaviour at the University of Exeter. They analyzed a sample of 649 papers published in the top journals in ecology and evolution and looked for a correlation between the density of equations (equations per page of text) and the number of citations. They found a statistically significant negative correlation: Papers with a higher density of equations were less cited.

Unexpectedly, a group of physicists came to the defense of biologists. In a paper published last year under the title “Are physicists afraid of mathematics?” Jonathan Kollmer, Thorsten Pöschel, and Jason Gallas set out to demonstrate that the statistics underlying the conclusion that biologists are afraid of math were fundamentally flawed. With these methods, the authors claimed, you could show anything, even that physicists are afraid of math. Which is surely absurd. Right? They argued that Fawcett and Higginson had arrived at a wrong conclusion because they had sorted their data into peculiar and seemingly arbitrarily chosen bins.

It’s a good point to make. The chance that you find a correlation with some binning or other is much higher than the chance that you find it with one particular binning. Therefore, you can easily screw up measures of statistical significance if you allow a search for a correlation over different binnings.
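
You can see the effect in a small Monte Carlo experiment (my own toy setup, not the data of either paper): generate uncorrelated data, then hunt for a "significant" trend over several binnings:

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(42)
    trials, n = 2000, 649
    hits_fixed = hits_scanned = 0

    for _ in range(trials):
        x, y = rng.random(n), rng.random(n)   # uncorrelated by construction
        pvals = []
        for nbins in range(3, 11):            # scan over several binnings
            edges = np.quantile(x, np.linspace(0, 1, nbins + 1))
            idx = np.digitize(x, edges[1:-1]) # bin index 0 .. nbins-1
            centers = np.array([x[idx == b].mean() for b in range(nbins)])
            means = np.array([y[idx == b].mean() for b in range(nbins)])
            pvals.append(pearsonr(centers, means)[1])
        hits_fixed += pvals[0] < 0.05         # one binning, chosen in advance
        hits_scanned += min(pvals) < 0.05     # best of all binnings tried

    print(hits_fixed / trials, hits_scanned / trials)

With one pre-chosen binning the false-positive rate comes out near the nominal 5 percent; picking the most favorable binning after the fact inflates it noticeably.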

As an example, Kollmer et al used a sample of papers from Physical Review Letters (PRL) and showed that, with the bins used by Fawcett and Higginson, physicists too could be said to be afraid of math. Alas, the correlation goes away with a finer binning and hence is meaningless.

PRL, for those not familiar with it, is one of the most highly ranked journals in physics generally. It publishes papers from all subfields that are of broad interest to the community. PRL also has a strictly enforced page limit: You have to squeeze everything on four pages – an imo completely idiotic policy that more often than not means the authors have to publish a longer, comprehensible, paper elsewhere.

The paper that has now made headlines is a reply by the authors of the original study to the physicists who criticized it. Fawcett and Higginson explain that the physicists’ data analysis is too naïve. They point out that the citation rates have a pronounced rich-get-richer trend which amplifies any initial differences. This leads to an `overdispersed’ data set in which the standard errors are misleading. In that case, a more complicated statistical analysis is necessary, which is the type of analysis they had done in the original paper. The seemingly arbitrary bins were just chosen to visualize the results, they write, but their finding is independent of that.
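
Here is a minimal sketch of the problem with simulated data (toy numbers, not theirs): when citation counts have a variance much larger than their mean, naive Poisson standard errors come out far too small, while an overdispersion-aware model – here a negative binomial – gives honest ones:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 649                                # sample size as in the original study
    eq_density = rng.exponential(1.0, n)   # toy "equations per page"
    mu = np.exp(2.0 - 0.1 * eq_density)    # weak negative trend in the mean
    # Overdispersed counts (variance >> mean), mimicking rich-get-richer citations:
    citations = rng.negative_binomial(1.5, 1.5 / (1.5 + mu))

    X = sm.add_constant(eq_density)
    poisson = sm.GLM(citations, X, family=sm.families.Poisson()).fit()
    negbin = sm.GLM(citations, X, family=sm.families.NegativeBinomial()).fit()
    # The Poisson standard error on the slope is much too small, which is how
    # naive analyses come to misjudge trends in overdispersed data:
    print(poisson.bse[1], negbin.bse[1])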

Fawcett and Higginson then repeated the same analysis on the physics papers and revealed a clear trend: Physicists too are more likely to cite papers with a smaller density of equations!

I have to admit this doesn’t surprise me much. A paper with fewer verbal explanations per equation assumes the reader is more familiar with the particular formalism being used, and this means the target audience shrinks. The consequence is fewer citations.

But this doesn’t mean physicists are afraid of math, it merely means they have to decide which calculations are worth their time. If it’s a topic they might never have an application for, making their way through a paper heavy on math might not be so helpful for advancing their research. On the other hand, reading a more general introduction or short survey with fewer equations might be useful even on topics farther from one’s own research. These citation habits therefore show mostly that the more specialized a paper, the fewer people will read it.

I had a brief exchange with Andrew Higginson, one of the authors of the paper that’s been headlined as “Physicists are afraid of math.” He emphasizes that their point was that “busy scientists might not have time to digest lots of equations without accompanying text.” But I don’t think that’s the right conclusion to draw. Busy scientists who are familiar with the equations might not have the time to digest much text, and busy scientists might not have the time to digest long papers, period. (The corresponding author of the physicists’ study did not respond to my request for comment.)

In their recent reply, Fawcett and Higginson suggest that “an immediate, pragmatic solution to this apparent problem would be to reduce the density of equations and add explanatory text for non-specialised readers.”

I’m not sure, however, there is any problem here in need of being solved. Adding text for non-specialized readers might be cumbersome for the specialized readers. I understand the risk that the current practice exaggerates the already pronounced specialization, which can hinder communication. But this, I think, would be better taken care of by reviews and overview papers to be referenced in the, typically short, papers on recent research.

So, I don’t think physicists are afraid of math. Indeed, it sometimes worries me how much and how uncritically they love math.

Math can do a lot of things for you, but in the end it’s merely a device to derive consequences from assumptions. Physics isn’t math, however, and physics papers don’t work by theorems and proofs. Theoretical physicists pride themselves on their intuition and frequently take the freedom to shortcut mathematical proofs by drawing on experience. This, however, amounts to making additional assumptions, for example that a certain relation holds or an expansion is well-defined.

That works well as long as these assumptions are used to arrive at testable predictions. In that case it matters only if the theory works, and the mathematical rigor can well be left to mathematical physicists for clean-up, which is how things went historically.

But today in the foundations of physics, theory-development proceeds largely without experimental feedback. In such cases, keeping track of assumptions is crucial – otherwise it becomes impossible to tell what really follows from what. Or, I should say, it would be crucial, but theoretical physicists are bad at this.

The consequence is that some research areas can amass loosely connected arguments that follow from a set of assumptions that aren’t written down anywhere. This might result in an entirely self-consistent construction and yet not have anything to do with reality. The result is conceptual mud in which we can’t tell philosophy from mathematics.

One such unwritten assumption that is widely used, for example, is the absence of finetuning or that a physical theory be “natural.” This assumption isn’t supported by evidence and it can’t be mathematically derived. Hence, it should be treated as a hypothesis – but that isn’t happening because the assumption itself isn’t recognized for what it is.

Another unwritten assumption is that more fundamental theories should somehow be simpler. This is reflected for example in the belief that the gauge couplings of the standard model should meet in one point. That’s an assumption; it isn’t supported by evidence. And yet it’s not treated as a hypothesis but as a guide to theory-development.
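
For concreteness, the belief rests on the standard one-loop running of the inverse couplings (textbook formulas, not specific to any of the works discussed here),

    \frac{1}{\alpha_i(\mu)} = \frac{1}{\alpha_i(M_Z)} - \frac{b_i}{2\pi}\,\ln\frac{\mu}{M_Z}, \qquad (b_1, b_2, b_3)_{\rm SM} = \left(\tfrac{41}{10},\, -\tfrac{19}{6},\, -7\right),

and with the standard model particle content the three lines approach each other around 10^{13} to 10^{16} GeV but don’t actually meet in one point – which is commonly taken as motivation for extending the particle content.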

And all presently existing research on the quantization of gravity rests on the assumption that quantum theory itself remains unmodified at short distance scales. This is another assumption that isn’t written down anywhere. Should that turn out to be not true, decades of research will have been useless.

Lacking experimental guidance, what we need in the foundations of physics is conceptual clarity. We need rigorous math, not claims to experience, intuition, and aesthetic appeal. Don’t be afraid – we need more math.

Friday, December 02, 2016

Can dark energy and dark matter emerge together with gravity?

A macaroni pie? Elephants blowing balloons?
No, it’s Verlinde’s entangled universe.
In a recent paper, the Dutch physicist Erik Verlinde explains how dark energy and dark matter arise in emergent gravity as deviations from general relativity.

It’s taken me some while to get through the paper. Vaguely titled “Emergent Gravity and the Dark Universe,” it’s a 51-page catalog of ideas patched together from general relativity, quantum information, quantum gravity, condensed matter physics, and astrophysics. It is clearly still research in progress and not anywhere close to completion.

The new paper substantially expands on Verlinde’s earlier idea that the gravitational force is some type of entropic force. If that was so, it would mean gravity is not due to the curvature of space-time – as Einstein taught us – but instead caused by the interaction of the fundamental elements which make up space-time. Gravity, hence, would be emergent.

I find it an appealing idea because it allows one to derive consequences without having to specify exactly what the fundamental constituents of space-time are. Like you can work out the behavior of gases under pressure without having a model for atoms, you can work out the emergence of gravity without having a model for whatever builds up space-time. The details would become relevant only at very high energies.

As I noted in a comment on the first paper, Verlinde’s original idea was merely a reinterpretation of gravity in thermodynamic quantities. What one really wants from emergent gravity, however, is not merely to get back general relativity. One wants to know which deviations from general relativity come with it, deviations that are specific predictions of the model and which can be tested.

Importantly, in emergent gravity such deviations from general relativity could make themselves noticeable at long distances. The reason is that the criterion for what it means for two points to be close by each other emerges with space-time itself. Hence, in emergent gravity there isn’t a priori any reason why new physics must be at very short distances.

In the new paper, Verlinde argues that his variant of emergent gravity gives rise to deviations from general relativity on long distances, and these deviations correspond to dark energy and dark matter. He doesn’t explain dark energy itself. Instead, he starts with a universe that by assumption contains dark energy like we observe, ie one that has a positive cosmological constant. Such a universe is described approximately by what theoretical physicists call a de Sitter space.

Verlinde then argues that when one interprets this cosmological constant as the effect of long-distance entanglement between the conjectured fundamental elements, then one gets a modification of the gravitational law which mimics dark matter.

The reason it works is that to get normal gravity one assigns an entropy to a volume of space which scales with the area of the surface that encloses the volume. This is known as the “holographic scaling” of entropy, and is at the core of Verlinde’s first paper (and earlier work by Jacobson and Padmanabhan and others). To get deviations from normal gravity, one has to do something else. For this, Verlinde argues that de Sitter space is permeated by long-distance entanglement which gives rise to an entropy which scales, not with the surface area of a volume, but with the volume itself. It consequently leads to a different force-law. And this force-law, so he argues, has an effect very similar to dark matter.
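
Schematically – this is my shorthand for the scaling behavior, not Verlinde’s exact expressions – the two contributions are

    S_{\rm area}(R) = \frac{k_B c^3}{4\hbar G}\,A(R) \propto R^2, \qquad S_{\rm DE}(R) \sim \frac{R}{L}\,\frac{k_B c^3}{4\hbar G}\,A(R) \propto V(R),

where L is the de Sitter radius set by the cosmological constant. It’s the additional volume-scaling piece that modifies the force law at large distances.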

Not only does this modified force-law from the volume-scaling of the entropy mimic dark matter, it more specifically reproduces some of the achievements of modified gravity.

In his paper, Verlinde derives the observed relation between the luminosity of spiral galaxies and the rotational velocity of their outermost stars, known as the Tully-Fisher relation. The Tully-Fisher relation can also be found in certain modifications of gravity, such as Moffat Gravity (MOG), but more generally every modification that approximates Milgrom’s modified Newtonian Dynamics (MOND). Verlinde, however, does more than that. He also derives the parameter which quantifies the acceleration at which the modification of general relativity becomes important, and gets a value that fits well with observations.

It was known before that this parameter is related to the cosmological constant. There have been various attempts to exploit this relation, most recently by Lee Smolin. In Verlinde’s approach the relation between the acceleration scale and the cosmological constant comes out naturally, because dark matter has the same origin as dark energy. Verlinde further offers expressions for the apparent density of dark matter in galaxies and clusters, something that, with some more work, can probably be checked observationally.
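
The numerical coincidence behind this relation is easy to state (standard numbers, nothing specific to Verlinde’s paper):

    a_0 \sim c\,H_0 \approx c^2\sqrt{\Lambda/3} \approx 7\times 10^{-10}\,\mathrm{m/s^2},

which lies within an order of magnitude of the observed acceleration scale a_0 ≈ 1.2×10^{-10} m/s^2; numerically the two differ by roughly a factor 2π.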

I find this an intriguing link which suggests that Verlinde is onto something. However, I also find the model sketchy and unsatisfactory in many regards. General Relativity is a rigorously tested theory with many achievements. To do any better than general relativity is hard, and thus for any new theory of gravity the most important thing is to have a controlled limit in which General Relativity is reproduced to good precision. How this might work in Verlinde’s approach isn’t clear to me because he doesn’t even attempt to deal with the general case. He starts right away with cosmology.

Now in cosmology we have a preferred frame which is given by the distribution of matter (or by the restframe of the CMB if you wish). In general relativity this preferred frame does not originate in the structure of space-time itself but is generated by the stuff in it. In emergent gravity models, in contrast, the fundamental structure of space-time tends to have an imprint of the preferred frame. This fundamental frame can lead to violations of the symmetries of general relativity and the effects aren’t necessarily small. Indeed, there are many experiments that have looked for such effects and haven’t found anything. It is hence a challenge for any emergent gravity approach to demonstrate just how to avoid such violations of symmetries.

Another potential problem with the idea is the long-distance entanglement which is sprinkled over the universe. The physics which we know so far works “locally,” meaning stuff can’t interact over long distances without a messenger that travels through space and time from one to the other point. It’s the reason my brain can’t make spontaneous visits to the Andromeda nebula, and most days I think that benefits both of us. But like that or not, the laws of nature we presently have are local, and any theory of emergent gravity has to reproduce that.

I have worked for some years on non-local space-time defects, and based on what I learned from that I don’t think the non-locality of Verlinde’s model is going to be a problem. My non-local defects aren’t the same as Verlinde’s entanglement, but guessing that the observational consequences scale similarly, the amount of entanglement that you need to get something like a cosmological constant is too small to leave any other noticeable effects on particle physics. I am therefore more worried about the recovery of local Lorentz-invariance. I went to great pains in my models to make sure I wouldn’t get such violations, and I can’t see how Verlinde addresses the issue.

The more general problem I have with Verlinde’s paper is the same I had with his 2010 paper, which is that it’s fuzzy. It remained unclear to me exactly what the necessary assumptions are. I hence don’t know whether it’s really necessary to have this interpretation with the entanglement and the volume-scaling of the entropy and with assigning elasticity to the dark energy component that pushes in on galaxies. Maybe it would be sufficient already to add a non-local modification to the sources of general relativity. Having toyed with that idea for a while, I doubt it. But I think Verlinde’s approach would benefit from a more axiomatic treatment.

In summary, Verlinde’s recent paper offers the most convincing argument I have seen so far that dark matter and dark energy are related. However, it is presently unclear whether this approach might not have unwanted side-effects that are already in conflict with observation.

Wednesday, November 30, 2016

Dear Dr. B: What is emergent gravity?

    “Hello Sabine, I've seen a couple of articles lately on emergent gravity. I'm not a scientist so I would love to read one of your easy-to-understand blog entries on the subject.


    Michael Tucker
    Wichita, KS”

Dear Michael,

Emergent gravity has been in the news lately because of a new paper by Erik Verlinde. I’ll tell you some more about that paper in an upcoming post, but answering your question makes for a good preparation.

The “gravity” in emergent gravity refers to the theory of general relativity in the regimes where we have tested it. That means Einstein’s field equations and curved space-time and all that.

The “emergent” means that gravity isn’t fundamental, but instead can be derived from some underlying structure. That’s what we mean by “emergent” in theoretical physics: If theory B can be derived from theory A but not the other way round, then B emerges from A.

You might be more familiar with seeing the word “emergent” applied to objects or properties of objects, which is another way physicists use the expression. Sound waves in the theory of gases, for example, emerge from molecular interactions. Van der Waals forces emerge from quantum electrodynamics. Protons emerge from quantum chromodynamics. And so on.

Everything that isn’t in the standard model or general relativity is known to be emergent already. And since I know that it annoys so many of you, let me point out again that, yes, to our current best knowledge this includes cells and brains and free will. Fundamentally, you’re all just a lot of interacting particles. Get over it.

General relativity and the standard model are currently the most fundamental descriptions of nature which we have. For the theoretical physicist, the interesting question is then whether these two theories are also emergent from something else. Most physicists in the field think the answer is yes. And any theory in which general relativity – in the tested regimes – is derived from a more fundamental theory, is a case of “emergent gravity.”

That might not sound like such a new idea and indeed it isn’t. In string theory, for example, gravity – like everything else – “emerges” from, well, strings. There are a lot of other attempts to explain gravitons – the quanta of the gravitational interaction – as not-fundamental “quasi-particles” which emerge, much like sound-waves, because space-time is made of something else. An example of this is the model pursued by Xiao-Gang Wen and collaborators in which space-time, and matter, and really everything is made of qbits. Including cells and brains and so on.

Xiao-Gang’s model stands out because it can also include the gauge-groups of the standard model, though last time I looked chirality was an issue. But there are many other models of emergent gravity which focus on just getting general relativity. Lorenzo Sindoni has written a very useful, though quite technical, review of such models.

Almost all such attempts to have gravity emerge from some underlying “stuff” run into trouble because the “stuff” defines a preferred frame which shouldn’t exist in general relativity. They violate Lorentz-invariance, which we know observationally is fulfilled to very high precision.

An exception to this is entropic gravity, an idea pioneered by Ted Jacobson 20 years ago. Jacobson pointed out that there are very close relations between gravity and thermodynamics, and this research direction has since gained a lot of momentum.
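
In a nutshell, Jacobson’s argument (as in his 1995 paper) demands that the Clausius relation holds for all local Rindler horizons, with the Unruh temperature and an entropy proportional to the horizon area,

    \delta Q = T\,\delta S, \qquad T = \frac{\hbar a}{2\pi c k_B}, \qquad S = \frac{k_B c^3}{4\hbar G}\,A,

and the Einstein field equations then follow as an equation of state.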

The relation between general relativity and thermodynamics in itself doesn’t make gravity emergent, it’s merely a reformulation of gravity. But thermodynamics itself is an emergent theory – it describes the behavior of very large numbers of some kind of small things. Hence, that gravity looks a lot like thermodynamics makes one think that maybe it’s emergent from the interaction of a lot of small things.

What are the small things? Well, the currently best guess is that they’re strings. That’s because string theory is (at least to my knowledge) the only way to avoid the problems with Lorentz-invariance violation in emergent gravity scenarios. (Gravity is not emergent in Loop Quantum Gravity – its quantized version is directly encoded in the variables.)

But as long as you’re not looking at very short distances, it might not matter much exactly what gravity emerges from. Like thermodynamics was developed before it could be derived from statistical mechanics, we might be able to develop emergent gravity before we know what to derive it from.

This is only interesting, however, if the gravity that “emerges” is only approximately identical to general relativity, and differs from it in specific ways. For example, if gravity is emergent, then the cosmological constant and/or dark matter might emerge with it, whereas in our current formulation, these have to be added as sources for general relativity.

So, in summary “emergent gravity” is a rather vague umbrella term that encompasses a large number of models in which gravity isn’t a fundamental interaction. The specific theory of emergent gravity which has recently made headlines is better known as “entropic gravity” and is, I would say, the currently most promising candidate for emergent gravity. It’s believed to be related to, or maybe even be part of string theory, but if there are such links they aren’t presently well understood.

Thanks for an interesting question!

Aside: Sorry about the issue with the comments. I turned on G+ comments, thinking they'd be displayed in addition, but that instead removed all the other comments. So I've reset this to the previous version, though I find it very cumbersome to have to follow four different comment threads for the same post.

Monday, November 28, 2016

This isn’t quantum physics. Wait. Actually it is.

Rocket science isn’t what it used to be. Now that you can shoot someone to Mars if you can spare a few million, the colloquialism for “It’s not that complicated” has become “This isn’t quantum physics.” And there are many things which aren’t quantum physics. For example, making a milkshake:
“Guys, this isn’t quantum physics. Put the stuff in the blender.”
Or losing weight:
“if you burn more calories than you take in, you will lose weight. This isn't quantum physics.”
Or economics:
“We’re not talking about quantum physics here, are we? We’re talking ‘this rose costs 40p, so 10 roses costs £4’.”
You should also know that Big Data isn’t Quantum Physics and Basketball isn’t Quantum Physics and not driving drunk isn’t quantum physics. Neither is understanding that “[Shoplifting isn’t] a way to accomplish anything of meaning,” or grasping that no doesn’t mean yes.

But my favorite use of the expression comes from Noam Chomsky who explains how the world works (thus the modest title of his book):
“Everybody knows from their own experience just about everything that’s understood about human beings – how they act and why – if they stop to think about it. It’s not quantum physics.”
From my own experience, stopping to think and believing one understands other people effortlessly is the root of much unnecessary suffering. Leaving aside that it’s quite remarkable some people believe they can explain the world, and even more remarkable others buy their books, all of this is, as a matter of fact, quantum physics. Sorry, Noam.

Yes, that’s right. Basketballs, milkshakes, weight loss – it’s all quantum physics. Because it’s all happening by the interactions of tiny particles which obey the rules of quantum mechanics. If it wasn’t for quantum physics, there wouldn’t be atoms to begin with. There’d be no Sun, there’d be no drunk driving, and there’d be no rocket science.

Quantum mechanics is often portrayed as the theory of the very small, but this isn’t so. Quantum effects can stretch over large distances and have been measured over distances up to several hundred kilometers. It’s just that we don’t normally observe them in daily life.

The typical quantum effects that you have heard of – things whose position and momentum can’t be measured precisely, are both dead and alive, have a spooky action at a distance and so on – don’t usually manifest themselves for large objects. But that doesn’t mean that the laws of quantum physics suddenly stop applying at a hair’s width. It’s just that the effects are feeble and human experience is limited. There is some quantum physics, however, which we observe wherever we look: If it wasn’t for Pauli’s exclusion principle, you’d fall right through the ground.

Indeed, a much more interesting question is “What is not quantum physics?” For all we presently know, the only thing not quantum is space-time and its curvature, manifested by gravity. Most physicists believe, however, that gravity too is a quantum theory, just that we haven’t been able to figure out how this works.

“This isn’t quantum physics,” is the most unfortunate colloquialism ever because really everything is quantum physics. Including Noam Chomsky.

Wednesday, November 23, 2016

I wrote you a song.

I know you’ve all missed my awesome chord progressions and off-tune singing, so I’ve made yet another one of my music videos!

In the attempt to protect you from my own appearance, I recently invested some money into an animation software by the name of Anime Studio. It has a 350-page tutorial. Me being myself, I didn’t read it. But I spent the last weekend clicking on any menu item that couldn’t vanish quickly enough, and I’ve integrated the outcome into the above video. I think I’ve kind of figured out now how the basics work. I might do some more of this. It was actually fun to turn a visual idea into a movie, something I’ve never done before. Though it might help if I could draw, so excuse the sickly looking tree.

Having said this, I also need to get myself a new video editing software. I’m presently using Corel VideoStudio Pro which, after the Win10 upgrade, works even worse than it did before. I could not for the life of me export the clip with both good video and audio quality. In the end I sacrificed the video quality, so sorry about the glitches. They’re probably simply computation errors or, I don’t know, the ghost of Windows 7 still haunting my hard disk.

The song, I hope, explains itself. One could say it’s the aggregated present mood of my facebook and twitter feeds. You can download the mp3 here.

I wish you all a Happy Thanksgiving, and I want to thank you for giving me some of your attention, every now and then. I especially thank those of you who have paid attention to the donate-button in the top right corner. It’s not much that comes in through this channel, but for me it makes all the difference -- it demonstrates that you value my writing and that keeps me motivated.

I’m somewhat behind with a few papers that I wanted to tell you about, so I’ll be back next week with more words and fewer chords. Meanwhile, enjoy my weltschmerz song ;)

Wednesday, November 16, 2016

A new theory SMASHes problems

Most of my school nightmares are history exams. But I also have physics nightmares, mostly about not being able to recall Newton’s laws. Really, I didn’t like physics in school. The way we were taught the subject, it was mostly dead people’s ideas. On the rare occasion our teacher spoke about contemporary research, I took a mental note every time I heard “nobody knows.” Unsolved problems were what fascinated me, not laws I knew had long been replaced by better ones.

Today, mental noting is no longer necessary – Wikipedia helpfully lists the unsolved problems in physics. And indeed, in my field pretty much every paper starts with a motivation that names at least one of these problems, preferably several.

A recent paper which excels on this count is that of Guillermo Ballesteros and collaborators, who propose a new phenomenological model named SM*A*S*H.
    Unifying inflation with the axion, dark matter, baryogenesis and the seesaw mechanism
    Guillermo Ballesteros, Javier Redondo, Andreas Ringwald, Carlos Tamarit
    arXiv:1608.05414 [hep-ph]

A phenomenological model in high energy particle physics is an extension of the Standard Model by additional particles (or fields, respectively) for which observable, and potentially testable, consequences can be derived. There are infinitely many such models, so to grab the reader’s attention, you need a good motivation for why your model in particular is worth studying. Ballesteros et al do this by tackling not one but five different problems! The name SM*A*S*H stands for Standard Model*Axion*Seesaw*Higgs portal inflation.

First, there are the neutrino oscillations. Neutrinos can oscillate into each other if at least two of them have small but nonzero masses. But neutrinos are fermions and fermions usually acquire masses by a coupling between left-handed and right-handed versions of the particle. Trouble is, nobody has ever seen a right-handed neutrino. We have measured only left-handed neutrinos (or right-handed anti-neutrinos).

So to explain neutrino oscillations, either there must be right-handed neutrinos so heavy that we haven’t yet seen them, or the neutrinos differ from the other fermions – they could be so-called Majorana neutrinos, which can couple to themselves and that way create masses. Nobody knows which is the right explanation.

Ballesteros et al in their paper assume heavy right-handed neutrinos. These create small masses for the left-handed neutrinos by a process called see-saw. This is an old idea, but the authors then try to use these heavy neutrinos also for other purposes.
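
The see-saw itself is a one-line calculation. For one generation, the neutrino mass matrix in the (ν_L, ν_R) basis is

    M = \begin{pmatrix} 0 & m_D \\ m_D & M_R \end{pmatrix}, \qquad m_{\rm light} \approx \frac{m_D^2}{M_R}, \quad m_{\rm heavy} \approx M_R \qquad (M_R \gg m_D),

so a Dirac mass at the electroweak scale, m_D ~ 100 GeV, together with M_R ~ 10^{14} GeV gives m_light ~ 0.1 eV: the heavier the right-handed neutrinos, the lighter the left-handed ones, hence the name.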

The second problem they take on is the baryon asymmetry, or the question why matter was left over from the Big Bang but no anti-matter. If matter and anti-matter had existed in equal amounts – as the symmetry between them would suggest – then they would have annihilated to radiation. Or, if some of the stuff failed to annihilate, the leftovers should be equal amounts of both matter and anti-matter. We have not, however, seen any large amounts of anti-matter in the universe. These would be surrounded by tell-tale signs of matter-antimatter annihilation, and none have been observed. So, presently, nobody knows what tilted the balance in the early universe.

In the SM*A*S*H model, the right-handed neutrinos give rise to the baryon asymmetry by a process called thermal leptogenesis. This works basically because the most general way to add right-handed neutrinos to the standard model already offers an option to violate this symmetry. One just has to get the parameters right. That too isn’t a new idea. What’s interesting is that Ballesteros et al point out it’s possible to choose the parameters so that the neutrinos also solve a third problem.

The third problem is dark matter. The universe seems to contain more matter than we can see at any wavelength we have looked at. The known particles of the standard model do not fit the data – they either interact too strongly or don’t form structures efficiently enough. Nobody knows what dark matter is made of. (If it is made of something. Alternatively, it could be a modification of gravity. Regardless of what xkcd says.)

In the model proposed by Ballesteros, the right-handed neutrinos could make up the dark matter. That too is an old idea and it’s not working very well: The more massive of the right-handed neutrinos can decay into lighter ones by emitting a photon, and this hasn’t been seen. The problem here is getting the mass range of the neutrinos to work for both dark matter and the baryon asymmetry. Ballesteros et al solve this problem by making up dark matter mostly from something else, a particle called the axion. This particle has the benefit of also being good to solve a fourth problem.

Fourth, the strong CP problem. The standard model is lacking a possible interaction term which would cause the strong nuclear force to violate CP symmetry. We know this term is either absent or very tiny because otherwise the neutron would have an electric dipole moment, which hasn’t been observed.

This problem can be fixed by promoting the constant in front of this term (the theta parameter) to a field. The field then will move towards the minimum of the potential, explaining the smallness of the parameter. The field however is accompanied by a particle (dubbed the “axion” by Frank Wilczek) which hasn’t been observed. Nobody knows whether the axion exists.
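
For reference, the term in question is

    \mathcal{L}_\theta = \theta\,\frac{g_s^2}{32\pi^2}\,G^a_{\mu\nu}\tilde{G}^{a\,\mu\nu},

and the experimental bound on the neutron electric dipole moment, |d_n| ≲ 3×10^{-26} e·cm, translates into θ ≲ 10^{-10} – it is this unexplained smallness that constitutes the strong CP problem.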

In the SMASH model, the axion gives rise to dark matter by leaving behind a condensate and particles that are created in the early universe from the decay of topological defects (strings and domain walls). The axion gets its mass from an additional quark-like field (denoted with Q in the paper), and also solves the strong CP problem.

Fifth, inflation, the phase of rapid expansion in the early universe. Inflation was invented to explain several observational puzzles, notably why the temperature of the cosmic microwave background seems to be almost the same in every direction we look (up to small fluctuations). That’s surprising because in a universe without inflation the different parts of the hot plasma in the early universe which created this radiation had never been in contact before. They thus had no chance to exchange energy and come to a common temperature. Inflation solves this problem by blowing up an initially small patch to gigantic size. Nobody knows, however, what causes inflation. It’s normally assumed to be some scalar field. But where that field came from or what happened to it is unclear.

Ballesteros and his collaborators assume that the scalar field which gives rise to inflation is the Higgs – the only fundamental scalar which we have so far observed. This too is an old idea, and one that works badly. To make Higgs inflation work, one needs to introduce an unconventional coupling of the Higgs field to gravity, and this leads to a breakdown of the theory (loss of unitarity) in ranges where one needs it to work (ie the breakdown can’t be blamed on quantum gravity).
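
The unconventional coupling in question is, as usually stated, a non-minimal coupling of the Higgs doublet H to the Ricci scalar R,

    \mathcal{L} \supset -\xi\,H^\dagger H\,R, \qquad \xi \sim 10^4,

where the large value of ξ is needed to match the observed amplitude of the CMB fluctuations, and it is this large value that spoils unitarity already around the scale M_P/ξ, well below the Planck mass.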

The SM*A*S*H model contains an additional scalar field which gives rise to a more complicated coupling, and the authors claim that in this case the breakdown doesn’t happen until the Planck scale (where it can be blamed on quantum gravity).

So, in summary, we have three right-handed neutrinos with their masses and mixing matrix, a new quark-like field and its mass, the axion field, a scalar field, the coupling between the scalar and the Higgs, the self-coupling of the scalar, the coupling of the quark to the scalar, the axion decay constant, the coupling of the Higgs to gravity, and the coupling of the new scalar to gravity. Though I might have missed something.

In case you just scrolled down to see whether I think this model might be correct: The answer is almost certainly no. It’s a great model by the current quality standards of the field. But when you combine several speculative ideas, each without observational evidence, you don’t get a model that is less speculative or has more evidence speaking for it.

Wednesday, November 09, 2016

Away Note

I’ll be in London for a few days, attending a RAS workshop on “Fine-Tuning on the Cosmological and the Quantum Scales.” It’s the first time I’m speaking about the topic, so I’m a little nervous about that.

It just so happens that tomorrow evening there’s a public lecture in London by Roger Penrose, which I might or might not attend, depending on whether my flight arrives as planned. I feel somewhat bad because I haven’t read his recent book. Judging just by the title, I’m afraid it might have some overlap with mine.

This public lecture is arranged by the Ideas Roadshow, which I mentioned before. It’s run by Howard Burton, former director of PI. They have some teaser videos which you might enjoy:

Speaking of former directors, I believe Neil Turok’s term at PI is about to run out, and I want to complain that I haven’t yet heard any rumors about who’s in the pipeline ^^.

As for this blog, please expect that comments might get stuck in the queue longer than usual.

Monday, November 07, 2016

Steven Weinberg doesn’t like Quantum Mechanics. So what?

A few days ago, Nobel laureate Steven Weinberg gave a one-hour lecture titled “What’s the matter with quantum mechanics?” at a workshop for science writers organized by the Council for the Advancement of Science Writing (CASW).

In his lecture, Weinberg expressed a newfound sympathy for the critics of quantum mechanics.
“I’m not as happy about quantum mechanics as I used to be, and not as dismissive of the critics. And it’s a bad sign in particular that those physicists who are happy about quantum mechanics, who see nothing wrong with it, don’t agree with each other about what it means.”
You can watch the full lecture here. (The above quote is at 17:40.)

It’s become a cliché that physicists in their late years develop an obsession with quantum mechanics. On this account, you can file Weinberg together with Mermin, Penrose, and Smolin. I’m not sure why that is. Maybe it’s something that has bothered them all along and they just never considered it important enough. Maybe it’s because they start paying more attention to their intuition, and quantum mechanics – widely regarded as non-intuitive – begins itching. Or maybe it’s because they conclude it’s the likely reason we haven’t seen any progress in the foundations of physics for 30 years.

Whatever Weinberg’s motivation, he likes neither Copenhagen, nor Many Worlds, nor decoherent or consistent histories, and he seems to be allergic to pilot waves (1:02:15). As for QBism, which Mermin finds so convincing, that doesn’t even seem noteworthy to Weinberg.

I learned quantum mechanics in the mid-1990s from Walter Greiner, the one with the textbook series. (He passed away a few weeks ago at age 80.) Walter taught the Copenhagen Interpretation. The attitude he conveyed in his lectures was what Mermin dubbed “shut up and calculate.”

Of course I, like most other students, spent some time looking into the different interpretations of quantum mechanics – nothing’s more interesting than the topics your prof refuses to talk about. But I’m an instrumentalist at heart, and I also quite like the mathematics of quantum mechanics, so I never had a problem with the Copenhagen Interpretation. I’m also, however, a phenomenologist. And so I’ve always thought of quantum mechanics as an incomplete, not fundamental, theory which needs to be superseded by a better, underlying explanation.

My misgivings about quantum mechanics are pretty much identical to the ones Weinberg expresses in his lecture. The axioms of quantum mechanics, whatever interpretation you choose, are unsatisfactory for a reductionist. They should not mention the process of measurement, because the fundamental theory should tell you what a measurement is.

If you believe the wave-function is a real thing (psi-ontic), decoherence doesn’t solve the issue because you’re left with a probabilistic state that needs to be suddenly updated. If you believe the wave-function only encodes information (psi-epistemic) and the update merely means we’ve learned something new, then you have to explain who learns and how they learn. None of the currently existing interpretations address these issues satisfactorily.

It isn’t so surprising I’m with Weinberg on this because despite attending Greiner’s lectures, I never liked Greiner’s textbooks. That we students were more or less forced to buy them didn’t make them any more likable. So I scraped together my Deutsche Marks and bought Weinberg’s textbooks, which I loved for the concise mathematical approach.

I learned both general relativity and quantum field theory from Weinberg’s textbooks. I also later bought Weinberg’s lectures on Quantum Mechanics, which appeared in 2013, but I haven’t actually read them, except for section 3.7, where he concludes that:
“[T]oday there is no interpretation of quantum mechanics that does not have serious flaws, and [we] ought to take seriously the possibility of finding some more satisfactory other theory, to which quantum mechanics is merely a good approximation.”
It’s not much of a secret that I’m a fan of non-local hidden variables (aka superdeterminism), which I believe to be experimentally testable. To my huge frustration, however, I haven’t been able to find an experimental group willing and able to do such a test. I am therefore happy that Weinberg emphasizes the need to find a better theory, and to also look for experimental evidence. I don’t know what he thinks of superdeterminism. But whether it’s superdeterminism or something else, I think probing quantum mechanics in new regimes is the best shot we presently have at making progress on the foundations of physics.

I therefore don’t understand the ridicule aimed at those who think that quantum mechanics needs an overhaul. Being unintuitive and feeling weird doesn’t make a theory wrong – we can all agree on this. We don’t even have to agree it’s unintuitive – I actually don’t think so. Intuition comes with use. Even if you can’t stomach the math, you can build your quantum intuition, for example, by playing “Quantum Moves,” a video game that crowd-sources players’ solutions to quantum mechanical optimization problems. Interestingly, humans do better than algorithms (at least for now).

[Weinberg (left), getting some kind of prize or title. Don’t know for what. Image: CASW]
So, yeah, maybe quantum physics isn’t weird. And even if it is, being weird doesn’t make it wrong. Maybe you therefore don’t think it’s a promising research avenue to pursue – fine, then don’t. But before you make jokes about physicists who rely on their intuition, let us be clear that being ugly doesn’t make a theory wrong either. And yet it’s presently entirely acceptable to develop new theories with the sole aim of prettifying the existing ones.

I don’t think for example that numerological coincidences are problems worth thinking about – they’re questions of aesthetic appeal. The mass of the Higgs is much smaller than the Planck mass. So what? The spatial curvature of the universe is almost zero, the cosmological constant tiny, and the electric dipole moment of the neutron is for all we know absent. Why should that bother me? If you think that’s a mathematical inconsistency, think again – it’s not. There’s no logical reason for why that shouldn’t be so. It’s just that to our human sense it doesn’t quite feel right.
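For scale, the offending ratio is

$\frac{m_H}{M_P} \approx \frac{125\ \mathrm{GeV}}{1.2 \times 10^{19}\ \mathrm{GeV}} \approx 10^{-17}.$

A small number, sure. But nothing in the math forbids small numbers.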

A huge amount of work has gone into curing these “problems” because fine-tuned constants aren’t thought of as beautiful. But in my eyes the cures are all worse than the disease: The solutions usually require the introduction of additional fields, plus potentials for these fields. Personally, I think it’s much preferable to just have a constant – is there any axiom simpler than that?

The difference between the two research areas is that there are tens of thousands of theorists trying to make the fundamental laws of nature less ugly, but only a few hundred working on making them less weird. That in and of itself is reason to shift focus to quantum foundations, just because it’s the path less trodden, with more left to explore.

But maybe I’m just old beyond my years. So I’ll shut up now and go back to my calculations.

Monday, October 31, 2016

Modified Gravity vs Particle Dark Matter. The Plot Thickens.

They sit in caves, deep underground. Surrounded by lead, protected from noise, shielded from the warmth of the Sun, they wait. They wait for weakly interacting massive particles – WIMPs for short – the elusive stuff that many physicists believe makes up 80% of the matter in the universe. They have been waiting for 30 years, but the detectors haven’t caught a single WIMP.

Even though the sensitivity of dark matter detectors has improved by more than five orders of magnitude since the early 1980s, all results so far are compatible with zero events. The searches for axions, another popular dark matter candidate, haven’t fared any better. Coming generations of dark matter experiments will cross into the regime where the neutrino background becomes comparable to the expected signal. But, as a colleague recently pointed out to me, this merely means that the experimentalists have to understand the background better.

Maybe in 100 years they’ll still sit in caves, deep underground. And wait.

Meanwhile, others are running out of patience. Particle dark matter is a great explanation for all the cosmological observations that general relativity sourced by normal matter cannot explain. But maybe it isn’t right after all. The alternative to keeping general relativity and adding particles is to modify general relativity so that space-time curves differently in response to the matter we already know.

Already in the early 1980s, Mordehai Milgrom showed that modifying gravity has the potential to explain observations commonly attributed to particle dark matter. He proposed Modified Newtonian Dynamics – MOND for short – to explain the galactic rotation curves instead of adding particle dark matter. Intriguingly, MOND, despite having only one free parameter, fits a large number of galaxies. It doesn’t work well for galaxy clusters, but its success with galaxies clearly shows that many of them are similar in very distinct ways – ways that the concordance model (also known as LambdaCDM) hasn’t been able to account for.
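For the record, the one free parameter is an acceleration scale, a_0 ≈ 1.2 × 10^-10 m/s^2. At accelerations far below a_0, MOND amounts to replacing Newton’s law by

$a = \sqrt{a_N\, a_0},$

which for circular orbits (a = v^2/r, a_N = GM/r^2) gives v^4 = GMa_0: flat rotation curves, with the Tully-Fisher relation thrown in for free.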

In its simplest form the concordance model has sources which are collectively described as homogeneous throughout the universe – an approximation known as the cosmological principle. In this form, the concordance model doesn’t predict how galaxies rotate – it merely describes the dynamics on supergalactic scales.

To get galaxies right, physicists also have to take into account astrophysical processes within the galaxies: how stars form, which stars form, where they form, how they interact with the gas, how long they live, when and how they go supernova, what magnetic fields permeate the galaxies, how the fields affect the intergalactic medium, and so on. It’s a mess, and it requires intricate numerical simulations to figure out just how galaxies come to look the way they do.

And so, physicists today are divided in two camps. In the larger camp are those who think that the observed galactic regularities will eventually be accounted for by the concordance model. It’s just that it’s a complicated question that needs to be answered with numerical simulations, and the current simulations aren’t good enough. In the smaller camp are those who think there’s no way these regularities will be accounted for by the concordance model, and modified gravity is the way to go.

In a recent paper, McGaugh et al reported a correlation among the rotation curves of 153 observed galaxies. They plotted the gravitational pull from the visible matter in the galaxies (gbar) against the gravitational pull inferred from the observations (gobs), and found that the two are closely related.

Figure from arXiv:1609.05917 [astro-ph.GA] 

This correlation – the mass-discrepancy-acceleration relation (MDAR) – is, so they emphasize, not itself new; it’s just a new way to present previously known correlations. As they write in the paper:
“[This Figure] combines and generalizes four well-established properties of rotating galaxies: flat rotation curves in the outer parts of spiral galaxies; the “conspiracy” that spiral rotation curves show no indication of the transition from the baryon-dominated inner regions to the outer parts that are dark matter-dominated in the standard model; the Tully-Fisher relation between the outer velocity and the inner stellar mass, later generalized to the stellar plus atomic hydrogen mass; and the relation between the central surface brightness of galaxies and their inner rotation curve gradient.”
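If you want to play with the correlation yourself: McGaugh et al fit it with a function that has a single parameter, an acceleration scale they call g†. Here’s a minimal sketch in Python (the value of g† is taken from the paper; the code is mine, not theirs):

import numpy as np

# Fitting function for the radial acceleration relation,
# following McGaugh et al, arXiv:1609.05917.
G_DAGGER = 1.2e-10  # fitted acceleration scale, in m/s^2

def g_obs(g_bar):
    # Observed acceleration as a function of the baryonic one.
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / G_DAGGER)))

# At high accelerations, g_obs approaches g_bar (no mass discrepancy):
print(g_obs(1e-8) / 1e-8)  # close to 1
# At low accelerations, g_obs approaches sqrt(g_bar * G_DAGGER),
# which is the deep-MOND behavior:
print(g_obs(1e-13), np.sqrt(1e-13 * G_DAGGER))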
But this was only act 1.

In act 2, another group of researchers responds to the McGaugh et al paper. They present results of a numerical simulation for galaxy formation and claim that particle dark matter can account for the MDAR. The end of MOND, so they think, is near.

Figure from arXiv:1610.06183 [astro-ph.GA]

McGaugh, hero of act 1, points out that the sample size for this simulation is tiny and that, moreover, the galaxies were pre-selected to reproduce those we observe. Hence, he thinks the results are inconclusive.

In act 3, Mordehai Milgrom – the original inventor of MOND – posts a comment on the arXiv. He also complains about the sample size of the numerical simulation, and further explains that there is much more to MOND than the MDAR correlation. Numerical simulations with particle dark matter have been developed to fit observations, he writes, so it’s not surprising they now fit observations.

“The simulation in question attempt to treat very complicated, haphazard, and unknowable events and processes taking place during the formation and evolution histories of these galaxies. The crucial baryonic processes, in particular, are impossible to tackle by actual, true-to-nature, simulation. So they are represented in the simulations by various effective prescriptions, which have many controls and parameters, and which leave much freedom to adjust the outcome of these simulations [...]

The exact strategies involved are practically impossible to pinpoint by an outsider, and they probably differ among simulations. But, one will not be amiss to suppose that over the years, the many available handles have been turned so as to get galaxies as close as possible to observed ones.”
In act 4, another paper with results of a numerical simulation for galaxy structures with particle dark matter appears.

This one uses a code with the acronym EAGLE, for Evolution and Assembly of GaLaxies and their Environments. This code has “quite a few” parameters, as Aaron Ludlow, the paper’s first author, told me, and these parameters have been optimized to reproduce realistic galaxies. In this simulation, however, the authors didn’t use the optimized parameter configuration, but instead let several parameters (3-4) vary to produce a larger set of galaxies. These galaxies in general do not look like those we observe. Nevertheless, the researchers find that all their galaxies display the MDAR correlation.

This would indicate that particle dark matter is enough to describe the observations.

Figure from arXiv:1610.07663 [astro-ph.GA] 

However, even when varying some parameters, the EAGLE code still contains parameters that have been fixed previously to reproduce observations. Ludlow calls them “subgrid parameters,” meaning they quantify physics on scales smaller than what the simulation can presently resolve. One sees, for example, in Figure 1 of their paper (shown below) that all those galaxies already have a pronounced correlation between the velocity of the outer stars (Vmax) and the stellar mass (M*).
Figure from arXiv:1610.07663 [astro-ph.GA]
Note that the plotted quantities are correlated in all data sets, though the offsets differ somewhat.

One shouldn’t hold this against the model. Such numerical simulations are done for the purpose of generating and understanding realistic galaxies. Runs are time-consuming and costly. From the point of view of an astrophysicist, the question of just how unrealistic galaxies can get in these simulations is entirely nonsensical. And yet that’s exactly what the modified-gravity vs dark matter showdown now asks for.

In act 5, John Moffat shows that his modified gravity – a general relativistic completion of MOND – reproduces the MDAR correlation, but also predicts a distinct deviation for the outermost stars of galaxies.

Figure from arXiv:1610.06909 [astro-ph.GA] 
The green curve is the prediction from modified gravity.

The crucial question here is, I think, which correlations are independent of each other. I don’t know. But I’m sure there will be further acts in this drama.

Sunday, October 23, 2016

The concordance model strikes back

Two weeks ago, I summarized a recent paper by McGaugh et al who reported a correlation in galactic structures. The researchers studied a data-set with the rotation curves of 153 galaxies and showed that the gravitational acceleration inferred from the rotational velocity (including dark matter), gobs, is strongly correlated with the gravitational acceleration from the normal matter (stars and gas), gbar.
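For the definitions: with V(r) the measured rotational velocity at radius r,

$g_{\rm obs}(r) = \frac{V(r)^2}{r}, \qquad g_{\rm bar}(r) \approx \frac{G\, M_{\rm bar}(<r)}{r^2},$

where M_bar(<r) is the mass in stars and gas within radius r. (The spherical expression for gbar is my shorthand; the paper computes it from the observed baryonic distribution.)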

Figure from arXiv:1609.05917 [astro-ph.GA] 

This isn’t actually new data or a new correlation, but a new way to look at correlations in previously available data.

The authors of the paper were very careful not to jump to conclusions from their results, but merely stated that this correlation requires some explanation. That galactic rotation curves have surprising regularities, however, has been evidence in favor of modified gravity for two decades, so the implication was clear: Here is something that the concordance model might have trouble explaining.

As I remarked in my previous blogpost, while the correlation does seem to be strong, it would be good to see the results of a simulation with the concordance model, which describes dark matter, as usual, as a pressureless, cold fluid. In this case, too, one would expect there to be some relation: Normal matter forms galaxies in the gravitational potentials previously created by dark matter, so the two components should have some correlation with each other. The question is how much.

Just the other day, a new paper appeared on the arXiv which looked at exactly this. The authors of the new paper analyzed the result of a specific numerical simulation within the concordance model. And they find that the correlation in this simulated sample is actually stronger than the observed one!

Figure from arXiv:1610.06183 [astro-ph.GA]

Moreover, they also demonstrate that in the concordance model, the slope of the best-fit curve should depend on the galaxies’ redshift (z), ie the age of the galaxy. This would be a way to test which explanation is correct.

Figure from arXiv:1610.06183 [astro-ph.GA]

I am not familiar with the specific numerical code that the authors use, and hence I am not sure what to make of this. It’s been known for a long time that the concordance model has difficulties getting structures of galactic size right, especially galactic cores, and so it isn’t clear to me just how many parameters this model needs to work right. If the parameters were previously chosen so as to match observations, then this result is hardly surprising.

McGaugh, one of the authors of the first paper, has already offered some comments (ht Yves). He notes that the sample size of the galaxies in the simulation is small, which might at least partly account for the small scatter. He is also skeptical of the results: “It is true that a single model does something like this as a result of dissipative collapse. It is not true that an ensemble of such models are guaranteed to fall on the same relation.”

I am somewhat puzzled by this result because, as I mentioned above, the correlation in the McGaugh paper is based on previously known correlations, such as the brightness-velocity relation, which, to my knowledge, hadn’t been explained by the concordance model. So I would find it surprising should the results of the new paper hold up. I’m sure we’ll hear more about this in the near future.