Thursday, February 22, 2018

Shut up and simulate. (In which I try to understand how dark matter forms galaxies, and end up very confused.)

Galactic structures in the Illustris Simulation. [Image Source]
Most of the mass in the universe isn’t a type of matter we are familiar with. Instead, it’s a mysterious kind of “dark matter” that doesn’t emit or absorb light. It also interacts rarely both with itself and with normal matter, too rarely to have left any trace in our detectors.

We know dark matter is out there because we see its gravitational pull. Without dark matter, Einstein’s theory of general relativity does not predict a universe that looks like what we see; neither galaxies, nor galaxy clusters, nor galactic filaments come out right. At least that’s what I used to think.

But the large-scale structures we observe in the universe also don’t come out right with dark matter.

These are not calculations anyone can do with pen on paper, so almost all of this work is done with computer simulations. It’s tera-flopping, super-clustering, parallel computing that takes months even on the world’s best hardware. The outcome is achingly beautiful videos that show how initially homogeneous matter clumps under its own gravitational pull, slowly creating the sponge-like structures we see today.

Dark matter begins to clump first, then the normal matter follows the dark matter’s gravitational pull, forming dense clouds of gas, stars, and solar systems: The cradles of life.

Structure formation in the Magneticum simulation. Credit: Dolag et al. 2015
It is impressive work that simply wouldn’t have been possible two decades ago.

But the results of the computer simulations are problem-ridden, and have been since the very first ones. The clumping matter, it turns out, creates too many small “dwarf” galaxies. Also, the distribution of dark matter inside the galaxies is too strongly peaked towards the middle, a problem known as the “cusp problem.”

The simulations also leave some observations unexplained, such as an empirically well-established relation between the brightness of a galaxy and the velocity of its outermost stars, known as the Tully-Fisher relation. And these are just the problems that I understand well enough to mention.

It’s not something I used to worry about. Frankly I’ve been rather uninterested in the whole thing because for all I know dark matter is just another particle and really I don’t care much what it’s called.

Whenever I spoke to an astrophysicist about the shortcomings of the computer simulations, they told me that mismatches with the data are to be expected. That’s because the simulations don’t yet properly take into account the – often complicated – physics of normal matter, such as the pressure generated when stars go supernova, the dynamics of interstellar gas, or the accretion and ejection of matter by the supermassive black holes which sit at the centers of most galaxies.

Fair enough, I thought. Something with supernovae and so on that creates pressure and prevents the density peaks in the centers of galaxies. Sounds plausible. These “feedback” processes, as they are called, must be highly efficient to fit the data and make use of almost 100% of the supernova energy. This doesn’t seem realistic. But then again, astrophysicists aren’t known for high-precision data. When the universe is your lab, error margins tend to be large. So maybe “almost 100%” in the end turns out to be more like 30%. I could live with that.

Eta Carinae. An almost supernova.
Image Source: NASA

Then I learned about the curious case of low surface brightness galaxies. I learned that from Stacy McGaugh who blogs next door. How I learned about that is a story by itself.

The first time someone sent me a link to Stacy’s blog, I read one sentence and closed the window right away. Some modified gravity guy, I thought. And modified gravity, you must know, is the crazy alternative to dark matter. The idea is that rather than adding dark matter to the universe, you fiddle with Einstein’s theory of gravity. And who in their right mind messes with Einstein.

The second time someone sent me a link to Stacy’s blog it came with the remark I might have something in common with the modified gravity dude. I wasn’t flattered. Also I didn’t bother clicking on the link.

The third time I heard of Stacy it was because I had a conversation with my husband about low surface brightness galaxies. Yes, I know, not the most romantic topic for a dinner conversation, but things happen when you marry a physicist. Turned out my dear husband clearly knew more about the subject than I did. And when prompted for the source of his wisdom, he referred me to none other than Stacy-the-modified-gravity-dude.

So I had another look at that guy’s blog.

Upon closer inspection it became apparent Stacy isn’t a modified gravity dude. He isn’t even a theorist. He’s an observational astrophysicist somewhere in the US North-East who has become, rather unwillingly, a lone fighter for modified gravity. Not because he advocates a particular theory, but because he has his thumb on the pulse of incoming data.

I am not much of an astrophysicist and understand like 5% of what Stacy writes on his blog. There are so many words I can’t parse. Is it low-surface brightness galaxy or low surface-brightness galaxy? And what’s the surface of a galaxy anyway? If there are finite size galaxies, does that mean there are also infinite size galaxies? What the heck is a UFD? And what do NFW, ISM, RAR, and EFE stand for?* And why do astrophysicists use so many acronyms that you can’t tell a galaxy from an experiment? Questions over questions.

Though I barely understood what the man was saying, it was also clear why other people thought I may have something in common with him. Even if you don’t have a clue what he’s on about, frustration pours out of his writing. That’s a guy shouting at a scientific community to stop deluding themselves. A guy whose criticism is totally and utterly ignored while everybody goes on doing what they’ve been doing for decades, never mind that it doesn’t work. Oh yes, I know that feeling.

Still, I had no particular reason to look at the galactic literature and reassess which party is the crazier one, modified gravity or particle dark matter. I merely piped Stacy’s blog into my feed just for the occasional amusement. It took yet another guy to finally make me look at this.

I get a lot of requests from students. Not because I am such a famous physicist, I am afraid, but just because I am easy to find. So far I have deterred these students by pointing out that I have no money to pay them and that my own contract will likely run out before they have even graduated. But last year I was confronted with a student who was entirely unperturbed by my bleak vision of the future. He simply moved to Frankfurt and one day showed up in my office to announce he was here to work with me. On modified gravity, of all things.

So now that I carry responsibility for somebody else’s career, I thought, I should at least get an opinion on the matter of dark matter.

That’s why I finally looked at a bunch of papers from different simulations for galaxy formation. I had the rather modest goal of trying to find out how many parameters they use, which of the simulations fare best in terms of explaining the most with the least input, and how those simulations compare to what you can do with modified gravity. I still don’t know. I don’t think anyone knows.

But after looking at a dozen or so papers the problem Stacy is going on about became apparent. These papers typically start with a brief survey of other, previous, simulations, none of which got the structures right, all of which have been adapted over and over and over again to produce results that fit better to observations. It screams “epicycles” directly into your face.

Now, there isn’t anything scientifically wrong with this procedure. It’s all well and fine to adapt a model so that it describes what you observe. But this way you’ll not get a model that has much predictive power; you will just extract fitting parameters from the data. And it is highly implausible that after twenty or so years of fiddling with the details of computer simulations, what comes out should be a universal relation. It doesn’t add up. It doesn’t make sense. It gives me cognitive dissonance.

And then there are the low surface brightness galaxies. These are interesting because 30 years ago they were thought not to exist. They do exist though, they are just difficult to see. And they spelled trouble for dark matter, it’s just that no one wants to admit it.

Low surface brightness galaxies are basically dilute types of galaxies, so that there is less brightness per surface area, hence the name. If you believe that dark matter is a type of particle, then you’d naively expect these galaxies to not obey the Tully-Fisher relation. That’s because if you stretch out the matter in a galaxy, then the orbital velocity of the outermost stars should decrease while the total luminosity doesn’t, hence the relation between them should change.
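To make the naive expectation a little more explicit, here is the back-of-the-envelope version (a rough Newtonian sketch of the argument, not anything taken from the simulation papers):

\[
v^2(r) \;\approx\; \frac{G\,M(<r)}{r}\,,
\qquad
L \;\propto\; v^4 \quad \text{(Tully-Fisher, roughly)}.
\]

Spreading the same stars over a larger radius lowers the orbital velocity at the outskirts but leaves the total luminosity unchanged, so the proportionality between brightness and velocity should come out different for the dilute galaxies.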

But the data don’t comply. The low surface brightness things, they obey the very same Tully-Fisher relation as all the other galaxies. This came as a surprise to the dark matter community. It did not come as a surprise to Mordehai Milgrom, the inventor of modified Newtonian dynamics, who had predicted this in 1983, long before there was any data.

You’d think this would have counted as strong evidence for modified gravity. But it barely made a difference. What happened instead is that the dark matter models were adapted.

You can explain the observations of low surface brightness galaxies with dark matter, but it comes at a cost. To make it work, you have to readjust the amount of dark matter relative to normal matter. The lower the surface-brightness, the higher the fraction of dark matter in a galaxy.

And you must be good at your adjustment to match just the right ratio, because that is fixed by the Tully-Fisher relation. And then you have to come up with a dynamical process for ridding your galaxies of normal matter to get the right ratio. And you have to get the same ratio pretty much regardless of how the galaxies formed, whether they formed directly or through mergers of smaller galaxies.

The stellar feedback is supposed to do it. Apparently it works. Since I have nothing to do with the computer simulations for galaxy structures, the codes are black boxes to me. I have little doubt that it works. But how much fiddling and tuning is necessary to make it work, I cannot tell.

My attempts to find out just how many parameters the computer simulations use were not very successful. It is not information that you readily find in the papers, which is odd enough. Isn’t this the most relevant piece of information you’d want to have about the simulations? One person I contacted referred me to someone else, who referred me to a paper which didn’t contain the list I was looking for. When I asked again, I got no response. On another attempt, my question of how many parameters there are in a simulation was answered with “in general, quite a few.”

But I did eventually get a straight reply from Volker Springel. In the Illustris Simulation, he told me, there are 10 physically relevant parameters, in addition to the 8 cosmological parameters. (That’s not counting the parameters necessary to initialize the simulation, like the resolution and so on.) I assume the other simulations have comparable numbers. That’s not so many. Indeed, that’s not bad at all, given how many different galaxy types there are!

Still, you have to compare this to Milgrom’s prediction from modified gravity. He needs one parameter. One. And out falls a relation that computer simulations haven’t been able to explain for twenty years.
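For comparison, here is the schematic way the relation falls out of Milgrom’s modified Newtonian dynamics, with the acceleration scale a_0 as the single new parameter (a textbook-level sketch of the low-acceleration limit, not a quote of anyone’s paper):

\[
a \;=\; \sqrt{a_0\, g_N} \;=\; \sqrt{\frac{a_0\, G M}{r^2}}\,,
\qquad
\frac{v^2}{r} \;=\; a
\;\;\Longrightarrow\;\;
v^4 \;=\; a_0\, G M .
\]

The radius drops out, so the asymptotic velocity depends only on the (baryonic) mass, no matter how dilute the galaxy is, which is just the behavior the low surface brightness galaxies display.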

And even if the simulations did get the right result, would that count as an explanation?

From the outside, it looks much like dark magic.


* ultra faint dwarfs, Navarro-Frenk-White, interstellar medium, radial acceleration relation, external field effect

Thursday, February 15, 2018

What does it mean for string theory that the LHC has not seen supersymmetric particles?



The LHC data so far have not revealed any evidence for supersymmetric particles, or any other new particles. For all we know at present, the standard model of particle physics suffices to explain observations.

There is some chance that better statistics which come with more data will reveal some less obvious signal, so the game isn’t yet over. But it’s not looking good for susy and her friends.
Simulated signal of black hole
production and decay at the LHC.
[Credits: CERN/ATLAS]

What are the consequences? The consequences for supersymmetry itself are few. The reason is that supersymmetry by itself is not a very predictive theory.

To begin with, there are various versions of supersymmetry. But more importantly, the theory doesn’t tell us what the masses of the supersymmetric particles are. We know they must be too heavy for us to have observed them already, but that’s it. There is nothing in supersymmetric extensions of the standard model which prevents theorists from raising the masses of the supersymmetric partners until they are out of the reach of the LHC.

This is also the reason why the no-show of supersymmetry has no consequences for string theory. String theory requires supersymmetry, but it makes no requirements about the masses of supersymmetric particles either.

Yes, I know the headlines said the LHC would probe string theory, and the LHC would probe supersymmetry. The headlines were wrong. I am sorry they lied to you.

But the LHC, despite not finding supersymmetry or extra dimensions or black holes or unparticles or what have you, has taught us an important lesson. That’s because it is now clear that the Higgs mass is not “natural”, in contrast to all the other particle masses in the standard model. For the mass to be natural means, roughly speaking, that getting it from a calculation should not require the input of finely tuned numbers.
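Schematically, and only as a textbook-level illustration of what “finely tuned” means here, the observed Higgs mass comes out as the combination of a bare parameter and quantum corrections that are sensitive to whatever high scale \(\Lambda\) new physics sets in at:

\[
m_H^2 \;=\; m_{\rm bare}^2 \;+\; \delta m^2\,,
\qquad
\delta m^2 \;\sim\; \frac{\lambda^2}{16\pi^2}\,\Lambda^2 .
\]

If \(\Lambda\) lies far above the electroweak scale, the two terms on the right have to cancel to many decimal places to leave behind the comparably tiny observed value, and it is this delicate cancellation that naturalness arguments object to.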

The idea that the Higgs-mass should be natural is why many particle physicists were confident the LHC would see something beyond the Higgs. This didn’t happen, so the present state of affairs forces them to rethink their methods. There are those who cling to naturalness, hoping it might still be correct, just in a more difficult form. Some are willing to throw it out and replace it instead with appealing to random chance in a multiverse. But most just don’t know what to do.

Personally I hope they’ll finally come around and see that they have tried for several decades to solve a problem that doesn’t exist. There is nothing wrong with the mass of the Higgs. What’s wrong with the standard model is the missing connection to gravity and a Landau pole.

Be that as it may, the community of theoretical particle physicists is currently in a phase of rethinking. There are of course those who already argue a next larger collider is needed because supersymmetry is just around the corner. But the main impression that I get when looking at recent publications is a state of confusion.

Fresh ideas are needed. The next years, I am sure, will be interesting.



I explain all about supersymmetry, string theory, the problem with the Higgs-mass, naturalness, the multiverse, and what they have to do with each other in my upcoming book “Lost in Math.”

Monday, February 12, 2018

Book Update: First Review!

The final proofs are done and review copies have been sent out. One of the happy recipients, Emmanuel Rayner, read the book within two days, and so we now have a first review on Goodreads. That’s not counting the two-star review by someone who, I am very sure, hasn’t read the book, because he “reviewed” it before there were review copies. Tells you all you need to know about online ratings.

The German publisher, Fischer, is still waiting for the final manuscript which has not yet left the US publisher’s rear end. Fischer wants to get started on the translation so that the German edition appears in early fall, only a few months later than the US edition.

Since I get this question a lot: no, I will not translate the book myself. To begin with, it seemed like a rather stupid thing to do, agreeing to translate an 80k-word manuscript if someone else can do it instead. Maybe more importantly, my German writing is miserable, owing to a grammar reform which struck the country the year after I had moved overseas, and which therefore entirely passed me by. Add to this that the German spell-check on my laptop isn’t working (it’s complicated), that I have an English keyboard, hence no umlauts, and also, did I mention I didn’t have to do it in the first place?

Problems start with the title. “Lost in Math” doesn’t translate well to German, so the Fischer people are searching for a new title. They have been searching for two months, for all I can tell. I imagine them randomly opening pages of a dictionary, looking for inspiration.

Meanwhile, they have recruited a photographer and scheduled an appointment for me to have headshots taken. Because in Germany you leave nothing to chance. So next week I’ll be photographed.

In other news, at the end of February I will give a talk at a workshop on “Naturalness, Hierarchy, and Fine Tuning” in Aachen, and I have agreed to give a seminar in Heidelberg at the end of April, both of which will be more or less about the topic of the book. So stop by if you are interested and in the area.

And do not forget to preorder a copy if you haven’t yet done so!

Wednesday, February 07, 2018

Which problems make good research problems?

mini-problem [answer here]
Scientists solve problems; that’s their job. But which problems are promising topics of research? This is the question I set out to answer in Lost in Math at least concerning the foundations of physics.

A first, rough, classification of research problems can be made using Thomas Kuhn’s cycle of scientific theories. Kuhn’s cycle consists of a phase of “normal science” followed by “crisis” leading to a paradigm change, after which a new phase of “normal science” begins. This grossly oversimplifies reality, but it will be good enough for what follows.

Normal Problems

During the phase of normal science, research questions usually can be phrased as “How do we measure this?” (for the experimentalists) or “How do we calculate this?” (for the theorists).

The Kuhn Cycle.
[Img Src: thwink.org]
In the foundations of physics, we have a lot of these “normal problems.” For the experimentalists it’s because the low-hanging fruits have been picked and measuring anything new becomes increasingly challenging. For the theorists it’s because in physics predictions don’t just fall out of hypotheses. We often need many steps of argumentation and lengthy calculations to derive quantitative consequences from a theory’s premises.

A good example of a normal problem in the foundations of physics is cold dark matter. The hypothesis is easy enough: There’s some cold, dark stuff in the cosmos that behaves like a fluid and interacts weakly both with itself and with other matter. But that by itself isn’t a useful prediction. A concrete research problem would instead be: “What is the effect of cold dark matter on the temperature fluctuations of the cosmic microwave background?” And then the experimental question “How can we measure this?”
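If you want to see what such a normal problem looks like in practice: the effect of the cold dark matter density on the CMB temperature spectrum can nowadays be computed with publicly available Boltzmann codes. The sketch below assumes the Python interface of the CAMB code (parameter names as in its documentation) and is meant purely as an illustration of the workflow, not as anything the groups mentioned here actually run:

import camb

# Set up a standard cosmology; omch2 is the cold dark matter density
# (Omega_c h^2), ombh2 the baryon density (Omega_b h^2).
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.120)
pars.InitPower.set_params(As=2.1e-9, ns=0.965)
pars.set_for_lmax(2500, lens_potential_accuracy=0)

# Solve the background and perturbation equations, then read off the
# temperature power spectrum (the TT column of the 'total' spectra).
results = camb.get_results(pars)
cl_tt = results.get_cmb_power_spectra(pars, CMB_unit='muK')['total'][:, 0]

# Re-running with a different omch2 shifts the heights and positions of
# the acoustic peaks; that shift is the imprint of cold dark matter.
print(cl_tt[:10])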

Other problems of this type in the foundations of physics are “What is the gravitational contribution to the magnetic moment of the muon?,” or “What is the photon background for proton scattering at the Large Hadron Collider?”

Answering such normal problems expands our understanding of existing theories. These are calculations that can be done within the frameworks we have, but the calculations can be challenging.

The examples in the previous paragraphs are solved problems, or at least problems that we know how to solve, though you can always ask for higher precision. But we also have unsolved problems in this category.

The quantum theory of the strong nuclear force, for example, should largely predict the masses of particles that are composed of several quarks, like neutrons, protons, and other similar (but unstable) composites. Such calculations, however, are hideously difficult. They are today made by use of sophisticated computer code – “lattice calculations” – and even so the predictions aren’t all that great. A related question is how nuclear matter behaves in the cores of neutron stars.

These are but some randomly picked examples for the many open questions in physics that are “normal problems,” believed to be answerable with the theories we know already, but I think they serve to illustrate the case.

Looking beyond the foundations, we have normal problems like predicting the solar cycle and solar weather – difficult because the system is highly nonlinear and partly turbulent, but nothing that we expect to be in conflict with existing theories. Then there is high-temperature superconductivity, a well-studied but theoretically not well-understood phenomenon, due to the lack of quasi-particles in such materials. And so on.

So these are the problems we study when business goes as normal. But then there are problems that can potentially change paradigms, problems that signal a “crisis” in the Kuhnian terminology.

Crisis Problems

The obvious crisis problems are observations that cannot be explained with the known theories.

I do not count most of the observations attributed to dark matter and dark energy as crisis problems. That’s because most of this data can be explained well enough by just adding two new contributions to the universe’s energy budget. You will undoubtedly complain that this does not give us a microscopic description, but there’s no data for the microscopic structure either, so no problem to pinpoint.

But some dark matter observations really are “crisis problems.” These are unexplained correlations, regularities in galaxies that are hard to come by with cold dark matter, such as the Tully-Fisher relation or the strange ability of dark matter to seemingly track the distribution of matter. There is as yet no satisfactory explanation for these observations using the known theories. Modifying gravity successfully explains some of it, but that brings other problems. So here is a crisis! And it’s a good crisis, I dare say, because we have data and that data is getting better by the day.

This isn’t the only good observational crisis problem we presently have in the foundations of physics. One of the oldest ones, but still alive and kicking, is the magnetic moment of the muon. Here we have a long-standing mismatch between theoretical prediction and measurement that has still not been resolved. Many theorists take this as an indication that this cannot be explained with the standard model and a new, better, theory is needed.

A couple more such problems exist, or maybe I should say persist. The DAMA measurements for example. DAMA is an experiment that searches for dark matter. They have been getting a signal of unknown origin with an annual modulation, and have kept track of it for more than a decade. The signal is clearly there, but if it was dark matter that would conflict with other experimental results. So DAMA sees something, but no one knows what it is.

There is also the still-perplexing LSND data on neutrino oscillation that doesn’t want to agree with any other global parameter fit. Then there is the strange discrepancy in the measurement results for the proton radius using two different methods, and a similar story for the lifetime of the neutron. And there are the recent tensions in the measurement of the Hubble rate using different methods, which may or may not be something to worry about.

Of course each of these data anomalies might have a “normal” explanation in the end. It could be a systematic measurement error or a mistake in a calculation or an overlooked additional contribution. But maybe, just maybe, there’s more to it.

So that’s one type of “crisis problem” – a conflict between theory and observations. But besides these there is an utterly different type of crisis problem, which is entirely on the side of theory-development. These are problems of internal consistency.

A problem of internal consistency occurs if you have a theory that predicts conflicting, ambiguous, or just nonsensical observations. A typical example of this would be probabilities that become larger than one, which is inconsistent with a probabilistic interpretation. Indeed, this problem was the reason physicists were very certain the LHC would see some new physics. They couldn’t know it would be the Higgs, and it could have been something else – like an unexpected change to the weak nuclear force – but the Higgs it was. It was restoring internal consistency that led to this successful prediction.
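The standard example behind this statement, quoted here only schematically from textbook arguments rather than from any particular paper, is the scattering of longitudinally polarized W-bosons. Without a Higgs, or something doing its job, the scattering amplitude grows with the collision energy,

\[
\mathcal{A}(W_L W_L \to W_L W_L) \;\sim\; \frac{E^2}{v^2}\,,
\qquad v \simeq 246~\text{GeV},
\]

so the would-be probabilities exceed one somewhere around the TeV scale. Something had to show up below that scale to cut off the growth, and that something turned out to be the Higgs.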

Historically, studying problems of consistency has led to many stunning breakthroughs.

The “UV catastrophe” in which a thermal source emits an infinite amount of light at small wavelength is such a problem. Clearly that’s not consistent with a meaningful physical theory in which observable quantities should be finite. (Note, though, that this is a conflict with an assumption. Mathematically there is nothing wrong with infinity.) Planck solved this problem, and the solution eventually led to the development of quantum mechanics.
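In formulas, and just to show the type of inconsistency: the classical Rayleigh-Jeans expression for the spectral energy density of thermal radiation grows without bound at high frequencies, while Planck’s expression tames the divergence,

\[
u_{\rm RJ}(\nu,T) \;=\; \frac{8\pi\nu^2}{c^3}\,k_B T\,,
\qquad
u_{\rm Planck}(\nu,T) \;=\; \frac{8\pi h\nu^3}{c^3}\,\frac{1}{e^{h\nu/k_B T}-1}\,,
\]

so the total energy obtained by integrating the classical expression over all frequencies is infinite, which is the catastrophe.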

Another famous problem of consistency is that Newtonian mechanics was not compatible with the space-time symmetries of electrodynamics. Einstein resolved this disagreement, and got special relativity. Dirac later resolved the contradiction between quantum mechanics and special relativity which, eventually, gave rise to quantum field theory. Einstein further removed contradictions between special relativity and Newtonian gravity, getting general relativity.

All these have been well-defined, concrete, problems.

But most theoretical problems in the foundations of physics today are not of this sort. Yes, it would be nice if the three forces of the standard model could be unified into one. It would be nice, but it’s not necessary for consistency. Yes, it would be nice if the universe were supersymmetric. But it’s not necessary for consistency. Yes, it would be nice if we could explain why the Higgs mass is not technically natural. But it’s not inconsistent if the Higgs mass is just what it is.

It is well documented that Einstein and even more so Dirac were guided by the beauty of their theories. Dirac in particular was fond of praising the use of mathematical elegance in theory-development. Their personal motivation, however, is only of secondary interest. In hindsight, the reason they succeeded was that they were working on good problems to begin with.

There are only a few real theory-problems in the foundations of physics today, but they do exist. One is the missing quantization of gravity. Just lumping the standard model together with general relativity doesn’t work mathematically, and we don’t know how to do it properly.

Another serious problem with the standard model alone is the Landau pole in one of the coupling constants. That means that the strength of one of the forces becomes infinitely large. This is non-physical for the same reason the UV catastrophe was, so something must happen there. This problem has received little attention because most theorists presently believe that the standard model becomes unified long before the Landau pole is reached, making the extrapolation redundant.
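For the record, the statement about the Landau pole can be made with the standard one-loop running of a coupling with positive beta-function, shown here only schematically:

\[
\alpha(\mu) \;=\; \frac{\alpha(\mu_0)}{1-\frac{b}{2\pi}\,\alpha(\mu_0)\ln(\mu/\mu_0)}\,,
\qquad
\alpha(\mu) \to \infty
\;\;\text{at}\;\;
\mu_{\rm Landau} = \mu_0\, e^{\,2\pi/\left(b\,\alpha(\mu_0)\right)} .
\]

The pole sits at a finite, though absurdly high, energy.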

And then there are some cases in which it’s not clear what type of problem we’re dealing with. The non-convergence of the perturbative expansion is one of these. Maybe it’s just a question of developing better math, or maybe there’s something we get really wrong about quantum field theory. The case is similar for Haag’s theorem. Also the measurement problem in quantum mechanics I find hard to classify. Appealing to a macroscopic process in the theory’s axioms isn’t compatible with the reductionist ideal, but then again that is not a fundamental problem, but a conceptual worry. So I’m torn about this one.

But as far as crisis problems in theory development are concerned, the lesson from the history of physics is clear: Problems are promising research topics if they really are problems, which means you must be able to formulate a mathematical disagreement. If, in contrast, the supposed problem is that you simply do not like a particular aspect of a theory, chances are you will just waste your time.



Homework assignment: Convince yourself that the mini-problem shown in the top image is mathematically ill-posed unless you appeal to Occam’s razor.