Sunday, March 24, 2019

Superfluid Dark Matter [Video]

I am at home with a cold, and so I finally got around to finishing this video on superfluid dark matter, which has been sitting on my hard disk for a few months.


This is a sequel to my earlier two videos about Dark Matter and Modified Gravity.

For captions, click CC in the toolbar. I will add German captions in the next few days. Now I need a few ibuprofen.

Saturday, March 23, 2019

Just Move (I’ve been singing again)

I have spent the last few weekends shouting at a foam mat. It’s nothing personal. Foam and I, we normally get along just fine. It’s just that I was hoping to improve my singing technique. Also, shouting may come in handy on other occasions, you never know.

Alas, my shouting success was limited. It’s hard to project anger at a foam mat. But somewhere along the way I seem to have spontaneously developed a head-voice vibrato, though I’m not sure how. Probably a sign that my head is becoming increasingly hollow.

Besides that, I have a new pre-amplifier which works better than the previous one, but has a markedly different noise pattern that I have yet to get used to. If my voice sounds different, that’s probably why. That, or the hollow head.

(Soundcloud version here.)

Wednesday, March 20, 2019

Science has a problem. Here is how you can help.

[I have gotten numerous requests by people who want to share Appendix C of my book. The content is copyrighted, of course, but my publisher kindly agreed that I can make it publicly available. You may use this text for non-commercial purposes, so long as you add the copyright disclaimer, see bottom of post.]

Both bottom-up and top-down measures are necessary to improve the current situation. This is an interdisciplinary problem whose solution requires input from the sociology of science, philosophy, psychology, and – most importantly – the practicing scientists themselves. Details differ by research area. One size does not fit all. Here is what you can do to help.

As a scientist:
  • Learn about social and cognitive biases: Become aware of what they are and under which circumstances they are likely to occur. Tell your colleagues.
  • Prevent social and cognitive biases: If you organize conferences, encourage speakers to not only list motivations but also shortcomings. Don’t forget to discuss “known problems.” Invite researchers from competing programs. If you review papers, make sure open questions are adequately mentioned and discussed. Flag marketing as scientifically inadequate. Don’t discount research just because it’s not presented excitingly enough or because few people work on it.
  • Beware the influence of media and social networks: What you read and what your friends talk about affects your interests. Be careful what you let into your head. If you consider a topic for future research, factor in that you might have been influenced by how often you have heard others speak about it positively.
  • Build a culture of criticism: Ignoring bad ideas doesn’t make them go away, they will still eat up funding. Read other researchers’ work and make your criticism publicly available. Don’t chide colleagues for criticizing others or think of them as unproductive or aggressive. Killing ideas is a necessary part of science. Think of it as community service.
  • Say no: If a policy affects your objectivity, for example because it makes continued funding dependent on the popularity of your research results, point out that it interferes with good scientific conduct and should be amended. If your university praises its productivity by paper counts and you feel that this promotes quantity over quality, say that you disapprove of such statements.
As a higher ed administrator, science policy maker, journal editor, or representative of a funding body:
  • Do your own thing: Don’t export decisions to others. Don’t judge scientists by how many grants they won or how popular their research is – these are judgements by others who themselves relied on others. Make up your own mind, carry responsibility. If you must use measures, create your own. Better still, ask scientists to come up with their own measures.
  • Use clear guidelines: If you have to rely on external reviewers, formulate recommendations for how to counteract biases to the extent possible. Reviewers should not base their judgment on the popularity of a research area or the person. If a reviewer’s continued funding depends on the well-being of a certain research area, they have a conflict of interest and should not review papers in their own area. That will be a problem because this conflict of interest is presently everywhere. See the next three points for ways to alleviate it.
  • Make commitments: You have to get over the idea that all science can be done by postdocs on 2-year fellowships. Tenure was institutionalized for a reason and that reason is still valid. If that means fewer people, then so be it. You can either produce loads of papers that nobody will care about 10 years from now, or you can be the seed of ideas that will still be talked about in 1000 years. Take your pick. Short-term funding means short-term thinking.
  • Encourage a change of field: Scientists have a natural tendency to stick to what they know already. If the promise of a research area declines, they need a way to get out, otherwise you’ll end up investing money into dying fields. Therefore, offer reeducation support, 1-2 year grants that allow scientists to learn the basics of a new field and to establish contacts. During that period they should not be expected to produce papers or give conference talks.
  • Hire full-time reviewers: Create safe positions for scientists specialized in providing objective reviews in certain fields. These reviewers should not themselves work in the field and have no personal incentive to take sides. Try to reach agreements with other institutions on the number of such positions.
  • Support the publication of criticism and negative results: Criticism of other people’s work or negative results are presently underappreciated. But these contributions are absolutely essential for the scientific method to work. Find ways to encourage the publication of such communication, for example by dedicated special issues.
  • Offer courses on social and cognitive biases: This should be mandatory for anybody who works in academic research. We are part of communities and we have to learn about the associated pitfalls. Sit together with people from the social sciences, psychology, and the philosophy of science, and come up with proposals for lectures on the topic.
  • Allow a division of labor by specialization in task: Nobody is good at everything, so don’t expect scientists to be. Some are good reviewers, some are good mentors, some are good leaders, and some are skilled at science communication. Allow them to shine in what they’re good at and make best use of it, but don’t require the person who spends their evenings in student Q&A to also bring in loads of grant money. Offer them specific titles, degrees, or honors.
As a science writer or member of the public, ask questions:
  • You’re used to asking about conflicts of interest due to funding from industry. But you should also ask about conflicts of interest due to short-term grants or employment. Does the scientists’ future funding depend on producing the results they just told you about?
  • Likewise, you should ask if the scientists’ chance of continuing their research depends on their work being popular among their colleagues. Does their present position offer adequate protection from peer pressure?
  • And finally, just as you are used to scrutinizing statistics, you should also ask whether the scientists have taken measures to address their cognitive biases. Have they provided a balanced account of pros and cons or have they just advertised their own research?
You will find that for almost all research in the foundations of physics the answer to at least one of these questions is no. This means you can’t trust these scientists’ conclusions. Sad but true.


Reprinted from Lost In Math by Sabine Hossenfelder. Copyright © 2018. Available from Basic Books, an imprint of Perseus Books, a division of PBG Publishing, LLC, a subsidiary of Hachette Book Group, Inc.

Saturday, March 16, 2019

Particle physicists continue to spread misinformation about prospects of new collider

Physics Today has published an essay by Gordon Kane about “The collider question.”

Gordon Kane is Professor of Physics at the University of Michigan. He is well known in the field, both for his research and his engagement in science communication. Kane has written several well-received popular science books about particle physics in general, and supersymmetry in particular.

In his new essay for Physics Today, Kane mentions “economic considerations” and the possibility of spin-offs in favor of a larger collider. But these are arguments that could be made about any experiment of similar size.

His key point is that a next larger collider is needed to answer some of the currently open big questions in the foundations of physics:
“For our next colliders the goal is to provide data for a more comprehensive theory, hopefully one that incorporates dark matter, quantum gravity, and neutrino masses and solves the hierarchy problem. But what does that mean in practice?”
He claims that:
“It’s been known since the 1980s that a mathematically consistent quantum theory of gravity has to be formulated in 9 or 10 spatial dimensions.”
This statement is wrong. It is known that string theory requires additional dimensions of space, but physicists do not presently know that string theory is the correct theory for quantum gravity. They also have several other, mathematically consistent, approaches to quantum gravity that do not require additional dimensions, such as asymptotically safe gravity, or loop quantum gravity.

Kane then refers to an earlier article he wrote about his own models for Physics Today and claims:
“They predict or describe the Higgs boson mass. We can now study the masses that new particles have in such models to get guidance for what colliders to build.”
Note the odd phrase “predict or describe the Higgs boson mass.” The story here is that in 2011, a few days before the CERN collaborations released the first results of the Higgs measurement, Kane and collaborators published a paper claiming they could predict the correct Higgs mass. Kane later wrote a Comment about this for Nature Magazine. All particle physicists I ever spoke with about Kane’s prediction suspect it was informed by rumors about the Higgs mass and then, consciously or unconsciously, backward constructed.

In his Physics Today essay, Kane then goes on to write that his models “generically have some observable superpartners with masses between about 1500 GeV and 5000 GeV” and argues that:
“Such theoretical work provides quantitative predictions to help set goals for collider construction, similar to how theorists helped zero in on the mass of the Higgs boson.”
Gordon Kane has made predictions for the appearance of new particles at colliders for 20+ years. Every time an experiment fails to see those new particles, he adjusts the masses so that the theory is still compatible with the data. For references, please check out Will Kinney’s recent Twitter thread.

Among particle physicists, Kane is somewhat exceptional for his public presence and the boldness of his assertions. But his method of making predictions is typical practice in the field. Indeed, by the current standard in particle physics, his research is of high quality. Kane’s models are well-motivated by beautiful ideas and, together with his collaborators, he has amassed a lot of impressive-looking equations, not to mention citations.

This does not change the fact that those predictions are worthless.

Allow me an analogy. Forget for a moment that we are talking about particle physics, and think of climate science. That’s the stuff with global warming and melting ice sheets and so on, I’m sure you’ve heard. Now imagine that those models could predict literally any possible future trend. That would be pretty useless predictions, wouldn’t you agree? It wouldn’t be much of a science, really. It would be pretty ridiculous, indeed.

Well, that’s how predictions for new particles currently work. The methods of theory-development used by particle physicists can predict anything and everything.

You do not have to take my word for it, you only have to look at this paper about “ambulance chasing”. Ambulance chasing is the practice among particle physicists of cooking up models to explain statistical fluctuations that they hope will turn out to be real particles. With the currently accepted methods of model-building, they can produce hundreds of explanations within months, regardless of whether the signal was actually real.

You do not even need to understand a single word written in those papers to see that one cannot trust predictions made this way.

I don’t want to pick on Kane too much, because he just does what he has learned, and he does an excellent job at this. But Gordon Kane is to particle physics what Amy Cuddy is to psychology: A very public example of scientific methodology gone badly wrong.

As with Cuddy, the blame is not on Kane in particular; the blame is on a community which is not correcting a methodology they all know is not working. The difference is that while psychologists have recognized the problems in their community and have taken steps towards improvement, particle physicists still refuse to acknowledge that their field even has a problem.

The methods of theory-development used in particle physics to predict new physics are not proper science. This research must be discontinued. And it should certainly not be used to argue we need a next larger collider.



Correction, March 18: I have been informed that Physics Today is not the membership magazine of the American Physical Society, it is just that members of the American Physical Society receive the magazine. I therefore rewrote the first sentence.

Thursday, March 14, 2019

Particle physicists excited over discovery of nothing in particular

Logo of the Moriond meeting.
“Rencontres de Moriond” is one of the most important annual conferences in particle physics. This year’s meeting will start in two days, on March 16th. Usually, experimental collaborations try to have at least preliminary results to present at the conference, so we have an interesting couple of weeks ahead.

The collaboration of the ATLAS experiment at CERN has already released a few results from the searches for “exotic” particles with last year’s run 2 data. So far, they have seen nothing new. More results will likely appear online soon. 

One of the key questions to be addressed by the new data analysis is whether the “lepton flavor anomalies” persist. These anomalies are several small differences between rates of processes that, according to the standard model, should be identical. Separately, each deviation from the standard model has a low statistical significance, not exceeding 3 σ. However, in 2017 a group of particle physicists claimed that the combined significance exceeds 5 σ.

You should take such combined analyses with several grains of salt. Choosing some parts of the data while disregarding others makes the conclusion unreliable. This does not mean the result is wrong, just that it’s impossible to know if it is a real effect or a statistical fluctuation. Really this question can only be resolved with more data. CMS, another one of the LHC experiments, recently tested a specific explanation for the anomaly but found nothing.
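To see how a combination can clear 5 σ even though no single deviation does, here is a toy calculation (a minimal sketch with made-up numbers; real combinations must also account for correlations between measurements, which this ignores):

    # Toy example: individually weak, independent deviations can
    # combine to a large significance (Stouffer's method).
    # The z-scores below are hypothetical, NOT the actual anomaly data.
    from math import sqrt

    z_scores = [2.6, 2.3, 2.5, 2.4]  # each deviation below 3 sigma

    # For n independent measurements: Z = (z_1 + ... + z_n) / sqrt(n)
    z_combined = sum(z_scores) / sqrt(len(z_scores))
    print(f"combined: {z_combined:.1f} sigma")  # prints: combined: 4.9 sigma

This is also where the grains of salt come in: if the inputs were selected from a much larger set of measured observables precisely because they deviate, the look-elsewhere effect eats up much of the combined significance.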

Meanwhile it must have dawned on particle physicists that the non-discovery of fundamentally new particles besides the Higgs is a problem for their field, and especially for the prospects of financing the bigger collider they want. For two decades they told the public that the LHC would help answer some “big questions,” for example by finding dark matter or supersymmetric particles, as illustrated by this LHC outreach website:


Screenshot of the LHC Outreach website.


However, the predictions for new particles besides the Higgs were all wrong. And now, rather than owning up to their mistakes, particle physicists want you to think it’s exciting they have found neither dark matter, nor extra dimensions, nor supersymmetry, nor anything else that is not in the standard model. In a recent online article at Scientific American, James Beacham is quoted saying:
“We’re right on the cusp of a revolution but we don’t really know where that revolution is going to be coming from. It’s so exciting and enticing. I would argue there’s never been a better time to be a particle physicist.”
The particle physicist Jon Butterworth says likewise:
“It’s more exciting and more uncertain now than I think it’s ever been in my career.”
And Nima Arkani-Hamed, in an interview with the CERN Courier begins his answer to the question “How do you view the status of particle physics?” with:
“There has never been a better time to be a physicist.”
The logic here seems to be this: First, mass-produce empty predictions to raise the impression that a costly experiment will answer some big questions. Then, if the experiment fails to answer those questions, proclaim how exciting it is that your predictions were wrong. Finally, explain that you need money for a larger experiment to answer those big questions.

The most remarkable thing about this is that they actually seem to think this will work.

Needless to say, if the analysis of the recent data reveals a signal of new effects, then the next collider will be built for sure. If nothing new shows up, then particle physicists can either continue to excitedly deny anything went wrong, or realize they have to act against hype and group-think in their community. The next weeks will be interesting.

Saturday, March 09, 2019

Motte and Bailey, Particle Physics Style

“Motte and bailey” is a rhetorical maneuver in which someone switches between an argument that does not support their conclusion but is easy to defend (the “motte”), and an argument that supports their conclusion but is hard to defend (the “bailey”). The purpose of this switch is to trick the listener into believing that the easy-to-defend argument suffices to support the conclusion.

This rhetorical trick is omnipresent in arguments that particle physicists currently make for building the next larger collider.

There are good arguments to build a larger collider, but those don’t justify the investment. These arguments are that it would measure the properties of known particles to higher precision and keep particle physicists occupied. Also, we could just look and see if we find something new. That’s the motte.

Then there is an argument which would justify the investment, but this is not based on sound reasoning. This argument is that a next larger collider would lead to progress in the foundations of physics, for example by finding new symmetries or solving the riddle of dark matter. This argument is indefensible because there is no reason to think the next larger collider would help answer those questions. That’s the bailey.

This maneuver is particularly amusing if you have both people who make the indefensible argument and others who insist no one makes it. In a recent interview with the CERN Courier, for example, Nima Arkani-Hamed says:
“Nobody who is making the case for future colliders is invoking, as a driving motivation, supersymmetry, extra dimensions…”
While his colleague, Lisa Randall, has defended the investment into the next larger collider by arguing:
“New dimensions or underlying structures might exist, but we won’t know unless we explore.”
I don’t think that particle physicists are consciously aware of what they are doing. Really, I get the impression they just throw around whatever arguments come to their mind and hope the other side doesn’t have a response. Most unfortunately, this tactic often works, just because there are few people competent enough to understand particle physicists’ arguments and also willing to point out when they go wrong.

For this reason I want to give you an explicit example of how motte and bailey is employed by particle physicists to make their case. I do this in the hope that it will help others notice when they encounter this flawed argument.

The example I will use is a recent interview I did for a podcast with the Guardian. The narrator is Ian Sample. Also on the show is particle physicist Brian Foster. I don’t personally know Foster and never spoke with him before. You can listen to the whole thing here, but I have transcribed the relevant parts below. (Please let me know in case I misheard something.)

At around 10:30 min the following exchange takes place.

Ian: “Are there particular things that physicists would like to look for, actual sort-of targets like the Higgs, that could be named like the Higgs?”

Brian: “The Higgs is really, I think, at the moment the thing that we are particularly interested in because it is the new particle on the block. And we know very little about it so far. And that will give us hopefully clues as to where to look for new phenomena beyond the standard model. Because the thing is that we know there must be physics beyond the standard model. If for no other reason than, as you mention, there’s very strong evidence that there is dark matter in the universe and that dark matter must be made of particles of some sort. We have no candidate for those particles at the moment.”

I then explain that this argument does not work because there is no reason to think the next larger collider would find dark matter particles, that, in fact, we are not even sure dark matter is made of particles.

After some more talk about the various proposals for new colliders that are currently on the table, the discussion returns to the question of what justifies the investment. At about 24:06 you can hear:

Ian: “Sabine, you’ve had a fair bit of flak for some of your criticisms for the FCC, haven’t you, from within the community?”

Sabine: “Sure, true, but I did expect it. Fact is, we have no reason to think that a next larger particle collider will actually tell us anything new about the fundamental laws of nature. There’s certainly some constants that you can always measure better, you can always say, well, I want to measure more precisely what the Higgs is doing, or how that particle decays, and so on and so forth. But if you want to make progress in our understanding of the foundations of physics that’s just not currently a promising thing to invest in. And I don’t think that’s so terribly controversial, but a lot of particle physicists clearly did not like me saying this publicly.”

Brian: “I beg to differ, I think it is very controversial, and I think it’s wrong, as I’ve tried to say several times. I mean the way in which you can make progress in particle physics is by making these precision measurements. You know very well that quantum mechanics is such that if you can make very high precision measurements that can tell you a lot of things about much higher energies than what you can reach in the laboratory. So that’s the purpose of doing very high precision physics at the LHC, it’s not like stamp collecting. You are trying to make measurements which will be sufficiently precise that they will give you a very strong indication of where there will be new physics at high energies.”

(Only tangentially relevant, but note that I was talking about the foundations of physics, whereas Brian’s reply is about progress in particle physics in particular.)

Sabine: “I totally agree with that. The more precisely you measure, the more sensitive you are to the high energy contributions. But still there is no good reason right now to think that there is anything to find, is what I’m saying.”

Brian: “But that’s not true. I mean, it’s quite clear, as you said yourself, that the standard model is incomplete. Therefore, if we can measure the absolutely outstanding particle in the standard model, the Higgs boson, which is completely unique, to very high precision, then the chances are very strong that we will find some indication for what this physics beyond the standard model is.”

Sabine: “So exactly what physics beyond the standard model are you referring to there?”

Brian: “I have no idea. That’s why I want to do the measurement.”

I then explain why there is no reason to think that the next larger collider will find evidence of new physical effects. I do this by pointing out that the only reliable indications we have for new physics merely tell us something new has to appear, at the latest, at energies that are still about a billion times higher than what even the next larger collider could reach.

At this point Brian stops claiming the chances are “very strong” that a bigger machine would find something new, and switches to the just-look-argument:

Brian: “Look, it’s a grave mistake to be too strongly lead by theoretical models [...]”

The just-look-argument is, of course, all well and fine. But, as I have pointed out many times before, the same just-look-argument can be made for any other new experiment in the foundations of physics. It therefore does not explain why a larger particle collider in particular is a good investment. Indeed, the opposite is the case: There are less costly experiments for which we have good reasons, such as measuring more precisely the properties of dark matter or probing the weak-field regime of quantum gravity.

When I debunk the just-look-argument, a lot of particle physicists then bring up the no-zero-sum-argument. I just did another podcast a few days ago where the no-zero-sum-argument played a big role and if that appears online, I’ll comment on that in more detail.

The real tragedy is that there is absolutely no learning curve in this exchange. Doesn’t matter how often I point out that particle physicists’ arguments don’t hold water, they’ll still repeat them.

(Completely irrelevant aside: This is the first time I have heard a recording made in my basement studio next to other recordings. I am pleased to note all the effort I put into getting good sound quality paid off.)

Friday, March 08, 2019

Inflation: Status Update

Model of inflation. [Image source: umich.edu]
The universe hasn’t always been this way. That the cosmos as a whole evolves, rather than being eternally unchanging, is without doubt one of the most remarkable scientific insights of the past century. It follows from Einstein’s theory of general relativity, which tells us that the universe must expand. As a consequence, matter in the early universe must have been compressed to high density.

But if you follow the equations back in time, general relativity eventually stops working. Therefore, no one presently knows how the universe began. Indeed, we may never know.

Since the days of Einstein, physicists have made much progress detailing the history of the universe. But the deeper they try to peer into our past, the more difficult their task becomes.

This difficulty arises partly because new data are harder and harder to come by. The dense matter in the early universe blocked light, so we cannot use light to look back to any time earlier than the formation of the cosmic microwave background. For even earlier times, we can make indirect inferences, or hope for new messengers, like gravitational waves or neutrinos. This is technologically and mathematically challenging, but these are challenges that can be overcome, at least in principle. (Says the theorist.)

The more serious difficulty is conceptual. When studying the universe as a whole, physicists face the limits of the scientific method: The further back in time they look, the simpler their explanations become. At some point, then, there will be nothing left to simplify, and so there will be no way to improve their explanations. The question isn’t whether this will happen, the question is when it will happen.

The miserable status of today’s theories for the early universe makes me wonder whether it has already happened. Cosmologists have hundreds of theories, and many of those theories come in several variants. It’s not quite as bad as in particle physics, but the situation is similar in that cosmologists, too, produce loads of ill-motivated models for no reason other than that they can get them published. (And they insist this is good scientific practice. Don’t get me started.)

The currently most popular theory for the early universe is called “inflation”. According to inflation, the universe once underwent a phase in which volumes of space increased exponentially in time. This rapid expansion then stopped in an event called “reheating,” at which the particles of the standard model were produced. After this, particle physics continues the familiar way.

Inflation was originally invented to solve several finetuning problems. (I wrote about this previously, and don’t want to repeat it all over again, so if you are not familiar with the story, please check out this earlier post.) Y’all know that I think finetuning arguments are a waste of time, so naturally I think these motivations for inflation are no good. However, just because the original reason for the idea of inflation doesn’t make sense doesn’t mean the theory is wrong.

Ever since the Planck satellite’s results in 2013, it hasn’t looked good for inflation. After the results appeared, Anna Ijjas, Paul Steinhardt, and Avi Loeb argued in a series of papers that the models of inflation which are compatible with the data themselves require finetuning, and therefore bring back the problem they were meant to solve. They popularized their argument in a 2017 article in Scientific American, provocatively titled “Pop Goes the Universe.”

The current models of inflation work not simply by assuming that the universe did undergo a phase of exponential inflation, but they moreover introduce a new field – the “inflaton” – that supposedly caused this rapid expansion. For this to work, it is not sufficient to just postulate the existence of this field, the field also must have a suitable potential. This potential is basically a function (of the field) and typically requires several parameters to be specified.

Most of the papers published on inflation are then exercises in relating this inflaton potential to today’s cosmological observables, such as the properties of the cosmic microwave background.

Now, in the past week, two long papers about all those inflationary models appeared on the arXiv.

The first paper, by Jerome Martin alone, is a general overview of the idea of inflation. It is well-written and a good introduction, but if you are familiar with the topic, there is nothing new to see here.

The second paper is more technical. It is a thorough re-analysis of the issue of finetuning in inflationary models and a response to the earlier papers by Ijjas, Steinhardt, and Loeb. The main claim of the new paper is that the argument by Ijjas et al., that inflation is “in trouble,” is wrong because it confuses two different types of models, the “plateau models” and the “hilltop models” (referring to different types of the inflaton potential).

According to the new analysis, the models most favored by the data are the plateau models, which do not suffer from finetuning problems, whereas the hilltop models do (in general) suffer from finetuning but are not favored by the data anyway. Hence, they conclude, inflation is doing just fine.
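To make the distinction concrete, here are two textbook examples of the potential types at issue (standard forms from the inflation literature, not specific to the new paper’s analysis):

$$V(\phi) = \Lambda^4 \left(1 - e^{-\sqrt{2/3}\,\phi/M_{\rm Pl}}\right)^2 \quad \text{(plateau, Starobinsky-type)}$$

$$V(\phi) = \Lambda^4 \left(1 - \frac{\phi^p}{\mu^p}\right) \quad \text{(hilltop)}$$

The plateau potential flattens out at large field values, while the hilltop potential requires the field to start near the maximum at $\phi = 0$, which is where the finetuning worry enters.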

The rest of the paper analyses different aspects of finetuning in inflation (such as quantum contributions to the potential), and discusses further problems with inflation, such as the trans-Planckian problem and the measurement problem (as pertaining to cosmological perturbations). It is a very balanced assessment of the situation.

The paper uses standard methods of analysis (Bayesian statistics), but I find this type of model-evaluation generally inconclusive. The problem with such analyses is that they do not take into account the prior probability for the models themselves but only for the initial values and the parameters of the model. Therefore, the results tend to favor models which shove unlikeliness from the initial condition into the model (e.g., the type of function for the potential).
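For readers who want to see where the gap sits in the formulas: the analysis scores each model $M$ by its Bayesian evidence, while the prior for the model itself enters as a separate factor (this is standard Bayesian model comparison, written out here only to make the point explicit):

$$P(D|M) = \int P(D|\theta, M)\, P(\theta|M)\, \mathrm{d}\theta\,, \qquad \frac{P(M_1|D)}{P(M_2|D)} = \frac{P(D|M_1)}{P(D|M_2)} \cdot \frac{P(M_1)}{P(M_2)}$$

The integral marginalizes over the priors $P(\theta|M)$ for initial values and parameters, which is what such papers compute. The last factor, the ratio of model priors $P(M_1)/P(M_2)$, has to be set by hand – and that is exactly where the unlikeliness of the chosen potential would have to enter.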

This is most obvious when it comes to the so-called “curvature problem,” or the question why the universe today is spatially almost flat. You can get this outcome without inflation, but it requires you to start with an exponentially small value of the curvature already (curvature density, to be precise). If you only look at the initial conditions, then that strongly favors inflation.

But of course inflation works by postulating an exponential suppression that comes from the dynamical law. And not only this, it furthermore introduces a field which is, strictly speaking, unnecessary to get the exponential expansion. I therefore do not buy into the conclusion that inflation is the better explanation. Quite the contrary, it adds unnecessary structure.
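The scaling makes this explicit. The deviation from spatial flatness evolves as (standard cosmology, nothing model-specific assumed):

$$|\Omega - 1| = \frac{|k|}{a^2 H^2} \;\propto\; \begin{cases} a^2 & \text{radiation-dominated expansion} \\ e^{-2N} & \text{inflation, after } N \text{ e-folds} \end{cases}$$

Without inflation, you must put the exponentially small curvature in by hand as an initial condition; with inflation, the suppression – roughly $e^{-120} \approx 10^{-52}$ for the commonly quoted 60 e-folds – comes from the dynamical law instead. Either way, an exponential sits somewhere in the explanation.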

This is not to say that I think inflation is a bad idea. It’s just that I think cosmologists are focusing on the wrong aspects of the model. Finetuning arguments will forever remain ambiguous because they eventually depend on unjustifiable assumptions. What’s the probability for getting any particular inflaton potential to begin with? Well, if you use the most common measure on the space of all possible functions, then all so-far considered potentials have probability zero. This type of reasoning just does not lead anywhere. So why waste time talking about finetuning?

Instead, let us talk about those predictions whose explanatory value does not depend on finetuning arguments, of which I suspect (but do not know) that the TE-correlations in the CMB power spectrum are an example. Since finetuning debates will remain unsolvable, it would be more fruitful to focus on those benefits of inflation that can be quantified unambiguously.

In any case, I am sure the new paper will make many cosmologists happy, and encourage them to invent many more models for inflation. Sigh.

Tuesday, March 05, 2019

Merchants of Hype

Once upon a time, the task of scientists was to understand nature. “Merchants of Light,” Francis Bacon called them. They were a community of knowledge-seekers who subjected hypotheses to experimental test, using what we now simply call “the scientific method.” Understanding nature, so the idea, would both satisfy human curiosity and better our lives.

Today, the task of scientists is no longer to understand nature. Instead, their task is to uphold an illusion of progress by wrapping incremental advances in false promise. Merchants they still are, all right. But now their job is not to bring enlightenment; it is to bring excitement.

Nowhere is this more obvious than with big science initiatives. Quantum computing, personalized medicine, artificial intelligence, simulated brains, mega-scale particle colliders, and everything nano and neuro: While all those fields have a hard scientific core that justifies some investment, the big bulk is empty headlines. Most of the money goes into producing papers whose only purpose is to create an appearance of relevance.

Sooner or later, those research-bubbles become unsustainable and burst. But with the current organization of research, more people brings more money brings more people. And so, the moment one bubble bursts, the next one is on the rise already.

The hype-cycle is self-sustaining: Scientists oversell the promise of their research and get funding. Higher education institutions take their share and deliver press releases to the media. The media, since there’s money to make, produce headlines about breakthrough insights. Politicians are pleased about the impact, talk about international competitiveness, and keep the money flowing.

Trouble is, the supposed breakthroughs rarely lead to tangible progress. Where are our quantum computers? Where are our custom cancer cures? Where are the nano-bots? And why do we still not know what dark matter is made of?

Most scientists are well aware their research floats on empty promise, but keep their mouths shut. I know this not just from my personal experience. I know this because it has been vividly, yet painfully, documented in a series of anonymous interviews with British and Australian scientists about their experience writing grant proposals. These interviews, conducted by Jennifer Chubb and Richard Watermeyer (published in Studies in Higher Education), made me weep:
“I will write my proposals which will have in the middle of them all this work, yeah but on the fringes will tell some untruths about what it might do because that’s the only way it’s going to get funded and you know I’ve got a job to do, and that’s the way I’ve got to do it. It’s a shame isn’t it?”
(UK, Professor)

“If you can find me a single academic who hasn’t had to bullshit or bluff or lie or embellish in order to get grants, then I will find you an academic who is in trouble with his Head of Department. If you don’t play the game, you don’t do well by your university. So anyone that’s so ethical that they won’t bend the rules in order to play the game is going to be in trouble, which is deplorable.”
(Australia, Professor)

“We’ll just find some way of disguising it, no we’ll come out of it alright, we always bloody do, it’s not that, it’s the moral tension it places people under.”
(UK, Professor)

“They’re just playing games – I mean, I think it’s a whole load of nonsense, you’re looking for short term impact and reward so you’re playing a game... it’s over inflated stuff.”
(Australia, Professor)

“Then I’ve got this bit that’s tacked on... That might be sexy enough to get funded but I don’t believe in my heart that there’s any correlation whatsoever... There’s a risk that you end up tacking bits on for fear of the agenda and expectations when it’s not really where your heart is and so the project probably won’t be as strong.”
(Australia, Professor)
In other interviews, the researchers referred to their proposals as “virtually meaningless,” “made up stories” or “charades.” They felt sorry for their own situation. And then justified their behavior by the need to get funding.

Worse, the above quotes only document the tip of the iceberg. That’s because the people who survive in the current system are the ones most likely to be okay with the situation. This may be because they genuinely believe their field is as promising as they make it sound, or because they manage to excuse their behavior to themselves. Either way, the present selection criteria in science favor skilled salesmanship over objectivity. Need I say that this is not a good way to understand nature?

The tragedy is not that this situation sucks, though, of course, it does. The tragedy is that it’s an obvious problem and yet no one does anything about it. If scientists can increase their chances to get funding by exaggeration, they will exaggerate. If they can increase their chances to get funding by being nice to their peers, they will be nice to their peers. If they can increase their chances to get funding by publishing on popular topics, they will publish on popular topics. You don’t have to be a genius to figure that out.

Tenure was supposed to remedy scientists’ conflict of interest between truth-seeking and economic survival. But tenure is now a rarity. Even the lucky ones who have it must continue to play nice, both to please their institution and keep the funding flowing. And honesty has become self-destructive. If you draw attention to shortcomings, if you debunk hype, if you question the promise of your own research area, you will be expelled from the community. A recent commenter on this blog summed it up like this:
“at least when I was in [high energy physics], it was taken for granted that anyone in academic [high energy physics] who was not a booster for more spending, especially bigger colliders, was a traitor to the field.”
If you doubt this, think about the following. I have laid out clearly why I do not think a bigger particle collider is currently a good investment. No one who understands the scientific and technological situation seriously disagrees with my argument; they merely disagree with the conclusions. This is fine with me. This is not the problem. I don’t expect everyone to agree with me.

But I also don’t expect everyone to disagree with me, and neither should you. So here is the puzzle: Why can you not find any expert, besides me, willing to publicly voice criticism on particle physics? Hint: It’s not because there is nothing to criticize.

And if you figured this one out, maybe you will understand why I say I cannot trust scientists any more. It’s a problem. It’s a problem in dire need of a solution.

This rant was, for once, not brought on by a particle physicist, but by someone who works in quantum computing. Someone who complained to me that scientists are overselling the potential of their research, especially when it comes to large investments. Someone distraught, frustrated, disillusioned, and most of all, unsure what to do.

I understand that many of you cannot break the ranks without putting your jobs at risk. I do not – and will not – expect you to sacrifice a career you worked hard for; no one would be helped by this. But I want to remind you that you didn’t become a scientist just to shut up and advocate.

Saturday, March 02, 2019

Check your Biases

[slide 8 of this presentation]
Physics World recently interviewed the current director of CERN, Fabiola Gianotti. When asked how particle physicists address group-think, Gianotti explains instead why some research avenues require large communities.

You would think that enough has been written about cognitive biases and logical fallacies that even particle physicists would have taken note, but at least the ones I deal with have no clue. If I ask them what measures they take to avoid cognitive biases when evaluating the promise of a research direction, they will either mention techniques to prevent biased data-analysis (a different thing entirely), or they will deny that they even have biases (thereby documenting the very problem whose existence they deny).

Here is a response I got from a particle physicist when I pointed out that Gianotti did not answer the question about group think:

[Screenshot of comment]
(This person then launched an ad-hominem attack at me and eventually deleted their comment. In the hope that this deletion documents some sliver of self-insight, I decided to remove identifying information.)

Here is another particle physicist commenting on the same topic, demonstrating just how much these scientists overrate their rationality:

[Screenshot of comment]
It is beyond me why scientists are still not required to have basic training in the sociology of science, cognitive biases, and decision making in groups. Such knowledge is necessary to properly evaluate information. Scientists cannot correctly judge the promise of research directions unless they are aware how their opinions are influenced by the groups they are part of.

It would be easy enough to set up online courses for this. If I had the funding, I would do it. Alas, I don’t. The only thing I can do, therefore, is to ask everyone – and especially those in leadership positions – to please take the problem seriously. Scientists are human. Leaving cognitive biases unchecked results in inefficient allocations of research funding, not to mention that it wastes time.

In all brevity, here are the basics.

What is a social bias, what is a cognitive bias?

A cognitive bias is a thinking shortcut that has developed through evolution. It can be beneficial in some situations, but in others it can result in incorrect judgement. A cognitive bias is similar to an optical illusion. Look at this example:
Example of optical illusion. A and B have the same color.
Click here if you don’t believe it. [Image source: Wikipedia]

The pixels in the squares A and B have the exact same color. However, to most people, square B looks lighter than A. That’s because there is a shadow over square B, so your brain factors in that the original color should have been lighter.

The conclusion that B is lighter, therefore, makes perfect sense in a naturally occurring situation. When asked to judge the color on your screen, however, you are likely to give a wrong answer if you are not aware of how your brain works.

Likewise, a cognitive bias happens if your brain factors in information that may be relevant in some situations but can lead to wrong results in others. A social bias, more specifically, is a type of cognitive bias that comes from the interaction with other people.

It is important to keep in mind that cognitive biases are not a sign of lacking intelligence. Everyone has cognitive biases and that’s nothing to be ashamed of. But if your job is to objectively evaluate information, you should be aware that the results of your evaluation are skewed by the way your brain functions.

Scientists, therefore, need to take measures to prevent cognitive biases the same way that they take measures to prevent biases in data analysis. The brain is yet another apparatus. Understanding how it operates is necessary to arrive at correct conclusions.

There are dozens of cognitive biases. I here merely list the ones that I think are most important for science:
  • Communal Reinforcement
    More commonly known as “group think,” communal reinforcement happens if members of a community constantly reassure each other that what they are doing is the right thing. It is typically accompanied by devaluing or ignoring outside opinions. You will often see it come along with arguments from popularity. Communal reinforcement is the major reason bad methodologies can become accepted practice in research communities.

  • Availability Cascades
    What we hear of repeatedly sounds more interesting, and we talk more about what is more interesting, which makes it sound even more interesting. This does make a lot of sense if you want to find out what important things are happening in your village. It does not make sense, however, if your job is, say, to decide what’s the most promising experiment to make progress in the foundations of physics. Availability cascades are a driving force in scientific fashion trends and can lead to over-inflated research bubbles with little promise.

  • Post-purchase Rationalization
    This is the tendency to tell ourselves and others that we have not made stupid decisions in the past, like, say, pouring billions of dollars into entirely fruitless research avenues. It is a big obstacle to learning from failure. This bias is amplified by our desire to avoid cognitive dissonance, that is, any threat to our self-image as rationally thinking individuals. Post-purchase rationalization is why no experiment in the history of science has ever been a bad investment.

  • Irrational Escalation
    Also known as the “sunk cost fallacy” or “throwing good money after bad.” Irrational Escalation is the argument that you cannot give up now because you have invested so much already. This is one of the main reasons why research agendas survive well beyond the point at which they stopped making sense, see supersymmetry, string theory, or searches for dark matter particles that become heavier and more weakly interacting every time they are not found.

  • Motivated Reasoning
    More commonly known as “wishful thinking,” motivated reasoning is the human tendency to give pep talks and then actually believe the rosy picture we painted ourselves. While usually well-intended, motivated reasoning can result in overly optimistic expectations and an insistence on holding onto irrational dreams. Surely particle physicists are just about to discover some new particle, the next round of experiments will find that dark matter candidate, etc.
There are practical measures you can implement to alleviate these biases, both in your institution and in your personal work-life. In the appendix of my book I list a few. But the most important step is that you acknowledge the existence of these biases whenever you evaluate information and at least try to correct your assessment.

The more people have told you that a crappy scientific method is okay, the more likely you are to believe it is okay. Keep that in mind next time a BSM phenomenologist tells you it is totally normal when a scientific discipline makes wrong predictions for 40 years.

The easiest way to see that particle physics has a big problem with cognitive biases is that members of this community deny they even have biases and refuse to do anything about it.

The topic of cognitive biases has been extensively covered elsewhere, and I see no use in repeating what others have said better. Google will give you all the information you need.

Wednesday, February 27, 2019

Book review: “Breakfast with Einstein” by Chad Orzel

Breakfast with Einstein: The Exotic Physics of Everyday Objects
By Chad Orzel
BenBella Books (December 11, 2018)

Physics is everywhere – that is the message of Chad Orzel’s new book “Breakfast with Einstein,” and he delivers it masterfully. In the serenity of an early morning, Orzel uses every-day examples to reveal the omnipresence of physics: The sunrise becomes an occasion to introduce nuclear physics, the beeping of an alarm a pathway to atomic clocks, a toaster leads to a discussion of zero-point energy.

Much of the book’s content is home turf for Orzel, whose research specializes in atomic, molecular, and optical physics. It shows: “Breakfast with Einstein” is full of applied science, from fibre-optic cables to semiconductors, data storage, lasers, smoke detectors, and tunneling microscopes. Orzel doesn’t only know what he writes about, he also knows what this knowledge is good for, and the reader benefits.

As in his earlier books, Orzel’s explanations are easy to follow without glossing over details. The illustrations aid the text, and his writing style is characteristically straightforward. While he does give the reader some historical context, Orzel keeps people-facts to an absolute minimum and instead sticks with the science.

In contrast to many recent books about physics, Orzel stays away from speculation and focuses instead on the many remarkable achievements that last century’s physics led to.

When it comes to popular science books, this one is as flawless as it gets. “Breakfast with Einstein,” I believe, will be understandable to anyone with an interest in the subject. I can warmly recommend it.

[Disclaimer: Free review copy]

Sunday, February 24, 2019

Away Note

I am in London for a few days and comments are likely to pile up in the queue more than usual. Pls play nicely while I'm away.

Saturday, February 23, 2019

Gian-Francesco Giudice On Future High-Energy Colliders

Gian-Francesco Giudice
[Image: Wikipedia]
Gian-Francesco Giudice is a particle physicist and currently Head of the Theoretical Physics Department at CERN. He is one of the people I interviewed for my book. This week, Giudice had a new paper on the arXiv titled “On Future High-Energy Colliders.” It appeared in the category “History and Philosophy of Physics” but contains little history and no philosophy. It is really an opinion piece.

The article begins with Giudice stating that “the most remarkable result [of the LHC measurements] was the discovery of a completely new type of force.” By this he means that the interaction with the Higgs-boson amounts to a force, and therefore the discovery of the Higgs can be interpreted as the discovery of a new force.

That the Higgs-boson exchanges a force is technically correct, but this terminology creates a risk of misunderstanding, so please allow me to clarify. In common terminology, the standard model describes three fundamental forces (stemming from the three gauge-symmetries): The electromagnetic force, the strong nuclear force, and the weak nuclear force. The LHC results have not required physicists to rethink this. The force associated with the Higgs-boson is not normally counted among the fundamental forces.
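For reference, the three gauge symmetries in question are the factors of the standard model group

$$SU(3)_c \times SU(2)_L \times U(1)_Y\,,$$

where $SU(3)_c$ gives the strong nuclear force and the electroweak factor $SU(2)_L \times U(1)_Y$, after symmetry breaking, gives the weak nuclear force and electromagnetism. The Higgs is central to that symmetry breaking, but the force mediated by Higgs-exchange itself is not one of these three.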

One can debate whether or not this is a new type of force. Higgs-like phenomena have been observed in condensed-matter physics for a long time. In any case, rebranding the Higgs as a force doesn’t change the fact that it was predicted in the 1960s and was the last missing piece in the standard model.

Giudice then lists reasons why particle physicists want to further explore high energy regimes. Let me go through these quickly to explain why they are bad motivations for a next larger collider (for more details see also my earlier post about good and bad problems):
  • “the pattern of quark and lepton masses and mixings”

    There is no reason to think a larger particle collider will tell us anything new about this. There isn’t even a reason to think those patterns have any deeper explanation.
  • “the dynamics generating neutrino masses”

    The neutrino masses are either of Majorana-type, which you test for with other experiments (looking for neutrinoless double-beta decay), or they are of Dirac-type, in which case there is no reason to think the (so-far missing) right-handed neutrinos have masses in the range accessible by the next larger collider. (See the seesaw estimate after this list.)
  • “Higgs naturalness”

    Arguments from naturalness were the reason why so many physicists believed the LHC should have seen fundamentally new particles besides the Higgs already (see here for references). Those predictions were all wrong. It’s about time that particle physicists learn from their mistakes.
  • “the origin of symmetry breaking dynamics”

    I am not sure what this refers to. If you know, pls leave a note in the comments.
  •  “the stability of the Higgs potential”

    A next larger collider would tell us more about the Higgs potential. But the question whether the potential is stable cannot be answered by this collider because the answer also depends on what happens at even higher energies.
  • “unification of forces, quantum gravity”

    Expected to become relevant at energies far exceeding that of the next larger collider.
  • “cosmological constant”

    Relevant on long distances and not something that high energy colliders test.
  • “the nature and origin of dark matter, dark energy, cosmic baryon asymmetry, inflation”

    No reason to think that a next larger collider will tell us anything about this.
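To put a rough number behind the neutrino point above: in the standard type-I seesaw (the Majorana case), the light neutrino masses relate to the right-handed mass scale $M_R$ as

$$m_\nu \,\sim\, \frac{(y\,v)^2}{M_R}\,,$$

where $v \approx 246\,{\rm GeV}$ is the electroweak scale and $y$ a Yukawa coupling. For $m_\nu \sim 0.05\,{\rm eV}$ and $y \sim 1$, this gives $M_R \sim 10^{15}\,{\rm GeV}$, some ten orders of magnitude beyond the reach of a 100 TeV collider. Only for very small Yukawa couplings does $M_R$ come down to accessible energies – and nothing in the data requires that.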
Giudice then goes on to argue that “the non-discovery of expected results can be as effective as the discovery of unexpected results in igniting momentous paradigm changes.” In support of this he refers to the Michelson-Morley experiment.

The Michelson-Morley experiment, however, is an unfortunate example to enlist in favor of a larger collider. To begin with, it is somewhat disputed among historians how relevant the Michelson-Morley experiment really was for Einstein’s formulation of Special Relativity, since you can derive his theory from Maxwell’s equations. More interesting for the case of building a larger collider, though, is to look at what happened after the null-result of Michelson and Morley.

What happened is that for some decades experimentalists built larger and larger interferometers looking for the aether, not finding any evidence for it. These experiments eventually grew too large and this line of research was discontinued. Then the Second World War interfered, and for a while scientific exploration stalled.

In the 1950s, due to rapid technological improvements, interferometers could be dramatically shrunk back in size and the search for the aether continued with smaller devices. Indeed, Michelson-Morley-like experiments are still made today. But the best constraints on deviations from Einstein’s theory now come from entirely different observations, notably from particles traveling over cosmologically long distances. The aether, needless to say, hasn’t been found.

There are two lessons to take away from this: (a) When experiments became too large and costly they paused until technological progress improved the return on investment. (b) Advances in entirely different research directions enabled better tests.

Back to high energy particle physics. There hasn’t been much progress in collider technology for decades. For this reason physicists still try to increase collision energies by digging longer tunnels. The cost of a next larger collider now exceeds $10 billion. We have no reason to think that this collider will tell us anything besides measuring details of the standard model to higher precision. This line of research should be discontinued until it becomes more cost-efficient again.

Giudice ends his essay by arguing that particle colliders are somehow exceptionally great experiments and therefore must be continued. He writes: “No other instrument or research programme can replace high-energy colliders in the search for the fundamental laws governing the universe.”

But look at the facts: The best constraints on grand unified theories come from searches for proton decay. Such searches entail closely monitoring large tanks of water. These are not high-energy experiments. You could maybe call them “large volume experiments”. Likewise, the tightest constraints on physics at high energies currently come from the ACME measurement of the electron’s electric dipole moment. This is a high-precision measurement at low energies. And our best current shot at finding evidence for quantum gravity comes from massive quantum oscillators. Again, that is not high energy physics.

Building larger colliders is not the only way forward in the foundations of physics. Particle physicists only seem to be able to think of reasons for a next larger particle collider and not of reasons against it. This is not a good way to evaluate the potential of such a large financial investment.

Thursday, February 21, 2019

Burton Richter on the Future of Particle Physics

Burton Richter.
[Image: NobelPrize.org]
The 1976 Nobel Prize was jointly awarded to Burton Richter and Samuel Ting for the discovery of the J/Psi particle. Sounds like yet-another-particle, I know, but this morsel of matter was a big step forward in the development of the standard model. Richter, sadly, passed away last summer.

Coincidentally, I recently came across a chapter Richter wrote in 2014 to introduce Volume 7 of the “Reviews of Accelerator Science and Technology.” It is titled “High Energy Colliding Beams; What Is Their Future?” and you can read the whole thing here. For your convenience I below quote some parts that are relevant to the current discussion of whether or not to build a next larger particle collider, specifically the 100 TeV pp-collider (FCC), planned by CERN.

Some of Burton’s remarks are rather specific, such as the required detector efficiency and the precision needed to measure the Higgs’ branching ratios. But he also comments on the problem that particle colliders deliver a diminishing return on investment:
“When I was much younger I was a fan of science fiction books. I have never forgotten the start of one, though I don’t remember the name of the book or its author. It began by saying that high-energy physics’ and optical astronomy’s instruments had gotten so expensive that the fields were no longer funded. That is something that we need to think about. Once before we were confronted with a cost curve that said we could never afford to go to very high energy, and colliding beams were invented and saved us from the fate given in my science fiction book. We really need to worry about that once more.

“If the cost of the next-generation proton collider is really linear with energy, I doubt that a 100-TeV machine will ever be funded, and the science fiction story of my youth will be the real story of our field [...]”
He points towards the lack of technological breakthroughs in accelerator design, which is the major reason why the current method of choice for higher collision energies is still digging longer tunnels. As I mentioned in my recent blogpost, there are two promising technologies on the horizon which could advance particle colliders: high-temperature superconductors and plasma wakefield acceleration. But neither of those is likely to become available within the next two decades.

On the superconductors, Burton writes:
“I see no well-focused R&D program looking to make the next generation of proton colliders more cost effective. I do not understand why there is as yet no program underway to try to develop lower cost, high-Tc superconducting magnets [...]”
About plasma wakefield acceleration he is both optimistic and pessimistic. Optimistic because amazing achievements have been made in this research program already, and pessimistic because he doesn’t see “a push to develop these technologies for use in real machines.”

Burton also comments on the increasingly troublesome disconnect between theorists and experimentalists in his community:
“A large fraction of the 100 TeV talk (wishes?) comes from the theoretical community which is disappointed at only finding the Higgs boson at LHC and is looking for something that will be real evidence for what is actually beyond the standard model [...]

“The usual back and forth between theory and experiment; sometimes one leading, sometimes the other leading; has stalled. The experiments and theory of the 1960s and 1970s gave us today’s Standard Model that I characterized earlier as a beautiful manuscript with some unfortunate Post-it notes stuck here and there with unanswered questions written on them. The last 40 years of effort has not removed even one of those Post-it notes. The accelerator builders and the experimenters have built ever bigger machines and detectors, while the theorists have kept inventing extensions to the model.

“There is a problem here that is new, caused by the ever-increasing mathematical complexity of today’s theory. When I received my PhD in the 1950s it was possible for an experimenter to know enough theory to do her/his own calculations and to understand much of what the theorists were doing, thereby being able to choose what was most important to work on. Today it is nearly impossible for an experimenter to do what many of yesterday’s experimenters could do, build apparatus while doing their own calculations on the significance of what they were working on. Nonetheless, it is necessary for experimenters and accelerator physicists to have some understanding of where theory is, and where it is going. Not to do so makes most of us nothing but technicians for the theorists.”
Indeed, I have wondered about this myself: Do experimentalists even understand what is going on in theory development? My impression has been that most of them regard the ideas of theorists with a mix of agnosticism and skepticism. They believe it doesn’t matter, so they never looked much into the theorists’ reasoning for why the LHC should see some fundamentally new physics besides the Higgs boson. But of course it does matter, as Burton points out, to understand the significance of what they are working on.

Burton also was no fan of naturalness, which he called an “empty concept,” and his judgement of current theory development in high energy particle physics was harsh: “Simply put, much of what currently passes as the most advanced theory looks to be more theological speculation, the development of models with no testable consequences, than it is the development of practical knowledge.”

A wise man; gone too soon.

Monday, February 18, 2019

Never again confuse Dark Matter with Dark Energy

Barnard 68 is a molecular cloud
that absorbs light. It is dark and made
of matter, but not made of dark matter.
[Image: Wikipedia]
Dark Matter 

Dark Matter is, as the name says, matter. But “matter” is not just physicists’ way of saying “stuff,” it’s a technical term. Matter behaves in a specific way: its energy density dilutes with the inverse volume. The energy density of radiation, in contrast, dilutes faster than the inverse volume, because the wavelengths of the radiation also stretch.

Generally, anything that has a non-negligible pressure will not behave in this particular way. Cosmologists therefore also say dark matter is “a pressureless fluid.” And, since I know it’s confusing, let me remind you that a fluid isn’t the same as a liquid, and gases can be fluids, so sometimes they may speak about “pressureless gas” instead.
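If you prefer to see this as a formula: for a cosmological component with constant equation of state w, the energy density scales with the scale factor a(t) as in the sketch below. This is a standard textbook relation, nothing specific to dark matter:

```latex
% Dilution of the energy density \rho with the scale factor a(t), for a
% component with constant equation of state w = p/\rho:
\rho(a) \propto a^{-3(1+w)}
% Matter ("pressureless fluid"): w = 0    =>  \rho \propto a^{-3}  (inverse volume)
% Radiation:                     w = 1/3  =>  \rho \propto a^{-4}  (extra factor from stretched wavelengths)
```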

In contrast to what the name says, though, Dark Matter isn’t dark. “Dark” suggests that it absorbs light, but really it doesn’t interact with light at all. It would be better to call it transparent. Light just goes through. And in return, Dark Matter just goes through all normal matter, including planet Earth and you and me. Dark Matter interacts even less often than the already elusive neutrinos.

Dark matter is what makes galaxies rotate faster and helps galactic structure formation get started.

Dark Energy

Dark Energy, too, is transparent rather than dark. But its name is even more misleading than that of Dark Matter, because Dark Energy isn’t energy either. Instead, if you divide it by Newton’s constant, you get an energy density. In contrast to Dark Matter, however, this energy density does not dilute with the inverse volume. Instead, it doesn’t dilute at all as the volume increases, at least not noticeably.

If the energy density remains entirely constant with the increase of volume, it’s called the “cosmological constant.” General types of Dark Energy can have a density that changes with time (or location), but we currently do not have any evidence that this is the case. The cosmological constant, for now, does just fine to explain observations.
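Schematically, the conversion alluded to above looks as follows; the factors of 8π and c depend on conventions, so take this as a sketch:

```latex
% Einstein's field equations with a cosmological constant \Lambda:
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}
% Moved to the right-hand side, the \Lambda-term acts like a fluid with
% constant energy density and negative pressure:
\rho_\Lambda = \frac{\Lambda c^4}{8\pi G}\,, \qquad p_\Lambda = -\rho_\Lambda
% That is an equation of state w = -1, hence \rho_\Lambda \propto a^0: no dilution.
```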

Dark Energy is what makes the expansion of the universe speed up.

Are Dark Matter and Dark Energy the same?

Dark Matter and Dark Energy have distinctly different properties and cannot just be the same. At best they can both be different aspects of a common underlying theory. There are many theories for how this could happen, but to date we have no compelling evidence that this idea is correct.

Friday, February 15, 2019

Dark Matter – Or What?

Yesterday I gave a colloquium about my work with Tobias Mistele on superfluid dark matter. Since several people asked for the slides, I have uploaded them to slideshare. You can also find the pdf here. I previously wrote about our research here and here. All my papers are openly available on the arXiv.

Wednesday, February 13, 2019

When gravity breaks down

[Image: Clipartmax]
Einstein’s theory of general relativity is more than a hundred years old, but it still gives physicists headaches. Not only are Einstein’s equations hideously difficult to solve, they also clash with physicists’ other most-cherished achievement, quantum theory.

Problem is, particles have quantum properties. They can, for example, be in two places at once. These particles also have masses, and masses cause gravity. But since gravity does not have quantum properties, no one really knows what the gravitational pull of a particle in a quantum superposition is. To solve this problem, physicists need a theory of quantum gravity. Or, since Einstein taught us that gravity is really curvature of space-time, physicists need a theory for the quantum properties of space and time.

It’s a hard problem, even for big-brained people like theoretical physicists. They have known since the 1930s that quantum gravity is necessary to bring order into the laws of nature, but 80 years on a solution isn’t anywhere in sight. The major obstacle on the way to progress is the lack of experimental guidance. The effects of quantum gravity are extremely weak and have never been measured, so physicists have only math to rely on. And it’s easy to get lost in math.

The reason it is difficult to obtain observational evidence for quantum gravity is that all presently possible experiments fall into two categories. Either we measure quantum effects – using small and light objects – or we measure gravitational effects – using large and heavy objects. In both cases, quantum gravitational effects are tiny. To see the effects of quantum gravity, you would really need a heavy object that has pronounced quantum properties, and that’s hard to come by.

Physicists do know a few naturally occurring situations in which quantum gravity should be relevant. But it is not at short distances, though I often hear that. Non-quantized gravity really fails in situations where energy densities become large and space-time curvature becomes strong. And let me be clear that what astrophysicists consider “strong” curvature is still “weak” curvature for those working on quantum gravity. In particular, the curvature at a black hole horizon is not remotely strong enough to give rise to noticeable quantum gravitational effects.

Curvature strong enough to cause general relativity to break down, we believe, exists only in the centers of black holes and close to the Big Bang. In both cases the strongly compressed matter has a high density and pronounced quantum behavior, which should give rise to quantum gravitational effects. Unfortunately, we cannot look inside a black hole, and reconstructing what happened at the Big Bang from today’s observations cannot, with present measurement techniques, reveal the quantum gravitational behavior.

The regime where quantum gravity becomes relevant should also be reached in particle collisions at extremely high center-of-mass energy. If you had a collider large enough – estimates say that with current technology it would be about the size of the Milky Way – you could focus enough energy into a small region of space to create strong enough curvature. But we are not going to build such a collider any time soon.
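Here is a crude version of that estimate, assuming one simply scales up an LHC-like ring with today’s dipole magnets. It is only a lower bound, since it ignores synchrotron losses and the need for a useful collision rate, which is presumably why the full estimates quoted above come out Milky-Way-sized:

```latex
% Rule of thumb for the bending radius r of an ultra-relativistic proton
% with energy E in a dipole field B (E in GeV, B in Tesla, r in meters):
E \approx 0.3\, B\, r
% Planck energy E_{\rm Pl} \approx 1.2 \times 10^{19} GeV, LHC-grade magnets B \approx 8 T:
r \approx \frac{1.2 \times 10^{19}}{0.3 \times 8}\ {\rm m} \approx 5 \times 10^{18}\ {\rm m} \approx 500\ {\rm light\ years}
% Already an astronomical size, before even asking for luminosity.
```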

Besides strong space-time curvature, there is another, often neglected, case in which quantum effects of gravity should become measurable: quantum superpositions of heavy objects. These cause the approximation in which matter has quantum properties but gravity doesn’t (the “semi-classical limit”) to break down, revealing truly quantum effects of gravity. A few experimental groups are currently trying to reach the regime where they might become sensitive to such effects. They still have some orders of magnitude to go, so they are not quite there yet.
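To sketch what breaks down here: in the semi-classical limit, classical space-time is sourced by the expectation value of the quantum stress-energy tensor. Schematically:

```latex
% Semi-classical gravity: classical curvature sourced by the quantum
% expectation value of the stress-energy tensor,
G_{\mu\nu} = \frac{8\pi G}{c^4} \langle \hat{T}_{\mu\nu} \rangle
% For a mass in a superposition of two locations, the right-hand side
% averages over both, so the predicted pull points "in between" --
% whereas a quantized gravitational field would itself be in a
% superposition. Distinguishing these two predictions is the point of
% the experiments mentioned above.
```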

Why don’t physicists study this case more closely? As always, it’s hard to say why scientists do one thing and not another. I can only guess it’s because, from a theoretical perspective, this case is not all that interesting.

I know I said that physicists don’t have a theory of quantum gravity, but that is only partly correct. Gravity can be, and has been, quantized using the normal methods of quantization, already in the 1960s, by Feynman and DeWitt. However, the theory one obtains this way (“perturbative quantum gravity”) breaks down in exactly the strong-curvature regime where physicists want to use it (it is “perturbatively non-renormalizable”). Therefore, this approach is today considered merely a low-energy approximation (“effective theory”) to the yet-to-be-found full theory of quantum gravity (its “UV-completion”).
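For the technically minded, the core of the problem fits in a few lines; this is schematic, and the factors of 8π (reduced versus non-reduced Planck mass) depend on conventions:

```latex
% Perturbative quantum gravity: expand the metric around flat space,
g_{\mu\nu} = \eta_{\mu\nu} + \sqrt{32\pi G}\, h_{\mu\nu}
% and quantize the fluctuation h_{\mu\nu}. The coupling is Newton's
% constant, which in natural units has negative mass dimension,
G \sim \frac{1}{m_{\rm Pl}^2}
% so the effective expansion parameter at energy E grows like (E/m_{\rm Pl})^2:
% negligible at accessible energies, uncontrollable near the Planck scale.
% This is the "perturbatively non-renormalizable" breakdown mentioned above.
```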

Past the 1960s, almost all research efforts in quantum gravity focused on developing that full theory. The best known approaches are string theory, loop quantum gravity, asymptotic safety, and causal dynamical triangulation. The above-mentioned case of heavy objects in quantum superpositions, however, does not induce strong curvature and hence falls into the realm of the boring and supposedly well-understood theory from the 1960s. Ironically, for this reason there are almost no theoretical predictions for such an experiment from any of the major approaches to the full theory of quantum gravity.

Most people in the field presently think that perturbative quantum gravity must be the correct low-energy limit of any theory of quantum gravity. A minority, however, holds that this isn’t so, and members of this club usually quote one or both of the following reasons.

The first objection is philosophical. It does not conceptually make much sense to derive a supposedly more fundamental theory (quantum gravity) from a less fundamental one (non-quantum gravity), because by definition the derived theory is the less fundamental one. Indeed, the quantization procedure for Yang-Mills theories is a logical nightmare. You start with a non-quantum theory and make it more complicated to obtain another theory, though that is not strictly speaking a derivation. And if you then take the classical limit, you get a theory that doesn’t have any good interpretation whatsoever. So why did you start from it to begin with?

Well, the obvious answer is: We do it because it works, and we do it this way because of historical accident, not because it makes a lot of sense. Nothing wrong with that for a pragmatist like me, but it is also not a compelling reason to insist that the same method should apply to gravity.

The second often-named argument against the perturbative quantization is that you do not get atomic physics by quantizing water either. So if you think that gravity is not a fundamental interaction but comes from the collective behavior of a large number of microscopic constituents (think “atoms of space-time”), then quantizing general relativity is simply the wrong thing to do.

Those who take the point of view that gravity is really a bulk theory for some unknown microscopic constituents follow an approach called “emergent gravity.” It is supported by the (independent) observations of Jacobson, Padmanabhan, and Verlinde that the laws of gravity can be rewritten so that they appear like thermodynamical laws. My opinion about this flip-flops between “most amazing insight ever” and “curious aside of little relevance,” sometimes several times a day.
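For concreteness, here is the gist of Jacobson’s version of this observation, again only schematically:

```latex
% Jacobson (1995): demand that the Clausius relation
\delta Q = T\, \delta S
% holds across every local Rindler horizon, with the Unruh temperature
T = \frac{\hbar a}{2\pi c k_B}
% of an accelerated observer and an entropy proportional to horizon area,
S = \frac{k_B c^3}{4\hbar G}\, A
% Imposing this for all such horizons reproduces Einstein's field
% equations -- which is why gravity can be made to look like thermodynamics.
```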

Be that as it may, if you think that emergent gravity is the right approach to quantum gravity, then the question of where gravity-as-we-know-and-like-it breaks down becomes complicated. It should still break down at high curvature, but there may be further situations in which you could see departures from general relativity.

Erik Verlinde, for example, interprets dark matter and dark energy as relics of quantum gravity. If you believe this, we already have evidence for quantum gravity! Others have suggested that if space-time is made of microscopic constituents, then it may have bulk properties like viscosity, or give rise to effects normally associated with crystals, like birefringence or the dispersion of light.

In summary, the expectation that quantum effects of gravity should become relevant at strong space-time curvature is based on an uncontroversial extrapolation, and pretty much everyone in the field agrees on it.* In certain approaches to quantum gravity, deviations from general relativity could also become relevant at long distances, low acceleration, or low energies. An often neglected possibility is to probe the effects of quantum gravity with quantum superpositions of heavy objects.

I hope to see experimental evidence for quantum gravity in my lifetime.


* Except me, sometimes.

Friday, February 08, 2019

A philosopher of science reviews “Lost in Math”

Jeremy Butterfield is a philosopher of science in Cambridge. I previously wrote about some of his work here, and have met him on various occasions. Butterfield recently reviewed my book “Lost in Math,” and you can now find this review online here. (I believe it was solicited for a journal by the name of Physics in Perspective.)

His is a very detailed review that focuses, unsurprisingly, on the philosophical implications of my book. I think his summary will give you a pretty good impression of the book’s content. However, I want to point out two places where he misrepresents my argument.

First, in section 2, Butterfield lays out his disagreements with me. Alas, he disagrees with positions I don’t hold and certainly did not state, either in the book or anywhere else:
“Hossenfelder’s main criticism of supersymmetry is, in short, that it is advocated because of its beauty, but is unobserved. But even if supersymmetry is not realized in nature, one might well defend studying it as an invaluable tool for getting a better understanding of quantum field theories. A similar defence might well be given for studying string theory.”
Sure. Supersymmetry, string theory, grand unification, even naturalness, started out as good ideas and valuable research programs. I do not say these should not have been studied; neither do I say one should now discontinue studying them. The problem is that these ideas have grown into paper-production industries that no longer produce valuable output.

Beautiful hypotheses are certainly worth consideration. Troubles begin if data disagree with the hypotheses but scientists continue to rely on their beautiful hypotheses rather than taking clues from evidence.

Second, Butterfield misunderstands just how physicists working on the field’s foundations are “led astray” by arguments from beauty. He writes:
“I also think advocates of beauty as a heuristic do admit these limitations. They advocate no more than a historically conditioned, and fallible, heuristic [...] In short, I think Hossenfelder interprets physicists as more gung-ho, more naïve, that beauty is a guide to truth than they really are.”
To the extent that physicists are aware they use arguments from beauty, most know that these are not scientific arguments and also readily admit it. I state this explicitly in the book. They use such arguments anyway, however, because doing so has become accepted methodology. Look at what they do, don’t listen to what they say.

A few try to justify using arguments from beauty by appeals to cherry-picked historical examples or quotes from Einstein and Dirac. In most cases, however, physicists are not even aware they use arguments from beauty to begin with (hence the book’s title). I have such discussions on a daily basis.

Physicists wrap appeals to beauty into statements like “this just can’t be the last word,” “intuition tells me,” or “this screams for an explanation”. They have forgotten that naturalness is an argument from beauty and can’t recall, or never looked at, the motivation for axions or gauge coupling unification. They will express their obsessions with numerical coincidences by saying “it’s curious” or “it is suggestive,” often followed by “Don’t you agree?”.

Of course I agree. I agree that supersymmetry is beautiful and it should be true, and it looks like there should be a better explanation for the parameters in the standard model, and it looks like there should be a unified force. But who cares what I think nature should be like? Human intuition is not a good guide to the development of new laws of nature.

What physicists are naive about is not appeals to beauty; what they are naive about is their own rationality. They cannot fathom the possibility that their scientific judgement is influenced by cognitive biases and social trends in scientific communities. They believe it does not matter for their interests how their research is presented in the media.

The easiest way to see that the problem exists is that they deny it.