Monday, February 18, 2019

Never again confuse Dark Matter with Dark Energy

Barnard 68 is a molecular cloud that absorbs light. It is dark and made of matter, but not made of dark matter. [Image: Wikipedia]
Dark Matter 

Dark Matter is, as the name says, matter. But “matter” is not just physicists’ way to say “stuff,” it’s a technical term. Matter has specific behavior, which is that its energy-density dilutes with the inverse volume. Energy-density of radiation, in contrast, dilutes faster than the inverse volume, because the wavelengths of the radiation also stretch.

Generally, anything that has a non-negligible pressure will not behave in this particular way. Cosmologists therefore also say dark matter is “a pressureless fluid.” And, since I know it’s confusing, let me remind you that a fluid isn’t the same as a liquid, and gases can be fluids, so sometimes they may speak about “pressureless gas” instead.
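
For the record, this behavior can be summarized in one formula. In standard cosmology, the energy-density of a component with equation-of-state parameter w (the ratio of its pressure to its energy-density) scales with the scale factor a of the universe as

\rho \propto a^{-3(1+w)}

Pressureless matter has w = 0 and so dilutes as a^{-3}, that is with the inverse volume, while radiation has w = 1/3 and dilutes as a^{-4}, the extra factor coming from the stretching of wavelengths.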

In contrast to what the name says, though, Dark Matter isn’t dark. “Dark” suggests that it absorbs light, but really it doesn’t interact with light at all. It would be better to call it transparent. Light just goes through. And in return, Dark Matter just goes through all normal matter, including planet Earth and you and me. Dark Matter interacts even less often than the already elusive neutrinos.

Dark matter is what makes galaxies rotate faster and helps galactic structure formation to get started.

Dark Energy

Dark Energy, too, is transparent rather than dark. But its name is even more misleading than that of Dark Matter because Dark Energy isn’t energy either. Instead, if you divide it by Newton’s constant, you get an energy density. In contrast to Dark Matter, however, this energy-density does not dilute with the inverse volume. Instead, it doesn’t dilute at all if the volume increases, at least not noticeably.

If the energy density remains entirely constant with the increase of volume, it’s called the “cosmological constant.” General types of Dark Energy can have a density that changes with time (or location), but we currently do not have any evidence that this is the case. The cosmological constant, for now, does just fine to explain observations.
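
In the notation used above, the cosmological constant corresponds to an equation-of-state parameter w = -1, for which \rho \propto a^{-3(1+w)} = a^0, that is a constant density. More general types of Dark Energy have a w that differs from -1 or changes with time; current observations are consistent with w = -1.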

Dark Energy is what makes the expansion of the universe speed up.

Are Dark Matter and Dark Energy the same?

Dark Matter and Dark Energy have distinctly different properties and cannot just be the same. At best they can both be different aspects of a common underlying theory. There are many theories for how this could happen, but to date we have no compelling evidence that this idea is correct.

Friday, February 15, 2019

Dark Matter – Or What?

Yesterday I gave a colloq about my work with Tobias Mistele on superfluid dark matter. Since several people asked for the slides, I have uploaded them to slideshare. You can also find the pdf here. I previously wrote about our research here and here. All my papers are openly available on the arXiv.

Wednesday, February 13, 2019

When gravity breaks down

[img:clipartmax]
Einstein’s theory of general relativity is more than a hundred years old, but it still gives physicists headaches. Not only are Einstein’s equations hideously difficult to solve, they also clash with physicists’ other most-cherished achievement, quantum theory.

Problem is, particles have quantum properties. They can, for example, be in two places at once. These particles also have masses, and masses cause gravity. But since gravity does not have quantum properties, no one really knows what the gravitational pull of a particle in a quantum superposition is. To solve this problem, physicists need a theory of quantum gravity. Or, since Einstein taught us that gravity is really curvature of space-time, physicists need a theory for the quantum properties of space and time.
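
To see where exactly things go wrong, it helps to write down the usual work-around, the semi-classical Einstein equations, in which classical space-time is sourced by the expectation value of the quantum matter:

G_{\mu\nu} = 8\pi G \, \langle \hat{T}_{\mu\nu} \rangle

For a particle in a superposition of two locations, the expectation value smears the mass over both places, so the gravitational field would point to somewhere in between, which is generally believed to be the wrong answer. This equation is useful as an approximation, but it cannot be the full story.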

It’s a hard problem, even for big-brained people like theoretical physicists. They have known since the 1930s that quantum gravity is necessary to bring order into the laws of nature, but 80 years on a solution isn’t anywhere in sight. The major obstacle on the way to progress is the lack of experimental guidance. The effects of quantum gravity are extremely weak and have never been measured, so physicists have only math to rely on. And it’s easy to get lost in math.

The reason it is difficult to obtain observational evidence for quantum gravity is that all presently possible experiments fall into two categories. Either we measure quantum effects – using small and light objects – or we measure gravitational effects – using large and heavy objects. In both cases, quantum gravitational effects are tiny. To see the effects of quantum gravity, you would really need a heavy object that has pronounced quantum properties, and that’s hard to come by.
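
To put a number on “tiny”: the natural scale of quantum gravity is the Planck scale, built from the constants ħ, c, and G,

E_{\rm Pl} = \sqrt{\hbar c^5 / G} \approx 1.2 \times 10^{19}\,\text{GeV}, \qquad \ell_{\rm Pl} = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}\,\text{m}

The LHC’s 14 TeV is about 15 orders of magnitude below the Planck energy, which is why no experiment we can currently do gets anywhere near the regime where quantum gravitational effects become sizeable.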

Physicists do know a few naturally occurring situations where quantum gravity should be relevant. But it is not at short distances, though I often hear that. Non-quantized gravity really fails in situations where energy-densities become large and space-time curvature becomes strong. And let me be clear that what astrophysicists consider “strong” curvature is still “weak” curvature for those working on quantum gravity. In particular, the curvature at a black hole horizon is not remotely strong enough to give rise to noticeable quantum gravitational effects.

Curvature strong enough to cause general relativity to break down, we believe, exists only in the center of black holes and close to the Big Bang. In both cases the strongly compressed matter has a high density and a pronounced quantum behavior which should give rise to quantum gravitational effects. Unfortunately, we cannot look inside a black hole, and reconstructing what happened at the Big Bang from today’s observations cannot, with present measurement techniques, reveal the quantum gravitational behavior.

The regime where quantum gravity becomes relevant should also be reached in particle collisions at extremely high center-of-mass energy. If you had a collider large enough – estimates say that with current technology it would be about the size of the Milky Way – you could focus enough energy into a small region of space to create strong enough curvature. But we are not going to build such a collider any time soon.
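
Just to illustrate where the “size of the Milky Way” ballpark comes from, here is a deliberately crude estimate in Python. It assumes that the reachable collision energy of a ring collider grows linearly with its circumference at fixed magnetic field strength, as for a proton synchrotron, and it ignores all practical issues of luminosity and magnet technology, so take the output as an order-of-magnitude illustration only:

# Crude scaling: a synchrotron's energy is roughly proportional to B * R,
# so at fixed magnetic field the reachable energy grows with the radius.
E_LHC_GEV = 1.4e4            # LHC design energy, 14 TeV
E_PLANCK_GEV = 1.2e19        # Planck energy, ~1.2 x 10^19 GeV
LHC_CIRCUMFERENCE_KM = 27.0  # LHC tunnel circumference
KM_PER_LIGHT_YEAR = 9.46e12

scale = E_PLANCK_GEV / E_LHC_GEV                  # ~ 9 x 10^14
circumference_km = LHC_CIRCUMFERENCE_KM * scale
circumference_ly = circumference_km / KM_PER_LIGHT_YEAR

print(f"Required circumference: {circumference_ly:,.0f} light years")
# -> a few thousand light years, i.e. astronomical dimensions

Even this optimistic linear scaling gives a ring thousands of light years in circumference; estimates that account for realistic technology, like the one quoted above, come out even larger.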

Besides strong space-time curvature, there is another case where quantum effects of gravity should become measurable that is often neglected: creating quantum superpositions of heavy objects. This causes the approximation in which matter has quantum properties but gravity doesn’t (the “semi-classical limit”) to break down, revealing truly quantum effects of gravity. A few experimental groups are currently trying to reach the regime where they might become sensitive to such effects. They still have some orders of magnitude to go, so not quite there yet.

Why don’t physicists study this case more closely? As always, it’s hard to say why scientists do one thing and not another. I can only guess it’s because from a theoretical perspective this case is not all that interesting.

I know I said that physicists don’t have a theory of quantum gravity, but that is only partly correct. Gravity can be, and has been, quantized using the normal methods of quantization, already in the 1960s by Feynman and DeWitt. However, the theory one obtains this way (“perturbative quantum gravity”) breaks down in exactly the strong curvature regime in which physicists want to use it (“perturbatively non-renormalizable”). Therefore, this approach is today considered merely a low-energy approximation (“effective theory”) to the yet-to-be-found full theory of quantum gravity (“UV-completion”).
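
The reason for this breakdown can be stated in one line. Newton’s constant is not dimensionless; it carries two inverse powers of the Planck mass. The quantum corrections of perturbative quantum gravity therefore organize into a series in powers of the typical energy over the Planck mass,

G E^2 \sim \left( E / M_{\rm Pl} \right)^2

which works fine at accessible energies but becomes useless when energies (or curvatures) approach the Planck scale, because then infinitely many terms, each with its own free parameter, become equally important.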

Past the 1960s, almost all research efforts in quantum gravity focused on developing that full theory. The best known approaches are string theory, loop quantum gravity, asymptotic safety, and causal dynamical triangulation. The above-mentioned case with heavy objects in quantum superpositions, however, does not induce strong curvature and hence falls into the realm of the boring and supposedly well-understood theory from the 1960s. Ironically, for this reason there are almost no theoretical predictions for such an experiment from any of the major approaches to the full theory of quantum gravity.

Most people in the field presently think that perturbative quantum gravity must be the correct low-energy limit of any theory of quantum gravity. A minority, however, holds that this isn’t so, and members of this club usually quote one or both of the following reasons.

The first objection is philosophical. It does not conceptually make much sense to derive a supposedly more fundamental theory (quantum gravity) from a less fundamental one (non-quantum gravity) because by definition the derived theory is the less fundamental one. Indeed, the quantization procedure for Yang-Mills theories is a logical nightmare. You start with a non-quantum theory, make it more complicated to obtain another theory, though that is not strictly speaking a derivation, and if you then take the classical limit you get a theory that doesn’t have any good interpretation whatsoever. So why did you start from it to begin with?

Well, the obvious answer is: We do it because it works, and we do it this way because of historical accident not because it makes a lot of sense. Nothing wrong with that for a pragmatist like me, but also not a compelling reason to insist that the same method should apply to gravity.

The second often-named argument against the perturbative quantization is that you do not get atomic physics by quantizing water either. So if you think that gravity is not a fundamental interaction but comes from the collective behavior of a large number of microscopic constituents (think “atoms of space-time”), then quantizing general relativity is simply the wrong thing to do.

Those who take this point of view that gravity is really a bulk-theory for some unknown microscopic constituents follow an approach called “emergent gravity”. It is supported by the (independent) observations of Jacobson, Padmanabhan, and Verlinde, that the laws of gravity can be rewritten so that they appear like thermodynamical laws. My opinion about this flip-flops between “most amazing insight ever” and “curious aside of little relevance,” sometimes several times a day.

Be that as it may, if you think that emergent gravity is the right approach to quantum gravity, then the question where gravity-as-we-know-and-like-it breaks down becomes complicated. It should still break down at high curvature, but there may be further situations where you could see departures from general relativity.

Erik Verlinde, for example, interprets dark matter and dark energy as relics of quantum gravity. If you believe this, we do already have evidence for quantum gravity! Others have suggested that if space-time is made of microscopic constituents, then it may have bulk-properties like viscosity, or result in effects normally associated with crystals like birefringence, or the dispersion of light.

In summary, the expectation that quantum effects of gravity should become relevant for strong space-time curvature is based on an uncontroversial extrapolation and pretty much everyone in the field agrees on it.* In certain approaches to quantum gravity, deviations from general relativity could also become relevant at long distances, low acceleration, or low energies. An often neglected possibility is to probe the effects of quantum gravity with quantum superpositions of heavy objects.

I hope to see experimental evidence for quantum gravity in my lifetime.


*Except me, sometimes.

Friday, February 08, 2019

A philosopher of science reviews “Lost in Math”

Jeremy Butterfield is a philosopher of science in Cambridge. I previously wrote about some of his work here, and have met him on various occasions. Butterfield recently reviewed my book “Lost in Math,” and you can now find this review online here. (I believe it was solicited for a journal by the name of Physics in Perspective.)

His is a very detailed review that focuses, unsurprisingly, on the philosophical implications of my book. I think his summary will give you a pretty good impression of the book’s content. However, I want to point out two places where he misrepresents my argument.

First, in section 2, Butterfield lays out his disagreements with me. Alas, he disagrees with positions I don’t hold and certainly did not state, neither in the book nor anywhere else:
“Hossenfelder’s main criticism of supersymmetry is, in short, that it is advocated because of its beauty, but is unobserved. But even if supersymmetry is not realized in nature, one might well defend studying it as an invaluable tool for getting a better understanding of quantum field theories. A similar defence might well be given for studying string theory.”
Sure. Supersymmetry, string theory, grand unification, even naturalness, started out as good ideas and valuable research programs. I do not say these should not have been studied; neither do I say one should now discontinue studying them. The problem is that these ideas have grown into paper-production industries that no longer produce valuable output.

Beautiful hypotheses are certainly worth consideration. Troubles begin if data disagree with the hypotheses but scientists continue to rely on their beautiful hypotheses rather than taking clues from evidence.

Second, Butterfield misunderstands just how physicists working on the field’s foundations are “led astray” by arguments from beauty. He writes:
“I also think advocates of beauty as a heuristic do admit these limitations. They advocate no more than a historically conditioned, and fallible, heuristic [...] In short, I think Hossenfelder interprets physicists as more gung-ho, more naïve, that beauty is a guide to truth than they really are.”
To the extent that physicists are aware they use arguments from beauty, most know that these are not scientific arguments and also readily admit it. I state this explicitly in the book. They use such arguments anyway, however, because doing so has become accepted methodology. Look at what they do, don’t listen to what they say.

A few try to justify using arguments from beauty by appeals to cherry-picked historical examples or quotes from Einstein and Dirac. In most cases, however, physicists are not aware they use arguments from beauty to begin with (hence the book’s title). I have such discussions on a daily basis.

Physicists wrap appeals to beauty into statements like “this just can’t be the last word,” “intuition tells me,” or “this screams for an explanation”. They have forgotten that naturalness is an argument from beauty and can’t recall, or never looked at, the motivation for axions or gauge coupling unification. They will express their obsessions with numerical coincidences by saying “it’s curious” or “it is suggestive,” often followed by “Don’t you agree?”.

Of course I agree. I agree that supersymmetry is beautiful and it should be true, and it looks like there should be a better explanation for the parameters in the standard model, and it looks like there should be a unified force. But who cares what I think nature should be like? Human intuition is not a good guide to the development of new laws of nature.

What physicists are naive about is not appeals to beauty; what they are naive about is their own rationality. They cannot fathom the possibility that their scientific judgement is influenced by cognitive biases and social trends in scientific communities. They believe it does not matter for their interests how their research is presented in the media.

The easiest way to see that the problem exists is that they deny it.

Wednesday, February 06, 2019

Why a larger particle collider is not currently a good investment

LHC tunnel. Credits: CERN.
That a larger particle collider is not currently a good investment is hardly a controversial position. While the cost per unit of collision energy has decreased over the decades thanks to better technology, the absolute cost of new machines has shot up. That the costs of larger particle colliders would at some point become economically prohibitive has been known for a long time. Even particle physicists could predict this.

Already in 2001, Maury Tigner, who led the Central Design Group for the (cancelled) Superconducting Super Collider project, wrote an article for Physics Today asking “Does Accelerator-Based Particle Physics Have a Future?” While he remained optimistic that collaborative efforts and technological advances would lead to some more progress, he was also well aware of the challenges. Tigner wrote:
“If we are to continue progress with accelerator-based particle physics, we will have to mount much more effective efforts in the technical aspects of accelerator development — with a strong focus on economy. Such efforts will probably not suffice to hold constant the cost of new facilities in the face of the ever more demanding joint challenges of higher collision energy and higher luminosity. Making available the necessary intellectual and financial resources to sustain progress would seem to require social advances of unprecedented scope in resource management and collaboration.”
But the unprecedented social advances have not come to pass, and neither have we since seen major breakthroughs in collider technology. The state of affairs is often summarized in what is known as the “Livingston Plot” that shows the year of construction versus energy. You can clearly see that the golden years of particle accelerators ended around 1990. And no game-changing technology has come up to turn things around:

Livingston plot. Image Credits: K. Yokoya
Particle accelerators are just damn expensive. In the plot below I have collected some numbers for existing and former colliders. I took the numbers from this paper and from Wikipedia. Cost estimates are not inflation-adjusted and currency-conversions are approximate, so do not take the numbers too seriously. The figure should, however, give you a roughly correct impression:



The ILC is the (proposed) International Linear Collider, and the NLC was the once proposed Next Linear Collider. The SSC is the scrapped Superconducting Super Collider. FCC-ee and FCC are the low-cost and high-cost variants of CERN’s planned Future Circular Collider.

When interpreting this plot, keep in mind that the cost for the LHC was low because it reused the tunnel of an earlier experiment. Also note that most of these machines were not built to reach the highest energies possible (at the time), so please do not judge facilities for falling below the diagonal.

So, yeah, particle colliders are expensive, no doubt about this. Now, factor in the unclear discovery potential for the next larger collider, and compare this to other experiments that “push frontiers,” as the catchphrase has it.

There is currently no reason to think a larger particle collider will do anything besides measuring some constants to higher precision. That is not entirely uninteresting, of course, and it’s enough to excite particle physicists. But this knowledge will tell us little new about the universe and it cannot be used to obtain further knowledge either.

Compare the expenses for CERN’s FCC plans to that of the gravitational wave interferometer LIGO. LIGO’s price tag was well below a billion US$. Still, in 1991, physicists hotly debated whether it was worth the money.

And that is even though the scientific case for LIGO was clear. Gravitational waves were an exceptionally solid prediction. Not only this, physicists knew already from indirect measurements that they must exist. True, they did not know exactly at which amplitude to expect events or how many of them. But this was not a situation in which “nothing until 15 orders of magnitude higher” was the most plausible case.

In addition, gravitational waves are good for something. They allow physicists to infer properties of distant stellar objects, which is exactly what the LIGO collaboration is now doing. We have learned far more from LIGO than that gravitational waves exist.

The planned FCC costs 20 times as much, has no clear discovery target, and it’s a self-referential enterprise: A particle collider tells you more about particle collisions. We have found the Higgs, all right, but there is nothing you can do with the Higgs now other than studying it closer.

Another cost-comparison: The Square Kilometer Array (SKA). Again the scientific case is clear. The SKA (among other things) would allow us to study the “dark ages” of the universe, which we cannot see with other telescopes because no stars existed at the time, and to look for organic compounds in outer space. From this we could learn a lot about star formation, the mystery that is dark matter, and the prevalence of organic chemistry in the universe that may be an indicator for life. The total cost of the SKA is below $2 billion, though it looks like the full version will not come into being. Currently, less than a billion in funding is available, which suffices only for the slimmed-down variant (SKA-1).

And it’s not like building larger colliders is the only thing you can do to learn more about particle physics. All the things that can happen at higher energies also affect what happens at low energies, it’s just that at low energies you have to measure very, very precisely. That’s why high-precision measurements, like that of the muon g-2 or the electric dipole moment, are an alternative to going to higher energies. Such experiments are far less costly.
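
The reason such precision measurements can compete is that a heavy new particle of mass M typically shifts a low-energy observable, like the muon’s anomalous magnetic moment, by an amount that scales, schematically, as

\delta a_\mu \sim \frac{g^2}{16\pi^2} \frac{m_\mu^2}{M^2}

where g is the coupling of the new particle. The exact numbers depend on the model, but the scaling is the point: improving the measurement precision probes larger masses M without ever having to produce the particle directly.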

There are always many measurements that could be done more precisely, and when doing so, it is always possible that we find something new. But the expected discovery potential must play a role when evaluating the promises of an investment. It is unsurprising that particle physicists would like to have a new particle collider. But that is not an argument for why such a machine would be a good investment.

Particle physicists have not been able to come up with any reliable predictions for new effects for decades. The prediction of the Higgs-boson was the last good prediction they had. With that, the standard model is complete and we have no reason to expect anything more, not until energies 15 orders of magnitude higher.

Of course, particle physicists do have a large number of predictions for new particles within the reach of the next larger collider, but these are really fabricated for no other purpose than to rule them out. You cannot trust them. When they tell you that a next larger collider may see supersymmetry or extra dimensions or dark matter, keep in mind they told you the same thing 20 years ago.

Tuesday, February 05, 2019

String theory landscape predicts no new particles at the LHC

In a paper that appeared on the arXiv last week, Howard Baer and collaborators predict masses of new particles using the string theory landscape. They argue that the Large Hadron Collider should not have seen them so far, and likely will not see them in the upcoming run. Instead, it would at least take an upgrade of the LHC to higher collision energy to see any.

The idea underlying their calculation is that we live in a multiverse in which universes with all possible combinations of the constants of nature exist. On this multiverse, you have a probability distribution. You must further take into account that some combinations of natural constants will not allow for life to exist. This results in a new probability distribution that quantifies the likelihood, not of the existence of universes, but that we observe a particular combination. You can then calculate the probability for finding certain masses of postulated particles.
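
Schematically, the recipe is to weight an assumed prior distribution over the landscape by a selection factor that quantifies how hospitable each combination of constants λ is to observers:

dP_{\rm obs}(\lambda) \propto f_{\rm prior}(\lambda)\, n_{\rm obs}(\lambda)\, d\lambda

One then checks where the observed values of the constants fall in this distribution. Note that both factors on the right-hand side have to be postulated.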

As I just explained in a recent post, this is a new variant of arguments from naturalness. A certain combination of parameters is more “natural” the more often it appears in the multiverse. As Baer et al write in their paper:
“The landscape, if it is to be predictive, is predictive in the statistical sense: the more prevalent solutions are statistically more likely. This gives the connection between landscape statistics and naturalness: vacua with natural observables are expected to be far more common than vacua with unnatural observables.”
Problem is, the landscape is just not predictive. It is predictive in the statistical sense only after you have invented a probability distribution. But since you cannot derive the distribution from first principles, you really postulate your results in the form of the distribution.

Baer et al take their probability distribution from the literature, specifically from a 2004 paper by Michael Douglas. The Douglas paper has no journal reference and is on the arXiv in version 4 with the note “we identify a serious error in the original argument, and attempt to address it.”

So what do the particle physicists find? They find that the mass of the Higgs-boson is most likely what we have observed. They find that most likely we have not yet seen supersymmetric particles at the LHC. They also find that so far we have not seen any dark matter particles.

I must admit that this fits remarkably well with observations. I would have been more impressed, though, had they made those predictions prior to the measurement.

They also offer some actual predictions, namely that the next LHC run is unlikely to see any new fundamental particles, but that upgrading the LHC to higher energies should help to see them. (This upgrade is called HE-LHC and is distinct from the FCC proposal.) They also think that the next round of dark matter experiments should see something.

Ten years ago, Howard Baer worried that when the LHC turned on, it would produce so many supersymmetric particles that this would screw up the detector calibration.

Monday, February 04, 2019

Maybe I’m crazy


How often can you hold up four fingers, hear a thousand people shout “five”, and not agree with them? How often can you repeat an argument, see it ignored, and still believe in reason? How often can you tell a thousand scientists the blatantly obvious, hear them laugh, and not think you are the one who is insane?

I wonder.

Every time a particle physicist dismisses my concerns, unthinkingly, I wonder some more. Maybe I am crazy? It would explain so much. Then I remind myself of the facts, once again.

Fact is, in the foundations of physics we have not seen progress for the past four decades. Ever since the development of the standard model in the 1970s, further predictions for new effects have been wrong. Physicists commissioned dozens of experiments to look for dark matter particles and grand unification. They turned data up-side down in search for supersymmetric particles and dark energy and new dimensions of space. The result has been consistently: Nothing new.

Yes, null-results are also results. But they are not very useful results if you need to develop a new theory. A null-result says: “Let’s not go this way.” A result says: “Let’s go that way.” If there are many ways to go, discarding some of them does not help much. To move on in the foundations of physics, we need results, not null-results.

It’s not like we are done and can just stop here. We know we have not reached the end. The theories we currently have in the foundations are not complete. They have problems that require solutions. And if you look at the history of physics, theory-led breakthroughs came when predictions were based on solving problems that required a solution.

But the problems that theoretical particle physicists currently try to solve do not require solutions. The lack of unification, the absence of naturalness, the seeming arbitrariness of the constants of nature: these are aesthetic problems. Physicists can think of prettier theories, and they believe those have better chances to be true. Then they set out to test those beauty-based predictions. And get null-results.

It’s not only that there is no reason to think this method should work, it does – in fact! – not work, has not worked for decades. It is failing right now, once again, as more beauty-based predictions for the LHC are ruled out every day.

They keep on believing, nevertheless.

Those who, a decade ago, made confident predictions that the Large Hadron Collider should have seen new particles can now not be bothered to comment. They are busy making “predictions” for new particles that the next larger collider should see. We risk spending $20 billion on more null-results that will not move us forward. Am I crazy for saying that’s a dumb idea? Maybe.

Someone recently compared me to a dinghy that has the right of way over a tanker ship. I could have the best arguments in the world, that still would not stop them. Inertia. It’s physics, bitches.

Recently, I wrote an Op-Ed for the NYT in which I lay out why a larger particle collider is not currently a good investment. In her response, Prof Lisa Randall writes: “New dimensions or underlying structures might exist, but we won’t know unless we explore.” Correct, of course, but that doesn’t explain why a larger particle collider is a promising investment.

Randall is professor of physics at Harvard. She is famous for having proposed a model, together with Raman Sundrum, according to which the universe should have additional dimensions of space. The key insight underlying the Randall-Sundrum model is that a small number in an exponential function can make a large number (see below). She is one of the world’s best-cited particle physicists. There is no evidence these extra dimensions exist. More recently she has speculated that dark matter killed the dinosaurs.
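
To spell out the exponential: in the Randall-Sundrum model the weak scale is generated from the Planck scale by a “warp factor”,

M_{\rm weak} \sim e^{-\pi k r_c}\, M_{\rm Pl}

so that a product k r_c of order ten suffices to produce a hierarchy of sixteen orders of magnitude.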

Randall ends her response with: “Colliders are expensive, but so was the government shutdown,” an argument so flawed and so common I debunked it two weeks before she made it.

And that is how the top of tops of theoretical particle physicists react if someone points out they are unable to acknowledge failure: They demonstrate they are unable to acknowledge failure.

When I started writing my book, I thought the problem is they are missing information. But I no longer think so. Particle physicists have all the information they need. They just refuse to use it. They prefer to believe.

I now think it’s really a standoff between reason and intuition. Here I am, with all my arguments. With my stacks of papers about naturalness-based predictions that didn’t work. With my historical analysis and my reading of the philosophy of physics. With my extrapolation of the past to the future that says: Most likely, we will see more null-results at higher energies.

And on the other side there are some thousand particle physicists who think that this cannot possibly be the end of the story, that there must be more to see. Some thousand of the most intelligent people the human race has ever produced. Who believe they are right. Who trust their experience. Who think their collective hope is reason enough to spend $20 billion.

If this was a novel, hope would win. No one wants to live in a world where the little German lady with her oh-so rational arguments ends up being right. Not even the German lady wants that.

Wait, what did I say? I must be crazy.

Sunday, February 03, 2019

A philosopher's take on “naturalness” in particle physics

Square watermelons. Natural?
[Image Source]
Porter Williams is a philosopher at the University of Pittsburgh. He has a new paper about “naturalness,” an idea that has become a prominent doctrine in particle physics. In brief, naturalness requires that a theory’s dimensionless parameters should be close to 1, unless there is an explanation why they are not.

Naturalness arguments were the reason so many particle physicists expected (still expect) the Large Hadron Collider (LHC) to see fundamentally new particles besides the Higgs-boson.

In his new paper, titled “Two Notions of Naturalness,” Williams argues that, in recent years, naturalness arguments have split into two different categories.

The first category of naturalness is the formerly used one, based on quantum field theory. It quantifies, roughly speaking, how sensitive the parameters of a theory at low energies are to changes of the parameters at high energies. Assuming a probability distribution for the parameters at high energies, you can then quantify the likelihood of finding a theory with the parameters we do observe. If the likelihood is small, the theory is said to be “unnatural” or “finetuned”. The mass of the Higgs-boson is unnatural in this sense, so is the cosmological constant, and the theta-parameter.
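
A common way to make this sensitivity quantitative is the Barbieri-Giudice measure, which for each parameter p of the high-energy theory compares the relative change of a low-energy observable, say the Higgs mass, to the relative change of p:

\Delta_p = \left| \frac{\partial \ln m_h^2}{\partial \ln p} \right|

If Δ_p is much larger than 1 for some parameter, the observable reacts violently to small changes of that parameter, and the theory is called finetuned.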

The second, and newer, type of naturalness, is based on the idea that our universe is one of infinitely many that together make up a “multiverse.” In this case, if you assume a probability distribution over the universes, you can calculate the likelihood of finding the parameters we observe. Again, if that comes out to be unlikely, the theory is called “unnatural.” This approach has so far not been pursued much. Particle physicists therefore hope that the standard model may turn out to be natural in this new way.

I wrote about this drift of naturalness arguments last year (it is also briefly mentioned in my book). I think Williams correctly identifies a current trend in the community.

But his paper is valuable beyond identifying a new trend, because Williams lays out the arguments from naturalness very clearly. I have given quite some talks about the topic, and in doing so noticed that even particle physicists are sometimes confused about exactly what the argument is. Some erroneously think that naturalness is a necessary consequence of effective field theory. This is not so. Naturalness is an optional requirement that the theory may or may not fulfill.

As Williams points out: “Requiring that a quantum field theory be natural demands a more stringent autonomy of scales than we are strictly licensed to expect by [the] structural features of effective field theory.” In saying this, he disagrees with a claim by the theoretical physicist Gian-Francesco Giudice, according to whom naturalness “can be justified by renormalization group methods and the decoupling theorem.” I side with Williams.

Nevertheless, Williams comes out in defense of naturalness arguments. He thinks that these arguments are well-motivated. I cannot, however, quite follow his rationale for coming to this conclusion.

It is correct that the sensitivity to high-energy parameters is peculiar and something that we see in the standard model only for the mass of the Higgs-boson*. But we know why that is: The Higgs-boson is different from all the other particles in being a scalar particle. The expectation that its quantum corrections should enjoy a similar protection as the other particles is therefore not justified.

Williams offers one argument that I had not heard before, which is that you need naturalness to get reliable order-of-magnitude estimates. But this argument assumes that you have only one constant for each dimension of units, so it’s circular. The best example is cosmology. The cosmological constant is not natural. GR has another, very different mass-scale, namely the Planck mass. Still you can perfectly well make order-of-magnitude estimates as long as you know which mass-scales to use. In other words, making order-of-magnitude estimates in an unnatural theory is only problematic if you assume the theory really should be natural.
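
To get a sense for the numbers in this example: the observed cosmological constant corresponds to an energy density of roughly (10^{-3} eV)^4, while the “natural” expectation built from the Planck mass would be M_{\rm Pl}^4. The ratio,

\rho_\Lambda^{\rm obs} / M_{\rm Pl}^4 \sim 10^{-120}

(give or take a few orders of magnitude depending on conventions), is what people mean when they call the cosmological constant unnatural. And yet, as said above, order-of-magnitude estimates in cosmology work just fine once you know which scale is the relevant one.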

The biggest problem, however, is the same for both types of naturalness: You don’t have the probability distribution and no way of obtaining it because it’s a distribution over an experimentally inaccessible space. To quantify naturalness, you therefore have to postulate a distribution, but that has the consequence that you merely get out what you put in. Naturalness arguments can therefore always be amended to give whatever result you want.

And that really is the gist of the current trend. The LHC data has shown that the naturalness arguments that particle physicists relied on did not work. But instead of changing their methods of theory-development, they adjust their criteria of naturalness to accommodate the data. This will not lead to better predictions.


*The strong CP-problem (that’s the thing with the theta-parameter) is usually assumed to be solved by the Peccei-Quinn mechanism, never mind that we still haven’t seen axions. The cosmological constant has something to do with gravity, and therefore particle physicists think it’s none of their business.

Saturday, February 02, 2019

Particle physicists surprised to find I am not their cheer-leader

Me and my Particle Data Booklet.
In the past week, I got a lot of messages from particle physicists who are unhappy I wrote an Op-Ed for the New York Times. They inform me that they really would like to have a larger particle collider. In other news, dogs still bite men. In China, bags of rice topple over.

More interesting than particle physicists’ dismay are the flavors of their discontent. I’ve been called a “troll” and a “liar”. I’ve been told I “foul my nest” and “play the victim.” I have been accused of envy, wrath, sloth, greed, narcissism, and grandiosity. I’m a pessimist, a defeatist, and a populist. I’m “to be ignored.” I’m a “no one” with a “platform” who has a “cult following.” I have made a quick career from an enemy of particle physics, to an enemy of physics, to an enemy of science. In extrapolation, by the end of next week I’ll be the anti-christ.

Now, look. I’m certainly not an angel. I have a temper. I lack patience. I’m “eye-for-eye” rather than “turn the other cheek”. I don’t always express myself as clearly as I should. I make mistakes, contradict myself, and don’t live up to my own expectations. I have regrets.

But I am also a simple person. You don’t need to dig deep to understand me. To first approximation, I mean what I say: We currently have no reason to think a next larger particle collider will do anything but confirm the existing theories. Particle physicists’ methods of theory-development have demonstrably failed for 40 years. The field is beset by hype and group-think. You cannot trust these people. It’s a problem and it’s in the way of progress.

It hurts, because they know that I know what I am talking about.

Thursday, I gave a colloquium at the University of Giessen. In Giessen, physics research is mostly nuclear physics and plasma physics. They don’t have anyone working in the fields I’m criticizing. Nevertheless, it transpired yesterday that following my Op-Ed some people at the department debated whether I am a “populist” who better not be given a “forum”.

For starters, that’s ridiculous – a physics colloq at the University of Giessen is not much of a forum. Also, I have been assured the department didn’t seriously consider uninviting me. Still, I am disturbed that scientists would try to shut me up rather than think about what I say.

I didn’t know anything about this, however, when I gave my talk. It was well attended, all seats taken, people standing in the back. It was my usual lecture, that is a brief summary of the arguments in my book. I got the usual questions. There is always someone who asks for an example of an ugly theory. There is always someone who asks what’s wrong with finding beauty in their research. There’s always someone who has a question that’s more of a comment, really.

Then, a clearly agitated young man raised his arm and mumbled something about a heated discussion that had taken place last week. This didn’t make sense to me until later, so I ignored it. He then explained he didn’t read my book, and didn’t find anything objectionable about my talk. Must have been some disappointment, I guess, to see I’m not Rumpelstiltskin. He said that “everyone here agrees” that those failed predictions and the hype surrounding them are a problem. But, he wailed, how could I possibly go and publicly declare that one cannot trust scientists?

You see, the issue they have isn’t that I say particle physics has a problem. Because that’s obvious to everyone who ever had anything to do with the field. The issue is that I publicly say it.

Why do I say it? Because it’s true. And because the public needs to know. And because I have given up hope they will change their ways just because what I say is right. You cannot reach them with reason. But you can reach them by drawing attention to how much money is going to waste because scientists refuse to have a hard look at themselves. Lots of money. Money that could be put to better use elsewhere.

Now they are afraid, and they feel betrayed. And that’s what you see in the responses.

The first mode of defense is denial. It goes like this: Particle physics is doing just fine, go away, nothing to see here. Please give us more money.

The second mode of defense is urging me to stay in line and, at the same time, warning everyone else to keep their mouth shut. Over at Orbiter Magazine, Marcelo Gleiser and some other HEP people (whom I don’t know) accuse me of “defeatism” and “sabotage” and express their grievances as follows:
“As a community, we must fight united for the expansion of all our fields of inquiry, working with the public and politicians to increase the research budget to accommodate different kinds of projects. While it is true that research budgets are often strained, our work is to convince society that what we do is worthwhile, even when it fails to deliver the big headlines.”
But no, just no. My job as a scientist is not to “convince society” that what other scientists do is worthwhile (regardless of headlines). My job is to look at the evidence and report what I find. The evidence says particle physicists’ methods for theory-development have not worked for four decades. Yet they continue using these methods. It’s bad science, it deserves to be called bad science, and I will continue to call it bad science until they stop doing it.

If I was a genius, I would be here telling you about my great new theory of everything. I don’t have one. I am a mediocre thinker. I just wish all those smart people would stop playing citation games and instead do their job so we would see some real progress. But I’m also a writer. Words are my weapons. And make no mistake, I’m not done.

Wednesday, January 30, 2019

Just because it’s falsifiable doesn’t mean it’s good science.

Flying carrot. 
Title says it all, really, but it’s such a common misunderstanding I want to expand on this for a bit.

A major reason we see so many wrong predictions in the foundations of physics – and see those make headlines – is that both scientists and science writers take falsifiability to be a sufficient criterion for good science.

Now, a scientific prediction must be falsifiable, all right. But falsifiability alone is not sufficient to make a prediction scientific. (And, no, Popper never said so.) Example: Tomorrow it will rain carrots. Totally falsifiable. Totally not scientific.

Why is it not scientific? Well, because it doesn’t live up to the current quality standard in olericulture, that is the study of vegetables. According to the standard model of root crops, carrots don’t grow on clouds.

What do we learn from this? (Besides that the study of vegetables is called “olericulture,” who knew.) We learn that to judge a prediction you must know why scientists think it’s a good prediction.

Why does it matter?

The other day I got an email from a science writer asking me to clarify a statement he had gotten from another physicist. That other physicist had explained a next larger particle collider, if built, would be able to falsify the predictions of certain dark matter models.

That is correct of course. A next larger collider would be able to falsify a huge amount of predictions. Indeed, if you count precisely, it would falsify infinitely many predictions. That’s more than even particle physicists can write papers about.

You may think that’s a truly remarkable achievement. But the question you should ask is: What reason did the physicist have to think that any of those predictions are good predictions? And when it comes to the discovery of dark matter with particle colliders, the answer currently is: There is no reason.

I cannot stress this often enough. There is not currently any reason to think a larger particle collider would produce fundamentally new particles or see any other new effects. There are loads of predictions, but none of those have good motivations. They are little better than carrot rain.

People not familiar with particle physics tend to be baffled by this, and I do not blame them. You would expect if scientists make predictions they have reasons to think it’ll actually happen. But that’s not the case in theory-development for physics beyond the standard model. To illustrate this, let me tell you how these predictions for new particles come into being.

The standard model of particle physics is an extremely precisely tested theory. You cannot just add particles to it as you want, because doing so quickly gets you into conflict with experiment. Neither, for that matter, can you just change something about the existing particles like, eg, postulating they are made up of smaller particles or such. Yes, particle physics is complicated.

There are however a few common techniques you can use to amend the standard model so that the deviations from it are not in the regime that we have measured yet. The most common way to do this is to make the new particles heavy (so that it takes a lot of energy to create them) or very weakly interacting (so that you produce them very rarely). The former is more common in particle physics, the latter more common in astrophysics.
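
The quantitative reason that making new particles heavy hides them is known as decoupling: at energies E well below the new particle’s mass M, its effects enter low-energy observables only through corrections suppressed by powers of the mass ratio, schematically

\delta \sim g^2 \left( E / M \right)^n \quad \text{with } n \geq 1

where g is the coupling. Pushing M up, or making g tiny, therefore pushes the deviations below current measurement precision, which is exactly what these amendments of the standard model exploit.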

There are of course a lot of other quality criteria that you need to fulfil. You need to formulate your theory in the currently used mathematical language, that is that of quantum field theories. You must demonstrate that your new theory is not in conflict with experiment already. You must make sure that your theory has no internal contradictions. Most importantly though, you must have a motivation for why your extension of the standard model is interesting.

You need this motivation because any such theory-extension is strictly speaking unnecessary. You do not need it to explain existing data. No, you do not need it to explain the observations normally attributed to dark matter either. Because to explain those you only need to assume an unspecified “fluid” and it doesn’t matter what that fluid is made of. To explain the existing data, all you need is the standard model of particle physics and the concordance model of cosmology.

The major motivation for new particles at higher energies, therefore, has for the past 20 years been an idea called “naturalness”. The standard model of particle physics is not “natural”. If you add more particles to it, you can make it “natural” again. Problem is that now the data say that the standard model is just not natural, period. So that motivation just evaporated. With that motivation gone, particle physicists don’t know what to do. Hence all the talk about confusion and crisis and so on.

Of course physicists who come up with new models will always claim that they have a good motivation, and it can be hard to follow their explanations. But it never hurts to ask. So please do ask. And don’t take “it’s falsifiable” as an answer.

There is more to be said about what it means for a theory to be “falsifiable” and how necessary that criterion really is, but that’s a different story and shall be told another time. Thanks for listening.



[I explain all this business with naturalness and inventing new particles that never show up in my book. I know you are sick of me mentioning this, but the reason I keep pointing it out is that I spent a lot of time making the statements in my book as useful and accurate as possible. I cannot make this effort with all my blogposts. So really I think you are better off reading the book.]

Tuesday, January 29, 2019

Book Update

Yesterday, after some back-and-forth with a German customs officer, my husband finally got hold of a parcel that had gone astray. It turned out to contain 5 copies of the audio version of my book “Lost in Math.” UNABRIDGED. Read by Laura Jennings.

7 discs. 8 hours, 43 minutes. Produced by Brilliance Audio.

I don’t need 5 copies of this. Come to think of it, I don’t even have a CD player. So, I decided, I’ll give away two copies. Yes, all for free! I’ll even pay the shipping fee on your behalf.

All you have to do is leave a comment below, and explain why you are interested in the book. The CD-sets will go to the first two such commenters by time stamp of submission. And, to say the obvious, I cannot send a parcel to a pseudonym, so if you are interested, you must be willing to provide a shipping address.

Ready, set, go.

Update: The books are gone!

Sunday, January 27, 2019

New Scientist creates a Crisis-Anticrisis Pair

A recent issue of New Scientist has an article about the crisis in the foundations of physics titled “We’ll die before we find the answers.”

The article, written by Dan Cossins, is a hilarious account of a visit to Perimeter Institute. Cossins does a great job capturing the current atmosphere in the field, which is one of confusion.

That the Large Hadron Collider so far hasn’t seen any fundamentally new particles besides the Higgs-boson is a big headache, getting bigger by the day. Most of the theorists who made the flawed predictions for new particles, eg supersymmetric partner particles, are now at a loss for what to do:
“Even those who forged the idea [of supersymmetry] are now calling into question the underlying assumption of “naturalness”, namely that the laws of nature ought to be plausible and coherent, rather than down to chance.

“We did learn something: we learned what is not the case,” says Savas Dimopoulos. A theorist at Stanford University in California, and one of the founders of supersymmetry, he happens to be visiting the Perimeter Institute while I am there. How do we judge what theories we should pursue? “Maybe we have to rethink our criteria,” he says.”
We meet Asimina Arvanitaki, the first woman to hold a research chair at the Perimeter Institute:
“There are people who think that, using just the power of our minds, we can understand what dark matter is, what quantum gravity is,” says Arvanitaki. “That’s not true. The only way forward is to have experiment and theory move in unison.”
Cossins also spoke with Niayesh Afshordi, who is somewhat further along in his analysis of the situation:
“For cosmologist Niayesh Afshordi, the thread that connects these botched embroideries is theorists’ tendency to devote themselves to whatever happens to be the trendiest idea. “It’s a bandwagon effect,” he says. He thinks it has a lot to do with the outsize influence of citations.

“The fight now is not to find a fundamental theory. It is to get noticed, and the easiest way to do that is get on board a bandwagon,” he says. “You get this feedback loop: the people who spend longer on the bandwagons get more citations, then more funding, and the cycle repeats.” For its critics, string theory is the ultimate expression of this.”
The article sounds eerily like an extract from my book. Except, I must admit, Cossins writes better than I do.

The star of the New Scientist article is Neil Turok, the current director of Perimeter Institute. Turok has been going around talking about “crisis” for some while and his cure for the crisis is… more Neil Turok. In a recent paper with Latham Boyle and Kieran Finn (PRL here), he proposed a new theory according to which our universe was created in a pair with an anti-universe.

I read this paper some while ago and didn’t find it all that terrible. At least it’s not obviously wrong, which is more than what can be said about some papers that make headlines. Though speculative, it’s a minimalistic idea that results in observable consequences. I was surprised it didn’t attract more media attention.

The calculation in the paper, however, has gaps, especially where the earliest phase of the universe is concerned. And if the researchers find ways to fill those gaps, I would be afraid, they may end up in the all-too-common situation where they can pretty much predict everything and anything. So I am not terribly excited. Then again, I rarely am.

Let me end with a personal recollection. Neil Turok became director of Perimeter Institute in 2008. At that time I was the postdoc representative and as such had a lunch meeting with the incoming director. I asked him about his future plans. Listening to Turok, it became clear to me quickly that his term would mean the end of Perimeter Institute’s potential to make a difference.

Turok’s vision, in brief, was to make the place an internationally renowned research institution that attracts prestigious researchers. This all sounds well and fine until you realize that “renowned” and “prestigious” are assessments made by the rest of the research community. Presently these terms pretty much mean productivity and popularity.

The way I expressed my concern to Turok back then was to point out that if you couple to the heat bath you will approach the same temperature. Yeah, I have learned since then to express myself somewhat more clearly. To rephrase this in normal-people speak, if you play by everybody else’s rules, you will make the same mistakes.

If you want to make a difference, you must be willing to accept that people ridicule you, criticize you, and shun you. Turok wasn’t prepared for any of this. It had not even crossed his mind.

Ten years on, I am afraid to say that what happened is exactly what I expected. Research at Perimeter Institute today is largely “more of the same.” Besides papers, not much has come out of it. But it surely sounds like they are having fun.

Tuesday, January 22, 2019

Particle physics may have reached the end of the line

Image: CERN
CERN’s press release of plans for a larger particle collider, which I wrote about last week, made international headlines. Unfortunately, most articles about the topic just repeat the press-release, and do not explain how much the situation in particle physics has changed with the LHC data.

Since the late 1960s, when physicists hit on the “particle zoo” at nuclear energies, they always had a good reason to build a larger collider. That’s because their theories of elementary matter were incomplete. But now, with the Higgs-boson found in 2012, their theory – the “standard model of particle physics” – is complete. It’s done. There’s nothing missing. All Pokemon caught.

The Higgs was the last good prediction that particle physicists had. This prediction dates back to the 1960s and it was based on sound mathematics. In contrast to this, the current predictions for new particles at a larger collider – eg supersymmetric partner particles or dark matter particles – are not based on sound mathematics. These predictions are based on what is called an “argument from naturalness” and those arguments are little more than wishful thinking dressed in equations.

I have laid out my reasoning for why those predictions are no good in great detail in my book (and also in this little paper). But it does not matter whether you believe (or even understand) my arguments, you only have to look at the data to see that particle physicists’ predictions for physics beyond the standard model have, in fact, not worked for more than 30 years.

Fact is, particle physicists have predicted dark matter particles since the mid-1980s. None of those have been seen.

Fact is, particle physicists predicted grand unified theories starting also in the 1980s. To the extent that those can be ruled out, they have been ruled out.

Fact is, they predicted that supersymmetric particles and/or large additional dimensions of space should become observable at the LHC. According to those predictions, this should have happened already. It did not.

The important thing is now that those demonstrably flawed methods were the only reason to think the LHC should discover something fundamentally new besides the Higgs. With this method of prediction not working, there is now no reason to think that the LHC in its upcoming runs, or a next larger collider, will see anything besides the physics predicted by the already known theories.

Of course it may happen. I am not saying that I know a larger collider will not find something new. It is possible that we get lucky. I am simply saying that we currently have no prediction that indicates a larger collider would lead to a breakthrough. The standard model may well be it.

This situation is unprecedented in particle physics. The only reliable prediction we currently have for physics beyond the standard model is that we should eventually see effects of quantum gravity. But for that we would have to reach energies 13 orders of magnitude higher than what even the next collider would deliver. It’s way out of reach.

The only thing we can reliably say a next larger collider will do is measure more precisely the properties of the already known fundamental particles. That it may tell us something about dark matter, or dark energy, or the matter-antimatter asymmetry is a hope, not a prediction.

Particle physicists had a good case to build the LHC with the prediction of the Higgs-boson. But with the Higgs found, the next larger collider has no good motivation. The year is 2019, not 1999.

Letter from a reader: “Someone has to write such a book” we used to say

Dear Sabine,

congratulations on your book. I read it this summer and enjoyed it very much. For people like me, working in solid state physics, the issues you addressed were a recurrent subject to talk over at lunch over the last decade. “Someone has to write such a book,” we used to say; it necessarily had to be someone from inside this community. I am glad that you did it.

I came to your book through the nice review published in Nature. I was disappointed with the one I read later in Science; also with the recent one in Physics Today by Wilczek (“...and beautiful ideas from information theory are illuminating physical algorithms and quantum network design”, does it make sense to anyone?!). To be honest, he should list all the beautiful ideas developed, and then the brief list of the ones that got agreement with experiment. This would be a scientific approach to test if such a statement makes sense, would you agree?

I send you a comment from Philip Anderson on string theory, I don’t think you mention it in your book but I guess you heard of it.

Best regards,

Daniel

---------------------------------------------------
Prof. Daniel Farias
Dept. of Condensed Matter Physics (Dpto. de Física de la Materia Condensada)
Universidad Autónoma de Madrid
Phone: +34 91 497 5550
----------------------------------------------------

[The mentioned comment is Philip Anderson’s response to the 2005 EDGE annual question, “What do you believe is true even though you cannot prove it?”, which I append below for your amusement.]

Is string theory a futile exercise as physics, as I believe it to be? It is an interesting mathematical specialty and has produced and will produce mathematics useful in other contexts, but it seems no more vital as mathematics than other areas of very abstract or specialized math, and doesn't on that basis justify the incredible amount of effort expended on it.

My belief is based on the fact that string theory is the first science in hundreds of years to be pursued in pre-Baconian fashion, without any adequate experimental guidance. It proposes that Nature is the way we would like it to be rather than the way we see it to be; and it is improbable that Nature thinks the same way we do.

The sad thing is that, as several young would-be theorists have explained to me, it is so highly developed that it is a full-time job just to keep up with it. That means that other avenues are not being explored by the bright, imaginative young people, and that alternative career paths are blocked.

Wednesday, January 16, 2019

Particle physicists want money for bigger collider

Illustration of particle collision.
[screen shot from this video]
The Large Hadron Collider (LHC) at CERN is currently the world’s largest particle collider. But in a decade its days will come to an end. Particle physicists are now making plans for the future. Yesterday, CERN issued a press-release about a design study for their plans for a machine called the Future Circular Collider (FCC).

There are various design options for the FCC. Costs start at €9 billion for the least expensive version, going up to €21 billion for the big vision. The idea is to dig a longer ring-tunnel, in which electrons would first be brought to collision with positrons at energies from 91 to 365 GeV. The operation energies are chosen to enable more detailed studies of specific particles than the LHC allows. This machine would later be upgraded for proton-proton collisions at higher energies, reaching up to 100 TeV (or 100,000 GeV). In comparison, the LHC’s maximum design energy is 14 TeV.

€9 billion is a lot of money and given what we presently know, I don’t think it’s worth it. It is possible that if we reach higher energies, we will find new particles, but we do not currently have any good reason to think this will happen. Of course if the LHC finds something after all, the situation will entirely change and everyone will rush to build the next collider. But without that, the only thing we know that a larger collider will reliably do is measure in greater detail the properties of the already-known particles.

The design-reports acknowledge this but obfuscate the point. The opening statement, for example, says:
“[Several] experimental facts do require the extension of the Standard Model and explanations are needed for observations such as the abundance of matter over antimatter, the striking evidence for dark matter and the non-zero neutrino masses. Theoretical issues such as the hierarchy problem, and, more in general, the dynamical origin of the Higgs mechanism, do point to the existence of physics beyond the Standard Model.” (original emphasis)
The accompanying video similarly speaks vaguely of “big questions”, something to do with 95% of the universe (referring to dark matter and dark energy), and gives the impression that a larger collider would tell us something interesting about that.


It is correct that the standard model requires extension, but there is no reason that the new physical effects, like particles making up dark matter, must be accessible at the next larger collider. Indeed, the currently most reliable predictions put any new physics at energies roughly 14 orders of magnitude above what even the next collider would deliver, well out of the reach of any collider we’ll be able to build in the coming centuries. This is noted later in the report, where you can read: “Today high energy physics lacks unambiguous and guaranteed discovery targets.”

The report uses some highly specific examples of hypothetical particles that can be ruled out, such as certain WIMP candidates or supersymmetric particles. Again, that’s correct. But there is no good argument for why those particular particles should be the right ones. Physicists have no end of conjectured new particles. You’d end up ruling out a few among millions of models and make little progress, just like with the LHC and the earlier colliders.

We are further offered the usual arguments that investing in a science project of this size would benefit the technological industry, education, and scientific networks. This is all true, but not specific to particle colliders. Any large-scale experiment would have such benefits. I do not find such arguments remotely convincing.

Another reason I am not excited about the current plans for a larger collider is that we might get more bang for the buck if we waited for better technologies. There is, eg, plasma wakefield acceleration, which is currently in a testing phase and may become a more efficient route to progress. Also, maybe high temperature superconductors will reach a level where they become usable for the magnets. Both of these technologies may become available in a decade or two, but they are not presently developed far enough to be used for the next collider.

Therefore, investment-wise, it would make more sense to put particle physics on pause and reconsider it in, say, 20 years to see whether the situation has changed, either because new technologies have become available or because more concrete predictions for new physics have been made.

At present, other large-scale experiments would more reliably offer new insights into the foundations of physics. Anything that peers back into the early universe, such as big radio telescopes, for example, or anything that probes the properties of dark matter. There are also medium and small-scale experiments that tend to fall off the table if big collaborations eat up the bulk of money and attention. And that’s leaving aside that we might be better off investing in other areas of science entirely.

Of course a blog post cannot replace a detailed cost-benefit assessment, so I cannot tell you what’s the best thing to invest in. I can, however, tell you that a bigger particle collider is one of the most expensive experiments you can think of, and we do not currently have a reason to think it would discover anything new. Ie, large cost, little benefit. That much is pretty clear.

I think the Chinese are not dumb enough to build the next bigger collider. If they do, they might end up being the first nation ever to run and operate such a costly machine without finding anything new. It’s not how they hope to enter history books. So, I consider it unlikely they will go for it.

What the Europeans will do is harder to predict, because a lot depends on who has influential friends in which ministry. But I think particle physicists have dug their own grave by giving the public the impression that the LHC would answer some big question, and then not being able to deliver.

Sunday, January 13, 2019

Good Problems in the Foundations of Physics

img src: openclipart.org
Look at the history of physics, and you will find that breakthroughs come in two different types. Either observations run into conflict with predictions and a new theory must be developed. Or physicists solve a theoretical problem, resulting in new predictions which are then confirmed by experiment. In both cases, problems that give rise to breakthroughs are inconsistencies: Either theory does not agree with data (experiment-led), or the theories have internal disagreements that require resolution (theory-led).

We can classify the most notable breakthroughs this way: Electric and magnetic fields (experiment-led), electromagnetic waves (theory-led), special relativity (theory-led), quantum mechanics (experiment-led), general relativity (theory-led), the Dirac equation (theory-led), the weak nuclear force (experiment-led), the quark-model (experiment-led), electro-weak unification (theory-led), the Higgs-boson (theory-led).

That’s an oversimplification, of course, and leaves aside the myriad twists and turns and personal tragedies that make scientific history interesting. But it captures the essence.

Unfortunately, in the past decades it has become fashionable among physicists to present the theory-led breakthroughs as a success of beautiful mathematics.

Now, it is certainly correct that in some cases the theorists making such breakthroughs were inspired by math they considered beautiful. This is well-documented, eg, for both Dirac and Einstein. However, as I lay out in my book, arguments from beauty have not always been successful. They worked in cases when the underlying problem was one of consistency. They failed in other cases. As the philosopher Radin Dardashti put it aptly, scientists sometimes work on the right problem for the wrong reason.

That breakthrough problems were those which harbored an inconsistency is true even for the often-told story of the prediction of the charm quark. The charm quark, so they will tell you, was a prediction based on naturalness, which is an argument from beauty. However, we also know that the theories which particle physicists used at the time were not renormalizable and therefore would break down at some energy. Once electro-weak unification removes this problem, the requirement of gauge-anomaly cancellation tells you that a fourth quark is necessary. But this isn’t a prediction based on beauty. It’s a prediction based on consistency.

This, I must emphasize, is not what historically happened. Weinberg’s theory of the electro-weak unification came after the prediction of the charm quark. But in hindsight we can see that the reason this prediction worked was that it was indeed a problem of consistency. Physicists worked on the right problem, if for the wrong reasons.
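
To sketch the consistency argument in its simplest form (a sketch only; the full set of anomaly-cancellation conditions involves several combinations of gauge groups): the gauge anomalies cancel generation by generation only if the electric charges of the left-handed leptons and quarks balance,

\[ \sum_{\text{leptons}} Q \,+\, N_c \sum_{\text{quarks}} Q \,=\, (0 - 1) + 3\left(\tfrac{2}{3} - \tfrac{1}{3}\right) \,=\, 0 , \]

so once the muon and its neutrino are in the theory, a complete second quark doublet, strange plus charm, is required.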

What can we learn from this?

Well, one thing we learn is that if you rely on beauty you may get lucky. Sometimes it works. Feyerabend, I think, had it basically right when he argued “anything goes.” Or, as the late German chancellor Kohl put it, “What matters is what comes out in the end.”

But we also see that if you happen to insist on the wrong ideal of beauty, you will not make it into history books. Worse, since our conception of what counts as a beautiful theory is based on what worked in the past, it may actually get in the way if a breakthrough requires new notions of beauty.

The more useful lesson to learn, therefore, is that the big theory-led breakthroughs could have been based on sound mathematical arguments, even if in practice they came about by trial and error.

The “anything goes” approach is fine if you can test a large number of hypotheses and then continue with the ones that work. But in the foundations of physics we can no longer afford “anything goes”. Experiments are now so expensive and take such a long time to build that we have to be very careful when deciding which theories to test. And if we take a clue from history, then the most promising route to progress is to focus on problems that are either inconsistencies with data or internal inconsistencies of the theories.

At least that’s my conclusion.

It is far from my intention to tell anyone what to do. Indeed, if there is any message I tried to get across in my book it’s that I wish physicists would think more for themselves and listen less to their colleagues.

Having said this, I have gotten a lot of emails from students asking me for advice, and I recall how difficult it was for me as a student to make sense of the recent research trends. For this reason I append below my assessment of some of the currently most popular problems in the foundations of physics. Not because I want you to listen to me, but because I hope that the arguments I offer will help you come to your own conclusions.

(You find more details and references on all of this in my book.)



Dark Matter
Is an inconsistency between theory and experiment and therefore a good problem. (The issue with dark matter isn’t whether it’s a good problem or not, but when to consider the problem solved.)

Dark Energy
There are different aspects of this problem, some of which are good problems, others not. The question why the cosmological constant is small compared to (powers of) the Planck mass is not a good problem because there is nothing wrong with just choosing it to be a certain constant. The question why the cosmological constant is presently comparable to the density of dark matter is likewise a bad problem because it isn’t associated with any inconsistency. On the other hand, the absence of observable fluctuations around the vacuum energy (what Afshordi calls the “cosmological non-constant problem”) and the question why the zero-point energy gravitates in atoms but not in the vacuum (details here) are good problems.
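
For orientation, the mismatch usually quoted for the first of these questions is (a back-of-the-envelope number, comparing the observed dark-energy density with the fourth power of the Planck mass):

\[ \frac{\rho_\Lambda}{M_{\text{Pl}}^4} \approx \left(\frac{2.3\times 10^{-3}\,\text{eV}}{1.2\times 10^{28}\,\text{eV}}\right)^{4} \approx 10^{-122}. \]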

The Hierarchy Problem
The hierarchy problem is the big difference between the strength of gravity and the other forces in the standard model. There is nothing contradictory about this, hence it is not a good problem.
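
To put a rough number on that difference (one standard way to quantify it, comparing the electroweak scale, as set by the Higgs mass, with the Planck scale):

\[ \frac{m_H^2}{M_{\text{Pl}}^2} \approx \left(\frac{125\,\text{GeV}}{1.2\times 10^{19}\,\text{GeV}}\right)^{2} \approx 10^{-34}. \]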

Grand Unification
A lot of physicists would rather have one unified force in the standard model than three different ones. There is, however, nothing wrong with the three different forces. I am undecided as to whether the almost-prediction of the Weinberg-angle from breaking a large symmetry group does or does not require an explanation.
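
The almost-prediction referred to is, roughly, the following (a sketch for the textbook case of SU(5) unification): the group structure fixes the weak mixing angle at the unification scale to

\[ \sin^2\theta_W = \tfrac{3}{8}, \]

and the renormalization-group running brings this down to the vicinity of the measured low-energy value of about 0.23, how close depending on the assumed particle content.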

Quantum Gravity
Quantum gravity removes an inconsistency and is hence a solution to a good problem. However, I must add that there may be other ways to resolve the problem besides quantizing gravity.

Black Hole Information Loss
A good problem in principle. Unfortunately, there are many different ways to fix the problem and no way to experimentally distinguish between them. So while it’s a good problem, I don’t consider it a promising research direction.

Particle Masses
It would be nice to have a way to derive the masses of the particles in the standard model from a theory with fewer parameters, but there is nothing wrong with these masses just being what they are. Thus, not a good problem.

Quantum Field Theory
There are various problems with quantum field theories for which we lack a good understanding of how the theory works and which require a solution. The UV Landau pole in the standard model is one of them. It must be resolved somehow, but just exactly how is not clear. We also do not have a good understanding of the non-perturbative formulation of the theory, and the infrared behavior turns out to be not as well understood as we thought only a few years ago (see eg here).
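
To recall what a Landau pole is (a generic one-loop sketch; in the standard model it is the U(1) hypercharge coupling that behaves this way):

\[ \frac{1}{\alpha(\mu)} \,=\, \frac{1}{\alpha(\mu_0)} \,-\, \frac{b}{2\pi}\,\ln\frac{\mu}{\mu_0}, \]

so for a positive coefficient \(b\) the inverse coupling reaches zero, and the coupling formally diverges, at the finite energy \(\mu_{\text{Landau}} = \mu_0\, e^{2\pi/(b\,\alpha(\mu_0))}\).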

The Measurement Problem
The measurement problem in quantum mechanics is typically thought of as a problem of interpretation and then left to philosophers to discuss. I think that’s a mistake; it is an actual inconsistency. The inconsistency comes from the need to postulate the behavior of macroscopic objects when that behavior should instead follow from the theory of the constituents. The measurement postulate, hence, is inconsistent with reductionism.

The Flatness Problem
Is an argument from finetuning and not well-defined without a probability distribution. There is nothing wrong with the (initial value of the) curvature density just being what it is. Thus, not a good problem.
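
For reference, the finetuning is usually phrased via the Friedmann equation (a sketch, with k the spatial curvature constant and a the scale factor):

\[ |\Omega(t) - 1| \,=\, \frac{|k|}{a^2 H^2}, \]

and since \(a^2H^2\) decreases during radiation and matter domination, any early deviation from \(\Omega = 1\) grows with time, so closeness to flatness today requires an initial value extremely close to 1.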

The Monopole Problem
That’s the question why we haven’t seen magnetic monopoles. It is quite plausibly solved by them not existing. Also not a good problem.

Baryon Asymmetry and The Horizon Problem
These are both finetuning problems that rely on the choice of an initial condition which is considered to be unlikely. However, there is no way to quantify how likely the initial condition is, so the problem is not well-defined.

There is, furthermore, always a variety of anomalies where data disagrees with theory. Those can linger at low significance for a long time and it’s difficult to decide how seriously to take them. For those I can only give you the general advice that you listen to experimentalists (preferably some who are not working on the experiment in question) before you listen to theorists. Experimentalists often have an intuition for how seriously to take a result. That intuition, however, usually doesn’t make it into publications because it’s impossible to quantify. Measures of statistical significance don’t always tell the full story.

Saturday, January 12, 2019

Book Review: “Quantum Space” by Jim Baggott

Quantum Space
Loop Quantum Gravity and the Search for the Structure of Space, Time, and the Universe
By Jim Baggott
Oxford University Press (January 22, 2019)


In his new book Quantum Space, Jim Baggott presents Loop Quantum Gravity (LQG) as the overlooked competitor of String Theory. He uses a chronological narrative that follows the lives of Lee Smolin and Carlo Rovelli. The book begins with their nascent interest in quantum gravity, continues with their formal education, their later collaboration, and, in the final chapters, follows them as their ways separate. Along with the personal stories, Baggott introduces background knowledge and lays out the scientific work.

Quantum Space is structured into three parts. The first part covers the basics necessary to understand the key ideas of Loop Quantum Gravity. Here, Baggott goes through the essentials of special relativity, general relativity, quantum mechanics, quantum field theory, and the standard model of particle physics. The second part lays out the development of Loop Quantum Gravity and the main features of the theory, notably the emphasis on background independence.

The third part is about recent applications, such as the graviton propagator, the calculation of black hole entropy, and the removal of the big bang singularity. You also find there Sundance Bilson-Thompson’s idea that elementary particles are braids in the spin-foam. In this last part, Baggott further includes Rovelli’s and Smolin’s ideas about the foundations of quantum mechanics, as well as Rovelli and Vidotto’s Planck Stars, and Smolin’s ideas about the reality of time and cosmological natural selection.

The book’s epilogue is an extended Q&A with Smolin and Rovelli and ends with mentioning the connections between String Theory and Loop Quantum Gravity (which I wrote about here).

Baggott writes very well and he expresses himself clearly, aided by about two dozen figures and a glossary. The book, however, requires some tolerance for technical terminology. While Baggott does an admirable job explaining advanced physics – such as Wilson loops, parallel transport, spin-foams, and renormalizability – and does not shy away from complex topics – such as the fermion doubling problem, the Wheeler-DeWitt equation, Shannon entropy, or extremal black holes – a reader without prior knowledge in the field may find it tough going.

We know from Baggott’s 2013 book “Farewell to Reality” that he is not fond of String Theory, and in Quantum Space, too, he does not hold back with criticism. On Edward Witten’s conjecture of M-theory, for example, he writes:
“This was a conjecture, not a theory…. But this was, nevertheless, more than enough to set the superstring bandwagon rolling even faster.”
In Quantum Space, Baggott also reprints Smolin’s diagnosis of the String Theory community, which attributes to string theorists “tremendous self-confidence,” “group think,” “confirmation bias,” and “a complete disregard and disinterest in the opinions of anyone outside the group.”

Baggott further claims that the absence of new particles at the Large Hadron Collider is bad news for string theory*:
“Some argue that string theory is the tighter, more mathematically rigorous and consistent, better-defined structure. But a good deal of this consistency appears to have been imported through the assumption of supersymmetry, and with each survey published by the ATLAS or CMS detector collaborations at CERN’s Large Hadron Collider, the scientific case for supersymmetry weakens some more.”
I’d have some other minor points to quibble with, but given the enormous breadth of topics covered, I think Baggott’s blunders are few and far between.

I must admit, however, that the structure of the book did not make a lot of sense to me. Baggott introduces a lot of topics that he later does not need and whose relevance to LQG escapes me. For example, he goes on about the standard model and the Higgs-mechanism in particular, but that doesn’t play any role later. He also spends quite some time on the interpretation of quantum mechanics, which isn’t actually necessary to understand Loop Quantum Gravity. I also don’t see what Lee Smolin’s cosmological natural selection has to do with anything. But these are stylistic issues.

The bigger issue I have with the book is that Baggott is as uncritical of Loop Quantum Gravity as he is critical of String Theory. There is no mention in the book of the problem of recovering local Lorentz-invariance, an issue that has greatly bothered both Joe Polchinski and Jorge Pullin. Baggott presents Loop Quantum Cosmology (the LQG-based approach to understanding the early universe) as testable but forgets to note that the predictions depend on an adjustable parameter, and that it would be extremely hard to tell the LQG-based models apart from other models. Nor does he, for balance, mention String Cosmology. He does not mention the problem with the supposed derivation of the black hole entropy by Bianchi and he does not mention the problems with Planck stars.

And if he had done a little digging, he’d have noticed that the group-think in LQG is as bad as it is in string theory.

In summary, Quantum Space is an excellent, non-technical introduction to Loop Quantum Gravity that is chock-full of knowledge. It will, however, give you a rather uncritical view of the field.

[Disclaimer: Free review copy.]


* I explained here why the non-discovery of supersymmetric particles at the LHC has no relevance for string theory.