Wednesday, February 27, 2019

Book review: “Breakfast with Einstein” by Chad Orzel

Breakfast with Einstein: The Exotic Physics of Everyday Objects
By Chad Orzel
BenBella Books (December 11, 2018)

Physics is everywhere: that is the message of Chad Orzel’s new book “Breakfast with Einstein,” and he delivers it masterfully. In the serenity of an early morning, Orzel uses everyday examples to reveal the omnipresence of physics: The sunrise becomes an occasion to introduce nuclear physics, the beeping of an alarm a pathway to atomic clocks, and a toaster an opening for a discussion of zero-point energy.

Much of the book’s content is home turf for Orzel, whose research specializes in atomic, molecular, and optical physics. It shows: “Breakfast with Einstein” is full of applied science, from fibre-optic cables to semiconductors, data storage, lasers, smoke detectors, and tunneling microscopes. Orzel not only knows what he writes about, he also knows what this knowledge is good for, and the reader benefits.

As in his earlier books, Orzel’s explanations are easy to follow without glossing over details. The illustrations aid the text, and his writing style is characteristically straightforward. While he does give the reader some historical context, Orzel keeps people-facts to an absolute minimum, and instead sticks with the science.

In contrast to many recent books about physics, Orzel stays away from speculation, and focuses instead on the many remarkable achievements that the past century of physics has led to.

When it comes to popular science books, this one is as flawless as it gets. “Breakfast with Einstein,” I believe, will be understandable to anyone with an interest in the subject. I can warmly recommend it.

[Disclaimer: Free review copy]

Sunday, February 24, 2019

Away Note

I am in London for a few days and comments are likely to pile up in the queue more than usual. Pls play nicely while I'm away.

Saturday, February 23, 2019

Gian-Francesco Giudice On Future High-Energy Colliders

Gian-Francesco Giudice
[Image: Wikipedia]
Gian-Francesco Giudice is a particle physicist and currently Head of the Theoretical Physics Department at CERN. He is one of the people I interviewed for my book. This week, Giudice had a new paper on the arXiv titled “On Future High-Energy Colliders.” It appeared in the category “History and Philosophy of Physics” but contains little history and no philosophy. It is really an opinion piece.

The article begins with Giudice stating that “the most remarkable result [of the LHC measurements] was the discovery of a completely new type of force.” By this he means that the interaction with the Higgs-boson amounts to a force, and therefore the discovery of the Higgs can be interpreted as the discovery of a new force.

That the exchange of Higgs-bosons gives rise to a force is technically correct, but this terminology creates a risk of misunderstanding, so please allow me to clarify. In common terminology, the standard model describes three fundamental forces (stemming from the three gauge-symmetries): The electromagnetic force, the strong nuclear force, and the weak nuclear force. The LHC results have not required physicists to rethink this. The force associated with the Higgs-boson is not normally counted among the fundamental forces.

One can debate whether or not this is a new type of force. Higgs-like phenomena have been observed in condensed-matter physics for a long time. In any case, rebranding the Higgs as a force doesn’t change the fact that it was predicted in the 1960s and was the last missing piece in the standard model.

Giudice then lists reasons why particle physicists want to further explore high energy regimes. Let me go through these quickly to explain why they are bad motivations for a next larger collider (for more details see also my earlier post about good and bad problems):
  • “the pattern of quark and lepton masses and mixings”

    There is no reason to think a larger particle collider will tell us anything new about this. There isn’t even a reason to think those patterns have any deeper explanation.
  • “the dynamics generating neutrino masses”

    The neutrino-masses are either of Majorana-type, which you test for with other experiments (looking for neutrino-less double-beta decay), or they are of Dirac-type, in which case there is no reason to think the (so-far missing) right-handed neutrinos have masses in the range accessible by the next larger collider.
  • “Higgs naturalness”

    Arguments from naturalness were the reason why so many physicists believed the LHC should have seen fundamentally new particles besides the Higgs already (see here for references). Those predictions were all wrong. It’s about time that particle physicists learn from their mistakes.
  • “the origin of symmetry breaking dynamics”

    I am not sure what this refers to. If you know, pls leave a note in the comments.
  •  “the stability of the Higgs potential”

    A next larger collider would tell us more about the Higgs potential. But the question whether the potential is stable cannot be answered by this collider because the answer also depends on what happens at even higher energies.
  • “unification of forces, quantum gravity”

    Expected to become relevant at energies far exceeding those of the next larger collider.
  • “cosmological constant”

    Relevant at long distances, and not something that high-energy colliders test.
  • “the nature and origin of dark matter, dark energy, cosmic baryon asymmetry, inflation”

    No reason to think that a next larger collider will tell us anything about this.
Giudice then goes on to argue that “the non-discovery of expected results can be as effective as the discovery of unexpected results in igniting momentous paradigm changes.” In support of this he refers to the Michelson-Morley experiment.

The Michelson-Morley experiment, however, is an unfortunate example to enlist in favor of a larger collider. To begin with, it is somewhat disputed among historians how relevant the Michelson-Morley experiment really was for Einstein’s formulation of Special Relativity, since you can derive his theory from Maxwell’s equations. More interesting for the case of building a larger collider, though, is to look at what happened after the null-result of Michelson and Morley.

What happened is that for some decades experimentalists built larger and larger interferometers looking for the aether, not finding any evidence for it. These experiments eventually grew too large and this line of research was discontinued. Then the Second World War intervened, and for a while scientific exploration stalled.

In the 1950s, due to rapid technological improvements, interferometers could be dramatically shrunk back in size and the search for the aether continued with smaller devices. Indeed, Michelson-Morley-like experiments are still made today. But the best constraints on deviations from Einstein’s theory now come from entirely different observations, notably from particles traveling over cosmologically long distances. The aether, needless to say, hasn’t been found.

There are two lessons to take away from this: (a) When experiments became too large and costly they paused until technological progress improved the return on investment. (b) Advances in entirely different research directions enabled better tests.

Back to high energy particle physics. There hasn’t been much progress in collider technology for decades. For this reason physicists still try to increase collision energies by digging longer tunnels. The costs for a next larger collider now exceed $10 billion. We have no reason to think that this collider will tell us anything besides measuring details of the standard model to higher precision. This line of research should be discontinued until it becomes more cost-efficient again.

Giudice ends his essay by arguing that particle colliders are somehow exceptionally great experiments and therefore must be continued. He writes “No other instrument or research programme can replace high-energy colliders in the search for the fundamental laws governing the universe.”

But look at the facts: The best constraints on grand unified theories come from searches for proton decay. Such searches entail closely monitoring large tanks of water. These are not high-energy experiments. You could maybe call them “large volume experiments”. Likewise, the tightest constraints on physics at high energies currently come from the ACME measurement of the electron’s electric dipole moment. This is a high-precision measurement at low energies. And our current best shot at finding evidence for quantum gravity comes from massive quantum oscillators. Again, that is not high energy physics.

Building larger colliders is not the only way forward in the foundations of physics. Particle physicists only seem to be able to think of reasons for a next larger particle collider and not of reasons against it. This is not a good way to evaluate the potential of such a large financial investment.

Thursday, February 21, 2019

Burton Richter on the Future of Particle Physics

Burton Richter.
[Image: NobelPrize.org]
The 1976 Nobel Prize was jointly awarded to Burton Richter and Samuel Ting for the discovery of the J/Psi particle. Sounds like yet-another-particle, I know, but this morsel of matter was a big step forward in the development of the standard model. Richter, sadly, passed away last summer.

Coincidentally, I recently came across a chapter Richter wrote in 2014 to introduce Volume 7 of the “Reviews of Accelerator Science and Technology.” It is titled “High Energy Colliding Beams; What Is Their Future?” and you can read the whole thing here. For your convenience, I quote below some parts that are relevant to the current discussion of whether or not to build a next larger particle collider, specifically the 100 TeV pp-collider (FCC) planned by CERN.

Some of Burton’s remarks are rather specific, such as the required detector efficiency and the precision needed to measure the Higgs’ branching ratios. But he also comments on the problem that particle colliders deliver a diminishing return on investment:
“When I was much younger I was a fan of science fiction books. I have never forgotten the start of one, though I don’t remember the name of the book or its author. It began by saying that high-energy physics’ and optical astronomy’s instruments had gotten so expensive that the fields were no longer funded. That is something that we need to think about. Once before we were confronted with a cost curve that said we could never afford to go to very high energy, and colliding beams were invented and saved us from the fate given in my science fiction book. We really need to worry about that once more.

“If the cost of the next-generation proton collider is really linear with energy, I doubt that a 100-TeV machine will ever be funded, and the science fiction story of my youth will be the real story of our field [...]”
He points towards the lack of technological breakthroughs in accelerator design, which is the major reason why the current method of choice for higher collision energies is still digging longer tunnels. As I mentioned in my recent blogpost, there are two promising technologies on the horizon which could advance particle colliders: high-temperature superconductors and plasma wakefield acceleration. But neither of those is likely to become available within the next two decades.

On the superconductors, Burton writes:
“I see no well-focused R&D program looking to make the next generation of proton colliders more cost effective. I do not understand why there is as yet no program underway to try to develop lower cost, high-Tc superconducting magnets [...]”
About plasma wakefield acceleration he is both optimistic and pessimistic. Optimistic because amazing achievements have been made in this research program already, and pessimistic because he doesn’t see “a push to develop these technologies for use in real machines.”

Burton also comments on the increasingly troublesome disconnect between theorists and experimentalists in his community:
“A large fraction of the 100 TeV talk (wishes?) comes from the theoretical community which is disappointed at only finding the Higgs boson at LHC and is looking for something that will be real evidence for what is actually beyond the standard model [...]

“The usual back and forth between theory and experiment; sometimes one leading, sometimes the other leading; has stalled. The experiments and theory of the 1960s and 1970s gave us today’s Standard Model that I characterized earlier as a beautiful manuscript with some unfortunate Post-it notes stuck here and there with unanswered questions written on them. The last 40 years of effort has not removed even one of those Post-it notes. The accelerator builders and the experimenters have built ever bigger machines and detectors, while the theorists have kept inventing extensions to the model.

“There is a problem here that is new, caused by the ever-increasing mathematical complexity of today’s theory. When I received my PhD in the 1950s it was possible for an experimenter to know enough theory to do her/his own calculations and to understand much of what the theorists were doing, thereby being able to choose what was most important to work on. Today it is nearly impossible for an experimenter to do what many of yesterday’s experimenters could do, build apparatus while doing their own calculations on the significance of what they were working on. Nonetheless, it is necessary for experimenters and accelerator physicists to have some understanding of where theory is, and where it is going. Not to do so makes most of us nothing but technicians for the theorists.”
Indeed, I have wondered about this, whether experimentalists even understand what is going on in theory-development. My impression has been that most of them regard the ideas of theorists with a mix of agnosticism and skepticism. They believe it doesn’t matter, so they never looked much into the theorists’ reasoning for why the LHC should see some fundamentally new physics besides the Higgs-boson. But of course it does matter, as Burton points out, to understand the significance of what they are working on.

Burton also was no fan of naturalness, which he called an “empty concept” and his judgement of current theory-development in high energy particle physics was harsh: “Simply put, much of what currently passes as the most advanced theory looks to be more theological speculation, the development of models with no testable consequences, than it is the development of practical knowledge.”

A wise man; gone too soon.

Monday, February 18, 2019

Never again confuse Dark Matter with Dark Energy

Barnard 68 is a molecular cloud
that absorbs light. It is dark and made
of matter, but not made of dark matter.
[Image: Wikipedia]
Dark Matter 

Dark Matter is, as the name says, matter. But “matter” is not just physicists’ way of saying “stuff,” it’s a technical term. Matter has a specific behavior, which is that its energy-density dilutes with the inverse volume. The energy-density of radiation, in contrast, dilutes faster than the inverse volume, because the wavelengths of the radiation also stretch.

Generally, anything that has a non-negligible pressure will not behave in this particular way. Cosmologists therefore also say dark matter is “a pressureless fluid.” And, since I know it’s confusing, let me remind you that a fluid isn’t the same as a liquid, and gases can be fluids, so sometimes they may speak about “pressureless gas” instead.

In contrast to what the name says, though, Dark Matter isn’t dark. “Dark” suggests that it absorbs light, but really it doesn’t react with light at all. It would be better to call it transparent. Light just goes through. And in return, Dark Matter just goes through all normal matter, including planet Earth and you and me. Dark Matter interacts even less often than the already elusive neutrinos.

Dark matter is what makes galaxies rotate faster and helps galactic structure formation to get started.

Dark Energy

Dark Energy, too, is transparent rather than dark. But its name is even more misleading than that of Dark Matter because Dark Energy isn’t energy either. Instead, if you divide it by Newton’s constant, you get an energy density. In contrast to Dark Matter, however, this energy-density does not dilute with the inverse volume. Instead, it doesn’t dilute at all if the volume increases, at least not noticeably.

If the energy density remains entirely constant with the increase of volume, it’s called the “cosmological constant.” General types of Dark Energy can have a density that changes with time (or location), but we currently do not have any evidence that this is the case. The cosmological constant, for now, does just fine to explain observations.

Dark Energy is what makes the expansion of the universe speed up.
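For readers who want the bookkeeping spelled out, here is the textbook relation behind the above statements (standard cosmology, not tied to any particular model of dark matter or dark energy): a fluid with constant equation-of-state parameter w = p/\rho has an energy density that scales with the scale factor a of the universe as

\rho(a) \propto a^{-3(1+w)}.

Matter, with w = 0, dilutes as a^{-3}, that is, with the inverse volume. Radiation, with w = 1/3, dilutes as a^{-4}, the extra power coming from the stretching of wavelengths. A cosmological constant, with w = -1, does not dilute at all.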

Are Dark Matter and Dark Energy the same?

Dark Matter and Dark Energy have distinctly different properties and cannot just be the same. At best they can both be different aspects of a common underlying theory. There are many theories for how this could happen, but to date we have no compelling evidence that this idea is correct.

Friday, February 15, 2019

Dark Matter – Or What?

Yesterday I gave a colloq about my work with Tobias Mistele on superfluid dark matter. Since several people asked for the slides, I have uploaded them to slideshare. You can also find the pdf here. I previously wrote about our research here and here. All my papers are openly available on the arXiv.

Wednesday, February 13, 2019

When gravity breaks down

[img:clipartmax]
Einstein’s theory of general relativity is more than a hundred years old, but it still gives physicists headaches. Not only are Einstein’s equations hideously difficult to solve, they also clash with physicists’ other most-cherished achievement, quantum theory.

Problem is, particles have quantum properties. They can, for example, be in two places at once. These particles also have masses, and masses cause gravity. But since gravity does not have quantum properties, no one really knows what the gravitational pull of a particle in a quantum superposition is. To solve this problem, physicists need a theory of quantum gravity. Or, since Einstein taught us that gravity is really curvature of space-time, physicists need a theory for the quantum properties of space and time.

It’s a hard problem, even for big-brained people like theoretical physicists. They have known since the 1930s that quantum gravity is necessary to bring order into the laws of nature, but 80 years on a solution isn’t anywhere in sight. The major obstacle on the way to progress is the lack of experimental guidance. The effects of quantum gravity are extremely weak and have never been measured, so physicists have only math to rely on. And it’s easy to get lost in math.

The reason it is difficult to obtain observational evidence for quantum gravity is that all presently possible experiments fall into two categories. Either we measure quantum effects – using small and light objects – or we measure gravitational effects – using large and heavy objects. In both cases, quantum gravitational effects are tiny. To see the effects of quantum gravity, you would really need a heavy object that has pronounced quantum properties, and that’s hard to come by.

Physicists do know a few naturally occurring situations where quantum gravity should be relevant. But it is not at short distances, contrary to what I often hear. Non-quantized gravity really fails in situations where energy-densities become large and space-time curvature becomes strong. And let me be clear that what astrophysicists consider “strong” curvature is still “weak” curvature for those working on quantum gravity. In particular, the curvature at a black hole horizon is not remotely strong enough to give rise to noticeable quantum gravitational effects.

Curvature strong enough to cause general relativity to break down, we believe, exists only in the center of black holes and close to the big bang. In both cases the strongly compressed matter has a high density and a pronounced quantum behavior which should give rise to quantum gravitational effects. Unfortunately, we cannot look inside a black hole, and reconstructing what happened at the big bang from today’s observations cannot, with present measurement techniques, reveal the quantum gravitational behavior.

The regime where quantum gravity becomes relevant should also be reached in particle collisions at extremely high center-of-mass energy. If you had a collider large enough – estimates say that with current technology it would be about the size of the Milky Way – you could focus enough energy into a small region of space to create strong enough curvature. But we are not going to build such a collider any time soon.

Besides strong space-time curvature, there is another, often neglected, case where quantum effects of gravity should become measurable: quantum superpositions of heavy objects. These cause the approximation in which matter has quantum properties but gravity doesn’t (the “semi-classical limit”) to break down, revealing truly quantum effects of gravity. A few experimental groups are currently trying to reach the regime where they might become sensitive to such effects. They still have some orders of magnitude to go, so not quite there yet.

Why don’t physicists study this case closer? As always, it’s hard to say why scientists do one thing and not another. I can only guess it’s because from a theoretical perspective this case is not all that interesting.

I know I said that physicists don’t have a theory of quantum gravity, but that is only partly correct. Gravity can be, and has been, quantized with the normal methods of quantization; Feynman and DeWitt did so already in the 1960s. However, the theory one obtains this way (“perturbative quantum gravity”) breaks down in exactly the strong-curvature regime where physicists want to use it (it is “perturbatively non-renormalizable”). Therefore, this approach is today considered merely a low-energy approximation (“effective theory”) to the yet-to-be-found full theory of quantum gravity (its “UV-completion”).
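For what it’s worth, there is a quick way to see where this perturbative treatment gives out (a heuristic power-counting estimate, not a rigorous statement): in units with \hbar = c = 1, Newton’s constant is an inverse mass squared,

G_N = 1/M_{\rm Pl}^2,

so the dimensionless quantity controlling the perturbative expansion for a process at energy E is roughly G_N E^2 = (E/M_{\rm Pl})^2. At currently accessible energies this number is fantastically small, which is why the effective theory works so well. As E approaches the Planck mass, the expansion parameter approaches one and the series stops making sense, which is the breakdown referred to above.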

Past the 1960s, almost all research efforts in quantum gravity focused on developing that full theory. The best known approaches are string theory, loop quantum gravity, asymptotic safety, and causal dynamical triangulation. The above-mentioned case of heavy objects in quantum superpositions, however, does not induce strong curvature and hence falls into the realm of the boring and supposedly well-understood theory from the 1960s. Ironically, for this reason there are almost no theoretical predictions for such an experiment from any of the major approaches to the full theory of quantum gravity.

Most people in the field presently think that perturbative quantum gravity must be the correct low-energy limit of any theory of quantum gravity. A minority, however, holds that this isn’t so, and members of this club usually quote one or both of the following reasons.

The first objection is philosophical. It does not conceptually make much sense to derive a supposedly more fundamental theory (quantum gravity) from a less fundamental one (non-quantum gravity), because by definition the derived theory is the less fundamental one. Indeed, the quantization procedure for Yang-Mills theories is a logical nightmare. You start with a non-quantum theory, make it more complicated to obtain another theory, though that is not strictly speaking a derivation, and if you then take the classical limit you get a theory that doesn’t have any good interpretation whatsoever. So why did you start from it to begin with?

Well, the obvious answer is: We do it because it works, and we do it this way because of historical accident not because it makes a lot of sense. Nothing wrong with that for a pragmatist like me, but also not a compelling reason to insist that the same method should apply to gravity.

The second often-named argument against the perturbative quantization is that you do not get atomic physics by quantizing water either. So if you think that gravity is not a fundamental interaction but comes from the collective behavior of a large number of microscopic constituents (think “atoms of space-time”), then quantizing general relativity is simply the wrong thing to do.

Those who take this point of view that gravity is really a bulk-theory for some unknown microscopic constituents follow an approach called “emergent gravity”. It is supported by the (independent) observations of Jacobson, Padmanabhan, and Verlinde, that the laws of gravity can be rewritten so that they appear like thermodynamical laws. My opinion about this flip-flops between “most amazing insight ever” and “curious aside of little relevance,” sometimes several times a day.

Be that as it may, if you think that emergent gravity is the right approach to quantum gravity, then the question where gravity-as-we-know-and-like-it breaks down becomes complicated. It should still break down at high curvature, but there may be further situations where you could see departures from general relativity.

Erik Verlinde, for example, interprets dark matter and dark energy as relics of quantum gravity. If you believe this, we do already have evidence for quantum gravity! Others have suggested that if space-time is made of microscopic constituents, then it may have bulk-properties like viscosity, or result in effects normally associated with crystals like birefringence, or the dispersion of light.

In summary, the expectation that quantum effects of gravity should become relevant for strong space-time curvature is based on an uncontroversial extrapolation and pretty much everyone in the field agrees on it.* In certain approaches to quantum gravity, deviations from general relativity could also become relevant at long distances, low acceleration, or low energies. An often neglected possibility is to probe the effects of quantum gravity with quantum superpositions of heavy objects.

I hope to see experimental evidence for quantum gravity in my lifetime.


* Except me, sometimes.

Friday, February 08, 2019

A philosopher of science reviews “Lost in Math”

Jeremy Butterfield is a philosopher of science in Cambridge. I previously wrote about some of his work here, and have met him on various occasions. Butterfield recently reviewed my book “Lost in Math,” and you can now find this review online here. (I believe it was solicited for a journal by the name of Physics in Perspective.)

His is a very detailed review that focuses, unsurprisingly, on the philosophical implications of my book. I think his summary will give you a pretty good impression of the book’s content. However, I want to point out two places where he misrepresents my argument.

First, in section 2, Butterfield lays out his disagreements with me. Alas, he disagrees with positions I don’t hold and certainly did not state, either in the book or anywhere else:
“Hossenfelder’s main criticism of supersymmetry is, in short, that it is advocated because of its beauty, but is unobserved. But even if supersymmetry is not realized in nature, one might well defend studying it as an invaluable tool for getting a better understanding of quantum field theories. A similar defence might well be given for studying string theory.”
Sure. Supersymmetry, string theory, grand unification, even naturalness, started out as good ideas and valuable research programs. I do not say these should not have been studied; neither do I say one should now discontinue studying them. The problem is that these ideas have grown into paper-production industries that no longer produce valuable output.

Beautiful hypotheses are certainly worth consideration. Troubles begin if data disagree with the hypotheses but scientists continue to rely on their beautiful hypotheses rather than taking clues from evidence.

Second, Butterfield misunderstands just how physicists working on the field’s foundations are “led astray” by arguments from beauty. He writes:
“I also think advocates of beauty as a heuristic do admit these limitations. They advocate no more than a historically conditioned, and fallible, heuristic [...] In short, I think Hossenfelder interprets physicists as more gung-ho, more naïve, that beauty is a guide to truth than they really are.”
To the extent that physicists are aware they use arguments from beauty, most know that these are not scientific arguments and also readily admit it. I state this explicitly in the book. They use such arguments anyway, however, because doing so has become accepted methodology. Look at what they do, don’t listen to what they say.

A few try to justify using arguments from beauty by appeals to cherry-picked historical examples or quotes from Einstein and Dirac. In most cases, however, physicists are not aware they use arguments from beauty to begin with (hence the book’s title). I have such discussions on a daily basis.

Physicists wrap appeals to beauty into statements like “this just can’t be the last word,” “intuition tells me,” or “this screams for an explanation”. They have forgotten that naturalness is an argument from beauty and can’t recall, or never looked at, the motivation for axions or gauge coupling unification. They will express their obsessions with numerical coincidences by saying “it’s curious” or “it is suggestive,” often followed by “Don’t you agree?”.

Of course I agree. I agree that supersymmetry is beautiful and it should be true, and it looks like there should be a better explanation for the parameters in the standard model, and it looks like there should be a unified force. But who cares what I think nature should be like? Human intuition is not a good guide to the development of new laws of nature.

What physicists are naive about is not appeals to beauty; what they are naive about is their own rationality. They cannot fathom the possibility that their scientific judgement is influenced by cognitive biases and social trends in scientific communities. They believe it does not matter for their interests how their research is presented in the media.

The easiest way to see that the problem exists is that they deny it.

Wednesday, February 06, 2019

Why a larger particle collider is not currently a good investment

LHC tunnel. Credits: CERN.
That a larger particle collider is not currently a good investment is hardly a controversial position. While the cost per unit of collision energy has decreased over the decades thanks to better technology, the absolute cost of new machines has shot up. That the costs of larger particle colliders would at some point become economically prohibitive has been known for a long time. Even particle physicists could predict this.

Already in 2001, Maury Tigner, who led the Central Design Group for the (cancelled) Superconducting Super Collider project, wrote an article for Physics Today asking “Does Accelerator-Based Particle Physics Have a Future?” While he remained optimistic that collaborative efforts and technological advances would lead to some more progress, he was also well aware of the challenges. Tigner wrote:
“If we are to continue progress with accelerator-based particle physics, we will have to mount much more effective efforts in the technical aspects of accelerator development — with a strong focus on economy. Such efforts will probably not suffice to hold constant the cost of new facilities in the face of the ever more demanding joint challenges of higher collision energy and higher luminosity. Making available the necessary intellectual and financial resources to sustain progress would seem to require social advances of unprecedented scope in resource management and collaboration.”
But the unprecedented social advances have not come to pass, and neither have we since seen major breakthroughs in collider technology. The state of affairs is often summarized in what is known as the “Livingston plot,” which shows collision energy versus year of construction. You can clearly see that the golden years of particle accelerators ended around 1990. And no game-changing technology has come up to turn things around:

Livingston plot. Image Credits: K. Yokoya
Particle accelerators are just damn expensive. In the plot below I have collected some numbers for existing and former colliders. I took the numbers from this paper and from Wikipedia. Cost estimates are not inflation-adjusted and currency-conversions are approximate, so do not take the numbers too seriously. The figure should, however, give you a roughly correct impression:



The ILC is the (proposed) International Linear Collider, and the NLC was the once proposed Next Linear Collider. The SSC is the scrapped Superconducting Super Collider. FCC-ee and FCC are the low-cost and high-cost variants of CERN’s planned Future Circular Collider.

When interpreting this plot, keep in mind that the cost for the LHC was low because it reused the tunnel of an earlier experiment. Also note that most of these machines were not built to reach the highest energies possible (at the time), so please do not judge facilities for falling below the diagonal.

So, yeah, particle colliders are expensive, no doubt about this. Now, factor in the unclear discovery potential for the next larger collider, and compare this to other experiments that “push frontiers,” as the catchphrase has it.

There is currently no reason to think a larger particle collider will do anything besides measuring some constants to higher precision. That is not entirely uninteresting, of course, and it’s enough to excite particle physicists. But this knowledge will tell us little new about the universe and it cannot be used to obtain further knowledge either.

Compare the expenses for CERN’s FCC plans to that of the gravitational wave interferometer LIGO. LIGO’s price tag was well below a billion US$. Still, in 1991, physicists hotly debated whether it was worth the money.

And that is even though the scientific case for LIGO was clear. Gravitational waves were an exceptionally solid prediction. Not only this, physicists knew already from indirect measurements that they must exist. True, they did not know exactly at which amplitude to expect events or how many of them. But this was not a situation in which “nothing until 15 orders of magnitude higher” was the most plausible case.

In addition, gravitational waves are good for something. They allow physicists to infer properties of distant stellar objects, which is exactly what the LIGO collaboration is now doing. We have learned far more from LIGO than that gravitational waves exist.

The planned FCC costs 20 times as much, has no clear discovery target, and it’s a self-referential enterprise: A particle collider tells you more about particle collisions. We have found the Higgs, all right, but there is nothing you can do with the Higgs now other than studying it closer.

Another cost-comparison: The Square Kilometer Array (SKA). Again the scientific case is clear. The SKA (among other things) would allow us to study the “dark ages” of the universe, which we cannot see with other telescopes because no stars existed at the time, and look for organic compounds in outer space. From this we could learn a lot about star formation, the mystery that is dark matter, and the prevalence of organic chemistry in the universe that may be an indicator for life. The total cost of the SKA is below $2 billion, though it looks like the full version will not come into being. Currently, less than a billion in funding is available, which suffices only for the slimmed-down variant (SKA-1).

And it’s not like building larger colliders is the only thing you can do to learn more about particle physics. All the things that can happen at higher energies also affect what happens at low energies, it’s just that at low energies you have to measure very, very precisely. That’s why high-precision measurements, like that of the muon g-2 or the electric dipole moment, are an alternative to going to higher energies. Such experiments are far less costly.

There are always many measurements that could be done more precisely, and when doing so, it is always possible that we find something new. But the expected discovery potential must play a role when evaluating the promises of an investment. It is unsurprising that particle physicists would like to have a new particle collider. But that is not an argument for why such a machine would be a good investment.

Particle physicists have not been able to come up with any reliable predictions for new effects for decades. The prediction of the Higgs-boson was the last good prediction they had. With that, the standard model is complete and we have no reason to expect anything more, not until energies 15 orders of magnitude higher.

Of course, particle physicists do have a large number of predictions for new particles within the reach of the next larger collider, but these are really fabricated for no other purpose than to rule them out. You cannot trust them. When they tell you that a next larger collider may see supersymmetry or extra dimensions or dark matter, keep in mind they told you the same thing 20 years ago.

Tuesday, February 05, 2019

String theory landscape predicts no new particles at the LHC

In a paper that appeared on the arXiv last week, Howard Baer and collaborators predict masses of new particles using the string theory landscape. They argue that the Large Hadron Collider should not have seen them so far, and likely will not see them in the upcoming run. Instead, it would at least take an upgrade of the LHC to higher collision energy to see any.

The idea underlying their calculation is that we live in a multiverse in which universes with all possible combinations of the constants of nature exist. On this multiverse, you have a probability distribution. You must further take into account that some combinations of the constants of nature will not allow for life to exist. This results in a new probability distribution that quantifies the likelihood, not that a particular universe exists, but that we observe a particular combination. You can then calculate the probability for finding certain masses of postulated particles.

As I just explained in a recent post, this is a new variant of arguments from naturalness. A certain combination of parameters is more “natural” the more often it appears in the multiverse. As Baer et al write in their paper:
“The landscape, if it is to be predictive, is predictive in the statistical sense: the more prevalent solutions are statistically more likely. This gives the connection between landscape statistics and naturalness: vacua with natural observables are expected to be far more common than vacua with unnatural observables.”
Problem is, the landscape is just not predictive. It is predictive in the statistical sense only after you have invented a probability distribution. But since you cannot derive the distribution from first principles, you really postulate your results in the form of the distribution.
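To see why the choice of distribution does all the work, consider this deliberately crude toy sketch (my own illustration with made-up numbers, not the actual computation of Baer et al or Douglas): draw a parameter from an assumed prior over the landscape, discard the values that do not permit observers, and read off a “most likely” observed value. Swapping the assumed prior swaps the “prediction.”

import numpy as np

rng = np.random.default_rng(0)

def landscape_prediction(prior_sampler, anthropic_cut, n=1_000_000):
    # Toy "landscape prediction": draw a parameter from an assumed prior over
    # the multiverse, keep only the anthropically allowed draws, and report
    # the median of what survives as the "most likely observed" value.
    x = prior_sampler(n)
    allowed = x[anthropic_cut(x)]
    return np.median(allowed)

# Hypothetical parameter (say, a mass in TeV) that observers require to be below 10 TeV.
cut = lambda x: x < 10.0

flat_prior = lambda n: rng.uniform(0.0, 100.0, n)         # flat prior between 0 and 100 TeV
log_prior = lambda n: 10.0 ** rng.uniform(-2.0, 2.0, n)   # log-flat prior between 0.01 and 100 TeV

print(landscape_prediction(flat_prior, cut))   # about 5 TeV
print(landscape_prediction(log_prior, cut))    # about 0.3 TeV: same cut, different "prediction"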

Baer et al take their probability distribution from the literature, specifically from a 2004 paper by Michael Douglas. The Douglas paper has no journal reference and is on the arXiv in version 4 with the note “we identify a serious error in the original argument, and attempt to address it.”

So what do the particle physicists find? They find that the mass of the Higgs-boson is most likely what we have observed. They find that most likely we have not yet seen supersymmetric particles at the LHC. They also find that so far we have not seen any dark matter particles.

I must admit that this fits remarkably well with observations. I would have been more impressed, though, had they made those predictions prior to the measurement.

They also offer some actual predictions, namely that the next LHC run is unlikely to see any new fundamental particles, but that upgrading the LHC to higher energies should help to see them. (This upgrade is called HE-LHC and is distinct from the FCC proposal.) They also think that the next round of dark matter experiments should see something.

Ten years ago, Howard Baer worried that when the LHC turned on, it would produce so many supersymmetric particles that this would screw up the detector calibration.

Monday, February 04, 2019

Maybe I’m crazy


How often can you hold up four fingers, hear a thousand people shout “five”, and not agree with them? How often can you repeat an argument, see it ignored, and still believe in reason? How often can you tell a thousand scientists the blatantly obvious, hear them laugh, and not think you are the one who is insane?

I wonder.

Every time a particle physicist dismisses my concerns, unthinkingly, I wonder some more. Maybe I am crazy? It would explain so much. Then I remind myself of the facts, once again.

Fact is, in the foundations of physics we have not seen progress for the past four decades. Ever since the development of the standard model in the 1970s, further predictions for new effects have been wrong. Physicists commissioned dozens of experiments to look for dark matter particles and grand unification. They turned data upside down in search of supersymmetric particles and dark energy and new dimensions of space. The result has been consistently: Nothing new.

Yes, null-results are also results. But they are not very useful results if you need to develop a new theory. A null-result says: “Let’s not go this way.” A result says: “Let’s go that way.” If there are many ways to go, discarding some of them does not help much. To move on in the foundations of physics, we need results, not null-results.

It’s not like we are done and can just stop here. We know we have not reached the end. The theories we currently have in the foundations are not complete. They have problems that require solutions. And if you look at the history of physics, theory-led breakthroughs came when predictions were based on solving problems that required a solution.

But the problems that theoretical particle physicists currently try to solve do not require solutions. The lack of unification, the absence of naturalness, the seeming arbitrariness of the constants of nature: these are aesthetic problems. Physicists can think of prettier theories, and they believe those have better chances to be true. Then they set out to test those beauty-based predictions. And get null-results.

It’s not only that there is no reason to think this method should work, it does – in fact! – not work, has not worked for decades. It is failing right now, once again, as more beauty-based predictions for the LHC are ruled out every day.

They keep on believing, nevertheless.

Those who, a decade ago, made confident predictions that the Large Hadron Collider should have seen new particles can now not be bothered to comment. They are busy making “predictions” for new particles that the next larger collider should see. We risk spending $20 billion on more null-results that will not move us forward. Am I crazy for saying that’s a dumb idea? Maybe.

Someone recently compared me to a dinghy that has the right of way over a tanker ship. I could have the best arguments in the world, that still would not stop them. Inertia. It’s physics, bitches.

Recently, I wrote an Op-Ed for the NYT in which I lay out why a larger particle collider is not currently a good investment. In her response, Prof Lisa Randall writes: “New dimensions or underlying structures might exist, but we won’t know unless we explore.” Correct, of course, but that doesn’t explain why a larger particle collider is a promising investment.

Randall is professor of physics at Harvard. She is famous for having proposed a model, together with Raman Sundrum, according to which the universe should have additional dimensions of space. The key insight underlying the Randall-Sundrum model is that a small number in an exponential function can make a large number. She is one of the world’s best-cited particle physicists. There is no evidence these extra dimensions exist. More recently she has speculated that dark matter killed the dinosaurs.
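(In case you wonder what the exponential refers to: in the Randall-Sundrum setup the effective mass scale on our brane is suppressed by a warp factor, schematically \Lambda_{\rm eff} \sim M_{\rm Pl}\, e^{-\pi k r_c}, so an exponent of around 35, corresponding to k r_c \approx 11, suppresses the Planck scale of roughly 10^{19} GeV down to the TeV range. The numbers here are rough and only meant to illustrate the mechanism.)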

Randall ends her response with: “Colliders are expensive, but so was the government shutdown,” an argument so flawed and so common I debunked it two weeks before she made it.

And that is how the top of tops of theoretical particle physicists react if someone points out they are unable to acknowledge failure: They demonstrate they are unable to acknowledge failure.

When I started writing my book, I thought the problem was that they were missing information. But I no longer think so. Particle physicists have all the information they need. They just refuse to use it. They prefer to believe.

I now think it’s really a standoff between reason and intuition. Here I am, with all my arguments. With my stacks of papers about naturalness-based predictions that didn’t work. With my historical analysis and my reading of the philosophy of physics. With my extrapolation of the past to the future that says: Most likely, we will see more null-results at higher energies.

And on the other side there are some thousand particle physicists who think that this cannot possibly be the end of the story, that there must be more to see. Some thousand of the most intelligent people the human race has ever produced. Who believe they are right. Who trust their experience. Who think their collective hope is reason enough to spend $20 billion.

If this were a novel, hope would win. No one wants to live in a world where the little German lady with her oh-so-rational arguments ends up being right. Not even the German lady wants that.

Wait, what did I say? I must be crazy.

Sunday, February 03, 2019

A philosopher's take on “naturalness” in particle physics

Square watermelons. Natural?
[Image Source]
Porter Williams is a philosopher at the University of Pittsburgh. He has a new paper about “naturalness,” an idea that has become a prominent doctrine in particle physics. In brief, naturalness requires that a theory’s dimensionless parameters should be close to 1, unless there is an explanation why they are not.

Naturalness arguments were the reason so many particle physicists expected (still expect) the Large Hadron Collider (LHC) to see fundamentally new particles besides the Higgs-boson.

In his new paper, titled “Two Notions of Naturalness,” Williams argues that, in recent years, naturalness arguments have split into two different categories.

The first category of naturalness is the one traditionally used, based on quantum field theory. It quantifies, roughly speaking, how sensitive the parameters of a theory at low energies are to changes of the parameters at high energies. Assuming a probability distribution for the parameters at high energies, you can then quantify the likelihood of finding a theory with the parameters we do observe. If the likelihood is small, the theory is said to be “unnatural” or “finetuned”. The mass of the Higgs-boson is unnatural in this sense, and so are the cosmological constant and the theta-parameter.
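To put a number to the standard example (this is the usual back-of-the-envelope version of the argument, not a precise calculation): the quantum corrections to the squared Higgs mass grow with the scale \Lambda up to which the theory is assumed to hold,

m_{h,\rm obs}^2 = m_{h,\rm bare}^2 + \delta m_h^2, \qquad \delta m_h^2 \sim \frac{\Lambda^2}{16\pi^2},

so if \Lambda is taken to be the Planck mass, the two terms on the right have to cancel to roughly one part in (M_{\rm Pl}/m_h)^2 \sim 10^{34} to leave the observed 125 GeV. It is this cancellation that the naturalness criterion declares unlikely, given an assumed probability distribution for the high-energy parameters.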

The second, and newer, type of naturalness is based on the idea that our universe is one of infinitely many that together make up a “multiverse.” In this case, if you assume a probability distribution over the universes, you can calculate the likelihood of finding the parameters we observe. Again, if that comes out to be unlikely, the theory is called “unnatural.” This approach has so far not been pursued much, which is why particle physicists can still hope that the standard model may turn out to be natural in this new way.

I wrote about this drift of naturalness arguments last year (it is also briefly mentioned in my book). I think Williams correctly identifies a current trend in the community.

But his paper is valuable beyond identifying a new trend, because Williams lays out the arguments from naturalness very clearly. I have given quite some talks about the topic, and in doing so noticed that even particle physicists are sometimes confused about exactly what the argument is. Some erroneously think that naturalness is a necessary consequence of effective field theory. This is not so. Naturalness is an optional requirement that the theory may or may not fulfill.

As Williams points out: “Requiring that a quantum field theory be natural demands a more stringent autonomy of scales than we are strictly licensed to expect by [the] structural features of effective field theory.” By this he disagrees with a claim by the theoretical physicist Gian-Francesco Giudice, according to whom naturalness “can be justified by renormalization group methods and the decoupling theorem.” I side with Williams.

Nevertheless, Williams comes out in defense of naturalness arguments. He thinks that these arguments are well-motivated. I cannot, however, quite follow his rationale for coming to this conclusion.

It is correct that the sensitivity to high-energy parameters is peculiar and something that we see in the standard model only for the mass of the Higgs-boson*. But we know why that is: The Higgs-boson is different from all the other particles in being a scalar particle. The expectation that its quantum corrections should enjoy the same kind of protection as those of the other particles is therefore not justified.

Williams offers one argument that I had not heard before, which is that you need naturalness to get reliable order-of-magnitude estimates. But this argument assumes that you have only one constant for each dimension of units, so it’s circular. The best example is cosmology. The cosmological constant is not natural. GR has another, very different mass scale, namely the Planck mass. Still, you can perfectly well make order-of-magnitude estimates as long as you know which mass scale to use. In other words, making order-of-magnitude estimates in an unnatural theory is only problematic if you assume the theory really should be natural.
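The cosmological constant also illustrates what the numbers look like (these are the commonly quoted rough figures): the observed value corresponds to an energy density of about \rho_\Lambda \sim (10^{-3}\,{\rm eV})^4, which is some 120 orders of magnitude below the “natural” guess M_{\rm Pl}^4. That is why it counts as spectacularly unnatural. Yet order-of-magnitude estimates in cosmology work just fine, provided you plug in the observed \rho_\Lambda rather than M_{\rm Pl}^4.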

The biggest problem, however, is the same for both types of naturalness: You don’t have the probability distribution, and you have no way of obtaining it, because it’s a distribution over an experimentally inaccessible space. To quantify naturalness, you therefore have to postulate a distribution, but that has the consequence that you merely get out what you put in. Naturalness arguments can therefore always be amended to give whatever result you want.

And that really is the gist of the current trend. The LHC data has shown that the naturalness arguments that particle physicists relied on did not work. But instead of changing their methods of theory-development, they adjust their criteria of naturalness to accommodate the data. This will not lead to better predictions.


*The strong CP-problem (that’s the thing with the theta-parameter) is usually assumed to be solved by the Peccei-Quinn mechanism, never mind that we still haven’t seen axions. The cosmological constant has something to do with gravity, and therefore particle physicists think it’s none of their business.

Saturday, February 02, 2019

Particle physicists surprised to find I am not their cheer-leader

Me and my Particle Data Booklet.
In the past week, I got a lot of messages from particle physicists who are unhappy I wrote an Op-Ed for the New York Times. They inform me that they really would like to have a larger particle collider. In other news, dogs still bite men. In China, bags of rice topple over.

More interesting than particle physicists’ dismay are the flavors of their discontent. I’ve been called a “troll” and a “liar”. I’ve been told I “foul my nest” and “play the victim.” I have been accused of envy, wrath, sloth, greed, narcissism, and grandiosity. I’m a pessimist, a defeatist, and a populist. I’m “to be ignored.” I’m a “no one” with a “platform” who has a “cult following.” I have made quick career from an enemy of particle physics, to an enemy of physics, to an enemy of science. In extrapolation, by the end of next week I’ll be the anti-christ.

Now, look. I’m certainly not an angel. I have a temper. I lack patience. I’m “eye-for-eye” rather than “turn the other cheek”. I don’t always express myself as clearly as I should. I make mistakes, contradict myself, and don’t live up to my own expectations. I have regrets.

But I am also a simple person. You don’t need to dig deep to understand me. To first approximation, I mean what I say: We currently have no reason to think a next larger particle collider will do anything but confirm the existing theories. Particle physicists’ methods of theory-development have demonstrably failed for 40 years. The field is beset by hype and group-think. You cannot trust these people. It’s a problem and it’s in the way of progress.

It hurts, because they know that I know what I am talking about.

Thursday, I gave a colloquium at the University of Giessen. In Giessen, physics research is mostly nuclear physics and plasma physics. They don’t have anyone working in the fields I’m criticizing. Nevertheless, it transpired yesterday that following my Op-Ed some people at the department debated whether I am a “populist” who better not be given a “forum”.

For starters, that’s ridiculous – a physics colloq at the University of Giessen is not much of a forum. Also, I have been assured the department didn’t seriously consider uninviting me. Still, I am disturbed that scientists would try to shut me up rather than think about what I say.

I didn’t know anything about this, however, when I gave my talk. It was well attended, all seats taken, people standing in the back. It was my usual lecture, that is a brief summary of the arguments in my book. I got the usual questions. There is always someone who asks for an example of an ugly theory. There is always someone who asks what’s wrong with finding beauty in their research. There’s always someone who has a question that’s more of a comment, really.

Then, a clearly agitated young man raised his arm and mumbled something about a heated discussion that had taken place last week. This didn’t make sense to me until later, so I ignored it. He then explained he didn’t read my book, and didn’t find anything objectionable about my talk. Must have been some disappointment, I guess, to see I’m not Rumpelstiltskin. He said that “everyone here agrees” that those failed predictions and the hype surrounding them are a problem. But, he wailed, how could I possibly go and publicly declare that one cannot trust scientists?

You see, the issue they have isn’t that I say particle physics has a problem. Because that’s obvious to everyone who ever had anything to do with the field. The issue is that I publicly say it.

Why do I say it? Because it’s true. And because the public needs to know. And because I have given up hope they will change their ways just because what I say is right. You cannot reach them with reason. But you can reach them by drawing attention to how much money is going to waste because scientists refuse to have a hard look at themselves. Lots of money. Money that could be put to better use elsewhere.

Now they are afraid, and they feel betrayed. And that’s what you see in the responses.

The first mode of defense is denial. It goes like this: Particle physics is doing just fine, go away, nothing to see here. Please give us more money.

The second mode of defense is urging me to stay in line and, at the same time, warning everyone else to keep their mouth shut. Over at Orbiter Magazine, Marcelo Gleiser and some other HEP people (whom I don’t know) accuse me of “defeatism” and “sabotage” and express their grievances as follows:
“As a community, we must fight united for the expansion of all our fields of inquiry, working with the public and politicians to increase the research budget to accommodate different kinds of projects. While it is true that research budgets are often strained, our work is to convince society that what we do is worthwhile, even when it fails to deliver the big headlines.”
But no, just no. My job as a scientist is not to “convince society” that what other scientists do is worthwhile (regardless of headlines). My job is to look at the evidence and report what I find. The evidence says particle physicists’ methods for theory-development have not worked for four decades. Yet they continue using these methods. It’s bad science, it deserves to be called bad science, and I will continue to call it bad science until they stop doing it.

If I was a genius, I would be here telling you about my great new theory of everything. I don’t have one. I am a mediocre thinker. I just wish all those smart people would stop playing citation games and instead do their job so we would see some real progress. But I’m also a writer. Words are my weapons. And make no mistake, I’m not done.