Friday, February 15, 2019

Dark Matter – Or What?

Yesterday I gave a colloq about my work with Tobias Mistele on superfluid dark matter. Since several people asked for the slides, I have uploaded them to slideshare. You can also find the pdf here. I previously wrote about our research here and here. All my papers are openly available on the arXiv.

Wednesday, February 13, 2019

When gravity breaks down

Einstein’s theory of general relativity is more than a hundred years old, but still it gives physicists headaches. Not only are Einstein’s equations hideously difficult to solve, they also clash with physicists’ other most-cherished achievement, quantum theory.

Problem is, particles have quantum properties. They can, for example, be in two places at once. These particles also have masses, and masses cause gravity. But since gravity does not have quantum properties, no one really knows what the gravitational pull of a particle in a quantum superposition is. To solve this problem, physicists need a theory of quantum gravity. Or, since Einstein taught us that gravity is really curvature of space-time, physicists need a theory for the quantum properties of space and time.

It’s a hard problem, even for big-brained people like theoretical physicists. They have known since the 1930s that quantum gravity is necessary to bring order into the laws of nature, but 80 years on a solution isn’t anywhere in sight. The major obstacle on the way to progress is the lack of experimental guidance. The effects of quantum gravity are extremely weak and have never been measured, so physicists have only math to rely on. And it’s easy to get lost in math.

The reason it is difficult to obtain observational evidence for quantum gravity is that all presently possible experiments fall into two categories. Either we measure quantum effects – using small and light objects – or we measure gravitational effects – using large and heavy objects. In both cases, quantum gravitational effects are tiny. To see the effects of quantum gravity, you would really need a heavy object that has pronounced quantum properties, and that’s hard to come by.

Physicists do know a few naturally occurring situations where quantum gravity should be relevant. But it is not, as I often hear claimed, at short distances. Non-quantized gravity really fails in situations where energy densities become large and space-time curvature becomes strong. And let me be clear that what astrophysicists consider “strong” curvature is still “weak” curvature for those working on quantum gravity. In particular, the curvature at a black hole horizon is not remotely strong enough to give rise to noticeable quantum gravitational effects.

Curvature strong enough to cause general relativity to break down exists, we believe, only in the centers of black holes and close to the Big Bang. In both cases the strongly compressed matter has a high density and pronounced quantum behavior, which should give rise to quantum gravitational effects. Unfortunately, we cannot look inside a black hole, and reconstructing what happened at the Big Bang from today’s observations cannot, with present measurement techniques, reveal the quantum gravitational behavior.

The regime where quantum gravity becomes relevant should also be reached in particle collisions at extremely high center-of-mass energy. If you had a collider large enough – estimates say that with current technology it would be about the size of the Milky Way – you could focus enough energy into a small region of space to create strong enough curvature. But we are not going to build such a collider any time soon.

Besides strong space-time curvature, there is another case where quantum effects of gravity should become measurable, one that is often neglected: creating quantum superpositions of heavy objects. This causes the approximation in which matter has quantum properties but gravity doesn’t (the “semi-classical limit”) to break down, revealing truly quantum effects of gravity. A few experimental groups are currently trying to reach the regime where they might become sensitive to such effects. They still have some orders of magnitude to go, so they are not quite there yet.

Why don’t physicists study this case more closely? As always, it’s hard to say why scientists do one thing and not another. I can only guess it’s because from a theoretical perspective this case is not all that interesting.

I know I said that physicists don’t have a theory of quantum gravity, but that is only partly correct. Gravity can be, and has been, quantized with the normal methods of quantization; Feynman and DeWitt did this already in the 1960s. However, the theory one obtains this way (“perturbative quantum gravity”) breaks down in exactly the strong-curvature regime where physicists want to use it (it is “perturbatively non-renormalizable”). Therefore, this approach is today considered merely a low-energy approximation (an “effective theory”) to the yet-to-be-found full theory of quantum gravity (its “UV-completion”).

Past the 1960s, almost all research efforts in quantum gravity focused on developing that full theory. The best known approaches are string theory, loop quantum gravity, asymptotic safety, and causal dynamical triangulation. The above-mentioned case with heavy objects in quantum superpositions, however, does not induce strong curvature and hence falls into the realm of the boring and supposedly well-understood theory from the 1960s. Ironically, for this reason there are almost no theoretical predictions for such an experiment from any of the major approaches to the full theory of quantum gravity.

Most people in the field presently think that perturbative quantum gravity must be the correct low-energy limit of any theory of quantum gravity. A minority, however, holds that this isn’t so, and members of this club usually quote one or both of the following reasons.

The first objection is philosophical. It does not make much conceptual sense to derive a supposedly more fundamental theory (quantum gravity) from a less fundamental one (non-quantum gravity), because by definition the derived theory is the less fundamental one. Indeed, the quantization procedure for Yang-Mills theories is a logical nightmare. You start with a non-quantum theory and make it more complicated to obtain another theory, though that is not strictly speaking a derivation, and if you then take the classical limit you get a theory that doesn’t have any good interpretation whatsoever. So why did you start from it to begin with?

Well, the obvious answer is: We do it because it works, and we do it this way because of historical accident not because it makes a lot of sense. Nothing wrong with that for a pragmatist like me, but also not a compelling reason to insist that the same method should apply to gravity.

The second often-named argument against the perturbative quantization is that you do not get atomic physics by quantizing water either. So if you think that gravity is not a fundamental interaction but comes from the collective behavior of a large number of microscopic constituents (think “atoms of space-time”), then quantizing general relativity is simply the wrong thing to do.

Those who take this point of view that gravity is really a bulk-theory for some unknown microscopic constituents follow an approach called “emergent gravity”. It is supported by the (independent) observations of Jacobson, Padmanabhan, and Verlinde, that the laws of gravity can be rewritten so that they appear like thermodynamical laws. My opinion about this flip-flops between “most amazing insight ever” and “curious aside of little relevance,” sometimes several times a day.

Be that as it may, if you think that emergent gravity is the right approach to quantum gravity, then the question where gravity-as-we-know-and-like-it breaks down becomes complicated. It should still break down at high curvature, but there may be further situations where you could see departures from general relativity.

Erik Verlinde, for example, interprets dark matter and dark energy as relics of quantum gravity. If you believe this, we do already have evidence for quantum gravity! Others have suggested that if space-time is made of microscopic constituents, then it may have bulk properties like viscosity, or give rise to effects normally associated with crystals, like birefringence or the dispersion of light.

In summary, the expectation that quantum effects of gravity should become relevant for strong space-time curvature is based on an uncontroversial extrapolation and pretty much everyone in the field agrees on it.* In certain approaches to quantum gravity, deviations from general relativity could also become relevant at long distances, low acceleration, or low energies. An often neglected possibility is to probe the effects of quantum gravity with quantum superpositions of heavy objects.

I hope to see experimental evidence for quantum gravity in my lifetime.

* Except me, sometimes.

Friday, February 08, 2019

A philosopher of science reviews “Lost in Math”

Jeremy Butterfield is a philosopher of science in Cambridge. I previously wrote about some of his work here, and have met him on various occasions. Butterfield recently reviewed my book “Lost in Math,” and you can now find this review online here. (I believe it was solicited for a journal named Physics in Perspective.)

His is a very detailed review that focuses, unsurprisingly, on the philosophical implications of my book. I think his summary will give you a pretty good impression of the book’s content. However, I want to point out two places where he misrepresents my argument.

First, in section 2, Butterfield lays out his disagreements with me. Alas, he disagrees with positions I don’t hold and certainly did not state, neither in the book nor anywhere else:
“Hossenfelder’s main criticism of supersymmetry is, in short, that it is advocated because of its beauty, but is unobserved. But even if supersymmetry is not realized in nature, one might well defend studying it as an invaluable tool for getting a better understanding of quantum field theories. A similar defence might well be given for studying string theory.”
Sure. Supersymmetry, string theory, grand unification, even naturalness, started out as good ideas and valuable research programs. I do not say these should not have been studied; neither do I say one should now discontinue studying them. The problem is that these ideas have grown into paper-production industries that no longer produce valuable output.

Beautiful hypotheses are certainly worth consideration. Troubles begin if data disagree with the hypotheses but scientists continue to rely on their beautiful hypotheses rather than taking clues from evidence.

Second, Butterfield misunderstands just how physicists working on the field’s foundations are “led astray” by arguments from beauty. He writes:
“I also think advocates of beauty as a heuristic do admit these limitations. They advocate no more than a historically conditioned, and fallible, heuristic [...] In short, I think Hossenfelder interprets physicists as more gung-ho, more naïve, that beauty is a guide to truth than they really are.”
To the extent that physicists are aware they use arguments from beauty, most know that these are not scientific arguments and also readily admit it. I state this explicitly in the book. They use such arguments anyway, however, because doing so has become accepted methodology. Look at what they do, don’t listen to what they say.

A few try to justify using arguments from beauty by appeals to cherry-picked historical examples or quotes from Einstein and Dirac. In most cases, however, physicists are not aware they use arguments from beauty to begin with (hence the book’s title). I have such discussions on a daily basis.

Physicists wrap appeals to beauty into statements like “this just can’t be the last word,” “intuition tells me,” or “this screams for an explanation”. They have forgotten that naturalness is an argument from beauty and can’t recall, or never looked at, the motivation for axions or gauge coupling unification. They will express their obsessions with numerical coincidences by saying “it’s curious” or “it is suggestive,” often followed by “Don’t you agree?”.

Of course I agree. I agree that supersymmetry is beautiful and it should be true, and it looks like there should be a better explanation for the parameters in the standard model, and it looks like there should be a unified force. But who cares what I think nature should be like? Human intuition is not a good guide to the development of new laws of nature.

What physicists are naive about is not appeals to beauty; what they are naive about is their own rationality. They cannot fathom the possibility that their scientific judgement is influenced by cognitive biases and social trends in scientific communities. They believe it does not matter for their interests how their research is presented in the media.

The easiest way to see that the problem exists is that they deny it.

Wednesday, February 06, 2019

Why a larger particle collider is not currently a good investment

LHC tunnel. Credits: CERN.
That a larger particle collider is not currently a good investment is hardly a controversial position. While the cost per unit of collision energy has decreased over the decades thanks to better technology, the absolute cost of new machines has shot up. That the costs of larger particle colliders would at some point become economically prohibitive has been known for a long time. Even particle physicists could predict this.

Already in 2001, Maury Tigner, who led the Central Design Group for the (cancelled) Superconducting Super Collider project, wrote an article for Physics Today asking “Does Accelerator-Based Particle Physics Have a Future?” While he remained optimistic that collaborative efforts and technological advances would lead to some more progress, he was also well aware of the challenges. Tigner wrote:
“If we are to continue progress with accelerator-based particle physics, we will have to mount much more effective efforts in the technical aspects of accelerator development — with a strong focus on economy. Such efforts will probably not suffice to hold constant the cost of new facilities in the face of the ever more demanding joint challenges of higher collision energy and higher luminosity. Making available the necessary intellectual and financial resources to sustain progress would seem to require social advances of unprecedented scope in resource management and collaboration.”
But the unprecedented social advances have not come to pass, and neither have we since seen major breakthroughs in collider technology. The state of affairs is often summarized in what is known as the “Livingston Plot” that shows the year of construction versus energy. You can clearly see that the golden years of particle accelerators ended around 1990. And no game-changing technology has come up to turn things around:

Livingston plot. Image Credits: K. Yokoya
Particle accelerators are just damn expensive. In the plot below I have collected some numbers for existing and former colliders. I took the numbers from this paper and from Wikipedia. Cost estimates are not inflation-adjusted and currency-conversions are approximate, so do not take the numbers too seriously. The figure should, however, give you a roughly correct impression:

The ILC is the (proposed) International Linear Collider, and the NLC was the once proposed Next Linear Collider. The SSC is the scrapped Superconducting Super Collider. FCC-ee and FCC are the low-cost and high-cost variants of CERN’s planned Future Circular Collider.

When interpreting this plot, keep in mind that the cost for the LHC was low because it reused the tunnel of an earlier experiment. Also note that most of these machines were not built to reach the highest energies possible (at the time), so please do not judge facilities for falling below the diagonal.

So, yeah, particle colliders are expensive, no doubt about that. Now factor in the unclear discovery potential for the next larger collider, and compare this to other experiments that “push frontiers,” as the catchphrase has it.

There is currently no reason to think a larger particle collider will do anything besides measuring some constants to higher precision. That is not entirely uninteresting, of course, and it’s enough to excite particle physicists. But this knowledge will tell us little new about the universe and it cannot be used to obtain further knowledge either.

Compare the expenses for CERN’s FCC plans to that of the gravitational wave interferometer LIGO. LIGO’s price tag was well below a billion US$. Still, in 1991, physicists hotly debated whether it was worth the money.

And that is even though the scientific case for LIGO was clear. Gravitational waves were an exceptionally solid prediction. Not only that, physicists knew already from indirect measurements that they must exist. True, they did not know exactly at which amplitude to expect events, or how many of them. But this was not a situation in which “nothing until 15 orders of magnitude higher” was the most plausible case.

In addition, gravitational waves are good for something. They allow physicists to infer properties of distant stellar objects, which is exactly what the LIGO collaboration is now doing. We have learned far more from LIGO than that gravitational waves exist.

The planned FCC costs 20 times as much, has no clear discovery target, and it’s a self-referential enterprise: A particle collider tells you more about particle collisions. We have found the Higgs, all right, but there is nothing you can do with the Higgs now other than studying it closer.

Another cost-comparison: The Square Kilometer Array (SKA). Again the scientific case is clear. The SKA (among other things) would allow us to study the “dark ages” of the universe, which we cannot see with other telescopes because no stars existed at the time, and look for organic compounds in outer space. From this we could learn a lot about star formation, the mystery that is dark matter, and the prevalence of organic chemistry in the universe that may be an indicator for life. The total cost of the SKA is below $2 billion, though it looks like the full version will not come into being. Currently, less than a billion dollars of funding is available, which suffices only for the slimmed-down variant (SKA-1).

And it’s not like building larger colliders is the only thing you can do to learn more about particle physics. All the things that can happen at higher energies also affect what happens at low energies, it’s just that at low energies you have to measure very, very precisely. That’s why high-precision measurements, like that of the muon g-2 or the electric dipole moment, are an alternative to going to higher energies. Such experiments are far less costly.

There are always many measurements that could be done more precisely, and when doing so, it is always possible that we find something new. But the expected discovery potential must play a role when evaluating the promises of an investment. It is unsurprising that particle physicists would like to have a new particle collider. But that is not an argument for why such a machine would be a good investment.

Particle physicists have not been able to come up with any reliable predictions for new effects for decades. The prediction of the Higgs-boson was the last good prediction they had. With that, the standard model is complete and we have no reason to expect anything more, not until energies 15 orders of magnitude higher.

Of course, particle physicists do have a large number of predictions for new particles within the reach of the next larger collider, but these are really fabricated for no other purpose than to rule them out. You cannot trust them. When they tell you that a next larger collider may see supersymmetry or extra dimensions or dark matter, keep in mind they told you the same thing 20 years ago.

Tuesday, February 05, 2019

String theory landscape predicts no new particles at the LHC

In a paper that appeared on the arXiv last week, Howard Baer and collaborators predict masses of new particles using the string theory landscape. They argue that the Large Hadron Collider should not have seen them so far, and likely will not see them in the upcoming run. Instead, it would at least take an upgrade of the LHC to higher collision energy to see any.

The idea underlying their calculation is that we live in a multiverse in which universes with all possible combinations of the constants of nature exist. On this multiverse, you have a probability distribution. You further must take into account that some combinations of natural constants will not allow for life to exist. This results in a new probability distribution that quantifies the likelihood, not of the existence of universes, but that we observe a particular combination. You can then calculate the probability for finding certain masses of postulated particles.

As I just explained in a recent post, this is a new variant of arguments from naturalness. A certain combination of parameters is more “natural” the more often it appears in the multiverse. As Baer et al write in their paper:
“The landscape, if it is to be predictive, is predictive in the statistical sense: the more prevalent solutions are statistically more likely. This gives the connection between landscape statistics and naturalness: vacua with natural observables are expected to be far more common than vacua with unnatural observables.”
Problem is, the landscape is just not predictive. It is predictive in the statistical sense only after you have invented a probability distribution. But since you cannot derive the distribution from first principles, you really postulate your results in the form of the distribution.

Baer et al take their probability distribution from the literature, specifically from a 2004 paper by Michael Douglas. The Douglas paper has no journal reference and is on the arXiv in version 4 with the note “we identify a serious error in the original argument, and attempt to address it.”

So what do the particle physicists find? They find that the mass of the Higgs-boson is most likely what we have observed. They find that most likely we have not yet seen supersymmetric particles at the LHC. They also find that so far we have not seen any dark matter particles.

I must admit that this fits remarkably well with observations. I would have been more impressed, though, had they made those predictions prior to the measurement.

They also offer some actual predictions, which are that the next LHC run is unlikely to see any new fundamental particles, but that upgrading the LHC to higher energies should make them visible. (This upgrade is called HE-LHC and is distinct from the FCC proposal.) They also think that the next round of dark matter experiments should see something.

Ten years ago, Howard Baer worried that when the LHC turned on, it would produce so many supersymmetric particles that this would screw up the detector calibration.

Monday, February 04, 2019

Maybe I’m crazy

How often can you hold up four fingers, hear a thousand people shout “five”, and not agree with them? How often can you repeat an argument, see it ignored, and still believe in reason? How often can you tell a thousand scientists the blatantly obvious, hear them laugh, and not think you are the one who is insane?

I wonder.

Every time a particle physicist dismisses my concerns, unthinkingly, I wonder some more. Maybe I am crazy? It would explain so much. Then I remind myself of the facts, once again.

Fact is, in the foundations of physics we have not seen progress for the past four decades. Ever since the development of the standard model in the 1970s, further predictions for new effects have been wrong. Physicists commissioned dozens of experiments to look for dark matter particles and grand unification. They turned data upside down in search of supersymmetric particles and dark energy and new dimensions of space. The result has consistently been: Nothing new.

Yes, null-results are also results. But they are not very useful results if you need to develop a new theory. A null-result says: “Let’s not go this way.” A result says: “Let’s go that way.” If there are many ways to go, discarding some of them does not help much. To move on in the foundations of physics, we need results, not null-results.

It’s not like we are done and can just stop here. We know we have not reached the end. The theories we currently have in the foundations are not complete. They have problems that require solutions. And if you look at the history of physics, theory-led breakthroughs came when predictions were based on solving problems that required solution.

But the problems that theoretical particle physicists currently try to solve do not require solutions. The lack of unification, the absence of naturalness, the seeming arbitrariness of the constants of nature: these are aesthetic problems. Physicists can think of prettier theories, and they believe those have better chances to be true. Then they set out to test those beauty-based predictions. And get null-results.

It’s not only that there is no reason to think this method should work, it does – in fact! – not work, has not worked for decades. It is failing right now, once again, as more beauty-based predictions for the LHC are ruled out every day.

They keep on believing, nevertheless.

Those who, a decade ago, made confident predictions that the Large Hadron Collider should have seen new particles can now not be bothered to comment. They are busy making “predictions” for new particles that the next larger collider should see. We risk spending $20 billion on more null-results that will not move us forward. Am I crazy for saying that’s a dumb idea? Maybe.

Someone recently compared me to a dinghy that has the right of way over a tanker ship. I could have the best arguments in the world, that still would not stop them. Inertia. It’s physics, bitches.

Recently, I wrote an Op-Ed for the NYT in which I lay out why a larger particle collider is not currently a good investment. In her response, Prof Lisa Randall writes: “New dimensions or underlying structures might exist, but we won’t know unless we explore.” Correct, of course, but that doesn’t explain why a larger particle collider is a promising investment.

Randall is professor of physics at Harvard. She is famous for having proposed a model, together with Raman Sundrum, according to which the universe should have additional dimensions of space. The key insight underlying the Randall-Sundrum model is that a small number in an exponential function can make a large number. She is one of the world’s best-cited particle physicists. There is no evidence these extra dimensions exist. More recently she has speculated that dark matter killed the dinosaurs.
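To see what that key insight amounts to, here is a back-of-the-envelope illustration. The numbers are only order-of-magnitude placeholders, not a fit to the actual model: the warp factor exp(−πkr) suppresses the Planck scale down to the TeV scale for a value of kr that is merely of order ten.

```python
import math

# Illustration of the Randall-Sundrum point: an exponential turns a
# modest dimensionless number into a huge hierarchy of scales. The
# value of k * r_c below is a placeholder of the right order.
M_planck = 1.2e19                 # Planck mass in GeV
k_rc = 11.7                       # dimensionless, merely of order ten
m_warped = M_planck * math.exp(-math.pi * k_rc)

print(f"{m_warped:.1e} GeV")      # comes out near the TeV scale
```

An exponent of about 37 bridges 16 orders of magnitude, which is the whole trick.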

Randall ends her response with: “Colliders are expensive, but so was the government shutdown,” an argument so flawed and so common I debunked it two weeks before she made it.

And that is how the top of tops of theoretical particle physicists react if someone points out they are unable to acknowledge failure: They demonstrate they are unable to acknowledge failure.

When I started writing my book, I thought the problem is they are missing information. But I no longer think so. Particle physicists have all the information they need. They just refuse to use it. They prefer to believe.

I now think it’s really a standoff between reason and intuition. Here I am, with all my arguments. With my stacks of papers about naturalness-based predictions that didn’t work. With my historical analysis and my reading of the philosophy of physics. With my extrapolation of the past to the future that says: Most likely, we will see more null-results at higher energies.

And on the other side there are some thousand particle physicists who think that this cannot possibly be the end of the story, that there must be more to see. Some thousand of the most intelligent people the human race has ever produced. Who believe they are right. Who trust their experience. Who think their collective hope is reason enough to spend $20 billion.

If this was a novel, hope would win. No one wants to live in a world where the little German lady with her oh-so rational arguments ends up being right. Not even the German lady wants that.

Wait, what did I say? I must be crazy.

Sunday, February 03, 2019

A philosopher's take on “naturalness” in particle physics

Square watermelons. Natural?
[Image Source]
Porter Williams is a philosopher at the University of Pittsburgh. He has a new paper about “naturalness,” an idea that has become a prominent doctrine in particle physics. In brief, naturalness requires that a theory’s dimensionless parameters should be close to 1, unless there is an explanation why they are not.

Naturalness arguments were the reason so many particle physicists expected (still expect) the Large Hadron Collider (LHC) to see fundamentally new particles besides the Higgs-boson.

In his new paper, titled “Two Notions of Naturalness,” Williams argues that, in recent years, naturalness arguments have split into two different categories.

The first category of naturalness is the older one, based on quantum field theory. It quantifies, roughly speaking, how sensitively the parameters of a theory at low energies depend on the parameters at high energies. Assuming a probability distribution for the parameters at high energies, you can then quantify the likelihood of finding a theory with the parameters we do observe. If the likelihood is small, the theory is said to be “unnatural” or “fine-tuned”. The mass of the Higgs-boson is unnatural in this sense; so is the cosmological constant, and the theta-parameter.
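As a rough sketch of this first notion, with numbers invented for illustration rather than an actual standard-model calculation: take a toy Higgs mass that receives a quantum correction growing with a high-energy cutoff, and compute the logarithmic sensitivity of the low-energy mass to that cutoff, similar in spirit to the commonly used Barbieri-Giudice measure.

```python
# Toy sketch of the first notion of naturalness: the logarithmic
# sensitivity of a low-energy parameter to a high-energy one,
# Delta = |d ln(m^2) / d ln(Lambda)|. All numbers are invented for
# illustration; this is not an actual standard-model calculation.

def higgs_mass_squared(m0_sq, c, cutoff):
    """Toy low-energy mass^2: a bare term minus a quantum correction
    that grows with the cutoff scale Lambda."""
    return m0_sq - c * cutoff**2

def sensitivity(m0_sq, c, cutoff, eps=1e-6):
    """Finite-difference estimate of |d ln(m^2) / d ln(Lambda)|."""
    m_sq = higgs_mass_squared(m0_sq, c, cutoff)
    m_sq_shifted = higgs_mass_squared(m0_sq, c, cutoff * (1 + eps))
    return abs((m_sq_shifted - m_sq) / m_sq) / eps

c = 1 / (16 * 3.14159**2)          # a generic loop factor
cutoff = 1e10                      # hypothetical new-physics scale, GeV
m0_sq = 125.0**2 + c * cutoff**2   # bare term tuned to give 125 GeV

print(f"sensitivity: {sensitivity(m0_sq, c, cutoff):.1e}")
```

With these toy numbers the bare parameter must cancel the correction to roughly one part in 10^14; it is in this sense that the observed value is called fine-tuned.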

The second, and newer, type of naturalness, is based on the idea that our universe is one of infinitely many that together make up a “multiverse.” In this case, if you assume a probability distribution over the universes, you can calculate the likelihood of finding the parameters we observe. Again, if that comes out to be unlikely, the theory is called “unnatural.” This approach has so far not been pursued much. Particle physicists therefore hope that the standard model may turn out to be natural in this new way.

I wrote about this drift of naturalness arguments last year (it is also briefly mentioned in my book). I think Williams correctly identifies a current trend in the community.

But his paper is valuable beyond identifying a new trend, because Williams lays out the arguments from naturalness very clearly. I have given quite some talks about the topic, and in doing so noticed that even particle physicists are sometimes confused about exactly what the argument is. Some erroneously think that naturalness is a necessary consequence of effective field theory. This is not so. Naturalness is an optional requirement that the theory may or may not fulfill.

As Williams points out: “Requiring that a quantum field theory be natural demands a more stringent autonomy of scales than we are strictly licensed to expect by [the] structural features of effective field theory.” By this he disagrees with a claim by the theoretical physicist Gian Francesco Giudice, according to whom naturalness “can be justified by renormalization group methods and the decoupling theorem.” I side with Williams.

Nevertheless, Williams comes out in defense of naturalness arguments. He thinks that these arguments are well-motivated. I cannot, however, quite follow his rationale for coming to this conclusion.

It is correct that the sensitivity to high-energy parameters is peculiar and something that we see in the standard model only for the mass of the Higgs-boson*. But we know why that is: The Higgs-boson is different from all the other particles in being a scalar particle. The expectation that its quantum corrections should enjoy a similar protection as the other particles is therefore not justified.

Williams offers one argument that I had not heard before, which is that you need naturalness to get reliable order-of-magnitude estimates. But this argument only works if you assume the theory has a single constant for each dimension of units, which is itself a naturalness assumption, so the argument is circular. The best counterexample is cosmology. The cosmological constant is not natural. General relativity has another, very different mass scale, namely the Planck mass. Still, you can perfectly well make order-of-magnitude estimates as long as you know which mass scale to use. In other words, making order-of-magnitude estimates in an unnatural theory is only problematic if you assume the theory really should be natural.
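For scale, these are the standard numbers behind the cosmological constant example (added for illustration, using the often-quoted figure of roughly 120 orders of magnitude):

```latex
\rho_\Lambda^{\rm obs} \approx \left(2\times 10^{-3}\,{\rm eV}\right)^4,
\qquad
\rho_\Lambda^{\rm naive} \sim M_{\rm Pl}^4 \approx \left(10^{19}\,{\rm GeV}\right)^4,
\qquad
\frac{\rho_\Lambda^{\rm obs}}{M_{\rm Pl}^4} \sim 10^{-120}.
```

Despite this enormous mismatch, cosmological estimates work fine: you use the measured vacuum energy density where the vacuum energy matters and the Planck mass where the strength of gravity matters.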

The biggest problem, however, is the same for both types of naturalness: You have neither the probability distribution nor any way of obtaining it, because it is a distribution over an experimentally inaccessible space. To quantify naturalness, you therefore have to postulate a distribution, with the consequence that you merely get out what you put in. Naturalness arguments can therefore always be amended to give whatever result you want.
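The prior-dependence is easy to demonstrate. The following sketch (my own toy illustration, not anyone's published naturalness measure) asks how “improbable” a small dimensionless parameter is, taking r ~ m_H²/M_Pl² ~ 10⁻³⁴ as the stand-in, under two equally defensible priors:

```python
import math

# Toy illustration: the "probability" of a small dimensionless parameter
# depends entirely on the prior you postulate over the parameter space.
r = 1e-34  # roughly m_H^2 / M_Pl^2

# Uniform prior on [0, 1]: P(x <= r) = r, so r looks wildly "unnatural".
p_uniform = r

# Log-uniform prior on [1e-60, 1]: probability mass is uniform in log10(x),
# so P(x <= r) is the fraction of the log-range below log10(r).
lo, hi = -60.0, 0.0
p_log_uniform = (math.log10(r) - lo) / (hi - lo)

print(f"uniform prior:     P = {p_uniform:.1e}")     # 1.0e-34
print(f"log-uniform prior: P = {p_log_uniform:.2f}")  # 0.43
```

Same parameter, same data: one prior declares it absurdly unlikely, the other makes it entirely unremarkable. Nothing in the theory tells you which prior to use.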

And that really is the gist of the current trend. The LHC data has shown that the naturalness arguments that particle physicists relied on did not work. But instead of changing their methods of theory-development, they adjust their criteria of naturalness to accommodate the data. This will not lead to better predictions.

*The strong CP-problem (that’s the thing with the theta-parameter) is usually assumed to be solved by the Peccei-Quinn mechanism, never mind that we still haven’t seen axions. The cosmological constant has something to do with gravity, and therefore particle physicists think it’s none of their business.

Saturday, February 02, 2019

Particle physicists surprised to find I am not their cheer-leader

Me and my Particle Data Booklet.
In the past week, I got a lot of messages from particle physicists who are unhappy I wrote an Op-Ed for the New York Times. They inform me that they really would like to have a larger particle collider. In other news, dogs still bite men. In China, bags of rice topple over.

More interesting than particle physicists’ dismay are the flavors of their discontent. I’ve been called a “troll” and a “liar”. I’ve been told I “foul my nest” and “play the victim.” I have been accused of envy, wrath, sloth, greed, narcissism, and grandiosity. I’m a pessimist, a defeatist, and a populist. I’m “to be ignored.” I’m a “no one” with a “platform” who has a “cult following.” I have made a quick career from an enemy of particle physics, to an enemy of physics, to an enemy of science. In extrapolation, by the end of next week I’ll be the Antichrist.

Now, look. I’m certainly not an angel. I have a temper. I lack patience. I’m “eye-for-eye” rather than “turn the other cheek”. I don’t always express myself as clearly as I should. I make mistakes, contradict myself, and don’t live up to my own expectations. I have regrets.

But I am also a simple person. You don’t need to dig deep to understand me. To first approximation, I mean what I say: We currently have no reason to think a next larger particle collider will do anything but confirm the existing theories. Particle physicists’ methods of theory-development have demonstrably failed for 40 years. The field is beset by hype and group-think. You cannot trust these people. It’s a problem and it’s in the way of progress.

It hurts, because they know that I know what I am talking about.

Thursday, I gave a colloquium at the University of Giessen. In Giessen, physics research is mostly nuclear physics and plasma physics. They don’t have anyone working in the fields I’m criticizing. Nevertheless, it transpired yesterday that following my Op-Ed some people at the department debated whether I am a “populist” who better not be given a “forum”.

For starters, that’s ridiculous – a physics colloq at the University of Giessen is not much of a forum. Also, I have been assured the department didn’t seriously consider uninviting me. Still, I am disturbed that scientists would try to shut me up rather than think about what I say.

I didn’t know anything about this, however, when I gave my talk. It was well attended, all seats taken, people standing in the back. It was my usual lecture, that is a brief summary of the arguments in my book. I got the usual questions. There is always someone who asks for an example of an ugly theory. There is always someone who asks what’s wrong with finding beauty in their research. There’s always someone who has a question that’s more of a comment, really.

Then, a clearly agitated young man raised his arm and mumbled something about a heated discussion that had taken place last week. This didn’t make sense to me until later, so I ignored it. He then explained he didn’t read my book, and didn’t find anything objectionable about my talk. Must have been some disappointment, I guess, to see I’m not Rumpelstiltskin. He said that “everyone here agrees” that those failed predictions and the hype surrounding them are a problem. But, he wailed, how could I possibly go and publicly declare that one cannot trust scientists?

You see, the issue they have isn’t that I say particle physics has a problem. Because that’s obvious to everyone who ever had anything to do with the field. The issue is that I publicly say it.

Why do I say it? Because it’s true. And because the public needs to know. And because I have given up hope they will change their ways just because what I say is right. You cannot reach them with reason. But you can reach them by drawing attention to how much money is going to waste because scientists refuse to have a hard look at themselves. Lots of money. Money that could be put to better use elsewhere.

Now they are afraid, and they feel betrayed. And that’s what you see in the responses.

The first mode of defense is denial. It goes like this: Particle physics is doing just fine, go away, nothing to see here. Please give us more money.

The second mode of defense is urging me to stay in line and, at the same time, warning everyone else to keep their mouths shut. Over at Orbiter Magazine, Marcelo Gleiser and some other HEP people (whom I don’t know) accuse me of “defeatism” and “sabotage” and express their grievances as follows:
“As a community, we must fight united for the expansion of all our fields of inquiry, working with the public and politicians to increase the research budget to accommodate different kinds of projects. While it is true that research budgets are often strained, our work is to convince society that what we do is worthwhile, even when it fails to deliver the big headlines.”
But no, just no. My job as a scientist is not to “convince society” that what other scientists do is worthwhile (regardless of headlines). My job is to look at the evidence and report what I find. The evidence says particle physicists’ methods for theory-development have not worked for four decades. Yet they continue using these methods. It’s bad science, it deserves to be called bad science, and I will continue to call it bad science until they stop doing it.

If I were a genius, I would be here telling you about my great new theory of everything. I don’t have one. I am a mediocre thinker. I just wish all those smart people would stop playing citation games and instead do their job so we would see some real progress. But I’m also a writer. Words are my weapons. And make no mistake, I’m not done.