Tuesday, January 22, 2019

Particle physics may have reached the end of the line

Image: CERN
CERN’s press release of plans for a larger particle collider, which I wrote about last week, made international headlines. Unfortunately, most articles about the topic just repeat the press-release, and do not explain how much the situation in particle physics has changed with the LHC data.

Since the late 1960s, when physicists hit on the “particle zoo” at nuclear energies, they always had a good reason to build a larger collider. That’s because their theories of elementary matter were incomplete. But now, with the Higgs-boson found in 2012, their theory – the “standard model of particle physics” – is complete. It’s done. There’s nothing missing. All Pokemon caught.

The Higgs was the last good prediction that particle physicists had. This prediction dates back to the 1960s and it was based on sound mathematics. In contrast to this, the current predictions for new particles at a larger collider – eg supersymmetric partner particles or dark matter particles – are not based on sound mathematics. These predictions are based on what is called an “argument from naturalness” and those arguments are little more than wishful thinking dressed in equations.

I have laid out my reasoning for why those predictions are no good in great detail in my book (and also in this little paper). But it does not matter whether you believe (or even understand) my arguments, you only have to look at the data to see that particle physicists’ predictions for physics beyond the standard model have, in fact, not worked for more than 30 years.

Fact is, particle physicists have predicted dark matter particles since the mid-1980s. None of those have been seen.

Fact is, particle physicists predicted grand unified theories, also starting in the 1980s. To the extent that those can be ruled out, they have been ruled out.

Fact is, they predicted that supersymmetric particles and/or large additional dimensions of space should become observable at the LHC. According to those predictions, this should have happened already. It did not.

The important point is that those demonstrably flawed methods were the only reason to think the LHC should discover something fundamentally new besides the Higgs. With this method of prediction not working, there is now no reason to think that the LHC in its upcoming runs, or a next larger collider, will see anything besides the physics predicted by the already known theories.

Of course it may happen. I am not saying that I know a larger collider will not find something new. It is possible that we get lucky. I am simply saying that we currently have no prediction that indicates a larger collider would lead to a breakthrough. The standard model may well be it.

This situation is unprecedented in particle physics. The only reliable prediction we currently have for physics beyond the standard model is that we should eventually see effects of quantum gravity. But for that we would have to reach energies 13 orders of magnitude higher than what even the next collider would deliver. It’s way out of reach.
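
For a rough sense of the numbers (a back-of-the-envelope added here for orientation): quantum gravitational effects become strong near the Planck energy, while a next collider would reach about 100 TeV. Whether one quotes 13 or 14 orders of magnitude depends on whether one uses the reduced or the full Planck energy:

    E_{\text{Pl}} \approx 1.2\times 10^{19}\ \text{GeV}, \qquad \bar{E}_{\text{Pl}} = E_{\text{Pl}}/\sqrt{8\pi} \approx 2.4\times 10^{18}\ \text{GeV}, \qquad E_{\text{collider}} \approx 100\ \text{TeV} = 10^{5}\ \text{GeV},

so the gap is a factor of roughly 2\times 10^{13} to 10^{14}.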

The only thing we can reliably say a next larger collider will do is measure more precisely the properties of the already known fundamental particles. That it may tell us something about dark matter, or dark energy, or the matter-antimatter symmetry is a hope, not a prediction.

Particle physicists had a good case to build the LHC with the prediction of the Higgs-boson. But with the Higgs found, the next larger collider has no good motivation. The year is 2019, not 1999.

Letter from a reader: “Someone has to write such a book” we used to say

Dear Sabine,

congratulations on your book. I read it this summer and enjoyed it very much. For people like me, working in solid state physics, the issues you addressed have been a recurrent topic of conversation over lunch during the last decade. “Someone has to write such a book,” we used to say; it necessarily had to be someone from inside this community. I am glad that you did it.

I came to your book through the nice review published in Nature. I was disappointed with the one I read later in Science, and also with the recent one in Physics Today by Wilczek (“...and beautiful ideas from information theory are illuminating physical algorithms and quantum network design”; does that make sense to anyone?!). To be honest, he should list all the beautiful ideas that were developed, and then the brief list of the ones that found agreement with experiment. This would be a scientific approach to test whether such a statement makes sense, would you agree?

I am sending you a comment from Philip Anderson on string theory. I don’t think you mention it in your book, but I guess you have heard of it.

Best regards,

Daniel

---------------------------------------------------
Prof. Daniel Farias
Dpto. de Física de la Materia Condensada
Universidad Autónoma de Madrid
Phone: +34 91 497 5550
----------------------------------------------------

[The mentioned comment is Philip Anderson’s response to the EDGE annual question 2005, “What do you believe is true even though you cannot prove it?”, which I append below for your amusement.]

Is string theory a futile exercise as physics, as I believe it to be? It is an interesting mathematical specialty and has produced and will produce mathematics useful in other contexts, but it seems no more vital as mathematics than other areas of very abstract or specialized math, and doesn't on that basis justify the incredible amount of effort expended on it.

My belief is based on the fact that string theory is the first science in hundreds of years to be pursued in pre-Baconian fashion, without any adequate experimental guidance. It proposes that Nature is the way we would like it to be rather than the way we see it to be; and it is improbable that Nature thinks the same way we do.

The sad thing is that, as several young would-be theorists have explained to me, it is so highly developed that it is a full-time job just to keep up with it. That means that other avenues are not being explored by the bright, imaginative young people, and that alternative career paths are blocked.

Wednesday, January 16, 2019

Particle physicists want money for bigger collider

Illustration of a particle collision. [Screenshot from this video]
The Large Hadron Collider (LHC) at CERN is currently the world’s largest particle collider. But in a decade its days will come to an end. Particle physicists are now making plans for the future. Yesterday, CERN issued a press-release about a design study for their planned machine, called the Future Circular Collider (FCC).

There are various design options for the FCC. Costs start at €9 billion for the least expensive version and go up to €21 billion for the big vision. The idea is to dig a longer ring-tunnel, in which electrons would first be brought to collision with positrons at energies from 91 to 365 GeV. The operation energies are chosen to enable more detailed studies of specific particles than the LHC allows. This machine would later be upgraded for proton-proton collisions at higher energies, reaching up to 100 TeV (or 100,000 GeV). In comparison, the LHC’s maximum design energy is 14 TeV.

€9 billion is a lot of money and given what we presently know, I don’t think it’s worth it. It is possible that if we reach higher energies, we will find new particles, but we do not currently have any good reason to think this will happen. Of course if the LHC finds something after all, the situation will entirely change and everyone will rush to build the next collider. But without that, the only thing we know that a larger collider will reliably do is measure in greater detail the properties of the already-known particles.

The design reports acknowledge this, but obfuscate the point. The opening statement, for example, says:
“[Several] experimental facts do require the extension of the Standard Model and explanations are needed for observations such as the abundance of matter over antimatter, the striking evidence for dark matter and the non-zero neutrino masses. Theoretical issues such as the hierarchy problem, and, more in general, the dynamical origin of the Higgs mechanism, do point to the existence of physics beyond the Standard Model.” (original emphasis)
The accompanying video similarly speaks vaguely of “big questions”, something to do with 95% of the universe (referring to dark matter and dark energy), and creates the impression that a larger collider would tell us something interesting about that.


It is correct that the standard model requires extension, but there is no reason that the new physical effects, like particles making up dark matter, must be accessible at the next larger collider. Indeed, the currently most reliable predictions put any new physics at energies 14 orders of magnitude higher, well out of the reach of any collider we’ll be able to build in the coming centuries. This is noted later in the report, where you can read: “Today high energy physics lacks unambiguous and guaranteed discovery targets.”

The report uses some highly specific examples of hypothetical particles that could be ruled out, such as certain WIMP candidates or supersymmetric particles. Again, that’s correct. But there is no good argument for why those particular particles should be the right ones. Physicists have no end of conjectured new particles. You’d end up ruling out a few among millions of models and make little progress, just like with the LHC and the earlier colliders.

We are further offered the usual arguments that investing in a science project of this size would benefit industry, education, and scientific networks. This is all true, but not specific to particle colliders. Any large-scale experiment would have such benefits. I do not find such arguments remotely convincing.

Another reason I am not excited about the current plans for a larger collider is that we might get more bang for the buck if we waited for better technologies. There is plasma wakefield acceleration, for example, which is now in a test phase and may become a more efficient route to progress. Also, high-temperature superconductors may reach a level where they become usable for the magnets. Both of these technologies may become available in a decade or two, but they are not presently developed enough to be used for the next collider.

Therefore, investment-wise, it would make more sense to put particle physics on pause and reconsider it in, say, 20 years to see whether the situation has changed, either because new technologies have become available or because more concrete predictions for new physics have been made.

At present, other large-scale experiments would more reliably offer new insights into the foundations of physics. Anything that peers back into the early universe, such as big radio telescopes, for example, or anything that probes the properties of dark matter. There are also medium and small-scale experiments that tend to fall off the table if big collaborations eat up the bulk of money and attention. And that’s leaving aside that we might be better off investing in other areas of science entirely.

Of course a blog post cannot replace a detailed cost-benefit assessment, so I cannot tell you what the best investment would be. I can, however, tell you that a bigger particle collider is one of the most expensive experiments you can think of, and we do not currently have a reason to think it would discover anything new. Ie, large cost, little benefit. That much is pretty clear.

I think the Chinese are not dumb enough to build the next bigger collider. If they do, they might end up being the first nation ever to run and operate such a costly machine without finding anything new. It’s not how they hope to enter history books. So, I consider it unlikely they will go for it.

What the Europeans will do is harder to predict, because a lot depends on who has influential friends in which ministry. But I think particle physicists have dug their own grave by giving the public the impression that the LHC would answer some big question, and then not being able to deliver.

Sunday, January 13, 2019

Good Problems in the Foundations of Physics

Image: openclipart.org
Look at the history of physics, and you will find that breakthroughs come in two different types. Either observations run into conflict with predictions and a new theory must be developed. Or physicists solve a theoretical problem, resulting in new predictions which are then confirmed by experiment. In both cases, problems that give rise to breakthroughs are inconsistencies: Either theory does not agree with data (experiment-led), or the theories have internal disagreements that require resolution (theory-led).

We can classify the most notable breakthroughs this way: Electric and magnetic fields (experiment-led), electromagnetic waves (theory-led), special relativity (theory-led), quantum mechanics (experiment-led), general relativity (theory-led), the Dirac equation (theory-led), the weak nuclear force (experiment-led), the quark-model (experiment-led), electro-weak unification (theory-led), the Higgs-boson (theory-led).

That’s an oversimplification, of course, and leaves aside the myriad twists and turns and personal tragedies that make scientific history interesting. But it captures the essence.

Unfortunately, in the past decades it has become fashionable among physicists to present the theory-led breakthroughs as a success of beautiful mathematics.

Now, it is certainly correct that in some cases the theorists making such breakthroughs were inspired by math they considered beautiful. This is well-documented, eg, for both Dirac and Einstein. However, as I lay out in my book, arguments from beauty have not always been successful. They worked in cases when the underlying problem was one of consistency. They failed in other cases. As the philosopher Radin Dardashti put it aptly, scientists sometimes work on the right problem for the wrong reason.

That breakthrough problems were those which harbored an inconsistency is true even for the often-told story of the prediction of the charm quark. The charm quark, so they will tell you, was a prediction based on naturalness, which is an argument from beauty. However, we also know that the theories which particle physicists used at the time were not renormalizable and would therefore break down at some energy. Once electro-weak unification removes this problem, the requirement of gauge-anomaly cancellation tells you that a fourth quark is necessary. But this isn’t a prediction based on beauty. It’s a prediction based on consistency.

This, I must emphasize, is not what historically happened. Weinberg’s theory of the electro-weak unification came after the prediction of the charm quark. But in hindsight we can see that the reason this prediction worked was that it was indeed a problem of consistency. Physicists worked on the right problem, if for the wrong reasons.
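
To spell out the consistency requirement in its simplest, textbook version (my summary, added for illustration): the gauge anomalies of the electroweak theory cancel only if the electric charges of all fermions in a generation add up to zero, where each quark counts three times because it comes in three colors. With the strange quark, the muon, and its neutrino alone the sum does not vanish; a fourth quark of charge +2/3 restores it:

    3\left(-\tfrac{1}{3}\right) + (-1) + 0 = -2 \neq 0, \qquad 3\left(+\tfrac{2}{3}\right) + 3\left(-\tfrac{1}{3}\right) + (-1) + 0 = 0.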

What can we learn from this?

Well, one thing we learn is that if you rely on beauty you may get lucky. Sometimes it works. Feyerabend, I think, had it basically right when he argued “anything goes.” Or, as the late German chancellor Kohl put it, “What matters is what comes out in the end.”

But we also see that if you happen to insist on the wrong ideal of beauty, you will not make it into history books. Worse, since our conception of what counts as a beautiful theory is based on what worked in the past, it may actually get in the way if a breakthrough requires new notions of beauty.

The more useful lesson to learn, therefore, is that the big theory-led breakthroughs could have been based on sound mathematical arguments, even if in practice they came about by trial and error.

The “anything goes” approach is fine if you can test a large number of hypotheses and then continue with the ones that work. But in the foundations of physics we can no longer afford “anything goes”. Experiments are now so expensive and take such a long time to build that we have to be very careful when deciding which theories to test. And if we take a clue from history, then the most promising route to progress is to focus on problems that are either inconsistencies with data or internal inconsistencies of the theories.

At least that’s my conclusion.

It is far from my intention to tell anyone what to do. Indeed, if there is any message I tried to get across in my book it’s that I wish physicists would think more for themselves and listen less to their colleagues.

Having said this, I have gotten a lot of emails from students asking me for advice, and I recall how difficult it was for me as a student to make sense of the recent research trends. For this reason I append below my assessment of some of the currently most popular problems in the foundations of physics. Not because I want you to listen to me, but because I hope that the argument I offered will help you come to your own conclusion.

(You find more details and references on all of this in my book.)



Dark Matter
Is an inconsistency between theory and experiment and therefore a good problem. (The issue with dark matter isn’t whether it’s a good problem or not, but to decide when to consider the problem solved.)

Dark Energy
There are different aspects of this problem, some of which are good problems, others not. The question why the cosmological constant is small compared to (powers of) the Planck mass is not a good problem because there is nothing wrong with just choosing it to be a certain constant. The question why the cosmological constant is presently comparable to the density of dark matter is likewise a bad problem because it isn’t associated with any inconsistency. On the other hand, the absence of observable fluctuations around the vacuum energy (what Afshordi calls the “cosmological non-constant problem”) and the question why the zero-point energy gravitates in atoms but not in the vacuum (details here) are good problems.

The Hierarchy Problem
The hierarchy problem is the big difference between the strength of gravity and the other forces in the standard model. There is nothing contradictory about this, hence not a good problem.

Grand Unification
A lot of physicists would rather have one unified force in the standard model than three different ones. There is, however, nothing wrong with the three different forces. I am undecided as to whether the almost-prediction of the Weinberg-angle from breaking a large symmetry group does or does not require an explanation.

Quantum Gravity
Quantum gravity removes an inconsistency and is hence a solution to a good problem. However, I must add that there may be other ways to resolve the problem besides quantizing gravity.

Black Hole Information Loss
A good problem in principle. Unfortunately, there are many different ways to fix the problem and no way to experimentally distinguish between them. So while it’s a good problem, I don’t consider it a promising research direction.

Particle Masses
It would be nice to have a way to derive the masses of the particles in the standard model from a theory with fewer parameters, but there is nothing wrong with these masses just being what they are. Thus, not a good problem.

Quantum Field Theory
There are various problems with quantum field theories where we lack a good understanding of how the theory works and that require a solution. The UV Landau pole in the standard model is one of them. It must be resolved somehow, but just exactly how is not clear. We also do not have a good understanding of the non-perturbative formulation of the theory, and the infrared behavior turns out to be not as well understood as we thought only a few years ago (see eg here).

The Measurement Problem
The measurement problem in quantum mechanics is typically thought of as a problem of interpretation and then left to philosophers to discuss. I think that’s a mistake; it is an actual inconsistency. The inconsistency comes from the need to postulate the behavior of macroscopic objects when that behavior should instead follow from the theory of the constituents. The measurement postulate, hence, is inconsistent with reductionism.

The Flatness Problem
Is an argument from finetuning and not well-defined without a probability distribution. There is nothing wrong with the (initial value of the) curvature density just being what it is. Thus, not a good problem.

The Monopole Problem
That’s the question why we haven’t seen magnetic monopoles. It is quite plausibly solved by them not existing. Also not a good problem.

Baryon Asymmetry and The Horizon Problem
These are both finetuning problems that rely on the choice of an initial condition which is considered to be unlikely. However, there is no way to quantify how likely the initial condition is, so the problem is not well-defined.

There are, furthermore, always a variety of anomalies where data disagrees with theory. Those can linger at low significance for a long time and it’s difficult to decide how seriously to take them. For those I can only give you the general advice that you listen to experimentalists (preferably some who are not working on the experiment in question) before you listen to theorists. Experimentalists often have an intuition for how seriously to take a result. That intuition, however, usually doesn’t make it into publications because it’s impossible to quantify. Measures of statistical significance don’t always tell the full story.

Saturday, January 12, 2019

Book Review: “Quantum Space” by Jim Baggott

Quantum Space
Loop Quantum Gravity and the Search for the Structure of Space, Time, and the Universe
By Jim Baggott
Oxford University Press (January 22, 2019)


In his new book Quantum Space, Jim Baggott presents Loop Quantum Gravity (LQG) as the overlooked competitor of String Theory. He uses a chronological narrative that follows the lives of Lee Smolin and Carlo Rovelli. The book begins with their nascent interest in quantum gravity, continues with their formal education, their later collaboration, and, in the final chapters, follows them as their ways separate. Along with the personal stories, Baggott introduces background knowledge and lays out the scientific work.

Quantum Space is structured into three parts. The first part covers the basics necessary to understand the key ideas of Loop Quantum Gravity. Here, Baggott goes through the essentials of special relativity, general relativity, quantum mechanics, quantum field theory, and the standard model of particle physics. The second part lays out the development of Loop Quantum Gravity and the main features of the theory, notably the emphasis on background independence.

The third part is about recent applications, such as the graviton propagator, the calculation of black hole entropy, and the removal of the big bang singularity. You also find there Sundance Bilson-Thompson’s idea that elementary particles are braids in the spin-foam. In this last part, Baggott further includes Rovelli’s and Smolin’s ideas about the foundations of quantum mechanics, as well as Rovelli and Vidotto’s Planck Stars, and Smolin’s ideas about the reality of time and cosmological natural selection.

The book’s epilogue is an extended Q&A with Smolin and Rovelli and ends with mentioning the connections between String Theory and Loop Quantum Gravity (which I wrote about here).

Baggott writes very well and he expresses himself clearly, aided by about two dozen figures and a glossary. The book, however, requires some tolerance for technical terminology. While Baggott does an admirable job explaining advanced physics – such as Wilson loops, parallel transport, spinfoam, and renormalizability – and does not shy away from complex topics – such as the fermion doubling problem, the Wheeler-De-Witt equation, Shannon entropy, or extremal black holes – for a reader without prior knowledge in the field, this may be tough going.

We know from Baggott’s 2013 book “Farewell to Reality” that he is not fond of String Theory, and in Quantum Space, too, he does not hold back with criticism. On Edward Witten’s conjecture of M-theory, for example, he writes:
“This was a conjecture, not a theory…. But this was, nevertheless, more than enough to set the superstring bandwagon rolling even faster.”
In Quantum Space, Baggott also reprints Smolin’s diagnosis of the String Theory community, which attributes to string theorists “tremendous self-confidence,” “group think,” “confirmation bias,” and “a complete disregard and disinterest in the opinions of anyone outside the group.”

Baggott further claims that the absence of new particles at the Large Hadron Collider is bad news for string theory*:
“Some argue that string theory is the tighter, more mathematically rigorous and consistent, better-defined structure. But a good deal of this consistency appears to have been imported through the assumption of supersymmetry, and with each survey published by the ATLAS or CMS detector collaborations at CERN’s Large Hadron Collider, the scientific case for supersymmetry weakens some more.”
I’d have some other minor points to quibble with, but given the enormous breadth of topics covered, I think Baggott’s blunders are few and far between.

I must admit, however, that the structure of the book did not make a lot of sense to me. Baggott introduces a lot of topics that he later does not need and whose relevance to LQG escapes me. For example, he goes on about the standard model and the Higgs-mechanism in particular, but that doesn’t play any role later. He also spends quite some time on the interpretation of quantum mechanics, which isn’t actually necessary to understand Loop Quantum Gravity. I also don’t see what Lee Smolin’s cosmological natural selection has to do with anything. But these are stylistic issues.

The bigger issue I have with the book is that Baggott is as uncritical of Loop Quantum Gravity as he is critical of String Theory. There is no mention in the book of the problem of recovering local Lorentz-invariance, an issue that has greatly bothered both Joe Polchinski and Jorge Pullin. Baggott presents Loop Quantum Cosmology (the LQG-based approach to understanding the early universe) as testable, but forgets to note that the predictions depend on an adjustable parameter, and also that it would be extremely hard to tell the LQG-based models apart from other models. Nor does he, for balance, mention String Cosmology. He does not mention the problem with the supposed derivation of the black hole entropy by Bianchi, and he does not mention the problems with Planck stars.

And if he had done a little digging, he’d have noticed that the group-think in LQG is as bad as it is in string theory.

In summary, Quantum Space is an excellent, non-technical introduction to Loop Quantum Gravity that is chock-full of knowledge. It will, however, give you a rather uncritical view of the field.

[Disclaimer: Free review copy.]


* I explained here why the non-discovery of supersymmetric particles at the LHC has no relevance for string theory.

Wednesday, January 09, 2019

The Real Problems with Artificial Intelligence

R2D2 costume for toddlers. [image: amazon.com]
In recent years many prominent people have expressed worries about artificial intelligence (AI). Elon Musk thinks it’s the “biggest existential threat.” Stephen Hawking said it could “be the worst event in the history of our civilization.” Steve Wozniak believes that AIs will “get rid of the slow humans to run companies more efficiently,” and Bill Gates, too, put himself in “the camp that is concerned about super intelligence.”

In 2015, the Future of Life Institute formulated an open letter calling for caution and formulating a list of research priorities. It was signed by more than 8,000 people.

Such worries are not unfounded. Artificial intelligence, like any new technology, brings risks. While we are far from creating machines even remotely as intelligent as humans, it’s only smart to think about how to handle them sooner rather than later.

However, these worries neglect the more immediate problems that AI will bring.

Artificially intelligent machines won’t get rid of humans any time soon because they’ll need us for quite a while yet. The human brain may not be the best thinking apparatus, but it has a distinct advantage over all the machines we have built so far: It functions for decades. It’s robust. It repairs itself.

A few million years of evolution have optimized our bodies, and while the result could certainly be further improved (damn those knees), it’s still more durable than any silicon-based thinking apparatus we have created. Some AI researchers have even argued that a body of some kind is necessary to reach human-level intelligence, which – if correct – would vastly add to the problem of AI fragility.

Whenever I bring up this issue with AI enthusiasts, they tell me that AIs will learn to repair themselves, and even if not, they will just upload themselves to another platform. Indeed, much of the perceived AI-threat comes from them replicating quickly and easily, while at the same time being basically immortal. I think that’s not how it will go.

Artificial Intelligences at first will be few and one-of-a-kind, and that’s how it will remain for a long time. It will take large groups of people and many years to build and train an AI. Copying them will not be any easier than copying a human brain. They’ll be difficult to fix once broken, because, as with the human brain, we won’t be able to separate their hardware from the software. The early ones will die quickly for reasons we will not even comprehend.

We see the beginning of this trend already. Your computer isn’t like my computer. Even if you have the same model, even if you run the same software, they’re not the same. Hackers exploit these differences between computers to track your internet activity. Canvas fingerprinting, for example, is a method of asking your computer to render a font and output an image. The exact way your computer performs this task depends both on your hardware and your software, hence the output can be used to identify a device.
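
As an aside, here is roughly what such a fingerprinting script looks like in the browser. This is a minimal sketch for illustration only; real trackers draw more elaborate test images and combine the result with other device properties:

    // Minimal canvas-fingerprinting sketch (illustration only): render some
    // text, serialize the pixels, and hash the result. Small differences in
    // fonts, anti-aliasing, and graphics drivers make the hash differ
    // between devices.
    function canvasFingerprint(): string {
      const canvas = document.createElement("canvas");
      canvas.width = 220;
      canvas.height = 40;
      const ctx = canvas.getContext("2d");
      if (!ctx) return "no-canvas";

      ctx.textBaseline = "top";
      ctx.font = "16px Arial";
      ctx.fillStyle = "#f60";
      ctx.fillRect(0, 0, 220, 40);                 // colored background
      ctx.fillStyle = "#069";
      ctx.fillText("fingerprint test, 123", 4, 8); // text whose rendering varies

      const data = canvas.toDataURL();             // device-dependent image string

      // Simple 32-bit rolling hash of the image string.
      let hash = 0;
      for (let i = 0; i < data.length; i++) {
        hash = (hash * 31 + data.charCodeAt(i)) | 0;
      }
      return (hash >>> 0).toString(16);
    }

    console.log("canvas fingerprint:", canvasFingerprint());

The point is not the particular hash, but that the rendered pixels, and with them the output, differ slightly from one machine to the next.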

Presently, you do not notice these subtle differences between computers all that much (except possibly when you spend hours browsing help forums thinking “someone must have had this problem before” and turn up nothing). But the more complex computers get, the more obvious the differences will become. One day, they will be individuals with irreproducible quirks and bugs – like you and me.

So we have AI fragility plus the trend of increasingly complex hardware and software becoming unique. Now extrapolate this some decades into the future. We will have a few large companies, governments, and maybe some billionaires who will be able to afford their own AI. Those AIs will be delicate and need constant attention by a crew of dedicated humans.

This brings up various immediate problems:

1. Who gets to ask questions and what questions?

This may not be a matter of discussion for privately owned AI, but what about those produced by scientists or bought by governments? Does everyone get a right to a question per month? Do difficult questions have to be approved by the parliament? Who is in charge?

2. How do you know that you are dealing with an AI?

The moment you start relying on AIs, there’s a risk that humans will use them to push an agenda by passing off their own opinions as those of the AI. This problem will occur well before AIs are intelligent enough to develop their own goals.

3. How can you tell that an AI is any good at giving answers?

If you only have a few AIs and those are trained for entirely different purposes, it may not be possible to reproduce any of their results. So how do you know you can trust them? It could be a good idea to ask that all AIs have a common area of expertise that can be used to compare their performance.

4. How do you prevent limited access to AI from increasing inequality, both within nations and between nations?

Having an AI to answer difficult questions can be a great advantage, but left to market forces alone it’s likely to make the rich richer and leave the poor behind even farther. If this is not something that we want – and I certainly don’t – we should think about how to deal with it.

Monday, January 07, 2019

Letter from a reader: “What’s so bad about randomness?”

[The best part of publishing a book has been getting feedback from readers who report their own experience as it relates to what I wrote about. With permission, I want to share this letter which I received the other day from Dave Hurwitz, a classical music critic.]

Dear Dr. Hossenfelder,

I hope that I am not bothering you, but I just wanted to write to tell you how much I am enjoying your book “Lost in Math.” I haven’t quite finished it yet, but I was so taken with it that I thought I might write to let you know anyway. I am about as far away from theoretical physics as it’s possible to be: I am a classical music critic and independent musical scholar, and I support myself working in real estate; but I am a very serious follower of the popular scientific literature, and I was so impressed by your directness, literacy, and ability to make complex topics digestible and entertaining for the general reader.

I am also very much in sympathy with your point of view. Even though I don’t understand the math, it often seems to me that so much of what theoretical physicists are doing amounts to little more than a sort of high-end gematria – numerology with a kind of mystical value assigned to mathematical coincidence or consistency, or, as you (they) call it, “beauty.” I cringe whenever I hear these purely aesthetic judgments applied to theoretical speculation about the nature of reality, based primarily on the logic of the underlying math. And don’t get me wrong: I like math. Personally, I have no problem with the idea that the laws governing the universe may not be elegant and tidy, and I see no reason why they should be. They are what they are, and that’s all. What’s so bad about randomness? It’s tough enough trying to figure out what they are without assigning to them purely subjective moral or aesthetic values (or giving these undue weight in guiding the search).

It may interest you to know that something similar seems to be infecting current musicology, and I am sure many other academic fields. Discussion of specific musical works often hinges on standardized and highly technical versions of harmonic analysis, mostly because the language and methodology have been systematized and everyone agrees on how to do it – but what it actually means, how it creates meaning or expressiveness, is anyone’s guess. It is assumed to be important, but there is no demonstrable causal connection between the correctness of the analysis and the qualitative values assigned to it. It all comes down to a kind of circular reasoning: the subjective perception of “beauty” drives the search for a coherent musical substructure which, not surprisingly, once described is alleged to justify the original assumption of “beauty.” If you don’t “get” physicists today, then I don’t “get” musicologists.

Anyway, I’m sorry to take up so much of your time, but I just wanted to note that what you see – the kind of reasoning that bothers you so much – has its analogues way beyond the field of theoretical physics. I take your point that scientists, perhaps, should know better, but the older I get the more I realize two things: first, human nature is the same everywhere, and second, as a consequence, it’s precisely the people who ought to know better that, for the most part, seldom do. I thank you once again for making your case so lucidly and incisively.

Best regards,

Dave Hurwitz
ClassicsToday.com

Thursday, January 03, 2019

Book Update

During the holidays I got several notes from people who tried to order my book but were told it’s out of print or not in stock. Amazon likewise had only used copies on sale, starting at $50 and up. Today I have good news: My publisher informed me that the book has been reprinted and should become available again in the next few days.

The German translation meanwhile is in the third edition (the running head has been fixed!). The Spanish translation will appear in June with a publisher by the name of Ariel. Other translations to come are Japanese, Chinese, Russian, Italian, French, Korean, Polish, and Romanian. Amazon now also offers an English audio version.

Many thanks to all readers!



Oh, and I still don’t have a publisher in the UK.

Wednesday, January 02, 2019

Electrons don’t think

Brainless particles leaving tracks in a bubble chamber. [image source]
I recently discovered panpsychism. That’s the idea that all matter – animate or inanimate – is conscious; we just happen to be somewhat more conscious than carrots. Panpsychism is the modern élan vital.

When I say I “discovered” panpsychism, I mean I discovered there’s a bunch of philosophers who produce pamphlets about it. How do these philosophers address the conflict with evidence? Simple: They don’t.

Now, look, I know that physicists have a reputation of being narrow-minded. But the reason we have this reputation is that we tried the crazy shit long ago and just found it doesn’t work. You call it “narrow-minded,” we call it “science.” We have moved on. Can elementary particles be conscious? No, they can’t. It’s in conflict with evidence. Here’s why.

We know 25 elementary particles. These are collected in the standard model of particle physics. The predictions of the standard model agree with experiment to the best precision we can measure.

The particles in the standard model are classified by their properties, which are collectively called “quantum numbers.” The electron, for example, has an electric charge of -1 and it can have a spin of +1/2 or -1/2. There are a few other quantum numbers with complicated names, such as the weak hypercharge, but really they’re not so important here. Point is, there are a handful of those quantum numbers and they uniquely identify an elementary particle.

If you calculate how many particles of a certain type are produced in a particle collision, the result depends on how many variants of the produced particle exist. In particular, it depends on the different values the quantum numbers can take. Since the particles have quantum properties, anything that can happen will happen. If a particle exists in many variants, you’ll produce them all – regardless of whether or not you can distinguish them. The result is that you see more of them than the standard model predicts.
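
A textbook example of this kind of counting, added here for illustration: the rate for producing quarks in electron-positron collisions is proportional to the number of quark colors, which is how we know there are exactly three of them. Below the charm threshold,

    R \equiv \frac{\sigma(e^+e^- \to \text{hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)} = N_c \sum_{q=u,d,s} Q_q^2 = N_c \left(\tfrac{4}{9} + \tfrac{1}{9} + \tfrac{1}{9}\right) = 2 \quad \text{for } N_c = 3.

Any additional internal states, whatever you call them, would show up in ratios like this.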

Now, if you want a particle to be conscious, your minimum expectation should be that the particle can change. It’s hard to have an inner life with only one thought. But if electrons could have thoughts, we’d long have seen this in particle collisions because it would change the number of particles produced in collisions.

In other words, electrons aren’t conscious, and neither are any other particles. It’s incompatible with data.

As I explain in my book, there are ways to modify the standard model that do not run into conflict with experiment. One of them is to make new particles so massive that so far we have not managed to produce them in particle collisions, but this doesn’t help you here. Another way is to make them interact so weakly that we haven’t been able to detect them. This too doesn’t help here. The third way is to assume that the existing particles are composed of more fundamental constituents that are, however, so strongly bound together that we have not yet been able to tear them apart.

With the third option it is indeed possible to add internal states to elementary particles. But if your goal is to give consciousness to those particles so that we can inherit it from them, strongly bound composites do not help you. They do not help you precisely because this consciousness is hidden so well that it takes a lot of energy to access. This then means, of course, that you cannot use it at lower energies, like the ones typical for soft and wet thinking apparatuses like human brains.

Summary: If a philosopher starts speaking about elementary particles, run.

Thursday, December 27, 2018

How the LHC may spell the end of particle physics

The Large Hadron Collider (LHC) recently completed its second experimental run. It now undergoes a scheduled upgrade to somewhat higher energies, at which more data will be collected. Besides the Higgs-boson, the LHC has not found any new elementary particle.

It is possible that in the data yet to come some new particle eventually shows up. But particle physicists are nervous. It’s not looking good – besides a few anomalies that are not statistically significant, there is no evidence for anything out of the ordinary. And if the LHC finds nothing new, there is no reason to think the next larger collider will. In which case, why build one?

That the LHC finds the Higgs and nothing else was dubbed the “nightmare scenario” for a reason. For 30 years, particle physicists have told us that the LHC should find something besides that, something exciting: a particle for dark matter, additional dimensions of space, or maybe a new type of symmetry. Something that would prove that the standard model is not all there is. But this didn’t happen.

All those predictions for new physics were based on arguments from naturalness. I explained in my book that naturalness arguments are not mathematically sound and one shouldn’t have trusted them.

The problem particle physicists now have is that naturalness was the only reason to think that there should be new physics at the LHC. That’s why they are getting nervous. Without naturalness, there is no argument for new physics at energies even higher than that of the LHC. (Not until 15 orders of magnitude higher, which is when the quantum structure of spacetime should become noticeable. But energies so large will remain inaccessible for the foreseeable future.)

How have particle physicists reacted to the situation? Largely by pretending nothing happened.

One half continues to hope that something will show up in the data, eventually. Maybe naturalness is just more complicated than we thought. The other half pre-emptively fabricates arguments for why a next larger collider should see new particles. And a few just haven’t noticed they walked past the edge of the cliff. A recent report about Beyond the Standard Model Physics at the LHC, for example, still reiterates that “naturalness [is] the main motivation to expect new physics.”

Regardless of their coping strategy, a lot of particle physicists probably now wish they had never made those predictions. Therefore I think it’s a great time to look at who said what. References below.

Some lingo ahead: “eV” stands for electron volt and is a measure of energy. Particle colliders are classified by the energy that they can test. Higher energy means that the collisions resolve smaller structures. The LHC will reach up to 14 tera-electron volts (TeV). The “electroweak scale” or “electroweak energy” is typically said to be around the mass of the Z-boson, which is about 100 giga-electron volts (GeV), ie a factor 100 below what the LHC reaches.

Also note that even though the LHC reaches energies up to 14 TeV, it collides protons, and those are not elementary particles but composites of quarks and gluons. The total collision energy is therefore distributed over the constituent particles, meaning that constraints on the masses of new particles are below the collision energy. How good the constraints are depends on the expected number of interactions and the amount of data collected. The current constraints are typically at some TeV and will increase as more data is analyzed.
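
In formulas (the standard kinematic relation, added here for illustration): if the colliding quarks or gluons carry fractions x_1 and x_2 of their protons’ momenta, the energy available for producing new particles is

    \sqrt{\hat{s}} = \sqrt{x_1 x_2\, s} \;\le\; \sqrt{s} = 14\ \text{TeV},

and since typically x_1 and x_2 are much smaller than 1, the effective reach is a few TeV rather than the full 14 TeV.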

With that ahead, let us start in 1987 with Barbieri and Giudice:
“The implementation of this “naturalness” criterion, gives rise to a physical upper bound on superparticle masses in the TeV range.”
In 1994, Anderson and Castano write:
“[In] the most natural scenarios, many sparticles, for example, charginos, squarks, and gluinos, lie within the physics reach of either LEP II or the Tevatron”
and
“supersymmetry cannot provide a complete explanation of weak scale stability, if squarks and gluinos have masses beyond the physics reach of the LHC.”
LEP was the Large Electron Positron collider. LEP1 and LEP2 refer to the two runs of the experiment.

In 1995, Dimopoulos and Giudice tell us similarly:
“[If] minimal low-energy supersymmetry describes the world with no more than 10% fine tuning, then LEP2 has great chances to discover it.”
In 1997, Erich Poppitz writes:
“Within the next 10 years—with the advent of the Large Hadron Collider—we will have the answer to the question: “Is supersymmetry relevant for physics at the electroweak scale?””
On to 1998, when Louis, Brunner, and Huber tell us the same thing:
“These models do provide a solution to the naturalness problem as long as the supersymmetric partners have masses not much bigger than 1 TeV.”
It was supposed to be an easy discovery, as Frank Paige wrote in 1998:
“Discovering gluinos and squarks in the expected mass range [...] seems straightforward, since the rates are large and the signals are easy to separate from Standard Model backgrounds.”
Giudice and Rattazzi in 1998 emphasize that naturalness is why they believe in physics beyond the standard model:
“The naturalness (or hierarchy) problem, is considered to be the most serious theoretical argument against the validity of the Standard Model (SM) of elementary particle interactions beyond the TeV energy scale. In this respect, it can be viewed as the ultimate motivation for pushing the experimental research to higher energies.”
They go on to praise the beauty of supersymmetry: “An elegant solution to the naturalness problem is provided by supersymmetry...”

In 1999, Alessandro Strumia, interestingly enough, concludes that the LEP results are really bad news for supersymmetry:
“the negative results of the recent searches for supersymmetric particles pose a naturalness problem to all ‘conventional’ supersymmetric models.”
In his paper, he stresses repeatedly that his conclusion applies only to certain supersymmetric models. Which is of course correct. The beauty of supersymmetry is that it’s so adaptive it evades all constraints.

Most particle physicists were utterly undeterred by the negative LEP results. They just moved their predictions to the next larger collider, the Tevatron, and then the LHC.

In 2000, Feng, Matchev, and Moroi write:
“This has reinforced a widespread optimism that the next round of collider experiments at the Tevatron, LHC or the NLC are guaranteed to discover all superpartners, if they exists.”
(NLC stands for Next Linear Collider, which was a proposal in the early 2000s that has since been dropped.) They also reiterate that supersymmetry should be easy to find at the LHC:
“In contrast to the sfermions, gauginos and higgsinos cannot be very heavy in this scenario. For example … gauginos will be produced in large numbers at the LHC, and will be discovered in typical scenarios.”
In 2004, Stuart Raby concedes that naturalness arguments are already in trouble:
“Simple ‘naturalness’ arguments would lead one to believe that SUSY should have been observed already.”
But of course that’s just reason to consider not-so-simple naturalness arguments.

In the same year, Fabiola Gianotti bangs the drum for the LHC (emphasis mine):
“The above [naturalness] arguments open the door to new and more fundamental physics. There are today several candidate scenarios for physics beyond the Standard Model, including Supersymmetry (SUSY), Technicolour and theories with Extra-dimensions. All of them predict new particles in the TeV region, as needed to stabilize the Higgs mass. We note that there is no other scale in particle physics today as compelling as the TeV scale, which strongly motivates a machine like the LHC able to explore directly and in detail this energy range.”
She praises supersymmetry as “very attractive” and also tells us that the discovery should be easy and fast:
“SUSY discovery at the LHC could be relatively easy and fast… Squark and gluino masses of 1 TeV are accessible after only one month of data taking… The ultimate mass reach is up to ∼ 3 TeV for squarks and gluinos. Therefore, if nothing is found at the LHC, TeV-scale Supersymmetry will most likely be ruled out, because of the arguments related to stabilizing the Higgs mass mentioned above.”
In 2005, Nima Arkani-Hamed and Savas Dimopoulos have the same tale to tell:
“[Ever] since the mid 1970’s, there has been a widely held expectation that the SM must be incomplete already at the ∼ TeV scale. The reason is the principle of naturalness… Solving the naturalness problem has provided the biggest impetus to constructing theories of physics beyond the Standard Model...”
Same thing with Feng and Wilczek in 2005:
“The standard model of particle physics is fine-tuned… This blemish has been a prime motivation for proposing supersymmetric extensions to the standard model. In models with low-energy supersymmetry, naturalness can be restored by having superpartners with approximately weak-scale masses.”
Here is John Donoghue in 2007:
“[The] argument against finetuning becomes a powerful motivator for new physics at the scale of 1 TeV. The Large Hadron Collider has been designed to find this new physics.”
And Michael Dine who, also in 2007, writes:
“The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.”
The same story, that new physics needs to appear at around a TeV, has been repeated in countless talks and seminars. A few examples are slides by Peter Krieger (2008), Michelangelo Mangano, and Joseph Lykken [slides shown in the original post].

I could go on, but I hope this suffices to document that pretty much everyone

    (a) agreed that the LHC should see new physics besides the Higgs, and
    (b) based this expectation on the same reason, namely naturalness.

In summary: Since the naturalness-based predictions did not pan out, we have no reason to think that the remaining LHC run or an even larger particle collider would see any new physics that is not already explained by the standard model of particle physics. A larger collider would be able to measure more precisely the properties of already known particles, but that is arguably not a terribly exciting exercise. It will be a tough sell for a machine that comes at $10 billion and up. Therefore, it may very well be that the LHC will remain the largest particle collider in human history.



Bonus: A reader submits this gem from David Gross and Ed Witten in the Wall Street Journal, anno 1996:
“There is a high probability that supersymmetry, if it plays the role physicists suspect, will be confirmed in the next decade. The existing accelerators that have a chance of doing so are the proton collider at the Department of Energy’s Fermi Lab in Batavia, Ill., and the electron collider at the European Center for Nuclear Research (CERN) in Geneva. Last year’s final run at Fermi Lab, during which the top quark was discovered, gave tantalizing hints of supersymmetry.”

Wednesday, December 26, 2018

Book review: “On The Future” by Martin Rees

On the Future: Prospects for Humanity
By Martin Rees
Princeton University Press (October 16, 2018)

The future will come, that much is clear. What it will bring, not so much. But speculating about what the future brings is how we make decisions in the present, so it’s a worthwhile exercise. It can be fun, it can be depressing. Rees’ new book “On The Future” is both.

Martin Rees is a cosmologist and astrophysicist. He has also long been involved in public discourse about science, notably the difficulty of integrating scientific evidence into policy making. He is also one of the founding members of the Cambridge Center for Existential Risk and serves on the advisory board of the Future of Life Institute. In brief, Rees thinks ahead, not for 5 years or 10 years, but for 1000 or maybe – gasp – a million years.

In his new book, Rees covers a large number of topics. From the threat of nuclear war, climate change, clean energy, and environmental sustainability to artificial intelligence, bioterrorism, assisted dying, and the search for extraterrestrial life. Rees is clearly a big fan of space exploration and bemoans that today it’s nowhere near as exciting as when he was young.
“I recall a visit to my home town by John Glenn, the first American to go into orbit. He was asked what he was thinking while in the rocket’s nose cone, awaiting launch. He responded, ‘I was thinking that there were twenty thousand parts in this rocket, and each was made by the lowest bidder.’”
I much enjoyed Rees’ book, the biggest virtue of which is brevity. Rees gets straight to the point. He summarizes what we know and don’t know, and what he thinks about where it’ll go, and that’s that. The amount of flowery words in his book is minimal (and those that he uses are mostly borrowed from Carl Sagan).

You don’t have to agree with Rees on his extrapolations into the unknown, but you will end up being well-informed. Rees is also utterly unapologetic about being a scientist to the core. Oftentimes scientists writing about climate change or biotech end up in a forward-defense against denialism, which I find exceedingly tiresome. Rees does nothing of that sort. He sticks with the facts.

It sometimes shows that Rees is a physicist, for example in his enthusiasm for exoplanets, his take on reductionism (“The ‘ordering’ of the sciences in this hierarchy is not controversial.”), and his plug for the multiverse, about which he writes “[The multiverse] is not metaphysics. It’s highly speculative. But it’s exciting science. And it may be true.”

But Rees does not address the biggest challenge we currently face, which is our inability to make use of the knowledge we already have. He is simply silent on the problems we currently see in science, the lack of progress, and the difficulties we face in our society when trying to aggregate evidence to make informed decisions.

In his chapter about “The Limits and Future of Science,” Rees acknowledges the possibility that “some fundamental truths about nature could be too complex for unaided human brains to fully grasp” but fails to notice that unaided human brains are not even able to fully grasp how being part of a large community influences their interests – and with that the decision of what we choose to spend time and resources on.

By omitting to even mention these problems, Rees tells us something about the future too. We may be so busy painting pictures of our destination that we forget to think of a way to reach it.

[Disclaimer: Free review copy.]

Monday, December 24, 2018

Happy Holidays

I have been getting a novel complaint about my music videos, which is that they are “hard to understand.” In case you share this impression, you may be overthinking this. These aren’t press-releases about high-temperature superconductors or neutron star matter; it’s me attempting to sing. But since you ask, this one is about the – often awkward – relation between science and religion.



(Soundcloud version here.)

And since ’tis the season, allow me to mention that you find a donate-button in the top right corner of this website. On this occasion I also wish to express a heart-felt THANK YOU to all of those who sent donations this year. I very much appreciate your support, no matter how small, because it documents that you value my writing.

And here is this year’s family portrait for the Christmas cards.



In Germany, we traditionally celebrate Christmas on the evening of December 24th. So I herewith sign off for the rest of the year. Wish you all Happy Holidays.

Friday, December 21, 2018

Winter Solstice

[Photo: Herrmann Stamm]

The clock says 3:30 am. Is that early or late? Wrapped in a blanket I go into the living room. I open the door and step onto the patio. It’s too warm for December. An almost full moon blurs into the clouds. In the distance, the highway hums.

Somewhere, someone dies.

For everyone who dies, two people are born. 7.5 billion and counting.

We came to dominate planet Earth because, compared to other animals, we learned fast and collaborated well. We used resources efficiently. We developed tools to use more resources, and then employed those tools to use even more resources. But no longer. It’s 2018, and we are failing.

That’s what I think every day when I read the news. We are failing.

Throughout history, humans have improved how they exchange and act on information held by only a few. Speech, writing, politics, economics, social and cultural norms, TV, telephones, the internet. These are all methods of communication. It’s what enabled us to collectively learn and make continuous progress. But now that we have networks connecting billions of people, we have reached our limits.

Fake news, Russian trolls, shame storms. Some dude’s dick in the wrong place. That’s what we talk about.

And buried below the viral videos and memes there’s the information that was not where it was supposed to be. Hurricane Katrina? The problem was known. The 2008 financial crisis? The problem was known. That Icelandic volcano whose ashes, in 2010, grounded flight traffic? Utterly unsurprising. Iceland has active volcanoes. Sometimes the wind blows South-East. Btw, it will happen again. And California is due for a tsunami. The problems are known.

But that’s not how it will end.

20 years ago I had a car accident. I was on a busy freeway. It was raining heavily and the driver in front of me suddenly braked. Only later did I learn that someone had cut him off. I hit the brakes. And then I watched a pair of red lights coming closer.

They say time slows if you fear for your life. It does.

I came to a stop one inch before slamming into the other car. I breathed out. Then a heavy BMW slammed into the back of mine.

Human civilization will go like that. If we don’t keep moving, the problems now behind us will slam into our back. Climate change, environmental pollution, antibiotic resistance, the persistent risk of nuclear war, just to mention a few – you know the list. We will have to deal with those sooner or later. Not now. Oh, no. Not us, not now, not here. But our children. Or their children. If we stop learning, if we stop improving our technologies, it’ll catch up with them, sooner or later.

Having to deal with long-postponed problems will eat up resources. Those resources, then, will not be available for further technological development, which will create further problems, which will eat up more resources. Modern technologies will become increasingly expensive until most people can no longer afford them. Infrastructures will crumble. Education will decay. It’s a downward spiral. A long, unpreventable, disease-ridden regress.

Those artificial intelligences you were banking on? Not going to happen. All the money in the world will not lead to scientific breakthroughs if we don’t have enough people with sufficient education.

Who is to blame? No one, really. We are just too stupid to organize how we live together on a global scale. We will not make it to the next level of evolutionary development. We don’t have the mental faculties. We do not comprehend. We do not act because we cannot. We don’t know how. We will fail and, maybe, in a million years or so, another species will try again.

Climate negotiations stalled over the choice of a word. A single word.

The clouds have drifted and the bushes now throw faint shadows in the moonlight. A cat screeches, or maybe it’s two. Something topples over. An engine starts. Then, silence again.

In the silence, I can hear them scream. All the people who don’t get heard, who pray and hope and wait for someone to please do something. But there is no one to listen. Even the scientists, even people in my own community, do not see, do not want to see, are not willing to look at their failure to make informed decisions in large groups. The problems are known.

Back there on that freeway, the BMW totaled my little Ford. I came away with neck and dental damage, though I wouldn’t realize this until months later. I got out of my car and stood in the rain, thinking I’d be late for class. Again. The passenger’s door of the BMW opened and out came – an umbrella. Then, a tall man in a dark suit. He looked at me and the miserable state of my car and handed me a business card. “Don’t worry,” he said, “my insurance will cover that.” It did.

Of course I’m as stupid as everyone else, screaming screams that no one hears and, against all odds, still hoping that someone has insurance, that someone knows what to do.

I go back into the house. It’s dark inside. I step onto a LEGO, one of the pink ones. They have fewer sharp edges; maybe, I think, that’s why parents keep buying them.

The kids are sleeping. It will be some hours until the husband announces his impending awakening with a morning fart. By standby lights I navigate to my desk.

We are failing. I am failing. But what else can I do but try.

I open my laptop.

Friday, December 14, 2018

Don’t ask what science can do for you.

Among the more peculiar side-effects of publishing a book are the many people who suddenly recall we once met.

There are weird fellows who write to say they have mulled for ten years over a single sentence I once said to them. There are awkward close encounters from conferences I’d rather have forgotten about. There are people whom I have either indeed forgotten or never actually met. And then there are those who, at some time in my life, handed me a piece of the puzzle I’ve since tried to assemble; people I am sorry I forgot about.

For example my high-school physics teacher, who read about me in a newspaper and then came to a panel discussion I took part in. Or Eric Weinstein, who I met many years ago at Perimeter Institute, and who has since become the unofficial leader of the last American intellectuals. Or Robin Hanson, with whom I had a run-in 10 years ago and later met at SciFoo.

I spoke with Robin the other day.

Robin is an economist at George Mason University in Virginia, USA. I had an argument with him because Robin proposed – all the way back in 1990 – that “gambling” would save science. He wanted scientists to bet on the outcomes of their colleagues’ predictions and claimed this would fix the broken incentive structure of academia.

I wasn’t fond of Robin’s idea back then. The major reason was that I couldn’t see scientists spending much time on a betting market. Sure, some of them would give it a go, but nowhere near enough for such a market to have much impact.

Economists tend to find it hard to grasp, but most people who stay in academia are not in it for the money. This isn’t to say that money is not relevant in academia – it certainly is: Money decides who stays and who goes and what research gets done. But if getting rich is your main goal, you don’t dedicate your life to counting how many strings fit into a proton.

The foundations of physics may be an extreme case, but by my personal assessment most people in this area primarily chase after recognition. They want to be important more than they want to be rich.

And even if my assessment of scientists’ motivations were wrong, such a betting market would have to move a lot of money – more than scientists can gain in reputation by putting money behind their own predictions.

In my book, I name a few examples of physicists who made bets to express confidence in their own theories, such as Garrett Lisi, who bet Frank Wilczek $1000 that supersymmetry would not be found at the LHC by 2016. Lisi won and Wilczek paid up. But really, what Garrett did there was simply to publicly promote his own theory, a competitor of supersymmetry.

A betting market with minor payoffs, one has to fear, would likewise simply be used by researchers to bet on themselves, because they have more to gain from securing grants or jobs, which favorable market odds might facilitate.

But what if scientists could make larger gains by betting smartly than they could make by promoting their own research? “Who would bet against their career?” I asked Robin when we spoke last week.

“You did,” he pointed out.

He got me there.

My best shot at a permanent position in academia would have been LHC predictions for physics beyond the standard model. This is what I did for my PhD. In 2003, I was all set to continue in this direction. But by 2005, three years before the LHC began operation, I had become convinced that those predictions were all nonsense. I stopped working on the topic, and instead began writing about the problems with particle physics. In 2015, my agent sold the proposal for “Lost in Math.”

When I wrote the book proposal, no one knew what the LHC would discover. Had the experiments found any of the predicted particles, I’d have made myself the laughing stock of particle physics.

So, Robin is right. It’s not how I thought about it, but I made a bet. The LHC predictions failed. I won. Hurray. Alas, the only thing I won is the right to go around and grumble “I told you so.” What little money I now earn from selling books will not make up for the decades of employment I could have had by playing the academia game by the rules.

In other words, yeah, maybe a betting market would be a good idea. Snort.

My thoughts have moved on since 2007, and so have Robin’s. During our conversation, it became clear that our views about what’s wrong with academia, and what to do about it, have converged over the years. To begin with, Robin seems to have recognized that scientists themselves are indeed unlikely candidates to do the betting. Instead, he now envisions that higher education institutions and funding agencies employ dedicated personnel to gather information and place bets. Let me call those “prediction market investors” (PMIs). Think of them as hedge-fund managers on the stock market.

Importantly, those PMIs would not merely collect information from scientists in academia, but also from those who leave. That’s important because information leaves with people. I suspect that had you asked those who left particle physics about the LHC predictions, you’d have quickly noticed I was far from the only one who saw a problem. Alas, journalists don’t interview drop-outs. And those who still work in the field have every reason to project excitement and optimism about their research area.

The PMIs would of course not be the only ones making investments. Anyone could do it, if they wanted to. But I am guessing they’d be the biggest players.
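
To make this a little more concrete – my own illustration, not something from our conversation – here is a toy automated market maker along the lines of the logarithmic market scoring rule (LMSR) that Robin himself developed for prediction markets. The market price doubles as the aggregate probability estimate; the liquidity parameter b is an arbitrary choice for this sketch:

import math

class LMSRMarket:
    # Hanson's logarithmic market scoring rule for two outcomes,
    # e.g. "prediction confirmed" vs "not confirmed".
    def __init__(self, b=100.0):
        self.b = b              # liquidity parameter (arbitrary here)
        self.q = [0.0, 0.0]     # shares sold for each outcome
    def cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))
    def price(self, i):
        # current market probability of outcome i
        w = [math.exp(x / self.b) for x in self.q]
        return w[i] / sum(w)
    def buy(self, i, shares):
        # what a trader pays for `shares` of outcome i
        new_q = list(self.q)
        new_q[i] += shares
        payment = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return payment

market = LMSRMarket()
print(market.price(0))       # 0.5 before anyone trades
print(market.buy(0, 50))     # backing outcome 0 costs about 28 units
print(market.price(0))       # the market probability moves up to about 0.62

The point of such a rule is that traders only profit if they move the price toward the eventual outcome, so a PMI who thinks a prediction is overhyped has a financial reason to say so.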

This arrangement makes a lot of sense to me.

First and foremost, it’s structurally consistent. The people who evaluate information about the system do not themselves publish research papers. This circumvents the problem that I have long been going on about: that scientists don’t take into account the biases that skew their assessment of information. In Robin’s new setting, it doesn’t really matter whether scientists see their own mistakes; it only matters that someone sees them.

Second, it makes financial sense. Higher education institutions and funding agencies have reason to pay attention to the prediction market, because it provides new means to bring in money and new information about how to best invest money. In contrast to scientists, they might therefore be willing to engage in it.

Third, it is minimally intrusive yet maximally effective. It keeps the current arrangement of academia intact, but at the same time it has a large potential for impact. Resistance to this idea would likely be small.

So, I quite like Robin’s proposal. Though, I wish to complain, it’s too vague to be practical and needs more work. It’s very, erm, academic.

But in 2007, I had another reason to disagree with Robin, which was that I thought his attempt to “save science” was unnecessary.

This was two years after Ioannidis’ paper “Why most published research findings are false” attracted a lot of attention. It was one year after Lee Smolin and Peter Woit published books that were both highly critical of string theory, which has long been one of the major research-bubbles in my discipline. At the time, I was optimistic – or maybe just naïve – and thought that change was on the way.

But years passed and nothing changed. If anything, the problems got worse as scientists began to market their research more aggressively and lobby for themselves. The quest for truth, it seems, is now secondary. More important is that you can sell an idea, both to your colleagues and to the public. And if it doesn’t pan out? Deny, deflect, dissociate.

That’s why you constantly see bombastic headlines about breakthrough insights you never hear of again. That’s why, after years of talking about the wonderful things the LHC might see, no one wants to admit something went wrong. And that’s why, if you read the comments on this blog, you’ll find people who wish I’d keep my mouth shut. Because it’s cozy in their research bubble and they don’t want it to burst.

That’s also why Robin’s proposal looks good to me. It looks better the more I think about it. Three days have passed, and now I think it’s brilliant. Funding agencies would make much better financial investments if they’d draw on information from such a prediction market. Unfortunately, without startup support it’s not going to happen. And who will pay for it?

This brings me back to my book. Seeing the utter lack of self-reflection in my community, I concluded scientists cannot solve the problem themselves. The only way to solve it is massive public pressure. The only way to solve the problem is that you speak up. Say it often and say it loudly, that you’re fed up watching research funds go to waste on citation games. Ask for proposals like Robin’s to be implemented.

Because if we don’t get our act together, ten years from now someone else will write another book. And you will have to listen to the same sorry story all over again.

Thursday, December 13, 2018

New experiment cannot reproduce long-standing dark matter anomaly

Close-up of the COSINE detector  [Credits: COSINE collaboration]
To correctly fit observations, physicists’ best current theory for the universe needs a new type of matter, the so-called “dark matter.” According to this theory, our galaxy – like most other galaxies – is contained in a roughly spherical cloud of this dark stuff. Exactly what dark matter is made of, however, we still don’t know.

The more hopeful physicists believe that dark matter interacts with normal matter, albeit rarely. If they are right, we might get lucky and see one of those interactions by closely watching samples of normal matter for the occasional bump. Dozens of experiments have looked for such interactions with the putative dark matter particles. They found nothing.

The one exception is the DAMA experiment. DAMA is located below the Gran Sasso mountains in Italy, and it has been detecting something since 1995. Unfortunately, it has remained unclear just what that something is.

For many years, the collaboration has reported excess hits in their detector. The signal has meanwhile reached a significance of 8.9σ, well above the 5σ standard for discovery. The number of those still unexplained events varies periodically during the year, which is consistent with the change that physicists expect due to our planet’s motion around the Sun and the Sun’s motion around the galactic center. The DAMA collaboration claims their measurements cannot be explained by interactions with already known particles.
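
For a sense of scale – my own back-of-the-envelope conversion, not a number from the DAMA papers – these significances translate into one-sided Gaussian tail probabilities as follows:

from scipy.stats import norm
print(norm.sf(5.0))   # about 2.9e-7, the conventional discovery threshold
print(norm.sf(8.9))   # about 2.8e-19, the reported DAMA excess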

DAMA data with best-fit modulation curve.
Figure 1 from arXiv:1301.6243

The problem with the DAMA experiment, however, is that its results are incompatible with the null results of other dark matter searches. If what DAMA sees were really dark matter, then other experiments should also have seen it, which is not the case.

Most physicists seem to assume that what DAMA measures really is some normal particle, and that the collaboration does not correctly account for signals that come, eg, from radioactive decays in the surrounding mountains, cosmic rays, or neutrinos. An annual modulation could come about by other means than our motion through a dark matter halo. Many variables change throughout the year, such as the temperature and our distance to the Sun. And while DAMA claims, of course, that they have taken all of that into account, their results have been met with great skepticism.
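
To illustrate what an annual-modulation analysis boils down to – a minimal sketch with synthetic data, not the collaboration’s actual analysis – one fits the residual event rate with S(t) = S0 + Sm cos(2π(t − t0)/T), where T is one year and the phase t0 is expected around the beginning of June if the modulation comes from our motion through a dark matter halo:

import numpy as np
from scipy.optimize import curve_fit

T = 365.25                                   # period: one year, in days

def modulation(t, S0, Sm, t0):
    return S0 + Sm * np.cos(2 * np.pi * (t - t0) / T)

rng = np.random.default_rng(1)
t = np.linspace(0, 6 * T, 300)               # six years of fake residual rates
data = modulation(t, 0.0, 0.02, 152.5) + rng.normal(0.0, 0.01, t.size)

popt, pcov = curve_fit(modulation, t, data, p0=[0.0, 0.01, 120.0])
print("Sm = %.3f +/- %.3f (arbitrary units)" % (popt[1], np.sqrt(pcov[1, 1])))

With a mere 60 days of data, a fit like this cannot pin down the phase and amplitude, which is presumably one reason the first COSINE result discussed below compares total event rates with the expected background instead.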

I will admit I have always been fond of the DAMA anomaly. Not only because of its high significance, but because the peak of the annual modulation fits with the idea of us flying through dark matter. It’s not all that simple to find another signal that looks like that.

So far, there has been a loophole in the argument that the DAMA signal cannot be a dark matter particle. The DAMA detector differs from all other experiments in one important point: DAMA uses thallium-doped sodium iodide crystals, while the conflicting results come from detectors using other targets, such as xenon or germanium. A dark matter particle which preferentially couples to specific types of atoms could trigger the DAMA detector, but not the other detectors. This is not a popular idea, but it would be compatible with observation.

To test whether this is what is going on, another experiment, COSINE, set out to repeat the measurement using the same target material as DAMA. COSINE is located in South Korea and began operation in 2016. The collaboration just published the results from the first 60 days of measurements. COSINE did not see excess events.

Figure 2 from Nature 564, 83–86 (2018)
Data is consistent with expected background


60 days of data is not enough to look for an annual modulation, and searching for that modulation will greatly improve the statistical significance of the COSINE results. So it’s too early to entirely abandon hope. But it’s certainly a disappointment.

Friday, December 07, 2018

No, negative masses have not revolutionized cosmology

Figure from arXiv:1712.07962
A lot of people have asked me to comment on a recent paper by Jamie Farnes. Farnes is a postdoc fellow at the Oxford e-Research Centre and has previously worked on observational astrophysics. A few days ago, Oxford University published a press-release celebrating the publication of Farnes’ paper. This press-release was then picked up by phys.org and spread from there to a few other outlets. I have since gotten various inquiries from readers and journalists asking for comments.

In his paper, Farnes has a go at cosmology with negative gravitational masses. He further wants these masses to also have negative inertial masses, so that the equivalence principle is maintained. It’s a nice idea. I, as I am sure many other people in the field, have toyed with it. Problem is, it works really badly.

General Relativity is a wonderful theory. It tells you how masses move under the pull of gravity. You do not get to choose how they move; it follows from Einstein’s equations. These equations tell you that like masses attract and unlike masses repel. We don’t normally talk about this because for all we know there are no negative gravitational masses, but you can see what happens in the Newtonian limit. It’s the same as for the electromagnetic force, just with electric charges exchanged for masses, and – importantly – with a flipped sign.
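
To spell out that sign rule at the level of the Newtonian force law – a toy illustration, nothing more – compare the gravitational force F = −G m1 m2 / r² with Coulomb’s law F = k q1 q2 / r², where a negative value means attraction:

# Gravity carries an extra minus sign relative to Coulomb's law:
#   F_gravity = -G * m1 * m2 / r**2   (F < 0 means attraction)
#   F_coulomb = +k * q1 * q2 / r**2
def behavior(c1, c2, interaction):
    sign = -1 if interaction == "gravity" else +1
    return "repel" if sign * c1 * c2 > 0 else "attract"

print(behavior(+1, +1, "gravity"))   # like masses: attract
print(behavior(+1, -1, "gravity"))   # unlike masses: repel
print(behavior(+1, +1, "coulomb"))   # like charges: repel
print(behavior(+1, -1, "coulomb"))   # unlike charges: attract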

The deeper reason for this is that the gravitational interaction is mediated by a spin-2 field, whereas the electromagnetic force is mediated by a spin-1 field. Note that for this to be the case, you do not need to talk about the messenger particle that is associated with the force if you quantize it (gravitons or photons). It’s simply a statement about the type of interaction, not about the quantization. Again, you don’t get to choose this behavior. Once you work with General Relativity, you are stuck with the spin-2 field, and you conclude: like charges attract and unlike charges repel.

Farnes in his paper instead wants negative gravitational masses to mutually repel each other. But general relativity won’t let you do this. He notices this in section 2.3.3, where he goes on about the “counterintuitive” finding that the negative masses don’t actually seem to mutually repel.

He doesn’t say in his paper how he did the N-body simulation in which the negative-mass particles mutually repel (you can tell they do just by looking at the images). Some inquiry by email revealed that he does not actually derive the Newtonian limit from the field equations; he just encodes the repulsive interaction the way he thinks it should be.

Farnes also introduces a creation term for the negative masses so that he gets something akin to dark energy. A creation term is basically a magic fix by which you can explain everything and anything. Once you have it, you can either go and postulate an equation of motion that is consistent with the constant creation (or whatever else you want), or you don’t, in which case you just violate energy conservation. Either way, it doesn’t explain anything. And if you are okay with introducing fancy fluids with uncommon equations of motion, you may as well stick with dark energy and dark matter.

There’s a more general point to be made here. The primary reason that we use dark matter and dark energy to explain cosmological observations is that they are simple. Occam’s razor vetoes any explanation you can come up with that is more complicated than that, and Farnes’ approach certainly is not a simple explanation. Furthermore, while it is okay to introduce negative gravitational masses, it’s highly problematic to introduce negative inertial masses because this means the vacuum becomes unstable. If you do this, you can produce particle pairs from a net energy of zero in infinitely large amounts. This fits badly with our observations.
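
To spell out the energy bookkeeping behind that last point – a schematic illustration only: if a particle with rest energy +mc² can be created together with a partner of negative inertial mass −m, and hence rest energy −mc², then each pair costs

E_pair = (+mc²) + (−mc²) = 0,

so nothing in the energy budget limits how many such pairs the vacuum can produce.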

Now, look. It may be that what I am saying is wrong. Maybe the Newtonian limit is more complicated than it seems. Maybe gravity is not a spin-2 interaction. Maybe you can have mutually repulsive negative masses in general relativity after all. I would totally be in favor of that, as I have written a paper about repulsive gravity myself (it’s cited in Farnes’ paper). I believe that negative gravitational masses are the only known solution to the (real) cosmological constant problem. But any approach that attempts to work with negative masses needs to explain how it overcomes the above-mentioned problems. Farnes’ paper falls short of this.

In summary, the solution proposed by Farnes creates more problems than it solves.