
Thursday, December 27, 2018

How the LHC may spell the end of particle physics

The Large Hadron Collider (LHC) recently completed its second experimental run. It is now undergoing a scheduled upgrade to somewhat higher energies, at which more data will be collected. Besides the Higgs boson, the LHC has not found any new elementary particle.

It is possible that some new particle will eventually show up in the data yet to come. But particle physicists are nervous. It’s not looking good – besides a few anomalies that are not statistically significant, there is no evidence for anything out of the ordinary. And if the LHC finds nothing new, there is no reason to think the next larger collider will. In which case, why build one?

That the LHC would find the Higgs and nothing else was dubbed the “nightmare scenario” for a reason. For 30 years, particle physicists have told us that the LHC should find something besides the Higgs, something exciting: a dark matter particle, additional dimensions of space, or maybe a new type of symmetry. Something that would prove that the standard model is not all there is. But this didn’t happen.

All those predictions for new physics were based on arguments from naturalness. I explained in my book that naturalness arguments are not mathematically sound and one shouldn’t have trusted them.

The problem particle physicists now have is that naturalness was the only reason to think that there should be new physics at the LHC. That’s why they are getting nervous. Without naturalness, there is no argument for new physics at energies even higher than that of the LHC. (Not until 15 orders of magnitude higher, which is when the quantum structure of spacetime should become noticeable. But energies so large will remain inaccessible for the foreseeable future.)

How have particle physicists reacted to the situation? Largely by pretending nothing happened.

One half continues to hope that something will show up in the data, eventually. Maybe naturalness is just more complicated than we thought. The other half pre-emptively fabricates arguments for why a next larger collider should see new particles. And a few just haven’t noticed they walked past the edge of the cliff. A recent report on Beyond the Standard Model physics at the LHC, for example, still reiterates that “naturalness [is] the main motivation to expect new physics.”

Regardless of their coping strategy, a lot of particle physicists probably now wish they had never made those predictions. Therefore I think it’s a great time to look at who said what. References below.

Some lingo ahead: “eV” stands for electron volt and is a unit of energy. Particle colliders are classified by the energy they can test; higher energy means that the collisions resolve smaller structures. The LHC will reach up to 14 tera-electron-volts (TeV). The “electroweak scale” or “electroweak energy” is typically said to be around the mass of the Z-boson, which is about 100 giga-electron-volts (GeV), i.e. roughly a factor of 100 below what the LHC reaches.
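In numbers, the conversions are:

```latex
1\,\mathrm{TeV} = 10^{3}\,\mathrm{GeV} = 10^{12}\,\mathrm{eV},
\qquad
\frac{14\,\mathrm{TeV}}{100\,\mathrm{GeV}} = 140 \approx 10^{2}.
```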

Also note that even though the LHC reaches energies up to 14 TeV, it collides protons, and those are not elementary particles but composites of quarks and gluons. The total collision energy is therefore distributed over the constituent particles, which means that constraints on the masses of new particles lie below the collision energy. How strong the constraints are depends on the expected number of interactions and the amount of data collected. The current constraints are typically at a few TeV and will improve as more data is analyzed.
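To see why the constraints sit below the nominal collision energy, here is a toy estimate. This is my own sketch for illustration only: the effective parton-parton energy is the square root of x1 times x2 times the collision energy squared, but the distribution I use for the momentum fractions is made up; real parton distribution functions are far more complicated.

```python
import math
import random

SQRT_S = 14.0  # nominal proton-proton collision energy in TeV

def toy_momentum_fraction():
    """Toy stand-in for a parton distribution function: momentum
    fractions x are typically small, so sample a distribution
    peaked at low x. Real PDFs look quite different in detail."""
    return random.random() ** 3  # crude: pushes x toward zero

random.seed(42)
n = 100_000
# Effective parton-parton center-of-mass energy: sqrt(x1 * x2 * s)
energies = [math.sqrt(toy_momentum_fraction() * toy_momentum_fraction()) * SQRT_S
            for _ in range(n)]

print(f"nominal collision energy:       {SQRT_S} TeV")
print(f"mean effective parton energy:   {sum(energies) / n:.2f} TeV")
print(f"fraction of collisions > 3 TeV: {sum(e > 3 for e in energies) / n:.3f}")
```

The point is only that the quoted 14 TeV is shared among quarks and gluons, so the energy actually available to produce a new particle in any single parton collision is typically much lower.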

With that out of the way, let us start in 1987 with Barbieri and Giudice:
“The implementation of this “naturalness” criterion, gives rise to a physical upper bound on superparticle masses in the TeV range.”
In 1994, Anderson and Castano write:
“[In] the most natural scenarios, many sparticles, for example, charginos, squarks, and gluinos, lie within the physics reach of either LEP II or the Tevatron”
and
“supersymmetry cannot provide a complete explanation of weak scale stability, if squarks and gluinos have masses beyond the physics reach of the LHC.”
LEP was the Large Electron-Positron collider. LEP1 and LEP2 refer to the two runs of the experiment.

In 1995, Dimopoulos and Giudice tell us similarly:
“[If] minimal low-energy supersymmetry describes the world with no more than 10% fine tuning, then LEP2 has great chances to discover it.”
In 1997, Erich Poppitz writes:
“Within the next 10 years—with the advent of the Large Hadron Collider—we will have the answer to the question: “Is supersymmetry relevant for physics at the electroweak scale?””
On to 1998, when Louis, Brunner, and Huber tell us the same thing:
“These models do provide a solution to the naturalness problem as long as the supersymmetric partners have masses not much bigger than 1 TeV.”
It was supposed to be an easy discovery, as Frank Paige wrote in 1998:
“Discovering gluinos and squarks in the expected mass range [...] seems straightforward, since the rates are large and the signals are easy to separate from Standard Model backgrounds.”
Giudice and Rattazzi in 1998 emphasize that naturalness is why they believe in physics beyond the standard model:
“The naturalness (or hierarchy) problem, is considered to be the most serious theoretical argument against the validity of the Standard Model (SM) of elementary particle interactions beyond the TeV energy scale. In this respect, it can be viewed as the ultimate motivation for pushing the experimental research to higher energies.”
They go on to praise the beauty of supersymmetry: “An elegant solution to the naturalness problem is provided by supersymmetry...”

In 1999, Alessandro Strumia, interestingly enough, concludes that the LEP results are really bad news for supersymmetry:
“the negative results of the recent searches for supersymmetric particles pose a naturalness problem to all ‘conventional’ supersymmetric models.”
In his paper, he stresses repeatedly that his conclusion applies only to certain supersymmetric models. Which is of course correct. The beauty of supersymmetry is that it’s so adaptive it evades all constraints.

Most particle physicists were utterly undeterred by the negative LEP results. They just moved their predictions to the next larger colliders, the Tevatron and then the LHC.

In 2000, Feng, Matchev, and Moroi write:
“This has reinforced a widespread optimism that the next round of collider experiments at the Tevatron, LHC or the NLC are guaranteed to discover all superpartners, if they exists.”
(NLC stands for Next Linear Collider, a proposal from the early 2000s that has since been dropped.) They also reiterate that supersymmetry should be easy to find at the LHC:
“In contrast to the sfermions, gauginos and higgsinos cannot be very heavy in this scenario. For example … gauginos will be produced in large numbers at the LHC, and will be discovered in typical scenarios.”
In 2004, Stuart Raby points out that naturalness arguments are already in trouble:
“Simple ‘naturalness’ arguments would lead one to believe that SUSY should have been observed already.”
But of course that’s just reason to consider not-so-simple naturalness arguments.

In the same year, Fabiola Gianotti bangs the drum for the LHC (emphasis mine):
“The above [naturalness] arguments open the door to new and more fundamental physics. There are today several candidate scenarios for physics beyond the Standard Model, including Supersymmetry (SUSY), Technicolour and theories with Extra-dimensions. All of them predict new particles in the TeV region, as needed to stabilize the Higgs mass. We note that there is no other scale in particle physics today as compelling as the TeV scale, which strongly motivates a machine like the LHC able to explore directly and in detail this energy range.”
She praises supersymmetry as “very attractive” and also tells us that the discovery should be easy and fast:
“SUSY discovery at the LHC could be relatively easy and fast… Squark and gluino masses of 1 TeV are accessible after only one month of data taking… The ultimate mass reach is up to ∼ 3 TeV for squarks and gluinos. Therefore, if nothing is found at the LHC, TeV-scale Supersymmetry will most likely be ruled out, because of the arguments related to stabilizing the Higgs mass mentioned above.”
In 2005, Nima Arkani-Hamed and Savas Dimopoulos have the same tale to tell:
“[Ever] since the mid 1970’s, there has been a widely held expectation that the SM must be incomplete already at the ∼ TeV scale. The reason is the principle of naturalness… Solving the naturalness problem has provided the biggest impetus to constructing theories of physics beyond the Standard Model...”
Same thing with Feng and Wilczek in 2005:
“The standard model of particle physics is fine-tuned… This blemish has been a prime motivation for proposing supersymmetric extensions to the standard model. In models with low-energy supersymmetry, naturalness can be restored by having superpartners with approximately weak-scale masses.”
Here is John Donoghue in 2007:
“[The] argument against finetuning becomes a powerful motivator for new physics at the scale of 1 TeV. The Large Hadron Collider has been designed to find this new physics.”
Michael Dine, also in 2007, writes:
“The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.”
And Howard Baer in 2009:
“quadratic divergences associated with the scalar sector require new physics at or around the electroweak scale.”
The same story, that new physics needs to appear at around a TeV, has been repeated in countless talks and seminars. A few examples are slides by Peter Krieger (2008), Michelangelo Mangano, and Joseph Lykken [slides not shown].
I could go on, but I hope this suffices to document that pretty much everyone

    (a) agreed that the LHC should see new physics besides the Higgs, and
    (b) did so for the same reason, namely naturalness.

In summary: Since the naturalness-based predictions did not pan out, we have no reason to think that the remaining LHC run or an even larger particle collider would see any new physics that is not already explained by the standard model of particle physics. A larger collider would be able to measure more precisely the properties of already known particles, but that is arguably not a terribly exciting exercise. It will be a tough sell for a machine that comes at $10 billion and up. Therefore, it may very well be that the LHC will remain the largest particle collider in human history.



Bonus: A reader submits this gem from David Gross and Ed Witten in the Wall Street Journal, anno 1996:
“There is a high probability that supersymmetry, if it plays the role physicists suspect, will be confirmed in the next decade. The existing accelerators that have a chance of doing so are the proton collider at the Department of Energy’s Fermi Lab in Batavia, Ill., and the electron collider at the European Center for Nuclear Research (CERN) in Geneva. Last year’s final run at Fermi Lab, during which the top quark was discovered, gave tantalizing hints of supersymmetry.”

Wednesday, December 26, 2018

Book review: “On The Future” by Martin Rees

On the Future: Prospects for Humanity
By Martin Rees
Princeton University Press (October 16, 2018)

The future will come, that much is clear. What it will bring, not so much. But speculating about what the future brings is how we make decisions in the present, so it’s a worthwhile exercise. It can be fun, it can be depressing. Rees’ new book “On The Future” is both.

Martin Rees is a cosmologist and astrophysicist. He has also long been involved in the public discourse about science, notably the difficulty of integrating scientific evidence into policy making. He is one of the founding members of the Cambridge Centre for the Study of Existential Risk and serves on the advisory board of the Future of Life Institute. In brief, Rees thinks ahead, not for 5 or 10 years, but for 1000 or maybe – gasp – a million years.

In his new book, Rees covers a large number of topics. From the threat of nuclear war, climate change, clean energy, and environmental sustainability to artificial intelligence, bioterrorism, assisted dying, and the search for extraterrestrial life. Rees is clearly a big fan of space exploration and bemoans that today it’s nowhere near as exciting as when he was young.
“I recall a visit to my home town by John Glenn, the first American to go into orbit. He was asked what he was thinking while in the rocket’s nose cone, awaiting launch. He responded, ‘I was thinking that there were twenty thousand parts in this rocket, and each was made by the lowest bidder.’”
I much enjoyed Rees’ book, the biggest virtue of which is brevity. Rees gets straight to the point. He summarizes what we know and don’t know, and what he thinks about where it’ll go, and that’s that. The amount of flowery words in his book is minimal (and those that he uses are mostly borrowed from Carl Sagan).

You don’t have to agree with Rees on his extrapolations into the unknown, but you will end up being well-informed. Rees is also utterly unapologetic about being a scientist to the core. Oftentimes scientists writing about climate change or biotech end up in a forward-defense against denialism, which I find exceedingly tiresome. Rees does nothing of that sort. He sticks with the facts.

It sometimes shows that Rees is a physicist, for example in his enthusiasm for exoplanets, his take on reductionism (“The ‘ordering’ of the sciences in this hierarchy is not controversial.”), and his plug for the multiverse, about which he writes: “[The multiverse] is not metaphysics. It’s highly speculative. But it’s exciting science. And it may be true.”

But Rees does not address the biggest challenge we currently face, which is our inability to make use of the knowledge we already have. He is simply silent on the problems we currently see in science, the lack of progress, and the difficulties our society faces when trying to aggregate evidence to make informed decisions.

In his chapter about “The Limits and Future of Science,” Rees acknowledges the possibility that “some fundamental truths about nature could be too complex for unaided human brains to fully grasp,” but fails to notice that unaided human brains are not even able to fully grasp how being part of a large community influences their interests – and with that the decision of what we choose to spend time and resources on.

By omitting to even mention these problems, Rees tells us something about the future too. We may be so busy painting pictures of our destination that we forget to think of a way to reach it.

[Disclaimer: Free review copy.]

Monday, December 24, 2018

Happy Holidays

I have been getting a novel complaint about my music videos, which is that they are “hard to understand.” In case you share this impression, you may be overthinking it. These aren’t press releases about high-temperature superconductors or neutron star matter; it’s me attempting to sing. But since you ask, this one is about the – often awkward – relation between science and religion.



(Soundcloud version here.)

And since ’tis the season, allow me to mention that you will find a donate button in the top right corner of this website. On this occasion I also wish to express a heartfelt THANK YOU to all of those who sent donations this year. I very much appreciate your support, no matter how small, because it documents that you value my writing.

And here is this year’s family portrait for the Christmas cards.



In Germany, we traditionally celebrate Christmas on the evening of December 24th. So I herewith sign off for the rest of the year. Wish you all Happy Holidays.

Friday, December 21, 2018

Winter Solstice

[Photo: Herrmann Stamm]

The clock says 3:30 am. Is that early or late? Wrapped in a blanket I go into the living room. I open the door and step onto the patio. It’s too warm for December. An almost full moon blurs into the clouds. In the distance, the highway hums.

Somewhere, someone dies.

For everyone who dies, two people are born. 7.5 billion and counting.

We came to dominate planet Earth because, compared to other animals, we learned fast and collaborated well. We used resources efficiently. We developed tools to use more resources, and then employed those tools to use even more resources. But no longer. It’s 2018, and we are failing.

That’s what I think every day when I read the news. We are failing.

Throughout history, humans improved how to exchange and act on information held by only a few. Speech, writing, politics, economics, social and cultural norms, TV, telephones, the internet. These are all methods of communication. It’s what enabled us to collectively learn and make continuous progress. But now that we have networks connecting billions of people, we have reached our limits.

Fake news, Russian trolls, shame storms. Some dude’s dick in the wrong place. That’s what we talk about.

And buried below the viral videos and memes there’s the information that was not where it was supposed to be. Hurricane Katrina? The problem was known. The 2008 financial crisis? The problem was known. That Icelandic volcano whose ashes, in 2010, grounded flight traffic? Utterly unsurprising. Iceland has active volcanoes. Sometimes the wind blows South-East. Btw, it will happen again. And California is due for a tsunami. The problems are known.

But that’s not how it will end.

20 years ago I had a car accident. I was on a busy freeway. It was raining heavily and the driver in front of me suddenly braked. Only later did I learn that someone had cut him off. I hit the brakes. And then I watched a pair of red lights coming closer.

They say time slows if you fear for your life. It does.

I came to a stop one inch before slamming into the other car. I breathed out. Then a heavy BMW slammed into my back.

Human civilization will go like that. If we don’t keep moving, problems now behind us will slam into our back. Climate change, environmental pollution, antibiotic resistance, the persistent risk of nuclear war, just to mention a few – you know the list. We will have to deal with those sooner or later. Not now. Oh, no. Not us, not now, not here. But our children. Or their children. If we stop learning, if we stop improving our technologies, it’ll catch up with them, sooner or later.

Having to deal with long-postponed problems will eat up resources. Those resources, then, will not be available for further technological development, which will create further problems, which will eat up more resources. Modern technologies will become increasingly expensive until most people can no longer afford them. Infrastructures will crumble. Education will decay. It’s a downward spiral. A long, unpreventable, disease-ridden regress.

Those artificial intelligences you were banking on? Not going to happen. All the money in the world will not lead to scientific breakthroughs if we don’t have enough people with sufficient education.

Who is to blame? No one, really. We are just too stupid to organize our living together on a global scale. We will not make it to the next level of evolutionary development. We don’t have the mental faculties. We do not comprehend. We do not act because we cannot. We don’t know how. We will fail and, maybe, in a million years or so, another species will try again.

Climate negotiations stalled over the choice of a word. A single word.

The clouds have drifted and the bushes now throw faint shadows in the moonlight. A cat screeches, or maybe it’s two. Something topples over. An engine starts. Then, silence again.

In the silence, I can hear them scream. All the people who don’t get heard, who pray and hope and wait for someone to please do something. But there is no one to listen. Even the scientists, even people in my own community, do not see, do not want to see, are not willing to look at their failure to make informed decisions in large groups. The problems are known.

Back there on that freeway, the BMW totaled my little Ford. I came away with neck and teeth damage, though I wouldn’t realize this until months later. I got out of my car and stood in the rain, thinking I’d be late for class. Again. The passenger door of the BMW opened and out came – an umbrella. Then a tall man in a dark suit. He looked at me and the miserable state of my car and handed me a business card. “Don’t worry,” he said, “my insurance will cover that.” It did.

Of course I’m as stupid as everyone else, screaming screams that no one hears and, despite all odds, still hoping that someone has insurance, that someone knows what to do.

I go back into the house. It’s dark inside. I step onto a LEGO, one of the pink ones. They have fewer sharp edges; maybe, I think, that’s why parents keep buying them.

The kids are sleeping. It will be some hours until the husband announces his impending awakening with a morning fart. By standby lights I navigate to my desk.

We are failing. I am failing. But what else can I do but try?

I open my laptop.

Friday, December 14, 2018

Don’t ask what science can do for you.

Among the more peculiar side-effects of publishing a book are the many people who suddenly recall we once met.

There are weird fellows who write to say they mulled for ten years over a single sentence I once said to them. There are awkward close encounters from conferences I’d rather have forgotten about. There are people whom I have either indeed forgotten about or never actually met. And then there are those who, at some time in my life, handed me a piece of the puzzle I’ve since tried to assemble; people I am sorry I forgot about.

For example my high-school physics teacher, who read about me in a newspaper and then came to a panel discussion I took part in. Or Eric Weinstein, who I met many years ago at Perimeter Institute, and who has since become the unofficial leader of the last American intellectuals. Or Robin Hanson, with whom I had a run-in 10 years ago and later met at SciFoo.

I spoke with Robin the other day.

Robin is an economist at George Mason University in Virginia, USA. I had an argument with him because Robin proposed – all the way back in 1990 – that “gambling” would save science. He wanted scientists to bet on the outcomes of their colleagues’ predictions and claimed this would fix the broken incentive structure of academia.

I wasn’t fond of Robin’s idea back then. The major reason was that I couldn’t see scientists spending much time on a betting market. Sure, some of them would give it a go, but nowhere near enough for such a market to have much impact.

Economists tend to find it hard to grasp, but most people who stay in academia are not in it for the money. This isn’t to say that money is not relevant in academia – it certainly is: money decides who stays and who goes and what research gets done. But if getting rich is your main goal, you don’t dedicate your life to counting how many strings fit into a proton.

The foundations of physics may be an extreme case, but by my personal assessment most people in this area primarily chase after recognition. They want to be important more than they want to be rich.

And even if my assessment of scientists’ motivations were wrong, such a betting market would need to have a lot of money going around, more money than scientists stand to gain in reputation by putting money behind their own predictions.

In my book, I name a few examples of physicists who bet to express confidence in their own theory, such as Garrett Lisi who bet Frank Wilczek $1000 that supersymmetry would not be found at the LHC by 2016. Lisi won and Wilczek paid his due. But really what Garrett did there was just to publicly promote his own theory, a competitor of supersymmetry.

A betting market with minor payoffs, one has to fear, would likewise simply be used by researchers to bet on themselves, because they have more to win by securing grants or jobs, which favorable market odds might facilitate.

But what if scientists could make larger gains by betting smartly than they could make by promoting their own research? “Who would bet against their career?” I asked Robin when we spoke last week.

“You did,” he pointed out.

He got me there.

My best shot at a permanent position in academia would have been LHC predictions for physics beyond the standard model. This is what I did for my PhD. In 2003, I was all set to continue in this direction. But by 2005, three years before the LHC began operation, I had become convinced that those predictions were all nonsense. I stopped working on the topic and instead began writing about the problems with particle physics. In 2015, my agent sold the proposal for “Lost in Math.”

When I wrote the book proposal, no one knew what the LHC would discover. Had the experiments found any of the predicted particles, I’d have made myself the laughing stock of particle physics.

So, Robin is right. It’s not how I thought about it, but I made a bet. The LHC predictions failed. I won. Hurray. Alas, the only thing I won is the right to go around and grumble “I told you so.” What little money I earn now from selling books will not make up for decades of employment I could have gotten playing academia-games by the rules.

In other words, yeah, maybe a betting market would be a good idea. Snort.

My thoughts have moved on since 2007; so have Robin’s. During our conversation, it became clear that our views about what’s wrong with academia and what to do about it have converged over the years. To begin with, Robin seems to have recognized that scientists themselves are indeed unlikely candidates to do the betting. Instead, he now envisions that higher education institutions and funding agencies employ dedicated personnel to gather information and place bets. Let me call those “prediction market investors” (PMIs). Think of them like hedge-fund managers on the stock market.

Importantly, those PMIs would not merely collect information from scientists in academia, but also from those who leave. That’s important because information leaves with people. I suspect that had you asked those who left particle physics about the LHC predictions, you’d have quickly noticed I was far from the only one who saw a problem. Alas, journalists don’t interview drop-outs. And those who still work in the field have every reason to project excitement and optimism about their research area.

The PMIs would of course not be the only ones making investments. Anyone could do it, if they wanted to. But I am guessing they’d be the biggest players.
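In case you wonder how such a market would set its odds without a human bookie: the standard tool is Robin’s own invention, the logarithmic market scoring rule. Here is a minimal two-outcome sketch; the liquidity parameter and the example numbers are my choices for illustration, not part of Robin’s proposal.

```python
import math

class LMSRMarket:
    """Minimal two-outcome prediction market using Hanson's
    logarithmic market scoring rule (LMSR). The liquidity
    parameter b controls how strongly prices react to bets."""

    def __init__(self, b=100.0):
        self.b = b
        self.shares = [0.0, 0.0]  # shares sold for outcomes "yes" and "no"

    def cost(self):
        # Cost function C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(q / self.b) for q in self.shares))

    def price(self, outcome):
        # Instantaneous price = the market's current probability estimate
        total = sum(math.exp(q / self.b) for q in self.shares)
        return math.exp(self.shares[outcome] / self.b) / total

    def buy(self, outcome, amount):
        """Buy `amount` shares of `outcome`; returns the trader's cost."""
        before = self.cost()
        self.shares[outcome] += amount
        return self.cost() - before

# Example: a market on "the next collider discovers a new particle"
market = LMSRMarket(b=100.0)
print(f"initial probability: {market.price(0):.2f}")  # 0.50, no information yet
paid = market.buy(1, 80.0)                            # a skeptic bets on "no"
print(f"skeptic pays {paid:.2f} for 80 'no' shares")
print(f"probability of discovery drops to {market.price(0):.2f}")
```

Whoever moves the price toward what later turns out to be true makes money; that profit is the incentive the PMIs would respond to.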

This arrangement makes a lot of sense to me.

First and foremost, it’s structurally consistent. The people who evaluate information about the system do not themselves publish research papers. This circumvents the problem that I have long been going on about, that scientists don’t take into account the biases that skew their information-assessment. In Robin’s new setting, it doesn’t really matter if scientists see their mistakes; it only matters that someone sees them.

Second, it makes financial sense. Higher education institutions and funding agencies have reason to pay attention to the prediction market, because it provides new means to bring in money and new information about how to best invest money. In contrast to scientists, they might therefore be willing to engage in it.

Third, it is minimally intrusive yet maximally effective. It keeps the current arrangement of academia intact, but at the same time it has a large potential for impact. Resistance to this idea would likely be small.

So, I quite like Robin’s proposal. Though, I wish to complain, it’s too vague to be practical and needs more work. It’s very, erm, academic.

But in 2007, I had another reason to disagree with Robin, which was that I thought his attempt to “save science” was unnecessary.

This was two years after Ioannidis’ paper “Why most published research findings are false” attracted a lot of attention. It was one year after Lee Smolin and Peter Woit published books that were both highly critical of string theory, which has long been one of the major research-bubbles in my discipline. At the time, I was optimistic – or maybe just naïve – and thought that change was on the way.

But years passed and nothing changed. If anything, problems got worse as scientists began to market their research more aggressively and lobby for themselves. The quest for truth, it seems, is now secondary. More important is that you can sell an idea, both to your colleagues and to the public. And if it doesn’t pan out? Deny, deflect, dissociate.

That’s why you constantly see bombastic headlines about breakthrough insights you never hear of again. That’s why, after years of talking about the wonderful things the LHC might see, no one wants to admit something went wrong. And that’s why, if you read the comments on this blog, you will find physicists wishing I’d keep my mouth shut. Because it’s cozy in their research bubble and they don’t want it to burst.

That’s also why Robin’s proposal looks good to me. It looks better the more I think about it. Three days have passed, and now I think it’s brilliant. Funding agencies would make much better financial investments if they’d draw on information from such a prediction market. Unfortunately, without startup support it’s not going to happen. And who will pay for it?

This brings me back to my book. Seeing the utter lack of self-reflection in my community, I concluded that scientists cannot solve the problem themselves. The only way to solve it is massive public pressure. The only way to solve it is for you to speak up. Say it often and say it loudly: you’re fed up watching research funds go to waste on citation games. Ask for proposals like Robin’s to be implemented.

Because if we don’t get our act together, ten years from now someone else will write another book. And you will have to listen to the same sorry story all over again.

Thursday, December 13, 2018

New experiment cannot reproduce long-standing dark matter anomaly

Close-up of the COSINE detector  [Credits: COSINE collaboration]
To correctly fit observations, physicists’ best current theory of the universe needs a new type of matter, the so-called “dark matter.” According to this theory, our galaxy – like most other galaxies – is contained in a spherical cloud of this dark stuff. Exactly what dark matter is made of, however, we still don’t know.

The more hopeful physicists believe that dark matter interacts with normal matter, albeit rarely. If they are right, we might get lucky and see one of those interactions by closely watching samples of normal matter for the occasional bump. Dozens of experiments have looked for such interactions with the putative dark matter particles. They found nothing.

The one exception is the DAMA experiment. DAMA is located beneath the Gran Sasso mountains in Italy, and it has been detecting something since 1995. Unfortunately, it has remained unclear just what that something is.

For many years, the collaboration has reported excess hits in their detector. The signal has meanwhile reached a significance of 8.9σ, well above the 5σ standard for discovery. The number of those still-unexplained events varies periodically over the year, consistent with the change physicists expect due to our planet’s motion around the Sun and the Sun’s motion around the galactic center. The DAMA collaboration claims their measurements cannot be explained by interactions with already known particles.

DAMA data with best-fit modulation curve. [Figure 1 from arXiv:1301.6243]
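For reference, the modulation DAMA fits is essentially a cosine with a period of one year. Below is a minimal sketch of such a fit on synthetic data; the rate, amplitude, and noise level are invented for illustration, and the collaboration’s actual analysis is of course far more involved.

```python
import numpy as np
from scipy.optimize import curve_fit

def modulation(t, R0, Sm, t0):
    """Expected event rate: constant part R0 plus an annual
    modulation of amplitude Sm peaking at day-of-year t0."""
    return R0 + Sm * np.cos(2 * np.pi * (t - t0) / 365.25)

rng = np.random.default_rng(1)
t = np.arange(0.0, 5 * 365, 10.0)                 # five years of data points
truth = modulation(t, R0=1.0, Sm=0.02, t0=152.0)  # peak near June 2 (day ~152)
data = truth + rng.normal(0.0, 0.01, t.size)      # invented noise level

popt, pcov = curve_fit(modulation, t, data, p0=[1.0, 0.01, 100.0])
R0_fit, Sm_fit, t0_fit = popt
print(f"fitted amplitude: {Sm_fit:.4f}, peak at day {t0_fit % 365.25:.0f}")
```

If the signal really comes from our motion through the dark matter halo, the fitted phase should peak around the beginning of June, when the Earth’s velocity adds maximally to the Sun’s.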

The problem with the DAMA experiment, however, is that the results are incompatible with the null results of other dark matter searches. If what DAMA sees were really dark matter, then other experiments should have seen it too, which is not the case.

Most physicists seem to assume that what DAMA measures is really some normal particle, just that the collaboration does not correctly account for signals that come, e.g., from radioactive decays in the surrounding mountains, cosmic rays, or neutrinos. An annual modulation could come about by other means than our motion through a dark matter halo; many variables change throughout the year, such as the temperature and our distance to the Sun. And while DAMA claims, of course, to have taken all of that into account, their results have been met with great skepticism.

I will admit I have always been fond of the DAMA anomaly. Not only because of its high significance, but because the peak of the annual modulation fits with the idea of us flying through dark matter. It’s not all that simple to find another signal that looks like that.

So far, there has been a loophole in the argument that the DAMA signal cannot be a dark matter particle. The DAMA detector differs from all other experiments in one important point: DAMA uses thallium-doped sodium iodide crystals, while the conflicting results come from detectors using other targets, such as xenon or germanium. A dark matter particle that preferentially couples to specific types of atoms could trigger the DAMA detector but not the other detectors. This is not a popular idea, but it would be compatible with observation.

To test whether this is what is going on, another experiment, COSINE, set out to repeat the measurement using the same target material as DAMA. COSINE is located in South Korea and began operation in 2016. The collaboration just published the results from the first 60 days of measurements: COSINE did not see excess events.

Data is consistent with the expected background. [Figure 2 from Nature 564, 83–86 (2018)]


60 days of data is not enough to search for an annual modulation, and including the modulation will greatly improve the statistical significance of the COSINE results. So it’s too early to entirely abandon hope. But that’s certainly a disappointment.

Friday, December 07, 2018

No, negative masses have not revolutionized cosmology

Figure from arXiv:1712.07962
A lot of people have asked me to comment on a paper by Jamie Farnes, titled “A unifying theory of dark energy and dark matter: Negative masses and matter creation within a modified ΛCDM framework” (arXiv:1712.07962).
Farnes is a postdoctoral fellow at the Oxford e-Research Centre and has previously worked on observational astrophysics. A few days ago, Oxford University published a press release celebrating the publication of Farnes’ paper. This press release was then picked up by phys.org and spread from there to a few other outlets. I have since gotten various inquiries from readers and journalists asking for comments.

In his paper, Farnes has a go at cosmology with negative gravitational masses. He further wants these masses to also have negative inertial masses, so that the equivalence principle is maintained. It’s a nice idea. I, as I am sure many other people in the field, have toyed with it. Problem is, it works really badly.

General Relativity is a wonderful theory. It tells you how masses move under the pull of gravity. You do not get to choose how they move; it follows from Einstein’s equations. These equations tell you that like masses attract and unlike masses repel. We don’t normally talk about this because for all we know there are no negative gravitational masses, but you can see what happens in the Newtonian limit. It’s the same as for the electromagnetic force, just with electric charges exchanged for masses, and – importantly – with a flipped sign.
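In formulas, with the convention that a positive force is repulsive:

```latex
F_{\mathrm{Coulomb}} = +\,k\,\frac{q_1 q_2}{r^2},
\qquad
F_{\mathrm{Newton}} = -\,G\,\frac{m_1 m_2}{r^2}.
```

Two negative masses have m₁m₂ > 0, so the force between them carries the same sign as for two positive masses: attractive.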

The deeper reason for this is that the gravitational interaction is mediated by a spin-2 field, whereas the electromagnetic force is mediated by a spin-1 field. Note that for this to be the case, you do not need to talk about the messenger particles associated with the forces when you quantize them (gravitons or photons); it’s simply a statement about the type of interaction, not about the quantization. Again, you don’t get to choose this behavior. Once you work with General Relativity, you are stuck with the spin-2 field, and you conclude: like charges attract and unlike charges repel.

Farnes in his paper instead wants negative gravitational masses to mutually repel each other. But general relativity won’t let you do this. He notices this in section 2.3.3, where he comments on the “counterintuitive” finding that the negative masses don’t actually seem to mutually repel.

He doesn’t say in his paper how he did the N-body simulation in which the negative-mass particles mutually repel (you can tell they do just by looking at the images). Some inquiry by email revealed that he does not actually derive the Newtonian limit from the field equations; he just encodes the repulsive interaction the way he thinks it should be.

Farnes also introduces a creation term for the negative masses, so that he gets something akin to dark energy. A creation term is basically a magic fix by which you can explain everything and anything. Once you have it, you can either postulate an equation of motion that is consistent with the constant creation (or whatever else you want), or you don’t, in which case you just violate energy conservation. Either way, it doesn’t explain anything. And if you are okay with introducing fancy fluids with uncommon equations of motion, you may as well stick with dark energy and dark matter.

There’s a more general point to be made here. The primary reason we use dark matter and dark energy to explain cosmological observations is that they are simple. Occam’s razor vetoes any explanation that is more complicated than that, and Farnes’ approach certainly is not a simple explanation. Furthermore, while it is okay to introduce negative gravitational masses, it is highly problematic to introduce negative inertial masses, because this means the vacuum becomes unstable. If you do this, you can produce particle pairs from a net energy of zero in infinitely large amounts. This fits badly with our observations.
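Schematically, the instability is simple energy bookkeeping: a pair made of one positive-energy and one negative-energy particle costs nothing, so nothing limits how many such pairs the vacuum can spit out:

```latex
E_{\mathrm{total}} = N \left[ (+E) + (-E) \right] = 0
\quad \text{for any number } N \text{ of pairs}.
```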

Now, look. It may be that what I am saying is wrong. Maybe the Newtonian limit is more complicated than it seems. Maybe gravity is not a spin-2 interaction. Maybe you can have mutually repulsive negative masses in general relativity after all. I would totally be in favor of that, as I have written a paper about repulsive gravity myself (it’s cited in Farnes’ paper). I believe that negative gravitational masses are the only known solution to the (real) cosmological constant problem. But any approach that attempts to work with negative masses needs to explain how it overcomes the above-mentioned problems. Farnes’ paper falls short of this.

In summary, the solution proposed by Farnes creates more problems than it solves.

Thursday, December 06, 2018

CERN produces marketing video for new collider and it’s full of lies

The Large Hadron Collider (LHC) just completed its second run. Besides a few anomalies, there’s nothing new in the data. After the discovery of the Higgs boson, there is also no good reason why there should be something else to find, neither at the LHC nor at higher energies, not until 15 orders of magnitude above what we can reach now.

But of course there may be something, whether there’s a good reason or not. You never know before you look. And so, particle physicists are lobbying for the next larger collider.

Illustration of FCC tunnel. Screenshot from this video.

Proposals have been floating around for some while.

The Japanese, for example, like the idea of a linear collider, 20 to 30 miles in length, that would collide electrons and positrons, tentatively dubbed the International Linear Collider (ILC). The committee tasked with formulating the proposal seems to expect that the Japanese Ministry of Science and Technology will “take a pessimistic view of the project.”

Some years ago, the Chinese expressed interest in building a circular electron-positron collider (CEPC) of 50 miles circumference. Nima Arkani-Hamed was so supportive of this option that I heard it nicknamed the Nimatron. The Chinese work in 5-year plans, but the CEPC evidently did not make it onto the 2016 plan.

CERN meanwhile has its own plan, a machine called the Future Circular Collider (FCC). Three different variants are presently under discussion, depending on whether the collisions are between hadrons (FCC-hh), electrons and positrons (FCC-ee), or a mixture of both (FCC-he). The plan for the FCC-hh is now the subject of a study carried out in a €4 million EU project.

This project comes with a promotional video:



The video advertises the FCC as “the world’s biggest scientific instrument” that will address the following questions:

What is 96% of the universe made of?

This presumably refers to the 96% that is dark matter and dark energy combined. While it is conceivable that dark matter is made of heavy particles that the FCC could produce, this is not the case for dark energy. Particle colliders don’t probe dark energy. Dark energy is a low-energy, long-distance phenomenon, the exact opposite of high-energy physics. What the FCC will reliably probe is the other 4%, the same 4% that we have probed for the past 50 years.

What is dark matter?

We have done dozens of experiments that search for dark matter particles, and none has seen anything. It is not impossible that we get lucky and the FCC produces a particle that fits the bill, but there is no knowing that this will be the case.

Why is there no more antimatter?

Because if there were, you wouldn’t be here to ask the question. Presumably this item refers to the baryon asymmetry. This is a fine-tuning problem which simply may not have an answer. And even if it has one, the FCC may not deliver it.

How did the universe begin?

The FCC would not tell us how the universe began. Collisions of large ions produce little blobs of quark gluon plasma, and this plasma almost certainly was also present in the early universe. But what the FCC can produce has a density some 70 orders of magnitude below the density at the beginning of the universe. And even that blob of plasma finds itself in a very different situation at the FCC than it would encounter in the early universe, because in a collider it expands into empty space, whereas in the early universe the plasma filled the whole universe while space expanded.

On the accompanying website, I further learned that the FCC “is a bold leap into completely uncharted territory that would probe… the puzzling masses of neutrinos.”

The neutrino masses are a problem in the Standard Model because either you need right-handed neutrinos, which have never been seen, or the neutrinos are different from the other fermions by being “Majorana particles” (I explained this here).

In the latter case, you’re not going to find out with a particle collider; there are other experiments for that (quick summary here). In the former case, the simplest model has the masses of the right-handed neutrinos at the Planck scale, so the FCC would never see them. You can of course formulate models in which the masses are at lower energies and happen to fall into the FCC range. I am sure you can. That particle physicists can cobble together models that predict anything and everything is why I no longer trust their predictions. Again, it’s not impossible that the FCC would find something, but there is no good reason why that should happen.
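For context, the “simplest model” here is the type-I seesaw, in which the light neutrino masses scale inversely with the mass M of the right-handed neutrinos. Roughly:

```latex
m_\nu \sim \frac{m_D^2}{M},
\qquad
m_D \sim 100\ \mathrm{GeV},\;\; M \sim 10^{19}\ \mathrm{GeV}
\;\;\Rightarrow\;\;
m_\nu \sim 10^{-6}\ \mathrm{eV}.
```

The heavier the right-handed neutrinos, the lighter the ones we observe; a Planck-scale M is therefore perfectly compatible with the data and hopelessly out of collider reach.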

I am not opposed to building a larger collider. Particle colliders that reach higher energies than we probed before are the cleanest and most reliable way to search for new physics. But I am strongly opposed to misleading the public about the prospects of such costly experiments. We presently have no reliable prediction for new physics at any energy below the Planck energy. A next larger collider may find nothing new. That may be depressing, but it’s true.

Correction: The video in question was produced by the FCC study group at CERN and is hosted on the CERN website, but was not produced by CERN.