Friday, June 30, 2017

To understand the foundations of physics, study numerology

Numbers speak.
Once upon a time, we had problems in the foundations of physics. Then we solved them. That was 40 years ago. Today we spend most of our time discussing non-problems.

Here is one of these non-problems. Did you know that the universe is spatially almost flat? There is a number in the cosmological concordance model called the “curvature parameter” that, according to current observation, has a value of 0.000 plus-minus 0.005.

Why is that a problem? I don’t know. But here is the story that cosmologists tell.

From the equations of General Relativity you can calculate the dynamics of the universe. This means you get relations between the values of observable quantities today and the values they must have had in the early universe.

The contribution of curvature to the dynamics, it turns out, increases relative to that of matter and radiation as the universe expands. This means for the curvature parameter to be smaller than 0.005 today, it must have been smaller than 10^-60 or so briefly after the Big Bang.
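If you want to see where a number like 10^-60 comes from, here is a back-of-the-envelope version of the argument in a few lines of Python. It uses the standard scaling of the curvature contribution – proportional to a² during radiation domination and to a during matter domination – with rough input numbers, and it ignores details like the change in the number of relativistic degrees of freedom, so take the result as an order-of-magnitude estimate only:

```python
# Rough estimate of how small the curvature contribution Omega_k had to be near the
# Planck time for it to be below ~0.005 today. Assumes Omega_k ~ a^2 during radiation
# domination (H^2 ~ a^-4) and Omega_k ~ a during matter domination (H^2 ~ a^-3);
# all input numbers are approximate.

omega_k_today = 0.005            # current observational bound on |Omega_k|
a_eq          = 2.9e-4           # scale factor at matter-radiation equality (a_today = 1)
T_today_GeV   = 2.35e-13         # photon temperature today, in GeV
T_planck_GeV  = 1.2e19           # Planck temperature, in GeV
a_planck      = T_today_GeV / T_planck_GeV   # a ~ 1/T during radiation domination, roughly

# run Omega_k backwards: one factor of a through the matter era, a^2 through the radiation era
omega_k_early = omega_k_today * a_eq * (a_planck / a_eq) ** 2
print(f"|Omega_k| near the Planck time: ~{omega_k_early:.0e}")
# comes out around 10^-62, i.e. within a few orders of magnitude of the 10^-60 quoted above
```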

That, so the story goes, is bad, because where would you get such a small number from?

Well, let me ask in return, where do we get any number from anyway? Why is 10^-60 any worse than, say, 1.778, or exp(67π)?

That the curvature must have had a small value in the early universe is called the “flatness problem,” and since it’s on Wikipedia it’s officially more real than me. And it’s an important problem. It’s important because it justifies the many attempts to solve it.

The presently most popular solution to the flatness problem is inflation – a rapid period of expansion briefly after the Big Bang. Because inflation decreases the relevance of curvature contributions dramatically – by something like 200 orders of magnitude or so – you no longer have to start with some tiny value. Instead, if you start with any curvature parameter smaller than 10^197, the value today will be compatible with observation.
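To see how inflation does that, recall that the Hubble rate is roughly constant during inflation, so the curvature contribution falls like exp(-2N) with the number N of e-folds. A minimal sketch, where the quoted ~200 orders of magnitude correspond to roughly 230 e-folds and the often-quoted minimal number is around 60 – the values of N below are illustrative:

```python
# How many orders of magnitude inflation suppresses the curvature contribution:
# with H roughly constant, Omega_k ~ 1/(aH)^2 ~ exp(-2N) after N e-folds.
# The choice of N is model-dependent; the values below are illustrative.
import math

def curvature_suppression(N):
    """Factor by which |Omega_k| shrinks during N e-folds of inflation."""
    return math.exp(-2 * N)

for N in (60, 120, 230):
    orders = math.log10(curvature_suppression(N))
    print(f"N = {N:3d} e-folds  ->  |Omega_k| suppressed by ~10^{orders:.0f}")
```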

Ah, you might say, but clearly there are more numbers smaller than 10^197 than there are numbers smaller than 10^-60, so isn’t that an improvement?

Unfortunately, no. There are infinitely many numbers in both cases. Besides that, it’s totally irrelevant. Whatever the curvature parameter, the probability of getting that specific number from any continuous probability distribution is zero, regardless of its value. So the argument is bunk. Logical mush. Plainly wrong. Why do I keep hearing it?

Worse, if you want to pick parameters for our theories according to a uniform probability distribution on the real axis, then all parameters would come out infinitely large with probability one. Sucks. Also, doesn’t describe observations*.

And there is another problem with that argument, namely, what probability distribution are we even talking about? Where did it come from? Certainly not from General Relativity because a theory can’t predict a distribution on its own theory space. More logical mush.

If you have trouble seeing the trouble, let me ask the question differently. Suppose we managed to measure the curvature parameter today to a precision of 60 digits after the decimal point. Yeah, it’s not going to happen, but bear with me. Now you’d have to explain all these 60 digits – but that is as fine-tuned as a zero followed by 60 zeroes would have been!

Here is a different example for this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.
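For concreteness, here is what that kind of cancellation looks like in plain numbers – a toy Python illustration in which the size of the “quantum correction” is made up, and only the rough scales (Planck mass ~10^19 GeV, Higgs mass ~125 GeV) are real:

```python
# Toy illustration of the fine-tuning argument: if a bare parameter and a quantum
# correction are both of order the Planck mass, they must agree in their first ~15
# leading digits for the difference to come out at the observed Higgs mass.
# The value of the "correction" below is invented; only its order of magnitude matters.
from decimal import Decimal, getcontext
getcontext().prec = 40                         # ordinary floats could not even resolve this

m_higgs    = Decimal("125")                    # observed Higgs mass in GeV
correction = Decimal("8832687998221747934.5")  # made-up correction of order the Planck mass
bare       = correction + m_higgs              # the bare parameter needed to end up at 125 GeV

print("correction :", correction)
print("bare term  :", bare)
print("difference :", bare - correction, "GeV")   # the two ~10^19 GeV numbers agree in 15 digits
```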

Now consider for a moment that the mass of the Higgs were actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a fine-tuning problem. So, erm, what was the problem again?

The cosmological constant problem is another such confusion. If you don’t know how to calculate that constant – and we don’t, because we don’t have a theory for Planck scale physics – then it’s a free parameter. You go and measure it and that’s all there is to say about it.

And there are more numerological arguments in the foundations of physics, all of which are wrong, wrong, wrong for the same reasons. The unification of the gauge couplings. The so-called WIMP-miracle (RIP). The strong CP problem. All these are numerical coincidences that supposedly need an explanation. But you can’t speak about coincidence without quantifying a probability!

Do my colleagues deliberately lie when they claim these coincidences are problems, or do they actually believe what they say? I’m not sure what’s worse, but suspect most of them actually believe it.

Many of my readers like to jump to conclusions about my opinions. But you are not one of them. You and I, therefore, both know that I did not say that inflation is bunk. Rather, I said that the most common arguments for inflation are bunk. There are good arguments for inflation, but that’s a different story and shall be told another time.

And since you are among the few who actually read what I wrote, you also understand I didn’t say the cosmological constant is not a problem. I just said its value isn’t the problem. What actually needs an explanation is why it doesn’t fluctuate – which is what vacuum fluctuations should make it do, and which gives rise to what Niayesh called the cosmological non-constant problem.

Enlightened as you are, you would also never think I said we shouldn’t try to explain the value of some parameter. It is always good to look for better explanations for the assumptions underlying current theories – where by “better” I mean either simpler or able to explain more.

No, what draws my ire is that most of the explanations my colleagues put forward aren’t any better than just fixing a parameter through measurement – they are worse. The reason is that the problem they are trying to solve – the smallness of some numbers – isn’t a problem. It’s merely a property they perceive as inelegant.

I therefore have a lot of sympathy for philosopher Tim Maudlin who recently complained that “attention to conceptual clarity (as opposed to calculational technique) is not part of the physics curriculum” which results in inevitable confusion – not to mention waste of time.

In response, a pseudonymous commenter remarked that a discussion between a physicist and a philosopher of physics is “like a debate between an experienced car mechanic and someone who has read (or perhaps skimmed) a book about cars.”

Trouble is, in the foundations of physics today most of the car mechanics are repairing cars that run just fine – and then bill you for it.

I am not opposed to using aesthetic arguments as research motivations. We all have to get our inspiration from somewhere. But I do think it’s bad science to pretend numerological arguments are anything more than appeals to beauty. That very small or very large numbers require an explanation is a belief – and it’s a belief that has been adopted by the vast majority of the community. That shouldn’t happen in any scientific discipline.

As a consequence, high energy physics and cosmology are now populated with people who don’t understand that fine-tuning arguments have no logical basis. The flatness “problem” is preached in textbooks. The naturalness “problem” is all over the literature. The cosmological constant “problem” is on every popular science page. And so the myths live on.

If you break down the numbers, it’s me against ten-thousand of the most intelligent people on the planet. Am I crazy? I surely am.


*Though that’s exactly what happens with bare values.

Away Note

I’ll be traveling the next two weeks. First to Cambridge to celebrate Stephen Hawking’s 75th birthday (which was in January), then to Trieste for a conference on “Probing the spacetime fabric: from concepts to phenomenology.” Rant coming up later today, but after that please prepare for a slow time.

Monday, June 26, 2017

Dear Dr B: Is science democratic?

    “Hi Bee,

    One of the often repeated phrases here in Italy by so called “science enthusiasts” is that “science is not democratic”, which to me sounds like an excuse for someone to justify some authoritarian or semi-fascist fantasy.

    We see this on countless “Science pages”, one very popular example being Fare Serata Con Galileo. It's not a bad page per se, quite the contrary, but the level of comments including variations of “Democracy is overrated”, “Darwin works to eliminate weak and stupid people” and the usual “Science is not democratic” is unbearable. It underscores a troubling “sympathy for authoritarian politics” that to me seems to be more and more common among “science enthusiasts". The classic example it’s made is “the speed of light is not voted”, which to me, as true as it may be, has some sinister resonance.

    Could you comment on this on your blog?

    Luca S.”


Dear Luca,

Wow, I had no idea there’s so much hatred in the backyards of science communication.

Hand count at convention of the German party CDU. Image Source: AFP
It’s correct that science isn’t democratic, but that doesn’t mean it’s fascistic. Science is a collective enterprise and a type of adaptive system, just like democracy is. But science isn’t democratic any more than sausage is a fruit just because you can eat both.

In an adaptive system, small modifications create a feedback that leads to optimization. The best-known example is probably Darwinian evolution, in which a species’ genetic information receives feedback through natural selection, thereby optimizing the odds of successful reproduction. A market economy is also an adaptive system. Here, the feedback happens through pricing. A free market optimizes “utility,” which is, roughly speaking, a measure of the agents’ (customers’ and producers’) satisfaction.

Democracy too is an adaptive system. Its task is to match decisions that affect the whole collective with the electorate’s values. We use democracy to keep our “is” close to the “ought.”

Democracies are more stable than monarchies or autocracies because an independent leader is unlikely to continuously make decisions which the governed people approve of. And the more governed people disapprove, the more likely they are to chop off the king’s head. Democracy, hence, works better than monarchy for the same reason a free market works better than a planned economy: It uses feedback for optimization, and thereby increases the probability for serving peoples’ interests.

The scientific system too uses feedback for optimization – this is the very basis of the scientific method: A hypothesis that does not explain observations has to be discarded or amended. But that’s about where similarities end.

The most important difference between the scientific, democratic, and economic system is the weight of an individual’s influence. In a free market, influence is weighted by wealth: The more money you can invest, the more influence you can have. In a democracy, each voter’s opinion has the same weight. That’s pretty much the definition of democracy – and note that this is a value in itself.

In science, influence is correlated with expertise. While expertise doesn’t guarantee influence, an expert is more likely to hold relevant knowledge, hence expertise is in practice strongly correlated with influence.

There are a lot of things that can go wrong with scientific self-optimization – and a lot of things do go wrong – but that’s a different story and shall be told another time. Still, optimizing hypotheses by evaluating empirical adequacy is how it works in principle. Hence, science clearly isn’t democratic.

Democracy, however, plays an important role for science.

For science to work properly, scientists must be free to communicate and discuss their findings. Non-democratic societies often stifle discussion on certain topics which can create a tension with the scientific system. This doesn’t have to be the case – science can flourish just fine in non-democratic societies – but free speech strongly links the two.

Science also plays an important role for democracy.

Politics isn’t done once the electorate has been polled on what future they would like to see. Elected representatives then have to find out how to best work towards this future, and scientific knowledge is necessary to get from “is” to “ought.”

But things often go wrong at the step from “is” to “ought.” Trouble is, the scientific system does not export knowledge in a format that can be directly imported by the political system. The information that elected representatives would need to make decisions is a breakdown of predictions with quantified risks and uncertainties. But science doesn’t come with a mechanism to aggregate knowledge. For an outsider, it’s a mess of technical terms and scientific papers and conferences – and every possible opinion seems to be defended by someone!

As a result, public discourse often draws on the “scientific consensus” but this is a bad way to quantify risk and uncertainty.

To begin with, scientists are terribly disagreeable and the only consensuses I know of are those on thousand-year-old questions. More importantly, counting the number of people who agree with a statement simply isn’t an accurate quantifier of certainty. The result of such counting inevitably depends on how much expertise the counted people have: Too little expertise, and they’re likely to be ill-informed. Too much expertise, and they’re likely to have personal stakes in the debate. Worse still, the head count can easily be skewed by pouring money into some research programs.

Therefore, the best way we presently have to make scientific knowledge digestible for politicians is to use independent panels. Such panels – done well – can circumvent both the problem of personal bias and that of the skewed head count. In the long run, however, I think we need a fourth arm of government to prevent politicians from attempting to interpret scientific debate. It’s not their job and it shouldn’t be.

But those “science enthusiasts” who you complain about are as wrong-headed as the science deniers who selectively disregard facts that are inconvenient for their political agenda. Both of them confuse opinions about what “ought to be” with the question of how to get there. The former is a matter of opinion, the latter isn’t.

Take the vaccine debate, for example. It’s one question what the benefits of vaccination are and who is at risk from side effects – that’s a scientific debate. It’s another question entirely whether we should allow parents to put their and other peoples’ children at an increased risk of early death or a life of disability. There’s no scientific and no logical argument that tells us where to draw the line.

Personally, I think parents who don’t vaccinate their kids are harming minors and society shouldn’t tolerate such behavior. But this debate has very little to do with scientific authority. Rather, the issue is to what extent parents are allowed to ruin their offspring’s life. Your values may differ from mine.

There is also, I should add, no scientific and no logical argument for counting the vote of everyone (above some quite arbitrary age threshold) with the same weight. Indeed, as Daniel Gilbert argues, we are pretty bad at predicting what will make us happy. If he’s right, then the whole idea of democracy is based on a flawed premise.

So – science isn’t democratic, never has been, never will be. But rather than stating the obvious, we should find ways to better integrate this non-democratically obtained knowledge into our democracies. Claiming that science settles political debate is as stupid as ignoring knowledge that is relevant to make informed decisions.

Science can only help us to understand the risks and opportunities that our actions bring. It can’t tell us what to do.

Thanks for an interesting question.

Tuesday, June 20, 2017

If tensions in cosmological data are not measurement problems, they probably mean dark energy changes

Galaxy pumpkin. Src: The Swell Designer
According to physics, the universe and everything in it can be explained by but a handful of equations. They’re difficult equations, all right, but their simplest feature is also the most mysterious one. The equations contain a few dozen parameters that are – for all we presently know – unchanging, and yet these numbers determine everything about the world we inhabit.

Physicists have spent much brain-power on the question where these numbers come from, whether they could have taken any other values than the ones we observe, and whether exploring their origin is even in the realm of science.

One of the key questions when it comes to the parameters is whether they are really constant, or whether they are time-dependent. If they vary, then their time-dependence would have to be determined by yet another equation, and that would change the whole story that we currently tell about our universe.

The best known of the fundamental parameters that dictate how the universe behaves is the cosmological constant. It is what causes the universe’s expansion to accelerate. The cosmological constant is usually assumed to be, well, constant. If it isn’t, it is more generally referred to as ‘dark energy.’ If our current theories for the cosmos are correct, our universe will expand forever into a cold and dark future.

The value of the cosmological constant is infamously the worst prediction ever made using quantum field theory; the math says it should be 120 orders of magnitude larger than what we observe. But that the cosmological constant has a small non-zero value is extremely well established by measurement, well enough that a Nobel Prize was awarded for its discovery in 2011.

The Nobel Prize winners Perlmutter, Schmidt, and Riess measured the expansion rate of the universe, encoded in the Hubble parameter, by looking at supernovae distributed over various distances. They concluded that the universe is not only expanding, but is expanding at an increasing rate – a behavior that can only be explained by a nonzero cosmological constant.

It is controversial, though, exactly how fast the expansion is today – that is, how large the current value of the Hubble constant, H0, is. There are different ways to measure this constant, and physicists have known for a few years that the different measurements give different results. This tension in the data is difficult to explain, and it has so far remained unresolved.

One way to determine the Hubble constant is by using the cosmic microwave background (CMB). The small temperature fluctuations in the CMB spectrum encode the distribution of plasma in the early universe and the changes of the radiation since. From fitting the spectrum with the parameters that determine the expansion of the universe, physicists get a value for the Hubble constant. The most accurate of such measurements is currently that from the Planck satellite.

Another way to determine the Hubble constant is to deduce the expansion of the universe from the redshift of the light from distant sources. This is the way the Nobel-Prize winners made their discovery, and the precision of this method has since been improved. These two ways to determine the Hubble constant give results that differ with a statistical significance of 3.4 σ. That’s a probability of less than one in a thousand that the difference is due to random fluctuations in the data.
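For what it’s worth, the quoted significance is easy to reproduce. Here is a quick sketch with illustrative numbers close to the 2016 values (Planck CMB fit: H0 ≈ 66.93 ± 0.62 km/s/Mpc; local distance ladder: H0 ≈ 73.24 ± 1.74 km/s/Mpc), assuming Gaussian and independent errors:

```python
# Tension between two H0 measurements, in units of the combined standard deviation,
# and the corresponding two-sided p-value under Gaussian assumptions.
from math import sqrt, erf

h0_cmb,   err_cmb   = 66.93, 0.62    # CMB-based value (illustrative)
h0_local, err_local = 73.24, 1.74    # distance-ladder value (illustrative)

tension = abs(h0_local - h0_cmb) / sqrt(err_cmb**2 + err_local**2)
p_value = 1 - erf(tension / sqrt(2))   # probability of a chance fluctuation at least this large

print(f"tension : {tension:.1f} sigma")   # ~3.4 sigma
print(f"p-value : {p_value:.1e}")         # a bit less than one in a thousand
```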

Various explanations for this have since been proposed. One possibility is that it’s a systematic error in the measurement, most likely in the CMB measurement from the Planck mission. There are reasons to be skeptical because the tension goes away when the finer structures (the large multipole moments) of the data are omitted. For many astrophysicists, this is an indicator that something’s amiss either with the Planck measurement or the data analysis.

Or maybe it’s a real effect. In this case, several modifications of the standard cosmological model have been put forward. They range from additional neutrinos to massive gravitons to changes in the cosmological constant.

That the cosmological constant changes from one place to the next is not an appealing option because this tends to screw up the CMB spectrum too much. But the currently most popular explanation for the data tension seems to be that the cosmological constant changes in time.

A group of researchers from Spain, for example, claims that they have a stunning 4.1 σ preference for a time-dependent cosmological constant over an actually constant one.

This claim seems to have been widely ignored, and indeed one should be cautious. They test for a very specific time-dependence, and their statistical analysis does not account for other parameterizations they might have previously tried. (The theoretical physicist’s variant of post-selection bias.)

Moreover, they fit their model not only to the two above mentioned datasets, but to a whole bunch of others at the same time. This makes it hard to tell why their model seems to work better. A couple of cosmologists who I asked why this group’s remarkable results have been ignored complained that the data analysis is opaque.

Be that as it may, just when I put the Spaniards’ paper away, I saw another paper that supported their claim with an entirely independent study based on weak gravitational lensing.

Weak gravitational lensing happens when a foreground galaxy distorts the images of farther away galaxies. The qualifier ‘weak’ sets this effect apart from strong lensing, which is caused by massive nearby objects – such as black holes – and deforms point-like sources to partial rings. Weak gravitational lensing, on the other hand, is not as easily recognizable and must be inferred from the statistical distribution of the shapes of galaxies.

The Kilo Degree Survey (KiDS) has gathered and analyzed weak lensing data from about 15 million distant galaxies. While their measurements are not sensitive to the expansion of the universe, they are sensitive to the density of dark energy, which affects the way light travels from the galaxies towards us. This density is encoded in a cosmological parameter imaginatively named σ8. Their data, too, is in conflict with the CMB data from the Planck satellite.

The members of the KiDS collaboration have tried out which changes to the cosmological standard model work best to ease the tension in the data. Intriguingly, it turns out that ahead of all explanations the one that works best is that the cosmological constant changes with time. The change is such that the effects of accelerated expansion are becoming more pronounced, not less.

In summary, it seems increasingly unlikely the tension in the cosmological data is due to chance. Cosmologists are cautious and most of them bet on a systematic problem with the Planck data. However, if the Planck measurement receives independent confirmation, the next best bet is on time-dependent dark energy. It wouldn’t make our future any brighter though. The universe would still expand forever into cold darkness.


[This article previously appeared on Starts With A Bang.]

Update June 21: Corrected several sentences to address comments below.

Wednesday, June 14, 2017

What’s new in high energy physics? Clockworks.

Clockworks. Image via dwan1509.
High energy physics has phases. I don’t mean phases like matter has – solid, liquid, gaseous and so on. I mean phases like cranky toddlers have: One week they eat nothing but noodles, the next week anything as long as it’s white, then toast with butter but it must be cut into triangles.

High energy physics is like this. Twenty years ago, it was extra dimensions, then we had micro black holes, unparticles, little Higgses – and the list goes on.

But there hasn’t been a big, new trend since the LHC falsified everything that was falsifiable. It’s like particle physics stepped over the edge of a cliff but hasn’t looked down and now just walks on nothing.

The best candidate for a new trend that I saw in the past years is the “clockwork mechanism,” though the idea just took a blow and I’m not sure it’ll go much farther.

The origins of the model go back to late 2015, when the term “clockwork mechanism” was coined by Kaplan and Rattazzi, though Cho and Im pursued a similar idea and published it at almost the same time. In August 2016, clockworks were picked up by Giudice and McCullough, who advertised the model as “a useful tool for model-building applications” that “offers a solution to the Higgs naturalness problem.”

The Higgs naturalness problem, to remind you, is that the mass of the Higgs receives large quantum corrections. The Higgs is the only particle in the standard model that suffers from this problem because it’s the only scalar. These quantum corrections can be cancelled by subtracting a constant so that the remainder fits the observed value, but then the constant would have to be very finely tuned. Most particle physicists think that this is too much of a coincidence and hence search for other explanations.

Before the LHC turned on, the most popular solution to the Higgs naturalness issue was that some new physics would show up in the energy range comparable to the Higgs mass. We now know, however, that there’s no new physics nearby, and so the Higgs mass has remained unnatural.

Clockworks are a mechanism to create very small numbers in a “natural” way, that is, from numbers that are close to 1. This can be done by copying a field multiple times and then coupling each copy to two neighbors so that they form a closed chain. This is the “clockwork,” and it is assumed to have couplings with values close to 1 which are, however, asymmetric among the chain neighbors.

The clockwork’s chain of fields has eigenmodes that can be obtained by diagonalizing the mass matrix. These modes are the “gears” of the clockwork and they contain one massless particle.

The important feature of the clockwork is that this massless particle’s mode has a coupling that scales with the clockwork’s coupling taken to the N-th power, where N is the number of clockwork gears. This means even if the original clockwork coupling was only a little smaller than 1, the coupling of the lightest clockwork mode becomes small very fast as the clockwork grows.

Thus, clockworks are basically a complicated way to make a number of order 1 small by exponentiating it.
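If you want to see the exponential suppression at work, here is a minimal numerical sketch of a scalar clockwork chain. It assumes the standard nearest-neighbour mass terms, a sum over (φ_j − q φ_{j+1})² for j = 0…N−1, builds the corresponding mass matrix, and checks that the massless mode’s overlap with the last site falls off like q^−N, up to an order-one normalization:

```python
# Scalar clockwork chain: N+1 fields with mass terms sum_j (phi_j - q*phi_{j+1})^2.
# The lightest eigenmode is massless and its component on the last site scales ~ q^-N,
# which is what exponentially suppresses its coupling to anything attached there.
import numpy as np

def clockwork_mass_matrix(N, q):
    """(N+1)x(N+1) mass-squared matrix of the chain, in units of the site mass scale."""
    M = np.zeros((N + 1, N + 1))
    for j in range(N):                  # one term (phi_j - q*phi_{j+1})^2 per link
        M[j, j]         += 1.0
        M[j + 1, j + 1] += q**2
        M[j, j + 1]     -= q
        M[j + 1, j]     -= q
    return M

N, q = 20, 1.3                          # 20 "gears", coupling of order 1
eigvals, eigvecs = np.linalg.eigh(clockwork_mass_matrix(N, q))
zero_mode = eigvecs[:, 0]               # eigh returns eigenvalues in ascending order

print(f"lightest eigenvalue : {eigvals[0]:.1e}   (zero up to rounding)")
print(f"overlap with site N : {abs(zero_mode[-1]):.1e}")
print(f"compare with q^-N   : {q**(-N):.1e}")   # same order of magnitude
```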

I’m an outspoken critic of arguments from naturalness (and have been long before we had the LHC data) so it won’t surprise you to hear that I am not impressed. I fail to see how choosing one constant to match observation is supposedly worse than introducing not only a new constant, but also N copies of some new field with a particular coupling pattern.

Either way, by March 2017, Ben Allanach reported from Rencontres de Moriond – the most important annual conference in particle physics – that clockworks are “getting quite a bit of attention” and are “new fertile ground.”

Ben is right. Clockworks contain one light and weakly coupled mode – difficult to detect because of the weak coupling – and a spectrum of strongly coupled but massive modes – difficult to detect because they’re massive. That makes the model appealing because it will remain impossible to rule it out for a while. It is, therefore, a perfect playground for phenomenologists.

And sure enough, the arXiv has since seen further papers on the topic. There’s clockwork inflation and clockwork dark matter, a clockwork axion and clockwork composite Higgses – you get the picture.

But then, in April 2017, a criticism of the clockwork mechanism appears on the arXiv. Its authors Craig, Garcia Garcia, and Sutherland point out that the clockwork mechanism can only be used if the fields in the clockwork’s chain have abelian symmetry groups. If the group isn’t abelian the generators will mix together in the zero mode, and maintaining gauge symmetry then demands that all couplings be equal to one. This severely limits the application range of the model.

A month later, Giudice and McCullough reply to this criticism essentially by saying “we know this.” I have no reason to doubt it, but I still found the Craig et al criticism useful for clarifying what clockworks can and can’t do. This means in particular that the supposed solution to the hierarchy problem does not work as desired because to maintain general covariance one is forced to put a hierarchy of scales into the coupling already.

I am not sure whether this will discourage particle physicists from pursuing the idea further or whether more complicated versions of clockworks will be invented to save naturalness. But I’m confident that – like a toddler’s phase – this too shall pass.

Wednesday, June 07, 2017

Dear Dr B: What are the chances of the universe ending out of nowhere due to vacuum decay?

    “Dear Sabine,

    my names [-------]. I'm an anxiety sufferer of the unknown and have been for 4 years. I've recently came across some articles saying that the universe could just end out of no where either through false vacuum/vacuum bubbles or just ending and I'm just wondering what the chances of this are occurring anytime soon. I know it sounds silly but I'd be dearly greatful for your reply and hopefully look forward to that

    Many thanks

    [--------]”


Dear Anonymous,

We can’t predict anything.

You see, we make predictions by seeking explanations for available data, and then extrapolating the best explanation into the future. It’s called “abductive reasoning,” or “inference to the best explanation” and it sounds reasonable until you ask why it works. To which the answer is “Nobody knows.”

We know that it works. But we can’t justify inference with inference, hence there’s no telling whether the universe will continue to be predictable. Consequently, there is also no way to exclude that tomorrow the laws of nature will stop and planet Earth will fall apart. But do not despair.

Francis Bacon – widely acclaimed as the first to formulate the scientific method – might have reasoned his way out by noting there are only two possibilities. Either the laws of nature will break down unpredictably or they won’t. If they do, there’s nothing we can do about it. If they don’t, it would be stupid not to use predictions to improve our lives.

It’s better to prepare for a future that you don’t have than to not prepare for a future you do have. And science is based on this reasoning: We don’t know why the universe is comprehensible and why the laws of nature are predictive. But we cannot do anything about unknown unknowns anyway, so we ignore them. And if we do that, we can benefit from our extrapolations.

Just how well scientific predictions work depends on what you try to predict. Physics is the currently most predictive discipline because it deals with the simplest of systems, those whose properties we can measure to high precision and whose behavior we can describe with mathematics. This enables physicists to make quantitatively accurate predictions – if they have sufficient data to extrapolate.

The articles that you read about vacuum decay, however, are unreliable extrapolations of incomplete evidence.

Existing data in particle physics are well-described by a field – the Higgs-field – that fills the universe and gives masses to elementary particles. This works because the value of the Higgs-field is different from zero even in vacuum. We say it has a “non-vanishing vacuum expectation value.” The vacuum expectation value can be calculated from the masses of the known particles.

In the currently most widely used theory for the Higgs and its properties, the vacuum expectation value is non-zero because the field has a potential with a local minimum at a non-zero field value.

We do not, however, know whether the minimum which the Higgs currently occupies is the only minimum of the potential and – if the potential has another minimum – whether that other minimum is at a smaller energy. If that were so, then the present state of the vacuum would not be stable, it would merely be “meta-stable” and would eventually decay to the lowest minimum. In this case, we would live today in what is called a “false vacuum.”
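To picture what “meta-stable” means, here is a toy potential with two minima of different depth. The shape and the numbers are entirely made up – this is not the Higgs potential – but it shows the situation: a higher-lying “false” vacuum next to a deeper “true” one:

```python
# Toy potential with a false (higher) and a true (deeper) minimum.
# The functional form and coefficients are invented for illustration only.
import numpy as np

def V(phi):
    return 0.25 * phi**4 - 0.5 * phi**2 + 0.05 * phi   # tilted double well

phi  = np.linspace(-1.8, 1.8, 100001)
vals = V(phi)

# crude numerical search for local minima on the grid
is_min = (vals[1:-1] < vals[:-2]) & (vals[1:-1] < vals[2:])
for m in phi[1:-1][is_min]:
    print(f"minimum at phi = {m:+.3f},  V = {V(m):+.4f}")
# The shallower minimum is the "false vacuum": classically stable,
# but it can decay to the deeper one, e.g. by quantum tunneling.
```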



If our vacuum decays, the world will end – I don’t know a more appropriate expression. Such a decay, once triggered, releases an enormous amount of energy – and it spreads at the speed of light, tearing apart all matter it comes in contact with, until all vacuum has decayed.

How can we tell whether this is going to happen?

Well, we can try to measure the properties of the Higgs’ potential and then extrapolate it away from the minimum. This works much like Taylor series expansions, and it has the same pitfalls. Indeed, making predictions about the minima of a function based on a polynomial expansion is generally a bad idea.

Just look for example at the Taylor series of the sine function. The full function has an infinite number of minima at exactly the same value, but you’d never guess that from the first terms in the series expansion. First it has one minimum, then it has two minima of different value, then again it has only one – and the higher the order of the expansion, the more minima you get.
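You can check this yourself by counting the local minima of the truncated series order by order – a few lines of Python suffice (this only illustrates the point about truncated expansions, it says nothing about the actual Higgs potential):

```python
# Count the local minima of the Taylor polynomials of sin(x) about 0, order by order.
from math import factorial
import numpy as np

def taylor_sin_coeffs(order):
    """Coefficients (lowest power first) of the Taylor polynomial of sin about 0."""
    c = np.zeros(order + 1)
    for k in range(1, order + 1, 2):           # sin has only odd powers
        c[k] = (-1) ** ((k - 1) // 2) / factorial(k)
    return c

def count_local_minima(coeffs):
    """Number of real critical points that are local minima of the polynomial."""
    p = np.polynomial.Polynomial(coeffs)
    dp, ddp = p.deriv(), p.deriv(2)
    roots = dp.roots()
    real_roots = roots[np.abs(roots.imag) < 1e-6].real
    return sum(1 for x in real_roots if ddp(x) > 0)

for order in (3, 5, 7, 9, 11, 13, 15):
    n = count_local_minima(taylor_sin_coeffs(order))
    print(f"Taylor polynomial of order {order:2d}: {n} local minima")
```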

The situation for the Higgs’ potential is more complicated because the coefficients are not constant, but the argument is similar. If you extract the best-fit potential from the available data and extrapolate it to other values of the Higgs-field, then you find that our present vacuum is meta-stable.

The figure below shows the situation for the current data (figure from this paper). The horizontal axis is the Higgs mass, the vertical axis the mass of the top-quark. The current best-fit is the upper left red point in the white region labeled “Metastability.”
Figure 2 from Bednyakov et al, Phys. Rev. Lett. 115, 201802 (2015).


This meta-stable vacuum has, however, a ridiculously long lifetime of about 10^600 times the current age of the universe, give or take a few billion billion billion years. This means that the vacuum will almost certainly not decay until all stars have burnt out.

However, this extrapolation of the potential assumes that there aren’t any unknown particles at energies higher than what we have probed, and no other changes to physics as we know it either. And there is simply no telling whether this assumption is correct.

The analysis of vacuum stability is not merely an extrapolation of the presently known laws into the future – which would be justified – it is also an extrapolation of the presently known laws into an untested energy regime – which is not justified. This stability debate is therefore little more than a mathematical exercise, a funny way to quantify what we already know about the Higgs’ potential.

Besides, of all the ways I can think of that humanity could go extinct, this one worries me least: It would happen without warning, it would happen quickly, and nobody would be left behind to mourn. I worry much more about events that may cause much suffering, like asteroid impacts, global epidemics, nuclear war – and my worry-list goes on.

Not all worries can be cured by rational thought, but since I double-checked you want facts and not comfort, fact is that current data indicates our vacuum is meta-stable. But its decay is an unreliable prediction based on the unfounded assumption that there either are no changes to physics at energies beyond the ones we have tested, or that such changes don’t matter. And even if you buy this, the vacuum almost certainly wouldn’t decay as long as the universe is hospitable for life.

Particle physics is good for many things, but generating potent worries isn’t one of them. The biggest killer in physics is still the 2nd law of thermodynamics. It will get us all, eventually. But keep in mind that the only reason we play the prediction game is to get the best out of the limited time that we have.

Thanks for an interesting question!