Pages

Thursday, February 23, 2017

Book Review: “The Particle Zoo” by Gavin Hesketh

The Particle Zoo: The Search for the Fundamental Nature of Reality
By Gavin Hesketh
Quercus (1 Sept. 2016)

The first word in Gavin Hesketh’s book The Particle Zoo is “Beauty.” I read the word, closed the book, and didn’t reopen it for several months. Having just finished writing a book of my own about the role of beauty in theoretical physics, it was the absolute last thing I wanted to hear about.

I finally gave Hesketh’s book a second chance and took it along on a recent flight. It turned out that once I got past the somewhat nauseating sales pitch at the beginning, the content improved considerably.

Hesketh provides a readable and accessible no-nonsense introduction to the standard model and quantum field theory. He explains everything as well as possible without using equations.

The author is an experimentalist and part of the LHC’s ATLAS collaboration. The Particle Zoo also has a few paragraphs about what it is like to work in such large collaborations. Personally, I found this the most interesting part of the book. Hesketh also does a great job of describing how the various types of particle detectors work.

Had the book ended here, it would have been a job well done. But Hesketh goes on to elaborate on physics beyond the standard model. And there he’s clearly out of his depth.

Problems start when he begins laying out the shortcomings of the standard model, leaving the reader with the impression that it’s non-renormalizable. I suspect (or hope) he wasn’t referring to non-renormalizability but maybe to Landau poles or the non-convergence of the perturbative expansion; the explanation, in any case, is murky.

Murky is bad, but wrong is worse. And wrong follows. For example, to generate excitement for new physics, Hesketh writes:
“Some theories suggest that antimatter responds to gravity in a different way: matter and antimatter may repel each other… [W]hile this is a strange idea, so far it is one that we cannot rule out.”
I do not know of any consistent theory that suggests antimatter responds differently to gravity than matter, and I say that as one of the three theorists on the planet who have worked on antigravity. I have no idea what Hesketh is referring to in this paragraph.

It does not help that “The Particle Zoo” does not have any references. I understand that a popular science book isn’t a review article, but I would expect a scientist to at least cite sources for historical facts and quotations, which isn’t the case here.

He then confuses a “Theory of Everything” with quantum gravity, and about supersymmetry (SuSy) he writes:
“[I]f SuSy is possible and it makes everything much neater, it really should exist. Otherwise it seems that nature has apparently gone out of its way to avoid it, making the equations uglier at the same time, and we would have to explain why that is.”
Which is a statement that should be embarrassing for any scientist to make.

Hesketh’s attitude to supersymmetry is, however, somewhat schizophrenic, because he later writes:
“[T]his is really why SuSy has lived for so long: whenever an experiment finds no signs of the super-particles, it is possible merely to adjust some of these free parameters so that these super-particles must be just a little bit heavier, just a little bit further out of reach. By never being specific, it is never wrong.”
Only to then reassure the reader:
“SuSy may end up as another beautiful theory destroyed by an ugly fact, and we should find out in the next years.”
I am left to wonder which fact he thinks will destroy a theory that he just told us is never wrong.

Up to this point I might have blamed the inaccuracies on an editor, but then Hesketh goes on to explain the (ADD model of) large extra dimensions and claims that it solves the hierarchy problem. It doesn’t – the model merely reformulates one hierarchy (the weakness of gravity) as another hierarchy (extra dimensions much larger than the Planck length) and hence doesn’t solve the problem. I am not sure whether he is being intentionally misleading or really doesn’t understand this, but either way, it’s wrong.
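To see why, it helps to write down the standard back-of-the-envelope relation for the ADD scenario (textbook material, not taken from Hesketh’s book): with $n$ extra dimensions of size $R$, the Planck mass we observe is related to the true higher-dimensional gravitational scale $M_*$ roughly by

$$ M_{\mathrm{Pl}}^2 \sim M_*^{\,2+n}\, R^{\,n}\,. $$

Pushing $M_*$ down to the TeV scale then forces $R$ to be enormously larger than the fundamental length $1/M_*$, so the big unexplained number hasn’t gone away – it has only been moved into the size of the extra dimensions.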

Hesketh furthermore states that if there were such large extra dimensions, the LHC might produce microscopic black holes – but he doesn’t so much as mention that not the faintest evidence for this has been found.

When it comes to dark matter, he waves away the possibility that the observations are due to a modification of gravity with the magic words “Bullet Cluster” – a distortion of facts about which I have previously complained. I am afraid he actually might not know any better, since this myth has been so widely spread, but if he doesn’t care to look into the subject he shouldn’t write a book about it. To round things off, Hesketh misspells “Noether” as “Nöther,” though I am willing to believe that this egg was laid by someone else.

In summary, the first two thirds of the book, about the standard model, quantum field theory, and particle detectors, can be recommended. But when it comes to new physics, the author doesn’t know what he’s talking about.

Update April 7th 2017: Most of these bummers have been fixed in the paperback edition.

Sunday, February 19, 2017

Fake news wasn’t hard to predict – But what’s next?

In 2008, I wrote a blogpost which began with a dark vision – a presidential election led astray by fake news.

I’m not much of a prophet, but it wasn’t hard to predict. Journalism, for too long, attempted the impossible: Make people pay for news they don’t want to hear.

It worked, because news providers, by and large, shared an ethical code. Journalists aspired to tell the truth; their passion was unearthing and publicizing facts – especially those that nobody wanted to hear. And as long as the professional community held the power, they controlled access to the press – the device – and kept up the quality.

But the internet made it infinitely easy to produce and distribute news, both correct and incorrect. Fat headlines suddenly became what economists call an “intangible good.” News no longer relies on a physical resource or a process of manufacture; it can be created, copied, and shared by anyone, anywhere, with almost zero investment.

By the early 00s, anybody could set up a webpage and produce headlines. From then on, quality went down. News makes the most profit if it’s cheap and widely shared. Consequently, more and more outlets offer the news people want to read – that’s how the law of supply and demand is supposed to work, after all.

What we have seen so far, however, is only the beginning. Here’s what’s up next:
  • 1. Fake News Gets Organized

    An army of shadow journalists specializes in fake news, pitching it to alternative news outlets. These outlets will mix real and fake news. It becomes increasingly hard to tell one from the other.

  • 2. Fake News Becomes Visual

    “Picture or it didn’t happen” will soon be a thing of the past. Today, it’s still difficult to forge photos and videos. But the software is getting better, cheaper, and easier to obtain, and soon it will take experts to tell real from fake.

  • 3. Fake News Gets Cozy

    Anger isn’t sustainable. In the long run, most people want good news – they want to be reassured everything’s fine. The war in Syria is over. The earthquake risk in California is low. The economy is up. The chocolate ration has been raised again.

  • 4. Corporations Throw in the Towel

    Facebook and Google and Yahoo conclude it’s too costly to assess the truth value of information passed on by their platforms, and decide it’s not their task. They’re right.

  • 5. Fake News Has Real-World Consequences

    We’ll see denial of facts leading to the deaths of thousands of people. I mean a lack of earthquake warning systems because the risk was dismissed as fear-mongering. I mean riots over terrorist attacks that never happened. I mean collapsed buildings and toxic infant formula because who cares about science. We’ll get there.

The problem that fake news poses for democratic societies attracted academic interest already a decade ago. Triggered by the sudden dominance of Google as a search engine, it entered the literature under the name “Googlearchy.”

Democracy relies on informed decision making. If the electorate doesn’t know what’s real, democratic societies can’t identify good ways to carry out the people’s will. You’d think that couldn’t be in anybody’s interest, but it is – if you can make money from misinformation.

Back then, the main worry focused on search engines as primary information providers. Someone with more prophetic skills might have predicted that social networks would come to play the central role for news distribution, but the root of the problem is the same: Algorithms are designed to deliver news which users like. That optimizes profit, but degrades the quality of news.

Economists of the Chicago School would tell you that this can’t be. People’s behavior reveals what they really want, and any regulation of the free market merely makes the fulfillment of their wants less efficient. If people read fake news, that’s what they want – the math proves it!

But no proof is better than its assumptions, and one central assumption for this conclusion is that people can’t have mutually inconsistent desires. We’re supposed to have factored in the long-term consequences of today’s actions, properly future-discounted and risk-assessed. In other words, we’re supposed to know what’s good for us, our children, and our great-grandchildren, and to make rational decisions to work towards that goal.

In reality, however, we often want what’s logically impossible. Problem is, a free market, left unattended, caters predominantly to our short-term wants.

At the risk of appearing inconsistent, economists are right when they speak of revealed preferences as the tangible conclusion of our internal dialogues. It’s just that economists, being economists, like to forget that people have a second way of revealing preferences – they vote.

We use democratic decision making to ensure the long-term consequences of our actions are consistent with the short-term ones, like putting a price on carbon. One of the major flaws of current economic theory is that it treats the two systems, economic and political, as separate, when really they’re two sides of the same coin. But free markets don’t work without a way to punish forgery, lies, and empty promises.

This is especially important for intangible goods – those which can be reproduced with near-zero effort. Intangible goods, like information, need enforced copyright, or else quality becomes economically unsustainable. Hence, it will take regulation, subsidies, or both to prevent us from tumbling down into the valley of alternative facts.

In the last months I’ve seen a lot of finger-pointing at scientists for not communicating enough or not communicating correctly, as if we were the ones to blame for fake news. But this isn’t our fault. It’s the media which has a problem – and it’s a problem scientists solved long ago.

The main reason why fake news is hard to identify, and why it remains profitable to reproduce what other outlets have already covered, is that journalists – in contrast to scientists – are utterly opaque about how they work.

As a blogger, I see this happening constantly. I know that many, if not most, science writers closely follow science blogs. And the professional writers frequently report on topics previously covered by bloggers – without so much as naming their sources, not to mention referencing them.

This isn’t merely personal paranoia. I know this because in several instances science writers have actually told me that my blogpost about this-or-that was so very useful. Some even asked me to share links to the articles they wrote based on it. Let that sink in for a moment – they make money from my expertise, don’t give me credit, and think that this is entirely appropriate behavior. And you wonder why fake news is economically profitable?

For a scientist, that’s mindboggling. Our currency is citations. Proper credit is pretty much all we want. Keep the money, but say my name.

I understand that journalists have to protect some sources, so don’t misunderstand me. I don’t mean they have to spill the beans about their exclusive secrets. What I mean is simply that a supposed news outlet which merely echoes what’s been reported elsewhere should be required to refer to the earlier article.

Of course this would imply that the vast majority of existing news sites would be revealed as copy-cats and lose readers. And of course it isn’t going to happen because nobody’s going to enforce it. If I saw even a remote chance of this happening, I wouldn’t have made the above predictions, would I?

What’s even more perplexing for a scientist, however, is that news outlets, to the extent that they do fact-checks, don’t tell customers that they fact-check, or what they fact-check, or how they fact-check.

Do you know, for example, which science magazines fact-check their articles? Some do, some don’t. I know it for a few because I’ve been border-crossing between scientists and writers for a while. But largely it’s insider knowledge – and I think it should be front-page information. Listen, Editor-in-Chief: If you fact-check, tell us.

It isn’t going to stop fake news, but I think a more open journalistic practice and publicly stated adherence to voluntary guidelines could greatly alleviate the problem. It probably makes you want to puke, but academics are good at a few things, and high community standards are one of them. And that is what journalism needs right now.

I know, this isn’t exactly the cozy, shallow, good news that shares well. But it will be a great pleasure when, in ten years, I can say: I told you so.

Friday, February 17, 2017

Black Hole Information - Still Lost

[Illustration of a black hole. Image: NASA]
According to Google, Stephen Hawking is the most famous physicist alive, and his most famous work is the black hole information paradox. If you know one thing about physics, therefore, that’s what you should know.

Before Hawking, black holes weren’t paradoxical. Yes, if you throw a book into a black hole you can’t read it anymore. That’s because what has crossed a black hole’s event horizon can no longer be reached from the outside. The event horizon is a closed surface inside of which everything, even light, is trapped. So there’s no way information can get out of the black hole; the book’s gone. That’s unfortunate, but nothing physicists sweat over. The information in the book might be out of sight, but nothing paradoxical about that.

Then came Stephen Hawking. In 1974, he showed that black holes emit radiation and this radiation doesn’t carry information. It’s entirely random, except for the distribution of particles as a function of energy, which is a Planck spectrum with temperature inversely proportional to the black hole’s mass. If the black hole emits particles, it loses mass, shrinks, and gets hotter. After enough time and enough emission, the black hole will be entirely gone, with no return of the information you put into it. The black hole has evaporated; the book can no longer be inside. So, where did the information go?
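For the record, the formula behind this statement – Hawking’s result, not my addition – gives the black hole’s temperature as

$$ T_H = \frac{\hbar c^3}{8\pi G M k_B}\,, $$

which for a solar-mass black hole works out to roughly $6\times 10^{-8}$ Kelvin. The smaller the mass $M$, the higher the temperature, which is why the evaporation speeds up as the black hole shrinks.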

You might shrug and say, “Well, it’s gone, so what? Don’t we lose information all the time?” No, we don’t. At least, not in principle. We lose information in practice all the time, yes. If you burn the book, you are no longer able to read what’s inside. However, fundamentally, all the information about what constituted the book is still contained in the smoke and ashes.

This is because the laws of nature, to our best current understanding, can be run both forwards and backwards – every unique initial state corresponds to a unique end state. There are never two initial states that end in the same final state. The story of your burning book looks very different run backwards. If you were able to very, very carefully assemble smoke and ashes in just the right way, you could unburn the book and reassemble it. It’s an exceedingly unlikely process, and you’ll never see it happening in practice. But, in principle, it could happen.
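In quantum theory, this reversibility is the statement that time evolution is unitary: the state at time $t$ follows from the initial state as

$$ |\psi(t)\rangle = U(t)\,|\psi(0)\rangle\,, \qquad U^\dagger U = \mathbf{1}\,, $$

and since $U(t)$ can always be inverted, two different initial states can never evolve into the same final state – no information is lost, ever, in principle.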

Not so with black holes. Whatever formed the black hole doesn't make a difference when you look at what you wind up with. In the end you only have this thermal radiation, which – in honor of its discoverer – is now called ‘Hawking radiation.’ That’s the paradox: Black hole evaporation is a process that cannot be run backwards. It is, as we say, not reversible. And that makes physicists sweat because it demonstrates they don’t understand the laws of nature.

Black hole information loss is paradoxical because it signals an internal inconsistency of our theories. When we combine – as Hawking did in his calculation – general relativity with the quantum field theories of the standard model, the result is no longer compatible with quantum theory. At a fundamental level, every interaction involving particle processes has to be reversible. Because of the non-reversibility of black hole evaporation, Hawking showed that the two theories don’t fit together.

The seemingly obvious origin of this contradiction is that the irreversible evaporation was derived without taking into account the quantum properties of space and time. For that, we would need a theory of quantum gravity, and we still don’t have one. Most physicists therefore believe that quantum gravity would remove the paradox – just how that works they still don’t know.

The difficulty with blaming quantum gravity, however, is that there isn’t anything interesting happening at the horizon – it's in a regime where general relativity should work just fine. That’s because the strength of quantum gravity should depend on the curvature of space-time, but the curvature at a black hole horizon depends inversely on the mass of the black hole. This means the larger the black hole, the smaller the expected quantum gravitational effects at the horizon.
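A rough way to see this (leaving out factors of order one): the horizon sits at the Schwarzschild radius $r_s = 2GM/c^2$, and the space-time curvature there is of order

$$ \mathcal{R} \sim \frac{GM}{c^2 r_s^3} \sim \frac{c^4}{G^2 M^2}\,, $$

so the heavier the black hole, the gentler the curvature at its horizon – and the less reason to expect quantum gravitational corrections there.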

Quantum gravitational effects would become noticeable only when the black hole has shrunk to about the Planck mass, roughly 20 micrograms. When the black hole has reached that size, information could be released thanks to quantum gravity. But, depending on what the black hole formed from, an arbitrarily large amount of information might be stuck in the black hole until then. And when a Planck mass is all that’s left, it’s difficult to get so much information out with so little energy left to encode it.
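The Planck mass, for reference, is the combination

$$ m_{\mathrm{Pl}} = \sqrt{\frac{\hbar c}{G}} \approx 2\times 10^{-8}\ \mathrm{kg}\,, $$

huge by particle physics standards, but for a black hole it’s next to nothing.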

For the last 40 years, some of the brightest minds on the planet have tried to solve this conundrum. It might seem bizarre that such an outlandish problem commands so much attention, but physicists have good reasons for it. The evaporation of black holes is the best-understood case for the interplay of quantum theory and gravity, and therefore might be the key to finding the right theory of quantum gravity. Solving the paradox would be a breakthrough and, without doubt, would result in a conceptually new understanding of nature.

So far, most solution attempts for black hole information loss fall into one of four large categories, each of which has its pros and cons.

  • 1. Information is released early.

    The information starts leaking out long before the black hole has reached Planck mass. This is the presently most popular option. It is still unclear, however, how the information should be encoded in the radiation, and just how the conclusion of Hawking’s calculation is circumvented.

    The benefit of this solution is its compatibility with what we know about black hole thermodynamics. The disadvantage is that, for this to work, some kind of non-locality – a spooky action at a distance – seems inevitable. Worse still, it has recently been claimed that if information is released early, then black holes are surrounded by a highly-energetic barrier: a “firewall.” If a firewall exists, it would imply that the principle of equivalence, which underlies general relativity, is violated. Very unappealing.

  • 2. Information is kept, or it is released late.

    In this case, the information stays in the black hole until quantum gravitational effects become strong, when the black hole has reached the Planck mass. Information is then either released with the remaining energy or just kept forever in a remnant.

    The benefit of this option is that it does not require modifying either general relativity or quantum theory in regimes where we expect them to hold. They break down exactly where they are expected to break down: when space-time curvature becomes very large. The disadvantage is that some have argued it leads to another paradox: the possibility of infinitely producing black hole pairs in a weak background field – i.e., all around us. The theoretical support for this argument is thin, but it’s still widely used.

  • 3. Information is destroyed.

    Supporters of this approach just accept that information is lost when it falls into a black hole. This option was long believed to imply violations of energy conservation and hence cause another inconsistency. In recent years, however, new arguments have surfaced according to which energy might still be conserved despite information loss, and this option has therefore seen a little revival. Still, by my estimate it’s the least popular solution.

    However, much like the first option, just saying that’s what one believes doesn’t make for a solution. And making this work would require a modification of quantum theory. This would have to be a modification that doesn’t lead to conflict with any of our experiments testing quantum mechanics. It’s hard to do.

  • 4. There’s no black hole.

    A black hole is never formed or information never crosses the horizon. This solution attempt pops up every now and then, but has never caught on. The advantage is that it’s obvious how to circumvent the conclusion of Hawking’s calculation. The downside is that this requires large deviations from general relativity in small curvature regimes, and it is therefore difficult to make compatible with precision tests of gravity.

There are a few other proposed solutions that don’t fall into any of these categories, but I will not – cannot! – attempt to review all of them here. In fact, there isn’t any good review on the topic – probably because the mere thought of compiling one is dreadful. The literature is vast. Black hole information loss is without doubt the most-debated paradox ever.

And it’s bound to remain so. The temperature of the black holes which we can observe today is far too small to be measurable. Hence, in the foreseeable future nobody is going to measure what happens to the information which crosses the horizon. Let me therefore make a prediction: ten years from now, the problem will still be unsolved.

Hawking just celebrated his 75th birthday, which is a remarkable achievement by itself. 50 years ago, his doctors expected him to die within a few years, but he has stubbornly hung onto life. The black hole information paradox may prove to be even more stubborn. Unless a revolutionary breakthrough comes, it may outlive us all.

(I wish to apologize for not including references. If I’d start with this, I wouldn’t be done by 2020.)

[This post previously appeared on Starts With A Bang.]

Sunday, February 12, 2017

Away Note

I'm traveling next week and will be offline for some days. Blogging may be insubstantial, if existent, and comments may be stuck in the queue longer than usual. But I'm sure you'll survive without me ;)

And since you haven't seen the girls for a while, here is a recent photo. They'll be starting school this year in the fall and are very excited about it.

Thursday, February 09, 2017

New Data from the Early Universe Does Not Rule Out Holography

[Image: entdeckungen.net]
It’s string theorists’ most celebrated insight: The world is a hologram. Like everything else string theorists have come up with, it’s an untested hypothesis. But now, it’s been put to test with a new analysis that compares a holographic early universe with its non-holographic counterpart.

Tl;dr: Results are inconclusive.

When string theorists say we live in a hologram, they don’t mean we are shadows in Plato’s cave. They mean their math says that all information about what’s inside a box can be encoded on the boundary of that box – albeit in entirely different form.

The holographic principle – if correct – means there are two different ways to describe the same reality. Unlike in Plato’s cave, however, where the shadows lack information about what caused them, with holography both descriptions are equally good.
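The quantitative core of this claim is the holographic entropy bound: the maximal amount of information – measured as entropy – that fits into a region of space grows not with its volume but with the area $A$ of its boundary,

$$ S_{\max} = \frac{k_B\, c^3 A}{4 G \hbar}\,, $$

which is the same expression as the Bekenstein-Hawking entropy of a black hole whose horizon has area $A$.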

Holography would imply that the three dimensions of space which we experience are merely one way to think of the world. If you can describe what happens in our universe by equations that use only two-dimensional surfaces, you might as well say we live in two dimensions – just that these are dimensions we don’t normally experience.

It’s a nice idea but hard to test. That’s because the two-dimensional interpretation of today’s universe isn’t normally very workable. Holography identifies two different theories with each other by a relation called “duality.” The two theories in question here are one for gravity in three dimensions of space, and a quantum field theory without gravity in one dimension less. However, whenever one of the theories is weakly coupled, the other one is strongly coupled – and computations in strongly coupled theories are hard, if not impossible.

The gravitational force in our universe is presently weakly coupled. For this reason General Relativity is the easier side of the duality. However, the situation might have been different in the early universe. Inflation – the rapid phase of expansion briefly after the big bang – is usually assumed to take place in gravity’s weakly coupled regime. But that might not be correct. If instead gravity at that early stage was strongly coupled, then a description in terms of a weakly coupled quantum field theory might be more appropriate.

This idea has been pursued by Kostas Skenderis and collaborators for several years. These researchers have developed a holographic model in which inflation is described by a lower-dimensional non-gravitational theory. In a recent paper, their predictions have been put to the test with new data from the Planck mission, a high-precision measurement of the temperature fluctuations of the cosmic microwave background.


In this new study, the authors compare how well holographic inflation and standard inflation in the concordance model – also known as ΛCDM – fit the data. The concordance model is described by six parameters. Holographic inflation has a closer connection to the underlying theory, and so the power spectrum brings in one additional parameter, which makes a total of seven. After adjusting for the number of parameters, the authors find that the concordance model fits the data better.
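“Adjusting for the number of parameters” means that a model with more free parameters has to fit the data correspondingly better before it counts as preferred. As a rough illustration of the idea – the analysis in the paper is more sophisticated than this, so take it only as the simplest textbook version – one can penalize each extra parameter with the Akaike criterion,

$$ \mathrm{AIC} = \chi^2_{\min} + 2k\,, $$

where $k$ is the number of fitted parameters and the model with the lower value wins.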

However, the biggest discrepancy between the predictions of holographic inflation and the concordance model arises at large scales, or low multipole moments respectively. In this regime, the predictions from holographic inflation cannot really be trusted. Therefore, the authors repeat the analysis with the low multipole moments omitted from the data. Then the two models fit the data equally well. In some cases (depending on the choice of prior for one of the parameters) holographic inflation is indeed a better fit, but the difference is not statistically significant.

To put this result into context, it must be added that the best-understood cases of holography work in space-times with a negative cosmological constant, the anti-de Sitter spaces. Our own universe, however, is not of this type. It has instead a positive cosmological constant, described by de Sitter space. The use of the holographic principle in our universe is hence not strongly supported by string theory, at least not presently.

The model for holographic inflation can therefore best be understood as one that is motivated by, but not derived from, string theory. It is a phenomenological model, developed to quantify predictions and test them against data.

While the differences between the concordance model and holographic inflation which this study finds are insignificant, it is interesting that a prediction based on such an entirely different framework is able to fit the data at all. I should also add that there is a long-standing debate in the community as to whether the low multipole moments are well described by the concordance model, or whether any of the large-scale anomalies are to be taken seriously.

In summary, I find this an interesting result because it’s an entirely different way to think of the early universe, and yet it describes the data. For the same reason, however, it’s also somewhat depressing. Clearly, we don’t presently have a good way to test all the many ideas that theorists have come up with.

Friday, February 03, 2017

Testing Quantum Foundations With Atomic Clocks

[Funky clock at Aachen University.]
Nobel laureate Steven Weinberg has recently drawn attention by disliking quantum mechanics. Besides an article for The New York Review of Books and a public lecture to bemoan how unsatisfactory the current situation is, he has, however, also written a technical paper:
    Lindblad Decoherence in Atomic Clocks
    Steven Weinberg
    Phys. Rev. A 94, 042117 (2016)
    arXiv:1610.02537 [quant-ph]
In this paper, Weinberg studies the use of atomic clocks for precision tests of quantum mechanics – specifically, to search for an unexpected, omnipresent decoherence.

Decoherence is the process that destroys quantum-ness. It happens constantly and everywhere. Each time a quantum state interacts with an environment – air, light, neutrinos, what have you – it becomes a little less quantum.

This type of decoherence explains why, in everyday life, we don’t see quantum-typical behavior, like cats being both dead and alive and similar nonsense. Trouble is, decoherence takes place only if you consider the environment a source of noise whose exact behavior is unknown. If you look at the combined system of the quantum state plus environment, that still doesn’t decohere. So how come, on large scales, our world is distinctly un-quantum?

It seems that, besides this usual decoherence, quantum mechanics must do something else, namely explain the measurement process. Decoherence merely converts a quantum state into a probabilistic (“mixed”) state. But upon measurement, this probabilistic state must suddenly change to reflect that, after observation, the state is in the measured configuration with 100% certainty. This update is also sometimes referred to as the “collapse” of the wave-function.

Whether or not decoherence solves the measurement problem then depends on your favorite interpretation of quantum mechanics. If you don’t think the wave-function, which describes the quantum state, is real but merely encodes information, then decoherence does the trick. If you do, in contrast, think the wave-function is real, then decoherence doesn’t help you understand what happens in a measurement because you still have to update probabilities.

That is so, unless you are a fan of the many-worlds interpretation, which simply declares the problem nonexistent by postulating that all possible measurement outcomes are equally real. It just so happens that we find ourselves in only one of these realities. I’m not a fan of many worlds because defining problems away rarely leads to progress. Weinberg finds all the many worlds “distasteful,” which also rarely leads to progress.

What would really solve the problem, however, is some type of fundamental decoherence – basically an actual collapse prescription. It’s not a particularly popular idea, but at least it is an idea, and it’s one that’s worth testing.
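The name in the title of Weinberg’s paper tells you what mathematical form such a fundamental decoherence is assumed to take: a Lindblad term added to the usual evolution of the density matrix $\rho$. In its standard textbook form (my notation, not copied from the paper),

$$ \frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho] + \sum_n \Big( L_n \rho L_n^\dagger - \tfrac{1}{2}\big\{ L_n^\dagger L_n , \rho \big\} \Big)\,. $$

Here the operators $L_n$ parameterize the extra, non-unitary part of the evolution – exactly the part that experiments can then constrain.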

What has any of that to do with atomic clocks? Well, atomic clocks work thanks to quantum mechanics, and they work extremely precisely. And so, Weinberg’s idea is to use atomic clocks to look for evidence of fundamental decoherence.

An atomic clock trades off the precise measurement of time for the precise measurement of a wavelength, or frequency respectively, which counts oscillations per time. And that is where quantum mechanics comes in handy. A hundred years or so ago, physicists found that the energies of electrons which surround the atomic nucleus can take on only discrete values. This also means the electrons can absorb and emit light only at energies that correspond to the differences between the discrete levels.

Now, as Einstein demonstrated with the photoelectric effect, the energy of light is proportional to its frequency. So, if you find light of a frequency that the atom can absorb, you must have hit one of the differences in energy levels. These differences in energy levels are (at moderate temperatures) properties of the atom and almost insensitive to external disturbances. That’s what makes atomic clocks tick so regularly.
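In formulas: a photon of frequency $\nu$ carries the energy $E = h\nu$, so the atom absorbs light only at frequencies

$$ \nu = \frac{E_2 - E_1}{h}\,, $$

where $E_1$ and $E_2$ are two of the discrete energy levels. For the Cesium transition used to define the second, this frequency is 9,192,631,770 Hz.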

So, it comes down to measuring atomic transition frequencies. Such a measurement works by tuning a laser until a cloud of atoms (usually Cesium or Rubidium) absorbs most of the light. The absorption indicates you have hit the transition frequency.

In modern atomic clocks, one employs a two-pulse scheme, known as the Ramsey method. A cloud of atoms is exposed to a first pulse, then left to drift for a second or so, and then comes a second pulse. After that, you measure how many atoms were affected by the pulses, and use a feedback loop to tune the frequency of the light to maximize the number of atoms. (Further reading: “Real Clock Tutorial” by Chad Orzel.)
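The reason this two-pulse scheme is so sensitive: for ideal, short pulses and a detuning $\Delta\nu$ between the laser and the atomic transition, the fraction of atoms that end up in the excited state after a free-evolution time $T$ oscillates as (an idealized textbook expression, ignoring pulse duration and noise)

$$ P \approx \frac{1}{2}\Big[\,1 + \cos\!\big(2\pi\,\Delta\nu\,T\big)\Big]\,, $$

so with $T$ of order a second, even a detuning of a fraction of a Hertz noticeably shifts the signal – and any decoherence during $T$ washes out the contrast of these fringes.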

If, however, some unexpected decoherence happens between the two pulses, then the frequency tuning doesn’t work as well as it does in normal quantum mechanics. And this, so Weinberg’s argument goes, would already have been noticed if such decoherence were relevant for atoms on the timescale of seconds. This way, he obtains constraints on fundamental decoherence. And, as a bonus, he proposes a new way of testing the foundations of quantum mechanics by use of the Ramsey method.

It’s a neat idea. It strikes me as the kind of paper that comes about as a spin-off of thinking about a problem. I find this an interesting work because my biggest frustration with quantum foundations is all the talk about what is or isn’t distasteful about this or that interpretation. For me, the real question is whether quantum mechanics – in whatever interpretation – is fundamental, or whether there is an underlying theory. And if so, how to test that.

As a phenomenologist, you won’t be surprised to hear that I think research on the foundations of quantum mechanics would benefit from more phenomenology. Or, in summary: A little less talk, a little more action please.