Thursday, January 28, 2016

Does the arXiv censor submissions?

The arXiv is the physicists' marketplace of ideas. In high energy physics and adjacent fields, almost all papers are submitted to the arXiv prior to journal submission. Developed by Paul Ginsparg in the early 1990s, this open-access preprint repository has served the physics community for more than 20 years and has meanwhile expanded to neighboring fields like mathematics, economics, and biology. It fulfills an extremely important function by helping us exchange ideas quickly and efficiently.

Over the years, the originally open signup became more restricted. If you sign up for the arXiv now, you need to be "endorsed" by people who are already signed up. It also became necessary to screen submissions to keep the quality level up. In hindsight, this isn't surprising: more people means more trouble. And sometimes, of course, things go wrong.

I have heard various stories about arXiv moderation gone wrong, mostly from students, and mostly affecting those who work in small research areas or those whose name is Garrett Lisi.

A few days ago, a story appeared online that quickly spread. Nicolas Gisin, an established professor of physics who works on quantum cryptography (among other things), relates the story of two of his students who ventured into territory unfamiliar to him: black hole physics. They wrote a paper that appeared to him likely wrong but reasonable. It got rejected by the arXiv. The paper later got published by Physics Letters A (PLA), a respected journal that, however, does not focus on general relativity. More worrisome still, the students' next paper also got rejected by the arXiv, making it appear as if they were now blacklisted.

Now the paper that caused the offense is, haha, not on the arXiv, but I tracked it down. So let me just say that I think it's indeed wrong and it shouldn't have gotten published in a journal. They are basically trying to include the backreaction of the outgoing Hawking radiation on the black hole. It's a thorny problem (the very problem this blog was named after) and the treatment in the paper doesn't make sense.

Hawking radiation is not produced at the black hole horizon. No, it is not. Tracking the flux back from infinity to the horizon is therefore not correct. Besides this, the equation for the mass loss that they use is a late-time approximation in a collapse situation. One can't use this approximation for a metric without collapse, and it certainly shouldn't be used down to the Planck mass. If you have a collapse scenario, then to get the backreaction right you would have to calculate the emission rate prior to horizon formation, time-dependently, and integrate over this.
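
To give you an idea of what I mean, here is the approximation in question in schematic form (the numerical prefactor depends on the particle content and on greybody factors, so take the constant C as a placeholder):

    \[ \frac{dM}{dt} \;\approx\; -\,C\,\frac{\hbar c^4}{G^2 M^2}, \qquad C \sim 10^{-5}, \]

which integrates to an evaporation time that scales with the cube of the initial mass. This expression is derived as a late-time limit for a black hole formed in collapse; extrapolating it down to masses near the Planck mass, where quantum gravitational corrections should become relevant, is exactly the kind of step one shouldn't trust.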

Ok, so the paper is wrong. But should it have been rejected by the arXiv? I don't think so. The arXiv moderation can't and shouldn't replace peer review; it should just be a basic quality check, and the paper looks like a reasonable research project.

I asked a colleague who I know works as an arXiv moderator for comment. (S)he wants to stay anonymous but offers the following explanation:


I had not heard of the complaints/blog article, thanks for passing that information on...
The version of the article I saw was extremely naive and was very confused regarding coordinates and horizons in GR... I thought it was not "referee-able quality" — at least not in any competently run GR journal... (The hep-th moderator independently raised concerns...)
While it is now published at Physics Letters A, it is perhaps worth noting that the editorial board of Physics Letters A does *not* include anyone specializing in GR.

(S)he is correct, of course. We haven't seen the paper that was originally submitted. It was very likely in considerably worse shape than the published version. Indeed, Gisin writes in his post that the paper was significantly revised during peer review. Taking this into account, the decision seems understandable to me.

The main problem I have with this episode is not that a paper got rejected which maybe shouldn't have been rejected -- because shit happens. Humans make mistakes, and let us be clear that the arXiv, underfunded as it is, relies on volunteers for the moderation. No, the main problem I have is the lack of transparency.

The arXiv is an essential resource for the physics community. We all put trust in a group of mostly anonymous moderators who do a rather thankless and yet vital job. I don't think the origin of the problem is with these people. I am sure they do the best they can. No, I think the origin of the problem is the lack of financial resources, which limits the ability to employ administrative staff to oversee the operations. You get what you pay for.

I hope that this episode will be a wake-up call to the community to put their financial support behind the arXiv, and to the arXiv to use this support to put into place a more transparent and better organized moderation procedure.

Note added: It was mentioned to me that the problem with the paper might be more elementary in that they're using wrong coordinates to begin with - it hadn't even occurred to me to check this. To tell you the truth, I am not really interested in figuring out exactly why the paper is wrong; it's beside the point. I just hope that whoever reviewed the paper for PLA now goes and sits in the corner for an hour with a paper bag over their head.

Wednesday, January 27, 2016

Hello from Maui

Greetings from the western end of my trip, which brought me out to Maui to visit Garrett at the Pacific Science Institute, PSI. Since it launched roughly a year ago, Garrett and his girlfriend/partner Crystal have hosted about 60 traveling scientists, "from all areas except chemistry" I was told.

I got bitten by mosquitoes and picked at by a set of adorable chickens (named after the six quarks), but managed to convince everybody that I really didn't feel like swimming, or diving, or jumping off things from great heights. I know I'm dull. I did watch some sea turtles though, and I also got a new T-shirt with the PSI logo, which you can admire in the photo to the right (taken in front of a painting by Crystal).

I'm not an island-person, don't like mountains, and I can't stand humidity, so for me it's somewhat of a mystery what people think is so great about Hawaii. But leaving aside my preference for German forests, it's as pleasant a place as can be.

You won't be surprised to hear that Garrett is still working on his E8 unification and says things are progressing well, if slowly. Aloha.






Monday, January 25, 2016

Is space-time a prism?

Tl;dr: A new paper demonstrates that quantum gravity can split light into spectral colors. Gravitational rainbows are almost certainly undetectable on cosmological scales, but the idea might become useful for Earth-based experiments.

Einstein’s theory of general relativity still stands apart from the other known forces by its refusal to be quantized. Progress in finding a theory of quantum gravity has stalled because of the complete lack of data – a challenging situation that physicists have never encountered before.

The main problem in measuring quantum gravitational effects is the weakness of gravity. Estimates show that testing its quantum effects would require detectors the size of planet Jupiter or particle accelerators the size of the Milky Way. Thus, experiments to guide theory development are unfeasible. Or so we’ve been told.

But gravity is not a weak force – its strength depends on the masses between which it acts. (Indeed, that is the very reason gravity is so difficult to quantize.) Saying that gravity is weak makes sense only when referring to a specific mass, like that of the proton for example. We can then compare the strength of gravity to the strength of the other interactions, demonstrating its relative weakness – a puzzling fact known as the “hierarchy problem.” But that the strength of gravity depends on the particles’ masses also means that quantum gravitational effects are not generally weak: their magnitude too depends on the gravitating masses.
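
To put a number on "weak relative to a specific mass": for two protons, the dimensionless gravitational coupling is

    \[ \alpha_G \;=\; \frac{G\,m_p^2}{\hbar c} \;\approx\; 6\times 10^{-39}, \]

to be compared with the electromagnetic fine structure constant of about 1/137. Replace the protons by objects of roughly the Planck mass, about \(2\times 10^{-8}\) kg, and the gravitational coupling becomes of order one.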

To be more precise, one should thus say that quantum gravity is hard to detect because an object massive enough to have large gravitational effects has negligible quantum properties and doesn’t cause quantum behavior of space-time. General relativity, however, acts in two ways: matter affects space-time, and space-time affects matter. And so the reverse is also true: If the dynamical background of general relativity for some reason has an intrinsic quantum uncertainty, then this will affect the matter moving in this space-time – in a potentially observable way.

Rainbow gravity, proposed in 2003 by Magueijo and Smolin, is based on this idea that the quantum properties of space-time could noticeably affect particles propagating in it. In rainbow gravity, space-time itself depends on the particle’s energy. In particular, light of different energies travels at different speeds, splitting into different colors, hence the name. It’s a nice idea, but unfortunately it’s an internally inconsistent theory and so far nobody has managed to make much sense of it.
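
For the record, the way this is usually set up (schematically, in the notation of the original Magueijo-Smolin proposal) is by modifying the dispersion relation of a particle with energy E to

    \[ E^2\, f^2(E/E_p) \;-\; p^2 c^2\, g^2(E/E_p) \;=\; m^2 c^4, \]

where the functions f and g go to 1 at energies far below the Planck energy \(E_p\). One can read this as the particle propagating in a metric whose components are rescaled by \(1/f^2\) and \(1/g^2\) – a different metric for each energy, hence the rainbow.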

First, let us note that already in general relativity the background of course depends on the energy of the particle, and this should certainly carry over into quantum gravity. More precisely though, space-time depends not on the energy but on the energy density of matter in it. So this cannot give rise to rainbow gravity. Even worse, because of this, general relativity is in outright conflict with rainbow gravity.

Second, an energy-dependent metric can be given meaning in the framework of asymptotically safe gravity, but this is not what rainbow gravity is about either. Asymptotically safe gravity is an approach to quantum gravity in which space-time depends on the energy by which it is probed. The energy in rainbow gravity is however not that by which space-time is probed (which is observer-independent), but supposedly the energy of a single particle (which is observer-dependent).

Third, the whole idea crumbles to dust once you start wondering how the particles in rainbow gravity are supposed to interact. You need space-time to define “where” and “when”. If each particle has its own notion of where and when, the requirement that an interaction be local rather than “spooky” action at a distance can no longer be fulfilled.

In a paper which recently appeared in PLB (arXiv version here), three researchers from the University of Warsaw have made a new attempt to give meaning to rainbow gravity. While it doesn’t really solve all problems, it makes considerably more sense than the previous attempts.

In their paper, the authors look at small (scalar) perturbations over a cosmological background, that is, modes with different energies. They assume that there is some theory of quantum gravity which dictates what the background does, but do not specify this theory. They then ask what happens to the perturbations which travel in the background and derive equations for each mode of the perturbation. Finally, they demonstrate that these equations can be reformulated so that, effectively, each perturbation travels in a space-time which depends on the perturbation’s own energy – it is a variant of rainbow gravity.

The unknown theory of quantum gravity enters the equations only through an average over the quantum states of the background’s dynamical variables. That is, if the background is classical and in one specific quantum state, gravity doesn’t cause any rainbows, which is the usual state of affairs in general relativity. It is the quantum uncertainty of the space-time background that gives rise to rainbows.

This type of effective metric makes somewhat more sense to me than the previously considered scenarios. In this new approach, it is not the perturbation itself that causes the quantum effect (which would be highly non-local and extremely suspicious). Instead the particle merely acts as a probe for the background (a quite common approximation that neglects backreaction).

Unfortunately, one must expect the quantum uncertainty of space-time to be extremely tiny and undetectable. Quantum gravitational effects were strong only in the very early universe, and they have long since decohered. Of course we don’t really know this with certainty, so looking for such effects is generally a good idea. But I don’t think it’s likely we’d find something here.

The situation looks somewhat better though for a case not discussed in the paper, which is a quantum uncertainty of space-time caused by massive particles with a large position uncertainty. I discussed this possibility in this earlier post, and it might be that the effect considered in the new paper can serve as a way to probe it. This would, however, require knowing what happens not to perturbations of the background but to other particles traveling in this background, which calls for a different approach than the one used in this paper.

I am not really satisfied with this version of rainbow gravity because I still don’t understand how particles would know where to interact, or which effective background to travel in if several of them are superposed, which seems somewhat of a shortcoming for a quantum theory. But this version isn’t quite as nonsensical as the previous one, so let me say I am cautiously hopeful that this idea might one day become useful.

In summary, the new paper demonstrates that gravitational rainbows might appear in quantum gravity under quite general circumstances. It might be an interesting contribution that, with further work, could become useful in the search for experimental evidence of quantum gravity.

Note added: The paper deals with an FRW background and thus trivially violates Lorentz invariance.

Thursday, January 21, 2016

Messengers from the Dark Age

[Image: Astrophysicists dream of putting radio telescopes on the far side of the moon. Image credits: 21stcentech.com]
An upcoming generation of radio telescopes will soon let us look back into the dark age of the universe. The new observations can test dark matter models, inflation, and maybe even string theory.

The universe might have started with a bang, but once the echoes faded it took quite a while until the symphony began. Between the creation of the cosmic microwave background (CMB) and the formation of the first stars, 100 million years passed in darkness. This “dark age” has so far been entirely hidden from observation, but this situation is soon to change.

The dark age may hold the answers to many pressing questions. During this period, most of the universe’s mass was in the form of light atoms – primarily hydrogen – and dark matter. The atoms slowly clumped under the influence of gravitational forces, until they finally ignited the first stars. Before the first stars, astrophysical processes were few, and so the distribution of hydrogen during the dark age carries very clean information about structure formation. Details about both the behavior of dark matter and the size of structures are encoded in these hydrogen clouds. But how can we see into the darkness?

Luckily the dark age was not entirely dark, just very, very dim. Back then, the hydrogen atoms that filled the universe frequently bumped into each other, which can flip the electron’s spin. If a collision flips the spin, the electron’s energy changes by a tiny amount because the energy depends on whether the electron’s spin is aligned with the spin of the nucleus or whether it points in the opposite direction. This energy difference is known as “hyperfine splitting.” Flipping the hydrogen electron’s spin therefore leads to the emission of a very low energy photon with a wavelength of 21cm. If we can trace the emissions of these 21cm photons, we can trace the distribution of hydrogen.
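
For the record, the numbers behind that wavelength:

    \[ \nu = \frac{c}{\lambda} \approx \frac{3\times 10^{8}\,\mathrm{m/s}}{0.21\,\mathrm{m}} \approx 1.4\,\mathrm{GHz}, \qquad E = h\nu \approx 6\,\mu\mathrm{eV}, \]

a photon energy tiny compared to the 13.6 eV it takes to ionize hydrogen. The transition is also strongly suppressed, with a lifetime of order ten million years, which is part of why the signal is so dim.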


But 21 cm is the wavelength of the photons at the time of emission, which was more than 13 billion years ago. Since then the universe has expanded significantly and stretched the photons’ wavelengths with it. How much a wavelength has been stretched depends on whether the photon was emitted early or late during the dark age. The early photons have meanwhile been stretched by a factor of about 1000, resulting in wavelengths of a few hundred meters. Photons emitted towards the end of the dark age have not been stretched quite as much – today they have wavelengths of a few meters.
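
If you want to check the numbers yourself: a photon emitted at redshift z arrives today with wavelength

    \[ \lambda_{\rm obs} \;=\; (1+z)\times 21\,\mathrm{cm}, \]

so emission from shortly after the CMB was released (z of order 1000) arrives at roughly 200 m, while emission from near the end of the dark age (z of order 20-30) arrives at 4-7 m, corresponding to radio frequencies of some tens of MHz.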

The most exciting aspect of 21cm astronomy is that it gives us not only a snapshot at one particular moment – like the CMB – but allows us to map different times during the dark age. By measuring the redshifted photons at different wavelengths we can scan through the whole period. This would give us many new insights into the history of our universe.

To begin with, it is not well understood how the dark age ends and the first stars form. The dark age fades away in a phase of reionization in which the hydrogen is stripped of its electrons again. This reionization is believed to be caused by the radiation of the first stars, but exactly what happens we don’t know. Since ionized hydrogen no longer emits the hyperfine line, 21cm astronomy could tell us how the ionized regions grow, teaching us much about the early stellar objects and the behavior of the intergalactic medium.

21 cm astronomy can also help solve the riddle of dark matter. If dark matter self-annihilates, this affects the distribution of neutral hydrogen, which can be used to constrain or rule out dark matter models.

Inflation models too can be probed by this method: The distribution of structures that 21cm astronomy can map carries an imprint of the quantum fluctuations that caused them. These fluctuations in turn depend on the type of inflation fields and the fields’ potential. Thus, the correlations in the structures which were present already during the dark age let us narrow down what type of inflation has taken place.

Maybe most excitingly, the dark ages might give us a peek at cosmic strings, one-dimensional objects with a high density and high gravitational pull. In many models of string phenomenology, cosmic strings can be produced at the end of inflation, before the dark age begins. By distorting the hydrogen clouds, the cosmic strings would leave a characteristic signal in the 21cm emission spectrum.

[Image: CSL-1, a candidate signal for a cosmic string, later identified as two galaxies. Read more about cosmic strings here.]
But measuring photons of this wavelength is not easy. The Milky Way too has sources that emit in this regime, which gives rise to an unavoidable galactic foreground. In addition, the Earth’s atmosphere distorts the signal, and some radio broadcasts can interfere with the measurement. Nevertheless, astronomers have risen to the challenge, and the first telescopes hunting for the 21cm signal are now in operation.

The Low-Frequency Array (LOFAR) went online in late 2012. Its main telescope is located in the Netherlands, but it combines data from 24 other telescopes in Europe. It reaches wavelengths up to 30m. The Murchison Widefield Array (MWA) in Australia, which is sensitive to wavelengths of a few meters, started taking data in 2013. And in 2025, the Square Kilometer Array (SKA) is scheduled to be completed. This joint project between Australia and South Africa will be the largest radio telescope yet.

Still, the astronomers’ dream would be to get rid of the distortion caused by Earth’s atmosphere. Their most ambitious plan is to put an array of telescopes on the far side of the moon. But this idea is, unfortunately, still far-fetched – not to mention underfunded.

Only a few decades ago, cosmology was a discipline so starved of data that it was closer to philosophy than to science. Today it is a research area based on high precision measurements. The progress in technology and in our understanding of the universe’s history has been nothing short of stunning, but we have only just begun. The dark age is next.


[This post previously appeared on Starts With a Bang.]

Saturday, January 16, 2016

Away Note

I am traveling the next three weeks and things will go very slowly on this blog.

In case you missed it, you might enjoy two pieces I recently wrote for NOVA: Are Singularities Real? and Are Space and Time discrete or continuous? There should be a third one appearing later this month (which will also be the last, because it seems they're scrapping this column). And then I wrote an article for Quanta Magazine, String Theory Meets Loop Quantum Gravity, to which you find some background material here and here. Finally you might find this article in The Independent amusing: Stephen Hawking publishes paper on black holes that could get him 'a Nobel prize after all', in which I'm quoted as the voice of reason.

Wednesday, January 13, 2016

Book review: “From the Great Wall to the Great Collider” by Nadis and Yau

From the Great Wall to the Great Collider: China and the Quest to Uncover the Inner Workings of the Universe
By Steve Nadis and Shing-Tung Yau
International Press of Boston (October 23, 2015)

Did you know that particle physicists like the Chinese government’s interest in building the next larger particle collider? If not, then this neat little book about the current plans for the Great Collider, aka “Nimatron,” is just for you.

Nadis and Yau begin their book by laying out the need for a larger collider, followed by a brief history of accelerator physics that emphasizes the contributions of Chinese researchers. Then come two chapters about the hunt for the Higgs boson, the LHC’s success, and a brief survey of beyond-the-standard-model physics that focuses on supersymmetry and extra dimensions. The reader then learns about other large-scale physics experiments that China has run or is running, and about the currently discussed options for the next larger particle accelerator. Nadis and Yau don’t waste time discussing details of all accelerators that are presently considered, but get quickly to the point of laying out the benefits of a circular 50 or even 100 TeV collider in China.

And the benefits are manifold. The favored location for the gigantic project is Qinhuangdao, which is “an attractive destination that might appeal to foreign scientists” because, among other things, “its many beaches [are] ranked among the country’s finest,” “the countryside is home to some of China’s leading vineyards” and even the air quality is “quite good” at least “compared to Beijing.” Book me in.

The authors make a good case that both the world and China only have to gain from the giant collider project. China because “one result would likely be an enhancement of national prestige, with the country becoming a leader in the field of high-energy physics and perhaps eventually becoming the world center for such research. Improved international relations may be the most important consequence of all.” And the rest of the world benefits because, besides preventing thousands of particle physicists from boredom, “civil engineering costs are low in the country – much cheaper than those in many Western countries.”

The book is skillfully written, with scientific explanations that are detailed yet not overly technical, and much space is given to researchers in the field. Nadis and Yau quote whoever might help get their message across: David Gross, Lisa Randall, Frank Wilczek, Don Lincoln, Don Hopper, Joseph Lykken, Nima Arkani-Hamed, Nathan Seiberg, Martinus Veltman, Steven Weinberg, Gordon Kane, John Ellis – everybody gets a say.

My favorite quote is maybe the one by Henry Tye, who argues that the project is a good investment because “the worldwide impact of a collider is much bigger than if the money were put into some other area of science,” since “even if China were to spend more than the United States in some field of science and engineering other than high-energy physics, US professors would still do their research in the US.” This quote sums up the authors’ investigation of whether such a major financial commitment might have a larger payoff were it invested into any other research area.

Don’t get me wrong there, if the Chinese want to build a collider, I think that’s totally great and an awesome contribution to knowledge discovery and the good of humanity, the forgiveness of sins, the resurrection of the body, and the life everlasting, amen. But there’s a real discussion to be had here about whether building the next bigger ring-thing is where the money should flow, or whether putting a radio telescope on the moon or a gravitational wave interferometer in space wouldn’t bring more bang for the Yuan. Unfortunately, you’re not going to find that discussion in Nadis and Yau’s book.

Aside: The print has smear-stripes. Yes, that puts me in a bad mood.

In summary, this book will come in very handy next time you have to convince a Chinese government official to spend a lot of money on bringing protons up to speed.

[Disclaimer: Free review copy.]

Sunday, January 10, 2016

Free will is dead, let’s bury it.

I wish people would stop insisting they have free will. It’s terribly annoying. Insisting that free will exists is bad science, like insisting that horoscopes tell you something about the future – it’s not compatible with our knowledge about nature.

According to our best present understanding of the fundamental laws of nature, everything that happens in our universe is due to only four different forces: gravity, electromagnetism, and the strong and weak nuclear force. These forces have been extremely well studied, and they don’t leave any room for free will.

There are only two types of fundamental laws that appear in contemporary theories. One type is deterministic, which means that the past entirely predicts the future. There is no free will in such a fundamental law because there is no freedom. The other type of law we know appears in quantum mechanics and has an indeterministic component which is random. This randomness cannot be influenced by anything, and in particular it cannot be influenced by you, whatever you think “you” are. There is no free will in such a fundamental law because there is no “will” – there is just some randomness sprinkled over the determinism.
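
If you want concrete examples for the two types: Newton’s law of motion,

    \[ m\,\ddot{x}(t) \;=\; F\big(x(t),t\big), \]

is of the first type – given position and velocity at one moment, the entire future is fixed. Quantum mechanics is of the second type: the Schrödinger equation evolves the wave function deterministically, but measurement outcomes are distributed according to the Born rule, \( P(i) = |\langle i|\psi\rangle|^2 \), and that residual randomness is not influenced by anything.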

In neither case do you have free will in any meaningful way.

These are the only two options, and all other elaborations on the matter are just verbose distractions. It doesn’t matter if you start talking about chaos (which is deterministic), top-down causation (which doesn’t exist), or insist that we don’t know how consciousness really works (true but irrelevant). It doesn’t change a thing about this very basic observation: there isn’t any known law of nature that lets you meaningfully speak of “free will”.

If you don’t want to believe that, I challenge you to write down any equation for any system that allows for something one could reasonably call free will. You will almost certainly fail. The only thing you can really do to hold on to free will is to wave your hands, yell “magic”, and insist that there are systems which are exempt from the laws of nature. And that these systems somehow have something to do with human brains.

The only known example of a law that is neither deterministic nor random comes from me. But it’s a baroque construct meant as a proof of principle, not a realistic model that I would know how to combine with the four fundamental interactions. As an aside: The paper was rejected by several journals. Not because anyone found anything wrong with it. No, the philosophy journals complained that it was too much physics, and the physics journals complained that it was too much philosophy. And you wonder why there isn’t much interaction between the two fields.

After plain denial, the somewhat more enlightened way to insist on free will is to redefine what it means. You might settle for example on speaking of free will as long as your actions cannot be predicted by anybody, possibly not even by yourself. Clearly, it is presently impossible to make such a prediction. It remains to be seen whether it will remain impossible, but right now it’s a reasonable hope. If that’s what you want to call free will, go ahead, but better not ask yourself what determined your actions.

A popular justification for this type of free will is to insist that on the comparatively large scales relevant for the molecules responsible for chemical interactions in your brain, there are smaller components which may retain some influence. If you don’t keep track of these smaller components, the behavior of the larger components might not be predictable. You can then say “free will is emergent” because of “higher level indeterminism”. It’s like saying that if I give you a robot and don’t tell you what’s in the robot, then you can’t predict what the robot will do, and consequently it must have free will. I haven’t managed to bring up sufficient amounts of intellectual dishonesty to buy this argument.

But really you don’t have to bother with the details of these arguments, you just have to keep in mind that “indeterminism” doesn’t mean “free will”. Indeterminism just means there’s some element of randomness, either because that’s fundamental or because you have willfully ignored information on short distances. But there is still either no “freedom” or no “will”. Just try it. Try to write down one equation that does it. Just try it.

I have written about this a few times before, and according to the statistics these are some of the most-read pieces on my blog. Following these posts, I have also received a lot of emails from readers who seem seriously troubled by the claim that our best present knowledge about the laws of nature doesn’t allow for the existence of free will. To ease your existential worries, let me therefore spell out clearly what this means and doesn’t mean.

It doesn’t mean that you are not making decisions or are not making choices. Free will or not, you have to do the thinking to arrive at a conclusion, the answer to which you previously didn’t know. Absence of free will doesn’t mean either that you are somehow forced to do something you didn’t want to do. There isn’t anything external imposing on you. You are whatever makes the decisions. Besides this, if you don’t have free will you’ve never had it, and if this hasn’t bothered you before, why start worrying now?

This conclusion that free will doesn’t exist is so obvious that I can’t help but wonder why it isn’t widely accepted. The reason, I am afraid, is not scientific but political. Denying free will is considered politically incorrect because of a widespread myth that free will skepticism erodes the foundations of human civilization.

For example, a 2014 article in Scientific American addressed the question “What Happens To A Society That Does Not Believe in Free Will?” The piece is written by Azim F. Shariff, a professor of psychology, and Kathleen D. Vohs, a Professor of Excellence in Marketing (whatever that might mean).

In their essay, the authors argue that free will skepticism is dangerous: “[W]e see signs that a lack of belief in free will may end up tearing social organization apart,” they write. “[S]kepticism about free will erodes ethical behavior,” and “diminished belief in free will also seems to release urges to harm others.” And if that wasn’t scary enough already, they conclude that only the “belief in free will restrains people from engaging in the kind of wrongdoing that could unravel an ordered society.”

To begin with, I find it highly problematic to suggest that the answers to some scientific questions should be taboo because they might be upsetting. They don’t explicitly say this, but the message the article sends is pretty clear: If you so much as suggest that free will doesn’t exist, you are encouraging people to harm others. So please read on before you grab the axe.

The conclusion that the authors draw is highly flawed. These psychology studies always work the same way. The study participants are engaged in some activity in which they receive information, either verbally or in writing, that free will doesn’t exist or is at least limited. After this, their likelihood of engaging in “wrongdoing” is tested and compared to a control group. But the information the participants receive is highly misleading. It does not prime them to think they don’t have free will; it instead primes them to think that they are not responsible for their actions. Which is an entirely different thing.

Even if you don’t have free will, you are of course responsible for your actions, because “you” – that mass of neurons – are making, possibly bad, decisions. If the outcome of your thinking is socially undesirable because it puts other people at risk, those other people will try to prevent you from doing more wrong. They will either try to fix you or lock you up. In other words, you will be held responsible. None of this has anything to do with free will. It’s merely a matter of finding a solution to a problem.

The only thing I conclude from these studies is that neither the scientists who conducted the research nor the study participants spent much time thinking about what the absence of free will really means. Yes, I’ve spent far too much time thinking about this.

The reason I keep harping on the free will issue is not that I want to collapse civilization, but that I am afraid the politically correct belief in free will hinders progress on the foundations of physics. The free will of the experimentalist is a relevant ingredient in the interpretation of quantum mechanics. Without free will, Bell’s theorem doesn’t hold, and all we have learned from it goes out the window.

This option of giving up free will in quantum mechanics goes under the name “superdeterminism” and is exceedingly unpopular. There seem to be but three people on the planet who work on this: ‘t Hooft, me, and a third person of whom I only learned from George Musser’s recent book (and whose name I’ve since forgotten). Chances are the three of us wouldn’t even agree on what we mean. It is highly probable we are missing something really important here, something that could very well be the basis of future technologies.

Who cares, you might think; buying into the collapse of the wave-function seems a small price to pay compared to the collapse of civilization. On that matter though, I side with Socrates: “The unexamined life is not worth living.”

Thursday, January 07, 2016

More information emerges about new proposal to solve black hole information loss problem

Soft hair. Redshifted.

Last August, Stephen Hawking announced that he had been working with Malcolm Perry and Andrew Strominger on the black hole information loss problem, and that they were closing in on a solution. But little was explained other than that this solution rests on a symmetry group by the name of supertranslations.

Yesterday, then, Hawking, Perry, and Strominger posted a new paper on the arXiv that fills in a little more detail:
    Soft Hair on Black Holes
    Stephen W. Hawking, Malcolm J. Perry, Andrew Strominger
    arXiv:1601.00921
I haven’t had much time to think about this, but I didn’t want to leave you hanging, so here is a brief summary.

First of all, the paper seems only a first step in a longer argument. Several relevant questions are not addressed and I assume further work will follow. As the authors write: “Details will appear elsewhere.”

The present paper does not study information retrieval in general. It instead focuses on a particular type of information, the one contained in electrically charged particles. The benefit in doing this is that the quantum theory of electric fields is well understood.

Importantly, they are looking at black holes in asymptotically flat (Minkowski) space, not in asymptotically Anti-de Sitter (AdS) space. This is relevant because string theorists believe that the black hole information loss problem doesn’t exist in asymptotically AdS space. They don’t know however how to extend this argument to asymptotically flat space or space with a positive cosmological constant. To the best of present knowledge we don’t live in AdS space, so understanding the case with a positive cosmological constant is necessary to describe what happens in the universe we actually inhabit.

In the usual treatment, a black hole counts only the net electric charge of particles as they fall in. The total charge is one of the three classical black hole “hairs,” next to mass and angular momentum. But all other details about the charges (e.g., in which chunks they came in) are lost: there is no way to store anything in or on an object that has no features, that has no “hairs”.

In the new paper the authors argue that the entire information about the infalling charges is stored on the horizon in the form of 'soft photons', that is, photons of zero energy. These photons are the “hair” which was previously believed to be absent.

Since these photons can carry information but have zero energy, the authors conclude that the vacuum is degenerate. 'Degenerate' means that several distinct quantum states share the same energy. There are thus different vacuum states which can surround the black hole, and so the vacuum can hold and release information.

It is normally assumed that the vacuum state is unique. If it is not, this allows one to have information in the outgoing radiation (which is the ingoing vacuum). A vacuum degeneracy is thus a loophole in the argument originally made by Hawking according to which information must get lost.

What the ‘soft photons’ are isn't further explained in the paper; they are simply identified with the action of certain operators and are supposedly the Goldstone bosons of a spontaneously broken symmetry. Or rather of an infinite number of symmetries that, basically, belong to the conserved charges of something akin to multipole moments. It sounds plausible, but the interpretation eludes me. I haven’t yet read the relevant references.

I think the argument goes basically like this: We can expand the electric field in terms of all these (infinitely many) higher moments and show that each of them is associated with a conserved charge. Since the charge is conserved, the black hole can’t destroy it. Consequently, it must be maintained somehow. In the presence of a horizon, future infinity is not a Cauchy surface, so we add the horizon as a boundary. And on this additional boundary we put the information that we know can’t get lost, which is what the soft photons are good for.

The new paper adds to Hawking’s previous short note by providing an argument for why the amount of information that can be stored this way by the black hole is not infinite, but instead bounded by the Bekenstein-Hawking entropy (i.e., proportional to the surface area). This is an important step to assure this idea is compatible with everything else we know about black holes. Their argument however is operational and not conceptual. It is based on saying, not that the excess degrees of freedom don't exist, but that they cannot be used by infalling matter to store information. Note that, if this argument is correct, the Bekenstein-Hawking entropy does not count the microstates of the black hole, it instead sets an upper limit on the possible number of microstates.
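
For reference, the bound in question is the Bekenstein-Hawking entropy

    \[ S_{\rm BH} \;=\; \frac{k_B\, c^3 A}{4\, G \hbar} \;=\; k_B\,\frac{A}{4\,\ell_p^2}, \]

with A the horizon area and \(\ell_p\) the Planck length – about \(10^{77}\,k_B\) for a black hole of one solar mass. The claim is thus that the soft hair can encode at most of the order of one bit per Planck area of horizon, not an infinite amount.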

The authors don’t explain just how the information becomes physically encoded in the outgoing radiation, aside from writing down an operator. Neither, for that matter, do they demonstrate that by this method all of the information of the initial state can actually be stored and released. Focusing on photons, of course, they can't do this anyway. But they don’t have an argument for how it can be extended to all degrees of freedom. So, needless to say, I have to remain skeptical that they can live up to the promise.

In particular, I still don’t see that the conserved charges they are referring to actually encode all the information that’s in the field configuration. For all I can tell they only encode the information in the angular directions, not the information in the radial direction. If I were to throw in two concentric shells of matter, I don’t see how the asymptotic expansion could possibly capture the difference between two shells and one shell, as long as the total charge (or mass) is identical. The only way I see to get around this issue is to just postulate that the boundary at infinity does indeed contain all the information. And that in turn we only know how to make work in AdS space. (At least it’s believed to work in this case.)

Also, the argument for why the charges on the horizon are bounded and the limit reproduces the Bekenstein-Hawking entropy irks me. I would have expected the argument for the bound to rely on taking into account that not all configurations that one can encode in the infinite distance will actually go on to form black holes.

Having said that, I think it’s correct that a degeneracy of the vacuum state would solve the black hole information loss problem. It’s such an obvious solution that you have to wonder why nobody thought of it before, except that I thought of it before. In a note from 2012, I showed that a vacuum degeneracy is the conclusion one is forced to draw from the firewall problem. And in a follow-up paper I demonstrated explicitly how this solves the problem. I didn’t have a mechanism though to transfer the information into the outgoing radiation. So now I’m tempted to look at this, despite my best intentions not to touch the topic again...

In summary, I am not at all convinced that the new idea proposed by Hawking, Perry, and Strominger solves the information loss problem. But it seems an interesting avenue that is worth further exploration. And I am sure we will see further exploration...

Monday, January 04, 2016

Finding space-time quanta in the cosmic microwave background: Not so simple

“Final theory” is such a misnomer. The long sought-after unification of Einstein’s General Relativity with quantum mechanics would not be an end, it would be a beginning. A beginning to unravel the nature of space and time, and also a beginning to understand our own beginning – the origin of the universe.

The biggest problem physicists face while trying to find such a theory of quantum gravity is the lack of experimental guidance. The energy necessary to directly test quantum gravity is enormous, and far beyond what we can achieve on Earth. But for cosmologists, the universe is the laboratory. And the universe knows how to reach such high energies. It’s been there, it’s done it.

Our universe was born when quantum gravitational effects were strong. Looking back in time for traces of these effects is therefore one of the most promising, if not the most promising, place to find experimental evidence for quantum gravity. But if it was simple, it would already have been done.

The first issue is that, lacking a theory of quantum gravity, nobody knows how to describe the strong quantum gravitational effects in the early universe. This is the area where phenomenological model building becomes important. But this brings up the next difficulty, which is that the realm of strong quantum gravity lies even before inflation – the early phase in which the universe blew up exponentially fast – and neither today’s nor tomorrow’s observations will pin down any one particular model.

There is another option though, which is to focus on the regime where quantum gravitational effects are weak, yet still strong enough to affect matter. In this regime, relevant during and towards the end of inflation, we know how the theory works. The mathematics to treat the quantum properties of space-time during this period is well understood, because such small perturbations can be dealt with almost the same way as all other quantum fields.

Indeed, the weak quantum gravity approximation is routinely used in the calculation of today’s observables, such as the spectrum of the cosmic microwave background. That is right – cosmologists do actually use quantum gravity. It becomes necessary because, according to the currently most widely accepted models, inflation is driven by a quantum field – the “inflaton” – whose fluctuations go on to seed the structures we observe today. The quantum fluctuations of the inflaton cause quantum fluctuations of space-time. And these, in turn, remain visible today in the large-scale distribution of matter and in the cosmic microwave background (CMB).
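
Schematically, what happens in the standard treatment is that one writes the metric during inflation as a background plus a small scalar perturbation,

    \[ ds^2 \;=\; a^2(\eta)\Big[-(1+2\Phi)\,d\eta^2 \,+\, (1-2\Phi)\,\delta_{ij}\,dx^i dx^j\Big], \]

and quantizes the gauge-invariant combination of \(\Phi\) and the inflaton fluctuation much like any other weakly coupled field. The power spectrum of this combined variable is what eventually shows up in the CMB temperature correlations.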

This is why last year’s claim by the BICEP collaboration that they had observed the CMB imprint left by gravitational waves from the early universe was claimed by some media outlets to be evidence for quantum gravity. But the situation is not so simple. Let us assume they had indeed measured what they originally claimed. Even then, obtaining correct predictions from a theory that was quantized doesn’t demonstrate that the correct theory must have been quantized. To demonstrate that space-time must have had quantum behavior in the early universe, we must instead find an observable that could not have been produced by any unquantized theory.

In recent months, two papers appeared that studied this question and analyzed the prospects of finding evidence for quantum gravity in the CMB. The conclusions, however, are in both cases rather pessimistic.

The first paper is “A model with cosmological Bell inequalities” by Juan Maldacena. Maldacena tries to construct a Bell-type test that could be used to rule out a non-quantum origin of the signatures that are left over today from the early universe. The problem is that, once inflation ends, only the classical distribution of the originally quantum fluctuations goes on to enter the observables, like the CMB temperature fluctuations. This makes any Bell-type setup with detectors in the current era impossible, because the signal is long gone.

Maldacena refuses to be discouraged by this and instead tries to find a way in which another field, present during inflation, plays the role of the detector in the Bell experiment. This additional field could then preserve the information about the quantum-ness of space-time. He explicitly constructs such a model with an additional field that serves as detector, but himself calls it “baroque” and “contrived.” It is a toy model to demonstrate that there exist cases in which a Bell test can be performed on the CMB, but not a plausible scenario for our universe.

I find the paper nevertheless interesting as it shows what it would take to use this method and also exhibits where the problem lies. I wish there were more papers like this, where theorists come forward with ideas that didn’t work, because these failures are still a valuable basis for further studies.

The second paper is “Quantum Discord of Cosmic Inflation: Can we Show that CMB Anisotropies are of Quantum-Mechanical Origin?” by Jerome Martin and Vincent Vennin. The authors of this paper don’t rely on the Bell-type test specifically, but instead try to measure the “quantum discord” of the CMB temperature fluctuations. The quantum discord, in a nutshell, measures the quantum-ness in the correlations of a system. The observables they look at are firstly the CMB two-point correlations and later also higher correlation functions.

The authors address the question in two steps. In the first step they ask whether the CMB observations can also be reproduced in the standard treatment if the state has little or no quantum correlations, i.e., if one has a ‘classical state’ (in terms of correlations) in a quantum theory. They find that, as far as already existing observables are concerned, the modifications due to the lack of quantum correlations exist but are unobservable.
    “[I]n practice, the difference between the quantum and the classical results is tiny and unobservable probably forever.”
They are tentatively hopeful that the two cases might become distinguishable with higher-order correlation functions. On these correlations, experimentalists have so far only very little data, but it is a general topic of interest and future missions will undoubtedly sharpen the existing constraints. In the present work, the authors however do not quantify the predictions, but rather defer to future work: “[I]t remains to generate templates […] to determine whether such a four-point function is already excluded or not.”

The second step is that they study whether the observed correlations could be created by a theory that is classical to begin with, so that the fluctuations are stochastic. They then demonstrate that this can always be achieved, and thus there is no way to distinguish the two cases. To arrive at this conclusion, they first derive the equations for the correlations in the unquantized case, then demand that they reproduce those of the quantized case, and then argue that these equations can be fulfilled.

On the latter point I am, maybe uncharacteristically, less pessimistic than the authors themselves because their general case might be too general. Combining a classical theory with a quantum field gives rise to a semi-classical set of equations that lead to peculiar violations of the uncertainty principle, and an entirely classical theory would need a different mechanism to even create the fluctuations. That is to say, I believe that it might be possible to further constrain the prospects of unquantized fluctuations if one takes into account other properties that such models necessarily must have.
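
The semi-classical equations I am referring to here are, schematically,

    \[ G_{\mu\nu} \;=\; \frac{8\pi G}{c^4}\,\langle \psi|\hat{T}_{\mu\nu}|\psi\rangle, \]

that is, classical space-time sourced by the expectation value of the quantum stress-energy. This prescription is known to run into trouble, for example when the state is a superposition of macroscopically different matter configurations, and such trouble is one handle by which hybrid classical-quantum scenarios can be further constrained.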

In summary, I have to conclude that we still have a long way to go until we can conclude that space-time must have been quantized in the early universe. Nevertheless, I think it is one of the most promising avenues to pin down the first experimental signature for quantum gravity.