Friday, February 26, 2016

"Rate your Supervisor" comes to High Energy Physics

A new website called the "HEP Postdoc Project" allows postdocs in high energy physics to rate their supervisors in categories like "friendliness," "expertise," and "accessibility."

I normally ignore emails that more or less explicitly ask me to advertise sites on my blog, but decided to make an exception for this one. It seems to be a hand-made project, run by a small number of anonymous postdocs who want to help their fellows find good supervisors. And postdocs are a community I care a lot about.

While I appreciate the initiative, I have to admit to being generally unenthusiastic about anonymous ratings on point scales. Having had the pleasure of reading through an estimated several thousand recommendation letters, I have found that an assessment of skills is only useful if you know the person it comes from.

Much of this is cultural. A letter from a Russian prof saying the student "isn't entirely bad at math" might mean the student is up next for the Fields Medal. On the other hand, letters from North Americans tend to exclusively contain positive statements, and the way to read them is to search for qualities that were not listed.

But leaving aside the cultural stereotypes, more important are personal differences in how people express themselves and use point scales, even when they are given a description for each rating (and that is missing on the website). We occasionally used five-point rating scales in committees. You quickly notice that some people tend to clump everyone in the middle range, while others are more comfortable using the high and low scores. Still others either give a high rating or refuse to have any opinion. To get a meaningful aggregate, you can't just take an average; you need to know roughly how each committee member uses the scale. (Which will require endless hours of butt-flattening meetings. Trust me, I'd be happy to be done with it by clicking on a star scale.)
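
The point about knowing how each rater uses the scale can be made concrete. A minimal sketch of one common fix, normalizing each rater's scores before averaging (this is my choice of illustration, not anything the website actually does):

```python
from statistics import mean, stdev

# Each rater's scores for several candidates, on a 1-5 scale.
ratings = {
    "rater_clumps_middle": {"A": 3, "B": 3, "C": 4},
    "rater_uses_extremes": {"A": 1, "B": 2, "C": 5},
}

def normalized(scores):
    """Rescale one rater's scores to zero mean and unit spread,
    so 'generous' and 'harsh' raters become comparable."""
    mu, sigma = mean(scores.values()), stdev(scores.values())
    return {k: (v - mu) / sigma for k, v in scores.items()}

def aggregate(all_ratings, candidate):
    """Average the normalized scores instead of the raw ones."""
    return mean(normalized(r)[candidate] for r in all_ratings.values())

# Candidate C is above average for every rater; the raw average
# would understate this for the rater who clumps everyone at 3.
print(aggregate(ratings, "C"))
print(aggregate(ratings, "A"))
```

With normalization, both raters' verdicts on C count as "clearly above average," even though one expressed it as a 4 and the other as a 5.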

You could object that any type of online rating suffers from these problems and yet they seem to serve some purpose. That's right of course, so this isn't to say they're entirely useless. Thus I am sharing this link thinking it's better than nothing. And at the very least you can have some fun browsing through the list to see who got the lowest marks ;)

Wednesday, February 24, 2016

10 Years BackRe(action)

Yes, today marks the 10th anniversary of my first post on this blog.

I started blogging while I was in Santa Barbara, in a tiny fifth-floor office that swayed slightly with the occasional earthquake. I meant to write about postdoc life in California, but ended up instead writing mostly about my research interests. Because, well, that's what I'm interested in. Sorry, California.

Those were the years of the String Wars and of Black Holes at the LHC. And since my writing was on target, traffic to this blog increased rapidly -- a somewhat surprising and occasionally disturbing experience.

Over the years, I repeatedly tried to share the work of regularly feeding this blog, but noticed it's more effort trying to convince others to write than to just write myself. And no, it's not zero effort. In an attempt to improve my Germenglish, I have read Strunk's "Elements of Style" forwards and backwards, along with several books titled "Writing Well" (which were written really well!), and I hope you benefit from it. For me, the outcome has been that now I can't read my older blogposts without crying over my own clumsy writing. Also, there's link-rot. But if you have some tolerance for awkward English and missing images, there are 10 years' worth of archives totaling more than 1500 entries waiting in the sidebar.

The content of this blog has slightly changed over the years. Notably, I don't share links here any more. For this, I use instead my twitter and facebook accounts, which you can follow to get reading recommendations and the briefer commentaries. But since I can't stand cluttered pages, this blog is still ad-free and I don't make money with it. So if you like my writing, please have a close look at the donate-button in the top-right corner.

In the 10 years that have passed, this blog moved with me through the time-zones, from California to Canada, from Canada to Sweden, and from Sweden eventually back to Germany. It witnessed my wedding and my pregnancy and my daughters turning from babies to toddlers to Kindergartners. And the journey goes on. As some of you know already, I'm writing a book (or at least I'm supposed to be writing a book), so stay tuned, there's more to come.

I want to thank all of you for reading along, especially the commenters. I know that some of you have been around since the first days, and you have become part of my extended family. You have taught me a lot, about life and about science and about English grammar.

A special thank you goes to those of you who have sent me donations since I put up the button a few months ago. It is a great encouragement for me to continue.

Monday, February 22, 2016

Too many anti-neutrinos: Evidence builds for new anomaly

Bump ahead.
Tl;dr: A third experiment has reported an unexplained bump in the spectrum of reactor-produced anti-neutrinos. Speculations for the cause of the signal so far focus on incomplete nuclear fission models.


Neutrinos are the least understood of the known elementary particles, and they just presented physicists with a new puzzle. While monitoring the neutrino flux from nearby nuclear power plants, three different experiments have measured an unexpected bump around 5 MeV. First reported by the Double Chooz experiment in 2014, the excess was originally not statistically significant:
5 MeV bump as seen by Double Chooz. Image source: arXiv:1406.7763
Last year, a second experiment, RENO, reported an excess but did not assign a measure of significance. However, the bump is clearly visible in their data:
5 MeV bump as seen by RENO. Image source: arXiv:1511.05849
The newest bump is from the Daya Bay collaboration and was just published in PRL:

5 MeV bump as seen by Daya Bay. Image source: arXiv:1508.04233

They give the excess a local significance of 4.1 σ – a probability of less than one in ten thousand for the signal being due to pure chance.
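
Converting a significance in σ to a probability is a one-liner, the one-sided Gaussian tail (my own arithmetic for the curious, not a calculation taken from the paper):

```python
import math

def p_value(sigma):
    """One-sided tail probability of a standard normal distribution
    beyond `sigma` standard deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

print(p_value(4.1))  # about 2e-5
```

For 4.1 σ this gives roughly 2 in 100,000, consistent with "less than one in ten thousand."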

This is a remarkable significance for a particle that interacts so feebly, and an impressive illustration of how much detector technology has improved. Originally, the neutrino’s interaction was thought to be so weak that, to measure it at all, it seemed necessary to place detectors next to the most potent neutrino source known – a nuclear bomb explosion.

And this is exactly what Frederick Reines and Clyde Cowan set out to do. In 1951, they devised “Project Poltergeist” to detect the neutrino emission from a nuclear bomb: “Anyone untutored in the effects of nuclear explosions would be deterred by the challenge of conducting an experiment so close to the bomb,” wrote Reines, “but we knew otherwise from experience and pressed on.” And their audacious proposal was approved swiftly: “Life was much simpler in those days—no lengthy proposals or complex review committees,” recalls Reines.

Shortly after their proposal was approved, however, the two men found a better experimental design and instead placed a larger detector close to a nuclear power plant. But the controlled splitting of nuclei in a power plant takes much longer to produce the same number of neutrinos as a nuclear bomb blast, and patience was required of Reines and Cowan. Their patience eventually paid off: They were awarded the 1995 Nobel Prize in physics for the first successful detection of neutrinos – a full 65 years after the particles were first predicted.

Another Nobel Prize for neutrinos was handed out just last year, this one commemorating the neutrino’s ability to “oscillate,” that is, to change between different neutrino types as they travel. But, as the recent measurements demonstrate, neutrinos still have surprises in store.

Good news first: the new experiments have confirmed the neutrino oscillations. On short baselines like that of Daya Bay – a few kilometers – the electron-anti-neutrinos that are emitted during nuclear fission change into tau-anti-neutrinos and arrive at the detector in reduced numbers. The wavelength of the oscillation between the two particles depends on the energy – higher energy means a longer wavelength. Thus, a detector placed at a fixed distance from the emission point will see a different energy-distribution of particles than the one at emission.
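
For the two-flavor case described here, the survival probability has a simple closed form. A sketch (the mixing parameters below are typical published best-fit values, inserted purely for illustration, not numbers from the Daya Bay paper):

```python
import math

def survival_probability(L_km, E_MeV, sin2_2theta=0.084, dm2_eV2=2.5e-3):
    """Probability that an electron anti-neutrino of energy E is still an
    electron anti-neutrino after a baseline L (two-flavor formula).
    The factor 1.267 in the phase collects hbar and c in these units."""
    E_GeV = E_MeV * 1e-3
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# At a fixed baseline, the lower-energy neutrinos are depleted more:
print(survival_probability(1.6, 4.0))
print(survival_probability(1.6, 8.0))
```

This energy dependence is exactly why a detector at fixed distance sees a distorted spectrum compared to the one at emission.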

The emitted energy spectrum can be deduced from the composition of the reactor core – a known mixture of Uranium and Plutonium, each in two different isotopes. After the initial split, these isotopes leave behind a bunch of radioactive nuclei which then decay further. The math is messy, but not hugely complicated. With nuclear fission and decay models as input, the experimentalists can then extract from their data the change in the energy-distribution due to neutrino oscillation. And the parameters of the oscillation that they have observed fit those of other experiments.

Now to the bad news. The fits of the oscillation parameters to the energy spectrum do not take into account the overall number of particles. And when they look at the overall number, the Daya Bay experiment, like other reactor neutrino experiments before, falls about 6% short of expectation. And then there is the other oddity: the energy spectrum has a marked bump that does not agree with the predictions based on nuclear models. There are too many neutrinos in the energy range of 5 MeV.

There are four possible origins for this discrepancy: Detection, travel, production, and misunderstood background. Let us look at them one after the other.

Detection: The three experiments all use the same type of detector, a liquid scintillator with a gadolinium target. Neutrino-nucleus cross-sections are badly understood because neutrinos interact so weakly and very little data is available. However, the experimentalists calibrate their detectors with other radioactive sources placed nearby, and no bumps have been seen in these reference measurements. This strongly speaks against detector shortcomings as an explanation.

Travel: An overall lack of particles could be explained with oscillation into a so-far undiscovered new type of ‘sterile’ neutrino. However, such an oscillation cannot account for a bump in the spectrum. This could thus at best be a partial explanation, though an intriguing one.

Production: The missing neutrinos and the bump in the spectrum are inferred relative to the expected neutrino flux from the power plant. To calculate the emission spectrum, the physicists rely on nuclear models. The isotopes in the power plant’s core are among the best studied nuclei ever, but still this is a likely source of error. Most research studies of radioactive nuclei investigate them in small numbers, whereas in a reactor a huge number of different nuclei are able to interact with each other. A few proposals have been put forward that mostly focus on the decay of Rubidium and Yttrium isotopes because these make the main contribution to the high energy tail of the spectrum. But so far none of the proposed explanations has been entirely convincing.

Background: Daya Bay and RENO both state that the signal is correlated with the reactor power which makes it implausible that it’s a background effect. There aren’t many details in the paper about the time-dependence of the emission though. It would seem possible to me that reactor power depends on the time of the day or on the season, both of which could also be correlated with background. But this admittedly seems like a long shot.

Thus, at the moment the most conservative explanation is a lacking understanding of processes taking place in the nuclear power plant. It presently seems very unlikely to me that there is fundamentally new physics involved in this – if the signal is real to begin with. It looks convincing to me, but I asked fellow blogger Tommaso Dorigo for his thoughts: “Their signal looks a bit shaky to me - it is very dependent on the modeling of the spectrum and the p-value is unimpressive, given that there is no reason to single out the 5 MeV region a priori. I bet it's a modeling issue.”

Whatever the origin of the reactor antineutrino anomaly, it will require further experiments. As Anna Hayes, a nuclear theorist at Los Alamos National Laboratory, told Fermilab’s Symmetry Magazine: “Nobody expected that from neutrino physics. They uncovered something that nuclear physics was unaware of for 40 years.”

Wednesday, February 17, 2016

Dear Dr Bee: Can LIGO’s gravitational wave detection tell us something about quantum gravity?

“I was hoping you could comment on the connection between gravitational waves and gravitational quanta. From what I gather, the observation of gravitational waves at LIGO do not really tell us anything about the existence or properties of gravitons. Why should this be the case?”

“Can LIGO provide any experimental signature of quantum gravity?”

“Is gravity wave observation likely to contribute to [quantum] gravity? Or is it unlikely to be sensitive enough?”


It’s a question that many of you asked, and I have an answer for you over at Forbes! Though it comes down to “don’t get your hopes up too high.” (Sorry for the extra click, it’s my monthly contribution to Starts With a Bang. You can leave comments here instead.)

Monday, February 15, 2016

What makes an idea worthy? An interview with Anthony Aguirre

That science works merely by testing hypotheses has never been less true than today. As data have become more precise and theories have become more successful, scientists have become increasingly careful in selecting hypotheses before even putting them to test. Commissioning an experiment for every odd idea would be an utter waste of time, not to mention money. But what makes an idea worthy?

Pre-selection of hypotheses is especially important in fields where internal consistency and agreement with existing data are already very strong constraints, and it therefore plays an essential role in the foundations of physics. In this area, most new hypotheses are born dead or die very quickly, and researchers would rather not waste time devising experimental tests for ill-fated non-starters. During their career, physicists must thus constantly decide whether a new idea justifies spending years of research on it. Next to personal interest, their decision criteria are often based on experience and community norms – past-oriented guidelines that reinforce academic inertia.

Philosopher Richard Dawid coined the term “post-empirical assessment” for this practice of hypothesis pre-selection, and described it as a non-disclosed Bayesian probability estimate. But philosophy is one thing, doing research another. For the practicing scientist, the relevant question is whether a disclosed and organized pre-selection could help advance research. This would require the assessment to be performed in a cleaner way than is presently the case, a way that is less prone to error induced by social and cognitive biases.

One way to achieve this could be to give researchers incentives for avoiding such biases. Monetary incentives are a possibility, but to convince scientists that their best course of action is to put aside the need to promote their own research would require sums on the order of several years’ worth of research grants – an amount that adverts on nerd pages won’t raise, and thus an idea that seems to be one of those ill-fated non-starters. But then, for most scientists their reputation is more important than money.

Anthony Aguirre.
Image Credits: Kelly Castro.
And so Anthony Aguirre, Professor of Physics at UC Santa Cruz, devised an algorithm by which scientists can estimate the chances that an idea succeeds, and gain reputation by making accurate predictions. On his website Metaculus, users are asked to evaluate the likelihood of success for various scientific and technological developments. In the email exchange below, Anthony explains his idea.

Bee: Last time I heard from you, you were looking for bubble collisions as evidence of the multiverse. Now you want physicists to help you evaluate the expected impact of high-risk/high-reward research. What happened?

Anthony: Actually, I’ve been thinking about high-risk/high-reward research for longer than bubble collisions! The Foundational Questions Institute (FQXi) is now in its tenth year, and from the beginning we’ve seen part of FQXi’s mission as helping to support the high-risk/high-reward part of the research funding spectrum, which is not that well-served by the national funding agencies. So it’s a long-standing question how to best evaluate exactly how high-risk and high-reward a given proposal is.

Bubble collisions are actually a useful example of this. It’s clear that seeing evidence of an eternal-inflation multiverse would be pretty huge news, and of deep scientific interest. But even if eternal inflation is right, there are different versions of it, some of which have bubbles and some of which don’t; and even of those that do, only some subset will yield observable bubble collisions. So: how much effort should be put into looking for them? A few years of grad student or postdoc time? In my opinion, yes. A dedicated satellite mission? No way, unless there were some other evidence to go on.

(Another lesson, here, in my opinion, is that if one were to simply accept the dismissive “the multiverse is inherently unobservable” critique, one would never work out that bubble collisions might be observable in the first place.)

B: What is your relation to FQXi?

A: Max Tegmark and I started FQXi in 2006, and have had a lot of fun (and only a bit of suffering!) trying to build something maximally useful to the community of people thinking about the type of foundational, big-picture questions we like to think about.

B: What problem do you want to address with Metaculus?

A: Predicting and evaluating (should “prevaluating” be a word?) science research impact was actually — for me — the second motivation for Metaculus. The first grew out of another nonprofit I helped found, the Future of Life Institute (FLI). A core question there is how major new technologies like AI, genetic engineering, nanotech, etc., are likely to unfold. That’s a hard thing to know, but not impossible to make interesting and useful forecasts for.

FLI and organizations like it could try to build up a forecasting capability by hiring a bunch of researchers to do that. But I wanted to try something different: to generate a platform for soliciting and aggregating predictions that — with enough participation and data generation — could make accurate and well-calibrated predictions about future technology emergence as well as a whole bunch of other things.

As this idea developed, my collaborators (including Greg Laughlin at UCSC) and I realized that it might also be useful in filling a hole in our community’s ability to predict the impact of research. This could in principle help make better decisions about questions ranging from the daily (“Which of these 40 papers in my ‘to read’ folder should I actually carefully read?”) to the large-scale (“Should we fund this $2M experiment on quantum cognition?”).

B: How does Metaculus work?

A: The basic structure is a set of (currently) binary questions about the occurrence of future events, ranging from predictions about technologies like self-driving cars, Go-playing AIs and nuclear fusion, to pure science questions such as the detection of Planet 9, publication of experiments in quantum cognition or tabletop quantum gravity, or announcement of the detection of gravitational waves.

Participants are invited to assess the likelihood (1%-99%) of those events occurring. When a given question ‘resolves’ as either true or false, points are awarded depending upon a user’s prediction, the community’s predictions, and what actually happened. These points add a competitive game aspect, but serve the more important purpose of providing steady feedback, so that predictors can learn to predict more accurately and with better calibration. As data accumulate, predictors will also amass a track record, both overall and in particular subjects. This can be used to aggregate predictions into a single, more accurate one (at the moment, the ‘community’ prediction is just a straight median).
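
A toy version of the loop Anthony describes (the straight-median aggregation matches what he says the site currently does; the logarithmic scoring rule is my illustrative stand-in for the actual points formula, which isn't specified here):

```python
import math
from statistics import median

# Each user's stated probability that the event will happen.
predictions = {"alice": 0.80, "bob": 0.65, "carol": 0.95}

# Community prediction: currently just a straight median.
community = median(predictions.values())

def log_score(p, outcome):
    """Logarithmic scoring rule: rewards confidence that turns out
    right, and punishes confidence that turns out wrong."""
    return math.log(p if outcome else 1.0 - p)

# Suppose the question resolves as True:
for user, p in predictions.items():
    print(user, round(log_score(p, True), 3))
print("community:", community)
```

A proper scoring rule like this is what makes the feedback loop work: over many questions, the only way to accumulate points is to state probabilities you actually believe.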

An important aspect of this, I think, is not ‘just’ to make better predictions about well-known questions, but to create lots and lots of well-posed questions. It really does make you think about things differently when you have to come up with a well-posed question that has a clear criterion for resolution. And there are lots of questions where even a few predictions (even one!) by the right people can be a very useful resource. So a real utility is for this to be a sort of central clearinghouse for predictions.

B: What is the best possible outcome that you can imagine from this website and what does it take to get there?

A: The best outcome I could imagine would be this becoming really large-scale and useful, like a Wikipedia or Quora for predictions. It would also be a venue in which the credibility to make pronouncements about the future would actually be based on one’s actual demonstrated ability to make good predictions. There is, sadly, nothing like that in our current public discourse, and we could really use it.

I’d also be happy (if not as happy) to see Metaculus find a more narrow but deep niche, for example in predicting just scientific research/experiment success, or just high-impact technological rollouts (such as AI or Biotech).

In either case, it will take continued steady growth of both the community of users and the website’s capabilities. We already have all sorts of plans for multi-outcome questions, contingent questions, Bayes nets, algorithms for matching questions to predictors, etc. — but that will take time. We also need feedback about what users like, and what they would like the system to be able to do. So please try it out, spread the word, and let us know what you think!

Wednesday, February 10, 2016

Everything you need to know about gravitational waves

Last year in September, upgrades of the gravitational wave interferometer LIGO were completed. The experiment – now named advanced LIGO – searches for gravitational waves emitted in the merger of two black holes. Such a merger signal should fall straight into advanced LIGO’s reach.

Estimated gravitational wave spectrum. [Image Source]


It was thus expected that the upgraded experiment would either see something immediately, or that we had gotten something terribly wrong. And indeed, rumors about a positive detection started to appear almost immediately after the upgrade. But it wasn’t until this week that the LIGO collaboration announced several press conferences in the USA and Europe, scheduled for tomorrow, Thursday Feb 11, at 3:30 pm GMT. So something big is going to hit the headlines tomorrow, and here are the essentials that you need to know.

Gravitational waves are periodic distortions of space-time. They alter distance ratios for orthogonal directions. An interferometer works by using lasers to measure and compare orthogonal distances very precisely, thus it picks up even the tiniest space-time deformations.
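
To get a sense of how tiny these deformations are: the strain h is the fractional change in arm length. The ballpark strain of h ~ 10^-21 below is a commonly quoted order of magnitude for a detectable signal, my assumption rather than a number from this post:

```python
# Strain h is the fractional change in arm length: dL = h * L.
h = 1e-21                # ballpark strain of a detectable signal (assumed)
L = 4e3                  # LIGO arm length in meters
proton_radius = 0.8e-15  # meters, for scale

dL = h * L
print(dL)                 # absolute length change, in meters
print(dL / proton_radius) # a small fraction of a proton radius
```

The arms change length by far less than the size of a proton, which is why the measurement took a hundred years of technology to catch up with the prediction.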

Moving masses produce gravitational waves much like moving charges create electromagnetic waves. The most relevant differences between the two cases are
  1. Electromagnetic waves travel in space-time, whereas gravitational waves are a disturbance of space-time itself.
  2. Electromagnetic waves have spin 1, gravitational waves have spin 2. The spin counts how much you have to rotate the wave for it to come back onto itself. For the electromagnetic field that’s one full rotation, for the gravitational field it’s only half a rotation. [Image Credit: David Abergel]
  3. The dominant electromagnetic emission comes from the dipole moment (normally used e.g. for transmitter antennae), but gravitational waves have no dipole moment (a consequence of momentum conservation). It’s instead the quadrupole emission that is leading.
If you keep these differences in mind, you can understand gravitational waves in much the same way as electromagnetic waves. They can exist at any wavelength. They move at the speed of light. How many there are at a given wavelength depends on how many processes there are to produce them. The known processes give rise to the distribution in the graphic above. A gravitational wave detector is basically an antenna tuned in to a particularly promising frequency.

Since all matter gravitates, the motion of matter generically creates gravitational waves. Every time you move, you create gravitational waves, lots of them. These are, however, so weak that they are impossible to measure.

The gravitational waves that LIGO is looking for come from the most violent events in the universe that we know of: black hole mergers. In these events, space-time gets distorted dramatically as the two black holes merge into one, leading to significant emission of gravitational waves. The combined system then settles into a new stable state with a characteristic “ringdown.”



Yes, this also means that these gravitational waves go right through you and distort you oh-so-slightly on their way.

The wavelengths of gravitational waves emitted in such merger events are typically of the same order as the dimension of the system. That is, for black holes with masses between 10 and 100 times the solar mass, wavelengths are typically a hundred to a thousand km – right in the range where LIGO is most sensitive.
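
A back-of-the-envelope check on "the same order as the dimension of the system": the relevant size of a black hole is its Schwarzschild radius, and treating the emitted wavelength as a modest multiple of it is a rough rule of thumb, not a precise statement:

```python
G = 6.674e-11     # gravitational constant, m^3 / (kg s^2)
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg

def schwarzschild_radius_km(mass_solar):
    """R_s = 2 G M / c^2, converted to kilometers."""
    return 2 * G * mass_solar * M_sun / c**2 / 1e3

for m in (10, 100):
    print(m, "solar masses -> R_s ~", round(schwarzschild_radius_km(m)), "km")
```

Tens to hundreds of kilometers for the black holes themselves; the emitted wavelengths, a modest multiple of that, land in the hundred-to-thousand-km range quoted above.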

If you want to score extra points when discussing the headlines we expect tomorrow, learn how to pronounce Fabry–Pérot. This is a method for bouncing light-signals back and forth in the interferometer arms several times before making the measurement, which effectively increases the arm length. This is why LIGO is sensitive in a wavelength regime far longer than its actual arm length of about 2-4 km. And don’t call them gravity waves. A gravity wave is a cloud phenomenon.

Gravitational waves were predicted a hundred years ago as one of the consequences of Einstein’s theory of General Relativity. Their existence has since been indirectly confirmed because gravitational wave emission leads to energy loss, which has the consequence that two stars which orbit around a common center speed up over the course of time. This has been observed and was awarded the Nobel Prize for physics in 1993. If LIGO has detected the sought-after signal, it would not be the first detection, but the first direct detection.

Interestingly, even though it was long known that black hole mergers would emit gravitational waves, it wasn’t until computing power had increased sufficiently that precise predictions became possible. So it’s not like experiment is all that far behind theory on this one. General Relativity, though often praised for its beauty, leaves you with a nasty set of equations that in most cases cannot be solved analytically, so computer simulations become necessary.

The existence of gravitational waves is not doubted by anyone in the physics community, or at least not by anybody I have met. This is for good reasons: On the experimental side there is the indirect evidence, and on the theoretical side there is the difficulty of making any theory of gravity work that does not have gravitational waves. But the direct detection of gravitational waves would be tremendously exciting because it opens our eyes to an entirely new view on the universe.

Hundreds of millions of years ago, a primitive form of life crawled out of the water on planet Earth and opened its eyes to see, for the first time, the light of the stars. Detecting gravitational waves is a momentous event just like this – it’s the first time we can receive signals that were previously entirely hidden from us, revealing an entirely new layer of reality.

So bookmark the webcast page and mark your calendar for tomorrow, 3:30 pm GMT – it might enter the history books.

Update Feb 11: The rumors were all true. They have a 5.1 σ signal of a binary black hole merger. The paper is published in PRL, here is the abstract.

Friday, February 05, 2016

Much Ado around Nothing: The Cosmological non-Constant Problem

Tl;dr: Researchers put forward a theoretical argument that new physics must appear at energies much lower than commonly thought, barely beyond the reach of the LHC.

The cosmological constant is the worst-ever prediction of quantum field theory, infamously off by 120 orders of magnitude. And as if that wasn’t embarrassing enough, this gives rise to not one but three problems: Why is the measured cosmological constant neither 1) huge nor 2) zero, and 3) why did it start to dominate the universe’s expansion just now, and not a billion years earlier? With that, you’d think that physicists have their hands full getting zeroes arranged correctly. But Niayesh Afshordi and Elliot Nelson just added to our worries.

In a paper that took third place in this year’s Buchalter Cosmology Prize, Afshordi and Nelson pointed out that the cosmological constant, if it arises from the vacuum energy of matter fields, should be subject to quantum fluctuations. And these fluctuations around the average are still large even if you have managed to get the constant itself to be small.
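
The famous "120 orders of magnitude" can be reproduced on the back of an envelope: compare the naive vacuum energy density, (Planck energy)^4, with the observed dark-energy density, roughly (a few milli-electronvolts)^4. Both scales below are standard textbook numbers, inserted here for illustration:

```python
import math

E_planck_eV = 1.22e28  # Planck energy, ~1.22e19 GeV, in eV
E_obs_eV = 2.3e-3      # observed dark-energy scale, ~2.3 meV

# Energy densities scale as (energy)^4, so compare the fourth powers.
orders = 4 * math.log10(E_planck_eV / E_obs_eV)
print(round(orders))
```

The naive estimate comes out at roughly 122-123 orders; "120" is the conventional round figure.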

The cosmological constant, thus, is not actually constant. And since matter curves space-time, the matter fluctuations lead to space-time fluctuations – which can screw with our cosmological models. Afshordi and Nelson dubbed it the “Cosmological non-Constant Problem.”

But there is more to their argument than just adding to our problems because Afshordi and Nelson quantified what it takes to avoid a conflict with observation. They calculate the effect of stress-energy fluctuations on the space-time background, and then analyze what consequences this would have for the gravitational interaction. They introduce as a free parameter an energy scale up to which the fluctuations abound, and then contrast the corrections from this with observations, like for example the CMB power spectrum or the peculiar velocities of galaxy clusters. From these measurements they derive bounds on the scale at which the fluctuations must cease, and thus, where some new physics must come into play.

They find that the scale beyond which we should already have seen the effect of the vacuum fluctuations is about 35 TeV. If their argument is right, this means something must happen either to matter or to gravity before reaching this energy scale; the option the authors advocate in their paper is that physics becomes strongly coupled below this scale (thus invalidating the extrapolation to larger energies, removing the problem).

Unfortunately, the LHC will not be able to reach all the way up to 35 TeV. But a next larger collider – and we all hope there will be one! – almost certainly would be able to test the full range. As Niayesh put it: “It’s not a problem yet” – but it will be a problem if there is no new physics before getting all the way up to 35 TeV.

I find this an interesting new twist on the cosmological constant problem(s). Something about this argument irks me, but I can’t quite put a finger on it. If I have an insight, you’ll hear from me again. Just generally I would caution you to not take the exact numerical value too seriously because in this kind of estimate there are usually various places where factors of order one might come in.

In summary, if Afshordi and Nelson are right, we’ve been missing something really essential about gravity.

Me, Elsewhere

I'm back from my trip. Here are some things that prevented me from more substantial blogging:
  • I wrote an article for Aeon, "The superfluid Universe," which just appeared. For a somewhat more technical summary, see this earlier blogpost.
  • I did a Q&A with John The-End-of-Science Horgan, which was fun. I disagree with him on many things, but I admire his writing. He is unfailingly skeptical and unashamedly opinionated -- qualities I find lacking in much of today's science writing, including, sometimes, my own.
  • I spoke with Davide Castelvecchi about Stephen Hawking's recent attempt to solve the black hole information loss problem, which I previously wrote about here.
  • And I had some words to spare for Zeeya Merali, probably more words than she wanted, on the issue with the arXiv moderation, which we discussed here.
  • Finally, I had the opportunity to give some input for this video on the PhysicsGirl's YouTube channel:



    I previously explained in this blogpost that Hawking radiation is not produced at the black hole horizon, a correction to the commonly used popular science explanation that caught much more attention than I anticipated.

    There are of course still some things in the above video I'd like to complain about. To begin with, anti-particles don't normally have negative energy (no they don't). And the vacuum is the same for two observers who are moving relative to each other with constant velocity - it's the acceleration that makes the difference between the vacua. In any case, I applaud the Physics Girl team for taking on what is admittedly a rather technical and difficult topic. If anyone can come up with a better illustration for Hawking-radiation than Hawking's own idea with the pairs that are being ripped apart (which is far too localized to fit well with the math), please leave a suggestion in the comments.

Thursday, January 28, 2016

Does the arXiv censor submissions?

The arXiv is the physicists' marketplace of ideas. In high energy physics and adjacent fields, almost all papers are submitted to the arXiv prior to journal submission. Developed by Paul Ginsparg in the early 1990s, this open-access pre-print repository has served the physics community for more than 20 years, and has meanwhile extended to neighboring disciplines like mathematics, economics, and biology. It fulfills an extremely important function by helping us to exchange ideas quickly and efficiently.

Over the years the originally free signup became more restricted. If you sign up for the arXiv now, you need to be "endorsed" by several people who are already signed up. It also became necessary to screen submissions to keep the quality level up. In hindsight, this isn't surprising: more people means more trouble. And sometimes, of course, things go wrong.

I have heard various stories about arXiv moderation gone wrong; mostly they come from students, and mostly they affect those who work in small research areas or those whose name is Garrett Lisi.

A few days ago, a story appeared online which quickly spread. Nicolas Gisin, an established Professor of Physics who works on quantum cryptography (among other things), relates the story of two of his students who ventured into territory unfamiliar to him, black hole physics. They wrote a paper that appeared to him reasonable, if likely wrong. It got rejected by the arXiv. The paper later got published by PLA (a respected journal that however does not focus on general relativity). More worrisome still, the students' next paper also got rejected by the arXiv, making it appear as if they were now blacklisted.

Now the paper that caused the offense is, haha, not on the arXiv, but I tracked it down. So let me just say that I think it's indeed wrong and it shouldn't have gotten published in a journal. They are basically trying to include the backreaction of the outgoing Hawking-radiation on the black hole. It's a thorny problem (the very problem this blog was named after) and the treatment in the paper doesn't make sense.

Hawking radiation is not produced at the black hole horizon. No, it is not. And tracking back the flux from infinity to the horizon is therefore not correct. Besides this, the equation for the mass-loss that they use is a late-time approximation in a collapse situation. One can't use this approximation for a metric without collapse, and it certainly shouldn't be used down to the Planck mass. If you have a collapse scenario, then to get the backreaction right you would have to calculate the emission rate prior to horizon formation, time-dependently, and integrate over this.
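For reference, the late-time approximation in question is the textbook mass-loss law dM/dt = -C/M². A minimal sketch (my own toy units, not the paper's treatment) shows what this law implies: the lifetime grows with the cube of the initial mass, and the mass crashes to zero only near the very end.

```python
# Toy sketch of the standard late-time Hawking mass-loss law
# dM/dt = -C/M^2, which integrates to M(t)^3 = M0^3 - 3*C*t.
# Units are arbitrary; C lumps together all physical constants.

def evaporation_time(m0, c):
    """Lifetime under dM/dt = -C/M^2: proportional to m0 cubed."""
    return m0**3 / (3.0 * c)

def mass_at_time(m0, c, t):
    """Mass at time t; zero once the hole has evaporated."""
    m3 = m0**3 - 3.0 * c * t
    return m3 ** (1.0 / 3.0) if m3 > 0 else 0.0

# A hole with twice the initial mass lives 8 times as long:
t1 = evaporation_time(1.0, 1.0)
t2 = evaporation_time(2.0, 1.0)
print(t2 / t1)  # ratio of lifetimes, ≈ 8
```

Note that this is only the asymptotic late-time behavior; as argued above, it says nothing about the emission rate during collapse.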

Ok, so the paper is wrong. But should it have been rejected by the arXiv? I don't think so. The arXiv moderation can't and shouldn't replace peer review; it should just be a basic quality check, and the paper looks like a reasonable research project.

I asked a colleague who I know works as an arXiv moderator for comment. (S)he wants to stay anonymous but offers the following explanation:


    I had not heard of the complaints/blog article, thanks for passing that information on...
    The version of the article I saw was extremely naive and was very confused regarding coordinates and horizons in GR... I thought it was not “referee-able quality” — at least not in any competently run GR journal... (The hep-th moderator independently raised concerns...)
    While it is now published at Physics Letters A, it is perhaps worth noting that the editorial board of Physics Letters A does *not* include anyone specializing in GR.
(S)he is correct of course. We haven't seen the paper that was originally submitted. It was very likely in considerably worse shape than the published version. Indeed, Gisin writes in his post that the paper was significantly revised during peer review. Taking this into account, the decision seems understandable to me.

The main problem I have with this episode is not that a paper got rejected which maybe shouldn't have been rejected -- because shit happens. Humans make mistakes, and let us be clear that the arXiv, underfunded as it is, relies on volunteers for the moderation. No, the main problem I have is the lack of transparency.

The arXiv is an essential resource for the physics community. We all put trust in a group of mostly anonymous moderators who do a rather thankless and yet vital job. I don't think the origin of the problem is with these people. I am sure they do the best they can. No, I think the origin of the problem is the lack of financial resources, which limits the possibility of employing administrative staff to oversee the operations. You get what you pay for.

I hope that this episode will be a wake-up call to the community to put their financial support behind the arXiv, and to the arXiv to use this support to put into place a more transparent and better organized moderation procedure.

Note added: It was mentioned to me that the problem with the paper might be more elementary in that they're using the wrong coordinates to begin with - it hadn't even occurred to me to check this. To tell you the truth, I am not really interested in figuring out exactly why the paper is wrong; it's beside the point. I just hope that whoever reviewed the paper for PLA now goes and sits in the corner for an hour with a paper bag over their head.

Wednesday, January 27, 2016

Hello from Maui

Greetings from the west end of my trip, which brought me out to Maui, visiting Garrett at the Pacific Science Institute, PSI. Since its launch roughly a year ago, Garrett and his girlfriend/partner Crystal have hosted about 60 traveling scientists, "from all areas except chemistry" I was told.

I got bitten by mosquitoes and picked at by a set of adorable chickens (named after the six quarks), but managed to convince everybody that I really didn't feel like swimming, or diving, or jumping off things at great height. I know I'm dull. I did watch some sea turtles though and I also got a new T-shirt with the PSI-logo, which you can admire in the photo to the right (taken in front of a painting by Crystal).

I'm not an island-person, don't like mountains, and I can't stand humidity, so for me it's somewhat of a mystery what people think is so great about Hawaii. But leaving aside my preference for German forests, it's as pleasant a place as can be.

You won't be surprised to hear that Garrett is still working on his E8 unification and says things are progressing well, if slowly. Aloha.






Monday, January 25, 2016

Is space-time a prism?

Tl;dr: A new paper demonstrates that quantum gravity can split light into spectral colors. Gravitational rainbows are almost certainly undetectable on cosmological scales, but the idea might become useful for Earth-based experiments.

Einstein’s theory of general relativity still stands apart from the other known forces by its refusal to be quantized. Progress in finding a theory of quantum gravity has stalled because of the complete lack of data – a challenging situation that physicists have never encountered before.

The main problem in measuring quantum gravitational effects is the weakness of gravity. Estimates show that testing its quantum effects would require detectors the size of planet Jupiter or particle accelerators the size of the Milky Way. Thus, experiments to guide theory development are unfeasible. Or so we’ve been told.

But gravity is not a weak force – its strength depends on the masses between which it acts. (Indeed, that is the very reason gravity is so difficult to quantize.) Saying that gravity is weak makes sense only when referring to a specific mass, like that of the proton for example. We can then compare the strength of gravity to the strength of the other interactions, demonstrating its relative weakness – a puzzling fact known as the “hierarchy problem.” But that the strength of gravity depends on the particles’ masses also means that quantum gravitational effects are not generally weak: their magnitude too depends on the gravitating masses.
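To put a number on this relative weakness, here is the classic back-of-the-envelope comparison (my example with standard constants, not from the text): the ratio of the gravitational to the electrostatic force between two protons, which is independent of their separation since both forces fall off as 1/r².

```python
# Compare gravity and electrostatics between two protons.
# All constants are standard CODATA-style values (rounded).

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
K = 8.988e9      # Coulomb constant, N m^2 C^-2
M_P = 1.6726e-27 # proton mass, kg
Q = 1.602e-19    # elementary charge, C

# F_grav / F_coulomb = (G m^2 / r^2) / (K q^2 / r^2); the r^2 cancels.
ratio = (G * M_P**2) / (K * Q**2)
print(ratio)  # ≈ 8e-37: this is what "gravity is weak" means
```

For heavier masses the ratio grows with the square of the mass, which is the point made above: gravity is only "weak" relative to a choice of particle.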

To be more precise, one should thus say that quantum gravity is hard to detect because an object massive enough to have large gravitational effects has negligible quantum properties, and so doesn’t cause quantum behavior of space-time. General relativity however acts in two ways: matter affects space-time, and space-time affects matter. And so the reverse is also true: if the dynamical background of general relativity for some reason has an intrinsic quantum uncertainty, then this will affect the matter moving in this space-time – in a potentially observable way.

Rainbow gravity, proposed in 2003 by Magueijo and Smolin, is based on this idea that the quantum properties of space-time could noticeably affect particles propagating in it. In rainbow gravity, space-time itself depends on the particle’s energy. In particular, light of different energies travels with different speeds, splitting up into different colors, hence the name. It’s a nice idea, but unfortunately it is an internally inconsistent theory, and so far nobody has managed to make much sense of it.
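To get a feeling for the magnitudes involved, here is a toy estimate (my own illustrative numbers, not from Magueijo and Smolin's paper): if the speed of light picked up a generic linear energy dependence suppressed by some quantum gravity scale, two photons of different energies emitted together by a distant source would arrive at slightly different times.

```python
# Toy time-of-flight estimate for an energy-dependent speed of light,
# v(E) = c * (1 - E / E_QG), to leading order in E/E_QG.
# The source distance and photon energies are illustrative choices.

C = 299_792_458.0  # speed of light, m/s
E_QG = 1.22e19     # assume Planck-scale suppression, in GeV

def arrival_delay(d_meters, e1_gev, e2_gev, e_qg=E_QG):
    """Leading-order arrival-time difference between two photons."""
    return (d_meters / C) * abs(e1_gev - e2_gev) / e_qg

# A burst at roughly a billion light years, photons at 1 GeV vs 10 GeV:
d = 1e9 * 9.46e15  # meters
print(arrival_delay(d, 1.0, 10.0))  # ≈ 0.02 seconds
```

Delays of this size are in fact what gamma-ray-burst timing searches for energy-dependent photon speeds are sensitive to, which is why such toy models attract attention despite their theoretical problems.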

First, let us note that already in general relativity the background of course depends on the energy of the particles in it, and this should certainly carry over into quantum gravity. More precisely though, space-time depends not on the energy but on the energy-density of the matter in it. So this cannot give rise to rainbow gravity. Even worse, because of this, general relativity is in outright conflict with rainbow gravity.

Second, an energy-dependent metric can be given meaning in the framework of asymptotically safe gravity, but this is not what rainbow gravity is about either. Asymptotically safe gravity is an approach to quantum gravity in which space-time depends on the energy by which it is probed. The energy in rainbow gravity is however not that by which space-time is probed (which is observer-independent), but supposedly the energy of a single particle (which is observer-dependent).

Third, the whole idea crumbles to dust once you start wondering how the particles in rainbow gravity are supposed to interact. You need space-time to define “where” and “when”. If each particle has its own notion of where and when, the requirement that an interaction be local rather than “spooky” action at a distance can no longer be fulfilled.

In a paper which recently appeared in PLB (arXiv version here), three researchers from the University of Warsaw have made a new attempt to give meaning to rainbow gravity. While it doesn’t really solve all problems, it makes considerably more sense than the previous attempts.

In their paper, the authors look at small (scalar) perturbations over a cosmological background, which are modes with different energies. They assume that there is some theory of quantum gravity which dictates what the background does, but do not specify this theory. They then ask what happens to the perturbations which travel in the background and derive equations for each mode of the perturbation. Finally, they demonstrate that these equations can be reformulated so that, effectively, the perturbation travels in a space-time which depends on the perturbation’s own energy – it is a variant of rainbow gravity.

The unknown theory of quantum gravity only enters into the equations by an average over the quantum states of the background’s dynamical variables. That is, if the background is classical and in one specific quantum state, gravity doesn’t cause any rainbows, which is the usual state of affairs in general relativity. It is the quantum uncertainty of the space-time background that gives rise to rainbows.

This type of effective metric makes somewhat more sense to me than the previously considered scenarios. In this new approach, it is not the perturbation itself that causes the quantum effect (which would be highly non-local and extremely suspicious). Instead the particle merely acts as a probe for the background (a quite common approximation that neglects backreaction).

Unfortunately, one must expect the quantum uncertainty of space-time to be extremely tiny and undetectable. A long time has passed since quantum gravitational effects were strong in the very early universe, and they have long since decohered. Of course we don’t really know this with certainty, so looking for such effects is generally a good idea. But I don’t think it’s likely we’d find something here.

The situation looks somewhat better though for a case not discussed in the paper, which is a quantum uncertainty in space-time caused by massive particles with a large position uncertainty. I discussed this possibility in this earlier post, and it might be that the effect considered in the new paper can serve as a way to probe it. This would however require knowing what happens not to background perturbations but to other particles traveling in this background, which requires a different approach than the one used in this paper.

I am not really satisfied with this version of rainbow gravity because I still don’t understand how particles would know where to interact, or which effective background to travel in if several of them are superposed, which seems somewhat of a shortcoming for a quantum theory. But this version isn’t quite as nonsensical as the previous one, so let me say I am cautiously hopeful that this idea might one day become useful.

In summary, the new paper demonstrates that gravitational rainbows might appear in quantum gravity under quite general circumstances. It might be an interesting contribution that, with further work, could become useful in the search for experimental evidence of quantum gravity.

Note added: The paper deals with a FRW background and thus trivially violates Lorentz-invariance.

Thursday, January 21, 2016

Messengers from the Dark Age

Astrophysicists dream of putting radio
telescopes on the far side of the moon.
[Image Credits: 21stcentech.com]
An upcoming generation of radio telescopes will soon let us look back into the dark age of the universe. The new observations can test dark matter models, inflation, and maybe even string theory.

The universe might have started with a bang, but once the echoes faded it took quite a while until the symphony began. Between the creation of the cosmic microwave background (CMB) and the formation of the first stars, 100 million years passed in darkness. This “dark age” has so far been entirely hidden from observation, but this situation is soon to change.

The dark age may hold the answers to many pressing questions. During this period, most of the universe’s mass was in the form of light atoms – primarily hydrogen – and dark matter. The atoms slowly clumped under the influence of gravitational forces, until they finally ignited the first stars. Before the first stars, astrophysical processes were few, and so the distribution of hydrogen during the dark age carries very clean information about structure formation. Details about both the behavior of dark matter and the size of structures are encoded in these hydrogen clouds. But how can we see into the darkness?

Luckily the dark age was not entirely dark, just very, very dim. Back then, the hydrogen atoms that filled the universe frequently bumped into each other, which can flip the electron’s spin. If a collision flips the spin, the electron’s energy changes by a tiny amount because the energy depends on whether the electron’s spin is aligned with the spin of the nucleus or whether it points in the opposite direction. This energy difference is known as “hyperfine splitting.” Flipping the hydrogen electron’s spin therefore leads to the emission of a very low energy photon with a wavelength of 21cm. If we can trace the emissions of these 21cm photons, we can trace the distribution of hydrogen.
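The 21 cm number follows directly from the size of the hyperfine splitting via λ = hc/ΔE. A quick check with standard constants (the splitting value is a textbook figure, not from the post):

```python
# Check that the hydrogen hyperfine splitting, about 5.87 micro-eV,
# corresponds to a photon wavelength of roughly 21 cm.

H = 6.62607015e-34   # Planck constant, J*s
C = 299_792_458.0    # speed of light, m/s
EV = 1.602176634e-19 # joules per electron volt

delta_e = 5.87e-6 * EV        # hyperfine splitting in joules
wavelength = H * C / delta_e  # lambda = h*c / E
print(wavelength)             # ≈ 0.211 m, the famous 21 cm line
```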


But 21 cm is the wavelength of the photons at the time of emission, which was 13 billion years ago. Since then the universe has expanded significantly and stretched the photons’ wavelength with it. How much the wavelength has been stretched depends on whether a photon was emitted early or late during the dark age. The early photons have meanwhile been stretched by a factor of about 1000, resulting in wavelengths of a few hundred meters. Photons emitted towards the end of the dark age have not been stretched quite as much – today they have wavelengths of a few meters.
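The stretching described above is just the cosmological redshift relation λ_obs = λ_emit (1 + z). A minimal sketch (the redshift values are rough illustrative choices, not precise figures):

```python
# Redshifted 21 cm wavelengths: lambda_obs = lambda_emit * (1 + z).
# Roughly z ~ 1000 for the earliest dark-age photons,
# z ~ 15 near the end of the dark age.

LAMBDA_21CM = 0.21  # meters, wavelength at emission

def observed_wavelength(z):
    return LAMBDA_21CM * (1.0 + z)

print(observed_wavelength(1000))  # ≈ 210 m: a few hundred meters
print(observed_wavelength(15))    # ≈ 3.4 m: a few meters
```

Scanning across observed wavelengths thus scans across emission times, which is the basis for the tomography described below.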

The most exciting aspect of 21cm astronomy is that it gives us not just a snapshot of one particular moment – like the CMB – but allows us to map different times during the dark age. By measuring the red-shifted photons at different wavelengths we can scan through the whole period. This would give us many new insights about the history of our universe.

To begin with, it is not well understood how the dark age ends and the first stars are formed. The dark age fades away in a phase of reionization in which the hydrogen is stripped of its electrons again. This reionization is believed to be caused by the first stars’ radiation, but we don’t know exactly how it proceeds. Since ionized hydrogen no longer emits the hyperfine line, 21cm astronomy could tell us how the ionized regions grow, teaching us much about the early stellar objects and the behavior of the intergalactic medium.

21 cm astronomy can also help solve the riddle of dark matter. If dark matter self-annihilates, this affects the distribution of neutral hydrogen, which can be used to constrain or rule out dark matter models.

Inflation models too can be probed by this method: The distribution of structures that 21cm astronomy can map carries an imprint of the quantum fluctuations that caused them. These fluctuations in turn depend on the type of inflaton field and the field’s potential. Thus, the correlations in the structures which were present already during the dark age let us narrow down what type of inflation has taken place.

Maybe most excitingly, the dark ages might give us a peek at cosmic strings, one-dimensional objects with a high density and high gravitational pull. In many models of string phenomenology, cosmic strings can be produced at the end of inflation, before the dark age begins. By distorting the hydrogen clouds, the cosmic strings would leave a characteristic signal in the 21cm emission spectrum.

CSL-1. A candidate signal for a cosmic
string, later identified as two galaxies.
Read more about cosmic strings here.
But measuring photons of this wavelength is not easy. The Milky Way too has sources that emit in this regime, which gives rise to an unavoidable galactic foreground. In addition, the Earth’s atmosphere distorts the signal, and some radio broadcasts too can interfere with the measurement. Nevertheless, astronomers have risen to the challenge, and the first telescopes hunting for the 21cm signal are now in operation.

The Low-Frequency Array (LOFAR) went online in late 2012. Its main telescope is located in the Netherlands, but it combines data from 24 other telescopes in Europe. It reaches wavelengths up to 30m. The Murchison Widefield Array (MWA) in Australia, which is sensitive to wavelengths of a few meters, started taking data in 2013. And in 2025, the Square Kilometer Array (SKA) is scheduled to be completed. This joint project between Australia and South Africa will be the largest radio telescope yet.

Still, the astronomers’ dream would be to get rid of the distortion caused by Earth’s atmosphere. Their most ambitious plan is to put an array of telescopes on the far side of the moon. But this idea is, unfortunately, still far-fetched – not to mention underfunded.

Only a few decades ago, cosmology was a discipline so starved of data that it was closer to philosophy than to science. Today it is a research area based on high precision measurements. The progress in technology and in our understanding of the universe’s history has been nothing short of stunning, but we have only just begun. The dark age is next.


[This post previously appeared on Starts With a Bang.]

Saturday, January 16, 2016

Away Note

I am traveling the next three weeks and things will go very slowly on this blog.

In case you missed it, you might enjoy two pieces I recently wrote for NOVA: Are Singularities Real? and Are Space and Time Discrete or Continuous? There should be a third one appearing later this month (which will also be the last, because it seems they're scrapping this column). And then I wrote an article for Quanta Magazine, String Theory Meets Loop Quantum Gravity, to which you find some background material here and here. Finally you might find this article in The Independent amusing: Stephen Hawking publishes paper on black holes that could get him 'a Nobel prize after all', in which I'm quoted as the voice of reason.

Wednesday, January 13, 2016

Book review: “From the Great Wall to the Great Collider” by Nadis and Yau

From the Great Wall to the Great Collider: China and the Quest to Uncover the Inner Workings of the Universe
By Steve Nadis and Shing-Tung Yau
International Press of Boston (October 23, 2015)

Did you know that particle physicists like the Chinese government’s interest in building the next larger particle collider? If not, then this neat little book about the current plans for the Great Collider, aka “Nimatron,” is just for you.

Nadis and Yau begin their book by laying out the need for a larger collider, followed by a brief history of accelerator physics that emphasizes the contributions of Chinese researchers. Then come two chapters about the hunt for the Higgs boson, the LHC’s success, and a brief survey of beyond-the-standard-model physics that focuses on supersymmetry and extra dimensions. The reader then learns about other large-scale physics experiments that China has run or is running, and about the currently discussed options for the next larger particle accelerator. Nadis and Yau don’t waste time discussing details of all accelerators that are presently considered, but get quickly to the point of laying out the benefits of a circular 50 or even 100 TeV collider in China.

And the benefits are manifold. The favored location for the gigantic project is Qinhuangdao, which is “an attractive destination that might appeal to foreign scientists” because, among other things, “its many beaches [are] ranked among the country’s finest,” “the countryside is home to some of China’s leading vineyards” and even the air quality is “quite good” at least “compared to Beijing.” Book me in.

The authors make a good case that both the world and China only have to gain from the giant collider project. China because “one result would likely be an enhancement of national prestige, with the country becoming a leader in the field of high-energy physics and perhaps eventually becoming the world center for such research. Improved international relations may be the most important consequence of all.” And the rest of the world benefits because, besides preventing thousands of particle physicists from boredom, “civil engineering costs are low in the country – much cheaper than those in many Western countries.”

The book is skillfully written, with scientific explanations that are detailed yet not overly technical, and much space is given to researchers in the field. Nadis and Yau quote whoever might help get their message across: David Gross, Lisa Randall, Frank Wilczek, Don Lincoln, Don Hopper, Joseph Lykken, Nima Arkani-Hamed, Nathan Seiberg, Martinus Veltman, Steven Weinberg, Gordon Kane, John Ellis – everybody gets a say.

My favorite quote is maybe that by Henry Tye, who argues that the project is a good investment because “the worldwide impact of a collider is much bigger than if the money were put into some other area of science,” since “even if China were to spend more than the United States in some field of science and engineering other than high-energy physics, US professors would still do their research in the US.” This quote sums up the authors’ investigation of whether such a major financial commitment might have a larger payoff were it invested in any other research area.

Don’t get me wrong there, if the Chinese want to build a collider, I think that’s totally great and an awesome contribution to knowledge discovery and the good of humanity, the forgiveness of sins, the resurrection of the body, and the life everlasting, amen. But there’s a real discussion to be had here about whether building the next bigger ring-thing is where the money should flow, or whether putting a radio telescope on the moon or a gravitational wave interferometer in space would bring more bang for the Yuan. Unfortunately, you’re not going to find that discussion in Nadis and Yau’s book.

Aside: The print has smear-stripes. Yes, that puts me in a bad mood.

In summary, this book will come in very handy next time you have to convince a Chinese government official to spend a lot of money on bringing protons up to speed.

[Disclaimer: Free review copy.]

Sunday, January 10, 2016

Free will is dead, let’s bury it.

I wish people would stop insisting they have free will. It’s terribly annoying. Insisting that free will exists is bad science, like insisting that horoscopes tell you something about the future – it’s not compatible with our knowledge about nature.

According to our best present understanding of the fundamental laws of nature, everything that happens in our universe is due to only four different forces: gravity, electromagnetism, and the strong and weak nuclear force. These forces have been extremely well studied, and they don’t leave any room for free will.

There are only two types of fundamental laws that appear in contemporary theories. One type is deterministic, which means that the past entirely predicts the future. There is no free will in such a fundamental law because there is no freedom. The other type of law we know appears in quantum mechanics and has an indeterministic component which is random. This randomness cannot be influenced by anything, and in particular it cannot be influenced by you, whatever you think “you” are. There is no free will in such a fundamental law because there is no “will” – there is just some randomness sprinkled over the determinism.

In neither case do you have free will in any meaningful way.

These are the only two options, and all other elaborations on the matter are just verbose distractions. It doesn’t matter if you start talking about chaos (which is deterministic), top-down causation (which doesn’t exist), or insist that we don’t know how consciousness really works (true but irrelevant). It doesn’t change a thing about this very basic observation: there isn’t any known law of nature that lets you meaningfully speak of “free will”.

If you don’t want to believe that, I challenge you to write down any equation for any system that allows for something one could reasonably call free will. You will almost certainly fail. The only thing you can really do to hold on to free will is to wave your hands, yell “magic”, and insist that there are systems which are exempt from the laws of nature. And these systems somehow have something to do with human brains.

The only known example of a law that is neither deterministic nor random comes from my own work. But it’s a baroque construct meant as a proof of principle, not a realistic model that I would know how to combine with the four fundamental interactions. As an aside: The paper was rejected by several journals. Not because anyone found anything wrong with it. No, the philosophy journals complained that it was too much physics, and the physics journals complained that it was too much philosophy. And you wonder why there isn’t much interaction between the two fields.

After plain denial, the somewhat more enlightened way to insist on free will is to redefine what it means. You might settle for example on speaking of free will as long as your actions cannot be predicted by anybody, possibly not even by yourself. Clearly, it is presently impossible to make such a prediction. It remains to be seen whether it will remain impossible, but right now it’s a reasonable hope. If that’s what you want to call free will, go ahead, but better not ask yourself what determined your actions.

A popular justification for this type of free will is to insist that beneath the comparably large scales – like those of the molecules responsible for chemical interactions in your brain – there are smaller components which may have a remaining influence. If you don’t keep track of these smaller components, the behavior of the larger components might not be predictable. You can then say “free will is emergent” because of “higher level indeterminism”. It’s like saying that if I give you a robot and I don’t tell you what’s in the robot, then you can’t predict what the robot will do, and consequently it must have free will. I haven’t managed to bring up sufficient amounts of intellectual dishonesty to buy this argument.

But really you don’t have to bother with the details of these arguments, you just have to keep in mind that “indeterminism” doesn’t mean “free will”. Indeterminism just means there’s some element of randomness, either because that’s fundamental or because you have willfully ignored information on short distances. But there is still either no “freedom” or no “will”. Just try it. Try to write down one equation that does it. Just try it.

I have written about this a few times before, and according to the statistics these are some of the most-read pieces on my blog. Following these posts, I have also received a lot of emails from readers who seem seriously troubled by the claim that our best present knowledge about the laws of nature doesn’t allow for the existence of free will. To ease your existential worries, let me therefore spell out clearly what this means and what it doesn’t mean.

It doesn’t mean that you are not making decisions or are not making choices. Free will or not, you have to do the thinking to arrive at a conclusion, the answer to which you previously didn’t know. Absence of free will doesn’t mean either that you are somehow forced to do something you didn’t want to do. There isn’t anything external imposing on you. You are whatever makes the decisions. Besides this, if you don’t have free will you’ve never had it, and if this hasn’t bothered you before, why start worrying now?

This conclusion that free will doesn’t exist is so obvious that I can’t help but wonder why it isn’t widely accepted. The reason, I am afraid, is not scientific but political. Denying free will is considered politically incorrect because of a widespread myth that free will skepticism erodes the foundation of human civilization.

For example, a 2014 article in Scientific American addressed the question “What Happens To A Society That Does Not Believe in Free Will?” The piece is written by Azim F. Shariff, a Professor of Psychology, and Kathleen D. Vohs, a Professor of Excellence in Marketing (whatever that might mean).

In their essay, the authors argue that free will skepticism is dangerous: “[W]e see signs that a lack of belief in free will may end up tearing social organization apart,” they write. “[S]kepticism about free will erodes ethical behavior,” and “diminished belief in free will also seems to release urges to harm others.” And if that wasn’t scary enough already, they conclude that only the “belief in free will restrains people from engaging in the kind of wrongdoing that could unravel an ordered society.”

To begin with, I find it highly problematic to suggest that the answers to some scientific questions should be taboo because they might be upsetting. The authors don’t explicitly say this, but the message the article sends is pretty clear: if you do as much as suggest that free will doesn’t exist, you are encouraging people to harm others. So please read on before you grab the axe.

The conclusion that the authors draw is highly flawed. These psychology studies always work the same way: the study participants are engaged in some activity in which they receive information, either verbally or in writing, that free will doesn’t exist or is at least limited. After this, their likelihood of conducting “wrongdoing” is tested and compared to that of a control group. But the information the participants receive is highly misleading. It does not prime them to think they don’t have free will; it instead primes them to think that they are not responsible for their actions. Which is an entirely different thing.

Even if you don’t have free will, you are of course responsible for your actions, because “you” – that mass of neurons – are making, possibly bad, decisions. If the outcome of your thinking is socially undesirable because it puts other people at risk, those other people will try to prevent you from further wrongdoing. They will either try to fix you or lock you up. In other words, you will be held responsible. None of this has anything to do with free will. It’s merely a matter of finding a solution to a problem.

The only thing I conclude from these studies is that neither the scientists who conducted the research nor the study participants spent much time thinking about what the absence of free will really means. Yes, I’ve spent far too much time thinking about this.

The reason I am hitting on the free will issue is not that I want to collapse civilization, but that I am afraid the politically correct belief in free will hinders progress on the foundations of physics. Free will of the experimentalist is a relevant ingredient in the interpretation of quantum mechanics. Without free will, Bell’s theorem doesn’t hold, and all we have learned from it goes out the window.
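For the record, the relevant assumption is usually called measurement independence or statistical independence: in the derivation of Bell’s theorem, one requires that the hidden variables λ be uncorrelated with the detector settings a and b. Schematically:

```latex
% Statistical independence (``free choice'') assumption in Bell's theorem:
\rho(\lambda \,|\, a, b) = \rho(\lambda)
```

Drop this assumption, so that the distribution of λ may depend on the settings, and the Bell inequality can no longer be derived.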

This option of giving up free will in quantum mechanics goes under the name “superdeterminism” and is exceedingly unpopular. There seem to be but three people on the planet who work on this, ‘t Hooft, me, and a third person of whom I only learned from George Musser’s recent book (and whose name I’ve since forgotten). Chances are the three of us wouldn’t even agree on what we mean. It is highly probable we are missing something really important here, something that could very well be the basis of future technologies.

Who cares, you might think; buying into the collapse of the wave-function seems a small price to pay compared to the collapse of civilization. On that matter though, I side with Socrates: “The unexamined life is not worth living.”

Thursday, January 07, 2016

More information emerges about new proposal to solve black hole information loss problem

Soft hair. Redshifted.

In August last year, Stephen Hawking announced that he had been working with Malcolm Perry and Andrew Strominger on the black hole information loss problem, and that they were closing in on a solution. But little was explained other than that this solution rests on a symmetry group by the name of supertranslations.

Yesterday, Hawking, Perry, and Strominger posted a new paper on the arXiv that fills in a little more detail:
    Soft Hair on Black Holes
    Stephen W. Hawking, Malcolm J. Perry, Andrew Strominger
    arXiv:1601.00921
I haven’t had much time to think about this, but I didn’t want to leave you hanging, so here is a brief summary.

First of all, the paper seems only a first step in a longer argument. Several relevant questions are not addressed and I assume further work will follow. As the authors write: “Details will appear elsewhere.”

The present paper does not study information retrieval in general. It instead focuses on a particular type of information, the one contained in electrically charged particles. The benefit in doing this is that the quantum theory of electric fields is well understood.

Importantly, they are looking at black holes in asymptotically flat (Minkowski) space, not in asymptotically Anti-de-Sitter (AdS) space. This is relevant because string theorists believe that the black hole information loss problem doesn’t exist in asymptotic AdS space. They don’t know, however, how to extend this argument to asymptotically flat space or space with a positive cosmological constant. To the best of present knowledge we don’t live in AdS space, so understanding the case with a positive cosmological constant is necessary to describe what happens in the universe we actually inhabit.

In the usual treatment, a black hole counts only the net electric charge of particles as they fall in. The total charge is one of the three classical black hole “hairs,” next to mass and angular momentum. But all other details about the charges (eg in which chunks they came in) are lost: there is no way to store anything in or on an object that has no features, no “hairs”.

In the new paper the authors argue that the entire information about the infalling charges is stored on the horizon in the form of ‘soft photons’, that is, photons of zero energy. These photons are the “hair” which was previously believed to be absent.

Since these photons can carry information but have zero energy, the authors conclude that the vacuum is degenerate. A ‘degenerate’ state is one in which several distinct quantum states share the same energy. This means there are different vacuum states which can surround the black hole, and so the vacuum can hold and release information.

It is normally assumed that the vacuum state is unique. If it is not, this allows one to have information in the outgoing radiation (which is the ingoing vacuum). A vacuum degeneracy is thus a loophole in the argument originally made by Hawking according to which information must get lost.

What the ‘soft photons’ are isn’t further explained in the paper; they are simply identified with the action of certain operators and are supposedly the Goldstone bosons of a spontaneously broken symmetry. Or rather of an infinite number of symmetries that, basically, belong to the conserved charges of something akin to multipole moments. It sounds plausible, but the interpretation eludes me; I haven’t yet read the relevant references.

I think the argument goes basically like this: We can expand the electric field in form of all these (infinitely many) higher moments and show that each of them is associated with a conserved charge. Since the charge is conserved, the black hole can’t destroy it. Consequently, it must be maintained somehow. In the presence of a horizon, future infinity is not a Cauchy surface, so we add the horizon as boundary. And on this additional boundary we put the information that we know can’t get lost, which is what the soft photons are good for.
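If I read the paper correctly, the conserved charges are, schematically, of the form

```latex
% One conserved charge for each function \varepsilon on the sphere at infinity:
Q_\varepsilon = \frac{1}{e^2} \int_{\partial\Sigma} \varepsilon \, {*F}
```

where *F is the dual of the electromagnetic field strength and the integral is taken over a sphere at infinity or, for the additional boundary term, over the horizon. A constant ε reproduces the familiar total electric charge; the higher modes of ε supply the infinitely many additional conserved quantities.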

The new paper adds to Hawking’s previous short note by providing an argument for why the amount of information that can be stored this way by the black hole is not infinite, but instead bounded by the Bekenstein-Hawking entropy (ie proportional to the surface area). This is an important step to assure this idea is compatible with everything else we know about black holes. Their argument however is operational and not conceptual. It is based on saying, not that the excess degrees of freedom don't exist, but that they cannot be used by infalling matter to store information. Note that, if this argument is correct, the Bekenstein-Hawking entropy does not count the microstates of the black hole, it instead sets an upper limit to the possible number of microstates.
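For reference, the standard Bekenstein-Hawking formula says the entropy is one quarter of the horizon area A in Planck units:

```latex
S_{BH} = \frac{k_B\, c^3\, A}{4\, G\, \hbar}
```

So if their bound holds, the information that can be stored in the soft photon hair scales with the surface area, not with the volume.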

The authors don’t explain just how the information becomes physically encoded in the outgoing radiation, aside from writing down an operator. Neither, for that matter, do they demonstrate that by this method all of the information of the initial state can actually be stored and released. Since they focus on photons, they of course can’t do this anyway. But they don’t have an argument for how the method can be extended to all degrees of freedom. So, needless to say, I have to remain skeptical that they can live up to the promise.

In particular, I still don’t see that the conserved charges they are referring to actually encode all the information that’s in the field configuration. For all I can tell, they only encode the information in the angular directions, not the information in the radial direction. If I were to throw in two concentric shells of matter, I don’t see how the asymptotic expansion could possibly capture the difference between two shells and one shell, as long as the total charge (or mass) is identical. The only way I see to get around this issue is to just postulate that the boundary at infinity does indeed contain all the information. And that, in turn, is something we only know how to make work in AdS space. (At least it’s believed to work in this case.)

Also, the argument for why the charges on the horizon are bounded and the limit reproduces the Bekenstein-Hawking entropy irks me. I would have expected the argument for the bound to rely on taking into account that not all configurations that one can encode in the infinite distance will actually go on to form black holes.

Having said that, I think it’s correct that a degeneracy of the vacuum state would solve the black hole information loss problem. It’s such an obvious solution that you have to wonder why nobody thought of this before, except that I thought of it before. In a note from 2012, I showed that a vacuum degeneracy is the conclusion one is forced to draw from the firewall problem. And in a follow-up paper I demonstrated explicitly how this solves the problem. I didn’t have a mechanism though to transfer the information into the outgoing radiation. So now I’m tempted to look at this, despite my best intentions to not touch the topic again...

In summary, I am not at all convinced that the new idea proposed by Hawking, Perry, and Strominger solves the information loss problem. But it seems an interesting avenue that is worth further exploration. And I am sure we will see further exploration...