Friday, December 11, 2015

Quantum gravity could be observable in the oscillation frequency of heavy quantum states

Observable consequences of quantum gravity were long thought inaccessible to experiment. But we theorists might have underestimated our experimental colleagues. Technology has now advanced so much that macroscopic objects, weighing as much as a billionth of a gram, can be coaxed to behave as quantum objects. A billionth of a gram might not sound like much, but it is huge compared to the elementary particles that quantum physics is normally all about. It might indeed be enough to become sensitive to quantum gravitational effects.

One of the most general predictions of quantum gravity is that it induces a limit to the resolution of structures. This limit is at an exceedingly tiny distance, the Planck length, about 10⁻³³ cm. There is no way we can directly probe it. However, theoretically the presence of such a minimal length scale leads to a modification of quantum field theory. This modified theory is generally thought of as an effective description of quantum gravitational effects.
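A common way to see how a minimal length enters is through a generalized uncertainty relation; the exact form is model-dependent, but schematically

$$\Delta x\,\Delta p \;\gtrsim\; \frac{\hbar}{2}\left(1 + \beta\,\frac{\ell_{\rm Pl}^2}{\hbar^2}\,\Delta p^2\right) \quad\Longrightarrow\quad \Delta x \;\gtrsim\; \sqrt{\beta}\;\ell_{\rm Pl}\,,$$

where ℓ_Pl is the Planck length and β a dimensionless parameter of order one. No matter how large the momentum uncertainty, positions cannot be resolved below roughly the Planck length.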

These models with a minimal length scale come in three types. One in which Poincaré-invariance, the symmetry of Special Relativity, is broken by the introduction of a preferred frame. One in which Poincaré-symmetry is deformed for freely propagating particles. And one in which it is deformed, but only for virtual particles.

The first two types of these models make predictions that have already been ruled out. The third one is the most plausible model because it leaves Special Relativity intact in all observables – the deformation only enters in intermediate steps. But for this reason, this type of model is also extremely hard to test. I worked on this ten years ago, but eventually got so frustrated that I abandoned the topic: whatever observable I computed, it was dozens of orders of magnitude below measurement precision.

A recent paper by Alessio Belenchia et al now showed me that I might have given up too early. If one asks how such a modification of quantum mechanics affects the motion of heavy quantum mechanical oscillators, Planck-scale sensitivity is only a few orders of magnitude away.
    Tests of Quantum Gravity induced non-locality via opto-mechanical quantum oscillators
    Alessio Belenchia, Dionigi M. T. Benincasa, Stefano Liberati, Francesco Marin, Francesco Marino, Antonello Ortolan
    arXiv:1512.02083 [gr-qc]

The title of their paper refers to “non-locality” because the modification due to a minimal length leads to higher-order terms in the Lagrangian. In fact, there have to be terms up to infinite order. This is a very tame type of non-locality, because it is confined to Planck scale distances. How strong the modification is, however, also depends on the mass of the object. So if you can get a quite massive object to display quantum behavior, then you can increase your sensitivity to effects that might be indicative of quantum gravity.
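Schematically – and this is a generic form, not necessarily the specific operator used by Belenchia et al – such a non-local modification replaces the wave operator in the Lagrangian by a function of it,

$$\Box \;\to\; \Box\, f(\ell^2\Box) \;=\; \Box\left(1 + c_1\,\ell^2\Box + c_2\,\ell^4\Box^2 + \dots\right),$$

where ℓ is of the order of the Planck length. Expanding the function produces the infinite tower of higher-order derivative terms, which is why the modification is called non-local, yet its effects are confined to distances of order ℓ.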

This has been tried before. A bad example was this attempt, which implicitly used models of either the first or second type, which are already ruled out by experiment. A more recent and much more promising attempt was this proposal. However, they wanted to test a model that is not very plausible on theoretical grounds, so their test is of limited interest. As I mentioned in my blogpost, however, it was a remarkable proposal because it was the first demonstration that the sensitivity to Planck scale effects can now be reached.

The new paper uses a system that is pretty much the same as that in the previous proposal. It’s a small disk of silicon, weighing a nanogram or so, that is trapped in an electromagnetic potential and cooled down to some mK. In this trap, the disk oscillates at a frequency that depends on the mass and the potential. This is a pure quantum effect – it is observable and it has been observed.

Belenchia et al calculate how this oscillation would be modified if the non-local correction terms were present, and find that the oscillation is no longer simply harmonic but becomes more complicated (see figure). They then estimate the size of the effect and come to the conclusion that, while it is challenging, existing technology is only a few orders of magnitude away from reaching Planck scale precision.

The motion of the mean-value, x, of the oscillator's position in the potential as a function of time, t. The black curve shows the motion without quantum gravitational effects, the red curve shows the motion with quantum gravitational effects (greatly enlarged for visibility). The experiment relies on measuring the difference.
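To get a rough feeling for what such a measurement looks for, here is a little toy computation – emphatically not the authors’ actual equations of motion, just a harmonic oscillator with a tiny anharmonic term put in by hand to stand in for the Planck-scale correction:

```python
import numpy as np

# Toy model: compare an ordinary harmonic oscillator with one that has a
# tiny correction term added by hand. This stands in for, but is NOT,
# the actual modification computed by Belenchia et al.

omega = 1.0          # trap frequency (arbitrary units)
eps = 1e-3           # strength of the toy correction
dt, steps = 1e-3, 20000

def evolve(eps):
    """Velocity-Verlet integration of x'' = -omega^2 x - eps x^3."""
    x, v = 1.0, 0.0                       # start displaced, at rest
    a = -omega**2 * x - eps * x**3
    xs = []
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt**2
        a_new = -omega**2 * x - eps * x**3
        v += 0.5 * (a + a_new) * dt
        a = a_new
        xs.append(x)
    return np.array(xs)

difference = evolve(eps) - evolve(0.0)
print("maximal deviation from harmonic motion:", np.abs(difference).max())
```

The experiment’s job is to resolve exactly this kind of tiny, slowly accumulating deviation between the measured motion and the prediction of unmodified quantum mechanics.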
I find this a very exciting development because both the phenomenological model that is being tested here and the experimental precision seem plausible to me. I have recently had some second and third thoughts about the model in question (it’s complicated) and believe that it has some serious shortcomings, but I don’t think that these matter in the limit considered here.

It is very likely that we will see more proposals for testing quantum gravity with heavy quantum-mechanical probes, because once sensitivity reaches a certain parameter range, there suddenly tend to be loads of opportunities. At this point I have become tentatively optimistic that we might indeed be able to measure quantum gravitational effects within, say, the next two decades. I am almost tempted to start working on this again...

Saturday, December 05, 2015

What Fermilab’s Holometer Experiment teaches us about Quantum Gravity.

The Fermilab Holometer searched for correlations between two interferometers
Tl;dr: Nothing. It teaches us nothing. It just wasted time and money.

The Holometer experiment at Fermilab just published the results of their search for holographic space-time foam. They didn’t find any evidence for noise that could be indicative of quantum gravity.

The idea of the experiment was to find correlations in quantum gravitational fluctuations of space-time by using two very sensitive interferometers and comparing their measurements. Quantum gravitational fluctuations are exceedingly tiny, and in all existing models they are far too small to be picked up by interferometers. But the head of the experiment, Craig Hogan, argued that, if the holographic principle is valid, then the fluctuations should be large enough to be detectable by the experiment.

The holographic principle is the idea that everything that happens in a volume can be encoded on the volume’s surface. Many physicists believe that the principle is realized in nature. If that was so, it would indeed imply that fluctuations have correlations. But these correlations are not of the type that the experiment could test for. They are far too subtle to be measurable in this way.

In physics, all theories have to be expressed in form of a consistent mathematical description. Mathematical consistency is an extremely strong constraint when combined with the requirement that the theory also has to agree with all observations we already have. There is very little that can be changed in the existing theories that a) leads to new effects and b) does not spoil the compatibility with existing data. It’s not an easy job.

Hogan didn’t have a theory. It’s not that I am just grumpy – he said so himself: “It's a slight cheat because I don't have a theory,” as quoted by Michael Moyer in a 2012 Scientific American article.

From what I have extracted from Hogan’s papers on the arXiv, he tried twice to construct a theory that would capture his idea of holographic noise. The first violated Lorentz-invariance and was thus already ruled out by other data. The second violated basic properties of quantum mechanics and was thus already ruled out too. In the end he seems to have given up finding a theory. Indeed, it’s not an easy job.

Searching for a prediction based on a hunch rather than a theory makes it exceedingly unlikely that something will be found. That is because there is no proof that the effect would even be consistent with already existing data, which is difficult to achieve. But Hogan isn’t a no-one; he is head of Fermilab’s Center for Particle Astrophysics. I assume he got funding for his experiment by short-circuiting peer review. A proposal for such an experiment would never have passed peer review – it simply doesn’t live up to today’s quality standards in physics.

I wasn’t the only one perplexed about this experiment becoming reality. Hogan relates the following anecdote: “Lenny [Susskind] has an idea of how the holographic principle works, and this isn’t it. He’s pretty sure that we’re not going to see anything. We were at a conference last year, and he said that he would slit his throat if we saw this effect.” This is a quote from another Scientific American article. Oh, yes, Hogan definitely got plenty of press coverage for his idea.

Ok, so maybe I am grumpy. That’s because there are hundreds of people working on developing testable models for quantum gravitational effects, each of whom could tell you about more promising experiments than this. It’s a research area by the name of quantum gravity phenomenology. The whole point of quantum gravity phenomenology is to make sure that new experiments test promising ranges of parameter space, rather than just wasting money.

I might have kept my grumpiness to myself, but then the Fermilab Press release informed me that “Hogan is already putting forth a new model of holographic structure that would require similar instruments of the same sensitivity, but different configurations sensitive to the rotation of space. The Holometer, he said, will serve as a template for an entirely new field of experimental science.”

An entirely new field of experimental science, based on models that either don’t exist or are ruled out already and that, when put to test, morph into new ideas that require higher sensitivity. That scared me so much I thought somebody has to spell it out: I sincerely hope that Fermilab won’t pump any more money into this unless the idea goes through rigorous peer review. It isn’t just annoying. It’s a slap in the face of many hard-working physicists whose proposals for experiments are of much higher quality but who don’t get funding.

At the very least, if you have a model for what you test, you can rule out the model. With the Holometer you can’t even rule out anything, because there is no theory and no model that would be tested with it. So what we have learned is nothing. I can only hope that at least this episode draws some attention to the necessity of having a mathematically consistent model. It’s not an easy job. But it has to be done.

The only good news here is that Lenny Susskind isn’t going to slit his throat.

Thursday, December 03, 2015

Peer Review and its Discontents [slide show]

I have made a slide-show of my Monday talk at the Munin conference and managed to squeeze a one-hour lecture into 23 minutes. Don't expect too much, nothing happens in this video, it's just me mumbling over the slides (no singing either ;)). I was also recorded on Monday, but if you prefer the version with me walking around and talking for 60 minutes you'll have to wait a few days until the recording goes online.



I am very much interested in finding a practical solution to these problems. If you have proposals to make, please get in touch with me or leave a comment.

Tuesday, December 01, 2015

Hawking radiation is not produced at the black hole horizon.

Stephen Hawking’s “Brief History of Time” was one of the first popular science books I read, and I hated it. I hated it because I didn’t understand it. My frustration with this book is a big part of the reason I’m a physicist today – at least I know who to blame.

I don’t hate the book any more – admittedly Hawking did a remarkable job of sparking public interest in the fundamental questions raised by black hole physics. But every once in a while I still want to punch the damned book. Not because I didn’t understand it, but because it convinced so many other people they did understand it.

In his book, Hawking painted a neat picture for black hole evaporation that is now widely used. According to this picture, black holes evaporate because pairs of virtual particles nearby the horizon are ripped apart by tidal forces. One of the particles gets caught behind the horizon and falls in, the other escapes. The result is a steady emission of particles from the black hole horizon. It’s simple, it’s intuitive, and it’s wrong.

Hawking’s is an illustrative picture, but nothing more than that. In reality – you will not be surprised to hear – the situation is more complicated.

The pairs of particles – to the extent that it makes sense to speak of particles at all – are not sharply localized. They are instead blurred out over a distance comparable to the black hole radius. The pairs do not start out as points, but as diffuse clouds smeared all around the black hole, and they only begin to separate when the escapee has retreated from the horizon a distance comparable to the black hole’s radius. This simple image that Hawking provided for the non-specialist is not backed up by the mathematics. It contains an element of the truth, but take it too seriously and it becomes highly misleading.

That this image isn’t accurate is not a new insight – it’s been known since the late 1970s that Hawking radiation is not produced in the immediate vicinity of the horizon. Already in Birrell and Davies’ textbook it is clearly spelled out that taking the particles from the far vicinity of the black hole and tracing them back to the horizon – thereby increasing (“blueshifting”) their frequency – does not deliver the accurate description in the horizon area. The two parts of the Hawking-pairs blur into each other in the horizon area, and to meaningfully speak of particles one should instead use a different, local, notion of particles. Better even, one should stick to calculating actually observable quantities like the stress-energy tensor.

That the particle pairs are not created in the immediate vicinity of the horizon was necessary to solve a conundrum that bothered physicists back then. The temperature of the black hole radiation is very small, but that is the temperature measured far away from the black hole. For this radiation to have been able to escape, it must have started out with an enormous energy close by the black hole horizon. But if such an enormous energy were located there, then an infalling observer should notice and burn to ashes. This however violates the equivalence principle, according to which the infalling observer shouldn’t notice anything unusual upon crossing the horizon.
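For orientation: the temperature that the far-away observer measures is the usual Hawking temperature,

$$T_H \;=\; \frac{\hbar\, c^3}{8\pi\, G\, M\, k_B} \;\approx\; 6\times 10^{-8}\,{\rm K}\;\frac{M_\odot}{M}\,,$$

which for astrophysical black holes is absurdly small. The apparent problem only arises if one naively blueshifts this radiation all the way back to the horizon.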

This problem is resolved by taking into account that tracing back the outgoing radiation to the horizon does not give a physically meaningful result. If one instead calculates the stress-energy in the vicinity of the horizon, one finds that it is small and remains small even upon horizon crossing. It is so small that an observer would only be able to tell the difference to flat space on distances comparable to the black hole radius (which is also the curvature scale). Everything fits nicely, and no disagreement with the equivalence principle comes about.

[I know this sounds very similar to the firewall problem that has been discussed more recently but it’s a different issue. The firewall problem comes about because if one requires the outgoing particles to carry information, then the correlation with the ingoing particles gets destroyed. This prevents a suitable cancellation in the near-horizon area. Again however one can criticize this conclusion by complaining that in the original “firewall paper” the stress-energy wasn’t calculated. I don’t think this is the origin of the problem, but other people do.]

The actual reason that black holes emit particles, the one that is backed up by mathematics, is that different observers have different notions of particles.

We are used to a particle either being there or not being there, but this is only the case so long as we move relative to each other at constant velocity. If an observer is accelerated, his definition of what a particle is changes. What looks like empty space for an observer at constant velocity suddenly seems to contain particles for an accelerated observer. This effect, named after Bill Unruh – who discovered it almost simultaneously with Hawking’s finding that black holes emit radiation – is exceedingly tiny for accelerations we experience in daily life, thus we never notice it.
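For the record, the temperature that an observer with acceleration a assigns to the vacuum is

$$T_U \;=\; \frac{\hbar\, a}{2\pi\, c\, k_B} \;\approx\; 4\times 10^{-20}\,{\rm K} \quad\text{for}\quad a \approx 10\,{\rm m/s^2}\,,$$

so for accelerations comparable to Earth’s gravity the corresponding temperature is hopelessly far below anything we could ever notice.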

The Unruh effect is very closely related to the Hawking effect by which black holes evaporate. Matter that collapses to a black hole creates a dynamical space-time that gives rise to an acceleration between observers in the past and in the future. The result is that the space-time around the collapsing matter, which did not contain particles before the black hole was formed, contains thermal radiation in the late stages of collapse. The Hawking radiation that is emitted from the black hole is the same quantum state that, before the collapse, was simply the vacuum surrounding the matter.

That, really, is the origin of particle emission from black holes: what is a “particle” depends on the observer. Not quite as simple, but dramatically more accurate.

The image provided by Hawking with the virtual particle pairs close by the horizon has been so stunningly successful that now even some physicists believe it is what really happens. The knowledge that blueshifting the radiation from infinity back to the horizon gives a grossly wrong stress-energy seems to have gotten buried in the literature. Unfortunately, misunderstanding the relation between the flux of Hawking-particles in the far distance and in the vicinity of the black hole leads one to erroneously conclude that the flux is much larger than it is. Getting this relation wrong is for example the reason why Mersini-Houghton came to falsely conclude that black holes don’t exist.

It seems about time someone reminds the community of this. And here comes Steve Giddings.

Steve Giddings is the nonlocal hero of George Musser’s new book “Spooky Action at a Distance.” For the past two decades or so he’s been on a mission to convince his colleagues that nonlocality is necessary to resolve the black hole information loss problem. I spent a year in Santa Barbara a few doors down the corridor from Steve, but I liked his papers better when he was still pursuing the idea that black hole remnants keep the information. Be that as it may, Steve knows black holes inside and out, and he has a new note on the arXiv that discusses the question where Hawking radiation originates.

In his paper, Steve collects the existing arguments why we know the pairs of the Hawking radiation are not created in the vicinity of the horizon, and he adds some new arguments. He estimates the effective area from which Hawking-radiation is emitted and finds it to be a sphere with a radius considerably larger than the black hole. He also estimates the width of wave-packets of Hawking radiation and shows that it is much larger than the separation of the wave-packet’s center from the horizon. This nicely fits with some earlier work of his that demonstrated that the partner particles do not separate from each other until after they have left the vicinity of the black hole.

All this supports the conclusion that Hawking particles are not created in the near vicinity of the horizon, but instead come from a region surrounding the black hole with a few times the black hole’s radius.

Steve’s paper has an amusing acknowledgement in which he thanks Don Marolf for confirming that some of their colleagues indeed believe that Hawking radiation is created close by the horizon. I can understand this. When I first noticed this misunderstanding I also couldn’t quite believe it. I kept pointing towards Birrell-Davies but nobody was listening. In the end I almost thought I was the one who got it wrong. So, I for sure am very glad about Steve’s paper because now, rather than citing a 40 year old textbook, I can just cite his paper.

If Hawking’s book taught me one thing, it’s that sticky visual metaphors can be a curse as much as they can be a blessing.

Friday, November 27, 2015

Away note

I’ll be traveling the next two weeks. First I’ll be going to a conference on “scholarly publishing” in the picturesque city of Tromsø. The “o” with the slash is Norwegian, and the trip is going to beat my personal farthest-North record, currently held by Reykjavik (or some village with an unpronounceable name a little North of that).

I don’t have the faintest clue why they invited me, to give a keynote lecture of all things, in company of some Nobelprize winner. But I figured I’d go and tell them what’s going wrong with peer review, at least that will be entertaining. Thanks to a stomach bug that my husband brought back from India, by means of which I lost an estimated 800 pounds in 3 days, “tell them what's going wrong with peer review” is so far pretty much the whole plan for the lecture.

The week after I’ll be going to a workshop in Munich on the question “Why trust a theory?”. This event is organized by the Munich Center for Mathematical Philosophy, where I already attended an interesting workshop two years ago. This time the workshop is dedicated to the topics raised in Richard Dawid’s book “String Theory and the Scientific Method" which I reviewed here. The topic has since been a lot on my mind and I’m looking forward to the workshop.

Monday, November 23, 2015

Dear Dr B: Can you think of a single advancement in theoretical physics, other than speculation, since the early 1980's?

This question was asked by Steve Coyler, who was a frequent commenter on this blog before facebook ate him up. His full question is:
“Can you think of a single advancement in theoretical physics, other than speculation like Strings and Loops and Safe Gravity and Twistors, and confirming things like the Higgs Boson and pentaquarks at the LHC, since Politizer and Wilczek and Gross (and Coleman) did their thing re QCD in the early 1980's?”
Dear Steve:

What counts as “advancement” is somewhat subjective – one could argue that every published paper is an advancement of sorts. But I guess you are asking for breakthroughs that have generated new research areas. I also interpreted your question to have an emphasis on “theoretical,” so I will leave aside mostly experimental advances, like electron lasers, attosecond spectroscopy, quantum dots, and so on.

Admittedly your question pains me considerably. Not only does it demonstrate you have swallowed the stories about a crisis in physics that the media warm up and serve every couple of months. It also shows that I haven’t gotten across the message I tried to convey in this earlier post: the topics which dominate the media aren’t the topics that dominate actual research.

The impression you get about physics from reading science news outlets is extremely distorted. The vast majority of physicists have nothing to do with quantum gravity, twistors, or the multiverse. Instead they work in fields that are barely if ever mentioned in the news, like atomic and nuclear physics, quantum optics, material physics, plasma physics, photonics, or chemical physics. In all these areas theory and experiment are very closely tied together, and the path to patents and applications is short.

Unfortunately, advances in theoretical physics get pretty much no media coverage whatsoever. They only make it into the news if they were experimentally confirmed – and then everybody cheers the experimentalists, not the theorists. The exceptions are the higher speculations that you mention, which are deemed news-worthy because they supposedly show that “everything we thought about something is wrong.” These headlines are themselves almost always wrong.

Having said that, your question is difficult for me to answer. I’m not a walking and talking encyclopedia of contemporary physics, and in the early 1980s I was in Kindergarten. The origin of many research areas that are hot today isn’t well documented because their history hasn’t yet been written. This is to warn you that I might be off a little with the timing on the items below.

I list for you the first topics that come to my mind, and I invite readers to submit additions in the comments:

  • Topological insulators. That’s one of the currently hottest topics in physics, and many people expect a Nobelprize to go into this area in the near future. A topological insulator is a material that conducts only on its surface. They were first predicted theoretically in the mid 80s.

  • Quantum error correction, quantum logical gates, quantum computing. The idea of quantum computing came up in the 1980s, and most of the understanding of quantum computation and quantum information is only two decades old. [Corrected date: See comment by Matt.]

  • Quantum cryptography. While the first discussion of quantum cryptography predates the 1980s, the field really only took off in the last two decades. Also one of the hottest topics today because first applications are now coming up. [Corrected date: See comment by Matt.]

  • Quantum phase transitions, quantum critical points. I haven’t been able to find out exactly when this was first discussed, but it’s an area that has flourished in the last 20 years or so. This is work mainly led by theory, not experiment.

  • Metamaterials. While materials with a negative refraction index were first discussed in the mid 60s, this wasn’t paid much attention to until the late 1990s, when further theoretical work demonstrated that materials with negative permittivity and permeability should exist. The first experimental confirmation came in 2000, and since then the field has exploded. This is another area which will probably see a Nobelprize in the near future. You have read in the news about this under the headline “invisibility cloak.”

  • Dirac (Weyl) materials. These are materials in which excitations behave like Dirac (Weyl) fermions. Graphene is an example. Again I don’t really know when this was first predicted, but I think it was past 1980.

  • Fractional Quantum Hall Effect. The theoretical explanation was provided by Laughlin in 1983, and he was awarded a Nobelprize in 1998, together with two experimentalists. [Added, see comment by Flavio.]

  • Inflation. Inflation is the rapid expansion in the early universe, a theoretical prediction that served to solve a lot of problems. It was developed in the early 1980s.

  • Effective field theory/Renormalization group running. While the origins of this framework go back to Wilson in 1975, this field has only taken off in the mid 90s. This topic too is about to become hot, because the breakdown of effective field theory is one of the possible explanations for the unnatural parameters of the Standard Model indicated by recent LHC data.

  • Quantum Integrable Systems. This is a largely theoretical field that is still waiting to see its experimental prime-time. One might argue that the first papers on the topic were written already by Bethe in the 1930s, but most of the work has been in the last 20 years or so.

  • Conformal field theory. Like the previous topic, this area is still heavily dominated by theory and is waiting for its time to come. It started taking off in the mid 1990s. It was the topic of one of the first-ever arXiv papers.

  • Geometrical frustration, spin glasses. Geometrically frustrated materials have a large entropy even at zero temperature. You have read about these in the context of monopoles in spin-ice. Much of the theoretical work on this started only in the mid 1980s and it’s still a very active research area.

  • Cosmological Perturbation Theory. This is the mathematical framework necessary to describe the formation of structures in the universe. It was developed starting in the 1980s.

  • Gauge-gravity duality (AdS/CFT). This is a relation between different types of field theories which was discovered in the late 1990s. Its applications are still being explored, but it’s one of the most promising research directions in quantum field theory at the moment.
If you want to get a visual impression for what is going on in physics you can browse arxiv papers using Paperscape.org. You see there all arxiv papers as dots. The larger the dot, the more citations. The images in this blogpost are screenshots from Paperscape.

You can follow this blog on facebook here.

Tuesday, November 17, 2015

The scientific method is not a myth

Heliocentrism, natural selection, plate tectonics – much of what is now accepted fact was once controversial. Paradigm-shifting ideas were, at their time, often considered provocative. Consequently the way to truth must be pissing off as many people as possible by making totally idiotic statements. Like declaring that the scientific method is a myth, which was most recently proclaimed by Daniel Thurs on Discover Blogs.

Even worse, his article turns out to be a book excerpt. This hits me hard after just having discovered that someone by the name of Matt Ridley also published a book full of misconceptions about how science supposedly works. Both fellows seem to have the same misunderstanding: the belief that science is a self-organized system and therefore operates without method – in Thurs’ case – and without governmental funding – in Ridley’s case. That science is self-organized is correct. But to conclude from this that progress comes from nothing is wrong.

I blame Adam Smith for all this mistaken faith in self-organization. Smith used the “invisible hand” as a metaphor for the regulation of prices in a free market economy. If the actors in the market have full information and act perfectly rational, then all goods should eventually be priced at their actual value, maximizing the benefit for everyone involved. And ever since Smith, self-organization has been successfully used out of context.

In a free market, the value of the good is whatever price this ideal market would lead to. This might seem circular but it isn’t: It’s a well-defined notion, at least in principle. The main argument of neo-conservatism is that any kind of additional regulation, like taxes, fees, or socialization of services, will only lead to inefficiencies.

There are many things wrong with this ideal of a self-regulating free market. To begin with, real actors are neither perfectly rational nor do they ever have full information. And then the optimal prices aren’t unique; instead there are infinitely many optimal pricing schemes, so one needs an additional selection mechanism. But oversimplified as it is, this model, now known as equilibrium economics, explains why free markets work well, or at least better than planned economies.

No, the main problem with trust in self-optimization isn’t the many shortcomings of equilibrium economics. The main problem is the failure to see that the system itself must be arranged suitably so that it can optimize something, preferably something you want to be optimized.

A free market needs, besides fiat money, rules that must be obeyed by actors. They must fulfil contracts, aren’t allowed to have secret information, and can’t form monopolies – any such behavior would prevent the market from fulfilling its function. To some extent violations of these rules can be tolerated, and the system itself would punish the dissidents. But if too many actors break the rules, self-optimization would fail and chaos would result.

Then of course you may want to question whether the free market actually optimizes what you desire. In a free market, future discounting and personal risk tends to be higher than many people prefer, which is why all democracies have put in place additional regulations that shift the optimum away from maximal profit to something we perceive as more important to our well-being. But that’s a different story that shall be told another time.

The scientific system in many regards works similar to a free market. Unfortunately the market of ideas isn’t as free as it should be to really work efficiently, but by and large it works well. As with the market economies though, it only works if the system is set up suitably. And then it optimizes only what it’s designed to optimize, so you better configure it carefully.

The development of good scientific theories and the pricing of goods are examples for adaptive systems, and so is natural selection. Such adaptive systems generally work in a circle of four steps:
  1. Modification: A set of elements that can be modified.
  2. Evaluation: A mechanism to evaluate each element according to a measure. It’s this measure that is being optimized.
  3. Feedback: A way to feed the outcome of the evaluation back into the system.
  4. Reaction: A reaction to the feedback that optimizes elements according to the measure by another modification.
With these mechanisms in place, the system will be able to self-optimize according to whatever measure you have given it, by reiterating a cycle going through steps one to four.
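In code, such a self-optimizing loop is almost trivial to write down – the following is a toy sketch with a made-up measure, only meant to make the four steps explicit:

```python
import random

# A minimal adaptive system. The "elements" are numbers, the "measure"
# is deliberately silly (closeness to 42); what matters is the cycle:
# modification -> evaluation -> feedback -> reaction.

def measure(element):
    return -(element - 42.0) ** 2        # step 2: evaluation

elements = [random.uniform(0.0, 100.0) for _ in range(20)]

for generation in range(200):
    variants = [e + random.gauss(0.0, 1.0) for e in elements]       # step 1: modification
    scored = sorted(elements + variants, key=measure, reverse=True) # steps 2+3: evaluate, feed back
    elements = scored[:20]                                          # step 4: reaction

print("best element found:", max(elements, key=measure))
```

Whatever you plug in as the measure is what the loop will optimize – which is exactly the point: the system optimizes what it is set up to optimize, nothing more.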

In the economy the set of elements are priced goods. The evaluation is whether the goods sell. The feedback is the vendor being able to tell how many goods sell. The reaction is to either change the prices or improve the goods. What is being optimized is the satisfaction (“utility”) of vendors and consumers.

In natural selection the set of elements are genes. The evaluation is whether the organism thrives. The feedback is the dependence of the amount of offspring on the organisms’ well-being. The reaction is survival or extinction. What is being optimized are survival chances (“fitness”).

In science the set of elements are hypotheses. The evaluation is whether they are useful. The feedback is the test of hypotheses. The reaction is that scientists modify or discard hypotheses that don’t work. What is being optimized in the scientific system depends on how you define “useful.” It once used to mean predictive, yet if you look at high energy physics today you might be tempted to think it’s instead mathematical elegance. But that’s a different story that shall be told another time.

That some systems optimize a set of elements according to certain criteria is not self-evident and doesn’t come from nothing. There are many ways systems can fail at this, for example because feedback is missing or a reaction isn’t targeted enough. A good example for lacking feedback is the administration of higher education institutions. They operate incredibly inefficiently, to the extent that the only way one can work with them is by circumvention. The reason is that, by my own experience, it’s next to impossible to fix obviously nonsensical policies or to boot incompetent administrative personnel.

Natural selection, to take another example, wouldn’t work if genetic mutations scrambled the genetic code too much because whole generations would be entirely unviable and feedback wasn’t possible. Or take the free market. If we’d all agree that tomorrow we don’t believe in the value of our currency any more, the whole system would come down.

Back to science.

Self-optimization by feedback in science, now known as the scientific method, was far from obvious for people in the middle ages. It seems difficult to fathom today how they could not have known. But to see how this could be you only have to look at fields where they still don’t have a scientific method, like much of the social and political sciences. They’re not testing hypotheses so much as trying to come up with narratives or interpretations because most of their models don’t make testable predictions. For a long time, this is exactly what the natural sciences also were about: They were trying to find narratives, they were trying to make sense. Quantification, prediction, and application came much later, and only then could the feedback cycle be closed.

We are so used to rapid technological progress now that we forget it didn’t use to be this way. For someone living 2000 years ago, the world must have appeared comparably static and unchanging. The idea that developing theories about nature allows us to shape our environment to better suit human needs is only a few hundred years old. And now that we are able to collect and handle sufficient amounts of data to study social systems, the feedback on hypotheses in this area will probably also become more immediate. This is another opportunity to shape our environment better to our needs, by recognizing just which setup makes a system optimize what measure. That includes our political systems as well as our scientific systems.

The four steps that an adaptive system needs to cycle through don’t come from nothing. In science, the most relevant restriction is that we can’t just randomly generate hypotheses because we wouldn’t be able to test and evaluate them all. This is why science heavily relies on education standards, peer review, and requires new hypotheses to tightly fit into existing knowledge. We also need guidelines for good scientific conduct, reproducibility, and a mechanism to give credits to scientists with successful ideas. Take away any of that and the system wouldn’t work.

The often-depicted cycle of the scientific method, consisting of hypotheses-generation and subsequent testing, is incomplete and lacks details, but it’s correct in its core. The scientific method is not a myth.


Really I think today anybody can write a book about whatever idiotic idea comes to their mind. I suppose the time has come for me to join the club.

Monday, November 16, 2015

I am hiring: Postdoc in AdS/CFT applications to condensed matter

I am hiring a postdoc for a 3-year position based at Nordita in Stockholm. The research is project-bound, funded by a grant from the Swedish Research Council. I am looking for someone with a background in AdS/CFT applications to condensed matter and/or analogue gravity. If you want to know what the project is about, have a look at these recent papers. It’s a good contract with full benefits. Please submit your application documents (CV, research interests, at least two letters) here. Further questions should be addressed to hossi[at]nordita.org

Thursday, November 12, 2015

Mysteriously quiet space baffles researchers

The Parkes Telescope. [Image Source]

Astrophysicists have concluded the most precise search yet for the gravitational wave background created by supermassive black hole mergers. But the expected signal isn’t there.


Last month, Lawrence Krauss rumored that the newly updated gravitational wave detector LIGO had seen its first signal. The news spread quickly – and was shot down almost as quickly. The new detector still had to be calibrated, a member of the collaboration explained, and a week later it emerged that the signal was probably a test run.


While this rumor caught everybody’s attention, a surprise find from another gravitational wave experiment almost drowned in the noise. The Parkes Pulsar Timing Array Project just published results from analyzing 11 years’ worth of data in which they expected to find evidence for gravitational waves created by mergers of supermassive black holes. The sensitivity of their experiment is well within the regime where the signal was predicted to be present. But the researchers didn’t find anything. Spacetime, it seems, is eerily quiet.

The Pulsar Timing Array project uses the 64 m Parkes radio telescope in Australia to monitor regularly flashing light sources in our galaxy. Known as pulsars, such objects are thought to be created in some binary systems, where two stars orbit around a common center. When a neutron star succeeds in accreting mass from the companion star, an accretion disk forms and starts to emit large amounts of particles. Due to the rapid rotation of the system, this emission goes into one particular direction. Since we can only observe the signal when it is aimed at our telescopes, the source seems to turn on and off in regular intervals: A pulsar has been created.

The astrophysicists on the lookout for gravitational waves use the fastest-spinning pulsars as enormously precise galactic clocks. These millisecond pulsars rotate so reliably that their pulses get measurably distorted already by minuscule disturbances in spacetime. Much like buoys move with waves on the water, pulsars move with the gravitational waves when space and time are stretched. In this way, the precise arrival times of the pulsars’ signals on Earth get distorted. The millisecond pulsars in our galaxy are thus nothing but a huge gravitational wave detector that nature has given us for free.

Take the pulsar with the catchy name PSR J1909-3744. It flashes us every 2.95 milliseconds, a hundred times in the blink of an eye. And, as the new experiment reveals, it does so to a precision within a few microseconds, year after year after year. This tells the researchers that the noise they expected from supermassive black hole mergers is not there.

The reason for this missing signal is a great puzzle right now. Most known galaxies, including our own, seem to host huge black holes with masses of more than a million times that of our Sun. And in the vastness of space and on cosmological times, galaxies bump into each other every once in a while. If that happens, they most often combine to a larger galaxy and, after some period of turmoil, the new galaxy will have a supermassive binary black hole at its center. These binary systems emit gravitational waves which should be found throughout the entire universe.

The prevalence of gravitational waves from supermassive binary black holes can be estimated from the probability of a galaxy to host a black hole, and the frequency in which galaxies merge. The emission of gravitational waves in these systems is a consequence of Einstein’s theory of General Relativity. Combine the existing observations with the calculation for the emission, and you get an estimate for the background noise from gravitational waves. The pulsar timing should be sensitive to this noise. But the new measurement is inconsistent with all existing models for the gravitational wave background in this frequency range.
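The expected background is commonly summarized in a characteristic-strain spectrum of the power-law form

$$h_c(f) \;=\; A\left(\frac{f}{{\rm yr}^{-1}}\right)^{-2/3},$$

and it is the amplitude A that the pulsar-timing data now push below the range the existing models predict.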

Gravitational waves are one of the key predictions of General Relativity, Einstein’s masterwork which celebrates its 100th anniversary this year. They have never been detected directly, but the energy loss that gravitational waves must cause has been observationally confirmed in stellar binary systems. A binary system acts much like a gravitational antenna: it constantly emits radiation, just that instead of electromagnetic waves it is gravitational waves that the system sends into space. As a consequence of the constant loss of energy, the stars move closer together and the rotation frequency of binary systems increases. In 1993 the Physics Nobel Prize went to Hulse and Taylor for pioneering this remarkable confirmation of General Relativity.

Ever since, researchers have tried to find other ways to measure the elusive gravitational waves. The amount of gravitational waves they expect depends on their wavelength – roughly speaking, the longer the wavelength, the more of them should be around. The LIGO experiment is sensitive to wavelengths of the order of some thousand km. The network of pulsars however is sensitive to wavelengths of several lightyears, corresponding to 10¹⁶ meters or even more. At these wavelengths astrophysicists expected a much larger background signal. But this is now excluded by the recent measurement.
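The conversion between the two regimes is just wavelength equals speed of light divided by frequency; as a quick sanity check (the band edges are only indicative):

```python
c = 3.0e8                      # speed of light in m/s

# LIGO band: audio frequencies, very roughly 10 Hz to 1 kHz
print("f = 100 Hz      -> wavelength ~", c / 100.0 / 1e3, "km")

# Pulsar timing band: wave periods of order years, i.e. nanohertz
year = 3.15e7                  # seconds per year
print("f = 1/(3 years) -> wavelength ~", c * 3 * year, "m")
```

A period of a few years corresponds to a few times 10¹⁶ meters, i.e. a few lightyears.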

Estimated gravitational wave spectrum. [Image Source]

Why the discrepancy with the models? In their paper the researchers offer various possible explanations. To begin with, the estimates for the number of galaxy mergers or supermassive binary black holes could be wrong. Or the supermassive black holes might not be able to form close-enough binary systems in the mergers. Or it could be that the black holes experience an environment full of interstellar gas, which would reduce the time during which they emit gravitational waves. There are many astrophysical scenarios that might explain the observation. An absolutely last resort is to reconsider what General Relativity tells us about gravitational-wave emission.

 You have just witnessed the birth of a new mystery in physics.


[This post previously appeared at Starts with a Bang.]

Tuesday, November 10, 2015

Dear Dr. B: What do physicists mean when they say time doesn’t exist?

That was a question Brian Clegg didn’t ask but should have asked. What he asked instead in a recent blogpost was: “When physicists say many processes are independent of time, are they cheating?” He then answered his own question with yes. Let me explain first what’s wrong with Brian’s question, then I’ll tell you something about the existence of time.

What is time-reversal invariance?

The problem with Brian’s question is that no physicist I know would ever say that “many processes are independent of time.” Brian, I believe, didn’t mean time-independent processes but time-reversal invariant laws. The difference is important. The former is a process that doesn’t depend on time. The latter is a symmetry of the equations determining the process. Having a time reversal-invariant law means that the equations remain the same when the direction of time is reversed. This doesn’t mean the processes remain the same.

The mistake is twofold. Firstly, a time-independent process is a very special case. If you watch a video that shows a still image, it doesn’t matter if you watch it forward or backward. So, yes, time-independence implies time-reversal invariance. But secondly, if the underlying laws are time-reversal invariant, the processes themselves are reversible, but not necessarily invariant under this reversal. You can watch any movie backwards with the same technology that you can watch it forwards, yet the plot will look very different. The difference is the starting point, the “initial condition.”

The fundamental laws of nature, for all we know today, are time-reversal invariant*. This means you can rewind any movie and watch it backwards using the same laws. The reason that movies look very different backwards than forwards is a probabilistic one, captured in the second law of thermodynamics: entropy never decreases. In large open systems, it instead tends to increase. The initial state is thus almost always very different from the final state.
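If you want to see what time-reversal invariance of the law means in practice, take the simplest mechanical system there is and run it on a computer – forward, then with flipped velocity backward, using the exact same equations:

```python
import numpy as np

# Evolve a harmonic oscillator forward, flip the velocity, evolve again
# with the same law: you land back at the starting point. The law works
# the same in both directions of time; only the initial state differs.

dt, steps = 1e-3, 5000

def evolve(x, v):
    a = -x                                  # the law: x'' = -x
    for _ in range(steps):                  # velocity-Verlet (time-symmetric)
        x += v * dt + 0.5 * a * dt**2
        a_new = -x
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x1, v1 = evolve(1.0, 0.0)                   # the movie, played forward
x2, v2 = evolve(x1, -v1)                    # the movie, played backward
print("back at the start:", np.isclose(x2, 1.0) and np.isclose(v2, 0.0))
```

The same rewinding works, in principle, for the equations governing mixers and eggs – it is just that in practice you can never set up the required initial state, which is what the next paragraphs are about.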

Probabilities enter not through the laws themselves, but through these initial conditions. It is easy enough to set up a bowl with flour, sugar, butter, and eggs (initial condition), and then mix it (the law) to a smooth dough. But it is for all practical purposes impossible to set up dough so that a reverse-spinning mixer would separate the eggs from the flour.

In principle the initial state for this unmixing exists. We know it exists because we can create its time-reversed version. But you would have to arrange the molecules in the dough so precisely that it’s impossible to do. Consequently, you never see dough unmixing, never see eggs unbreaking, and facelifts don’t make you younger.

It is worth noting that all of this is true only in very large systems, with a large number of constituents. This is always the case for daily life experience. But if a system is small enough, it is indeed possible for entropy to decrease every once in a while just by chance. So you can ‘unmix’ very small patches of dough.
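You can see this size-dependence already in a completely dumbed-down model – just particles randomly hopping between the two halves of a box, nothing to do with real dough:

```python
import random

# Toy "gas in a box" (an Ehrenfest urn): each step a random particle hops
# to the other half. The deviation from a 50/50 split plays the role of an
# entropy decrease. Small systems fluctuate strongly, large ones don't.

def largest_fluctuation(n_particles, steps=10000):
    left = n_particles // 2                 # start evenly mixed
    worst = 0.0
    for _ in range(steps):
        if random.random() < left / n_particles:
            left -= 1                       # the chosen particle was on the left
        else:
            left += 1
        worst = max(worst, abs(left / n_particles - 0.5))
    return worst

for n in (10, 100, 10000):
    print(f"N = {n:5d}: largest deviation from 50/50 = {largest_fluctuation(n):.3f}")
```

With ten particles you will routinely catch the box in a strongly “unmixed” state; with ten thousand, essentially never – and a gram of dough has more like 10²³ molecules.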

What does any of this have to do with the existence of time? Not very much. Time arguably does exist. In a previous blogpost I explained that the property of being real isn’t binary (true or false), but is graded from “mostly true” to “most likely false.” Things don’t either exist or don’t exist, they exist at various levels of immediacy, depending on how detached they are from direct sensory exploration.

Space and time are something we experience every day. Einstein taught us space and time are combined in space-time, and its curvature is the origin of gravity. We move around in space-time. If space-time wasn’t there, we wouldn’t be there because there wouldn’t be any “there” to be at, and since space and time belong together time exists the same way as space does.

Claiming that time doesn’t exist is therefore misusing language. In General Relativity, time is a coordinate, one that is relevant to obtain predictions for observables. It isn’t uniquely defined, and it is not itself observable, but that doesn’t make it non-existent. If you’d ask me what it means for time to exist, I’d say it’s the Lorentzian signature of the metric, and that is something which we need for our theories to work. Time is, essentially, the label to order frames in the movie of our universe.
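Concretely, the Lorentzian signature refers to the relative minus sign in the line element, which in flat space-time reads

$$ds^2 \;=\; -\,c^2\,dt^2 + dx^2 + dy^2 + dz^2\,.$$

That one direction comes with the opposite sign of the other three is what “Lorentzian signature” means, and it is what singles out time as distinct from space.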

Why do some physicists say that time isn’t real?

When physicists say that time doesn’t exist they mean one of two things: 1) The passage of time is an illusion, or 2) Time isn’t fundamental.

As to 1). In our current description of the universe, the past isn’t fundamentally different from the future. It will look different in outcome, but it will be made of the same stuff and it will work the same way. There is no dividing line that separates past and future, and that marks the present moment.

Our experience of there being a “present” comes from the one-sidedness of memory-formation. We can only form memory about things from a time where entropy was smaller, so we can’t remember the future. The perception of time passing comes from the update of our memory in the direction of entropy increase.

In this view, every moment in time exists in some way, though from our personal experience at each moment most of them are remote from experience (the past) or inaccessible from experience (the future). The perception of existence itself is time-dependent and also individual. You might say that the future is so remote to your perception, and prediction so close to impossible, that it is on the level of non-existence. I wouldn’t argue with you about that, but if you learn some more General Relativity your perception might shift.

Now this point of view irks some people, by which I mean Lee Smolin. Lee doesn’t like it that the laws of nature we know today do not give a special relevance to a present moment. He argues that this signals there is something missing in our theories, and that time should be “real.” What he means by that is that the laws of nature themselves must give rise to something like a present moment, which is not presently the case.

As to 2). We know that General Relativity cannot be the fundamental theory of space and time because it breaks down when gravity becomes very strong. The underlying theory might not have a notion of time, instead space and/or time might be emergent – they might be built up of something more fundamental.

I have some sympathy for this idea because I find it plausible that Euclidean and Lorentzian signatures are two different phases of the same underlying structure. This necessarily implies that time isn’t fundamental, but that it comes about in some phase transition.

Some people say that in this case “time doesn’t exist” but I find this extremely misleading. Any such theory would have to reproduce General Relativity and give rise to the notion of time we presently use. Saying that something isn’t real because it’s emergent is a meaningless abuse of terminology. It’s like saying the forest doesn’t exist because it’s made of trees.

In summary, time is real in a well-defined way, has always been real, and will always be real. When physicists say that time isn’t real they normally use it as a short-hand to refer to specific properties of their favorite theories, for example that the laws are time-reversal invariant, or that space-time is emergent. The one exception is Lee Smolin who means something else. I’m not entirely sure what, but he has written a book or two about it.

* Actually they’re not, they’re CPT invariant. But if you know the difference then I don’t have to explain you the difference.

Monday, November 09, 2015

Another new social networking platform for scientists: Loop

Logo of Loop Network website.

A recent issue of Nature magazine ran an advertisement feature for “Loop,” a new networking platform to “maximize the impact of researchers and their discoveries.” It’s an initiative by Frontiers, an open access publisher. Of course I went and got an account to see what it does and I’m here to report back.

In the Nature advert, the CEO of Frontiers interviews herself and answers the question “What makes Loop unique?” with “Loop is changing research networking on two levels. Firstly, researchers should not have to go to a network; it should come to them. Secondly, researchers should not have to fill in dozens of profiles on different websites.”

So excuse me for expecting a one-click registration that made use of one of my dozens of other profiles. Instead I had to fill in a lengthy registration form that, besides name and email, didn’t only ask for my affiliation and country of residence and job description and field of occupation, domain, and speciality, but also for my birthdate, before I had to confirm my email address.

Despite the network supposedly being so good at “coming to me,” it wasn’t possible after registration to import my profile from any other site – Google Scholar, ORCID, LinkedIn, ResearchGate, Academia.edu or whathaveyou, facebook, G+, twitter if you must. Instead, I had to fill in my profile yet another time. Neither, for all I can tell, can you actually link your other accounts to the Loop account.

If you scroll down the information pages, it turns out what the integration refers to is “Your Loop profile is discoverable via the articles you have authored on nature.com and in the Frontiers journals.” Somewhat underwhelming.

Then you have to assemble a publication list. I am lucky to have a name that, for all I know, isn’t shared by anybody else on the planet, so it isn’t so difficult to scan the web for my publications. The Loop platform came up with 86 suggested finds. These appeared in separate pop-up windows. If you have ever done this process before you can immediately see the problem: Typically in these lists there are many duplicate entries. So going through the entries one by one without seeing what is already approved means you have to memorize all previous items. Now I challenge you to recall whether item number 86 had appeared before on the list.

Finally done with this, what do you have there? A website that shows a statistic for how many people have looked at your profile (on this site, presumably), how many people have downloaded your papers (from this site, presumably), and a number of citations, which shows zero for me and for a lot of other profiles I looked at. A few people have a number there from the Scopus database. I conclude that Loop doesn’t have its own citation metric, nor does it use the one from Google Scholar or Spires.



As to the networking, you get suggestions for people you might know. I don’t know any of the suggested people, which isn’t surprising because we already noticed they’re not importing information, so how are they supposed to know who I know? I’m not sure why I would want to follow any of these people, why that would be any better than following them elsewhere, or whether to follow them at all. I followed some random person just because. If that person actually did something (which he doesn’t, much like everybody else whose profile I looked at), presumably it would appear in my feed. From that angle, it looks much like any other networking website. There is also a box that asks me to share something with my network of one.

In summary, for all I can tell this website is as useless as it gets. I don’t have the faintest clue what they think it’s good for. Even if it’s good for something it does a miserable job at telling me what that something is. So save your time.

Sunday, November 08, 2015

10 things you should know about black holes [video]

Since I had the blue screen up already, I wanted to try out some things to improve my videos. I'm quite happy with this one (finally managed to export it in a reasonable resolution), but I noticed too late I should have paid more attention to the audio. Sorry about that. Next time I'll use an external mic. I have also decided to finally replace the blue screen with a green screen, which I hope will solve the problem with the eye erasure.

Friday, November 06, 2015

New music video

Yes!

I know you can hardly contain the excitement about my new lipstick and the badly illuminated blue screen, so please enjoy my newest release, exclusively for you, dear reader.

I actually wrote this song last year, but then I mixed myself into a mush. In the hope that I've learned some things since, I revisited this project and gave it a second try. Though I'm still not quite happy with it (I never seem to get the vocals right), I strongly believe there's merit to finishing things up. Also, if I have to hear this thing once again my head will implode (though at least that would put an end to the concussion symptoms and neck pain I caused myself with the hair shaking). Lesson learned: hitting your forehead against a wall isn't really pleasant. If you feel like engaging in it, you should at the very least videotape it, because that justifies just about anything stupid.

Sunday, November 01, 2015

Dumb Holes Leak

Tl;dr: A new experiment demonstrates that Hawking radiation in a fluid is entangled, but only in the high frequency end. This result might be useful to solve the black hole information loss problem.

In August I went to Stephen Hawking’s public lecture in the fully packed Stockholm Opera. Hawking was wheeled onto the stage, placed in the spotlight, and delivered an entertaining presentation about black holes. The silence of the audience was interrupted only by laughter to Hawking’s well-placed jokes. It was a flawless performance with standing ovations.


In his lecture, Hawking expressed hope that he will win the Nobel Prize for the discovery that black holes emit radiation. Now called “Hawking radiation,” this effect should have been detected at the LHC had black holes been produced there. But the time has come, I think, for Hawking to update his slides. The ship to the promised land of micro black holes has long left the harbor, and it sank – the LHC hasn’t seen black holes, has not, in fact, seen anything besides the Higgs.

But you don’t need black holes to see Hawking radiation. The radiation is a consequence of applying quantum field theory in a space- and time-dependent background, and you can use some other background to see the same effect. This can be done, for example, by measuring the propagation of quantum excitations in Bose-Einstein condensates. These condensates are clouds of about a billion or so ultra-cold atoms that form a fluid with basically zero viscosity. It’s as clean a system as it gets to see this effect. Handling and measuring the condensate is a big experimental challenge, but what wouldn’t you do to create a black hole in the lab?

The analogy between the propagation of excitations on background fluids and in a curved space-time background was first pointed out by Bill Unruh in the 1980s. Since then, many concrete examples have been found for condensed-matter systems that can be used as stand-ins for gravitational fields; they are summarily known as “analogue gravity systems” – this is “analogue” as in “analogy,” not as opposed to “digital.”

In these analogue gravity systems, the quantum excitations are sound waves, and the corresponding quantum particles are called “phonons.” A horizon in such a space-time is created at the boundary of a region in which the velocity of the background fluid exceeds the speed of sound, thereby preventing the sound waves from escaping. Since these fluids trap sound rather than light, such gravitational analogues are also called “dumb,” rather than “black” holes.
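To make the horizon condition concrete, here is a minimal sketch – in Python, with a made-up flow profile rather than any actual experimental parameters – that locates the sonic horizon as the point where the flow speed first exceeds the local speed of sound:

```python
import numpy as np

# Toy 1D profiles, purely illustrative (not the experimental parameters):
# a flow that accelerates from subsonic to supersonic, and a constant sound speed.
x = np.linspace(-1.0, 1.0, 2001)      # position, arbitrary units
v = 1.0 + 0.8 * np.tanh(5 * x)        # flow speed of the background fluid
c_s = 1.2 * np.ones_like(x)           # local speed of sound

# The sonic horizon sits where the flow first turns supersonic:
i_h = np.argmax(v > c_s)              # first index with v > c_s
print(f"sonic horizon at x = {x[i_h]:.3f}; "
      "beyond it the flow is supersonic and sound cannot travel back upstream")
```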

Hawking radiation was detected in fluids a few years ago. But these measurements only confirmed the thermal spectrum of the radiation and not its most relevant property: the entanglement across the horizon. The entanglement of the Hawking radiation connects pairs of particles, one inside and one outside the horizon. It is a pure quantum effect: The state of either particle separately is unknown and unknowable. One only knows that their states are related, so that measuring one of the particles determines the measurement outcome of the other particle – this is Einstein’s “spooky action at a distance.”

The entanglement of Hawking radiation across the horizon is the origin of the black hole information loss problem. In a real black hole, the inside partner of the entangled pair eventually falls into the singularity, where it gets irretrievably lost, leaving the state of its partner undetermined. In this process, information is destroyed, but this is incompatible with quantum mechanics. Thus, by combining gravity with quantum mechanics, one arrives at a result that cannot happen in quantum mechanics. It’s a classical proof by contradiction, and signals a paradox. This headache is believed to be remedied by the, still missing, theory of quantum gravity, but exactly what the remedy is nobody knows.

In a new experiment, Jeff Steinhauer from the Israel Institute of Technology measured the entanglement of the Hawking radiation in an analogue black hole; his results are available on the arxiv.

For this new experiment, the Bose-Einstein condensate was trapped and put in motion with laser light, making it an effectively one-dimensional system in flow. In this trap, the condensate had a low density in one half and a higher density in the other half, achieved by a potential step from a second laser. The speed of sound in such a condensate depends on the density, so that a higher density corresponds to a higher speed of sound. The high-density region thus allowed the phonons to escape and corresponds to the outside of the horizon, whereas the low-density region corresponds to the inside of the horizon.
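In the standard (Bogoliubov) description of a dilute condensate, the speed of sound scales with the square root of the density, c = sqrt(g n / m). Here is a minimal sketch with toy numbers – not taken from the paper – just to illustrate which side traps the phonons for a given flow speed:

```python
import math

# Toy parameters, purely illustrative:
g_over_m = 1.0                     # interaction strength over atomic mass, arbitrary units
n_outside, n_inside = 4.0, 1.0     # high density outside the horizon, low density inside
v_flow = 1.5                       # flow speed of the condensate

for label, n in [("high-density side (outside)", n_outside),
                 ("low-density side (inside)", n_inside)]:
    c_s = math.sqrt(g_over_m * n)  # Bogoliubov sound speed, c_s = sqrt(g n / m)
    regime = "subsonic: phonons can escape" if c_s > v_flow else "supersonic: phonons are trapped"
    print(f"{label}: c_s = {c_s:.1f} vs flow {v_flow:.1f} -> {regime}")
```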

The figure below shows the density profile of the condensate:

Figure 1 from 1510.00621. The density profile of the condensate. 

In this system then, Steinhauer measured correlations of fluctuations. These flowing condensates don’t last very long, so to get useful data, the same setting must be reproduced several thousand times. The analysis clearly shows a correlation between the excitations inside and outside the horizon, as can be seen in the figure below. The entanglement appears in the grey lines on the diagonal from top left to bottom right. I have marked the relevant feature with red arrows (ignore the green ones, they indicate matches between the measured angles and the theoretical prediction).
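What the repeated runs boil down to is, roughly, an ensemble-averaged density-density correlation between points on opposite sides of the horizon. A minimal sketch of that averaging step, with random mock data standing in for the measured density profiles:

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_points = 4000, 200       # thousands of repetitions, positions along the condensate
profiles = rng.normal(size=(n_runs, n_points))   # mock density data, one row per run

# Ensemble-averaged two-point correlation of the density fluctuations,
# i.e. <dn(x) dn(x')> with the mean profile subtracted:
fluct = profiles - profiles.mean(axis=0)
G2 = fluct.T @ fluct / n_runs      # (n_points x n_points) correlation matrix

# In the real data, entangled Hawking pairs show up as a band of correlations
# between a point outside the horizon and its partner point inside.
print(G2.shape)
```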


When Steinhauer analyzed the dependence on the frequency, he found a correlation only in the high-frequency end, not in the low-frequency end. This is as intriguing as it is confusing. In a real black hole all frequencies should be entangled. But if the Hawking radiation were not entirely entangled across the horizon, that might allow information to escape. One has to be careful, however, not to read too much into this finding.
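The entanglement statement rests, as far as I understand the analysis, on a standard nonseparability criterion for two-mode Gaussian states: the paired modes at a given frequency count as entangled if the cross-correlation |⟨b_in b_out⟩|² exceeds the product of the occupation numbers ⟨n_in⟩⟨n_out⟩. A minimal sketch of that check, applied per frequency to made-up numbers:

```python
# Nonseparability check for pairs of in/out modes. The numbers below are toy
# values for illustration only, not taken from the paper.
modes = [
    # (omega, |<b_in b_out>|^2, <n_in>, <n_out>)
    (0.5, 0.02, 0.30, 0.25),   # low frequency: cross-correlation too weak
    (1.0, 0.03, 0.20, 0.18),
    (2.0, 0.05, 0.10, 0.08),   # high frequency: cross-correlation wins
    (3.0, 0.03, 0.04, 0.03),
]

for omega, cross_sq, n_in, n_out in modes:
    verdict = "entangled" if cross_sq > n_in * n_out else "not entangled"
    print(f"omega = {omega}: {verdict}")
```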

To begin with, let us be clear, this is not a gravitational system. It’s a system that shares some properties with the gravitational case. But when it comes to the quantum behavior of the background, that may or may not be a useful comparison. Even if it were, the condensate studied here is not rotationally symmetric, as a real black hole would be. Since the rotational symmetry is essential for the red-shift in the gravitational potential, I actually don’t know how to interpret the low frequencies. Possibly they correspond to a regime that real black holes just don’t have. And then the correlation might just have gotten lost in experimental uncertainties – limitations from finite system size, number of particles, noise, etc. – on which the paper doesn’t have much detail.

The difference between the analogue gravity system, which is the condensate, and the real gravity system is that we do have a theory for the quantum properties of the condensate. If gravity were quantized in a similar way, then studies like the one done by Steinhauer might indicate where Hawking’s calculation fails – for it must fail if the information paradox is to be solved. So I find this a very interesting development.

Will Hawking and Steinhauer get a Nobel Prize for the discovery and detection of the thermality and entanglement of the radiation? I think this is very unlikely, for right now it isn’t clear whether this is even relevant for anything. Should this finding turn out to be key to developing a theory of quantum gravity, however, that would be groundbreaking. And who knows, maybe Hawking will again be invited to Stockholm.

Thursday, October 29, 2015

What is basic science and what is it good for?

Basic science is, above all, a stupid word. It sounds like those onions we sliced in 8th grade. And if people don’t mistake “basic” for “everybody knows,” they might think instead it means “foundational,” that is, dedicated to questioning the present paradigms. But that’s not what the word refers to.

Basic science refers to research which is not pursued with the aim of producing new technologies; it is sometimes, more aptly, referred to as “curiosity driven” or “blue skies” research. The NSF calls it “transformative,” the ERC calls it “frontier” research. Quite possibly they don’t mean exactly the same, which is another reason why it’s a stupid word.

A few days ago, Matt Ridley wrote an article for the Wall Street Journal in which he argues that basic research, to the extent that it’s necessary at all, doesn’t need governmental funding. He believes that it is technology that drives science, not the other way round. “Deep scientific insights are the fruits that fall from the tree of technological change,” Ridley concludes. Apparently he has written a whole book with this theme, which is about to be published next week. The WSJ piece strikes me as shallow and deliberately provocative, published with the sole aim of drawing attention to his book, which I hope has more substance and not just more words.

The essence of the article seems to be that it’s hard to demonstrate a correlation, not to mention causation, between tax-funded basic science and economic growth. Instead, Ridley argues, in many examples scientific innovations originated not in one single place, but more or less simultaneously in various different places. He concludes that tax-funded research is unnecessary.

Leaving aside for a moment that measures of economic growth can mislead about a country’s prosperity, it is hardly surprising that a link between tax-funded basic research and economic growth is difficult to find. It must come as a shock to nationalists, but basic research is possibly the most international profession in existence. Ideas don’t stop at country borders. Consequently, to make use of basic research, you don’t yourself need to finance it. You can just wait until a breakthrough occurs elsewhere and then pay your people to jump on it. The main reason we so frequently see examples of simultaneous breakthroughs in different groups is that they build on more or less the same knowledge. Scientists can jump very quickly.

But the conclusion that this means one does not need to support basic research is just wrong. It’s a classic demonstration of the “free rider” problem. Your country can reap the benefits of basic research elsewhere, as long as somebody else does the thinking for you. But if every country does this, innovation would run dry, eventually.

Besides this, the idea that technology drives science might have worked in the last century, but it no longer works today. The times when you could find new laws of nature by dabbling with some equipment in the lab are over. To make breakthroughs today, you need to know what to build, and you need to know how to analyze your data. Where will you get that knowledge if not from basic research?

The technologies we use today, the computer that you sit in front of – semiconductors, lasers, liquid crystal displays – are based on last century’s theories. We still reap the benefits. And we all do, regardless of whether our nation paid the salary of one of quantum mechanics’ founding fathers. But if we want progress to continue in the next century, we have to go beyond that. You need basic research to find out which direction is promising, which is a good investment. Otherwise, you’ll waste lots of time and money.

There is a longer discussion that one can have about whether some types of basic research have any practical use at all. It is questionable, for example, whether knowing about the accelerated expansion of the universe will ever lead to a better phone. From my perspective, this materialistic focus is as depressing as it is meaningless. Sure, it would be nice if my damned phone battery wouldn’t die in the middle of a call, and, yeah, I want to live forever watching cat videos on my hoverboard. But I fail to see what it’s ultimately good for. The only meaning I can find in being thrown into this universe is to understand how it works and how we are part of it. To me, knowledge is an end unto itself. Keep your hoverboard, just tell me how to quantize gravity.

Here is a simple thought experiment. Suppose all tax-funded basic research were to cease tomorrow. What would go missing? No more stories about black holes, exoplanets, or loophole-free tests of quantum entanglement. No more string theory, no multiverses, no theories of everything, no higgsinos, no dark matter, no cosmic neutrinos, extra dimensions, wormholes, or holographic universes. Except for a handful of lucky survivors at partly privately funded places – like Perimeter Institute, the KAVLI institutes, and some Templeton-funded initiatives, which could in no way continue all of this research on their own – it would die quickly. The world would be a poorer place, one with no hope of ever understanding this amazing universe that we live in.

Democracy is a funny thing, you know, it’s kind of like an opinion poll. Basic research is tax-funded in all developed countries. Could there be any clearer expression of the people’s opinion? They say: we want to know. We want to know where we come from, and what we are made of, and what the fate of our universe is. Yes, they say, we are willing to pay taxes for that, but please tell us. As someone who works in basic research, I see my task as delivering on this want.

Monday, October 26, 2015

Black holes and academic walls

Image credits: Paul Terry Sutton
According to Einstein you wouldn’t notice crossing a black hole horizon. But now researchers argue that a firewall or brickwall would be in your way. Have they entirely lost their mind?

Tl;dr: Yes.

It is hard, sometimes, to understand why anyone would waste time on a problem as academic as black hole information loss. And I say that as someone who spent a significant part of the last decade pushing this very problem around in my head. Don’t physicists have anything better to do, in a world that is suffering from war and disease, bad grammar even? What drives these researchers, other than the hope to make headlines for solving a 40-year-old conundrum?

Many physicists today work on topics that, like black hole information loss, seem entirely detached from reality. Black holes only succeed in destroying information once they entirely evaporate, and that won’t happen for the next 100 billion years or so. What drives these researchers is not making tangible use of their insights, but the recognition that someone today has to pave the way for the science that will become relevant in a hundred, a thousand, or ten thousand years from now. And as I scan the human mess in my news feed, the unearthly cleanliness of the argument, the seemingly inescapable logic leading to a paradox, admittedly only adds to its appeal.

If black hole information loss were a cosmic whodunit, then quantum theory would be the victim. Stephen Hawking demonstrated in the early 1970s that when one combines quantum theory with gravity, one finds that black holes must emit thermal radiation. This “Hawking radiation” is composed of particles that, besides their temperature, do not carry any information. And so, when a black hole entirely evaporates, all the information about what fell inside must ultimately be destroyed. But such destruction of information is incompatible with the very quantum theory one used to arrive at this conclusion. In quantum theory all processes can happen both forward and backward in time, but black hole evaporation, it seems, cannot be reversed.
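For a sense of scale: the temperature of this radiation is given by the standard Hawking formula T = ħc³/(8πGMk_B), which for a solar-mass black hole comes out at a few times 10⁻⁸ Kelvin – far colder than the cosmic microwave background. A quick back-of-the-envelope check:

```python
import math

hbar = 1.0546e-34    # J s
c    = 2.998e8       # m / s
G    = 6.674e-11     # m^3 / (kg s^2)
k_B  = 1.381e-23     # J / K
M_sun = 1.989e30     # kg

def hawking_temperature(M):
    """Hawking temperature of a Schwarzschild black hole of mass M, in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(f"T_H(solar mass) = {hawking_temperature(M_sun):.2e} K")   # ~6e-8 K
```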

This presented physicists with a major conundrum, because it demonstrated that gravity and quantum theory refused to combine. It didn’t help either to try to explain away the problem by alluding to the unknown theory of quantum gravity. Hawking radiation is not a quantum gravitational process, and while quantum gravity does eventually become important in the very late stages of a black hole’s evaporation, the argument goes that by this time it is too late to get all the information out.

The situation changed dramatically in the late 1990s, when Maldacena proposed that certain gravitational theories are equivalent to gauge theories. Discovered in string theory, this famed gauge-gravity correspondence, though still mathematically unproved, does away with the problem because whatever happens when a black hole evaporates is equivalently described in the gauge theory. The gauge theory however is known to not be capable of murdering information, thus implying that the problem doesn’t exist.

While the gauge-gravity correspondence convinced many physicists, including Stephen Hawking himself, that black holes do not destroy information, it did not shed much light on just exactly how the information escapes the black hole. Research continued, but complacency spread through the ranks of theoretical physicists. String theory, it seemed, had resolved the paradox, and it was only a matter of time until details would be understood.

But that wasn’t how things panned out. Instead, in 2012, a group of four physicists, Almheiri, Marolf, Polchinski, and Sully (AMPS), demonstrated that what was thought to be a solution is actually also inconsistent. Specifically, they demonstrated that four assumptions, generally believed by most string theorists to all be correct, cannot in fact be simultaneously true. These four assumptions are that:
  1. Black holes don’t destroy information.
  2. The Standard Model of particle physics and General Relativity remain valid close by the black hole horizon.
  3. The amount of information stored inside a black hole is proportional to its surface area.
  4. An observer crossing the black hole horizon will not notice it.
The second assumption rephrases the statement that Hawking radiation is not a quantum gravitational effect. The third assumption is a conclusion drawn from calculations of the black hole microstates in string theory. The fourth assumption is Einstein’s equivalence principle. In a nutshell, AMPS say that at least one of these assumptions must be wrong. One of the witnesses is lying, but who?
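The third assumption is the statement behind the Bekenstein-Hawking entropy, S = k_B c³ A/(4Għ), i.e. one quarter of the horizon area in Planck units. As a quick illustration of how much information that is – a standard textbook estimate, nothing specific to the AMPS argument:

```python
import math

hbar = 1.0546e-34; c = 2.998e8; G = 6.674e-11   # SI units
M_sun = 1.989e30                                 # kg

def bh_entropy_bits(M):
    """Bekenstein-Hawking entropy of a Schwarzschild black hole, expressed in bits."""
    r_s = 2 * G * M / c**2                  # Schwarzschild radius
    area = 4 * math.pi * r_s**2             # horizon area
    l_p2 = G * hbar / c**3                  # Planck length squared
    return area / (4 * l_p2) / math.log(2)  # S / (k_B ln 2)

print(f"{bh_entropy_bits(M_sun):.1e} bits for a solar-mass black hole")   # ~1.5e77 bits
```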

In their paper, AMPS suggested, maybe not quite seriously, giving up on the least contested of these assumptions, number 4). Giving up on 4), the other three assumptions imply that an observer falling into the black hole would encounter a “firewall” and be burnt to ashes. The equivalence principle however is the central tenet of general relativity and giving it up really is the last resort.

For the uninitiated observer, the lying witness is obviously 3). In contrast to the other assumptions, which are consequences of theories we already know and have tested to high precision, number 3) comes from a so-far untested theory. So if one assumption has to be dropped, then maybe it is the assumption that string theory is right about the information content of black holes, but that option isn’t very popular with string theorists...

And so within a matter of months the hep-th category of the arXiv was cluttered with attempts to reconcile the disagreeable assumptions with one another. Proposed solutions included everything from just accepting the black hole firewall to the multiverse to elaborate thought experiments meant to demonstrate that an observer wouldn’t actually notice being burnt. Yes, that’s modern physics for you.

I too, of course, have an egg in the basket. I found the witnesses all to be convincing; none of them seemed to be lying. And taking them at face value, it finally occurred to me that what made the assumptions seemingly incompatible was an unstated fifth assumption. Just as witnesses’ accounts might suddenly all make sense once you realize the victim wasn’t killed at the same place the body was found, the four assumptions suddenly all make sense when you do not require the information to be saved in a particular way (that the final state is a “typical” state). Instead, the requirement that energy must be locally conserved near the horizon makes the firewall impossible and at the same time shows exactly how black hole evaporation remains compatible with quantum theory.

I think nobody really liked my paper because it leads to the rather strange conclusion that somewhere near the horizon there is a boundary which does alter the quantum theory, yet in a way that isn’t noticeable for any observer near the black hole. It is possible to measure its effects, but only in the far distance. And while my proposal did resolve the firewall conundrum, it didn’t do anything about the black hole information loss problem. I mentioned in a side note that in principle one could use this boundary to hand information into the outgoing radiation, but that would still not explain how the information would get into the boundary to begin with.

After publishing this paper, I vowed once again to never think about black hole evaporation again. But then last month, an arXiv preprint appeared by ‘t Hooft. One of the first to dabble in black hole thermodynamics, ‘t Hooft proposes in his new paper that the black hole horizon acts like a boundary that reflects information, a “brick wall” as New Scientist put it. This new idea has been inspired by Stephen Hawking’s recent suggestion that much of the information falling into black holes continues to be stored on the horizon. If that is so, then giving the horizon a chance to act can allow the information to leave again.

I don’t think that bricks are much of an improvement over fire and I’m pretty sure that this exact realization of the idea won’t hold up. But after all the confusion, this might eventually allow us to better understand just exactly how the horizon interacts with the Hawking radiation and how it might manage to encode information in it.

Fast forward a thousand years. At the end of the road there is a theory of quantum gravity that will allow us to understand the behavior of space and time on shortest distance scales and, so many hope, the origin of quantum theory itself. Progress might seem incremental and sometimes history leads us in circles, but what keeps physicists going is the knowledge that there must be a solution.

[This post previously appeared at Starts with a Bang.]

Monday, October 19, 2015

Book review: Spooky Action at a Distance by George Musser

Spooky Action at a Distance: The Phenomenon That Reimagines Space and Time--and What It Means for Black Holes, the Big Bang, and Theories of Everything
By George Musser
Scientific American, To be released November 3, 2015

“Spooky Action at a Distance” explores the question Why aren’t you here? And if you aren’t here, what is it that prevents you from being here? Trying to answer this simple-sounding question leads you down a rabbit hole where you have to discuss the nature of space and time with many-world proponents and philosophers. In his book, George reports back what he’s found down in the rabbit hole.

Locality and non-locality are topics as confusing as controversial, both in- and outside the community, and George’s book is a great introduction to an intriguing development in contemporary physics. It’s a courageous book. I can only imagine how much headache writing it must have been, after I once organized a workshop on nonlocality and realized that no two people could agree on what they even meant with the word.

George is a very gifted writer. He gets across the most relevant concepts the reader needs to know on a nontechnical level with a light and unobtrusive humor. The historical background is nicely woven together with the narrative, and the reader gets to meet many researchers in the field, Steve Giddings, Fotini Markopoulou, and Nima Arkani-Hamed, to only mention a few.

In his book, George lays out how the attitude of scientists towards nonlocality has gone from acceptance to rejection and makes a case that now the pendulum is swinging back to acceptance again. I think he is right that this is the current trend (thus the workshop).

I found the book somewhat challenging to read because I was constantly trying to translate George’s metaphors back into equations and I didn’t always succeed. But then that’s a general problem I have with popular science books and I can’t blame George for it. I have another complaint though, which is that George covers a lot of different research in rapid succession without adding qualifiers about these research programs’ shortcomings. There’s quantum graphity and string theory and black holes in AdS and causal sets and then there’s many worlds. The reader might be left with the mistaken impression that these topics are somehow all related to each other.

Spooky Action at a Distance starts out as an Ode to Steve Giddings and ends as a Symphony for Arkani-Hamed. For my taste it’s a little too heavy on person-stories, but then that seems to be the style of science writing today. In summary, I can only say it’s a great book, so go buy it, you won’t regret it.

[Disclaimers: Free review copy; I know the author.]

Fade-out ramble: You shouldn’t judge a book by its subtitle, really, but whoever is responsible for this title-inflation, please make it stop. What’s next? Print the whole damn book on the cover?

Monday, October 12, 2015

A newly proposed table-top experiment might be able to demonstrate that gravity is quantized

Tl;dr: Experimentalists are bringing increasingly massive systems into quantum states. They are now close to masses where they might be able to just measure what happens to the gravitational field.

Quantum effects of gravity are weak, so weak they are widely believed to not be measurable at all. Freeman Dyson indeed is fond of saying that a theory of quantum gravity is entirely unnecessary, arguing that we could never observe its effects anyway. Theorists of course disagree, and not just because they’re being paid to figure out the very theory Dyson deems unnecessary. Measurable or not, they search for a quantized version of gravity because the existing description of nature is not merely incomplete – it is far worse, it contains internal contradictions, meaning we know it is wrong.

Take the century-old double-slit experiment, the prime example for quantum behavior. A single electron that goes through the double-slit is able to interact with itself, as if it went through both slits at once. Its behavior is like that of a wave which overlaps with itself after passing an obstacle. And yet, when you measure the electron after it went through the slit it makes a dot on a screen, like a particle would. The wave-like behavior again shows up if one measures the distribution of many electrons that passed the slit. This and many other experiments demonstrate that the electron is neither a particle nor a wave – it is described by a wave-function from which we obtain a probability distribution, a formulation that is the core of quantum mechanics.
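For two idealized point-like slits a distance d apart, the probability distribution on the screen is the familiar interference pattern proportional to cos²(πd sinθ/λ). A minimal sketch with toy numbers, not any particular experiment’s parameters:

```python
import numpy as np

# Toy values, purely illustrative:
wavelength = 50e-12    # electron de Broglie wavelength, 50 pm
d = 500e-9             # slit separation
theta = np.linspace(-2e-4, 2e-4, 1001)   # small angles towards the screen

# Two ideal point slits: the probability density on the screen oscillates,
# which is the wave-like behavior seen in the distribution of many electrons.
intensity = np.cos(np.pi * d * np.sin(theta) / wavelength) ** 2

# The angular fringe spacing is roughly lambda / d:
print(f"expected fringe spacing ~ {wavelength / d:.1e} rad")
```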

Well understood as this is, it leads to a so-far unsolved conundrum.

The most relevant property of the electron’s quantum behavior is that it can go through both slits at once. It’s not that half of the electron goes one way and half the other. Neither does the electron sometimes take this slit and sometimes the other. Impossible as it sounds, the electron goes fully through both slits at the same time, in a state referred to as quantum superposition.

Electrons carry a charge and so they have an electric field. This electric field also has quantum properties and moves along with the electron in its own quantum superposition. The electron also has a mass. Mass generates a gravitational field, so what happens to the gravitational field? You would expect it to also move along with the electron, and go through both slits in a quantum superposition. But that can only work if gravity is quantized too. According to Einstein’s theory of General Relativity though, it’s not. So we simply don’t know what happens to the gravitational field unless we find a theory of quantum gravity.

It’s been 80 years since the question was first raised, but we still don’t know what’s the right theory. The main problem is that gravity is an exceedingly weak force. We notice it so prominently in our daily life only because, in contrast to the other interactions, it cannot be neutralized. But the very reason that planet Earth doesn’t collapse to a black hole is that much stronger forces than gravity prevent this from happening. The electromagnetic force, the strong nuclear force, and even the supposedly weak nuclear force, are all much more powerful than gravity.
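Just how weak is exceedingly weak? The standard comparison is the ratio of the gravitational to the electrostatic force between two electrons, which is independent of their distance since both fall off with the inverse square. A one-line estimate:

```python
G   = 6.674e-11    # gravitational constant, m^3 / (kg s^2)
k_e = 8.988e9      # Coulomb constant, N m^2 / C^2
m_e = 9.109e-31    # electron mass, kg
e   = 1.602e-19    # elementary charge, C

# Both forces scale as 1/r^2, so the ratio does not depend on the distance:
ratio = (G * m_e**2) / (k_e * e**2)
print(f"F_gravity / F_electric = {ratio:.1e}")   # ~2.4e-43
```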

For the experimentalist this means they either have an object heavy enough so its gravitational field can be measured. Or they have an object light enough so its quantum properties can be measured. But not both at once.

At least that was the case so far. But the last decade has seen an enormous progress in experimental techniques to bring heavier and heavier objects into quantum states and measure their behavior. And in a recent paper a group of researchers from Italy and the UK propose an experiment that might just be the first feasible measurement of the gravitational field of a quantum object.

Almost all researchers who work on the theory of quantum gravity expect that the gravitational field of the electron behaves like its electric field, that is, it has quantum properties. They are convinced of this because we have a well-working theory to describe this situation. Yes, I know, they told you nobody has quantized gravity, but that isn’t true. Gravity was quantized in the 1960s by DeWitt, Feynman, and others using a method known as perturbative quantization. However, the result one gets with this method only works when the gravitational field is weak, and it breaks down when gravity becomes strong, such as at the Big Bang or inside black holes. In other words, this approach, while well understood, fails us exactly in the situations we are interested in the most.

Because of this failure in strong gravitational fields, perturbatively quantized gravity cannot be a fundamentally valid theory; it requires completion. It is this completion that is normally referred to as “quantum gravity.” However, when gravitational fields are weak, which is definitely the case for the little electron, the method works perfectly fine. Whether it is realized in nature though, nobody knows.

If the gravitational field is not quantized, one has instead a theory known as “semi-classical gravity,” in which matter is quantized but gravity isn’t. Though nobody can make much sense of this theory conceptually, it’s infuriatingly hard to disprove. If the gravitational field of the electron remained classical, the field would follow the probability distribution of the electron over the two slits, rather than itself going through both slits with this probability.

To see the difference, suppose you put a (preferably uncharged) test particle in the middle between the slits to see where the gravitational pull goes. If the gravitational field is quantized, then in half of the cases the test particle will move left, and in the other half it will move right (which would also destroy the interference pattern). If the gravitational field is classical, however, the test particle won’t move, because it’s pulled equally to both sides.



So the difference between quantized and semi-classical gravity is observable. Unfortunately, even for the most massive objects that can be pushed through double slits, like large molecules, the gravitational field is far too weak to be measurable.

In the new paper now, the researchers propose a different method. They consider a tiny charged disk of osmium with a mass of about a nano-gram, held by electromagnetic fields in a trap. The particle is cooled down to some hundred mK which brings it into the lowest possible energy state. Above this ground-level there are now discrete energy levels for the disk, much like the electron orbits around the atomic nucleus, except that the level spacing is tiny. The important point is that the exact energy values of these levels depend on the gravitational self-interaction of the whole object. Measure the spacing of the energy levels precisely enough, and you can figure out whether the gravitational field was quantized or not.

Figure 1 from arXiv:1510.01696. Depicted are the energy levels of the disk in the potential, and how they shift with the classical gravitational self-interaction taken into account, for two different scenarios of the distribution of the disk’s wave-function.


For this calculation they use the Schrödinger-Newton equation, which is the non-relativistic limit of semi-classical gravity incorporated in quantum mechanics. In an accompanying paper they have worked out the description of multi-particle systems in this framework, and demonstrated how the system approximately decomposes into a center-of-mass variable and the motions relative to the center of mass. They then calculate how the density distribution is affected by the gravitational field caused by its own probability distribution, and finally the energy levels of the system.
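For reference, the single-particle form of the Schrödinger-Newton equation couples the wave-function to the Newtonian potential sourced by its own probability density; the center-of-mass version used in the paper is more involved, but this is the basic structure:

```latex
i\hbar \frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = \left[ -\frac{\hbar^2}{2m}\nabla^2 + V_{\rm trap}(\mathbf{r})
  - G m^2 \int \frac{|\psi(\mathbf{r}',t)|^2}{|\mathbf{r}-\mathbf{r}'|}\, d^3 r' \right]\psi(\mathbf{r},t)
```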

I haven’t checked this calculation in detail, but it seems both plausible that the effect should be present, and that it is large enough to potentially be measurable. I don’t know much about these types of experiments, but two of the authors of the paper, Hendrik Ulbricht and James Bateman, are experimentalists and I trust they know what current technology allows one to measure.

Suppose they make this measurement and, as expected, do not find the additional shift of energy levels that should exist if gravity were unquantized. This would not, strictly speaking, demonstrate that perturbatively quantized gravity is correct, but merely that the Schrödinger-Newton equation is incorrect. However, since these are the only two alternatives I am aware of, it would in practice be the first experimental confirmation that gravity is indeed quantized.