
Friday, July 28, 2017

New paper claims string theory can be tested with Bose-Einstein-Condensates

Fluorescence image of a Bose-Einstein-Condensate. Image credits: Stefan Kuhr and Immanuel Bloch, MPQ.
String theory is infamously detached from experiment. But in a new paper, a group from Mexico put forward a proposal to change that:
    String theory phenomenology and quantum many–body systems
    Sergio Gutiérrez, Abel Camacho, Héctor Hernández
    arXiv:1707.07757 [gr-qc]
Upfront, let me be clear that they don’t want to test string theory itself, but the presence of additional dimensions of space, which is a prediction of string theory.

In the paper, the authors calculate how additional space-like dimensions affect a condensate of ultra-cold atoms, known as a Bose-Einstein-Condensate. At such low temperatures, the atoms transition to a state in which they share a single quantum wave-function and the system begins to display quantum effects, such as interference, throughout.

In the presence of extra-dimensions, every particle’s wave-function has higher harmonics because the extra-dimensions have to close up, in the simplest case like circles. The particles’ wave-functions have to fit into the extra dimensions, meaning their wave-length must be an integer fraction of the circumference.

Each of the additional dimensions has a radius of about a Planck length, which is 10^-35 m, or 15 orders of magnitude smaller than what even the LHC can probe. To excite these higher harmonics, you correspondingly need an energy of 10^15 TeV, or 15 orders of magnitude higher than what the LHC can produce.
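
For orientation, here is the back-of-the-envelope conversion behind those numbers as a small Python sketch. The constants are standard rounded values, not taken from the paper; depending on where one puts factors of order 2π, the result lands between 10^15 and 10^16 TeV:

    # Rough conversion: compactification radius of one Planck length -> excitation energy.
    # Back-of-the-envelope only; rounded constants, factors of order 2*pi dropped.
    hbar = 1.055e-34          # J*s, reduced Planck constant
    c = 3.0e8                 # m/s, speed of light
    eV = 1.602e-19            # J, one electron volt

    l_planck = 1.6e-35                       # m, approximate Planck length
    E_excite = hbar * c / l_planck           # ~ hbar*c/R for compactification radius R
    E_excite_TeV = E_excite / eV / 1e12      # convert J -> TeV

    E_lhc_TeV = 13.0                         # LHC collision energy, for comparison
    print(f"excitation energy ~ {E_excite_TeV:.0e} TeV")
    print(f"ratio to LHC      ~ {E_excite_TeV / E_lhc_TeV:.0e}")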

How do the extra-dimensions of string theory affect the ultra-cold condensate? They don’t. That’s because at those low temperatures there is no way you can excite any of the higher harmonics. Heck, even the total energy of the condensates presently used isn’t high enough. There’s a reason string theory is famously detached from experiment – because it’s a damned high energy you must reach to see stringy effects!

So what’s the proposal in the paper then? There isn’t one. They simply ignore that the higher harmonics can’t be excited and make a calculation. Then they estimate that one needs a condensate of about a thousand particles to measure a discontinuity in the specific heat, which depends on the number of extra-dimensions.

It’s probably correct that this discontinuity depends on the number of extra-dimensions. Unfortunately the authors don’t go back and check what mass per particle in the condensate would be needed to make this work. I’ve put in the numbers and get something like a million tons. That gigantic mass becomes necessary because it has to combine with the minuscule temperature of about a nano-Kelvin to give a geometric mean that exceeds the Planck mass.

In summary: Sorry, but nobody’s going to test string theory with Bose-Einstein-Condensates.

Thursday, July 13, 2017

Nature magazine publishes comment on quantum gravity phenomenology, demonstrates failure of editorial oversight

I have a headache and blame Nature magazine for it.
For about 15 years, I have worked on quantum gravity phenomenology, which means I study ways to experimentally test the quantum properties of space and time. Since 2007, my research area has its own conference series, “Experimental Search for Quantum Gravity,” which took place most recently in September 2016 in Frankfurt, Germany.

Extrapolating from the people I personally know, I estimate that about 150-200 people currently work in this field. But I have never seen nor heard anything of Chiara Marletto and Vlatko Vedral, who just wrote a comment for Nature magazine complaining that the research area doesn’t exist.

In their comment, titled “Witness gravity’s quantum side in the lab,” Marletto and Vedral call for “a focused meeting bringing together the quantum- and gravity-physics communities, as well as theorists and experimentalists.” Nice.

If they think such meetings are a good idea, I recommend they attend them. There’s no shortage. The above-mentioned conference series is only the most regular meeting on quantum gravity phenomenology. The Marcel Grossmann Meeting also has sessions on the topic. Indeed, I am writing this from a conference here in Trieste, which is about “Probing the spacetime fabric: from concepts to phenomenology.”

Marletto and Vedral point out that it would be great if one could measure gravitational fields in quantum superpositions to demonstrate that gravity is quantized. They go on to lay out their own idea for such experiments, but their interest in the topic apparently didn’t go far enough to either look up the literature or actually put in the numbers.

Yes, it would be great if we could measure the gravitational field of an object in a superposition of, say, two different locations. Problem is, heavy objects – whose gravitational fields are easy to measure – decohere quickly and don’t have quantum properties. On the other hand, objects which are easy to bring into quantum superpositions are too light to measure their gravitational field.

To be clear, the challenge here is to measure the gravitational field created by the objects themselves. It is comparably easy to measure the behavior of quantum objects in the gravitational field of the Earth. That has something to do with quantum and something to do with gravity, but nothing to do with quantum gravity because the gravitational field isn’t quantized.

In their comment, Marletto and Vedral go on to propose an experiment:
“Likewise, one could envisage an experiment that uses two quantum masses. These would need to be massive enough to be detectable, perhaps nanomechanical oscillators or Bose–Einstein condensates (ultracold matter that behaves as a single super-atom with quantum properties). The first mass is set in a superposition of two locations and, through gravitational interaction, generates Schrödinger-cat states on the gravitational field. The second mass (the quantum probe) then witnesses the ‘gravitational cat states’ brought about by the first.”
This is truly remarkable, but not because it’s such a great idea. It’s because Marletto and Vedral believe they’re the first to think about this. Of course they are not.

The idea of using Schrödinger-cat states has most recently been discussed here. I didn’t write about the paper on this blog because the experimental realization faces giant challenges and I think it won’t work. There is also Anastopoulos and Hu’s CQG paper about “Probing a Gravitational Cat State” and a follow-up paper by Derakhshani, which likewise go unmentioned. I’d really like to know how Marletto and Vedral think they can improve on the previous proposals. Letting a graphic designer make a nice illustration to accompany their comment doesn’t count for much in my book.

The currently most promising attempt to probe quantum gravity indeed uses nanomechanical oscillators and comes from the group of Markus Aspelmeyer in Vienna. I previously discussed their work here. This group is about six orders of magnitude away from being able to measure such superpositions. The Nature comment doesn’t mention it either.

The prospects of using Bose-Einstein condensates to probe quantum gravity have been discussed back and forth for two decades, but it is clear that this isn’t presently the best option. The reason is simple: Even if you take the largest condensate that has been created to date – something like 10 million atoms – and you calculate the total mass, you are still way below the mass of the nanomechanical oscillators. And that’s leaving aside the difficulty of creating and sustaining the condensate.
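
To put rough numbers on this, here is my own back-of-the-envelope estimate, assuming rubidium-87 atoms and a nanogram-scale oscillator; neither figure is taken from a specific paper:

    # Back-of-the-envelope comparison: total mass of a large condensate vs. a nanomechanical oscillator.
    # Illustrative assumptions: rubidium-87 atoms, nanogram-scale oscillator.
    m_atom = 87 * 1.66e-27        # kg, one Rb-87 atom
    n_atoms = 1e7                 # order of magnitude of the largest condensates to date
    m_condensate = n_atoms * m_atom

    m_oscillator = 1e-12          # kg, a nanogram-scale mechanical oscillator (assumed)

    print(f"condensate mass ~ {m_condensate:.1e} kg")                    # ~ 1e-18 kg
    print(f"oscillator mass ~ {m_oscillator:.1e} kg")
    print(f"oscillator/condensate ~ {m_oscillator / m_condensate:.0e}")  # several orders of magnitude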

There are some other possible gravitational effects for Bose-Einstein condensates which have been investigated, but these come from violations of the equivalence principle, or rather the ambiguity of what the equivalence principle in quantum mechanics means to begin with. That’s a different story though because it’s not about measuring quantum superpositions of the gravitational field.

Besides this, there are other research directions. Paternostro and collaborators, for example, have suggested that a quantized gravitational field can exchange entanglement between objects in a way that a classical field can’t. That too, however, is a measurement which is not presently technologically feasible. A proposal closer to experimental test is that by Belenchia et al, laid out in their PRL “Tests of Quantum Gravity induced non-locality via opto-mechanical quantum oscillators” (which I wrote about here).

Others look for evidence of quantum gravity in the CMB, in gravitational waves, or search for violations of the symmetries that underlie General Relativity. You can find a little summary in my blogpost “How Can we test Quantum Gravity”  or in my Nautilus essay “What Quantum Gravity Needs Is More Experiments.”

Do Marletto and Vedral mention any of this research on quantum gravity phenomenology? No.

So, let’s take stock. Here, we have two scientists who don’t know anything about the topic they write about and who ignore the existing literature. They faintly reinvent an old idea without being aware of the well-known difficulties, without quantifying the prospects of ever measuring it, and without giving proper credits to those who previously wrote about it. And they get published in one of the most prominent scientific journals in existence.

Wow. This takes us to a whole new level of editorial incompetence.

The worst part isn’t even that Nature magazine claims my research area doesn’t exist. No, it’s that I’m a regular reader of the magazine – or at least have been so far – and rely on their editors to keep me informed about what happens in other disciplines. For example with the comment pieces. And let us be clear that these are, for all I know, invited comments and not selected from among unsolicited submissions. So, some editor deliberately chose these authors.

Now, in this rare case where I can judge their content’s quality, I find the Nature editors picked two people who have no idea what’s going on, who chew up 30-year-old ideas, and omit relevant citations of timely contributions.

Thus, for me the worst part is that I will henceforth have to suspect Nature’s coverage of other research areas is as miserable as this.

Really, doing as much as Googling “Quantum Gravity Phenomenology” is more informative than this Nature comment.

Friday, May 26, 2017

Can we probe the quantization of the black hole horizon with gravitational waves?


Tl;dr: Yes, but the testable cases aren’t the most plausible ones.

It’s the year 2017, but we still don’t know how space and time get along with quantum mechanics. The best clue so far comes from Stephen Hawking and Jacob Bekenstein. They made one of the most surprising finds that theoretical physics saw in the 20th century: Black holes have entropy.

It was a surprise because entropy is a measure for unresolved microscopic details, but in general relativity black holes don’t have details. They are almost featureless balls. That they nevertheless seem to have an entropy – and a gigantically large one at that – strongly indicates that black holes can only be understood by taking into account quantum effects of gravity. The large entropy, so the idea goes, quantifies all the ways the quantum structure of black holes can differ.

The Bekenstein-Hawking entropy scales with the horizon area of the black hole and is usually interpreted as a measure for the number of elementary areas of size Planck-length squared. A Planck-length is a tiny 10^-35 meters. This area-scaling is also the basis of the holographic principle which has dominated research in quantum gravity for some decades now. If anything is important in quantum gravity, this is.
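
For reference, the formula behind this statement – standard textbook material, not tied to any particular approach to quantum gravity – is

    S_{BH} \;=\; \frac{k_B\,A}{4\,\ell_P^2},
    \qquad
    \ell_P \;=\; \sqrt{\frac{\hbar G}{c^3}} \;\approx\; 1.6\times 10^{-35}\,\mathrm{m},

so, up to the factor 1/4, the entropy counts the number of Planck areas that tile the horizon.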

This interpretation implies that the area of the black hole horizon always has to be a multiple of the elementary Planck area. However, since the Planck area is so small compared to the size of astrophysical black holes – which range from some kilometers to some billion kilometers – you’d never notice the quantization just by looking at a black hole. If you got to look at it to begin with. So it seems like a safely untestable idea.

A few months ago, however, I noticed an interesting short note on the arXiv in which the authors claim that one can probe the black hole quantization with gravitational waves emitted from a black hole, for example in the ringdown after a merger event like the one seen by LIGO:
    Testing Quantum Black Holes with Gravitational Waves
    Valentino F. Foit, Matthew Kleban
    arXiv:1611.07009 [hep-th]

The basic idea is simple. Assume it is correct that the black hole area is always a multiple of the Planck area and that gravity is quantized so that it has a particle – the graviton – associated with it. If the only way for a black hole to emit a graviton is to change its horizon area in multiples of the Planck area, then this dictates the energy that the black hole loses when the area shrinks because the black hole’s area depends on the black hole’s mass. The Planck-area quantization hence sets the frequency of the graviton that is emitted.
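
To sketch the estimate in formulas (my paraphrase using the standard Schwarzschild relations, not the authors’ notation): the horizon area grows quadratically with the mass, so a change of the area by about one Planck area fixes the emitted energy,

    A = \frac{16\pi G^2 M^2}{c^4}
    \quad\Rightarrow\quad
    \Delta A = \frac{32\pi G^2 M}{c^4}\,\Delta M ,
    \qquad
    \Delta A \sim \ell_P^2
    \quad\Rightarrow\quad
    \Delta E = \Delta M\, c^2 \;\sim\; \frac{\hbar c^3}{G M} \;\sim\; \frac{\hbar c}{r_s}
    \quad \text{(up to numerical factors)},

which corresponds to a graviton wavelength of the order of the Schwarzschild radius r_s = 2GM/c².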

A gravitational wave is nothing but a large number of gravitons. According to the area quantization, the wavelengths of the emitted gravitons are of the order of the black hole radius, which is what one expects to dominate the emission during the ringdown. However, so the authors argue, the spectrum of the gravitational wave should be much narrower in the quantum case.

Since the model that quantizes the black hole horizon in Planck-area chunks depends on a free parameter, it would take two measurements of black hole ringdowns to rule out the scenario: The first to fix the parameter, the second to check whether the same parameter works for all measurements.

It’s a simple idea but it may be too simple. The authors are careful to list the possible reasons for why their argument might not apply. I think it doesn’t apply for a reason that’s a combination of what is on their list.

A classical perturbation of the horizon leads to a simultaneous emission of a huge number of gravitons, and for those there is no good reason why every single one of them must fit the exact emission frequency that belongs to an increase of one Planck area as long as the total energy adds up properly.

I am not aware, however, of a good theoretical treatment of this classical limit of the area-quantization. It might indeed not work in some of the more audacious proposals we have recently seen, like Gia Dvali’s idea that black holes are condensates of gravitons. Scenarios like Dvali’s might indeed be testable with the ringdown characteristics. I’m sure we will hear more about this in the coming years as LIGO accumulates data.

What this proposed test would do, therefore, is to probe the failure of reproducing general relativity for large oscillations of the black hole horizon. Clearly, it’s something that we should look for in the data. But I don’t think black holes will release their secrets quite as easily.

Thursday, April 06, 2017

Dear Dr. B: Why do physicists worry so much about the black hole information paradox?

    “Dear Dr. B,

    Why do physicists worry so much about the black hole information paradox, since it looks like there are several, more mundane processes that are also not reversible? One obvious example is the increase of the entropy in an isolated system and another one is performing a measurement according to quantum mechanics.

    Regards, Petteri”


Dear Petteri,

This is a very good question. Confusion orbits the information paradox like accretion disks orbit supermassive black holes. A few weeks ago, I figured even my husband doesn’t really know what the problem is, and not only does he have a PhD in physics, he has also endured me rambling about the topic for more than 15 years!

So, I’m happy to elaborate on why theorists worry so much about black hole information. There are two aspects to this worry: one scientific and one sociological. Let me start with the scientific aspect. I’ll comment on the sociology below.

In classical general relativity, black holes aren’t much trouble. Yes, they contain a singularity where curvature becomes infinitely large – and that’s deemed unphysical – but the singularity is hidden behind the horizon and does no harm.

As Stephen Hawking pointed out, however, if you take into account that the universe – even vacuum – is filled with quantum fields of matter, you can calculate that black holes emit particles, now called “Hawking radiation.” This combination of unquantized gravity with quantum fields of matter is known as “semi-classical” gravity, and it should be a good approximation as long as quantum effects of gravity can be neglected, which means as long as you’re not close to the singularity.

Illustration of black hole with jet and accretion disk. Image credits: NASA.


Hawking radiation consists of pairs of entangled particles. Of each pair, one particle falls into the black hole while the other one escapes. This leads to a net loss of mass of the black hole, i.e. the black hole shrinks. It loses mass until it is entirely evaporated and all that’s left are the particles of the Hawking radiation which escaped.

Problem is, the surviving particles don’t contain any information about what formed the black hole. And not only that, the information about the particles’ partners that went into the black hole is also lost. If you investigate the end-products of black hole evaporation, you therefore can’t tell what the initial state was; the only quantities you can extract are the total mass, charge, and angular momentum – the three “hairs” of black holes (plus one qubit). Black hole evaporation is therefore irreversible.



Irreversible processes however don’t exist in quantum field theory. In technical jargon, black holes can turn pure states into mixed states, something that shouldn’t ever happen. Black hole evaporation thus gives rise to an internal contradiction, or “inconsistency”: You combine quantum field theory with general relativity, but the result isn’t compatible with quantum field theory.

To address your questions: Entropy increase usually does not imply a fundamental irreversibility, but merely a practical one. Entropy increases because the probability to observe the reverse process is small. But fundamentally, any process is reversible: Unbreaking eggs, unmixing dough, unburning books – mathematically, all of this can be described just fine. We merely never see this happening because such processes would require exquisitely finetuned initial conditions. A large entropy increase makes a process irreversible in practice, but not irreversible in principle.

That is true for all processes except black hole evaporation. No amount of finetuning will bring back the information that was lost in a black hole. It’s the only known case of a fundamental irreversibility. We know it’s wrong, but we don’t know exactly what’s wrong. That’s why we worry about it.

The irreversibility in quantum mechanics, which you are referring to, comes from the measurement process, but black hole evaporation is irreversible already before a measurement was made. You could argue then, why should it bother us if everything we can possibly observe requires a measurement anyway? Indeed, that’s an argument which can and has been made. But in and by itself it doesn’t remove the inconsistency. You still have to demonstrate just how to reconcile the two mathematical frameworks.

This problem has attracted so much attention because the mathematics is so clear-cut and the implications are so deep. Hawking evaporation relies on the quantum properties of matter fields, but it does not take into account the quantum properties of space and time. It is hence widely believed that quantizing space-time is necessary to remove the inconsistency. Figuring out just what it would take to prevent information loss would teach us something about the still unknown theory of quantum gravity. Black hole information loss, therefore, is a lovely logical puzzle with large potential pay-off – that’s what makes it so addictive.

Now some words on the sociology. It will not have escaped your attention that the problem isn’t exactly new. Indeed, its origin predates my birth. Thousands of papers have been written about it during my lifetime, and hundreds of solutions have been proposed, but theorists just can’t agree on one. The reason is that they don’t have to: For the black holes which we observe (e.g. at the center of our galaxy), the temperature of the Hawking radiation is so tiny there’s no chance of measuring any of the emitted particles. And so, black hole evaporation is the perfect playground for mathematical speculation.

[Lots of Papers. Img: 123RF]
There is an obvious solution to the black hole information loss problem which was pointed out already in early days. The reason that black holes destroy information is that whatever falls through the horizon ends up in the singularity where it is ultimately destroyed. The singularity, however, is believed to be a mathematical artifact that should no longer be present in a theory of quantum gravity. Remove the singularity and you remove the problem.

Indeed, Hawking’s calculation breaks down when the black hole has lost almost all of its mass and has become so small that quantum gravity is important. This would mean the information would just come out in the very late, quantum gravitational, phase and no contradiction ever occurs.

This obvious solution, however, is also inconvenient because it means that nothing can be calculated if one doesn’t know what happens near the singularity and in strong curvature regimes, which would require quantum gravity. It is, therefore, not a fruitful idea. Not many papers can be written about it and not many have been written about it. It’s much more fruitful to assume that something else must go wrong with Hawking’s calculation.

Sadly, if you dig into the literature and try to find out on which grounds the idea that information comes out in the strong curvature phase was discarded, you’ll find it’s mostly sociology and not scientific reasoning.

If the information is kept by the black hole until late, this means that small black holes must be able to keep many different combinations of information inside. There are a few papers which have claimed that these black holes then must emit their information slowly, which means small black holes would behave like a practically infinite number of different particles. In this case, so the claim goes, they should be produced in infinite amounts even in weak background fields (say, near Earth), which is clearly incompatible with observation.

Unfortunately, these arguments are based on an unwarranted assumption, namely that the interior of small black holes has a small volume. In GR, however, there isn’t any obvious relation between surface area and volume because space can be curved. The assumption that such small black holes, for which quantum gravity is strong, can be effectively described as particles is equally shaky. (For details and references, please see this paper I wrote with Lee some years ago.)

What happened, to make a long story short, is that Lenny Susskind wrote a dismissive paper about the idea that information is kept in black holes until late. This dismissal gave everybody else the opportunity to claim that the obvious solution doesn’t work and to henceforth produce endless amounts of papers on other speculations.

Excuse the cynicism, but that’s my take on the situation. I’ll even admit having contributed to the paper pile because that’s how academia works. I too have to make a living somehow.

So that’s the other reason why physicists worry so much about the black hole information loss problem: Because it’s speculation unconstrained by data, it’s easy to write papers about it, and there are so many people working on it that citations aren’t hard to come by either.

Thanks for an interesting question, and sorry for the overly honest answer.

Friday, January 13, 2017

What a burst! A fresh attempt to see space-time foam with gamma ray bursts.

It’s an old story: Quantum fluctuations of space-time might change the travel-time of light. Light of higher frequencies would be a little faster than that of lower frequencies. Or slower, depending on the sign of an unknown constant. Either way, the spectral colors of light would run apart, or ‘disperse’ as they say if they don’t want you to understand what they say.

Such quantum gravitational effects are minuscule, but added up over long distances they can become observable. Gamma ray bursts are therefore ideal to search for evidence of such an energy-dependent speed of light. Indeed, the energy-dependent speed of light has been searched for and not been found, and that could have been the end of the story.
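
The standard phenomenological parameterization of such a delay – simplified here, in particular neglecting the cosmological expansion that the actual analyses take into account – is

    \Delta t \;\simeq\; \pm\,\frac{n+1}{2}\,\left(\frac{E}{E_{\rm QG}}\right)^{\!n}\frac{D}{c},
    \qquad n = 1 \ \text{or}\ 2,

so for E_QG near the Planck energy the delay of a single photon is tiny, but it accumulates over the gigaparsec distances that gamma ray burst photons travel.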

Of course it wasn’t, because rather than giving up on the idea, the researchers who’d been working on it made their models for the spectral dispersion increasingly complicated and became more inventive when fitting them to unwilling data. The last thing I saw on the topic was a linear regression with multiple curves of freely chosen offset – a sure way to fit any kind of data on straight lines of any slope – and various ad-hoc assumptions to discard data that just didn’t want to fit, such as energy cuts or changes in the slope.

These attempts were so desperate I didn’t even mention them previously because my grandma taught me if you have nothing nice to say, say nothing.

But here’s a new twist to the story, so now I have something to say, and something nice in addition.

On June 25, 2016, the Fermi Telescope recorded a truly remarkable burst. The event, GRB 160625B, had a total duration of 770 seconds and three separate sub-bursts, with the second, and largest, sub-burst lasting 35 seconds (!). This has to be contrasted with the typical burst, which lasts a few seconds in total.

This gamma ray burst for the first time allowed researchers to clearly quantify the relative delay of the different energy channels. The analysis can be found in this paper:
    A New Test of Lorentz Invariance Violation: the Spectral Lag Transition of GRB 160625B
    Jun-Jie Wei, Bin-Bin Zhang, Lang Shao, Xue-Feng Wu, Peter Mészáros
    arXiv:1612.09425 [astro-ph.HE]

Unlike type Ia supernovae, which have very regular profiles, gamma ray bursts are one of a kind and can therefore be compared only to themselves. This makes it very difficult to tell whether or not the highly energetic parts of the emission are systematically delayed, because one doesn’t know when they were emitted. Until now, analyses relied on some way of guessing the peaks in three different energy channels and (basically) assuming they were emitted simultaneously. This procedure sometimes relied on as little as one or two photons per peak. Not an analysis you should put a lot of trust in.

But the second sub-burst of GRB 160625B was so bright, the researchers could break it down into 38 energy channels – and the counts were still high enough to calculate the cross-correlation from which the (most likely) time-lag can be extracted.

Here are the 38 energy channels for the second sub-burst

Fig 1 from arXiv:1612.09425


For the 38 energy channels they calculate 37 delay-times relative to the lowest energy channel, shown in the figure below. I find it a somewhat confusing convention, but in their nomenclature a positive time-lag corresponds to an earlier arrival time. The figure therefore shows that the photons of higher energy arrive earlier. The trend, however, isn’t monotonically increasing. Instead, it turns around at a few GeV.

Fig 2 from arXiv:1612.09425


The authors then discuss a simple model to fit the data. First, they assume that the emission has an intrinsic energy-dependence due to astrophysical effects which cause a positive lag. They model this with a power-law that has two free parameters: an exponent and an overall pre-factor.

Second, they assume that the effect during propagation – presumably from the space-time foam – causes a negative lag. For the propagation-delay they also make a power-law ansatz which is either linear or quadratic. This ansatz has one free parameter which is an energy scale (expected to be somewhere at the Planck energy).

In total they then have three free parameters, for which they calculate the best-fit values. The fitted curves are also shown in the image above, labeled n=1 (linear) and n=2 (quadratic). At some energy, the propagation-delay becomes more relevant than the intrinsic delay, which leads to the turn-around of the curve.
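
For concreteness, here is a schematic version of such a three-parameter fit in Python. This is my own sketch of the kind of parameterization described above, not the authors’ code: the cosmological distance factor is folded into a single constant, and all numerical values are placeholders.

    # Schematic two-component lag model: intrinsic (positive) power-law lag plus a
    # propagation (negative) lag set by the quantum gravity scale E_QG.
    # Not the authors' analysis; constants and "data" below are placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    E0 = 1.0e-5        # GeV, reference (lowest) energy channel, illustrative
    KAPPA = 1.0e17     # s*GeV^n, stands in for the distance/redshift factor, illustrative
    N = 1              # 1 = linear, 2 = quadratic propagation effect

    def lag_model(E, tau, alpha, log10_EQG):
        """Observed lag (s) at photon energy E (GeV), relative to the E0 channel."""
        E_QG = 10.0 ** log10_EQG
        intrinsic = tau * ((E / E0) ** alpha - 1.0)              # astrophysical, positive
        propagation = -KAPPA * (E ** N - E0 ** N) / E_QG ** N    # dispersion, negative
        return intrinsic + propagation

    # Placeholder "measurement": lags generated from assumed parameters, then re-fitted.
    E_data = np.logspace(-5, 1, 38)                      # GeV, 38 channels
    lag_data = lag_model(E_data, 5.0, 0.3, 16.0)
    popt, _ = curve_fit(lag_model, E_data, lag_data, p0=[1.0, 0.5, 15.0])
    print("best-fit tau, alpha, log10(E_QG):", popt)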

The best-fit value of the quantum gravity energy is 10^q GeV with q=15.66 for the linear and q=7.17 for the quadratic case. From this they extract a lower limit on the quantum gravity scale at the 1 sigma confidence level, which is 0.5 x 10^16 GeV for the linear and 1.4 x 10^7 GeV for the quadratic case. As you can see in the above figure, the data in the high energy bins has large error-bars owing to the low total count, so the evidence that there even is a drop isn’t all that great.

I still don’t buy that there’s evidence for space-time foam to be found here, but I have to admit that this data finally convinces me that at least there is a systematic lag in the spectrum. That’s the nice thing I have to say.

Now to the not-so-nice. If you want to convince me that some part of the spectral distortion is due to a propagation-effect, you’ll have to show me evidence that its strength depends on the distance to the source. That is, in my opinion, the only way to make sure one doesn’t merely look at delays present already at emission. And even if you’d done that, I still wouldn’t be convinced that it has anything to do with space-time foam.

I’m skeptical of this because the theoretical backing is sketchy. Quantum fluctuations of space-time in any candidate theory for quantum gravity do not lead to this effect. One can work with phenomenological models in which such effects are parameterized and incorporated as new physics into the known theories. This is all well and fine. Unfortunately, in this case existing data already constrains the parameters so much that the effect on the propagation of light is unmeasurably small. It’s already ruled out. Such models introduce a preferred frame and break Lorentz-invariance, and there are loads of data speaking against that.

It has been claimed that the existing constraints from Lorentz-invariance violation can be circumvented if Lorentz-invariance is not broken but instead deformed. In this case the effective field theory limit supposedly doesn’t apply. This claim is also quoted in the paper above (see end of section 3). However, if you look at the references in question, you will not find any argument for how one manages to avoid this. Even if one can make such an argument (I believe it’s possible, I’m not sure why it hasn’t been done), the idea suffers from various other theoretical problems that, to make a very long story very short, make me think the quantum gravity-induced spectral lag is highly implausible.

However, leaving aside my theory-bias, this newly proposed model with two overlaid sources for the energy-dependent time-lag is simple and should be straightforward to test. Most likely we will soon see another paper evaluating how well the model fits other bursts on record. So stay tuned, something’s happening here.

Friday, December 02, 2016

Can dark energy and dark matter emerge together with gravity?

A macaroni pie? Elephants blowing balloons? No, it’s Verlinde’s entangled universe.
In a recent paper, the Dutch physicist Erik Verlinde explains how dark energy and dark matter arise in emergent gravity as deviations from general relativity.

It’s taken me some while to get through the paper. Vaguely titled “Emergent Gravity and the Dark Universe,” it’s a 51-page catalog of ideas patched together from general relativity, quantum information, quantum gravity, condensed matter physics, and astrophysics. It is clearly still research in progress and not anywhere close to completion.

The new paper substantially expands on Verlinde’s earlier idea that the gravitational force is some type of entropic force. If that was so, it would mean gravity is not due to the curvature of space-time – as Einstein taught us – but instead caused by the interaction of the fundamental elements which make up space-time. Gravity, hence, would be emergent.

I find it an appealing idea because it allows one to derive consequences without having to specify exactly what the fundamental constituents of space-time are. Like you can work out the behavior of gases under pressure without having a model for atoms, you can work out the emergence of gravity without having a model for whatever builds up space-time. The details would become relevant only at very high energies.

As I noted in a comment on the first paper, Verlinde’s original idea was merely a reinterpretation of gravity in thermodynamic quantities. What one really wants from emergent gravity, however, is not merely to get back general relativity. One wants to know which deviations from general relativity come with it, deviations that are specific predictions of the model and which can be tested.

Importantly, in emergent gravity such deviations from general relativity could make themselves noticeable at long distances. The reason is that the criterion for what it means for two points to be close by each other emerges with space-time itself. Hence, in emergent gravity there isn’t a priori any reason why new physics must be at very short distances.

In the new paper, Verlinde argues that his variant of emergent gravity gives rise to deviations from general relativity on long distances, and these deviations correspond to dark energy and dark matter. He doesn’t explain dark energy itself. Instead, he starts with a universe that by assumption contains dark energy like the one we observe, i.e. one that has a positive cosmological constant. Such a universe is described approximately by what theoretical physicists call a de-Sitter space.

Verlinde then argues that when one interprets this cosmological constant as the effect of long-distance entanglement between the conjectured fundamental elements, then one gets a modification of the gravitational law which mimics dark matter.

The reason it works is that to get normal gravity one assigns an entropy to a volume of space which scales with the area of the surface that encloses the volume. This is known as the “holographic scaling” of entropy, and is at the core of Verlinde’s first paper (and earlier work by Jacobson and Padmanabhan and others). To get deviations from normal gravity, one has to do something else. For this, Verlinde argues that de Sitter space is permeated by long-distance entanglement which gives rise to an entropy that scales, not with the surface area of a volume, but with the volume itself. It consequently leads to a different force-law. And this force-law, so he argues, has an effect very similar to dark matter.

Not only does this modified force-law from the volume-scaling of the entropy mimic dark matter, it more specifically reproduces some of the achievements of modified gravity.

In his paper, Verlinde derives the observed relation between the luminosity of spiral galaxies and the rotational velocity of their outermost stars, known as the Tully-Fisher relation. The Tully-Fisher relation can also be found in certain modifications of gravity, such as Moffat Gravity (MOG), but more generally in every modification that approximates Milgrom’s Modified Newtonian Dynamics (MOND). Verlinde, however, does more than that. He also derives the parameter which quantifies the acceleration at which the modification of general relativity becomes important, and gets a value that fits well with observations.

It was known before that this parameter is related to the cosmological constant. There have been various attempts to exploit this relation, most recently by Lee Smolin. In Verlinde’s approach the relation between the acceleration scale and the cosmological constant comes out naturally, because dark matter has the same origin as dark energy. Verlinde further offers expressions for the apparent density of dark matter in galaxies and clusters, something that, with some more work, can probably be checked observationally.
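
For context, the numerical coincidence behind this relation – known long before Verlinde’s paper – is that the acceleration scale at which rotation curves start to deviate from the Newtonian expectation is of the order of

    a_0 \;\sim\; c\,H_0 \;\sim\; c^2\sqrt{\Lambda/3} \;\approx\; 10^{-10}\ \mathrm{m/s^2},

up to a numerical factor of order a few.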

I find this an intriguing link which suggests that Verlinde is onto something. However, I also find the model sketchy and unsatisfactory in many regards. General Relativity is a rigorously tested theory with many achievements. To do any better than general relativity is hard, and thus for any new theory of gravity the most important thing is to have a controlled limit in which General Relativity is reproduced to good precision. How this might work in Verlinde’s approach isn’t clear to me because he doesn’t even attempt to deal with the general case. He starts right away with cosmology.

Now in cosmology we have a preferred frame which is given by the distribution of matter (or by the restframe of the CMB if you wish). In general relativity this preferred frame does not originate in the structure of space-time itself but is generated by the stuff in it. In emergent gravity models, in contrast, the fundamental structure of space-time tends to have an imprint of the preferred frame. This fundamental frame can lead to violations of the symmetries of general relativity and the effects aren’t necessarily small. Indeed, there are many experiments that have looked for such effects and haven’t found anything. It is hence a challenge for any emergent gravity approach to demonstrate just how to avoid such violations of symmetries.

Another potential problem with the idea is the long-distance entanglement which is sprinkled over the universe. The physics which we know so far works “locally,” meaning stuff can’t interact over long distances without a messenger that travels through space and time from one to the other point. It’s the reason my brain can’t make spontaneous visits to the Andromeda nebula, and most days I think that benefits both of us. But like that or not, the laws of nature we presently have are local, and any theory of emergent gravity has to reproduce that.

I have worked for some years on non-local space-time defects, and based on what I learned from that I don’t think the non-locality of Verlinde’s model is going to be a problem. My non-local defects aren’t the same as Verlinde’s entanglement, but guessing that the observational consequences scale similarly, the amount of entanglement that you need to get something like a cosmological constant is too small to leave any other noticeable effects on particle physics. I am therefore more worried about the recovery of local Lorentz-invariance. I went to great pains in my models to make sure I wouldn’t get such violations, and I can’t see how Verlinde addresses the issue.

The more general problem I have with Verlinde’s paper is the same I had with his 2010 paper, which is that it’s fuzzy. It remained unclear to me what exactly the necessary assumptions are. I hence don’t know whether it’s really necessary to have this interpretation with the entanglement and the volume-scaling of the entropy and with assigning elasticity to the dark energy component that pushes in on galaxies. Maybe it would be sufficient already to add a non-local modification to the sources of general relativity. Having toyed with that idea for a while, I doubt it. But I think Verlinde’s approach would benefit from a more axiomatic treatment.

In summary, Verlinde’s recent paper offers the most convincing argument I have seen so far that dark matter and dark energy are related. However, it is presently unclear whether this approach has unwanted side-effects that are already in conflict with observation.

Wednesday, November 30, 2016

Dear Dr. B: What is emergent gravity?

    “Hello Sabine, I've seen a couple of articles lately on emergent gravity. I'm not a scientist so I would love to read one of your easy-to-understand blog entries on the subject.

    Regards,

    Michael Tucker
    Wichita, KS”

Dear Michael,

Emergent gravity has been in the news lately because of a new paper by Erik Verlinde. I’ll tell you some more about that paper in an upcoming post, but answering your question makes for a good preparation.

The “gravity” in emergent gravity refers to the theory of general relativity in the regimes where we have tested it. That means Einstein’s field equations and curved space-time and all that.

The “emergent” means that gravity isn’t fundamental, but instead can be derived from some underlying structure. That’s what we mean by “emergent” in theoretical physics: If theory B can be derived from theory A but not the other way round, then B emerges from A.

You might be more familiar with seeing the word “emergent” applied to objects or properties of objects, which is another way physicists use the expression. Sound waves in the theory of gases, for example, emerge from molecular interactions. Van der Waals forces emerge from quantum electrodynamics. Protons emerge from quantum chromodynamics. And so on.

Everything that isn’t in the standard model or general relativity is known to be emergent already. And since I know that it annoys so many of you, let me point out again that, yes, to our current best knowledge this includes cells and brains and free will. Fundamentally, you’re all just a lot of interacting particles. Get over it.

General relativity and the standard model are currently the most fundamental descriptions of nature which we have. For the theoretical physicist, the interesting question is then whether these two theories are also emergent from something else. Most physicists in the field think the answer is yes. And any theory in which general relativity – in the tested regimes – is derived from a more fundamental theory is a case of “emergent gravity.”

That might not sound like such a new idea and indeed it isn’t. In string theory, for example, gravity – like everything else – “emerges” from, well, strings. There are a lot of other attempts to explain gravitons – the quanta of the gravitational interaction – as not-fundamental “quasi-particles” which emerge, much like sound-waves, because space-time is made of something else. An example of this is the model pursued by Xiao-Gang Wen and collaborators in which space-time, and matter, and really everything is made of qubits. Including cells and brains and so on.

Xiao-Gang’s model stands out because it can also include the gauge-groups of the standard model, though last time I looked chirality was an issue. But there are many other models of emergent gravity which focus on just getting general relativity. Lorenzo Sindoni has written a very useful, though quite technical, review of such models.

Almost all such attempts to have gravity emerge from some underlying “stuff” run into trouble because the “stuff” defines a preferred frame which shouldn’t exist in general relativity. They violate Lorentz-invariance, which we know observationally is fulfilled to very high precision.

An exception to this is entropic gravity, an idea pioneered by Ted Jacobson 20 years ago. Jacobson pointed out that there are very close relations between gravity and thermodynamics, and this research direction has since gained a lot of momentum.

The relation between general relativity and thermodynamics in itself doesn’t make gravity emergent, it’s merely a reformulation of gravity. But thermodynamics itself is an emergent theory – it describes the behavior of very large numbers of some kind of small things. Hence, that gravity looks a lot like thermodynamics makes one think that maybe it’s emergent from the interaction of a lot of small things.
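
Jacobson’s observation, schematically (a compressed paraphrase of his 1995 argument, not a derivation): demanding that the Clausius relation holds for every local Rindler horizon, with the Unruh temperature and an entropy proportional to the horizon area, reproduces Einstein’s field equations,

    \delta Q = T\,\delta S,
    \qquad
    T = \frac{\hbar\,a}{2\pi k_B c},
    \qquad
    S = \frac{k_B A}{4\,\ell_P^2}
    \;\;\Longrightarrow\;\;
    G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}.

In this sense the Einstein equations behave like an equation of state, which is what motivates thinking of gravity as emergent.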

What are the small things? Well, the currently best guess is that they’re strings. That’s because string theory is (at least to my knowledge) the only way to avoid the problems with Lorentz-invariance violation in emergent gravity scenarios. (Gravity is not emergent in Loop Quantum Gravity – its quantized version is directly encoded in the variables.)

But as long as you’re not looking at very short distances, it might not matter much exactly what gravity emerges from. Like thermodynamics was developed before it could be derived from statistical mechanics, we might be able to develop emergent gravity before we know what to derive it from.

This is only interesting, however, if the gravity that “emerges” is only approximately identical to general relativity, and differs from it in specific ways. For example, if gravity is emergent, then the cosmological constant and/or dark matter might emerge with it, whereas in our current formulation, these have to be added as sources for general relativity.

So, in summary “emergent gravity” is a rather vague umbrella term that encompasses a large number of models in which gravity isn’t a fundamental interaction. The specific theory of emergent gravity which has recently made headlines is better known as “entropic gravity” and is, I would say, the currently most promising candidate for emergent gravity. It’s believed to be related to, or maybe even be part of string theory, but if there are such links they aren’t presently well understood.

Thanks for an interesting question!

Aside: Sorry about the issue with the comments. I turned on G+ comments, thinking they'd be displayed in addition, but that instead removed all the other comments. So I've reset this to the previous version, though I find it very cumbersome to have to follow four different comment threads for the same post.

Monday, November 07, 2016

Steven Weinberg doesn’t like Quantum Mechanics. So what?

A few days ago, Nobel laureate Steven Weinberg gave a one-hour lecture titled “What’s the matter with quantum mechanics?” at a workshop for science writers organized by the Council for the Advancement of Science Writing (CASW).

In his lecture, Weinberg expressed a newfound sympathy for the critics of quantum mechanics.
“I’m not as happy about quantum mechanics as I used to be, and not as dismissive of the critics. And it’s a bad sign in particular that those physicists who are happy about quantum mechanics, who see nothing wrong with it, don’t agree with each other about what it means.”
You can watch the full lecture here. (The above quote is at 17:40.)


It’s become a cliché that physicists in their late years develop an obsession with quantum mechanics. On this account, you can file Weinberg together with Mermin and Penrose and Smolin. I’m not sure why that is. Maybe it’s something which has bothered them all along, they just never saw it as important enough. Maybe it’s because they start paying more attention to their intuition, and quantum mechanics – widely regarded as non-intuitive – begins itching. Or maybe it’s because they conclude it’s the likely reason we haven’t seen any progress in the foundations of physics for 30 years.

Whatever Weinberg’s motivation, he likes neither Copenhagen, nor Many Worlds, nor decoherent or consistent histories, and he seems to be allergic to pilot waves (1:02:15). As for QBism, which Mermin finds so convincing, that doesn’t even seem noteworthy to Weinberg.

I learned quantum mechanics in the mid-1990s from Walter Greiner, the one with the textbook series. (He passed away a few weeks ago at age 80.) Walter taught the Copenhagen Interpretation. The attitude he conveyed in his lectures was what Mermin dubbed “shut up and calculate.”

Of course I, like most other students, spent some time looking into the different interpretations of quantum mechanics – nothing’s more interesting than the topics your prof refuses to talk about. But I’m an instrumentalist at heart, and I also quite like the mathematics of quantum mechanics, so I never had a problem with the Copenhagen Interpretation. I’m also, however, a phenomenologist. And so I’ve always thought of quantum mechanics as an incomplete, not fundamental, theory which needs to be superseded by a better, underlying explanation.

My misgivings about quantum mechanics are pretty much identical to the ones which Weinberg expresses in his lecture. The axioms of quantum mechanics, whatever interpretation you choose, are unsatisfactory for a reductionist. They should not mention the process of measurement, because the fundamental theory should tell you what a measurement is.

If you believe the wave-function is a real thing (psi-ontic), decoherence doesn’t solve the issue because you’re left with a probabilistic state that needs to be suddenly updated. If you believe the wave-function only encodes information (psi-epistemic) and the update merely means we’ve learned something new, then you have to explain who learns and how they learn. None of the currently existing interpretations address these issues satisfactorily.

It isn’t so surprising I’m with Weinberg on this because despite attending Greiner’s lectures, I never liked Greiner’s textbooks. That we students were more or less forced to buy them didn’t make them any more likable. So I scraped together my Deutsche Marks and bought Weinberg’s textbooks, which I loved for the concise mathematical approach.

I learned both general relativity and quantum field theory from Weinberg’s textbooks. I also later bought Weinberg’s lectures on Quantum Mechanics which appeared in 2013, but haven’t actually read them, except for section 3.7, where he concludes that:
“[T]oday there is no interpretation of quantum mechanics that does not have serious flaws, and [we] ought to take seriously the possibility of finding some more satisfactory other theory, to which quantum mechanics is merely a good approximation.”
It’s not much of a secret that I’m a fan of non-local hidden variables (aka superdeterminism), which I believe to be experimentally testable. To my huge frustration, however, I haven’t been able to find an experimental group willing and able to do that. I am therefore happy that Weinberg emphasizes the need to find a better theory, and to also look for experimental evidence. I don’t know what he thinks of superdeterminism. But superdeterminism or something else, I think probing quantum mechanics in new regimes is the best shot we presently have at making progress on the foundations of physics.

I therefore don’t understand the ridicule aimed at those who think that quantum mechanics needs an overhaul. Being unintuitive and feeling weird doesn’t make a theory wrong – we can all agree on this. We don’t even have to agree it’s unintuitive – I actually don’t think so. Intuition comes with use. Even if you can’t stomach the math, you can build your quantum intuition for example by playing “Quantum Moves,” a video game that crowd-sources players’ solutions for quantum mechanical optimization problems. Interestingly, humans do better than algorithms (at least for now).

[Weinberg (left), getting some kind of prize or title. Don't know for what. Image: CASW]
So, yeah, maybe quantum physics isn’t weird. And even if it is, being weird doesn’t make it wrong. Maybe you therefore don’t think it’s a promising research avenue to pursue. Fine, then don’t. But before you make jokes about physicists who rely on their intuition, let us be clear that being ugly doesn’t make a theory wrong either. And yet it’s presently entirely acceptable to develop new theories with the only aim of prettifying the existing ones.

I don’t think for example that numerological coincidences are problems worth thinking about – they’re questions of aesthetic appeal. The mass of the Higgs is much smaller than the Planck mass. So what? The spatial curvature of the universe is almost zero, the cosmological constant tiny, and the electric dipole moment of the neutron is for all we know absent. Why should that bother me? If you think that’s a mathematical inconsistency, think again – it’s not. There’s no logical reason for why that shouldn’t be so. It’s just that to our human sense it doesn’t quite feel right.

A huge amount of work has gone into curing these “problems” because finetuned constants aren’t thought of as beautiful. But in my eyes the cures are all worse than the disease: Solutions usually require the introduction of additional fields and potentials for these fields and personally I think it’s much preferable to just have a constant – is there any axiom simpler than that?

The difference between the two research areas is that there are tens of thousands of theorists trying to make the fundamental laws of nature less ugly, but only a few hundred working on making them less weird. That in and by itself is reason to shift focus to quantum foundations, just because it’s the path less trodden and more left to explore.

But maybe I’m just old beyond my years. So I’ll shut up now and go back to my calculations.

Tuesday, September 27, 2016

Dear Dr B: What do physicists mean by “quantum gravity”?

[Image Source: giphy.com]
“please could you give me a simple definition of "quantum gravity"?

J.”

Dear J,

With “quantum gravity,” physicists refer not so much to a specific theory as to the sought-after solution to various problems in the established theories. The most pressing problem is that the standard model combined with general relativity is internally inconsistent. If we just use both as they are, we arrive at conclusions which do not agree with each other. So just throwing them together doesn’t work. Something else is needed, and that something else is what we call quantum gravity.

Unfortunately, the effects of quantum gravity are very small, so presently we have no observations to guide theory development. In all experiments made so far, it’s sufficient to use unquantized gravity.

Nobody knows how to combine a quantum theory – like the standard model – with a non-quantum theory – like general relativity – without running into difficulties (except for me, but nobody listens). Therefore the main strategy has become to find a way to give quantum properties to gravity. Or, since Einstein taught us gravity is nothing but the curvature of space-time, to give quantum properties to space and time.

Just combining quantum field theory with general relativity doesn’t work because, as confirmed by countless experiments, all the particles we know have quantum properties. This means (among many other things) they are subject to Heisenberg’s uncertainty principle and can be in quantum superpositions. But they also carry energy and hence should create a gravitational field. In general relativity, however, the gravitational field can’t be in a quantum superposition, so it can’t be directly attached to the particles, as it should be.

One can try to find a solution to this conundrum, for example by not directly coupling the energy (and related quantities like mass, pressure, momentum flux and so on) to gravity, but instead only coupling the average value, which behaves more like a classical field. This solves one problem, but creates a new one. The average value of a quantum state must be updated upon measurement. This measurement postulate is a non-local prescription and general relativity can’t deal with it – after all Einstein invented general relativity to get rid of the non-locality of Newtonian gravity. (Neither decoherence nor many worlds remove the problem, you still have to update the probabilities, somehow, somewhere.)
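
Written out, this “coupling to the average value” is the semi-classical Einstein equation,

    G_{\mu\nu} \;=\; \frac{8\pi G}{c^4}\,\langle \psi|\hat{T}_{\mu\nu}|\psi\rangle,

with classical curvature on the left and the expectation value of the quantum stress-energy operator on the right; the measurement update of |ψ⟩ is exactly the non-local prescription that general relativity can’t deal with.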

The quantum field theories of the standard model and general relativity clash in other ways. If we try to understand the evaporation of black holes, for example, we run into another inconsistency: Black holes emit Hawking-radiation due to quantum effects of the matter fields. This radiation doesn’t carry information about what formed the black hole. And so, if the black hole entirely evaporates, this results in an irreversible process because from the end-state one can’t infer the initial state. This evaporation however can’t be accommodated in a quantum theory, where all processes can be time-reversed – it’s another contradiction that we hope quantum gravity will resolve.

Then there is the problem with the singularities in general relativity. Singularities, where the space-time curvature becomes infinitely large, are not mathematical inconsistencies. But they are believed to be physical nonsense. Using dimensional analysis, one can estimate that the effects of quantum gravity should become large close by the singularities. And so we think that quantum gravity should replace the singularities with a better-behaved quantum space-time.

The sought-after theory of quantum gravity is expected to solve these three problems: tell us how to couple quantum matter to gravity, explain what happens to information that falls into a black hole, and avoid singularities in general relativity. Any theory which achieves this we’d call quantum gravity, whether or not you actually get it by quantizing gravity.

Physicists are presently pursuing various approaches to a theory of quantum gravity, notably string theory, loop quantum gravity, asymptotically safe gravity, and causal dynamical triangulation, to name just the most popular ones. But none of these approaches has experimental evidence speaking for it. Indeed, so far none of them has made a testable prediction.

This is why, in the area of quantum gravity phenomenology, we’re bridging the gap between theory and experiment with simplified models, some of which are motivated by specific approaches (hence: string phenomenology, loop quantum cosmology, and so on). These phenomenological models don’t aim to directly solve the above-mentioned problems; they merely provide a mathematical framework – consistent in its range of applicability – to quantify and hence test the presence of effects that could be signals of quantum gravity, for example space-time fluctuations, violations of the equivalence principle, deviations from general relativity, and so on.

Thanks for an interesting question!

Thursday, September 15, 2016

Experimental Search for Quantum Gravity 2016

Research in quantum gravity is quite a challenge since we neither have a theory nor data. But some of us like a challenge.

So far, most effort in the field has gone into using requirements of mathematical consistency to construct a theory. It is impossible of course to construct a theory based on mathematical consistency alone, because we can never prove our assumptions to be true. All we know is that the assumptions give rise to good predictions in the regime where we’ve tested them. Without assumptions, no proof. Still, you may hope that mathematical consistency tells you where to look for observational evidence.

But in the second half of the 20th century, theorists have used the weakness of gravity as an excuse not to think about how to experimentally test quantum gravity at all. This isn’t merely a sign of laziness, it’s a throwback to the days when philosophers believed they could find out how nature works by introspection. Just that now many theoretical physicists believe mathematical introspection is science. Particularly disturbing to me is how frequently I speak with students or young postdocs who have never even given thought to the question of what makes a theory scientific. That’s one of the reasons the disconnect between physics and philosophy worries me.

In any case, the cure clearly isn’t more philosophy, but more phenomenology. The effects of quantum gravity aren’t necessarily entirely out of experimental reach. Gravity isn’t generally a weak force, not in the same way that, for example, the weak nuclear force is weak. That’s because the effects of gravity get stronger with the amount of mass (or energy) that exerts the force. Indeed, this property of the gravitational force is the very reason why it’s so hard to quantize.

Quantum gravitational effects hence were strong in the early universe, they are strong inside black holes, and they can be non-negligible for massive objects that have pronounced quantum properties. Furthermore, the theory of quantum gravity can be expected to give rise to deviations from general relativity or the symmetries of the standard model, which can have consequences that are observable even at low energies.

The often repeated argument that we’d need to reach enormously high energies – close to the Planck energy, 16 orders of magnitude higher than LHC energies – is simply wrong. Physics is full of examples of short-distance phenomena that give rise to effects at longer distances, such as atoms causing Brownian motion, or quantum electrodynamics allowing stable atoms to begin with.

I have spent the last 10 years or so studying the prospects to find experimental evidence for quantum gravity. Absent a fully-developed theory we work with models to quantify effects that could be signals of quantum gravity, and aim to test these models with data. The development of such models is relevant to identify promising experiments to begin with.

Next week, we will hold the 5th international conference on Experimental Search for Quantum Gravity, here in Frankfurt. And I dare to say we have managed to pull together an awesome selection of talks.

We’ll hear about the prospects of finding evidence for quantum gravity in the CMB (Bianchi, Krauss, Vennin) and in quantum oscillators (Paternostro). We have a lecture about the interface between gravity and quantum physics, both on long and short distances (Fuentes), and a talk on how to look for moduli and axion fields that are generic consequences of string theory (Conlon). Of course we’ll also cover Loop Quantum Cosmology (Barrau), asymptotically safe gravity (Eichhorn), and causal sets (Glaser). We’re super up-to-date by having a talk about constraints from the LIGO gravitational wave-measurements on deviations from general relativity (Yunes), and several of the usual suspects speaking about deviations from Lorentz-invariance (Mattingly), Planck stars (Rovelli, Vidotto), vacuum dispersion (Giovanni), and dimensional reduction (Magueijo). There’s neutrino physics (Paes), a talk about what the cosmological constant can tell us about new physics (Afshordi), and, and, and!

You can download the abstracts here and the timetable here.

But best of all, I’m not telling you this to depress you because you can’t be with us: our IT guys tell me we’ll both record the talks and livestream them (to the extent that the speakers consent, of course). I’ll share the URL with you here once everything is set up, so stay tuned.

Update: The streaming link will be posted on the institute's main page shortly before the event. Another update: The livestream is available here.

Wednesday, August 24, 2016

What if the universe was like a pile of laundry?

    What if the universe was like a pile of laundry?

    Have one.

    See this laundry pile? Looks just like our universe.

    No?

    Here, have another.

    See it now? It’s got three dimensions and all.

    But look again.

    The shirts and towels, they’re really crinkled and interlocked two-dimensional surfaces.

    Wait.

    It’s one-dimensional yarn, knotted up tightly.

    You ok?

    Have another.

    I see it clearly now. It’s everything at once, one-two-three dimensional. Just depends on how closely you look at it.

    Amazing, don’t you think? What if our universe was just like that?


Universal Laundry Pile.
[Img Src: Clipartkid]

It doesn’t sound like a sober thought, but it’s got math behind it, so physicists think there might be something to it. Indeed the math piled up lately. They call it “dimensional reduction,” the idea that space on short distances has fewer than three dimensions – and it might help physicists to quantize gravity.

We’ve gotten used to space with additional dimensions, rolled up so small we can’t observe them. But how do you get rid of dimensions instead? To understand how it works we first have to clarify what we mean by “dimension.”

We normally think about dimensions of space by picturing lines which spread from a point. How quickly the lines dilute with the distance from the point tells us the “Hausdorff dimension” of a space. The faster the lines diverge from each other with distance, the larger the Hausdorff dimension. If you speak through a pipe, for example, sound waves spread less and your voice carries farther. The pipe hence has a lower Hausdorff dimension than our normal 3-dimensional office cubicles. It’s the Hausdorff dimension that we colloquially refer to as just dimension.

For dimensional reduction, however, it is not the Hausdorff dimension which is relevant, but instead the “spectral dimension,” which is a slightly different concept. We can calculate it by first getting rid of the “time” in “space-time” and making it into space (period). We then place a random walker at one point and measure the probability that it returns to the same point during its walk. The smaller the average return probability, the higher the probability the walker gets lost, and the higher the number of spectral dimensions.

Normally, for a non-quantum space, both notions of dimension are identical. However, add quantum mechanics and the spectral dimension at short distances goes down from four to two. The return probability for short walks becomes larger than expected, and the walker is less likely to get lost – this is what physicists mean by “dimensional reduction.”

The spectral dimension is not necessarily an integer; it can take on any value. This value starts at 4 when quantum effects can be neglected, and decreases when the walker’s sensitivity to quantum effects at shortest distances increases. Physicists therefore also like to say that the spectral dimension “runs,” meaning its value depends on the resolution at which space-time is probed.
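To make the definition concrete, here is a minimal numerical sketch (my own illustration, not taken from any of the quantum gravity papers discussed below): a Monte Carlo random walk on an ordinary flat lattice, from which the spectral dimension is read off via the scaling of the return probability, P(n) ~ n^(-d_s/2). On flat space it simply reproduces the usual dimension; the quantum gravity results below instead come from diffusion on the respective quantum space-times.

    import numpy as np

    rng = np.random.default_rng(0)

    def return_probability(dim, n_steps, n_walkers=500_000):
        """Fraction of walkers back at the origin after n_steps steps of a simple random walk."""
        positions = np.zeros((n_walkers, dim), dtype=np.int64)
        for _ in range(n_steps):
            axis = rng.integers(0, dim, size=n_walkers)   # pick an axis for each walker
            step = rng.choice([-1, 1], size=n_walkers)    # step forward or backward along it
            positions[np.arange(n_walkers), axis] += step
        return np.mean(np.all(positions == 0, axis=1))

    # Spectral dimension from the scaling P(n) ~ n^(-d_s/2), using two (even) walk lengths.
    for dim in (2, 3):
        n1, n2 = 50, 200
        p1, p2 = return_probability(dim, n1), return_probability(dim, n2)
        d_s = -2 * (np.log(p2) - np.log(p1)) / (np.log(n2) - np.log(n1))
        print(f"flat {dim}-dimensional lattice: spectral dimension is roughly {d_s:.2f}")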

Dimensional reduction is an attractive idea because quantizing gravity is considerably easier in lower dimensions, where the infinities that plague traditional attempts to quantize gravity go away. A theory with a reduced number of dimensions at shortest distances therefore has a much better chance of remaining consistent and so providing a meaningful theory for the quantum nature of space and time. Not so surprisingly, then, dimensional reduction has received quite some attention among physicists lately.

This strange property of quantum-spaces was first found in Causal Dynamical Triangulation (hep-th/0505113), an approach to quantum gravity that relies on approximating curved spaces by triangular patches. In this work, the researchers did a numerical simulation of a random walk in such a triangulated quantum-space, and found that the spectral dimension goes down from four to two. Or actually to 1.80 ± 0.25 if you want to know precisely.

Instead of doing numerical simulations, it is also possible to study the spectral dimension mathematically, which has since been done in various other approaches. For this, physicists exploit that the behavior of the random walk is governed by a differential equation – the diffusion equation – which depends on the curvature of space. In quantum gravity, the curvature has quantum fluctuations, and it is then the average value of the curvature that enters the diffusion equation. From the diffusion equation one then calculates the return probability for the random walk.
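For readers who want the formulas behind this (my notation; the cited papers use variants of the same), the heat-kernel version reads

    \partial_\sigma P(x,x';\sigma) = \Delta_x P(x,x';\sigma), \qquad
    P(\sigma) = \frac{1}{V}\int d^dx \,\sqrt{g}\, P(x,x;\sigma), \qquad
    d_s(\sigma) = -2\,\frac{d\ln P(\sigma)}{d\ln\sigma},

where \Delta is the Laplacian of the (averaged) space, P(x,x';\sigma) is the probability for the walker to diffuse from x' to x in fictitious time \sigma, and P(\sigma) is the averaged return probability.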

This way, physicists have inferred the spectral dimension also in Asymptotically Safe Gravity (hep-th/0508202), an approach to quantum gravity which relies on the resolution-dependence (the “running”) of quantum field theories. And they found the same drop from four to two spectral dimensions.

Another indication comes from Loop Quantum Gravity, where the scaling of the area operator with length changes at short distances. In this case it is somewhat questionable whether the notion of curvature makes sense at all on short distances. But ignoring this, one can construct the diffusion equation and finds that the spectral dimension drops from four to two (0812.2214).

And then there is Horava-Lifshitz gravity, yet another modification of gravity which some believe helps with quantizing it. Here too, dimensional reduction has been found (0902.3657).

It is difficult to visualize what is happening with the dimensionality of space if it goes down continuously, rather than in discrete steps as in the example with the laundry pile. Maybe a good way to picture it, as Calcagni, Eichhorn and Saueressig suggest, is to think of the quantum fluctuations of space-time hindering a particle’s random walk, thereby slowing it down. It wouldn’t have to be that way. Quantum fluctuations could also kick the particle around wildly, thereby increasing the spectral dimension rather than decreasing it. But that’s not what the math tells us.

One shouldn’t take this picture too seriously though, because we’re talking about a random walk in space, not space-time, and so it’s not a real physical process. Turning time into space might seem strange, but it is a common mathematical simplification which is often used for calculations in quantum theory. Still, it makes it difficult to interpret what is happening physically.

I find it intriguing that several different approaches to quantum gravity share a behavior like this. Maybe it is a general property of quantum space-time. But then, there are many different types of random walks, and while these different approaches to quantum gravity share a similar scaling behavior for the spectral dimension, they differ in the type of random walk that produces this scaling (1304.7247). So maybe the similarities are only superficial.

And of course this idea has no observational evidence speaking for it. Maybe never will. But one day, I’m sure, all the math will click into place and everything will make perfect sense. Meanwhile, have another.

[This article first appeared on Starts With A Bang under the title Dimensional Reduction: The Key To Physics' Greatest Mystery?]

Monday, July 18, 2016

Can black holes tunnel to white holes?

Tl;dr: Yes, but it’s unlikely.

If black holes attract your attention, white holes might blow your mind.

A white hole is a time-reversed black hole, an anti-collapse. While a black hole contains a region from which nothing can escape, a white hole contains a region to which nothing can fall in. Since the time-reversal of a solution of General Relativity is another solution, we know that white holes exist mathematically. But are they real?

Black holes were originally believed to merely be of mathematical interest, solutions that exist but cannot come into being in the natural world. As physicists understood more about General Relativity, however, the exact opposite turned out to be the case: It is hard to avoid black holes. They generically form from matter that collapses under its own gravitational pull. Today it is widely accepted that the black hole solutions of General Relativity describe to high accuracy astrophysical objects which we observe in the real universe.

The simplest black hole solutions in General Relativity are the Schwarzschild-solutions, or their generalizations to rotating and electrically charged black holes. These solutions however are not physically realistic because they are entirely time-independent, which means such black holes must have existed forever. Schwarzschild black holes, since they are time-reversal invariant, also necessarily come together with a white hole. Realistic black holes, on the contrary, which are formed from collapsing matter, do not have to be paired with white holes.

(Aside: Karl Schwarzschild was German. Schwarz means black, Schild means shield. Probably a family crest. It’s got nothing to do with children.)

But there are many things we don’t understand about black holes, most prominently how they handle information of the matter that falls in. Solving the black hole information loss problem requires that information finds a way out of the black hole, and this could be done for example by flipping a black hole over to a white hole. In this case the collapse would not complete, and instead the black hole would burst, releasing all that it had previously swallowed.

It’s an intriguing and simple option. This black-to-white-hole transition has been discussed in the literature for some while, recently by Rovelli and Vidotto in the Planck star idea. It’s also the subject of last week’s paper by Barcelo and Carballo-Rubio.

Is this a plausible solution to the black hole information loss problem?

It is certainly possible to join part of the black hole solution with part of the white hole solution. But doing this brings some problems.

The first problem is that at the junction the matter must get a kick that transfers it from one state into the other. This kick cannot be achieved by any known physics – we know this from the singularity theorems. There isn’t anything in known physics that can prevent a black hole from collapsing entirely once the horizon is formed. Whatever causes this kick hence needs to violate one of the energy conditions; it must be new physics.

Something like this could happen in a region with quantum gravitational effects. But this region is normally confined to deep inside the black hole. A transition to a white hole could therefore happen, but only if the black hole is very small, for example because it has evaporated for a long time.

But this isn’t the only problem.

Before we think about the stability of black holes, let us think about a simpler question. Why doesn’t dough unmix into eggs and flour and sugar neatly separated? Because that would require an entropy decrease. The unmixing can happen, but it’s exceedingly unlikely, hence we never see it.

A black hole too has entropy. It has indeed enormous entropy. It saturates the possible entropy that can be contained within a closed surface. If matter collapses to a black hole, that’s a very likely process to happen. Consequently, if you time-reverse this collapse, you get an exceedingly unlikely process. This solution exists, but it’s not going to happen unless the black hole is extremely tiny, close to the Planck scale.
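To put a number on “enormous” (my back-of-the-envelope, not from the post), the Bekenstein-Hawking entropy S = k_B A c^3 / (4 G hbar) of a solar-mass black hole works out to:

    # Bekenstein-Hawking entropy of a solar-mass black hole (rough SI-unit estimate).
    G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
    M_sun = 1.989e30                          # kg
    r_s = 2 * G * M_sun / c**2                # Schwarzschild radius, about 3 km
    A = 4 * 3.14159 * r_s**2                  # horizon area
    print(f"S/k_B is about {A * c**3 / (4 * G * hbar):.1e}")   # roughly 1e77

That is roughly 10^77 in units of Boltzmann’s constant, some twenty orders of magnitude more than the ordinary thermal entropy of the Sun itself, which is why undoing the collapse is such an absurdly unlikely process.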

It is possible that the white hole which a black hole supposedly turns into is not the exact time-reverse, but instead another solution that further increases entropy. But in that case I don’t know where this solution comes from. And even so I would suspect that the kick required at the junction must be extremely fine-tuned. And either way, it’s not a problem I’ve seen addressed in the literature. (If anybody knows a reference, please let me know.)

In a paper written for the 2016 Awards for Essays on Gravitation, Haggard and Rovelli make an argument in favor of their idea, but instead they just highlight the problem with it. They claim that small quantum fluctuations around the semi-classical limit which is General Relativity can add up over time, eventually resulting in large deviations. Yes, this can happen. But the probability that this happens is tiny, otherwise the semi-classical limit wouldn’t be the semi-classical limit.

The most likely thing to happen instead is that quantum fluctuations average out to give back the semi-classical limit. Hence, no white-hole transition. For the black-to-white-hole transition one would need quantum fluctuations to conspire together in just the right way. That’s possible. But it’s exceedingly unlikely.

In the other recent paper the authors find a surprisingly large transition rate for black to white holes. But they use a highly symmetrized configuration with very few degrees of freedom. This must vastly overestimate the probability for transition. It’s an interesting mathematical example, but it has very little to do with real black holes out there.

In summary: That black holes transition to white holes and in this way release information is an idea appealing because of its simplicity. But I remain unconvinced because I am missing a good argument demonstrating that such a process is likely to happen.

Tuesday, July 12, 2016

Pulsars could probe black hole horizons

The first antenna of MeerKAT,
a SKA precursor in South Africa.
[Image Source.]

It’s hard to see black holes – after all, their defining feature is that they swallow light. But it’s also hard to discourage scientists from trying to shed light on mysteries. In a recent paper, a group of researchers from Long Island University and Virginia Tech has proposed a new way to probe the near-horizon region of black holes and, potentially, quantum gravitational effects.

    Shining Light on Quantum Gravity with Pulsar-Black Hole Binaries
    John Estes, Michael Kavic, Matthew Lippert, John H. Simonetti
    arXiv:1607.00018 [hep-th]

The idea is simple and yet promising: Search for a binary system in which a pulsar and a black hole orbit around each other, then analyze the pulsar signal for unusual fluctuations.

A pulsar is a rapidly rotating neutron star that emits a focused beam of electromagnetic radiation. This beam points in the direction of the poles of the magnetic field, which is normally not aligned with the neutron star’s axis of rotation. The beam therefore sweeps around with a regular period like a lighthouse beacon. If Earth is located within the beam’s reach, our telescopes receive a pulse every time the beam points in our direction.

Pulsar timing can be extremely precise. We know pulsars that have flashed every few milliseconds for decades, with a timing precision of a few microseconds. This high regularity allows astrophysicists to search for signals which might affect the timing. Fluctuations of space-time itself, for example, would increase the pulsar-timing uncertainty, a method that has been used to derive constraints on the stochastic gravitational wave background. And if a pulsar is in a binary system with a black hole, the pulsar’s signal might scrape by the black hole and thus encode information about the horizon, which we can catch on Earth.


No such pulsar-black hole binaries are known to date. But upcoming experiments like eLISA and the Square Kilometer Array (SKA) will almost certainly detect new pulsars. In their paper, the authors estimate that SKA might observe up to 100 new pulsar-black hole binaries, and they put the probability that a newly discovered system would have a suitable orientation at roughly one in a hundred. If they are right, the SKA would have a good chance to find a promising binary.
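To see what “a good chance” amounts to (my arithmetic, not the authors’): if each of roughly 100 binaries independently has a one-in-a-hundred chance of the right orientation, then

    # Chance that at least one of ~100 pulsar-black hole binaries has a suitable orientation,
    # assuming (as a rough independence estimate) a 1-in-100 probability for each.
    p_at_least_one = 1 - 0.99 ** 100
    print(f"P(at least one suitable system) is about {p_at_least_one:.2f}")   # about 0.63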

Much of the paper is dedicated to arguing that the timing accuracy of such a binary pulsar could carry information about quantum gravitational effects. This is not impossible, but speculative. Quantum gravitational effects are normally expected to be strong towards the black hole singularity, i.e., well inside the black hole and hidden from observation. Naïve dimensional estimates reveal that quantum gravity should be unobservably small in the horizon region.
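Here is one way to make that dimensional estimate concrete (my own sketch, not from the paper): compare the curvature scale at the horizon of a stellar-mass black hole with the Planck scale.

    # Naive suppression factor for quantum gravity effects at the horizon: the curvature
    # radius there is set by the Schwarzschild radius r_s, so effects are naively
    # expected to scale like (Planck length / r_s)^2.
    G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
    M = 10 * 1.989e30                         # a 10-solar-mass black hole, in kg
    r_s = 2 * G * M / c**2                    # about 30 km
    l_P = (hbar * G / c**3) ** 0.5            # Planck length, about 1.6e-35 m
    print(f"(l_P / r_s)^2 is about {(l_P / r_s)**2:.0e}")   # about 3e-79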

However, this argument has recently been questioned in the aftermath of the firewall controversy surrounding black holes, because one solution to the black hole firewall paradox is that quantum gravitational effects can stretch over much longer distances than the dimensional estimates lead one to expect. Steve Giddings has long been a proponent of such long-distance fluctuations, and scenarios like black hole fuzzballs, or Dvali’s Bose-Einstein Computers also lead to horizon-scale deviations from general relativity. It is hence something that one should definitely look for.

Previous proposals to test the near-horizon geometry were based on measurements of gravitational waves from merger events or the black hole shadow, each of which could reveal deviations from general relativity. However, so far these were quite general ideas lacking quantitative estimates. To my knowledge, this paper is the first to demonstrate that it’s technologically feasible.

Michael Kavic, one of the authors of this paper, will attend our September conference on “Experimental Search for Quantum Gravity.” We’re still planning to livestream the talks, so stay tuned and you’ll get a chance to listen in.

Monday, June 06, 2016

Dear Dr B: Why not string theory?

[I got this question in reply to my last week’s book review of Why String Theory? by Joseph Conlon.]

Dear Marco:

Because we might be wasting time and money and, ultimately, risk that progress stalls entirely.

In contrast to many of my colleagues I do not think that trying to find a quantum theory of gravity is an endeavor purely for the sake of knowledge. Instead, it seems likely to me that finding out what are the quantum properties of space and time will further our understanding of quantum theory in general. And since that theory underlies all modern technology, this is research which bears relevance for applications. Not in ten years and not in 50 years, but maybe in 100 or 500 years.

So far, string theory has scored in two areas. First, it has proved interesting for mathematicians. But I’m not one to easily get floored by pretty theorems – I care about math only to the extent that it’s useful to explain the world. Second, string theory has proved useful for pushing ahead with the lesser understood aspects of quantum field theories. This seems a fruitful avenue and is certainly something to continue. However, this has nothing to do with string theory as a theory of quantum gravity and a unification of the fundamental interactions.

As far as quantum gravity is concerned, string theorists’ main argument seems to be “Well, can you come up with something better?” Then of course if someone answers this question with “Yes” they would never agree that something else might possibly be better. And why would they – there’s no evidence forcing them one way or the other.

I don’t see what one learns from discussing which theory is “better” based on philosophical or aesthetic criteria. That’s why I decided to stay out of this and instead work on quantum gravity phenomenology. As far as testability is concerned all existing approaches to quantum gravity do equally badly, and so I’m equally unconvinced by all of them. It is somewhat of a mystery to me why string theory has become so dominant.

String theorists are very proud of having a microcanonical explanation for the black hole entropy. But we don’t know whether that’s actually a correct description of nature, since nobody has ever seen a black hole evaporate. In fact one could read the firewall problem as a demonstration that indeed this cannot be a correct description of nature. Therefore, this calculation leaves me utterly unimpressed.

But let me be clear here. Nobody (at least nobody whose opinion matters) says that string theory is a research program that should just be discontinued. The question is instead one of balance – does the promise justify the amount of funding spent on it? And the answer to this question is almost certainly no.

The reason is that academia is currently organized so that it invites communal reinforcement, prevents researchers from leaving fields whose promise is dwindling, and supports a rich-get-richer trend. That institutional assessments use the quantity of papers and citation counts as a proxy for quality creates a bonus for fields in which papers can be cranked out quickly. Hence it isn’t surprising that an area whose mathematics its own practitioners frequently describe as “rich” would flourish. What does mathematical “richness” tell us about the use of a theory in the description of nature? I am not aware of any known relation.

In his book Why String Theory?, Conlon tells the history of the discipline from a string theorist’s perspective. As a counterpoint, let me tell you how a cynical outsider might tell this story:

String theory was originally conceived as a theory of the strong nuclear force, but it was soon discovered that quantum chromodynamics was more up to the task. After noting that string theory contains a particle that could be identified as the graviton, it was reconsidered as a theory of quantum gravity.

It turned out however that string theory only makes sense in a 25-dimensional space. To make that compatible with observations, 22 of the dimensions were moved out of sight by rolling them up (compactifying them) to a radius so small they couldn’t be observationally probed.

Next it was noted that the theory also needs supersymmetry. This brings down the number of space dimensions to 9, but also brings a new problem: The world, unfortunately, doesn’t seem to be supersymmetric. Hence, it was postulated that supersymmetry is broken at an energy scale so high we wouldn’t see the symmetry. Even with that problem fixed, however, it was quickly noticed that the superpartners, even when moved out of direct reach, would still generically induce flavor-changing neutral currents and baryon-number violating interactions that, among other things, would lead to proton decay and so be in conflict with observation. Thus, theorists invented R-parity to fix that problem.

The next problem that appeared was that the cosmological constant turned out to be positive instead of zero or negative. While a negative cosmological constant would have been easy to accommodate, string theorists didn’t know what to do with a positive one. But it only took some years to come up with an idea to make that happen too.

String theory was hoped to be a unique completion of the standard model including general relativity. Instead it slowly became clear that there is a huge number of different ways to get rid of the additional dimensions, each of which leads to a different theory at low energies. String theorists are now trying to deal with that problem by inventing some probability measure according to which the standard model is at least a probable occurrence in string theory.

So, you asked, why not string theory? Because it’s an approach that has been fixed over and over again to make it compatible with conflicting observations. Every time that’s been done, string theorists became more convinced of their ideas. And every time they did this, I became more convinced they are merely building a mathematical toy universe.

String theorists of course deny that they are influenced by anything but objective assessment. One noteworthy exception is Joe Polchinski who has considered that social effects play a role, but just came to the conclusion that they aren’t relevant. I think it speaks for his intellectual sincerity that he at least considered it.

At the Munich workshop last December, David Gross (in an exchange with Carlo Rovelli) explained that funding decisions have no influence on whether theoretical physicists chose to work in one field or the other. Well, that’s easy to say if you’re a Nobel Prize winner.

Conlon in his book provides “evidence” that social bias plays no role by explaining that there was only one string theorist on a panel that (positively) evaluated one of his grants. To begin with, anecdotes can’t replace data, and there is ample evidence that social biases are common human traits, so by default scientists should be assumed susceptible. But even considering his anecdote, I’m not sure why Conlon thinks leaving decisions to non-experts limits bias. My expectation would be that it amplifies bias, because it requires drawing on simplified criteria, like the number of papers published and how often they’ve been cited. And what does that depend on? On how many people there are in the field and how many peers favorably reviewed papers on the topic of your work.

I am listing these examples to demonstrate that it is quite common for theoretical physicists (not string theorists in particular) to dismiss the mere possibility that social dynamics influences research decisions.

How large a role do social dynamics and cognitive biases play, and how much do they slow down progress on the foundations of physics? I can’t tell you. But even though I can’t tell you how much faster progress could be, I am sure it’s slowed down. I can tell that in the same way that I can tell you diesel in Germany is sold under market value even though I don’t know the market value. I know that because it’s subsidized. And in the same way I can tell that string theory is overpopulated and its promise overestimated, because it’s an idea that benefits from biases which humans demonstrably possess. But I can’t tell you what its real value would be.

The reproduction crisis in the life sciences and psychology has spurred a debate about better measures of statistical significance. Experimentalists go to great lengths to put in place all kinds of standardized procedures so they don’t draw the wrong conclusions from what their apparatuses measure. In theory development, we have our own crisis, but nobody talks about it. The apparatuses we use are our own brains, and the biases we should guard against are cognitive and social biases, communal reinforcement, sunk cost fallacy, wishful thinking, and status-quo bias, to mention just the most common ones. These however are presently entirely unaccounted for. Is this the reason why string theory has gathered so many followers?

Some days I side with Polchinski and Gross and don’t think it makes that much of a difference. It really is an interesting topic and it’s promising. On other days I think we’ve wasted 30 years studying bizarre aspects of a theory that doesn’t bring us any closer to understanding quantum gravity, and it’s nothing but an empty bubble of disappointed expectations. Most days I have to admit I just don’t know.

Why not string theory? Because enough is enough.

Thanks for an interesting question.