
Thursday, August 03, 2017

Self-tuning brings wireless power closer to reality

Cables under my desk.
One of the unlikelier fights I picked while blogging was with an MIT group that aimed to wirelessly power devices – by tunneling:
“If you bring another resonant object with the same frequency close enough to these tails then it turns out that the energy can tunnel from one object to another,” said Professor Soljacic.
They had proposed a new method for wireless power transfer using two electric circuits in magnetic resonance. But there’s no tunneling in such a resonance. Tunneling is a quantum effect. Single particles tunnel. Sometimes. But kilowatts definitely don’t.

I reached out to the professor’s coauthor, Aristeidis Karalis, who told me, even more bizarrely: “The energy stays in the system and does not leak out. It just jumps from one to the other back and forth.”

I had to go and calculate the Poynting vector to make clear the energy is – as always – transmitted from one point to another by going through all points in between. It doesn’t tunnel, and it doesn’t jump either. For the powering device the MIT group envisioned, with its resonant coils, the energy flow is focused between the coils’ centers.

The difference between “jumping” and “flowing” energy is more than just words. Once you know that energy is flowing, you also know that if you’re in its way you might get some of it. And the more focused the energy, the higher the possible damage. This means large devices have to be close together and the energy must be spread out over large surfaces to comply with safety standards.

Back then, I did some estimates. If you want to transfer, say, 1 Watt, and you distribute it over a coil with radius 30 cm, you end up with a density of roughly 1 mW/cm². That already exceeds the safety limit (in the frequency range 30-300 MHz). And that’s leaving aside that there usually must be much more energy in the resonance field than what’s actually transmitted. And 30 cm isn’t exactly handy. In summary, it’ll work – but it’s not practical, and it won’t charge the laptop without roasting what gets in the way.
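
For the record, here is that estimate as a little back-of-the-envelope script. The numbers are illustrative: spreading the power uniformly over the full coil area gives a few tenths of a mW/cm², and concentrating it more, as the focused flow between the coils does, pushes it toward the 1 mW/cm² quoted above. The 0.2 mW/cm² reference level is my assumed value; exact limits depend on the standard.

    import math

    # Toy estimate: spread 1 W of transferred power uniformly over a coil
    # of radius 30 cm and compare with an assumed exposure reference level.
    power_W = 1.0
    radius_cm = 30.0
    area_cm2 = math.pi * radius_cm**2            # ~2800 cm^2

    density_mW_cm2 = power_W * 1e3 / area_cm2    # a few tenths of a mW/cm^2
    limit_mW_cm2 = 0.2                           # assumed reference level

    print(f"power density  : {density_mW_cm2:.2f} mW/cm^2")
    print(f"reference level: {limit_mW_cm2:.2f} mW/cm^2")
    print("exceeds limit  :", density_mW_cm2 > limit_mW_cm2)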

The MIT guys meanwhile founded a company, Witricity, and dropped the tunneling tale.

Another problem with using resonance for wireless power is that the efficiency depends on the distance between the circuits. It doesn’t work well when they’re too far, and not when they’re too close either. That’s not great for real-world applications.

But in a recent paper published in Nature, a group from Stanford put forward a solution to this problem. And even though I’m not too enchanted by transferring power by magnetic resonance, it is a really neat idea:
Usually the resonance between two circuits is designed, meaning the receiver’s and sender’s frequencies are tuned to work together. But in the new paper, the authors instead let the frequency of the sender range freely – they merely feed it energy. They then show that the coupled system will automatically tune to a resonance frequency at which efficiency is maximal.

The maximal efficiency they reach is the same as with the fixed-frequency circuits. But it works better for shorter distances. While the usual setting is inefficient both at too short and too long distances, the self-tuned system has a stable efficiency up to some distance, and then decays. This makes the new arrangement much more useful in practice.
Efficiency of energy transfer as a function of distance between the coils (schematic). Blue curve is for the usual setting with pre-fixed frequency. Red curve is for the self-tuned circuits.
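
To see where the blue curve’s shape comes from, here is a toy coupled-mode model of two resonators driven at a fixed frequency. The coupling κ stands in for the (inverse) distance; all loss rates are made up for illustration, and none of this is taken from the Nature paper.

    import numpy as np

    # Two resonators at the same frequency, driven at that frequency with
    # fixed amplitude. Gamma1/Gamma2 are the loss rates of sender and
    # receiver (the receiver loss includes the useful load); kappa is the
    # coupling, which falls off with the distance between the coils.
    Gamma1, Gamma2, Gamma_load = 1.0, 1.0, 0.8
    kappa = np.linspace(0.01, 10, 500)

    # Steady state at zero detuning: A2 ~ kappa / (Gamma1*Gamma2 + kappa^2),
    # power delivered to the load ~ Gamma_load * |A2|^2.
    P_fixed = Gamma_load * kappa**2 / (Gamma1 * Gamma2 + kappa**2) ** 2

    print(f"critical coupling : kappa = {np.sqrt(Gamma1 * Gamma2):.1f}")
    print(f"power at kappa=0.1: {P_fixed[np.argmin(np.abs(kappa - 0.1))]:.3f}")
    print(f"power at the peak : {P_fixed.max():.3f}")
    print(f"power at kappa=10 : {P_fixed[-1]:.3f}")
    # The fixed-frequency transfer peaks at critical coupling and drops on
    # both sides (the blue curve); the self-tuned scheme is reported to hold
    # the peak value for all couplings above the loss rate (the red curve).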

The group didn’t only calculate this, they also did an experiment to show it works. One limitation of the present setup though is that it works only in one direction, so still not too practical. But it’s a big step forward.

Personally, I’m more optimistic about using ultrasound for wireless power transfer than about the magnetic resonance because ultrasound presently reaches larger distances. Both technologies, however, are still very much in their infancy, so hard to tell which one will win out.

(Note added: Ultrasound not looking too convincing either, ht Tim, see comments for more.)

Let me not forget to mention that in an ingenious paper which was completely lost on the world I showed you don’t need to transfer the total energy to the receiver. You only need to send the information necessary to decrease entropy in the receiver’s surrounding, then it can draw energy from the environment.

Unfortunately, I could think of how to do this only for a few atoms at a time. And, needless to say, I didn’t do any experiment – I’m a theoretician after all. While I’m sure in a few thousand years everyone will use my groundbreaking insight, until then, it’s coils or ultrasound or good, old cables.

Friday, July 28, 2017

New paper claims string theory can be tested with Bose-Einstein-Condensates

Fluorescence image of a Bose-Einstein condensate. Image Credits: Stefan Kuhr and Immanuel Bloch, MPQ
String theory is infamously detached from experiment. But in a new paper, a group from Mexico put forward a proposal to change that:
    String theory phenomenology and quantum many–body systems
    Sergio Gutiérrez, Abel Camacho, Héctor Hernández
    arXiv:1707.07757 [gr-qc]
Let me be clear up front: they don’t want to test string theory itself, but the presence of additional dimensions of space, which is a prediction of string theory.

In the paper, the authors calculate how additional space-like dimensions affect a condensate of ultra-cold atoms, known as a Bose-Einstein condensate. At such low temperatures, the atoms transition to a state in which their quantum wave-functions act as one and the system begins to display quantum effects, such as interference, throughout.

In the presence of extra-dimensions, every particle’s wave-function has higher harmonics because the extra-dimensions have to close up, in the simplest case like circles. The particles’ wave-functions have to fit into the extra dimensions, meaning their wave-lengths must be integer fractions of the circumference.

Each of the additional dimensions has a radius of about a Planck length, which is 10⁻³⁵ m, or 15 orders of magnitude smaller than what even the LHC can probe. To excite these higher harmonics, you correspondingly need an energy of 10¹⁵ TeV, or 15 orders of magnitude higher than what the LHC can produce.
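
For the skeptical, the energy of the first higher harmonic follows from nothing more than E ~ ħc/R. Here is a quick check, with the Planck length and the LHC energy rounded; it lands at roughly the 15 orders of magnitude mentioned above.

    # Energy needed to excite the first Kaluza-Klein harmonic, E ~ hbar*c/R,
    # for an extra dimension of about a Planck length, compared to the LHC.
    hbar_c = 1.97327e-7        # eV * m
    R_planck = 1.616e-35       # m
    E_LHC_TeV = 13.0           # LHC collision energy in TeV

    E_KK_TeV = hbar_c / R_planck / 1e12    # eV -> TeV
    print(f"first KK excitation: {E_KK_TeV:.1e} TeV")
    print(f"ratio to LHC       : {E_KK_TeV / E_LHC_TeV:.0e}")   # ~15 orders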

How do the extra-dimensions of string theory affect the ultra-cold condensate? They don’t. That’s because at those low temperatures there is no way you can excite any of the higher harmonics. Heck, even the total energy of the condensates presently used isn’t high enough. There’s a reason string theory is famously detached from experiment – because it’s a damned high energy you must reach to see stringy effects!

So what’s the proposal in the paper then? There isn’t one. They simply ignore that the higher harmonics can’t be excited and make a calculation. Then they estimate that one needs a condensate of about a thousand particles to measure a discontinuity in the specific heat, which depends on the number of extra-dimensions.

It’s probably correct that this discontinuity depends on the number of extra-dimensions. Unfortunately, the authors don’t go back and check what mass per particle in the condensate is needed to make this work. I’ve put in the numbers and get something like a million tons. That gigantic mass becomes necessary because it has to combine with the minuscule temperature of about a nano-Kelvin to have a geometric mean that exceeds the Planck mass.

In summary: Sorry, but nobody’s going to test string theory with Bose-Einstein-Condensates.

Wednesday, July 19, 2017

Penrose claims LIGO noise is evidence for Cyclic Cosmology

Noise is the physicists’ biggest enemy. Unless you are a theorist whose pet idea masquerades as noise. Then you are best friends with noise. Like Roger Penrose.
    Correlated "noise" in LIGO gravitational wave signals: an implication of Conformal Cyclic Cosmology
    Roger Penrose
    arXiv:1707.04169 [gr-qc]

Roger Penrose made his name with the Penrose-Hawking theorems and twistor theory. He is also well-known for writing books with very many pages, most recently “Fashion, Faith, and Fantasy in the New Physics of the Universe.”

One man’s noise is another man’s signal.
Penrose doesn’t like most of what’s currently in fashion, but believes that human consciousness can’t be explained by known physics and that the universe is cyclically reborn. This cyclic cosmology, so his recent claim, gives rise to correlations in the LIGO noise – just like what’s been observed.

The LIGO experiment consists of two interferometers in the USA, separated by about 3,000 km. A gravitational wave signal should pass through both detectors with a delay determined by the time it takes the gravitational wave to sweep from one US-coast to the other. This delay is typically of the order of 10ms, but its exact value depends on where the waves came from.
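
The 10 ms is just the light travel time across the detectors’ separation; a one-liner, but worth writing down since a delay of this size reappears below (the 3,000 km is the rounded value from above).

    # Maximal delay between the two LIGO detectors: the light travel time
    # across their separation. The actual delay depends on the direction
    # the gravitational wave comes from.
    separation_m = 3.0e6       # ~3,000 km
    c = 3.0e8                  # m/s
    print(f"maximal delay: {separation_m / c * 1e3:.0f} ms")   # ~10 ms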

The correlation between the two LIGO detectors is one of the most important criteria used by the collaboration to tell noise from signal. The noise itself, however, isn’t entirely uncorrelated. Some sources of the correlations are known, but some are not. This is not unusual – understanding the detector is as much part of a new experiment as is the measurement itself. The LIGO collaboration, needless to say, thinks everything is under control and the correlations are adequately taken care of in their signal analysis.

A Danish group of researchers begs to differ. They recently published a criticism on the arXiv in which they complain that after subtracting the signal of the first gravitational wave event, correlations remain at the same time-delay as the signal. That clearly shouldn’t happen. First and foremost it would demonstrate a sloppy signal extraction by the LIGO collaboration.

A reply to the Danes’ criticism by Ian Harry from the LIGO collaboration quickly appeared on Sean Carroll’s blog. Ian pointed out some supposed mistakes in the Danish group’s paper. Turns out though, the mistake was on his side. Once corrected, Harry’s analysis reproduces the correlations which shouldn’t be there. Bummer.

Ian Harry did not respond to my requests for comment. Neither did Alessandra Buonanno from the LIGO collaboration, who was also acknowledged by the Danish group. David Shoemaker, the current LIGO spokesperson, let me know he has “full confidence” in the results, and also, the collaboration is working on a reply, which might however take several months to appear. In other words, go away, there’s nothing to see here.

But while we wait for the LIGO response, speculations abound as to what might cause the supposed correlation. Penrose beat everyone to it with an explanation, even Craig Hogan, who has run his own experiment looking for correlated noise in interferometers, and whom I was counting on.

Penrose’s cyclic cosmology works by gluing the big bang together with what we usually think of as the end of the universe – an infinite accelerated expansion into nothingness. Penrose conjectures that both phases – the beginning and the end – are conformally invariant, which means they possess a symmetry under a stretching of distance scales. Then he identifies the end of the universe with the beginning of a new one, creating a cycle that repeats indefinitely. In his theory, what we think of as inflation – the accelerated expansion in the early universe – becomes the final phase of acceleration in the cycle preceding our own.

Problem is, the universe as we presently see it is not conformally invariant. What screws up conformal invariance is that particles have masses, and these masses also set a scale. Hence, Penrose has to assume that eventually all particle masses fade away so that conformal invariance is restored.

There’s another problem. Since Penrose’s conformal cyclic cosmology has no inflation, it also lacks a mechanism to create temperature fluctuations in the cosmic microwave background (CMB). Luckily, however, the theory also gives rise to a new scalar particle that couples only gravitationally and supplies the missing phenomenology. Penrose named it “erebon,” after Erebos, the ancient Greek God of Darkness.

Erebos, the God of Darkness, according to YouTube.
The erebons have a mass of about 10⁻⁵ gram because “what else could it be,” and they have a lifetime determined by the cosmological constant, presumably also because what else could it be. (Aside: Note that these are naturalness arguments.) The erebons make up dark matter and their decay causes gravitational waves that seed the CMB temperature fluctuations.

Since erebons are created at the beginning of each cycle and decay away through it, they also create a gravitational wave background. Penrose then argues that a gravitational wave signal from a binary black hole merger – like the ones LIGO has observed – should be accompanied by noise-like signals from erebons that decayed at the same time in the same galaxy. Just that this noise-like contribution would be correlated with the same time-difference as the merger signal.

In his paper, Penrose does not analyze the details of his proposal. He merely writes:
“Clearly the proposal that I am putting forward here makes many testable predictions, and it should not be hard to disprove it if it is wrong.”
In my impression, this is a sketchy idea and I doubt it will work. I don’t have a major problem with inventing some particle to make up dark matter, but I have a hard time seeing how the decay of a Planck-mass particle can give rise to a signal comparable in strength to a black hole merger (or why several of them would add up exactly for a larger signal).

Even taking this at face value, the decay signals wouldn’t come from only one galaxy but from all galaxies, so the noise should be correlated all over and at pretty much all time-scales – not just at the 12 ms delay the Danish group has claimed. Worst of all, the dominant part of the signal would come from our own galaxy, so why haven’t we seen it already?

In summary, one can’t blame Penrose for being fashionable. But I don’t think that erebons will be added to the list of LIGO’s discoveries.

Tuesday, June 20, 2017

If tensions in cosmological data are not measurement problems, they probably mean dark energy changes

Galaxy pumpkin. Src: The Swell Designer
According to physics, the universe and everything in it can be explained by but a handful of equations. They’re difficult equations, all right, but their simplest feature is also the most mysterious one. The equations contain a few dozen parameters that are – for all we presently know – unchanging, and yet these numbers determine everything about the world we inhabit.

Physicists have spent much brain-power on the question where these numbers come from, whether they could have taken any other values than the ones we observe, and whether exploring their origin is even in the realm of science.

One of the key questions when it comes to the parameters is whether they are really constant, or whether they are time-dependent. If they vary, then their time-dependence would have to be determined by yet another equation, and that would change the whole story that we currently tell about our universe.

The best known of the fundamental parameters that dictate how the universe behaves is the cosmological constant. It is what causes the universe’s expansion to accelerate. The cosmological constant is usually assumed to be, well, constant. If it isn’t, it is more generally referred to as ‘dark energy.’ If our current theories for the cosmos are correct, our universe will expand forever into a cold and dark future.

The value of the cosmological constant is infamously the worst prediction ever made using quantum field theory; the math says it should be 120 orders of magnitude larger than what we observe. But that the cosmological constant has a small non-zero value is extremely well established by measurement, well enough that a Nobel Prize was awarded for its discovery in 2011.
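
In case you want to see where the infamous mismatch comes from, here is the naive estimate: compare a vacuum energy density with a Planck-scale cutoff to the observed value. The exact exponent depends on conventions and on where you put the cutoff; this sketch lands at roughly 10^123, in the same ballpark as the usually quoted 120 orders of magnitude.

    import math

    # Naive vacuum energy density with a Planck-scale cutoff, ~ E_p / l_p^3,
    # versus the observed dark-energy density (both are rough values).
    E_planck_J = 1.956e9       # Planck energy in joule
    l_planck_m = 1.616e-35     # Planck length in meter
    rho_planck = E_planck_J / l_planck_m**3    # ~5e113 J/m^3

    rho_observed = 6e-10       # J/m^3, approximate observed value
    print(f"mismatch: about 10^{math.log10(rho_planck / rho_observed):.0f}")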

The Nobel Prize winners Perlmutter, Schmidt, and Riess, measured the expansion rate of the universe, encoded in the Hubble parameter, by looking at supernovae distributed over various distances. They concluded that the universe is not only expanding, but is expanding at an increasing rate – a behavior that can only be explained by a nonzero cosmological constant.

It is controversial though exactly how fast the expansion is today, or how large the current value of the Hubble constant, H0, is. There are different ways to measure this constant, and physicists have known for a few years that the different measurements give different results. This tension in the data is difficult to explain, and it has so-far remained unresolved.

One way to determine the Hubble constant is by using the cosmic microwave background (CMB). The small temperature fluctuations in the CMB spectrum encode the distribution of plasma in the early universe and the changes of the radiation since. From fitting the spectrum with the parameters that determine the expansion of the universe, physicists get a value for the Hubble constant. The most accurate of such measurements is currently that from the Planck satellite.

Another way to determine the Hubble constant is to deduce the expansion of the universe from the redshift of the light from distant sources. This is the way the Nobel-Prize winners made their discovery, and the precision of this method has since been improved. These two ways to determine the Hubble constant give results that differ with a statistical significance of 3.4 σ. That’s a probability of less than one in a thousand to be due to random data fluctuations.
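
If you want to reproduce the 3.4 σ, here is the arithmetic, using the central values and errors that were commonly quoted around the time of writing (my insertion, both in km/s/Mpc; treat them as illustrative).

    from math import erf, sqrt

    # Planck CMB fit vs. local distance-ladder measurement of H0.
    H0_cmb,   err_cmb   = 66.93, 0.62
    H0_local, err_local = 73.24, 1.74

    tension = abs(H0_local - H0_cmb) / sqrt(err_cmb**2 + err_local**2)
    p_value = 1 - erf(tension / sqrt(2))   # two-sided Gaussian probability

    print(f"tension: {tension:.1f} sigma")   # ~3.4
    print(f"chance : {p_value:.1e}")         # less than one in a thousand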

Various explanations for this have since been proposed. One possibility is that it’s a systematic error in the measurement, most likely in the CMB measurement from the Planck mission. There are reasons to be skeptical because the tension goes away when the finer structures (the large multipole moments) of the data are omitted. For many astrophysicists, this is an indicator that something’s amiss either with the Planck measurement or the data analysis.

Or maybe it’s a real effect. In this case, several modifications of the standard cosmological model have been put forward. They range from additional neutrinos to massive gravitons to changes in the cosmological constant.

That the cosmological constant changes from one place to the next is not an appealing option because this tends to screw up the CMB spectrum too much. But the currently most popular explanation for the data tension seems to be that the cosmological constant changes in time.

A group of researchers from Spain, for example, claims that they have a stunning 4.1 σ preference for a time-dependent cosmological constant over an actually constant one.

This claim seems to have been widely ignored, and indeed one should be cautious. They test for a very specific time-dependence, and their statistical analysis does not account for other parameterizations they might have previously tried. (The theoretical physicist’s variant of post-selection bias.)

Moreover, they fit their model not only to the two above mentioned datasets, but to a whole bunch of others at the same time. This makes it hard to tell what is the reason their model seems to work better. A couple of cosmologists who I asked why this group’s remarkable results have been ignored complained that the data analysis is opaque.

Be that as it may, just when I put the Spaniards’ paper away, I saw another paper that supported their claim with an entirely independent study based on weak gravitational lensing.

Weak gravitational lensing happens when a foreground galaxy distorts the images of farther away galaxies. The qualifier ‘weak’ sets this effect apart from strong lensing, which is caused by massive nearby objects – such as black holes – and deforms point-like sources to partial rings. Weak gravitational lensing, on the other hand, is not as easily recognizable and must be inferred from the statistical distribution of the shapes of galaxies.

The Kilo Degree Survey (KiDS) has gathered and analyzed weak lensing data from about 15 million distant galaxies. While their measurements are not sensitive to the expansion of the universe, they are sensitive to the density of dark energy, which affects the way light travels from the galaxies towards us. This density is encoded in a cosmological parameter imaginatively named σ8. Their data, too, is in conflict with the CMB data from the Planck satellite.

The members of the KiDS collaboration have tried out which changes to the cosmological standard model work best to ease the tension in the data. Intriguingly, it turns out that ahead of all explanations the one that works best is that the cosmological constant changes with time. The change is such that the effects of accelerated expansion are becoming more pronounced, not less.

In summary, it seems increasingly unlikely the tension in the cosmological data is due to chance. Cosmologists are cautious and most of them bet on a systematic problem with the Planck data. However, if the Planck measurement receives independent confirmation, the next best bet is on time-dependent dark energy. It wouldn’t make our future any brighter though. The universe would still expand forever into cold darkness.


[This article previously appeared on Starts With A Bang.]

Update June 21: Corrected several sentences to address comments below.

Wednesday, June 14, 2017

What’s new in high energy physics? Clockworks.

Clockworks. [Img via dwan1509].
High energy physics has phases. I don’t mean phases like matter has – solid, liquid, gaseous and so on. I mean phases like cranky toddlers have: One week they eat nothing but noodles, the next week anything as long as it’s white, then toast with butter but it must be cut into triangles.

High energy physics is like this. Twenty years ago, it was extra dimensions, then we had micro black holes, unparticles, little Higgses – and the list goes on.

But there hasn’t been a big, new trend since the LHC falsified everything that was falsifiable. It’s like particle physics stepped over the edge of a cliff but hasn’t looked down and now just walks on nothing.

The best candidate for a new trend that I saw in the past years is the “clockwork mechanism,” though the idea just took a blow and I’m not sure it’ll go much farther.

The origins of the model go back to late 2015, when the term “clockwork mechanism” was coined by Kaplan and Rattazzi, though Cho and Im pursued a similar idea and published it at almost the same time. In August 2016, clockworks were picked up by Giudice and McCullough, who advertised the model as “a useful tool for model-building applications” that “offers a solution to the Higgs naturalness problem.”

Gears. Img Src: Giphy.
The Higgs naturalness problem, to remind you, is that the mass of the Higgs receives large quantum corrections. The Higgs is the only particle in the standard model that suffers from this problem because it’s the only scalar. These quantum corrections can be cancelled by subtracting a constant so that the remainder fits the observed value, but then the constant would have to be very finely tuned. Most particle physicists think that this is too much of a coincidence and hence search for other explanations.

Before the LHC turned on, the most popular solution to the Higgs naturalness issue was that some new physics would show up in the energy range comparable to the Higgs mass. We now know, however, that there’s no new physics nearby, and so the Higgs mass has remained unnatural.

Clockworks are a mechanism to create very small numbers in a “natural” way, that is, from numbers that are close to 1. This can be done by copying a field multiple times and then coupling the copies to their nearest neighbors so that they form a chain. This is the “clockwork,” and it is assumed to have couplings with values close to 1 which are, however, asymmetric among the chain neighbors.

The clockwork’s chain of fields has eigenmodes that can be obtained by diagonalizing the mass matrix. These modes are the “gears” of the clockwork and they contain one massless particle.

The important feature of the clockwork is now that this massless particle’s mode has a coupling that scales with the clockwork’s coupling taken to the N-th power, where N is the number of clockwork gears. This means even if the original clockwork coupling was only a little smaller than 1, the coupling of the lightest clockwork mode becomes small very fast when the clockwork grows.

Thus, clockworks are basically a complicated way to make a number of order 1 small by exponentiating it.
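
If you want to see the exponential suppression appear, here is a minimal scalar-clockwork sketch: build the mass matrix of a chain with nearest-neighbor terms (φ_j − q φ_{j+1})², diagonalize it, and look at how strongly the massless mode overlaps with the last site. I use q = 2 and N = 20 for visibility; the construction follows the general idea described above, not any particular paper’s conventions.

    import numpy as np

    # Scalar clockwork chain: N+1 fields with mass terms sum_j (phi_j - q*phi_{j+1})^2.
    # The massless mode is localized at one end of the chain; its component
    # at the other end is suppressed by q**(-N).
    q, N = 2.0, 20

    M2 = np.zeros((N + 1, N + 1))
    for j in range(N):
        M2[j, j]         += 1.0
        M2[j + 1, j + 1] += q**2
        M2[j, j + 1]     += -q
        M2[j + 1, j]     += -q

    eigvals, eigvecs = np.linalg.eigh(M2)   # eigenvalues in ascending order
    zero_mode = eigvecs[:, 0]

    print(f"smallest eigenvalue : {eigvals[0]:.1e}")                      # ~0
    print(f"end/start component : {abs(zero_mode[N] / zero_mode[0]):.1e}")
    print(f"q**(-N)             : {q**(-N):.1e}")                         # match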

I’m an outspoken critic of arguments from naturalness (and have been long before we had the LHC data) so it won’t surprise you to hear that I am not impressed. I fail to see how choosing one constant to match observation is supposedly worse than introducing not only a new constant, but also N copies of some new field with a particular coupling pattern.

Either way, by March 2017, Ben Allanach reports from the Rencontres de Moriond – the most important annual conference in particle physics – that clockworks are “getting quite a bit of attention” and are “new fertile ground.”

Ben is right. Clockworks contain one light and weakly coupled mode – difficult to detect because of the weak coupling – and a spectrum of strongly coupled but massive modes – difficult to detect because they’re massive. That makes the model appealing because it will remain impossible to rule it out for a while. It is, therefore, perfect playground for phenomenologists.

And sure enough, the arXiv has since seen further papers on the topic. There’s clockwork inflation and clockwork dark matter, a clockwork axion and clockwork composite Higgses – you get the picture.

But then, in April 2017, a criticism of the clockwork mechanism appears on the arXiv. Its authors Craig, Garcia Garcia, and Sutherland point out that the clockwork mechanism can only be used if the fields in the clockwork’s chain have abelian symmetry groups. If the group isn’t abelian the generators will mix together in the zero mode, and maintaining gauge symmetry then demands that all couplings be equal to one. This severely limits the application range of the model.

A month later, Giudice and McCullough reply to this criticism essentially by saying “we know this.” I have no reason to doubt it, but I still found the Craig et al criticism useful for clarifying what clockworks can and can’t do. This means in particular that the supposed solution to the hierarchy problem does not work as desired because to maintain general covariance one is forced to put a hierarchy of scales into the coupling already.

I am not sure whether this will discourage particle physicists from pursuing the idea further or whether more complicated versions of clockworks will be invented to save naturalness. But I’m confident that – like a toddler’s phase – this too shall pass.

Wednesday, May 31, 2017

Does parametric resonance solve the cosmological constant problem?

An oscillator too. Source: Giphy.
Tl;dr: Ask me again in ten years.

A lot of people asked for my opinion about a paper by Wang, Zhu, and Unruh that recently got published in Physical Review D, one of the top journals in the field.


Following a press-release from UBC, the paper has attracted quite some attention in the pop science media which is remarkable for such a long and technically heavy work. My summary of the coverage so far is “bla-bla-bla parametric resonance.”

I tried to ignore the media buzz a) because it’s a long paper, b) because it’s a long paper, and c) because I’m not your public community debugger. I actually have my own research that I’m more interested in. Sulk.

But of course I eventually came around and read it. Because I’d toyed with a similar idea a while ago and it worked badly. So, clearly, these folks outscored me, and after some introspection I thought that instead of being annoyed by the attention they got, I should figure out why they succeeded where I failed.

Turns out that once you see through the math, the paper is not so difficult to understand. Here’s the quick summary.

One of the major problems in modern cosmology is that vacuum fluctuations of quantum fields should gravitate. Unfortunately, if one calculates the energy density and pressure contained in these fluctuations, the values are much too large to be compatible with the expansion history of the universe.

This vacuum energy gravitates the same way as the cosmological constant. Such a large cosmological constant, however, should lead to a collapse of the universe long before the formation of galactic structures. If you switch the sign, the universe doesn’t collapse but expands so rapidly that structures can’t form because they are ripped apart. Evidently, since we are here today, that didn’t happen. Instead, we observe a small positive cosmological constant and where did that come from? That’s the cosmological constant problem.

The problem can be solved by introducing an additional cosmological constant that cancels the vacuum energy from quantum field theory, leaving behind the observed value. This solution is both simple and consistent. It is, however, unpopular because it requires fine-tuning the additional term so that the two contributions almost – but not exactly – cancel. (I believe this argument to be flawed, but that’s a different story and shall be told another time.) Physicists therefore have tried for a long time to explain why the vacuum energy isn’t large or doesn’t couple to gravity as expected.

Strictly speaking, however, the vacuum energy density is not constant, but – as you expect of fluctuations – it fluctuates. It is merely the average value that acts like a cosmological constant, but the local value should change rapidly both in space and in time. (These fluctuations are why I’ve never bought the “degravitation” idea according to which the vacuum energy decouples because gravity has a built-in high-pass filter. In that case, you could decouple a cosmological constant, but you’d still be stuck with the high-frequency fluctuations.)

In the new paper, the authors make the audacious attempt to calculate how gravity reacts to the fluctuations of the vacuum energy. I say it’s audacious because this is not a weak-field approximation and solving the equations for gravity without a weak-field approximation and without symmetry assumptions (as you would have for the homogeneous and isotropic case) is hard, really hard, even numerically.

The vacuum fluctuations are dominated by very high frequencies corresponding to a usually rather arbitrarily chosen ‘cutoff’ – denoted Λ – where the effective theory for the fluctuations should break down. One commonly assumes that this frequency roughly corresponds to the Planck mass, m_p. The key to understanding the new paper is that the authors do not assume this cutoff, Λ, to be at the Planck mass, but at a much higher energy, Λ ≫ m_p.

As they demonstrate in the paper, massaged into a suitable form, one of the field equations for gravity takes the form of an oscillator equation with a time- and space-dependent coupling term. This means, essentially, space-time at each place has the properties of a driven oscillator.

The important observation that solves the cosmological constant problem is then that the typical resonance frequency of this oscillator is Λ²/m_p, which is by assumption much larger than the main frequency of fluctuations the oscillator is driven by, which is Λ. This means that space-time resonates with the frequency of the vacuum fluctuations – leading to an exponential expansion like that from a cosmological constant – but it resonates only with higher harmonics, so that the resonance is very weak.
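
The weakness of the resonance at higher harmonics is a generic property of parametrically driven oscillators, and you can see it in a toy Mathieu-type equation, x″ + ω₀²(1 + h cos(ω_d t)) x = 0. Driving near twice the natural frequency gives rapid growth; driving far below it, as in the paper’s hierarchy ω_d ~ Λ ≪ ω₀ ~ Λ²/m_p, gives essentially none. This is only an illustration of the mechanism, with made-up parameters, not the paper’s actual equation.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Parametric oscillator x'' + w0^2 * (1 + h*cos(wd*t)) * x = 0.
    w0, h = 1.0, 0.2

    def rhs(t, y, wd):
        x, v = y
        return [v, -w0**2 * (1.0 + h * np.cos(wd * t)) * x]

    for wd in (2.0 * w0, 0.4 * w0):   # principal resonance vs. far-off drive
        sol = solve_ivp(rhs, (0.0, 400.0), [1.0, 0.0], args=(wd,),
                        max_step=0.05, rtol=1e-8, atol=1e-10)
        amp = np.max(np.abs(sol.y[0][-200:]))
        print(f"drive at {wd:.1f}*w0 -> late-time amplitude ~ {amp:.1e}")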

The result is that the amplitude of the oscillations grows exponentially, but it grows slowly. The effective cosmological constant they get by averaging over space is therefore not, as one would naively expect, Λ, but (omitting factors that are hopefully of order one) Λ exp(−Λ²/m_p). One hence uses a trick quite common in high-energy physics, that one can create a large hierarchy of numbers by having a small hierarchy of numbers in an exponent.

In conclusion, by pushing the cutoff above the Planck mass, they suppress the resonance and slow down the resulting acceleration.

Neat, yes.

But I know you didn’t come for the nice words, so here’s the main course. The idea has several problems. Let me start with the most basic one, which is also the reason I once discarded a (related but somewhat different) project. It’s that their solution doesn’t actually solve the field equations of gravity.

It’s not difficult to see. Forget all the stuff about parametric resonance for a moment. Their result doesn’t solve the field equations if you set all the fluctuations to zero, so that you get back the case with a cosmological constant. That’s because if you integrate the second Friedmann-equation for a negative cosmological constant you can only solve the first Friedmann-equation if you have negative curvature. You then get Anti-de Sitter space. They have not introduced a curvature term, hence the first Friedmann-equation just doesn’t have a (real valued) solution.
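
For reference, the relevant equations in the homogeneous and isotropic case with only a cosmological constant are (standard conventions, my summary, not copied from the paper):

    \Big(\frac{\dot a}{a}\Big)^2 \;=\; \frac{\Lambda}{3} \;-\; \frac{k}{a^2}\,,
    \qquad
    \frac{\ddot a}{a} \;=\; \frac{\Lambda}{3}\,.

For Λ < 0, the left-hand side of the first equation cannot be negative, so it only has a real solution if k < 0, which is the negatively curved, Anti-de Sitter case.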

Now, if you turn back on the fluctuations, their solution should reduce to the homogeneous and isotropic case on short distances and short times, but it doesn’t. It would take a very good reason for why that isn’t so, and no reason is given in the paper. It might be possible, but I don’t see how.

I further find it perplexing that they rest their argument on results that were derived in the literature for parametric resonance on the assumption that solutions are linearly independent. General relativity, however, is non-linear. Therefore, one generally isn’t free to combine solutions arbitrarily.

So far that’s not very convincing. To make matters worse, if you don’t have homogeneity, you have even more equations that come from the position-dependence and they don’t solve these equations either. Let me add, however, that this doesn’t worry me all that much because I think it might be possible to deal with it by exploiting the stochastic properties of the local oscillators (which are homogeneous again, in some sense).

Another troublesome feature of their idea is that the scale-factor of the oscillating space-time crosses zero in each cycle so that the space-time volume also goes to zero and the metric structure breaks down. I have no idea what that even means. I’d be willing to ignore this issue if the rest was working fine, but seeing that it doesn’t, it just adds to my misgivings.

The other major problem with their approach is that the limit they work in doesn’t make sense to begin with. They are using classical gravity coupled to the expectation values of the quantum field theory, a mixture known as ‘semi-classical gravity’ in which gravity is not quantized. This approximation, however, is known to break down when the fluctuations in the energy-momentum tensor get large compared to its absolute value, which is the very case they study.

In conclusion, “bla-bla-bla parametric resonance” is a pretty accurate summary.

How serious are these problems? Is there something in the paper that might be interesting after all?

Maybe. But the assumption (see below Eq (42)) that the fields that source the fluctuations satisfy normal energy conditions is, I believe, a non-starter if you want to get an exponential expansion. Even if you introduce a curvature term so that you can solve the equations, I can’t for the hell of it see how you average over locally approximately Anti-de Sitter spaces to get an approximate de Sitter space. You could of course just flip the sign, but then the second Friedmann equation no longer describes an oscillator.

Maybe allowing complex-valued solutions is a way out. Complex numbers are great. Unfortunately, nature’s a bitch and it seems we don’t live in a complex manifold. Hence, you’d then have to find a way to get rid of the imaginary numbers again. In any case, that’s not discussed in the paper either.

I admit that the idea of using a de-tuned parametric resonance to decouple vacuum fluctuations and limit their impact on the expansion of the universe is nice. Maybe I just lack vision and further work will solve the above mentioned problems. More generally, I think numerically solving the field equations with stochastic initial conditions is of general interest and it would be great if their paper inspires follow-up studies. So, give it ten years, and then ask me again. Maybe something will have come out of it.

In other news, I have also written a paper that explains the cosmological constant and I haven’t only solved the equations that I derived, I also wrote a Maple work-sheet that you can download and check the calculation for yourself. The paper was just accepted for publication in PRD.

As far as my self-reflection is concerned, I concluded I might be too ambitious. It’s much easier to solve equations if you don’t actually solve them.


I gratefully acknowledge helpful conversation with two of this paper’s authors who have been very, very patient with me. Sorry I didn’t have anything nicer to say.

Friday, May 26, 2017

Can we probe the quantization of the black hole horizon with gravitational waves?


Tl;dr: Yes, but the testable cases aren’t the most plausible ones.

It’s the year 2017, but we still don’t know how space and time get along with quantum mechanics. The best clue so far comes from Stephen Hawking and Jacob Bekenstein. They made one of the most surprising finds that theoretical physics saw in the 20th century: Black holes have entropy.

It was a surprise because entropy is a measure for unresolved microscopic details, but in general relativity black holes don’t have details. They are almost featureless balls. That they nevertheless seem to have an entropy – and a gigantically large one in addition – indicates strongly that black holes can be understood only by taking into account quantum effects of gravity. The large entropy, so the idea, quantifies all the ways the quantum structure of black holes can differ.

The Bekenstein-Hawking entropy scales with the horizon area of the black hole and is usually interpreted as a measure for the number of elementary areas of size Planck-length squared. A Planck-length is a tiny 10⁻³⁵ meters. This area-scaling is also the basis of the holographic principle which has dominated research in quantum gravity for some decades now. If anything is important in quantum gravity, this is.

This interpretation implies that the area of the black hole horizon always has to be a multiple of the elementary Planck area. However, since the Planck area is so small compared to the size of astrophysical black holes – ranging from some kilometers to some billion kilometers – you’d never notice the quantization just by looking at a black hole. If you got to look at it to begin with. So it seems like a safely untestable idea.
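
To put a number on “you’d never notice,” count the Planck areas on the horizon of a single solar-mass black hole:

    import math

    # Number of Planck areas on the horizon of a solar-mass black hole.
    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
    l_planck = 1.616e-35

    R_s = 2 * G * M_sun / c**2              # Schwarzschild radius, ~3 km
    A = 4 * math.pi * R_s**2                # horizon area
    print(f"horizon area    : {A:.1e} m^2")
    print(f"in Planck areas : {A / l_planck**2:.1e}")   # ~4e77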

A few months ago, however, I noticed an interesting short note on the arXiv in which the authors claim that one can probe the black hole quantization with gravitational waves emitted from a black hole, for example in the ringdown after a merger event like the one seen by LIGO:
    Testing Quantum Black Holes with Gravitational Waves
    Valentino F. Foit, Matthew Kleban
    arXiv:1611.07009 [hep-th]

The basic idea is simple. Assume it is correct that the black hole area is always a multiple of the Planck area and that gravity is quantized so that it has a particle – the graviton – associated with it. If the only way for a black hole to emit a graviton is to change its horizon area in multiples of the Planck area, then this dictates the energy that the black hole loses when the area shrinks because the black hole’s area depends on the black hole’s mass. The Planck-area quantization hence sets the frequency of the graviton that is emitted.

A gravitational wave is nothing but a large number of gravitons. According to the area quantization, the wavelengths of the emitted gravitons are of the order of the black hole radius, which is what one expects to dominate the emission during the ringdown. However, so the authors’ argument, the spectrum of the gravitational wave should be much narrower in the quantum case.
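
As a quick order-of-magnitude check that a wavelength comparable to the black hole radius indeed falls into LIGO’s band, take a remnant of roughly 60 solar masses (the mass is my example value):

    import math

    # Characteristic ringdown frequency scale, f ~ c / (2*pi*R_s), for a
    # black hole of about 60 solar masses.
    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
    M = 60 * M_sun

    R_s = 2 * G * M / c**2                  # ~180 km
    print(f"Schwarzschild radius: {R_s / 1e3:.0f} km")
    print(f"frequency scale     : {c / (2 * math.pi * R_s):.0f} Hz")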

Since the model that quantizes the black hole horizon in Planck-area chunks depends on a free parameter, it would take two measurements of black hole ringdowns to rule out the scenario: The first to fix the parameter, the second to check whether the same parameter works for all measurements.

It’s a simple idea but it may be too simple. The authors are careful to list the possible reasons for why their argument might not apply. I think it doesn’t apply for a reason that’s a combination of what is on their list.

A classical perturbation of the horizon leads to a simultaneous emission of a huge number of gravitons, and for those there is no good reason why every single one of them must fit the exact emission frequency that belongs to an increase of one Planck area as long as the total energy adds up properly.

I am not aware, however, of a good theoretical treatment of this classical limit from the area-quantization. It might indeed not work in some of the more audacious proposals we have recently seen, like Gia Dvali’s idea that black holes are condensates of gravitons. Scenarios like Dvali’s might indeed be testable with the ringdown characteristics. I’m sure we will hear more about this in the coming years as LIGO accumulates data.

What this proposed test would do, therefore, is to probe the failure of reproducing general relativity for large oscillations of the black hole horizon. Clearly, it’s something that we should look for in the data. But I don’t think black holes will release their secrets quite as easily.

Friday, May 19, 2017

Can we use gravitational waves to rule out extra dimensions – and string theory with it?

Gravitational waves, computer simulation. Credits: Henze, NASA
Tl;dr: Probably not.

Last week I learned from New Scientist that “Gravitational waves could show hints of extra dimensions.” The article is about a paper which recently appeared on the arxiv:

The claim in this paper is nothing short of stunning. Authors Andriot and Gómez argue that if our universe has additional dimensions, no matter how small, then we could find out using gravitational waves in the frequency regime accessible by LIGO.

While LIGO alone cannot do it because the measurement requires three independent detectors, soon upcoming experiments could either confirm or forever rule out extra dimensions – and kill string theory along the way. That, ladies and gentlemen, would be the discovery of the millennium. And, almost equally stunning, you heard it first from New Scientist.

Additional dimensions are today primarily associated with string theory, but the idea is much older. In the context of general relativity, it dates back to the work of Kaluza and Klein in the 1920s. I came across their papers as an undergraduate and was fascinated. Kaluza and Klein showed that if you add a fourth space-like coordinate to our universe and curl it up to a tiny circle, you don’t get back general relativity – you get back general relativity plus electrodynamics.

In the presently most widely used variants of string theory one has not one, but six additional dimensions and they can be curled up – or ‘compactified,’ as they say – to complicated shapes. But a key feature of the original idea survives: Waves which extend into the extra dimension must have wavelengths in integer fractions of the extra dimension’s radius. This gives rise to an infinite number of higher harmonics – the “Kaluza-Klein tower” – that appear like massive excitations of any particle that can travel into the extra dimensions.

The mass of these excitations is inversely proportional to the radius (in natural units). This means if the radius is small, one needs a lot of energy to create an excitation, and this explains why we haven’t yet noticed the additional dimensions.

In the most commonly used model, one further assumes that the only particle that experiences the extra-dimensions is the graviton – the hypothetical quantum of the gravitational interaction. Since we have not measured the gravitational interaction on short distances as precisely as the other interactions, such gravity-only extra-dimensions allow for larger radii than all-particle extra-dimensions (known as “universal extra-dimensions”). In the new paper, the authors deal with gravity-only extra-dimensions.

From the current lack of observation, one can then derive bounds on the size of the extra-dimension. These bounds depend on the number of extra-dimensions and on their intrinsic curvature. For the simplest case – the flat extra-dimensions used in the paper – the bounds range from a few micrometers (for two extra-dimensions) to a few inverse MeV for six extra dimensions (natural units again).
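
To put numbers to the inverse relation between radius and mass, here are the first Kaluza-Klein masses, m ~ ħc/R, for a few sample radii spanning the bounds just mentioned (the radii are examples, nothing more):

    # First Kaluza-Klein mass, m ~ hbar*c/R, for a few sample radii.
    hbar_c = 1.97327e-7                     # eV * m
    for R_m, label in [(1e-6, "1 micrometer"),
                       (1e-9, "1 nanometer"),
                       (1.97e-13, "1 inverse MeV")]:
        print(f"R = {label:>13s} -> m ~ {hbar_c / R_m:.1e} eV")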

Such extra-dimensions do more, however, than giving rise to a tower of massive graviton excitations. Gravitational waves have spin two regardless of the number of spacelike dimensions, but the number of possible polarizations depends on the number of dimensions. More dimensions, more possible polarizations. And the number of polarizations, importantly, doesn’t depend on the size of the extra-dimensions at all.

In the new paper, the authors point out that the additional polarization of the graviton affects the propagation even of the non-excited gravitational waves, ie the ones that we can measure. The modified geometry of general relativity gives rise to a “breathing mode,” that is a gravitational wave which expands and contracts synchronously in the two (large) dimensions perpendicular to the direction of the wave. Such a breathing mode does not exist in normal general relativity, but it is not specific to extra-dimensions; other modifications of general relativity also have a breathing mode. Still, its non-observation would indicate no extra-dimensions.

But an old problem of Kaluza-Klein theories stands in the way of drawing this conclusion. The radii of the additional dimensions (also known as “moduli”) are unstable. You can assume that they have particular initial values, but there is no reason for the radii to stay at these values. If you shake an extra-dimension, its radius tends to run away. That’s a problem because then it becomes very difficult to explain why we haven’t yet noticed the extra-dimensions.

To deal with the unstable radius of an extra-dimension, theoretical physicists hence introduce a potential with a minimum at which the value of the radius is stuck. This isn’t optional – it’s necessary to prevent conflict with observation. One can debate how well-motivated that is, but it’s certainly possible, and it removes the stability problem.

Fixing the radius of an extra-dimension, however, will also make it more difficult to wiggle it – after all, that’s exactly what the potential was made to do. Unfortunately, in the above mentioned paper the authors don’t have stabilizing potentials.

I do not know for sure what stabilizing the extra-dimensions would do to their analysis. This would depend not only on the type and number of extra-dimension but also on the potential. Maybe there is a range in parameter-space where the effect they speak of survives. But from the analysis provided so far it’s not clear, and I am – as always – skeptical.

In summary: I don’t think we’ll rule out string theory any time soon.

[Updated to clarify breathing mode also appears in other modifications of general relativity.]

Thursday, May 11, 2017

A Philosopher Tries to Understand the Black Hole Information Paradox

Is the black hole information loss paradox really a paradox? Tim Maudlin, a philosopher from NYU and occasional reader of this blog, doesn’t think so. Today, he has a paper on the arXiv in which he complains that the so-called paradox isn’t one and that physicists don’t understand what they are talking about.
So is the paradox a paradox? If you mean whether black holes break mathematics, then the answer is clearly no. The problem with black holes is that nobody knows how to combine them with quantum field theory. It should really better be called a problem than a paradox, but nomenclature rarely follows logical argumentation.

Here is the problem. The dynamics of quantum field theories is always reversible. It also preserves probabilities which, taken together (assuming linearity), means the time-evolution is unitary. That quantum field theories are unitary depends on certain assumptions about space-time, notably that space-like hypersurfaces – a generalized version of moments of ‘equal time’ – are complete. Space-like hypersurfaces after the entire evaporation of black holes violate this assumption. They are, as the terminology has it, not complete Cauchy surfaces. Hence, there is no reason for time-evolution to be unitary in a space-time that contains a black hole. What’s the paradox then, Maudlin asks.

First, let me point out that this is hardly news. As Maudlin himself notes, this is an old story, though I admit it’s often not spelled out very clearly in the literature. In particular the Susskind-Thorlacius paper that Maudlin picks on is wrong in more ways than I can possibly get into here. Everyone in the field who has their marbles together knows that time-evolution is unitary on “nice slices”– which are complete Cauchy-hypersurfaces – at all finite times. The non-unitarity comes from eventually cutting these slices. The slices that Maudlin uses aren’t quite as nice because they’re discontinuous, but they essentially tell the same story.

What Maudlin does not spell out however is that knowing where the non-unitarity comes from doesn’t help much to explain why we observe it to be respected. Physicists are using quantum field theory here on planet Earth to describe, for example, what happens in LHC collisions. For all these Earthlings know, there are lots of black holes throughout the universe and their current hypersurface hence isn’t complete. Worse still, in principle black holes can be created and subsequently annihilated in any particle collision as virtual particles. This would mean then, according to Maudlin’s argument, we’d have no reason to even expect a unitary evolution because the mathematical requirements for the necessary proof aren’t fulfilled. But we do.

So that’s what irks physicists: If black holes would violate unitarity all over the place, how come we don’t notice? This issue is usually phrased in terms of the scattering-matrix, which asks a concrete question: If I can create a black hole in a scattering process, how come we never see any violation of unitarity?

Maybe we do, you might say, or maybe it’s just too small an effect. Yes, people have tried that argument, which is the whole discussion about whether unitarity maybe just is violated etc. That’s the place where Hawking came from all these years ago. Does Maudlin want us to go back to the 1980s?

In his paper, he also points out correctly that – from a strictly logical point of view – there’s nothing to worry about because the information that fell into a black hole can be kept in the black hole forever without any contradictions. I am not sure why he doesn’t mention this isn’t a new insight either – it’s what goes in the literature as a remnant solution. Now, physicists normally assume that inside of remnants there is no singularity because nobody really believes the singularity is physical, whereas Maudlin keeps the singularity, but from the outside perspective that’s entirely irrelevant.

It is also correct, as Maudlin writes, that remnant solutions have been discarded on spurious grounds with the result that research on the black hole information loss problem has grown into a huge bubble of nonsense. The most commonly named objection to remnants – the pair production problem – has no justification because – as Maudlin writes – it presumes that the volume inside the remnant is small for which there is no reason. This too is hardly news. Lee and I pointed this out, for example, in our 2009 paper. You can find more details in a recent review by Chen et al.

The other objection against remnants is that this solution would imply that the Bekenstein-Hawking entropy doesn’t count microstates of the black hole. This idea is very unpopular with string theorists who believe that they have shown the Bekenstein-Hawking entropy counts microstates. (Fyi, I think it’s a circular argument because it assumes a bulk-boundary correspondence ab initio.)

Either way, none of this is really new. Maudlin’s paper is just reiterating all the options that physicists have been chewing on forever: Accept unitarity violation, store information in remnants, or finally get it out.

The real problem with black hole information is that nobody knows what happens with it. As time passes, you inevitably come into a regime where quantum effects of gravity are strong and nobody can calculate what happens then. The main argument we are seeing in the literature is whether quantum gravitational effects become noticeable before the black hole has shrunk to a tiny size.

So what’s new about Maudlin’s paper? The condescending tone by which he attempts public ridicule strikes me as bad news for the – already conflict-laden – relation between physicists and philosophers.

Friday, April 21, 2017

No, physicists have not created “negative mass”

Thanks to BBC, I will now for several years get emails from know-it-alls who think physicists are idiots not to realize the expansion of the universe is caused by negative mass. Because that negative mass, you must know, has actually been created in the lab:


The Independent declares this turns physics “completely upside down”


And if you think that was crappy science journalism, The Telegraph goes so far as to insist it’s got something to do with black holes

Not that they offer so much as a hint of an explanation what black holes have to do with anything.

These disastrous news items purport to summarize a paper that recently got published in Physical Review Letters, one of the top journals in the field:
    Negative mass hydrodynamics in a Spin-Orbit-Coupled Bose-Einstein Condensate
    M. A. Khamehchi, Khalid Hossain, M. E. Mossman, Yongping Zhang, Th. Busch, Michael McNeil Forbes, P. Engels
    Phys. Rev. Lett. 118, 155301 (2017)
    arXiv:1612.04055 [cond-mat.quant-gas]

This paper reports the results of an experiment in which the physicists created a condensate that behaves as if it has a negative effective mass.

The little word “effective” does not appear in the paper’s title – and not in the screaming headlines – but it is important. Physicists use the preamble “effective” to indicate something that is not fundamental but emergent, and the exact definition of such a term is often a matter of convention.

The “effective radius” of a galaxy, for example, is not its radius. The “effective nuclear charge” is not the charge of the nucleus. And the “effective negative mass” – you guessed it – is not a negative mass.

The effective mass is merely a handy mathematical quantity to describe the condensate’s behavior.

The condensate in question here is a supercooled cloud of about ten thousand Rubidium atoms. To derive its effective mass, you look at the dispersion relation – ie the relation between energy and momentum – of the condensate’s constituents, and take the second derivative of the energy with respect to the momentum. That thing you call the inverse effective mass. And yes, it can take on negative values.
 
If you plot the energy against the momentum, you can read off the regions of negative mass from the curvature of the resulting curve. It’s clear to see in Fig 1 of the paper, see below. I added the red arrow to point to the region where the effective mass is negative.
Fig 1 from arXiv:1612.04055 [cond-mat.quant-gas]

As to why that thing is called effective mass, I had to consult a friend, David Abergel, who works with cold atom gases. His best explanation is that it’s a “historical artefact.” And it’s not deep: It’s called an effective mass because in the usual non-relativistic limit E = p²/(2m), so if you take two derivatives of E with respect to p, you get the inverse mass. Then, if you do the same for any other relation between E and p, you call the result an inverse effective mass.
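
Here is what that looks like for a generic spin-orbit-coupled dispersion (recoil units, parameters chosen for illustration and not matched to the experiment): take the lower band, differentiate twice, and read off where the inverse effective mass goes negative.

    import numpy as np

    # Lower band of a generic spin-orbit-coupled dispersion in recoil units,
    # E(k) = (k^2 + 1)/2 - sqrt(k^2 + Omega^2/4), with Raman coupling Omega.
    Omega = 1.0
    k = np.linspace(-2, 2, 2001)
    E = 0.5 * (k**2 + 1) - np.sqrt(k**2 + Omega**2 / 4)

    # Inverse effective mass = second derivative of E with respect to k.
    d2E = np.gradient(np.gradient(E, k), k)

    neg = k[d2E < 0]
    print(f"negative effective mass for |k| < about {neg.max():.2f}")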

It's a nomenclature that makes sense in context, but it probably doesn’t sound very headline-worthy:
“Physicists created what’s by historical accident still called an effective negative mass.”
In any case, if you use this definition, you can rewrite the equations of motion of the fluid. They then resemble the usual hydrodynamic equations with a term that contains the inverse effective mass multiplied by a force.
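Schematically – the exact equations are in the paper, this is only meant to show where the effective mass enters – the resulting one-dimensional Euler-type equation has the form

$$\left(\partial_t + v\,\partial_x\right) v \;\simeq\; \frac{F}{m_{\rm eff}}\,,$$

where the force $F$ collects the gradients of the external potential, the interaction energy, and the quantum pressure. A negative $m_{\rm eff}$ then means the fluid accelerates opposite to $F$.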

What this “negative mass” hence means is that if you release the condensate from a trapping potential that holds it in place, it will first start to run apart. And then no longer run apart. That pretty much sums up the paper.

The remaining force which the fluid acts against, it must be emphasized, is then not even an external force. It’s a force that comes from the quantum pressure of the fluid itself.

So here’s another accurate headline:
“Physicists observe fluid not running apart.”
This is by no means to say that the result is uninteresting! Indeed, it’s pretty cool that this fluid self-limits its expansion thanks to long-range correlations which come from quantum effects. I’ll even admit that thinking of the behavior as if the fluid had a negative effective mass may be a useful interpretation. But that still doesn’t mean physicists have actually created negative mass.

And it has nothing to do with black holes, dark energy, wormholes, and the like. Trust me, physics is still upside up.

Saturday, March 11, 2017

Is Verlinde’s Emergent Gravity compatible with General Relativity?

Dark matter filaments, Millennium Simulation
Image: Volker Springel
A few months ago, Erik Verlinde published an update of his 2010 idea that gravity might originate in the entropy of so-far undetected microscopic constituents of space-time. Gravity, then, would not be fundamental but emergent.

With the new formalism, he derived an equation for a modified gravitational law that, on galactic scales, results in an effect similar to dark matter.

Verlinde’s emergent gravity builds on the idea that gravity can be reformulated as a thermodynamic theory, that is, as if it were caused by the dynamics of a large number of small entities whose exact identity is unknown – and also unnecessary to describe their bulk behavior.

If one wants to get back usual general relativity from the thermodynamic approach, one uses an entropy that scales with the surface area of a volume. Verlinde postulates there is another contribution to the entropy which scales with the volume itself. It’s this additional entropy that causes the deviations from general relativity.

However, in the vicinity of matter the volume-scaling entropy decreases until it’s entirely gone. Then, one is left with only the area-scaling part and gets normal general relativity. That’s why on scales where the average density is high – high compared to that of galaxies or galaxy clusters – the equation which Verlinde derives doesn’t apply. This would be the case, for example, near stars.

The idea quickly attracted attention in the astrophysics community, where a number of papers have since appeared which confront said equation with data. Not all of these papers are correct. Two of them seem to have missed entirely that the equation they are using doesn’t apply on solar-system scales. Of the remaining papers, three are fairly neutral in their conclusions, while one – by Lelli et al – is critical. The authors find that Verlinde’s equation – which assumes spherical symmetry – is a worse fit to the data than particle dark matter.

There has not, however, so far been much response from theoretical physicists. I’m not sure why that is. I spoke with science writer Anil Ananthaswamy some weeks ago and he told me he didn’t have an easy time finding a theorist willing to do as much as comment on Verlinde’s paper. In a recent Nautilus article, Anil speculates on why that might be:
“A handful of theorists that I contacted declined to comment, saying they hadn’t read the paper; in physics, this silent treatment can sometimes be a polite way to reject an idea, although, in fairness, Verlinde’s paper is not an easy read even for physicists.”
Verlinde’s paper is indeed not an easy read. I spent some time trying to make sense of it and originally didn’t get very far. The whole framework that he uses – dealing with an elastic medium and a strain-tensor and all that – isn’t only unfamiliar but also doesn’t fit together with general relativity.

The basic tenet of general relativity is coordinate invariance, and it’s absolutely not clear how it’s respected in Verlinde’s framework. So, I tried to see whether there is a way to make Verlinde’s approach generally covariant. The answer is yes, it’s possible. And it actually works better than I expected. I’ve written up my findings in a paper which just appeared on the arxiv:


It took some trial and error, but I finally managed to guess a covariant Lagrangian that produces the equations in Verlinde’s paper when one makes the same approximations. Without these approximations, the equations are fully compatible with general relativity. They are, however – as so often in general relativity – hideously difficult to solve.

Making some simplifying assumptions allows one to at least find an approximate solution. It turns out, however, that even if one makes the same approximations as in Verlinde’s paper, the equation one obtains is not exactly the same as his – it has an additional integration constant.

My first impulse was to set that constant to zero, but upon closer inspection that didn’t make sense: The constant has to be determined by a boundary condition that ensures the gravitational field of a galaxy (or galaxy cluster) asymptotes to Friedmann-Robertson-Walker space filled with normal matter and a cosmological constant. Unfortunately, I haven’t been able to find the solution that one should get in the asymptotic limit, hence wasn’t able to fix the integration constant.

This means, importantly, that the data fits which assume the additional constant is zero do not actually constrain Verlinde’s model.

With the Lagrangian approach that I have tried, the interpretation of Verlinde’s model is very different – I dare say far less outlandish. There’s an additional vector field which permeates space-time and which interacts with normal matter. It’s a strange vector field, both because it’s not – unlike the other vector fields we know of – a gauge boson, and because it has a different kinetic energy term. In addition, the kinetic term appears in a form one doesn’t commonly encounter in particle physics, but rather in condensed matter physics.
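To give a flavor of what “not a gauge boson, with a different kinetic term” means – the following is generic notation for illustration only, not the actual Lagrangian of my paper – a gauge field’s kinetic term is built solely from the antisymmetric combination

$$\mathcal{L}_{\rm gauge} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu}\,,\qquad F_{\mu\nu} = \nabla_\mu A_\nu - \nabla_\nu A_\mu\,,$$

while a generic vector field $u_\mu$ can come with all contractions of its gradient, for example

$$\mathcal{L}_u = c_1\,\nabla_\mu u_\nu\,\nabla^\mu u^\nu + c_2\,(\nabla_\mu u^\mu)^2 + \dots$$

Terms of the second kind have no gauge invariance and are structurally closer to the elastic (strain) energies one writes down for deformed media in condensed matter physics.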

Interestingly, if you look at what this field would do if there was no other matter, it would behave exactly like a cosmological constant.

This, however, isn’t to say I’m sold on the idea. What I am missing, most importantly, is some clue that would tell me the additional field actually behaves like matter on cosmological scales, or at least sufficiently similarly to reproduce other observables, like, e.g., baryon acoustic oscillations. This should be possible to find out with the equations in my paper – if one manages to actually solve them.

Finding solutions to Einstein’s field equations is a specialized discipline and I’m not familiar with all the relevant techniques. I will admit that my primary method of solving the equations – to the big frustration of my reviewers – is to guess solutions. It works until it doesn’t. In the case of Friedmann-Robertson-Walker with two coupled fluids, one of which is the new vector field, it hasn’t worked. At least not so far. But the equations are in the paper and maybe someone else will be able to find a solution.

In summary, Verlinde’s emergent gravity has withstood the first-line bullshit test. Yes, it’s compatible with general relativity.

Thursday, March 02, 2017

Yes, a violation of energy conservation can explain the cosmological constant

Chad Orzel recently pointed me towards an article in Physics World according to which “Dark energy emerges when energy conservation is violated.” Quoted in the Physics World article are George Ellis, who enthusiastically notes that the idea is “no more fanciful than many other ideas being explored in theoretical physics at present,” and Lee Smolin, according to whom it’s “speculative, but in the best way.” Chad clearly found this somewhat too polite to be convincing and asked me for some open words:



I had seen the headline flashing by earlier but ignored it because – forgive me – it’s obvious energy non-conservation can mimic a cosmological constant.

The reason is that, in General Relativity, the expansion of space-time is usually described by two equations, known as the Friedmann equations. They relate the velocity and acceleration of the universe’s normalized distance measure – called the ‘scale factor’ – to the average energy density and pressure of matter and radiation in the universe. If you put in energy density and pressure, you can calculate how the universe expands. That, basically, is what cosmologists do for a living.

The two Friedmann-equations, however, are not independent of each other because General Relativity presumes that the various forms of energy-densities are locally conserved. That means if you take only the first Friedmann-equation and use energy-conservation, you get the second Friedmann-equation, which contains the cosmological constant. If you turn this statement around it means that if you throw out energy conservation, you can produce an accelerated expansion.
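For reference, in the spatially flat case and with c = 1, the relations in question are

$$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho + \frac{\Lambda}{3}\,,\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) + \frac{\Lambda}{3}\,,\qquad
\dot\rho + 3\,\frac{\dot a}{a}\,(\rho + p) = 0\,.$$

Differentiating the first equation in time and inserting the third – local energy conservation – reproduces the second; drop the third, and the constraint that pins down the Λ-like term goes with it.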

It’s an idea I’ve toyed with years ago, but it’s not a particularly appealing solution to the cosmological constant problem. The issue is you can’t just selectively throw out some equations from a theory because you don’t like them. You have to make everything work in a mathematically consistent way. In particular, it doesn’t make sense to throw out local energy-conservation if you used this assumption to derive the theory to begin with.

Upon closer inspection, I saw that the Physics World piece summarizes a paper which got published in PRL a few weeks ago, but has been on the arxiv for almost a year. Indeed, when I looked at it, I recalled I had read the paper and found it very interesting. I didn’t write about it here because the point they make is quite technical. But since Chad asked, here we go.

Modifying General Relativity is chronically hard because the derivation of the theory is so straightforward that much violence is needed to avoid Einstein’s Field Equations. It took Einstein a decade to get the equations right, but if you know your differential geometry it’s really a three-liner. This isn’t to belittle Einstein’s achievement – the mathematical apparatus wasn’t fully developed at the time and he was guessing his way around underived theorems – but merely to emphasize that General Relativity is easy to get but hard to amend.

One of the few known ways to consistently amend General Relativity is ‘unimodular gravity,’ which works as follows.

In General Relativity the central dynamical quantity is the metric tensor (or just “metric”), which you need to measure the ratios of distances relative to each other. From the metric tensor and its first and second derivatives you can calculate the curvature of space-time.

General Relativity can be derived from an optimization principle by asking: “From all the possible metrics, which is the one that minimizes curvature given certain sources of energy?” This leads you to Einstein’s Field Equations. In unimodular gravity in contrast, you don’t look at all possible metrics but only those with a fixed metric determinant, which means you don’t allow a rescaling of volumes. (A very readable introduction to unimodular gravity by George Ellis can be found here.)

Unimodular gravity does not result in Einstein’s Field Equations, but only in a reduced version thereof, because the variation of the metric is limited. Because fewer variations of the metric are allowed, the theory has fewer symmetries. And, as Emmy Noether taught us, symmetries give rise to conservation laws. Therefore, unimodular gravity has fewer conservation laws – in particular, energy is no longer automatically locally conserved.

I must emphasize that this is not the ‘usual’ non-conservation of total energy one already has in General Relativity, but a new violation of local energy conservation that does not occur in General Relativity.

If, however, you then add energy conservation to unimodular gravity, you get back Einstein’s field equations, though this re-derivation comes with a twist: The cosmological constant now appears as an integration constant. For some people this solves a problem, but personally I don’t see what difference it makes just where the constant comes from – its value is unexplained either way. Therefore, I’ve never found unimodular gravity particularly interesting, thinking that if you get back General Relativity you could just as well have used General Relativity to begin with.
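Spelled out – this is the standard manipulation, sketched here only schematically – the unimodular field equations are the trace-free part of Einstein’s equations,

$$R_{\mu\nu} - \tfrac{1}{4}g_{\mu\nu}R = 8\pi G\left(T_{\mu\nu} - \tfrac{1}{4}g_{\mu\nu}T\right).$$

Taking the divergence and using the Bianchi identity gives $\nabla_\nu\!\left(R + 8\pi G\,T\right) = 32\pi G\,\nabla^\mu T_{\mu\nu}$. If one now imposes $\nabla^\mu T_{\mu\nu} = 0$, the combination $R + 8\pi G\,T$ is constant; call it $4\Lambda$, insert it back, and you recover the full Einstein equations with $\Lambda$ as an integration constant.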

But in the new paper the authors correctly point out that you don’t necessarily have to add energy conservation to the equations you get in unimodular gravity. And if you don’t, you don’t get back general relativity, but a modification of general relativity in which energy conservation is violated – in a mathematically consistent way.

Now, the authors don’t look at all allowed violations of energy conservation in their paper, and I think smartly so, because most of them would probably result in a complete mess – by which I mean be crudely in conflict with observation. They instead look at a particularly simple type of energy non-conservation and show that it effectively mimics a cosmological constant.
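To see how little it takes, here is a toy numerical sketch – the parameterization below (matter losing a small fraction of its energy density, with the loss feeding an effective Λ) is my own illustrative assumption, not the one used in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model, not the paper's parameterization: pressureless matter loses energy
# density at a rate eps*rho, and the lost energy sources an effective Lambda.
# Units: 8*pi*G = 1 and the initial Hubble rate H(0) = 1.
eps = 0.01  # strength of the energy-conservation violation (illustrative)

def rhs(t, y):
    rho, lam = y                         # matter density, effective Lambda
    H = np.sqrt((rho + lam) / 3.0)       # flat-space Friedmann equation
    return [-3.0 * H * rho - eps * rho,  # continuity equation with a leak
            eps * rho]                   # the leak feeds Lambda_eff

sol = solve_ivp(rhs, (0.0, 50.0), [3.0, 0.0], dense_output=True, rtol=1e-9)

for t in (0.0, 5.0, 50.0):
    rho, lam = sol.sol(t)
    q = -1.0 + 1.5 * rho / (rho + lam)   # deceleration parameter
    print(f"t = {t:5.1f}   rho = {rho:.3e}   Lambda_eff = {lam:.3e}   q = {q:+.2f}")
```

The deceleration parameter q drifts from +0.5 (matter-dominated, decelerating) towards -1 (de Sitter-like): the energy that goes missing from the matter sector ends up acting just like a cosmological constant.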

They then argue that on the average such a type of energy-violation might arise from certain quantum gravitational effects, which is not entirely implausible. If space-time isn’t fundamental, but is an emergent description that arises from an underlying discrete structure, it isn’t a priori obvious what happens to conservation laws.

The framework proposed in the new paper, therefore, could be useful to quantify the observable effects that arise from this. To demonstrate this, the authors look at two examples: 1) diffusion from causal sets and 2) spontaneous collapse models in quantum mechanics. In both cases, they show, one can use the general description derived in the paper to find constraints on the parameters of these models. I find this very useful because it is a simple, new way to test approaches to quantum gravity using cosmological data.

Of course this leaves many open questions. Most importantly, while the authors offer some general arguments for why such violations of energy conservation would be too small to be noticeable in any other way than from the accelerated expansion of the universe, they have no actual proof for this. In addition, they have only looked at this modification from the side of General Relativity, but I would like to also know what happens to Quantum Field Theory when waving good-bye to energy conservation. We want to make sure this doesn’t ruin the standard model’s fit of any high-precision data. Also, their predictions crucially depend on their assumption about when energy violation begins, which strikes me as quite arbitrary and lacking a physical motivation.

In summary, I think it’s a so-far very theoretical but also interesting idea. I don’t even find it all that speculative. It is also clear, however, that it will require much more work to convince anybody this doesn’t lead to conflicts with observation.

Thursday, February 09, 2017

New Data from the Early Universe Does Not Rule Out Holography

[img src: entdeckungen.net]
It’s string theorists’ most celebrated insight: The world is a hologram. Like everything else string theorists have come up with, it’s an untested hypothesis. But now, it’s been put to test with a new analysis that compares a holographic early universe with its non-holographic counterpart.

Tl;dr: Results are inconclusive.

When string theorists say we live in a hologram, they don’t mean we are shadows in Plato’s cave. They mean their math says that all information about what’s inside a box can be encoded on the boundary of that box – albeit in entirely different form.

The holographic principle – if correct – means there are two different ways to describe the same reality. Unlike in Plato’s cave, however, where the shadows lack information about what caused them, with holography both descriptions are equally good.

Holography would imply that the three dimensions of space which we experience are merely one way to think of the world. If you can describe what happens in our universe by equations that use only two-dimensional surfaces, you might as well say we live in two dimensions – just that these are dimensions we don’t normally experience.

It’s a nice idea but hard to test. That’s because the two-dimensional interpretation of today’s universe isn’t normally very workable. Holography identifies two different theories with each other by a relation called “duality.” The two theories in question here are one for gravity in three dimensions of space, and a quantum field theory without gravity in one dimension less. However, whenever one of the theories is weakly coupled, the other one is strongly coupled – and computations in strongly coupled theories are hard, if not impossible.

The gravitational force in our universe is presently weakly coupled. For this reason General Relativity is the easier side of the duality. However, the situation might have been different in the early universe. Inflation – the rapid phase of expansion briefly after the big bang – is usually assumed to take place in gravity’s weakly coupled regime. But that might not be correct. If instead gravity at that early stage was strongly coupled, then a description in terms of a weakly coupled quantum field theory might be more appropriate.

This idea has been pursued by Kostas Skenderis and collaborators for several years. These researchers have developed a holographic model in which inflation is described by a lower-dimensional non-gravitational theory. In a recent paper, their predictions have been put to test with new data from the Planck mission, a high-precision measurement of the temperature fluctuations of the cosmic microwave background.


In this new study, the authors compare the way that holographic inflation and standard inflation in the concordance model – also known as ΛCDM – fit the data. The concordance model is described by six parameters. Holographic inflation has a closer connection to the underlying theory and so the power spectrum brings in one additional parameter, which makes a total of seven. After adjusting for the number of parameters, the authors find that the concordance model fits better to the data.
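Whatever the specific statistic the authors use, the logic of “adjusting for the number of parameters” can be illustrated in a toy setting – the data and models below are synthetic, and the (rather crude) Bayesian information criterion stands in for whatever comparison the paper actually performs:

```python
import numpy as np

# Toy illustration of penalizing extra parameters: fit synthetic data with a
# 1-parameter and a 2-parameter model and compare BIC = k*ln(N) + chi^2.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
sigma = 0.1
y = 2.0 * x + rng.normal(0.0, sigma, x.size)   # data generated from the simple model

def chi2(model):
    return np.sum((y - model)**2 / sigma**2)

# Model A: y = a*x (one parameter), least-squares fit
a = np.sum(x * y) / np.sum(x * x)
chi2_A = chi2(a * x)

# Model B: y = a*x + b*x^2 (two parameters)
X = np.column_stack([x, x**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
chi2_B = chi2(X @ coef)

N = x.size
print(f"chi2: A = {chi2_A:.1f}, B = {chi2_B:.1f}   (B can never be worse)")
print(f"BIC : A = {1*np.log(N) + chi2_A:.1f}, B = {2*np.log(N) + chi2_B:.1f}   (lower is better)")
```

The extra parameter always lowers the chi-squared a little, but it only wins the comparison if the improvement exceeds the penalty.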

However, the biggest discrepancy between the predictions of holographic inflation and the concordance model arises at large scales, that is, at low multipole moments. In this regime, the predictions from holographic inflation cannot really be trusted. Therefore, the authors repeat the analysis with the low multipole moments omitted from the data. Then the two models fit the data equally well. In some cases (depending on the choice of prior for one of the parameters) holographic inflation is indeed a better fit, but the difference is not statistically significant.

To put this result into context it must be added that the best-understood cases of holography work in space-times with a negative cosmological constant, the Anti-de Sitter spaces. Our own universe, however, is not of this type. It has instead a positive cosmological constant, described by de-Sitter space. The use of the holographic principle in our universe is hence not strongly supported by string theory, at least not presently.

The model for holographic inflation can therefore best be understood as one that is motivated by, but not derived from, string theory. It is a phenomenological model, developed to quantify predictions and test them against data.

While the differences between the concordance model and holographic inflation which this study finds are insignificant, it is interesting that a prediction based on such an entirely different framework is able to fit the data at all. I should also add that there is a long-standing debate in the community as to whether the low multipole moments are well-described by the concordance model, or whether any of the large-scale anomalies are to be taken seriously.

In summary, I find this an interesting result because it’s an entirely different way to think of the early universe, and yet it describes the data. For the same reason, however, it’s also somewhat depressing. Clearly, we don’t presently have a good way to test all the many ideas that theorists have come up with.