## Thursday, April 04, 2019

### Is the universe a hologram? Gravitational wave interferometers may tell us.
[Image caption: Shadow on the wall.]
Left-right. Forward-backward. Up-down. We are used to living in three dimensions of space. But according to some physicists, that’s an illusion. Instead, they say, the universe is a hologram and every move we make is really inscribed on the surface of the space around us. So far, this idea has been divorced from experimental test. But in a recent arXiv paper, physicists have now proposed a way to test it.

What does it mean to live in a hologram? Take a Rubik’s cube. No, don’t leave! I mean, take it, metaphorically speaking. The cube is made of 3 x 3 x 3 = 27 smaller cubes. It has 6 x 3 x 3 = 54 surface elements. If you use each surface pixel and each volume pixel to encode one bit of information, then you can use the surface elements to represent everything that happens in the volume.

A volume pixel, by the way, is called a voxel. If you learn nothing else from me today, take that.

So the Rubik’s cube has more pixels than voxels. But if you divide it up into smaller and smaller pieces, the ratio soon changes in favor of voxels. For N intervals on each edge, you have N³ voxels compared to 6 x N² pixels. The amount of information that you can encode in the volume, therefore, increases much faster than the amount you can encode on the surface.
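If you want to verify the counting, here is a quick sketch (plain Python; the function names are my own):

```python
# Surface "pixels" vs volume "voxels" for a cube with n intervals per edge.
# The volume count overtakes the surface count as soon as n > 6.
def voxels(n: int) -> int:
    return n ** 3          # n^3 volume elements

def pixels(n: int) -> int:
    return 6 * n ** 2      # 6 faces with n^2 elements each

for n in (3, 6, 7, 100):
    print(n, voxels(n), pixels(n))
# n=3: 27 voxels vs 54 pixels; the counts cross at n=6 (216 each)
```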

Or so you would think. Because the holographic principle forbids this from happening. This idea, which originates in the physics of black holes, requires that everything which can happen inside a volume of space is inscribed on the surface of that volume. This means that if you looked very closely at what particles do, you would find they cannot move entirely independently. Their motions must have subtle correlations in order to allow a holographic description.

We do not normally notice these holographic correlations, because ordinary stuff needs few of the theoretically available bits of information anyway. Atoms in a bag of milk, for example, are huge compared to the size of the voxels, which is given by the Planck length. The Planck length is about 25 orders of magnitude smaller than the diameter of an atom. And these 25 orders of magnitude, you may think, crush all hopes of ever seeing the constraints the holographic principle imposes on us.
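The 25 orders of magnitude are easy to check with rough round numbers (both inputs below are order-of-magnitude values, not precise constants):

```python
import math

atom_diameter = 1.0e-10  # meters, typical atomic diameter (rough)
planck_length = 1.6e-35  # meters (rough)

orders = math.log10(atom_diameter / planck_length)
print(round(orders))  # about 25
```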

But theoretical physicists don’t give up hope that easily. The massive particles we know are too large to be affected by holographic limitations. But physicists expect that space and time have quantum fluctuations. These fluctuations are normally too small to observe. But if the holographic principle is correct, then we might be able to find correlations in these fluctuations.

To see these holographic correlations, we would have to closely monitor the extension of a large volume of space. Which is, in a nutshell, what a gravitational wave interferometer does.

The idea that you can look for holographic space-time fluctuations with gravitational wave interferometers is not new. It was already proposed a decade ago by Craig Hogan, at a time when the GEO600 interferometer reported unexplained noise.

Hogan argued that this noise was a signal of holography. Unfortunately, the noise vanished with a new readout method. Hogan, undeterred, found a factor in his calculation and went on to build his own experiment, the “Holometer” to search for evidence that we live in a hologram.

The major problem with Hogan’s idea, as I explained back then, was that his calculation did not respect the most important symmetry of Special Relativity, the so-called Lorentz-symmetry. For this reason the prediction made with Hogan’s approach had consequences we would have seen long ago with other experiments. His idea, therefore, was ruled out before the experiment even started.

Hogan, since he could not resolve the theoretical problems, later said he did not have a theory and one should just look. His experiment didn’t find anything. Or, well, it delivered null results. Which are also results, of course.

In any case, two weeks ago Erik Verlinde and Kathryn Zurek had another go at holographic noise:
Observational Signatures of Quantum Gravity in Interferometers
Erik P. Verlinde, Kathryn M. Zurek
arXiv:1902.08207 [gr-qc]
The theoretical basis of their approach is considerably better than Hogan’s.

The major problem with using holographic arguments for interferometers is that you need to specify what surface you are talking about on which the information is supposedly encoded. But the moment you define the surface you are in conflict with the observer-independence of Special Relativity. That’s the issue Hogan ran into, and ultimately was not able to resolve.

Verlinde and Zurek circumvent this problem by speaking not about a volume of space and its surface, but about a volume of space-time (a “causal diamond”) and its surface. Then they calculate the amount of fluctuations that a light-ray accumulates when it travels back and forth between the two arms of the interferometer.

The total deviation they calculate scales with the geometric mean of the length of the interferometer arm (about a kilometer) and the Planck length (about 10⁻³⁵ meters). If you put in the numbers, that comes out to be about 10⁻¹⁶ meters, which is not far off the sensitivity of the LIGO interferometer, currently about 10⁻¹⁵ meters.
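The estimate is a one-liner; a sketch with round input values (the rough estimate carries no exact prefactors):

```python
import math

arm_length = 1.0e3       # meters, "about a kilometer"
planck_length = 1.6e-35  # meters, rough value

# geometric mean of arm length and Planck length
delta_L = math.sqrt(arm_length * planck_length)
print(f"{delta_L:.1e} m")  # about 1.3e-16 m, i.e. roughly 10^-16 m
```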

Please do not take these numbers too seriously, because they do not account for uncertainties. But this rough estimate explains why the idea put forward by Verlinde and Zurek is not crazy talk. We might indeed be able to reach the required sensitivity in the near future.

Let me be honest, though. The new approach by Verlinde and Zurek has not eliminated my worries about Lorentz-invariance. The particular observable they calculate is determined by the rest-frame of the interferometer. This is fine. My worry is not with their interferometer calculation, but with the starting assumption they make about the fluctuations. These are by assumption uncorrelated in the radial direction. But that radial direction could be any direction. And then I am not sure how this is compatible with their other assumption, that is holography.

The authors of the paper have been very patient in explaining their idea to me, but at least so far I have not been able to sort out my confusion about this. I hope that one of their future publications will lay this out in more detail. The present paper also does not contain quantitative predictions. This too, I assume, will follow in a future publication.

If they can demonstrate that their theory is compatible with Special Relativity, and therefore not already in conflict with other measurements, this would be a well-motivated prediction for quantum gravitational effects. Indeed, I would consider it one of the currently most promising proposals.

But if Hogan’s null result demonstrates anything, it is that we need solid theoretical predictions to know where to search for evidence of new phenomena. In the foundations of physics, the days when “just look” was a promising strategy are over.

1. "So the Rubic’s cube has more voxels than pixels."

Just minor corrections: it is Rubik's cube (with k) and it has more pixels than voxels.

1. Ah, crap. Thanks for pointing out, I fixed that.

2. Any relation to Wojciech Zurek?

3. Isn't this actually an old idea? Basically, if you assume that each Planck voxel introduces a Planck length's uncertainty to any time or distance measurement, you get a random walk that scales as l_p sqrt(N) where N = L/l_p, and thus

sigma_L = sqrt(L l_p),

where sigma is the uncertainty, L the length of your interferometer and l_p the Planck length. Look, for example, at equation 44 in
https://arxiv.org/abs/0806.0339

Also note Equation 45 in that document, which is based on "[S]ome arguments inspired by the “holography paradigm” for quantum gravity" and has a weaker holographic bound of

sigma_L = L^(1/3) l_p^(2/3).

This would be independent for each traverse of an interferometer arm, and so for LIGO (which has many traverses) there would be an additional factor of sqrt(N_traverse).

Verlinde and Zurek say a lot, but in the end they also "have derived length fluctuations of size δL^2 ∼ lp L."

If I were to go out and measure this, and came up with sigma ~ sqrt(l_p L), I, at least, would conclude that I had found the random walk limit of quantum gravity and that no holographic principles need be invoked to do so.
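To compare the two scalings mentioned above numerically, here is a small sketch (round input values, variable names my own):

```python
import math

l_p = 1.6e-35  # Planck length in meters (rough)
L = 1.0e3      # interferometer arm length in meters

random_walk = math.sqrt(l_p * L)      # sigma_L ~ sqrt(L l_p)
holographic = L**(1/3) * l_p**(2/3)   # sigma_L ~ L^(1/3) l_p^(2/3)
print(f"{random_walk:.1e} m vs {holographic:.1e} m")
# the holographic bound comes out many orders of magnitude smaller
```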

1. TME,

What is "actually an old idea"? Space-time fluctuations? Holography? Testing holography with gravitational wave interferometers? It would help if you could be somewhat more precise.

In case you are talking about random walks due to space-time fluctuations, yes, that idea is decades old. Almost all of these approaches violate Lorentz-invariance, however, and most of them have been ruled out. I wrote about this eg here.

4. Loops and tensors and holograms? (Oh my.)

"The relation between loop quantum gravity (LQG) and tensor networks is explored from the perspectives of the bulk-boundary duality and holographic entanglement entropy."
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.95.024011

5. LIGO uses many photons to produce a signal. Each photon would have an uncorrelated delay due to this effect. Averaging over the photons would therefore reduce any detectable signal. Has this been accounted for?

1. Hi Steve,

For all I can tell, no. In the present paper, they really do not calculate an actual measurement result. Instead, they calculate what happens with the path of a photon.

6. I have not read this paper yet. I have only downloaded it. I will try to comment more carefully in the future after reading it. That may happen after this blog thread has gotten cold.

I tend to think the entanglement entropy force is for a stretched horizon. I do not think there is holographic content with just any surface. The merger of two black holes defines a zero entropy change condition for the final state black hole where the horizon area of that black hole is the sum of the original two. Of course this is the thermodynamic limit and black holes usually have large horizon areas. The maximal entropy change is given by the sum of the mass of the black holes. The real physics is in between and mergers usually convert about 5% of the mass of the two black holes into gravitational radiation. So what happens is the stretched horizon is transformed so that it has greater net area, this is a motion of the horizon and has an entropy force.

The causal diamond for LIGO is not an event horizon with a stretched horizon carrying fields. Just because it is a null surface may not qualify it as a holographic screen. This gets into the question Geroch raised about just what an event horizon is, and light cones do not qualify. Now to give this idea some possible credence, what may work is a two-point function between the stretched horizon(s) of the final and initial black holes and this causal wedge. It would seem to me that this would be a more complete way to address holography with LIGO.

7. One way to look at quantum entanglement is to reject the assumption of counterfactual definiteness which is made when deriving Bell's inequality. We can then explain experimental results by introducing hidden variables. The experimenter's choice of measurement, with the experimenter being in the same universe as the entangled particle being measured, is not independent of the particle being measured. There is a correlation between them. It doesn't make sense to ask what would have happened had the measurement been done a different way with everything else in the universe remaining the same.

Could quantum entanglement be evidence for the holographic principle that we haven't been interpreting in the right way?

1. They are related. An elementary bipartite entanglement is an instance of ℂ^2/SL(2, ℂ) = A that is an abelian group such as U(1) or SO(2) for the entanglement phase. Similarly we can write SL(2, ℂ) = ℂ^2/U(1). For ℂ^2 a complex realization of spacetime holography is a reduction on this by a parameter, say ℝ with some regular sequence given by the U(1), which is the radial distance to a black hole. So the hologram is the spacetime "modulo" the radius to the screen.

8. Whatever other merit there may be to this line of investigation, it has this in its favor: On the surface at least (pun unintended), it's testable.

9. Hi Sabine,. (and friends)
interesting conversation.
But what if?
( to take a question from one of my favorite teachers) Reality.
indeed, (regardless,for a moment,- technology, mathematics, et.al.) our perception based on our perspective, based on our point of view. - may be no more than (ultimately)..
on the wall of a cave.
what if,. indeed.
All Love,

10. Hi Sabine,
what if
- to take a question from one of my favorite teachers.-
all . our perception
(ultimately)
was no more than

- on the wall of a cave.

All Love,

11. A.C., I think that’s originally from Plato ...

1. Hi Jean,
- sorry for the delay
Your thinking is correct. I apologise for the misunderstanding. hi Sabine
- some of my teachers
are not of this era.
- Some of my teachers
of this era are no longer alive. ...and yet,.
Some of the lessons they taught (or I thought I learned) are given new life (or 'understanding') under a different light or from a different perspective.
-sometimes on a daily basis. lol
Once again,
All apologies.

* as a side note,
Plato is mostly remembered as a philosopher. However, he apparently had an affinity
for 'numbers'.
(He taught,or tried to teach, Aristotle)
- you should check out
Plato's Sequence of Dimensions adapted to
' The Unit Circle in the Complex Plane'
- really cool.

All the best ,

12. Does quantum holography require the world to have a conformal symmetry?

1. Conformal symmetry is the rescaling of a metric with g_{ab} → Ω²g_{ab}. The Schwarzschild metric term g_{tt} = 1 - 2GM/rc² would presumably transform similarly. However, with the O(1) term rescaling as Ω² and the radius, of O(ℓ), rescaling as Ω, the mass part of this rescales as Ω. This would at first glance appear to tell us that black holes are not conformal.

It is the mass that does this as anything with mass has a Compton wavelength λ = ħ/mc that demarks the boundary between appearing as a spherical particle and at higher energy a blur of virtual particles that can send some of that energy out as new particles. Mass fixes a length scale and in the case of a black hole that is the Schwarzschild radius.

However, a scalar field transforms as φ → Ω^{-1}φ, and the gravitational constant in a scalar-tensor theory will transform with G' = φ^{-1}G, so we now have a conformal theory of gravitation for black holes! There is still r = 0, where conformal symmetry breaks down, along with everything else. Another way to see this as justified is that r_s = 2GM/c² is the Schwarzschild radius, and if there is a conformal transformation of the spacetime with r → Ωr, then for this to be complete so too should r_s transform. So the Newton gravitational constant must then transform.

What is non-conformal can be broomed into the r = 0 singularity, and then we have physics that is conformal almost everywhere. The occurrence of lots of matter in space does result in shearing that breaks the Huygens principle of conserving phase space volume. Holography has an optical meaning in that there is contraction along the radial direction, and the optical appearance of an object near the horizon is spread out over the horizon. If there is lots of matter present, this will introduce shearing of the null rays from which a distant observer can witness this object fall. This means the u, v coordinates for fields absorbed and emitted by the black hole are perturbed. We can then posit that holography is conformal.

13. @A.C.

In that case, the holographic principle probably corresponds with the interior surface of your eyeballs. The relationship with concept formation will be hexagonal grid cells (which have recently been represented in artificial intelligence to simulate navigational capabilities). And the basic mathematical elements relating to hexagons and tetrahedra will be found in Volume II of "Fundamentals of Mathematics" edited by Bachmann. Along similar lines, you will get a geometric relation to the group of Pauli matrices through the (hexagonal) complete graph over 6 vertices.

But then one has a chicken-and-egg problem: you need the cool physics to build the instruments and interpret the data by which we identify hexagonal grid cells.

14. "bag of milk"?

15. The philosopher in me wants to remind you of the individuals operating the hologram... They may change the answer before the problem is written down

16. I read the paper by Verlinde and Zurek and find no terrible fault with it, other than a big question. The central assumption of this paper is with equation 17, as I see it. There they define the entropy of the causal wedge, or two light cones merged at their bases. This is treated as a holographic screen. The authors work out the Green propagators for the system; the Green's function for the two coordinate shifts δu(r_1) and δv(r_2) is computed in equation 23.

The coordinate transformation to the metric in equation 13 leads to what is called the "event horizon." The coordinate transformation is

(u - L)(v - L) = 4(L^2 - RL + 2Φ).

This then defines the metric term g_{tt} = 1 - L/R + 2Φ. The horizon occurs as the condition R = L, which is equivalent to u = L or v = L. Ignoring the Newtonian potential term, this is really not a condition for an event horizon, but for being on the light cones in the causal wedge. I think this gets to the heart of the question: is this really an event horizon? I would say no. Can we somehow treat this as an event horizon? That might be possible.

We might think of this as a picture of a hologram that is itself a hologram. The authors do not seem to say it, but the tiny Newtonian potential appears to be what defines the holographic screen. We might then think of the holographic screens of the black hole horizons, merging and changing via the entropy force of gravity, as sending information to a system with a causal wedge that encodes this. In effect this might be seen as a form of impedance matching with antennas and transmission lines. The maximum power transfer occurs with Z_{load} = Z_{source}, which happens when a receiving antenna is tuned to a transmitting antenna. A signal generator sends a signal out, and this signal runs through a load impedance network to match the impedance of the antenna or transmission line. The entropy force of gravity, in changing the configuration of black hole horizons, transmits gravitational waves, and modeling LIGO, or any interferometer, as a system with a null surface adjusted by this Newtonian potential is a way of adjusting the impedances. The holographic model of the interferometer is then a bit of a toy, I think.

The stretched horizon of a black hole sits a Planck or string length above the mathematically defined event horizon. It is then not truly a null surface, and it has three dimensions with (t, θ, φ) coordinates. The causal wedge has much the same coordinates with f(R) = 0. The modes on the black hole stretched horizon are either transverse, along the (θ, φ) directions, or longitudinal, along the t direction of the horizon, or along the cone in the case of the interferometer. These are then, by this form of impedance matching, transferred from the black hole horizon to LIGO.

I think then there is a big gap here that needs to be worked through. I think from what I have read that this is a fairly reasonable paper. The authors simply make some assumptions in their three postulates on page 2 that need further examination.

17. It would appear that Verlinde et al. are treating spacetime like a fluctuating turbid medium (with some specific correlations) and trying to figure out what can be measured in this case. Which is a refreshing turn away from bigger colliders.

1. Why don't they use the well-known theoretical framework for beam propagation in turbid media?

2. If spacetime really acts like a turbid medium, then it sets an upper limit on the spatial coherence of light at long propagation distances. And everybody knows that starlight is (almost) completely spatially coherent. So instead of using gravitational wave observatories, we could just as well improve the resolution of stellar spatial-coherence measurements to find the same signatures.

Maybe I should write something down to see whether that is the case.

1. What you say was predicted by the loop quantum gravity people. Spacetime as a quantum foam would force short-wavelength photons on a more tortuous path. This would result in a sort of dispersion. The simultaneous arrival of different wavelengths of photons from very distant bursters means this is wrong.

18. Hi dr Bee,

your attention is on the real issue: how could there be anisotropy in a 3d space volume?

I think the correct holographic surface is the asymptotic surfaces of elementary particles. The space volume could be an emergent isotropy of interactions of elementary holograms.

You must search for the fluctuations in particles. Alas, we know the uncertainty principle. :)

19. Fun post thanks. Will think about this. When presented this way, it's actually interesting. One of the big problems with GR as a field theory is that a lot of stuff goes on even classically inside a volume element that cannot be reduced to surface terms. In the context of GR the upshot is no local conservation of energy (see Dirac's little book). I had not seen holography presented this way.

-drl

20. Holograms are amazing. I find it surprising that nature herself hasn’t found a way of using them. I could be wrong, but there don’t appear to be any naturally occurring holograms in the natural world. But if the holographic principle is true, then I guess she has, even if it’s not how we usually see them.

Susskind’s book on black holes and his long-running argument with Stephen Hawking has a nice, simple exposition of the holographic principle. It’s so simple that even I could follow it. He claims to be baffled that entropy is measured in units of area rather than volume.

I don’t know how useful the following is. Probably not very much, and probably many holes could be punched in it. But it occurred to me that matter falling into a black hole would form a shell (for an external observer), and so we might expect entropy to be measured in units of area.

1. The simplest “derivation” that Bekenstein-Hawking entropy is an area law is the following (Susskind used it in some lecture):
Throw 1 bit of information into a black hole (BH) with radius R and mass M. 1 bit could be represented by a photon with wavelength λ∼R, so there is no additional location information. We will also ignore polarization and further factors like π.
The energy/mass of the BH changes by ∆E=ℏω=ℏc/λ∼ℏc/R; thus ∆M=∆E/c²∼ℏ/Rc. With the Schwarzschild radius R=2MG/c² it follows ∆R=2∆MG/c²∼2ℏG/Rc³.
Area is A∼R² and thus ∆A∼2R∆R∼4ℏG/c³∼4(l_P)², i.e. the area grows like ∆A∼4(l_P)² per bit. Thus BH entropy S∼A/4(l_P)² is proportional to the area A measured in units of Planck area (l_P)²= ℏG/c³.

Remark:
- here entropy is not based on a number of microstates, but simply the information that is not accessible any more from the outside of the boundary.
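The arithmetic in this back-of-envelope derivation can be checked numerically; a sketch with SI constants (the radius R below is arbitrary, since the result should not depend on it):

```python
hbar = 1.054571817e-34  # J s
G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m / s

planck_area = hbar * G / c**3  # (l_P)^2

R = 1.0e4                 # some Schwarzschild radius in meters (arbitrary choice)
dM = hbar / (R * c)       # mass gained by swallowing one photon with wavelength ~ R
dR = 2 * dM * G / c**2    # change in Schwarzschild radius
dA = 2 * R * dR           # change in horizon area, A ~ R^2

print(dA / planck_area)   # ~4 (up to float rounding), independent of R
```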