Thursday, February 02, 2012

No evidence for spacetime foam

It goes under the name spacetime foam or fuzz, or sometimes graininess or, more neutrally, fluctuations: the general expectation that, due to quantum gravitational effects, spacetime on very short distance scales is not nice and smooth but, well, fuzzy or grainy or foamy.

If that seems somewhat fuzzy to you, it's because it is. Absent an experimentally verified and generally accepted theory of quantum gravity, nobody really knows what exactly spacetime foam looks like. A case for the phenomenologists then! And, indeed, over the decades several models have been suggested to describe spacetime's foamy, fuzzy grains, based on a few simple assumptions. The idea is not that these models are fundamentally the correct description of spacetime but that they can, ideally, be tested against data, so that we can learn something about the general properties that we should expect the fundamental theory to deliver.

One example of such a model, going back to Amelino-Camelia in 1999, is that spacetime foam causes a particle to make a random walk in the direction of propagation. For each step of one Planck length, lP, the particle randomly gains or loses another step. This is a useful model because random walks in one dimension are very well understood. Over a total distance L that consists of N = L/lP steps, the average deviation from the mean is the length of one step times the square root of the number of steps. Thus, over a distance L, a particle deviates a distance

ΔL = α lP √N = α √(lP L),

where I have put in a dimensionless constant α - for a quantum gravitational effect, we would expect it to be of order one. See also the figure below for illustration, and keep in mind that c=1 on this blog.

While simple, this model is not, and probably was never meant to be, particularly compelling. Leaving aside that it's not Lorentz-invariant, there is no good reason why the steps should be discrete or aligned in one direction. One might have hoped that this general idea would be worked out to become somewhat more plausible. Yet that never happened, because this model was ruled out pretty much as soon as it was proposed. The reason is that if you consider the detection of light rays from some source, the deviation from propagation exactly on the light cone has the effect that different phases of the emitted light arrive at the same time. The average phase blur is

Δφ = 2π ΔL/λ = 2π α √(lP L)/λ.

If the phase blur is comparable to 2π, interference patterns would be washed out. See the figure below for illustration.

As it happens, however, interference patterns, known as Airy rings, can be observed from objects as far as some Gpc away from Earth. If you put in the numbers for such a large distance and α ≈ 1, the phase should be entirely smeared out. To the right you see an actual image of such a distant quasar from the Hubble Space Telescope (Figure 3 from astro-ph/0610422). You can clearly see the interference rings. And there goes the random walk model.
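The back-of-the-envelope numbers above can be checked in a few lines. Here is a minimal sketch in Python, assuming rough values of lP ≈ 1.6×10^-35 m for the Planck length, a source distance of 1 Gpc ≈ 3.1×10^25 m, optical light of λ ≈ 500 nm, and α = 1 (all of these are my illustrative inputs, not numbers from any particular paper):

```python
import math

# Rough input values (assumptions for illustration):
l_P = 1.6e-35    # Planck length in meters
L = 3.1e25       # source distance, ~1 Gpc in meters
lam = 5.0e-7     # optical wavelength, ~500 nm
alpha = 1.0      # dimensionless constant, expected to be of order one

# Random walk model: over N = L/l_P steps the deviation from the mean
# grows with the square root of the number of steps.
N = L / l_P
delta_L = alpha * l_P * math.sqrt(N)    # equals alpha * sqrt(l_P * L)

# Accumulated phase blur over the distance L
delta_phi = 2 * math.pi * delta_L / lam

print(f"deviation: {delta_L:.1e} m")
print(f"phase blur: {delta_phi / (2 * math.pi):.1f} cycles")
```

With these inputs the phase blur comes out to several tens of full cycles, much more than 2π, so the Airy rings of a Gpc-distant quasar should indeed be entirely washed out if the random walk model were right.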

There is a similar model, going back to Wheeler and later Ng and Van Dam, that the authors have called the "holographic foam" model. (Probably because everything holographic was chic in the late 1990s. Except for the general scaling, it has little to do with holography.) In any case, the main feature of this model is that the deviation from the mean goes with the cube root, rather than the square root, of N. Thus, the effects are smaller.
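The difference between the two scalings is dramatic. A minimal sketch comparing them, with the same assumed numbers as before (lP ≈ 1.6×10^-35 m, L ≈ 1 Gpc, λ ≈ 500 nm, α = 1):

```python
import math

# Same illustrative inputs as before (assumptions, not from any paper):
l_P = 1.6e-35    # Planck length in meters
L = 3.1e25       # ~1 Gpc in meters
lam = 5.0e-7     # ~500 nm
alpha = 1.0

N = L / l_P

# Random walk model: deviation ~ sqrt(N); holographic model: ~ N^(1/3)
delta_L_rw = alpha * l_P * math.sqrt(N)
delta_L_holo = alpha * l_P * N ** (1 / 3)

phi_rw = 2 * math.pi * delta_L_rw / lam
phi_holo = 2 * math.pi * delta_L_holo / lam

print(f"random walk phase blur:  {phi_rw:.1e} rad")
print(f"holographic phase blur: {phi_holo:.1e} rad")
```

The holographic phase blur comes out many orders of magnitude below 2π, which is why simply checking for the presence of Airy rings cannot rule this model out.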

It is amazing though how quickly smart people can find ways to punch holes in your models. Already in 2003 it was pointed out that, with some classical optics formulas from the late 19th century, modern telescopes allow one to set much tighter bounds. Roughly speaking, the reason is that a telescope with diameter D focuses a much larger part of the light's wavefront than just one wavelength λ. The telescope is very sensitive to phase-smearing across its entire aperture. Telescopes are, for example, sensitive to air turbulence, a problem that the Hubble Space Telescope does not have.

The sensitivity of a telescope to such phase distortions can be quantified by a pure number known as the "Strehl ratio." The closer the Strehl ratio is to 1, the closer the telescope's images are to those of an ideal telescope, which shows a point-like source as a perfect Airy pattern. A non-ideal telescope causes image degradation, most importantly a smearing of the intensity. The same effect would be caused by the holographic spacetime fuzz. Thus, up to the telescope's limit on image quality, the additional phase distortion would be observable: it lowers the Strehl ratio of images of very far-away objects such as quasars. (Though, if it were observed, one wouldn't know exactly what its origin was.)
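The commonly quoted form of this relation is the (extended) Maréchal approximation, S ≈ exp(−σ²), with σ the RMS wavefront phase error in radians. Whether this is exactly the expression used in the paper I don't know, so treat the formula as an assumption; a minimal sketch:

```python
import math

def strehl_marechal(sigma_phi: float) -> float:
    """Strehl ratio from the RMS wavefront phase error (in radians),
    using the extended Marechal approximation S = exp(-sigma^2)."""
    return math.exp(-sigma_phi ** 2)

# A perfect wavefront gives S = 1; the ratio drops as the phase error grows.
print(strehl_marechal(0.0))

# If a telescope can distinguish S = 0.9 from S = 1, it is sensitive to an
# RMS phase error of about sqrt(-ln 0.9), a small fraction of a wavelength.
sigma_detectable = math.sqrt(-math.log(0.9))
print(f"detectable RMS phase error: {sigma_detectable:.2f} rad")
```

This is what buys the large gain in sensitivity: instead of needing a phase blur comparable to 2π, a blur of a few tenths of a radian already degrades the image measurably.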

The relevant point is that, using the telescope's sensitivity to image degradation, one gains an additional factor of D/λ ≈ 10^8. In their paper:

the authors have presented an analysis of the images of 157 high-redshift (z > 4) quasi-stellar objects. They found no blurring. With that, the holographic foam model is also ruled out. Or, to be precise, the parameter α is constrained to a range that is implausible for quantum gravitational effects.

As is often the case in the phenomenology of quantum gravity, the plausible models are difficult, if not impossible, to constrain by data. And the implausible ones nobody misses when they are ruled out. This is a case of the latter.

Thanks to Neil for reminding me of that paper.


PS: We were not able to find a derivation for the exact expression for the phase blurring as a function of the Strehl ratio, Eq. (5), that is used in the paper. We got so far that it's called the Marechal approximation. If you know of a useful reference, we'd be interested!

36 comments:

mgf said...

Surely, "c=0" should be replaced with "c=1" above?

Bee said...

Thanks, I've fixed that. If c was =0 on this blog, it would be quite dark here I guess ;o) Best,

B.

SRT said...

The original Marechal approximation is derived here:

Maréchal A. 1947. Étude des effets combinés de la diffraction et des aberrations géométriques sur l'image d'un point lumineux. Rev. Opt. Theor. Instrum. 26:257–77

joel rice said...

It sounds like one could safely replace Feynman's possible paths of photons with possible geodesic paths of photons, and pretty much ignore gravitons ?

Uncle Al said...

Interstellar vacuum is ~10 hydrogen atoms/m^3. Photon scattering, refraction, and dispersion are cumulatively insufficient to degrade image correlations. Betelgeuse diameter measurement with VLTI/AMBER (spatial interferometer; Physics Today June 2009) demonstrates correlation persistence through 640 lightyears pathlength. Spacetime foam effects, given their density, must be scaled accordingly - none.

21st century physical theory is Zen: falsification stymied by complexity's empirical irrelevance. Audio record one hand clapping, then play the sound file. Observe proton decay in IceCube: 20 decays/day given 6.2x10^37 hydrogen atoms and Super-Kamiokande's current proton mean lifetime (exceeding) 8.2x10^33 years. (pookie pookie SUSY)

Arun said...

Dear Bee,
How much phase smearing does light show going through optical glass? Is it enough to detect the atomic nature of glass?
Best,
-Arun

Arun said...

I don't know if these are useful

1. http://www.telescope-optics.net/aberrations.htm#The_extent and http://www.telescope-optics.net/Strehl.htm (see figure 60 in the second one).

2. Section 4.3.3 (pages 114-115) of the book available on books.google.com
Adaptive optics for astronomical telescopes by John W. Hardy

Arun said...

Can't be sure, but this seems to be a key paper for the extended Marechal approximation:

Strehl ratio for primary aberrations: some analytical results for circular and annular pupils

Virendra N. Mahajan

JOSA, Vol. 72, Issue 9, pp. 1258-1266 (1982)
http://dx.doi.org/10.1364/JOSA.72.001258

DocG said...

"Spacetime foam" plays roughly the same role as the aether did for pre-relativity physics. And it's wrong for the very same reason: it's a fudge, an attempt to paper over a deep abyss at the heart of science.

I've said it before and I'll say it again: Bohr's radical vision, encapsulated in the so-called "Copenhagen interpretation," is the ONLY one that has held up over the years, and also the only one that actually makes sense. Efforts to somehow get past it by positing "spacetime foam" or tiny particles made up of vibrating strings are wishful thinking.

What physicists seem unwilling to accept, yet what Bohr recognized very clearly, is that science is NOT a window onto the "real world," whatever that might mean, but a means of representing the world.

This becomes clear when one contemplates the history of representation in the arts, beginning with Cezanne, continuing with Picasso and Braque, and culminating in the work of Mondrian. Believe it or not, all four of these gentlemen were realists at heart, actually radical realists, yet all were ultimately forced to deal, every bit as much as scientists have been, with the fundamental problem of subject vs. object, which can be fudged only up to a point and no further.

Once the subject-object dichotomy breaks down, a more fundamental dichotomy emerges, a chasm that can never be broached. Bohr called it "complementarity," but that term is far too mild for what it represents.

Bee said...

Hi Arun,

I don't know but I guess the answer to your question depends on the wavelength of the light and the homogeneity of the medium. Most of the blurring would come, I think, not from the atomic nature of the glass but from density fluctuations. Best,

B.

Bee said...

SRT, Arun,

Thanks for the references! Best,

B.

Zephir said...

Space-time foam is easily visible at the results of Sloan survey http://discovermagazine.com/2009/mar/06-world.s-hardest-working-telescope/sloanmap.jpg The quantization of red-shift is visible there too.

Phil Warnell said...

Hi Bee,

A most interesting piece which could leave those such as Roger Penrose certainly discouraged. However I would point out that businesses such as Star Bucks not only depend on the concept of foam as being a good one, yet more so as actually in being able to create them benefitting by the many who enjoy them. So although reality might not present to be a Latte, there are many who find being given less is something to which they can assign being of greater value in having less being taken as more:-)

”I am forever walking upon these shores,
Betwixt the sand and the foam,
The high tide will erase my foot-prints,
And the wind will blow away the foam.
But the sea and the shore will remain
Forever.”



-Kahlil Gibran, “Sand and Foam”

Best,

Phil

Arun said...

Hi Bee,

If optical wavelengths can't be used to detect the atomicity of matter, then I'm dubious that they can be used to detect structure at the Planck scale.

Of course, the phenomenological models of quantum gravity are designed with the hope that some effect will show up. It is hardly surprising that they get shot down so readily.

-Arun

Bee said...

Psychologists call it "motivated cognition" ;o)

joel rice said...

DocG: just looking at the Feynman lectures on Gravitation and see on page xvi the remark "He also stresses, in sections 1.4 and 2.1, the inappropriateness of the Copenhagen interpretation of quantum mechanics in a cosmological context." Perhaps briefly - given Heisenberg uncertainty - what is delta x if the universe is expanding ? How would that affect H's explanation of the inverse square law ?

Plato said...

I just can't get away from how blurry things can look but have a suitable explanation.

It's as if one is raising the landscape issue again as to discern the viability of "hills and valleys" as a interpretation of the cosmic landscape?

While over-viewing the universe in terms of a Lagrangian, how is it possible not to see graviton concentrations as places that would affect your viewing more than others?

Best,

Plato said...

At the outset it must be re-emphasized that the test for spacetime foam effects is a null test. A theoretical model for spacetime foam is disproved if images of a distant point source do not exhibit the blurring predicted by theoretical spacetime foam models.
See: Limits on Spacetime Foam

Best,

Uncle Al said...

@Arun: Rayleigh scattering (atmospheric to undersea cable fiberoptic) validates atomicity,

http://en.wikipedia.org/wiki/Rayleigh_scattering
http://www.its.bldrdoc.gov/pub/ntia-rpt/81-59/81-59_ocr.pdf
2.3.3 Fiber Dispersion

Billion lightyear pathlengths, gamma to radio photons, empirically reduce spacetime foam to a snark hunt. Compactified dimensions are negatively impacted in kind. Physics is somewhere deeply but selectively wrong. The failure must reside where physics will not look. Innovation is knowledge discovered outside the field in which it is applied.

http://www.mazepath.com/uncleal/erotor1.jpg
The only place physics will not look. The worst it can do is succeed.

Eric Perlman said...

The Tamburini paper is fundamentally flawed, as we showed in a paper published two months later in A&A. See http://adsabs.harvard.edu/abs/2011A%26A...535L...9P.

The problems we point out are as follows:

(1) they do not calculate the length element properly,
and overestimate the expected effect of spacetime foam on existing observations (and hence overestimate how much they can constrain it)

(2) They do not document carefully how they select their comparisons or compute their PSF comparisons, leading to problems in reproducibility.

stefan said...

Hi Arun,

thanks for digging out the book by John Hardy - it seems to be quite helpful. One can have a look inside at amazon.com.

There is also a derivation of the formula in Born&Wolf (p. 522 of the 7th edition), though it is difficult to keep track of the factor D/λ.


Hi Uncle, Arun -

Rayleigh scattering at atoms or molecules is a bit tricky - you need inhomogeneities and fluctuations to avoid destructive interference ruining the scattering. That's the important spin-off of the work of Einstein and Smoluchowski on critical opalescence.

But I also think that a careful comparison with the propagation of light in thin media could be quite helpful for this kind of investigations. At least, I guess it could improve intuitive understanding of what is going on.


Hi Eric,

thanks for pointing out your paper on the topic! A print-out is already on the kitchen table, as we did come across it also via the ADS entry.


Cheers, Stefan

DocG said...

"He also stresses, in sections 1.4 and 2.1, the inappropriateness of the Copenhagen interpretation of quantum mechanics in a cosmological context."

Thanks, Joel, but that wasn't my point. I should have made myself more clear. The complementarity I was referring to is the complementarity between General Relativity and Quantum Mechanics, which strike me as fundamentally incompatible, by analogy with the incompatibility of the wave and particle views in quantum physics. Do you have any idea whether Feynman ever expressed himself in that regard?

DocG said...

Bee, since you characterize your work as "phenomenology" I would like to think you are open to a broader approach to scientific research than what might be called, if you will pardon the expression, "crude empiricism."

In this spirit I would urge you to go a bit further to consider certain issues in semiotics, insofar as semiotics can be considered a science, not simply of measurable phenomena, but the way we represent such phenomena. This is especially urgent because, as Bohr himself attested, science is fundamentally about how we represent reality to ourselves, and not reality "itself" (whatever that might mean).

To give you an idea of what I'm getting at, consider the history of representation in the visual arts, which begins as an attempt to produce an accurate "window on the world" -- and winds up as an absorption in the medium itself -- i.e. paint, color, texture, and, above all, the surface of the canvas.

I see a strong analogy with what has been happening in physics over the last 100 years, i.e., an increasing absorption in the medium by which the outside world is represented by physicists, i.e. mathematics.

Omni said...

It is claimed that:

Tamburini et al is wrong, see:
arXiv:1110.4986[astro-ph.CO], Astronomy & Astrophysics 535, L9 (2011)

Claim by Abdo et al (based on new Fermi Gamma-ray Space Telescope results) is wrong, see:
arXiv:0912.0535, Phys. Rev. D83, 084003 (2011)

Bee said...

Hi Eric, Omni,

I think "fundamentally flawed" is a quite strong expression to use here. We can agree that it's not a very good paper. I see that in the paper from Nov 2011 it is argued that Tamburini et al have used an inappropriate distance measure which means they have overestimated the effect. But we're talking here about an overestimation by an order of magnitude, not by a factor 10^8. I could not find a new constraint on a given alpha=2/3. The basic conclusion is still that the random walk model is dead, and the holographic model is disfavored and on the edge of dying. Best,

B.

Bee said...

I should better have written: constraint on the parameter "a" given \alpha =2/3.

Bee said...

For completeness, I found that there is a Nature news article by Amelino-Camelia on the Tamburini paper:

Shedding light on the fabric of space-time

It is actually very well written and readable (except for it not being open access that it).

Bee said...

"that is" not "that it"

need coffee. now.

Arun said...

Thanks, Uncle, Stefan! I had never put together the idea that the blue sky is a strong indication that matter is atomic.

It goes with why the night sky is dark as pointing to a fundamental truth about the world.

Uncle Al said...

Stefan, "...you need inhomogenities and fluctuations to avoid destructive interference ruining the scattering."

Bragg interference has similar footnotes. A perfect crystal lattice quenches its output. An overly good crystal is dipped in liquid nitrogen. The thermal shock adds a trace of disorde (mosaicity).

Perhaps things work not because they are exceptionally clean, but because they are just dirty enough. A little entropy is good for the universe. More than dirty enough would be observable.

joel rice said...

DocG: I don't know whether he did express thoughts on that - just that on the last page of QED: Strange Theory he was complaining about theories purporting to deal with gravity that postulate stuff nobody sees, and are non-renormalizable, and it is experimentally intractable anyway. It seems he was quite burned out on the whole subject. I think it's a shame because only a few years after doing Grav lectures, Penzias & Wilson found the CBR, which puts expansion on the table in a big way, rather than perihelion advance and all that. Now one really needs something like a metrical field, or whatever it is, just to say that 'space expanded', and forces are a minor detail.

Zephir said...

There are many evidences of "space-time" foam: the CMBR noise, the dispersion of light called the red shift, the remote galaxies are blurred, etc. It's simply random hyperbolic noise analogous to density fluctuations inside of gas.

Neil Bates said...

Bee, you're welcome and sorry I dawdled getting back to you (busy in various ways.) Here is yet another area of physics that remains controversial and unexplicated - we still just don't know the score.

Eric Perlman said...

Bee said:

> I think "fundamentally flawed" is a quite strong
> expression to use here. We can agree that it's not a
> very good paper. I see that in the paper from Nov
> 2011 it is argued that Tamburini et al have used an
> inappropriate distance measure which means they
> have overestimated the effect.

The use of the wrong distance measure causes them to overestimate the operative distance by an order of magnitude for a z=6 quasar (work it out for yourself). No, that's not 8 decades, but it is way off. But you're right, the excluded region is alpha<0.65, tantalizingly close to excluding the holographic model (alpha=2/3).

Eric Perlman said...

Zephir,

I'm very familiar with the Sloan survey and I am not aware of their making any claim about spacetime foam - can you please post the link to the paper rather than an article in Discovery (which link, btw, does not work)?

Also, the CMBR structure and redshift are completely unrelated to what we call spacetime foam. I'd like to know how you think they might be related with some reference, please.

Bee said...

Hi Eric,

My question was, for α=2/3, what is the constraint on the parameter "a"? I couldn't find that in your paper, and I'd be interested to know. It makes more sense to me to think of it this way, because there isn't really a continuum of models with different values of α. Best,

B.