- Is a tabletop search for Planck scale signals feasible?

Jacob D. Bekenstein

arXiv:1211.3816 [gr-qc]

The idea is roughly the following: Take a single photon and
spread its wavefunction by suitable lenses, then let it hit some macroscopic
solid block, for example a crystal. Focus the photon and detect it.

Since the crystal has a refractive index, the photon has to
transfer momentum to it. This momentum is evenly spread through the crystal,
distributed by phonons, and returned to the photon upon exit. Essentially,
the block reacts not as single atoms but as one piece (though it cannot do so instantly; distributing the momentum takes a finite amount of time).

If you give the crystal a momentum for the duration of the
photon's passage, it will move, but since it is macroscopically heavy, it
moves only a tiny distance. The shift of its center of mass scales with the
energy of the incoming photon divided by the mass of the block.
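One can put rough numbers to this scaling. The sketch below is a back-of-the-envelope estimate, not Bekenstein's calculation: it assumes the block picks up a momentum of order (n-1)ℏω/c for the transit time nL/c (the exact momentum of light in a dielectric is itself a debated question, the Abraham-Minkowski controversy), and the parameters n, L, M, and the wavelength are illustrative values, not those from the paper.

```python
import math

hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s
G = 6.67430e-11         # m^3 kg^-1 s^-2
l_planck = math.sqrt(hbar * G / c**3)  # Planck length, ~1.6e-35 m

# Illustrative parameters (my assumptions, not Bekenstein's exact numbers)
n = 1.5          # refractive index of the block
L = 1e-2         # block length, m
M = 1e-3         # block mass, kg
lam = 600e-9     # photon wavelength, m

omega = 2 * math.pi * c / lam         # photon angular frequency
p_block = (n - 1) * hbar * omega / c  # assumed momentum picked up by the block
t_transit = n * L / c                 # time the photon spends inside
dx = (p_block / M) * t_transit        # center-of-mass shift during transit

print(f"transit time ~ {t_transit:.2e} s")
print(f"c.o.m. shift ~ {dx:.2e} m  ({dx / l_planck:.2f} Planck lengths)")
```

With these (assumed) numbers the shift comes out in the ballpark of a Planck length, which illustrates why a heavy block traversed by a single low-energy photon pushes the displacement down to that scale.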

Bekenstein puts in the numbers and finds that, with presently
available technology, the energy of a single photon can be so tiny that the
distance the crystal moves would have to be smaller than the Planck length. This,
he argues, would "occasionally be at odds with the non-smooth texture of
spacetime on Planck scale." If that is so, the photon would not be able to
traverse the crystal, leading to an unexpected, and observable, decrease in
the transmission probability.

He also estimates sources of noise that could move the block
ever so slightly and affect the photon's transmission probability, thus
rendering the outcome inconclusive. Bekenstein argues that by cooling the block
to a few Kelvin, which is cold indeed but still feasible, the noise could be
kept under control. This might seem implausible at first sight, but note that thermal noise in the motion of the center of mass is not itself the problem, because the photon spends only a very short time inside the crystal. The relevant question is whether the center of mass moves during that short interval.

So far, so brilliant. The proposed experiment is an
excellent example of a model-independent test. It is so model-independent, in fact, that
I don't know which model could be constrained by it.

The usual expectation for Planck-scale fluctuations is that they lead to a position uncertainty that cannot become smaller than the
Planck length. This does not forbid you from moving an object by distances less than
the Planck length; it just tells you that the position of the crystal wasn't defined
to a precision better than the Planck length to begin with.

Now, if space-time were a discrete regular lattice with
Planck-length spacing, then you could not move the crystal, as a rigid block, by
anything shorter than the Planck length. Already if the lattice isn't regular,
this is no longer true. But even if the lattice were regular, the crystal would
have to be very rigid indeed, so as not to allow any relative shift among atoms
that could account for the motion of the center of mass. For example, if your
block contains about Avogadro's number of particles, 10^{23}, and you move one out of every 10^{15} of these atoms by a distance of 10^{-20} m (less than the size of a proton, and less than the LHC can probe), you move the center of mass by about a Planck length. Now I don't know much about crystals, but it seems quite implausible to me that the effective description of phonons on the lattice should be sensitive to such tiny shifts at all (even worse if the block is not a crystal but some amorphous solid).
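The arithmetic of that example is easy to check; the numbers below simply reproduce the estimate in the text, treating all atoms as equal in mass.

```python
# How far does the center of mass move if a tiny fraction of atoms shift?
N_total = 1e23             # atoms in the block (~ Avogadro's number)
N_moved = N_total / 1e15   # one out of every 10^15 atoms, i.e. 1e8 atoms
shift_per_atom = 1e-20     # m, well below the size of a proton
l_planck = 1.6e-35         # m

# For equal-mass atoms the c.o.m. shift is the average displacement
dx_com = N_moved * shift_per_atom / N_total
print(f"c.o.m. shift = {dx_com:.1e} m, about {dx_com / l_planck:.1f} Planck lengths")
```

So displacements far below anything resolvable in the phonon description already move the center of mass by the amount the experiment is supposed to be sensitive to.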
Besides this, I don’t understand how the “rejection” of the
photon should come about if one took the path integral of all possible
trajectories and scatterings in the crystal, none of which is sensitive to
Planck scale effects.

In summary: The proposed tabletop experiment tests a quantity,
the shift of the crystal's center of mass, that is of the order of a Planck length.
It is unclear, however, whether there is any plausible model for the phenomenology of
quantum gravity that would be constrained by this measurement. Is a tabletop search for Planck scale signals feasible? Maybe. Is it possible with the proposed experiment? Probably not. Does it have anything to do with Planck mass black holes? No.
## 44 comments:

FYI, whenever I visit your site it asks for a username and password for "bloggerblogwidgets.googlecode.com". Not sure what's going on, but it seems like something's busted with the site. :)

If there is no flaw in how the experiment was conceived, we could really know whether there is a Planck scale at all, which is great. We could even measure it.

If nothing is found, are all models with any kind of minimum scale ruled out?

The experiment sounds almost impossibly difficult, but if it can be done, then it will help distinguish the two competing expressions for the linear momentum of an electromagnetic wave inside a dielectric.

I see Nathan Reed reports the same odd message that I see. I am using Safari on a MacBook Pro and don't get this message from other blogs using eBlogger.

Bekenstein's proposal is provocative, but mediocre with materials. Cool the optical test mass to its quantum limit (e.g., LIGO mirrors as single quantum objects). Have a flawless single crystal with tight phonon coupling plus low scattering for everything. A hard material accepts optical dimensioning and polish. Cylinder not square! Both faces must be quarter-wave anti-reflection coated against reflection losses. Given vacuum, reflection fraction of incoming photons is

[(n - 1)/(n + 1)]^2 for the incident face

[(n - 1)^2]/(n^2 + 1) total for through transmission.

Put the single photon emitter at the focus of the incoming convex lens, eliminating the concave lens. Start with plano-convex lenses for the sharper focal spot when the convex side is toward a collimated beam source |) [] (| Aspheric lenses satisfy sagittal and tangential image shell requirements. Coat the lenses, too.

Consider a cylinder of single crystal type IIa diamond that is isotopically pure C-12. It will be transparent from near-IR 2500 nm to UV 300 nm. Young's modulus of 1220 GPa - stiff. d = 3.51 g/cm^3.

http://www.ioffe.ru/SVA/NSM/Semicond/Diamond/Figs/341.gif

n = 2.435 @ 400 nm, 29.7% reflection loss for transmission.
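The commenter's two loss figures can be reproduced; the sketch below uses the standard normal-incidence Fresnel reflectance for a single face and, for the whole slab, the incoherent sum over multiple internal reflections (my reading of the quoted formula, which gives total transmission 2n/(n^2+1)).

```python
def fresnel_reflectance(n):
    """Normal-incidence reflectance of one vacuum/dielectric face."""
    return ((n - 1) / (n + 1)) ** 2

def slab_loss(n):
    """Total reflection loss through an uncoated slab, summing incoherent
    multiple internal reflections: T_total = 2n/(n^2+1), loss = 1 - T_total."""
    return (n - 1) ** 2 / (n ** 2 + 1)

n_diamond = 2.435  # diamond at 400 nm
print(f"single face: {fresnel_reflectance(n_diamond):.1%}")  # ~17.5%
print(f"through slab: {slab_loss(n_diamond):.1%}")           # ~29.7%
```

The slab figure matches the quoted 29.7% loss for uncoated diamond, which is exactly why the comment insists on anti-reflection coating both faces.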

At first we should realize what the subject of quantum gravity (QG) really is. QG is trying to reconcile the predictions of general relativity (GR) and quantum mechanics (QM). So I don't quite understand why it bothers with the Planck scale, when the main scope of interest sits pretty well at the human observer scale. The physicists are seeking QG phenomena well outside the scope of interest of QG; the QG phenomena are all around us. The detection of the thermal motion of atoms by scattering of light belongs to it: what else do you expect to find with it, after all? Of course, such an experiment doesn't prove the inhomogeneous structure of space-time at the Planck scale, but this inhomogeneity manifests in the notoriously well-known CMBR noise, the Casimir force, or for example the motion of atoms even at absolute zero temperature (liquid helium never freezes at room pressure), so there is nothing left to prove. We already know that the vacuum is dynamic, inhomogeneous stuff.

/* Does it have anything to do with Planck mass black holes? No. */

If we consider the common atoms and molecules as such micro-black holes, then yes: such an experiment will then just detect the common absorption of a photon. Frankly, I dunno why these objects should be excluded from the QG framework.

The main source of confusion here is the poorly defined subject of quantum gravity as such. Which effects still count as quantum gravity effects, and which don't? If we consider that quantum mechanics and general relativity, and their combinations, drive all observable effects in our Universe, then all these effects are evidence for quantum gravity at the same moment: you, me and all objects around us.

Nathan:

I have nothing to do with the blogger software and can't do anything about this. You're like the 5th person telling me that, and for all I know it's a browser issue. Are you using Firefox? Use a different browser. (I'm using Chrome and can't reproduce the problem.) Best,

B.

Daniel:

Why don't you read what I wrote? I wrote this proposed experiment doesn't constrain any model that I know of. I don't know of any minimal length or space-time foam model that would prevent you from moving the center of mass of a solid block by less than a Planck length. Not every quantity that you can construct that is of the order Planck length or Planck mass is interesting. The basic reason the shift he talks about is so small is that the center-of-mass shift has to be divided by the number of particles in the block. Best,

B.

@Zephir: "We already know, that the vacuum is dynamic inhomogeneous stuff."

1) The vacuum is homogeneous, then
2) Noether's theorems, requiring
3) linear momentum is conserved.

Cite an example of non-conservation of linear momentum: CERN's LHC, Fermilab's Tevatron, Brookhaven's RHIC; KEK's Belle, SLAC's PEP-II BaBar.

http://arxiv.org/abs/gr-qc/0205059

Pioneer anomaly

http://arxiv.org/abs/1103.5222

Anomaly ended - Phong shading

http://arxiv.org/abs/1107.2886

The anomalous slowing is itself easing as Pu-238 RTG decay heat lessens with its half-life (87.7 yrs, vs. 245,500 yrs for U-234).

Provide any reproducible quantitative empirical example of non-conservation of linear momentum, or shut up. Ignorance can be educated, stupidity is forever (being its own engine of creation).

To borrow from your prior blog entry, there is no free lunch. In this instance the block mass is moved so it is no longer in the same relativistic frame, so this fails for the reason that Einstein's Box thought experiment fails, per Bohr, 1930. Of course I am not an expert, so I will appreciate it if you set me right if I am wrong.

I have only skimmed the paper, but it seems he doesn't at all consider the internal degrees of freedom of the block. The section on thermal effects deals with external collisions but still treats the block as a single rigid body. But the block itself is actually an avogadro of atoms banging against each other! How can he possibly calculate the signature of the supposed Planck-scale displacement of the center of mass of the object, while ignoring the constant Angstrom-scale displacements of the atomic parts?

/* ..provide any reproducible quantitative empirical example of non-conservation of linear momentum.. */

Inhomogeneous space-time would mean that the linear(?) momentum is conserved globally, but not locally. This routinely happens during Brownian motion of helium atoms at zero temperature, for example. At the microscopic scale all particles are doing quantum jumps without apparent reason (it applies even to free particles).

This is a most general thought that has been coming to mind on the phenomenological proposal and approach.

One of these is Wheeler's proposal of "quantum foam" [1]: on scales below that of Planck, spacetime is no longer a smooth manifold, but rather a frothy and tumultuous landscape.

"Is a tabletop search for Planck scale signals feasible?" by Jacob D. Bekenstein

I have followed the differences in perspective that I believe have been adopted by yourself and others that are close in this perspective of a "configuration space."

What had me thinking today is that the very idea of the quantum foam as a discrete function had to have been met by the way in which discrete measures are considered.

So in this sense the very approach considered by this paper. There are some things I am considering here. It got me thinking, respective of the photon's journey.

Interesting.

Best,

Bar of Lead Tungstate. Source: A Quantum Diaries Survivor, "Calorimeters for High Energy Physics experiments - part 1," April 6, 2008

"Calorimeters measure the collective behavior of particles traveling along approximately the same path, and are thus naturally suited for the measurement of jets" - Dorigo Tommaso

The idea of the configuration space is related to how one measures? QGP processes, where we have to ascertain "what is smooth," is of the nature of calorimetry design. You see?

Just thinking out loud.

Best,

I have biases in my own perspective that make it difficult sometimes to see the whole picture, yet it is one that places time as a value in terms of an emergent product of expression.

Even still as is the discrete nature of measure, there is something beyond this, yet we need to count. Something I may have borrowed from Phil.:)

"If man thinks of the totality as constituted of independent fragments, then that is how his mind will tend to operate, but if he can include everything coherently and harmoniously in an overall whole that is undivided, unbroken, and without a border then his mind will tend to move in a similar way, and from this will flow an orderly action within the whole." (David Bohm, Wholeness and the Implicate Order, 1980)

Is such a view in trouble assuming that one needs to count from the perspective of such a discrete nature to include time?

Hi Bee,

I found this to be a most helpful explanation of Bekenstein’s proposed experiment. With the points you raised it also has enlightened me about elements of QG theories which I was not actually aware of before (not that there wouldn’t be many), which includes the addition of a level of uncertainty being extra to the one which is considered under the restrictions laid out in standard Quantum Mechanics. This has me curious as to wonder which if any of the two stands as being the fundamental one. I guess what I‘m asking is would any current QG theory allow for the violation of the Heisenberg level of uncertainty and yet still not permit this narrower level of uncertainty to be dispensed with.

Best,

Phil

Someone should repeat Bekenstein's analysis, but with the single block replaced by a collection of smaller blocks connected by springs. That would begin to create some intuition about how the properties of the re-radiated photon depend on the microstructure of the block.

Only on the nature of the crystal block itself, and its uniformity. Crystal construction, by its own imperfections, would cause problems.

In the absence of gravity, such crystal formations would have better alignments.

@Plato Hagel: "Bar of Lead Tungstate Source"

PbWO_4 is tetragonal, scheelite structure, space group I4(1)/a between 1.4 and 300 K. It is not optically homogeneous. A micro-gee experiment is irrelevant. Optical test mass atomicity is irrelevant (Mossbauer spectroscopy; LIGO's mirrors as quantum objects (pdf)). Diamond's Debye temperature is 2230 K. Bekenstein's experiment runs at fractional K. Isotopically pure C-12 (commercial), not 1.07% C-13 natural abundance, minimizes phonon scattering (no C-14 in fossil carbon). CVD raw C-12 diamond, then HPHT recrystallization to Type IIa single crystal (remember Al, Zr, or Ti/Cu as nitrogen getter). Do the experiment.

One must know the technical literature to find valid loopholes. Anomalous outputs give theorists and empiricists employment.

Hi Mitchell,

It's not the uncertainty of the center-of-mass position that matters but the uncertainty of the shift. It takes only a nanosecond or so for the photon to travel through the block. The question is, what does the momentum do in this timespan. Best,

B.

Hi Phil,

We discussed the additional uncertainty you get from combining quantum mechanics and gravity here. Note that this is usually considered an effective description; it's not a fundamental feature that you would or should expect the theory to have. Best,

B.

It's difficult to see that while seeing the larger view of a Lagrangian of the universe is to see such an experiment located "as part and parcel of the variance in determination of the strength and weaknesses of that Lagrangian."

So you drive the focus down to the microscopic, and then to ask, in what case has the photon's journey been of value in the ascertaining the distance and time, of that photon's journey?

Just trying to orientate perspective in the case of Lagrange, yet we see on the macroscopically that it works.

It is to orientate the thought experiment in the direction of a determination of value when assigned to Lagrangian and its limits at minimal length.

A phenomenological question then about that environment with this application.

Best,

On another note then, being respective of the drive toward understanding using Higgs:

"The solution is to make the equations more complicated and introduce a Higgs field, which, once it is non-zero on average, can give the electron its mass without messing up the workings of the weak nuclear force." (11. Why the Higgs Field is Necessary)

It is difficult then to ascertain the Higgs field at zero, yet I want to count?

Why should the Planck length structure of spacetime know in which reference frame the block is initially at rest? If it's not at rest, the whole argument breaks down. And what happens if two experiments are performed in different places on Earth, which means different rest frames due to rotation?

Come to think of it, why doesn't the Planck scale structure of spacetime prevent me from typing or walking across the room? These motions, just as much as those of the block, involve displacements of much less than 1 Planck length taking place inside some short time interval in some rest frame.

Since there will always be some frame where any given macroscopic object's c.o.m. is displaced less than a Planck length in some fairly short time (larger than the Planck time but smaller than some quantum decoherence time), I don't see why the 'hindrance' doesn't apply to everything in the world...

Hi Thomas,

I think I basically agree with you. As I wrote, leaving aside the question of why the effective description of phonons on the lattice should care about shifts shorter than its range of validity, the only case I can imagine where such a "graininess" might be relevant is a regular Planck length lattice. Which either breaks Lorentz-invariance or would lead you to conclude that you can't cross the room because it's of Planck length in somebody's frame of reference. Best,

B.

"... would lead you to conclude that you can't cross the room because it's of Planck length in somebody's frame of reference."

Is there any equipment that could do an experiment that could determine the "graininess"?

What would be the two frames of reference?

Thomas, you make some valid points, but I'm not sure you are directing ideas in the right direction. What Bekenstein has done correctly is mention one implication of the claims that space-time is grainy and breaks the Poincare symmetry at the Planck scale. This is the claim of many quantum gravity theorists, so Bekenstein is exploring its consequences. Indeed it leads to many inconsistencies, as you point out. This is why these quantum gravity theorists who believe in the "atoms of space-time", the breaking of the Poincare symmetry, etc., as Smolin and Bee do, are advocating inconsistent theories.

Uncle Al: "One must know the technical literature to find valid loopholes. Anomalous outputs give theorists and empiricists employment."

Yes, most certainly a perspective that is needed in order to be phenomenologically productive in testing theory. It reveals much about my layman status of course.

Like I said, from a general perspective the configuration space, while detailed according to what is measured (calorimeter-wise) in the LHC, the idea is specific to the way in which "the theory" is to be tested. In this case (Bar of Lead Tungstate Source), the idea of QGP being taken down too, as microscopic indication of what is smooth, has jet potential measures.

These energy values then reveal a discrete measure as seen about something finer in distinction then that which is revealed as a "continuum in measure?"

The photon's journey, while seeking to define time as a measure of a distance what said that such distinctions might have been metallurgical described at such a discrete level?

Which points toward her next post for me?

Best,

Hi Koala,

You write:

"This is why these quantum gravity theorists who believe in the "atoms of space-time", the breaking of the poincare symmetry, etc, as smolin and bee do, are advocating inconsistent theories."

You must be confusing me with somebody. I have never worked on or "advocated" a breaking of Poincare symmetry. Best,

B.

Not sure I understand all the criticisms raised here. The main point, though, seems to be that we only care about motions of the particles making up the crystal, not motions of the center-of-mass. Bekenstein claims to address this,

"But why should we care about this small translation of the c.m.? After all the c.m. is not the position of any specific electron or quark. Eq. (A.3) of the Appendix shows that the c.m. position components are canonically conjugate to the corresponding components of the block momentum vector. This last is a key observable of the whole block, and so its canonical conjugate, the c.m. position observable, acts as a faithful proxy for the whole block's position. An additional argument for the relevance of the c.m. will be given in Sec. III."

Did Bee read the paper?

Carl,

The "faithful proxy" argument relies on the distribution of momentum via lattice excitations. As I explained, this "faithful" description isn't even sensitive to scales that can easily account for such a cm shift. Did you read what I wrote? Best,

B.

Bee, looking at your papers on the arXiv, roughly half of them involve discussions of a minimal length and the breaking of Lorentz invariance in one form or another -- this is all versions of breaking of the Poincare symmetry. So I am absolutely shocked that you now claim you have "never worked on" any of this.. wwhhhaaatttt????

Hi Koala,

If you had actually read my papers you would have noticed a) that they are not about a breaking of Lorentz-invariance, but about deformation - very different thing and b) that they are mostly criticisms of the existing realization of this idea.

I might have mentioned something about a breaking of Lorentz-invariance in the introduction or maybe in my review. Eg in my recent paper about superpositions of the speed of light, I explained in the introduction why I do not think a breaking of Lorentz-invariance is very appealing. But as I already said, I have never worked on a breaking of Lorentz-invariance.

Best,

B.

Bee, "deforming" is really a synonym for a kind of breaking of Lorentz invariance; please don't play semantic word games. By the Lorentz invariance I obviously meant the exact set of transformations due to Lorentz, anything else is a type of breaking of this particular symmetry and replacement by something else. Secondly, you have written many papers on a minimal length and a version of the atoms of space time, which "breaks" or "deforms" translational invariance, among other things, hence the poincare invariance. Please don't play silly word games by declaring you have "never worked on" any of this.

Hi Koala,

It is just plainly wrong that a deformation of Lorentz-invariance is the same as a breaking of Lorentz-invariance. I think you just don't understand the difference, or don't want to understand it. The latter introduces a preferred frame, the former doesn't. That's the common terminology. I have never worked on a breaking of Lorentz-invariance. And I have never worked on "versions of atoms of space-time" either. At least not so far. Best,

B.

I never said you had worked on models with a preferred frame of reference, I said models that don't have the "Poincare symmetry", i.e., models that don't exactly carry this symmetry group. And you have 100% certainly written many papers on this.

Hi Koala,

What you wrote was:

"This is why these quantum gravity theorists who believe in the "atoms of space-time" , the breaking of the poincare symmetry etc, as smolin and bee do, are advocating inconsistent theories."

and

"Bee, looking at your papers on the arXiv, roughly half of them involve discussions of a minimal length and the breaking of Lorentz invariance in one form or another"

As I already said, a breaking of Lorentz-invariance means in common terminology the existence of a preferred frame. I have never worked on that. If you now want to claim that you are using expressions in a different meaning, you have yourself to blame for misunderstandings. I have also so far not worked on anything to do with "atoms of spacetime" in any realization. And, as I said earlier, even most of my work on theories with deformed symmetries amounts to criticism of the idea.

Sorry, Koala, but the way you dismiss my work has merely documented that you are commenting on literature you are not actually following. Best,

B.

Bee, I have now repeatedly stated very clearly that I was referring to theories that don't carry the Poincare symmetry. That is what I am referring to, and have stated so from the very beginning. Period. These theories you have repeatedly worked on. I don't get the point of this denial.

Koala,

You later pedaled back, as is documented in this comment section for everybody who cares. My reply was to your original remark that was about a breaking of Lorentz invariance and "atoms of space time". Let us not forget that this post was about Bekenstein's paper which indeed assumes a breaking of Lorentz-invariance and some sort of atoms of space time. I have never worked on that as I have now stated sufficiently often and it seems you have meanwhile realized that. Yes, I have worked on deformations of the Poincare symmetry, which has made me, for better or worse, the most outspoken critic of this attempt. Best,

B.

That is incorrect. I stated very clearly from the beginning that I was referring to your work on the breaking of Poincare symmetry. You then challenged this by saying that you have not worked on preferred reference frames, which I never said you did. I clarified repeatedly that I was defining a breaking of the Poincare symmetry to mean....exactly that...a theory that does not carry the Poincare symmetry. And I have been 100% vindicated in this statement.

Koala: I don't have time for your bullshitting. Everybody who cares can just read what you wrote. You commented on my publications evidently without knowing what they actually are about, and then weren't able to admit your mistake. This exchange is totally uninsightful and I'm not interested in continuing it.

Breaking of symmetry and deforming of symmetry are two very different things. Clue: that is why there are two different terms.
