You might have read about this some weeks ago on Chad Orzel's blog or at Ars Technica: Nature published a paper by Pikovski et al on the possibility of testing Planck-scale physics with quantum optics. The paper is on the arXiv as arXiv:1111.1979 [quant-ph]. I left a comment at Chad's blog explaining why it is implausible that the proposed experiment will test any Planck-scale effects. Since I am generally supportive of everybody who cares about quantum gravity phenomenology, I would have left it at that and been happy that Planck-scale physics made it into Nature. But then I saw that Physics Today picked it up, and before this spreads further, here is an extended explanation of my skepticism.
Igor Pikovski et al have proposed a test of Planck-scale physics using recent advances in quantum optics. The framework they use is a modification of quantum mechanics, expressed by a deformation of the canonical commutation relation that takes into account that the Planck length plays the role of a minimal length. This is one of the most promising routes to quantum gravity phenomenology, and I was excited to read the article.
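To give an idea of what such a deformation looks like: a commonly studied minimal-length form, in one dimension and with a dimensionless parameter β₀ expected to be of order one, reads (the exact form varies between models, so take this as an illustration rather than the specific expression of the paper):

```latex
[\hat x, \hat p] \;=\; i\hbar \left( 1 + \beta_0 \, \frac{\hat p^{\,2}}{(M_{\mathrm{Pl}}\, c)^2} \right)
```

The correction term only becomes relevant when momenta approach the Planck scale, which is why β₀ is so difficult to constrain experimentally.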
In their article, the authors claim that their proposed experiment makes it feasible to "probe the possible effects of quantum gravity in table-top quantum optics experiment" and that it reaches a "hitherto unprecedented sensitivity in measuring Planck-scale deformations." The reason for this increased sensitivity to Planck-scale effects is, in the authors' own words, that "the deformations are enhanced in massive quantum systems."
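The intuition behind the claimed enhancement is easy to sketch. If one naively adds the momenta of N constituents to obtain the center-of-mass momentum, the quadratic correction term grows with the size of the system (schematically, for N constituents with comparable momenta p):

```latex
\hat P = \sum_{i=1}^{N} \hat p_i
\quad\Rightarrow\quad
\beta_0\,\frac{\hat P^{\,2}}{(M_{\mathrm{Pl}}\, c)^2} \;\sim\; N^2\, \beta_0\,\frac{\hat p^{\,2}}{(M_{\mathrm{Pl}}\, c)^2}
```

Whether this naive linear sum is actually the right quantity to use is exactly the question at issue.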
Unfortunately, this claim is not backed up by the literature the authors refer to.
The underlying reason is that the article fails to address the question of Lorentz-invariance. The deformation used is not invariant under normal Lorentz-transformations. There are two ways to deal with this: either Lorentz-invariance is broken, or it is deformed. If it is broken, there exists a multitude of very strong constraints that would have to be taken into account and are not mentioned in the article. Presumably the authors therefore implicitly assume that Lorentz-symmetry is suitably deformed so as to keep the commutation relations invariant - and so as to test something actually new. This can in fact be done, but it comes at a price: the momenta now transform non-linearly. Consequently, a linear sum of momenta is no longer Lorentz-invariant. In the appendix, however, the authors have used the normal sum of momenta to define the center-of-mass momentum. This is inconsistent. To maintain Lorentz-invariance, the modified sum must be used.
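To see why the linear sum fails: a transformation that acts non-linearly on momenta requires a non-linear composition law for them. Schematically, to first order in the inverse Planck mass (the precise form and the coefficient γ are model-dependent; this is just an illustration):

```latex
p \oplus q \;=\; p + q + \frac{\gamma}{M_{\mathrm{Pl}}\, c}\, p\, q + \mathcal{O}\!\left(M_{\mathrm{Pl}}^{-2}\right)
```

It is p ⊕ q, not p + q, that transforms covariantly under the deformed transformations, so the center-of-mass momentum must be built from this modified sum.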
This issue cannot be ignored, for the following reason. If a suitably Lorentz-invariant sum is used, it contains higher-order terms, and the relevance of these terms does indeed increase with the mass. But this also means that the modification of the Lorentz-transformations becomes more relevant with the mass. Since this is a consequence of merely summing up momenta, and has nothing in particular to do with the nature of the object being studied, the increasing relevance of the corrections prevents one from recovering a macroscopic limit in agreement with our knowledge of Special Relativity. This behavior of the sum, whose use, we recall, is necessary for Lorentz-invariance, is thus highly troublesome. It is known in the literature as the "soccer-ball problem." It is not mentioned in the article.
If the soccer-ball problem persists, the theory is in conflict with observation already. While several suggestions have been made for how this problem can be addressed, no agreement has been reached to date. A plausible and useful ad-hoc suggestion, made by Magueijo and Smolin, is that the relevant mass scale - the Planck mass - is for N particles rescaled to N times the Planck mass. That is, the scale at which effects become large moves away as the number of particles increases.
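In this proposal, the scale appearing in the N-particle composition law is rescaled, so that (schematically, using the illustrative quadratic correction from above):

```latex
M_{\mathrm{Pl}} \;\to\; N\, M_{\mathrm{Pl}}
\quad\Rightarrow\quad
\beta_0\,\frac{P^{2}}{(M_{\mathrm{Pl}}\, c)^2} \;\to\; \beta_0\,\frac{P^{2}}{(N\, M_{\mathrm{Pl}}\, c)^2}
```

For a total momentum P of order N times the typical constituent momentum, the correction then stays of the same order as for a single constituent - no enhancement with mass, and a well-behaved macroscopic limit.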
Now, it is not clear that this ad-hoc solution is correct. What is clear, however, is that if the theory makes sense at all, the effect must become less relevant for systems with many constituents. A suppression with the number of constituents is the natural expectation.
If one takes into account that for sums of momenta the relevant scale is not the Planck mass but N times the Planck mass, the effect the authors consider is suppressed by roughly a factor of 10¹⁰. This means the existing bounds (for single particles) cannot be significantly improved this way. That is the expectation from our best current understanding of the theory.
This is not to say that the experiment should not be done. It is always good to test new parameter regions, and, who knows, everything I just said could turn out to be wrong. But it does mean that, based on our current knowledge, it is extremely unlikely that anything new will be found there. Conversely, if nothing new is found, this cannot be used to rule out a minimal-length modification of quantum mechanics.
(This is not the first time, by the way, that somebody has tried to exploit the fact that the deviations grow with mass by using composite systems, thereby promoting a bug to a feature. My recent review has a subsection dedicated to this.)