Wednesday, May 10, 2006

The Minimal Length Scale

Yesterday, I gave a seminar here at UCSB about my recent work, and so I will use the opportunity for some self-advertisement. You can find the slides (PDF) online:

I talked about quantum field theories with a minimal length scale, their interpretation, application, and their relation to Deformed Special Relativity (DSR). The talk is mainly based on my recent paper:

And the basics of the model are from our '03 paper




Here is a brief summary of the main statements. In 2003 my collaborators and I worked out a model that incorporates the notion of a fundamental minimal length into quantum mechanics and quantum field theories. It builds on earlier work, most notably by Achim Kempf, but extends these approaches: it is less mathematical and instead focused on applications. The model turned out to be useful for deriving modifications of the Feynman rules, and allowed us to compute cross-sections (at least at tree level).

Effects of such a minimal length should become important at energies of about the inverse of that length. The Planck length is expected to play the role of such a minimal length. In case there exist large extra dimensions, the 'true' Planck length might be about 10^-4 fm and be testable at the LHC. Actually, what one would find in this case is that there is nothing more to find: once one reaches the minimal length scale, it is not possible to achieve a higher resolution of structures, no matter what. (And thus there's no point in building a larger collider.)
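For orientation, the conversion between a length and the energy scale at which it becomes relevant is just hbar*c ≈ 197 MeV fm. Here is a quick back-of-the-envelope check (round numbers, purely illustrative):

```python
# Rough conversion between a minimal length and the energy at which it matters.
# hbar*c ~ 197.327 MeV*fm; the numbers below are round and purely illustrative.
hbar_c_MeV_fm = 197.327

L_min_fm = 1e-4                          # a 'true' Planck length of ~10^-4 fm (large extra dimensions)
E_min_MeV = hbar_c_MeV_fm / L_min_fm     # energy scale where the minimal length becomes relevant

print(E_min_MeV / 1e6, "TeV")            # roughly 2 TeV, i.e. within reach of the LHC
```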

The motivations for the existence of such a minimal length are manifold; you can e.g. look up my brief review

Essentially, the reason for the emergence of a fundamentally finite resolution is that at Planckian energies spacetime gets strongly distorted and it is no longer possible to resolve finer structures. Also, such a minimal length acts as a regulator in the ultraviolet, which is a nice thing.

The basic idea of my model is to effectively describe such a finite resolution by assuming that, no matter how high the energy of a particle gets, its wavelength never becomes arbitrarily small. To do so, one just drops the usual linear relation between wave-vector and momentum. With this, one can then quantize, second quantize, etc. Note that in my model there is no upper bound on the energy. There is instead a lower bound on the wavelength. The energy and the momentum of the particles have the usual behavior and interpretation.
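To illustrate the idea, here is a minimal sketch with one possible, purely illustrative choice for the relation k(p): linear at small momenta and saturating at a scale Mf, so the wavelength 2*pi/k never drops below about 2*pi/Mf while the momentum itself remains unbounded. The functional form and the value of Mf are just toy assumptions for this sketch:

```python
import numpy as np

Mf = 1.0  # scale of the minimal length in natural units (1/Mf ~ minimal length); value is arbitrary here

def wave_vector(p):
    """Illustrative nonlinear relation k(p): ~p for p << Mf, saturating at Mf for p >> Mf."""
    return Mf * np.tanh(p / Mf)

for p in [0.1, 1.0, 10.0, 1000.0]:
    k = wave_vector(p)
    # the wavelength 2*pi/k approaches ~2*pi/Mf from above, no matter how large p gets
    print(f"p = {p:8.1f}   k = {k:.4f}   wavelength = {2 * np.pi / k:.4f}")
```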

While writing the paper in 2003, I could not avoid noticing some kind of a problem with Lorentz invariance: naively, a minimal length should not undergo a Lorentz contraction and become smaller than minimal. However, it turned out that the quantities with the funny transformation behaviour never entered any observables. It was thus completely sufficient to assume that such a transformation exists, without knowing what it actually looks like.

It was only two years later that I realized the connection of this model to DSR, which has apparently become enormously fashionable over the last few years. (I just found that by now there is even a textbook on DSR available.) By now there is a vast number of papers on the subject. One half are very phenomenological, the other half are very algebraic investigations. However, as far as I am aware, DSR theories are still struggling to formulate a consistent quantum field theory. (There was a very notable recent attempt by Tomasz Konopka to set up a field theory with DSR, but in my opinion the model has more than one flaw.)

Most importantly, DSR faces two problems: one is the soccer-ball problem, the other is the question of conserved quantities. To briefly address these:

  1. In DSR there is an upper limit on the energy of a particle. Such a limit should not be present for macroscopic objects (e.g. a soccer-ball). The problem is how to obtain the proper multi-particle limit, in which the deformed Lorentz-transformation behaviour should un-deform. (There has been recent progress in this direction, see e.g. Joao's paper, but I think the issue is far from being solved.)
  2. The second problem is that, since in DSR the 'physical' momenta do not transform linearly, they do not add linearly, and thus one has to think about which quantities are conserved in interactions - or how they are defined in the first place. To take a very simple example: usually, the square of the center of mass energy s for two particles with momenta p and q is s = (p+q)^2. Usually, s is Lorentz-invariant (it's a scalar), and since the transformation acts linearly on p and q, the result of boosting p+q is the same as boosting p, boosting q, and then adding both. Not so in DSR (see the sketch below this list). So, which quantity is the center of mass energy? Is it still a scalar? Is it conserved? The situation gets worse for more particles. (Again there has been progress, see e.g. the paper by Judes and Visser, but I think the issue is far from being solved.)
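To make the second point concrete, here is a small numerical sketch. It uses a Magueijo-Smolin-type nonlinear map between linearly transforming 'pseudo' momenta and physical momenta; the functional form, the scale M, and all numbers are toy choices, only meant to demonstrate that the naive sum of physical momenta is not a covariant quantity:

```python
import numpy as np

M = 10.0  # toy deformation scale; value and units are made up

def boost(k, eta):
    """Ordinary linear Lorentz boost on a (E, p) two-vector, rapidity eta."""
    E, p = k
    return np.array([np.cosh(eta) * E + np.sinh(eta) * p,
                     np.sinh(eta) * E + np.cosh(eta) * p])

def to_physical(k):
    """Nonlinear map from linearly transforming 'pseudo' momenta to physical momenta."""
    return k / (1.0 + k[0] / M)

def to_pseudo(P):
    """Inverse map: physical momentum -> pseudo momentum."""
    return P / (1.0 - P[0] / M)

def dsr_boost(P, eta):
    """Deformed boost on physical momenta: conjugate the linear boost with the map."""
    return to_physical(boost(to_pseudo(P), eta))

P1 = to_physical(np.array([3.0, 2.0]))   # two physical momenta (toy numbers)
P2 = to_physical(np.array([4.0, -1.0]))
eta = 0.7

print(dsr_boost(P1 + P2, eta))                  # boost the naive sum of physical momenta
print(dsr_boost(P1, eta) + dsr_boost(P2, eta))  # boost each particle, then add
```

The two printed results disagree, which is exactly the ambiguity: one has to decide which combination of momenta is supposed to be the conserved 'total momentum' of a multi-particle state.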

I should admit that maybe I just don't understand it. At least, it is confusing to me, and even after thinking about it for several months, I still can't make sense of it.

Therefore, let's get back to what I do understand, and that is how things work in my model. Free particles do not experience any quantum gravitational effects. They behave and propagate as usual. When they interact at high center of mass energy and small impact parameter, quantum gravitational effects can strongly disturb the spacetime. An exchange particle in this region then experiences the effects of DSR. Its wavelength has a lower bound, and there is a limit to the resolution that can be reached in such an interaction. This is schematically shown in the figure below.


The quantities that are conserved are, as usual, the asymptotic momenta of the particles. Composite objects do not experience any effects, since they are usually not bound so tightly that the gravitational interaction becomes strong. Thus, neither of the above-mentioned problems of DSR is present in this approach.

To be fair, I should point out that in my scenario there is no modification of the GZK cutoff, which is the most popular prediction of DSR.

To briefly recall the problem: protons propagating through the universe produce cosmic ray (air shower) events when they hit the earth's atmosphere. The total energy of these events can be measured (at least a lower bound). A proton with very high energy, however, should produce pions by interacting with photons of the CMB. If the energy of the proton is that high, it loses energy through this process, cannot travel far, and will not reach the earth to produce such a cosmic ray event. This suppression is expected to set in at a certain energy, the so-called GZK cutoff. There are some events that seem to indicate that cosmic rays above this cutoff have been detected (the data has to be confirmed). This has led to a huge amount of speculation about the cause of the absence of the GZK cutoff.

To come back to the main subject, the reason why there is no modification of the GZK cutoff in my model is very easy to see: the cutoff is a sudden increase in a cross-section as a function of the center of mass energy. Both the center of mass energy and the cross-section are Lorentz scalars. The cutoff has been measured on earth at a certain center of mass energy (about one GeV). Boosting it into the reference frame of the fast proton does not change the necessary center of mass energy.
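For orientation, here is the standard threshold estimate behind the 'about one GeV'. It is a rough sketch only: a head-on collision of the proton with a single CMB photon of typical energy, ignoring the full photon spectrum and angular average:

```python
m_p   = 0.938     # proton mass in GeV
m_pi  = 0.135     # neutral pion mass in GeV
E_cmb = 6.0e-13   # a typical CMB photon energy in GeV (~6*10^-4 eV)

# Threshold for p + gamma -> p + pi in a head-on collision:
# s = m_p^2 + 4*E_p*E_gamma >= (m_p + m_pi)^2, and s is a Lorentz scalar.
sqrt_s_threshold = m_p + m_pi
E_p_threshold = ((m_p + m_pi)**2 - m_p**2) / (4.0 * E_cmb)

print(sqrt_s_threshold)   # ~1.07 GeV: the 'about one GeV' center of mass energy
print(E_p_threshold)      # ~1e11 GeV, i.e. ~10^20 eV proton energy in the earth frame
```

The center of mass energy at which pion production sets in is fixed by particle masses; only the corresponding proton energy in the earth frame looks enormous because the CMB photon is so soft.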

Here is something that still puzzles me: DSR is argued to be observer independent. Now I wonder how it can be that in one system the cutoff is at a different center of mass energy than in the other. And if it were so, couldn't one use exactly this to distinguish between observers?

Anyway, the bottom line is that my model is an alternative interpretation of DSR. It has fewer problems, but is also less spectacular. I am a particle physicist, and I would really like to see a self-consistent formulation of a QFT with the 'usual' DSR interpretation - maybe that would help me to understand it.

Update Dec. 9th 2006: see also Deformed Special Relativity

5 comments:

  1. If you are going to be completely honest about high energy particles curving space-time then you have to admit that the momentum operators no longer commute. They are after all the symmetrized divergences (you can't use the plain old gradient/derivative because it isn't contra-variant). It is easy to show that the amount by which they no longer commute is the curvature tensor.

    Thus you can never have complete knowledge of wave-length and frequency in an accelerated system.

    Next, computing the Heisenberg uncertainty relationship yields the metric plus the product of a connection coefficient with the position vector.

    Never did like tensor notation though; was trained in Abstract Algebra and much prefer that basis-free non-commutative notation to all these raised and lowered indices explicitly referencing bases.

  2. Why should momentum be conserved? A minimum scale length suggests an emergent phenomenon that does not reduce to a zero-dimensional point. We can handle both.

    Noether's theorem couples conservation of angular momentum to isotropy of space. A non-Noetherian external symmetry breaking (coupled to translation and rotation) of proper magnitude and mechanism consistent with observation and empirically testable upsets the apple cart.

    Short list: geometric parity violation (chiral vacuum background pseudoscalar). It's a non-point interaction, decoupling it from particle studies. It also requires a minimum emergent scale. Parity is not zero-dimensional. It severely prunes the 10^500 satisfactory vacua in string theory.

    It is trivially testable in existing apparatus sensitive to 10^(-13) difference/average signal. Any net signal less than 10^(-10) relative is consistent with all physical and chemical observations to date.

  3. insightaction said: [...] that the momentum operators no longer commute

    Right. There is an even more general form of the generalized uncertainty principle, which you can find e.g. in the early Kempf papers

    On Quantum Field Theory with Nonzero Minimal Uncertainties in Positions and Momenta

    Author: Achim Kempf


    Best,

    B.

    So once you start modifying the uncertainty relationships it becomes worthwhile to look at data from modern laser cavity resonance isotropy experiments. Isotropy experiments measure our error in our knowledge of the speed of light, or in other words the degree to which both frequency and wavelength are not completely known. The best experiments put a fractional limit around 10^-12 to 10^-17, but they haven't looked for diurnal variance in the standard deviation of the frequency, just in the frequency itself.

    If someone would lend me the data I'd do it myself in Matlab.

  5. Bee, you may be aware of this recent paper:
    http://arxiv.org/abs/hep-th/0610064

    Could you comment on its consequences?

