Saturday, December 09, 2006

Deformed Special Relativity

My prediction about the number of comments on Joe Polchinski's review of Peter Woit's and Lee Smolin's books over at CV wasn't so bad, and the comments are still coming. I would like to use the opportunity to write about a topic I have been working on for a while, and which I believe was mentioned in that discussion, namely deformations of special relativity (DSR).

In May, I gave a seminar at UCSB about my recent work on the topic, which I have briefly summarized in the post The Minimal Length Scale. Since I recall that Joe Polchinski was present that day, I sadly conclude that the seminar wasn't very illuminating, so I'll try to clarify some things. In the beginning though, I should add a note of caution, since my work on DSR is not in complete agreement with the standard approach. You can find further details in my papers.

To set the context, in the review Joe Polchinski writes:

    Smolin addresses the problem of the Planck length (“It is a lie,” he says). Indeed, Planck’s calculation applies to a worst-case scenario. String theorists have identified at least half a dozen ways that new physics might arise at accessible scales [6], and Smolin points to another in the theories that he favors [7], but each of these is a long shot. [...]

With reference to the footnotes:

    [6] The ones that came to mind were modifications of the gravitational force law on laboratory scales, strings, black holes, and extra dimensions at particle accelerators, cosmic superstrings, and trans-Planckian corrections to the CMB. One might also count more specific cosmic scenarios like DBI inflation, pre-Big-Bang cosmology, the ekpyrotic universe, and brane gas cosmologies.

    [7] I have a question about violation of Lorentz invariance, perhaps this is the place to ask it. In the case of the four-Fermi theory of the weak interaction, one could have solved the UV problem in many ways by violating Lorentz invariance, but preservation of Lorentz invariance led almost uniquely to spontaneously broken Yang-Mills theory. Why weren’t Lorentz-breaking cutoffs tried? Because they would have spoiled the success of Lorentz invariance at low energies, through virtual effects. Now, the Standard Model has of order 25 renormalizable parameters, but it would have roughly as many more if Lorentz invariance were not imposed; most of the new LV parameters are known to be zero to high accuracy. So, if your UV theory of gravity violates Lorentz invariance, this should feed down into these low energy LV parameters through virtual effects. Does there exist a framework to calculate this effect? Has it been done?

There is a reply to that question in comment #20 by Brett:

    I wanted to answer the question posed in [7].
    In short, this is a significant problem for any theory that predicts Lorentz violation. [...]


    The most explicit calculation of this that has been published is, I believe, in Collins, et al. Phys. Rev. Lett. 93, 191301 (2004). They take a Lorentz-violating cutoff and show how it affects one low-energy function. [...]

Brett refers to the paper by Collins et al. cited above. There follows a comment by Jacques Distler on Lorentz violation, and some other comments; then, in comment #30, Robert comes back to the question:

    IIRC the way Lorentz violation is supposed to show up in loopy physics is that the dispersion relation is violated and the speed of light depends on energy (showing up in early or late arrival of ultra high energy gamma ray burst photons compared to ones of lower energy). The idea is that even if the relative effect is quite small, the absolute size could be measurable, as these photons have traveled across half the universe. Does anybody have an understanding of how this effect arises? [...] Which calculation does this refer to? What do I have to compute to get this energy dependent speed of light?

Then, in comment #43, Joe Polchinski partly answers his own question:

    Brett #20,22: Thanks for the reference, this is certainly what I would expect. I understand that there is the hope for a 'deformed algebra' rather than a simple violation, but to an outsider it seems that what is being done in LQG is to return to pre-covariant methods of QFT, cut things off in that form, and hope for the best. It would be good to see some calculations.



Now let me add my comments:

The idea of deforming special relativity is to allow two invariant parameters in the transformations between reference frames. One invariant is the speed of light; the other is the regulator in the ultraviolet, that is, a maximal energy scale. This energy scale is usually identified with the Planck energy ~10^19 GeV. If one believes that the Planck energy acts as a maximal energy scale, then all observers should agree that this scale is maximal. Since usual Lorentz transformations do not allow this (one can always boost an energy to arbitrarily high values), one needs a new type of transformation. These turn out to be non-linear in the momentum variables, which is the reason why they usually do not show up in standard derivations of Lorentz transformations, where one assumes linearity.
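To make this more concrete: a common way to construct such transformations (though not the only one) is as a nonlinear realization of the ordinary Lorentz group. One maps the physical momentum p to an auxiliary variable U(p) that transforms linearly, applies an ordinary boost, and maps back,

$$ p' = U^{-1}\big(\Lambda\, U(p)\big)\,, $$

where \Lambda is a usual Lorentz transformation and U is an invertible, non-linear map on momentum space that reduces to the identity for energies far below the Planck energy. The group structure is untouched; only the action on the momenta becomes non-linear.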

The construction of such transformations that respect the upper bound on the energy scale is possible, and they can be written down explicitly. The approach has been pushed forward notably by Giovanni Amelino-Camelia, who has written an enormous number of papers on the topic. Unfortunately, I find his papers generally very hard to read and confusing. A very readable and clear introduction that I can recommend is e.g.

These theories do not break Lorentz invariance in the sense that they do not single out a preferred rest frame. Instead, the Lorentz transformations (as functions of the boost parameter) are modified at high values of the boost parameter. This allows the maximal energy to be an invariant quantity. You can find an explicit example of such transformations e.g. in gr-qc/0303067, Eq. (19). What is deformed in this approach, as far as I understand it, is not the algebra itself, but the action of the generators on the elements of the space.
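For illustration, the Magueijo-Smolin boosts (quoted here from memory in 1+1 dimensions with c = 1, so please check gr-qc/0303067 for signs and conventions) read

$$ E' = \frac{\gamma\,(E - v p)}{1 + (\gamma - 1)\,E/E_p - \gamma\, v p/E_p}\,, \qquad p' = \frac{\gamma\,(p - v E)}{1 + (\gamma - 1)\,E/E_p - \gamma\, v p/E_p}\,. $$

Inserting E = E_p gives E' = E_p for every boost, so the Planck energy is observer independent, while for E, p much smaller than E_p the usual Lorentz transformations are recovered.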

A deformation of Lorentz invariance consequently leads to a new invariant scalar product in momentum space, which means one has a modified dispersion relation. Under quantization, the approach is also known to imply a generalized uncertainty principle, which stems from the modified commutation relations. Theories of this type can, but need not, have an energy dependent speed of light (for details about these relations see e.g. hep-th/0510245).
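Schematically, and with conventions differing from paper to paper, these modifications are often parametrized to lowest order as

$$ E^2 = \vec p^{\,2} + m^2 + \alpha\,\frac{E^3}{E_p} + \dots\,, \qquad \Delta x\,\Delta p \;\gtrsim\; \frac{\hbar}{2}\left(1 + \beta\,\frac{(\Delta p)^2}{E_p^2}\right)\,, $$

with dimensionless coefficients \alpha, \beta of order one. The bound on \Delta x in the second relation is minimized at \Delta p = E_p/\sqrt{\beta}, giving a smallest achievable position uncertainty of \sqrt{\beta}\,\hbar/E_p, i.e. of the order of the Planck length.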

In contrast to this, the paper by Collins et al. mentioned by Brett in comment #20 explicitly examines a scenario with violation of Lorentz invariance. As they state in the abstract: "Here, we explain that combining known elementary particle interactions with a Planck-scale preferred frame gives rise to Lorentz violation at the percent level, some 20 orders of magnitude higher than earlier estimates [...]" I am reasonably sure this was not the scenario Lee Smolin is referring to in his book. I vaguely recall he actually writes something about Giovanni -- it involved a knife being put to somebody's throat, or something like that. Unfortunately, I lent the book to my office mate, so I can't look it up.

If one introduces a hard cut-off in a momentum integration without making use of a modified Lorentz symmetry, one of course runs into problems. With the use of deformed transformations, however, this problem can be circumvented. A good way to think about it is, in my opinion, to picture momentum space not as a flat, but as a curved space. In this case, the integration over the volume in one or more directions can be finite. The non-flatness of the space shows up in the volume element via the square root of the determinant of the metric tensor, which can improve the convergence of loop integrals. It is exactly this additional factor (square root of g) in the volume element that makes the integration invariant under the appropriate transformations in that space.
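As a schematic illustration (my shorthand notation, not any specific published model), a loop integral then takes the form

$$ \int \frac{d^4p}{(2\pi)^4}\, \sqrt{|\det g(p)|}\; f(p)\,, $$

and if the momentum space metric g is such that \sqrt{|\det g|} falls off fast enough at large momenta, or the invariant volume in some directions is finite (as it is for de Sitter momentum space), integrals that diverge with the flat measure become finite, while the full expression remains invariant under the deformed transformations.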

Another way to think about it is to consider a non-linear relation between wave-vector and momentum, in which case the role of the convergence-improving factor is played by the Jacobian determinant of the functional relation between the two, see e.g. hep-ph/0405127.
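As a numerical toy model (the relation k(p) below is hypothetical and chosen purely to illustrate the mechanism; it is not the one used in hep-ph/0405127): suppose the wave-vector saturates at the Planck scale, k(p) = m_p tanh(p/m_p), so the Jacobian dk/dp = 1/cosh^2(p/m_p) suppresses large momenta. A radial toy "loop integral" that diverges with the flat measure then converges:

```python
import numpy as np
from scipy import integrate

M_P = 1.0  # Planck mass in natural units

def integrand_flat(p, m=0.1):
    # radial part of a quadratically divergent Euclidean one-loop integral in 4d
    return p**3 / (p**2 + m**2)

def integrand_deformed(p, m=0.1):
    # same integrand times the Jacobian dk/dp of the toy relation
    # k(p) = M_P * tanh(p / M_P), i.e. dk/dp = 1 / cosh(p / M_P)**2
    return integrand_flat(p, m) / np.cosh(p / M_P) ** 2

for cutoff in (10.0, 50.0, 100.0):
    flat, _ = integrate.quad(integrand_flat, 0.0, cutoff)
    deformed, _ = integrate.quad(integrand_deformed, 0.0, cutoff)
    print(f"cutoff = {cutoff:6.1f}   flat = {flat:9.1f}   deformed = {deformed:7.4f}")

# The flat integral grows like cutoff^2/2; the deformed one settles at a finite value.
```

This Jacobian plays the role of the "squeezing factor" that comes up again in the comments below.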

A quantum field theory with DSR can be formulated as a theory with higher derivatives in the Lagrangian (see e.g. hep-th/0603032 or gr-qc/0603073). In fact, as I like to point out, in a power series expansion one needs arbitrarily high derivatives, since a finite polynomial could never reproduce an asymptotic limit. If one writes down a series expansion to get an effective theory, one obtains corrections to higher order interactions suppressed by powers of the Planck mass, as one would expect. Each of these terms is Lorentz invariant, provided the quantities are transformed appropriately. However, in my opinion, such an expansion is not very helpful, since the important thing is the convergence of the full series. These higher order terms come with the usual constraints on the interactions. I also don't see the point in examining them in great detail, since we don't know anyhow what other funny things might happen to the particle content at GUT or Planck scale energies.
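Schematically (my notation; the cited papers differ in the details), such a Lagrangian for a scalar field looks like

$$ \mathcal{L} = \frac{1}{2}\,\phi\,\big(\Box + m^2\big)\, g\!\left(\Box/m_p^2\right)\phi + \mathcal{L}_{\mathrm{int}}\,, $$

where g is a function with g(0) = 1 whose power series generates the tower of higher derivatives. If g is to approach a non-trivial asymptotic limit at large argument (an exponential, say), then no truncation of its series at finite order can reproduce that limit, which is the above statement about finite polynomials.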

Unfortunately, the status of a full quantum field theory with DSR is presently still very unsatisfactory. It is a topic I am working on myself, and I am very optimistic that there will be some progress soon. It is however possible to make some general predictions, using kinematic arguments, or just by applying the modified transformations. As mentioned by Robert above, an energy dependent time of flight (in the case of DSR with an energy dependent speed of light) is an example of such a prediction. Some details about this can be found in the literature.
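The size of this effect is easy to estimate. With a modification of the photon's speed to first order in E/E_p (a back-of-the-envelope sketch, coefficient \xi of order one assumed),

$$ v(E) \simeq c\left(1 - \xi\,\frac{E}{E_p}\right) \quad\Rightarrow\quad \Delta t \simeq \xi\,\frac{E}{E_p}\,\frac{L}{c}\,, $$

which for GeV photons traveling a distance of the order of a Gpc comes out at some milliseconds -- small, but not hopelessly out of reach, which is why gamma-ray bursts are the natural place to look.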

My interest in DSR arises from the fact that it is based on a very general expectation that we have about quantum gravity, which is that the Planck energy acts as a regulator in the ultraviolet. In my own work, I have mainly examined to what extent it is possible to include this property in standard quantum field theories, as an effective description of what should actually be described by a full theory of quantum gravity. I am not an expert on how DSR is related to LQG, or on how strictly this connection can be established.


Update: See also what the expert says.


32 comments:

  1. Hi Sabine,
    I don't know DSR well enough to speculate properly, but something you said set off a bell:

    "A good way to think about it is in my opinion to picture momentum space not as being a flat, but a curved space."

    This sounds surprisingly like the deformation of a flat tangent space to a curved tangent space in Cartan geometry. Specifically, a Cartan geometry for GR uses de Sitter space as the tangent manifold instead of Minkowski space.

    I'll have to play with this idea more before I see if it's actually sensible -- but I thought it worth mentioning.

    One of John Baez's students, Derek Wise, just wrote up a nice expository paper on Cartan geometry for GR here:

    http://arxiv.org/abs/gr-qc/0611154

    -Garrett

  2. Hi Garrett,

    yes, this is exactly what I had in mind. It turns out that DSR can be formulated as such, see e.g.

    De Sitter space as an arena for Doubly Special Relativity

    and other works by Kowalski-Glikman.

    I haven't yet really worked out what the connections to my approach are, but it seems to me it is essentially the same. Though I wonder if one can make it a more general geometry than de Sitter, or whether that clashes with the symmetry requirements (keep in mind that in momentum space there is no need for homogeneity).

    Best,

    B.

  3. Sabine,

    I'm trying to understand your motivations, but I am confused. You wrote: "If one believes that the Planck energy acts as a maximal energy scale, then all observers should agree that this scale is maximal. Since usual Lorentz transformations do not allow this (one can always boost an energy to arbitrarily high values), one needs a new type of transformation."

    I'm afraid I don't really see why this is necessary. For instance, in QCD there is a preferred scale, Lambda_QCD, which all observers agree on, but they certainly don't have to modify Lorentz symmetry to do so! Instead what happens is that Lorentz-invariants (like the Mandelstam variables in a scattering process) are sensitive to the value of this scale.

    So, why do you expect that there is a preferred energy rather than a preferred p^2? To me it seems a rather arbitrary assumption, but surely you have some further motivation?

  4. Dear Anonymous,

    This is of course correct. I apologize, my sentence was maybe misleading. The essential point is not that all observers should be able to determine an energy scale to be of the same value, but that they should agree on this energy scale being a maximal value (say, for the energy of a virtual particle), or on a length scale being a minimal value, respectively (say, for the wavelength of a particle).

    In fact, one could as well say that in General Relativity, each observer can determine the Planck scale from the coupling of gravity to matter, and of course they would all agree on the value.

    However, to stay with the example: if one takes some matter density distributed in space-time and lets an observer travel through it at a large relative velocity, he will perceive the medium as arbitrarily dense as a result of Lorentz contraction. This density can eventually become larger than (the fourth power of) the Planck scale. This is what a deformation of Lorentz transformations avoids: the observer will never perceive a medium with a super-Planckian density.
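    Explicitly, for pressureless dust with rest frame density \rho, the stress-energy tensor is T^{\mu\nu} = \rho u^\mu u^\nu, and a boost with gamma factor \gamma gives

    $$ T'^{\,00} = \gamma^2 \rho\,, $$

    one factor of \gamma from the Lorentz contraction of the volume and one from the energy of the constituents, so the density exceeds the Planck density m_p^4 once \gamma^2 > m_p^4/\rho.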

    This is in fact closely connected to the blueshift problem in black hole evaporation. If I recall correctly, it was actually Unruh who first introduced a modification of the dispersion relation with the aim of circumventing this problem. (I'll see if I can find the reference, I don't have it at hand.)


    Instead what happens is that Lorentz-invariants (like the Mandelstam variables in a scattering process) are sensitive to the value of this scale.

    So, why do you expect that there is a preferred energy rather than a preferred p^2?


    I am not sure I understand the question. Do you mean a preferred value of Mandelstam variables, like s^2 being large, t^2 being small, at which the effect becomes important?

    Best,

    B.

  5. Here is the Unruh reference I had in mind. It might very well be that this was not actually the first such approach (it almost certainly was not), but at least it's one that I know of:

    Sonic analogue of black holes and the effects of high frequencies on black hole evaporation
    Phys. Rev. D 51, 2827-2838 (1995)

  6. Dear Bee,

    much like the anonymous, I don't understand your motivation, and your new example with the density made it less clear for me, not clearer.

    It's because I think that one can easily falsify your statement that the individual component "T_{00}" of the stress-energy tensor can't ever be bigger than the Planck density.

    Instead, as the anonymous correctly tried to argue but you ignored him or her, it is only the Lorentz-invariant quantities that can be subject to general inequalities, which is why these inequalities don't require - or don't allow - any modifications of special relativity.

    Take water - for example the ocean or the bottles you can buy in your local supermarket. Grab a fast particle with a large Lorentz gamma factor. A really energetic neutrino could do it but a photon has formally an infinite gamma.

    This particle will see "T_{00}" equal to gamma times the density of water. Do you really doubt that this is what will happen? Such a collision doesn't even have to produce black holes if you don't overshoot the energy.

    My understanding is that you want to assume that relativity is broken in the most naive, pre-1905 fashion. Then you adopt some of the assumptions that were, on the contrary, shown to be incorrect when special relativity was discovered - like the assumption that all observers should agree on distances or energy - and then you argue that these non-relativistic assumptions are what you want to call a new relativity.

    Is there a difference between your reasoning and the reasoning of a generic alternative physicist who wants to debunk Einstein and return to non-relativistic physics? If there is one, I still did not understand it.

    Best
    Lubos

  7. Anonymous' question: So, why do you expect that there is a preferred energy rather than a preferred p^2?

    Bee: I am not sure I understand the question. Do you mean a preferred value of Mandelstam variables, like s^2 being large, t^2 being small, at which the effect becomes important?

    LM: It is not quite clear to me what is unclear about the question.

    You (Bee) wrote, in the main article: "If one believes that the Planck energy acts as a maximal energy scale, then all observers should agree that this scale is maximal. Since usual Lorentz transformations do not allow this (one can always boost an energy to arbitrarily high values), one needs a new type of transformation."

    Anonymous described it very politely, but I think that according to the insights we call conventional 21st century physics, the sentence above is nothing else than a misunderstanding of special relativity.

    Special relativity dictates that different observers will never agree on things like coordinate distances or time intervals or energy or momentum as individual components of the 4-vector.

    The only thing they can agree upon are Lorentz invariant quantities such as p^2 of a four-vector p. Indeed, there are many invariant length scales and energy scales that all observers agree upon, like the QCD scale. But they must be measured in the invariant fashion.

    You seem to suggest that the violation of Lorentz invariance is implied by something. Probably much like Anonymous, I think that the examples above show very clearly that the violation is certainly not inevitable in any sense.

    If there are inequalities that determine different regimes of physics, they refer to invariant scales such as p^2 - for example the Mandelstam variables (not sure why you wrote s^2, t^2 instead of just s,t, was it a typo?).

    So if I summarize it, you seem to justify your interest in these unusual theories by an argument that seems rather obviously flawed, and Anonymous' question is whether you also have some reason to be interested in these theories that is not flawed.

    Thanks.

    Best
    Lubos

  8. Hi Lubos,

    It's because I think that one can easily falsify your statement [...] Instead, as the anonymous correctly tried to argue but you ignored him or her, it is only the Lorentz-invariant quantities that can be subject to general inequalities [...] If there are inequalities that determine different regimes of physics, they refer to invariant scales such as p^2 - for example the Mandelstam variables (not sure why you wrote s^2, t^2 instead of just s, t, was it a typo?).


    It's no surprise to me that you don't understand my motivations. If you think you can easily falsify DSR, how about you do it? Regarding your concern about inequalities for non-invariant quantities: I share it, which I think you know. Unlike you, however, I have indeed published my concerns. Why don't you make the effort to have a look at hep-th/0603032 (appendix A)? In this paper you also find my opinion about the soccer-ball problem (which I am afraid is fatal in the usual DSR approach, but unfortunately I can't prove it). If you think it's too much effort to read it, have at least a look at my brief summary of the paper in the above mentioned earlier post The Minimal Length Scale; I don't have the time to repeat myself endlessly.

    Indeed, the whole point of the paper is that I think one has to properly define what sets the scale at which the effects that are supposed to describe quantum gravity become important.

    Also: sorry, yes, I keep forgetting s and t are already squared, this is a typo. I just wasn't sure what anonymous meant by p^2. I wanted to make sure whether p is indeed the sum of in- and outgoing momenta in an interaction.

    Is there a difference between your reasoning and the reasoning of a generic alternative physicist who wants to debunk Einstein and return to non-relativistic physics? If there is one, I still did not understand it.

    You don't understand me because you don't want to understand. What I try to do is to incorporate a fundamentally finite resolution of distances in a quantum field theory of particles. We know that the fundamental things probably aren't particles, but we have strong indications that whatever the fundamental description is, one can't resolve structures smaller than the Planck scale. Instead of using the fundamental yet-to-be-found theory, I take the particles and equip them with an extra property which captures the funny behaviour. This way I get an effective description that reproduces e.g. the generalized uncertainty principle (see e.g. papers by Gross and Mende back in the '80s; I can give you the references if you want).

    I have been concerned with particle interactions, and not so much with the DSR formulation, but it turned out the one is connected to the other. For how my approach differs from Amelino-Camelia's, see the paper mentioned above. I am not going to defend some model which is a) not mine and b) one that I either don't understand or think is fatally flawed regarding the description of the free particle.

    Then you adopt some of the assumptions that were, on the contrary, shown to be incorrect when special relativity was discovered - like the assumption that all observers should agree on distances or energy - and then you argue that these non-relativistic assumptions are what you want to call a new relativity.

    Look, Lubos, I don't call anything a 'new relativity'. I urge you again: if you can show that DSR is incorrect, then do it. I'll invite you up to PI, you can give a seminar on it, and we can all move ahead. Arguments like the ones you have brought up so far are easily defeated, and unfortunately most DSR predictions are hard to measure. E.g. your (not very original) objection that DSR effects can be removed by redefining quantities is completely vacuous. The essential question (which is also discussed in my works) is: what are the observable quantities? The whole point of DSR is that the observable momenta are the ones with the modified transformation behaviour. Yes, you can always express these through quantities that transform as usual four-vectors, but the claim is that these aren't the things you observe. Your arguments come down to the statement that you don't like DSR.

    Whether or not DSR is in fact realized in nature, I don't know. When it comes to theories with an energy dependent speed of light, I actually don't think so. This is why in my papers I have a generalized uncertainty, but no energy dependent speed of light. I don't have a good argument for it, just that it doesn't go with my intuition. In fact, I recently tried to prove that it can't work. But I failed. The paper will be on the arXiv soon.

    Best regards,

    B.

  9. Dear Bee!

    "If you think you can easily falsify DSR, how about you do it?"

    Well, I think I have already done it many times, and probably not just me, but feel free to choose to ignore these comments.

    The precise statements of DSR that have been offered so far are not terribly well-defined and they are not at the level of real research papers, so it is not possible to write the falsification on the level of serious papers either.

    But at the same level of accuracy as DSR has been proposed, it has also been falsified.

    "I don't have the time to repeat myself endlessly."

    No one wants you to do anything endlessly. You could however try to think about our concerns at least once, which, as far as I know, you have not yet done.

    I have read all the sources of yours that you mentioned pretty carefully, and I am pretty sure that they don't contain any answer to our question of why you want to impose inequalities for individual components of four-vectors.

    Your answer reaffirms the qualified guess that the answer is really not there, if it is so difficult to agree on what p^2 is. It doesn't matter what "p" you use, whether it is the momentum of one particle or the center-of-mass momentum of two particles, or the momentum exchange. What matters is that you must square this 4-vector to get an invariant, and only this invariant can be subject to generally valid inequalities.

    "What I try to do is to incorporate a fundamentally finite resolution of distances in a quantum field theory of particles."

    You don't want to listen. People are trying to peacefully explain to you that you are approaching the problem incorrectly. The finite distance resolution refers to the proper distance scales, not coordinate distances, and this correctly incorporated finite resolution doesn't imply any modification of Lorentz symmetry, as you can see in any theory with a scale, such as QCD or string theory.

    QCD and string theory are not "yet-to-be-found" theories. They are demonstrably existing theories, unlike everything that you talk about. And they falsify your hypothesis that Lorentz invariance must be modified in order to introduce privileged mass scales or the minimum length scale, as in string theory.

    Do you disagree? If you do, could you please present some extraordinary evidence for such an extraordinary statement - more precisely a statement that every particle physicist knows to be wrong? Or do you just fail to listen?

    I assure you that we know papers by Gross and Mende and they certainly don't support your conjectures about violations of Lorentz invariance.

    In the last paragraph, you ask me what the observables in DSR are. Don't you think that it is a question that a proponent of DSR should answer? I don't understand how one can seriously discuss a framework in which everything, including the choice of observables, is so completely undefined.

    It is you, not me, who claims that there is some interesting physics hiding behind DSR, so it is you, not me, who should say what the observables are. I don't think that there is any choice of observables that leads to anything else than what DSR is right now, namely chaos.

    Best
    Lubos

  10. I think there is a fairly basic algebraic objection to DSR. DSR is supposed to be constructed as a nonlinear representation of the Lorentz algebra. The Lorentz algebra is just the universal enveloping algebra of so(3,1). When one constructs this enveloping algebra explicitly, as a quotient of the tensor algebra on the generators, it LOOKS like it depends a great deal on how those generators are chosen. However, this dependence is entirely illusory; the algebra is the same whatever basis is chosen.

    So, choose a linear basis for the transformations or a nonlinear one--the structure of the algebra is the same. The representations are exactly the same. They look different because, in one case, you have written the theory in extremely awkward coordinates. The momentum in one version is the real momentum; in the other theory, what is called the momentum is actually an awkward nonlinear function of the true momentum. If you try to construct a truly physical quantity, like the velocity (to choose an example that was worked out a few years ago), you find that it behaves exactly the same way, obeying ordinary SR, in either theory.

    This does not apply, obviously, if different fields see different deformations, but this then becomes an explicit breaking of Lorentz symmetry.
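    In formulas, the objection can be summarized as follows (my paraphrase of the argument above): for any invertible map U on momentum space, the deformed generators

    $$ \tilde J^{\mu\nu} = U^{-1} \circ J^{\mu\nu} \circ U $$

    obey exactly the same so(3,1) commutation relations as the original J^{\mu\nu}, so nothing about the algebra or its representations has changed; only the variable called 'momentum' has been relabeled as U(p).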

  11. Bee,

    if my understanding is correct, in DSR Lorentz boost transformations are kinematical (just as in ordinary special relativity): the boost transformation of a particle's momentum is exactly the same no matter whether the particle is free or interacts with other particles. In this respect, Lorentz boosts are assumed to be similar to space translations and rotations. What do you think about the idea of dynamical boosts? Why shouldn't the (Lorentz) boost transformations of the particle's momentum depend on the interaction of this particle with the rest of the system? This could be analogous to the properties of time translations. The time evolution of the particle's momentum certainly depends on the interaction. Why should the "boost evolution" be different? After all, it is well-known (see, e.g., Weinberg's vol. 1) that in interacting relativistic theories (including QFT) the generators of time translations (the Hamiltonian) and boosts are interaction-dependent, while the generators of space translations and rotations are interaction-free.

  12. How can anyone believe that one can get new physics just by choosing some wacky new coordinates? It's like claiming that Newtonian mechanics is diffeomorphism invariant just because it can be formulated in curvilinear coordinates.

    But then again, it is not much worse than the quite popular kind of noncommutative geometry, where the configuration space becomes noncommutative because one makes a coordinate transformation in phase space. A noncanonical transformation, for sure, but a coordinate transformation nonetheless.

    Of course, it is much harder to find new cohomologically nontrivial modifications, like the multidimensional Virasoro algebra.

  13. Dear Lubos:

    Had you read my papers, you would have noticed that I write that the allegedly present threshold corrections in DSR arise from the fact that the inequality s > sum over mass^2 does not have a Lorentz invariant quantity on the left hand side. Which is why I think they either break observer independence (that would indeed be back to a violation of Lorentz invariance), or they are just wrong (meaning, the threshold had better be computed in a theory and not 'derived' using a handful of equations).

    That's the issue with the GZK cutoff, which lately has considerably calmed down, probably because people are waiting for the new data. I couldn't find anything wrong with the time of flight prediction for gamma rays. If one accepts that the speed of light can be energy dependent, it is a pretty straightforward argument, and doesn't actually involve any interactions.

    I assure you that we know papers by Gross and Mende and they certainly don't support your conjectures about violations of Lorentz invariance.

    As I've tried to explain before, in my model all observables respect standard Lorentz invariance. Cross-sections are still scalars, momenta are still Lorentz vectors in the usual sense. What is modified is the propagation of the virtual particle, such that it reproduces a fundamentally finite resolution that in principle should be described by a more complete theory. It is this virtual particle which has the funny transformation behaviour. Since it is only an effective description, this transformation behaviour need not show up in the fundamental theory. Instead, it is [a particle with funny properties under Lorentz boosts] that replaces a [fundamental description with usual behaviour under Lorentz boosts], the 'funny properties' being chosen such that they reproduce the generalized uncertainty and improve UV finiteness.

    In the last paragraph, you ask me what the observables in DSR are. Don't you think that it is a question that a proponent of DSR should answer? I don't understand how one can seriously discuss a framework in which everything, including the choice of observables, is so completely undefined.

    I didn't ask you what the observables are; I essentially stated that this is the question. I have a pretty clear definition of what the observable momenta are, and of how to compute a cross-section in my model. As to standard DSR, the problem I have faced when discussing it with people is that they keep telling me that DSR is in momentum space, and position space is a problem. I don't particularly like this excuse, which is why I am working on the position space description. I hope that eventually it will be possible to give a clear interpretation to the quantities (using Noether's theorem, e.g.).


    People are trying to peacefully explain to you that you are approaching the problem incorrectly.

    If there is an upper bound on the integration in momentum space as it appears in loop integrals, it requires a modified transformation behaviour for the momentum of the virtual particle, otherwise the bound will indeed break Lorentz invariance. I don't know how to achieve this without a DSR-like transformation for the exchange particle (or a curved momentum space, see above), this is how I stumbled across the topic.

    Besides this, you write many words with little content. If you have such good arguments against DSR, even if 'it is not possible to write the falsification on the level of serious papers', you should at least go to a conference and present your conclusions. I offer again: if you have something sensible to say, I'll invite you to PI and you can give a seminar.

    Best,

    B.

  14. Clarification to avoid misinterpretation: in my last comment, the sentence "the inequality s > sum over mass^2 does not have a Lorentz invariant quantity on the left hand side" refers to that inequality in the DSR scenario.

  15. Dear Anonymous,

    If you try to construct a truly physical quantity, like the velocity (to choose an example that was worked out a few years ago), you find that it behaves exactly the same way, obeying ordinary SR, in either theory.

    I'd be happy to see the reference, if you'd be so kind as to provide it. I'd have expected this requires knowledge of the formulation in position space; how else can you define the velocity?

    Best,

    B.

  16. Dear Eugene,

    This is a very nice picture you have chosen :-) Regarding your suggestion, I see some serious conceptual problems.

    Why shouldn't the (Lorentz) boost transformations of the particle's momentum depend on the interaction of this particle with the rest of the system?

    How would you define the 'rest of the system' and the interactions with it? In my approach the transformation depends on whether the particle is on- or off-shell, which seemed to me like a useful prescription. How would you describe a particle being in an interaction, and the dependence of the boost on it?

    Best,

    B.

  17. Dear Thomas,

    How can anyone believe that one can get new physics just by choosing some wacky new coordinates? It's like claiming that Newtonian mechanics is diffeomorphism invariant just because it can be formulated in curvilinear coordinates.

    This is not the point. The question is what the appropriate coordinates (observables) were from the beginning, when there are several choices that agree in the limit that we have observed. If momentum space has a curvature, and its volume is finite in one or more directions, then you cannot choose a global coordinate system that is flat everywhere. In the vicinity of p=0 (note that momentum space does have a center) you might be fooled into believing that flat coordinates are appropriate everywhere, but if you integrate, you can't use these coordinates globally. If you do so, you might erroneously think the volume is infinite, because you haven't taken into account the curvature. If you want to translate that to DSR: the locally flat coordinates are the pseudo-variables, the global coordinates are the physical variables with a modified behaviour at large momenta. Around p=0 both agree.

    The difference between my approach and the standard DSR approach is that I only distinguish the two when indeed under the integral, that is, for virtual particles. Note that the modification I start from, namely that the convergence of the integral over momentum space is improved by changing the properties of that space, is by definition invariant under the choice of variables, provided these are suitable global coordinate systems and the transformation is done correctly (i.e. you have to change the volume element as well). E.g. in de Sitter space you can choose many different coordinates. In standard DSR, these describe different DSR theories. In my approach the only thing that matters is the global structure of momentum space, one requirement being that it is almost flat around p=0 (you can translate that into a requirement on k(p) if you want to compare to my papers).

    One way or the other, the modification does not come from changing the coordinates, but from changing the base space, which is not an empty statement.

    Best,

    B.

  18. For references on the velocity, I suggest hep-th/0304027, hep-th/0207022, and hep-th/0211057.

    The only possible way that DSR could be physical is indeed in its impact on virtual particles. By choosing a relatively nice-looking regulator in the deformed coordinates, one could potentially get a physical effect. However, doing this is no different from taking ordinary coordinates and using a Lorentz-violating regulator. DSR may perhaps be suggestive of HOW to choose a Lorentz-violating regulator, but it still does not describe any physics that cannot perfectly well be expressed in conventional coordinates.

  19. hi Bee,

    How would you describe a particle being in an interaction, and the dependence of the boost on it?

    Let's consider a simple example of two interacting particles 1 and 2. The generators of the Poincare Lie algebra are

    P = p_1 + p_2 (total momentum)
    J = j_1 + j_2 (total angular momentum)
    H = h_1 + h_2 + V (total energy)
    K = k_1 + k_2 + W (total boost)

    where V(r_1, r_2, p_1, p_2) is the potential energy of interaction, which depends on positions and momenta of both particles, and W(r_1, r_2, p_1, p_2) is the boost interaction. It is important to note that if V is non-zero, then W should be non-zero as well, in order to preserve the commutators of the Poincare Lie algebra.

    Now, let us write the time evolution of the first particle's momentum

    p_1(t) = exp(iHt)p_1(0)exp(-iHt)
    = p_1(0) + i[V, p_1] t + ...

    The commutator [V, p_1] is non-zero, which means that there is a non-trivial dynamics due to the interaction between particles 1 and 2. This is well-known.

    Now, for the less well-known stuff. Let us write the transformation of p_1 with respect to a boost with rapidity s, denoting by K_0 = k_1 + k_2 the interaction-free part of the boost generator:

    p_1(s) = exp(iKs)p_1(0)exp(-iKs)
    = p_1(0) + i[K_0,p_1]s + i[W,p_1]s +...
    = p_1 cosh(s) - h_1 sinh(s) + i[W,p_1]s + ...

    The first two terms on the right hand side are the usual Lorentz transformation formula


    p_1(s) = p_1 cosh(s) - h_1 sinh(s) (1)

    However, there is also the interaction-dependent term i[W,p_1]s and higher order terms (denoted by ...) which make the boost transformation law of the particle's momentum different from the simple linear Lorentz formula (1) in the presence of interaction.

    The total momentum P = p_1 + p_2 of the two-particle system transforms by the usual Lorentz formula (1); however, the momenta of individual interacting particles, p_1 and p_2, have complex interaction-dependent transformation laws. These laws coincide with the Lorentz formula only when the interaction is absent, V = W = 0.

    One can develop these ideas further and conclude that in the classical limit boost transformations of trajectories (or world-lines) of interacting particles are different from the usual Lorentz transformations. See, for example,

    D. G. Currie, T. F. Jordan, E. C. G. Sudarshan, "Relativistic invariance and Hamiltonian theories of interacting particles", Rev. Mod. Phys. 35, 350 (1963).

    Eugene.

  20. Bee,
    Your comment of 3:03 PM, December 09, 2006 is indeed confusing.

    Presumably one (desired?) result of the kind of deformation you're looking at is a softening of the high energy behavior of scattering amplitudes?

    Presumably the difference between what you're trying to do and using a Lorentz-violating momentum cut-off is that in the latter, the cut-off is usually chosen to be very much higher than the energy scale of the particular problem, while in the former, your limit is when the energy scale is close to the regulator cut-off?

  21. Dear Arun:

    Bee,
    Your comment of 3:03 PM, December 09, 2006 is indeed confusing.


    Well, reading it again, I realize it is confusing. I am sorry. What I was trying to explain there is the motivation for 'standard' DSR, this motivation being: one can't boost someone into the super-Planckian regime. The motivation for my model is somewhat different. To stay with the example from above: if you boost your particle and let it travel through the medium, it will scatter with the constituents of the medium. But when the typical center of mass energy for such a scattering event comes close to the Planck energy (and the impact parameter close to its inverse), then the scattering process will be significantly affected by quantum gravitational effects, the result displaying a finite ability for resolution (cross-sections stagnate) and a generalized uncertainty principle. (This is the postulate of the model, which is motivated by several partial results from more fundamental approaches.)

    You have actually almost answered the question from Anonymous above. Yes, the point is that using the DSR formalism for a non-Lorentz-violating regulator, one can go up to arbitrarily high energies (note that in my model there is no bound on the energy of a particle, since I don't know how to make sense of that). The point is that in this limit the exact properties of the model don't matter, since the only thing that is relevant is the asymptotic limit (that asymptotic limit being a requirement of the model and the same for all possible choices).

    Best,

    B.

  22. Hi Eugene,

    Thanks, this is very interesting. It seems to me it should be possible to make some connection between this and the formalism I have used. I mean, I have essentially postulated such a behaviour and just parameterized it, but it might be helpful to approach it this way.

    But I will have to think about it, and get back to you. Best regards,

    Sabine

  23. Dear Anonymous,

    Thank you for providing the references. Indeed, I read these papers a while ago. I will address them elsewhere; this comment section isn't really the best place for it. Also, this is connected to a paper I'm working on.

    Regarding your concern that my model doesn't really provide us with any insights: there were several reasons why I found the approach pursued in my model interesting.

    The one is that with this kind of regularisation, one can actually go to energies close to and above the Planck scale (note that in my model there is no upper bound on the energy; in momentum space there is a 'squeezing factor' at high energies, which suppresses the contributions, but there is no hard cut-off). The point that I wanted to make with this was related to the multitude of calculations that were done some years ago in models with extra dimensions, which examined production of KK-excitations or gravitons at energies potentially close to or above (the new) Planck scale. I wanted to show that one can't just do this and drop the known fact that at this scale the limiting properties of the Planck scale should become important as well. This affects the cross-sections, and it's just inconsistent to ignore it.

    The other point is that usual regularization works fine in 4 dimensions, but in higher dimensions the result will explicitly depend on the regulator (since the coupling constant isn't dimensionless). In my model this regulator is provided also in higher dimensions; this was essentially the point of my work with the running coupling (hep-ph/0405127). Well, I've heard there are people who actually believe the universe isn't really 4-dimensional.


    And then this model can in principle be used as a connection between the standard model and some fundamental theory. Its properties are parametrized in some function (or, alternatively, in the geometry of momentum space) that on the one hand affects observables, and on the other hand should be determined by the fundamental theory.

    I hope this explains my motivation.

    Best,

    B.

  24. Bee: "If there is an upper bound on the integration in momentum space as it appears in loop integrals, it requires a modified transformation behaviour for the momentum of the virtual particle, otherwise the bound will indeed break Lorentz invariance."

    Introducing the upper bound when integrating over the 4-momentum of a virtual particle does not break Lorentz invariance, because the cutoff is a Lorentz invariant.

    "I don't know how to achieve this without a DSR-like transformation for the exchange particle (or a curved momentum space, see above), this is how I stumbled across the topic."

    You don't need DSR or any other modifications, just use the cutoff regularization.

  25. Anonymous: If you introduce a hard cutoff in any direction and boost it, it will be at a different value; thus it isn't invariant under (usual) Lorentz trafos. The 4-volume trivially stays invariant, but that isn't the point.

  26. Bee: "Anonymous: If you introduce a hard cutoff in any direction and boost it, it will be at a different value; thus it isn't invariant under (usual) Lorentz trafos. The 4-volume trivially stays invariant, but that isn't the point."

    When one integrates over the 4-momentum of a virtual particle, the cutoff is a Lorentz invariant. It's just the radius of a 4-sphere once you perform the Wick rotation. One then goes to spherical coordinates, integrates over the three angles, and finally integrates over the radial direction up to the cutoff. This manifestly preserves Lorentz invariance. Hence, there is manifestly a maximum scale \Lambda (on which all observers can agree), and there is absolutely no need to deform anything.
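    Spelled out (a minimal sketch of the procedure just described): after Wick rotation p^0 -> i p^4, the loop integral becomes

    $$ \int_{p_E^2 \le \Lambda^2} d^4 p_E \; f(p_E^2)\,, \qquad p_E^2 = (p^1)^2 + (p^2)^2 + (p^3)^2 + (p^4)^2\,, $$

    and since the bound only involves the invariant p_E^2, the regularized result respects the (Euclideanized) Lorentz symmetry. Note that this is a bound on the invariant p^2, not on the energy component alone.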

  27. Anonymous: There are many ways to use a cutoff to regularize momentum-space integrals that lead to Lorentz-invariant results. The procedure you are talking about is one. That isn't the same as saying the cut-off introduced in momentum space is invariant under Lorentz transformations. Note that this is before the Wick rotation.

  28. Bee, you first said: "If there is an upper bound on the integration in momentum space as it appears in loop integrals, it requires a modified transformation behaviour for the momentum of the virtual particle, otherwise the bound will indeed break Lorentz invariance."

    Now you are admitting that: "There are many ways to use a cutoff to regularize momentum-space integrals that lead to Lorentz-invariant results. The procedure you are talking about is one."

    Hence, what is your motivation for regularizing the integral by deforming the Lorentz invariance when this can easily be done in the standard way? What's the advantage?

  29. I meant: by deforming the Lorentz transformations, when this can easily be done in the standard way. What's the advantage?

  30. Anonymous: The difference I was pointing out is between there being a prescription to use a cutoff in the momentum space integration that leads to a Lorentz invariant result, and the geometry of momentum space being modified in a Lorentz invariant way such that the integration is finite. The former: Lorentz invariant RESULT. The latter: Lorentz invariant MODEL based on a non-trivial geometry of momentum space. The former: Prescription employed to deal with integrals that annoyingly turn out to be infinite. The latter: integrals are finite. The former: Cutoff is introduced ad hoc. The latter: Cutoff is a consequence of the minimal length.

    Needless to say, you can regularize QFTs without a cutoff. If your cutoff procedure is any good, the result should be the same.

  31. Bee: "The former: Lorentz invariant RESULT."

    This RESULT is obtained in a Lorentz-covariant way by preserving the Lorentz symmetry. In your MODEL you deform the Lorentz transformations in order to achieve a (the same???) RESULT.

    "The latter: Lorentz invariant MODEL based on a non-trivial geometry of momentum space."

    If the motivation for this MODEL is to regulate Feynman integrals, then what is its advantage over the standard cutoff technique or other regularization methods? Does your method preserve gauge invariance?

    "The former: Prescription employed to deal with integrals that annoyingly turn out to be infinite."

    The integrals are only formally infinite if you don't provide any physics input. Once you realize that you are dealing with an effective theory the integrals are perfectly finite. The cutoff is just the RG scale where new degrees of freedom kick in.


    "The latter: integrals are finite."

    They are only finite because you introduced a cutoff by deforming the Lorentz transformations. By introducing a universal cutoff near the Planck scale you seem to imply that this is the way to regulate Feynman integrals. In that case, according to this approach, the four-fermion Fermi theory of weak interactions is perfectly ok all the way up to the Planck scale.

    "The former: Cutoff is introduced ad hoc."

    Not at all, the cutoff is the RG scale up to which the particular effective field theory is valid.


    "The latter: Cutoff is a consequence of the minimal length."

    By the way, I'm still puzzled: is this a Lorentz-invariant proper length?

  32. Anonymous,

    Could you please clarify the following: you have a momentum space, you introduce coordinates in it, and you cut them off at some finite value. How do you ensure this finite value is the same under all Lorentz transformations?

    The motivation of the model was not to regularize integrals. One just gets this for free, and I think it's neat. It isn't so surprising that a minimal length acts as a regulator, but it's still nice to see it works.

    The integrals are only formally infinite if you don't provide any physics input

    Right. And the physics input I've provided is that there's a minimal length. I never claimed that's the only way to regularize momentum space integrals.

    Regarding gauge invariance: let me first add that my model is an outsider among the DSR models. As for the common DSR approach, I have no clue whether it's gauge invariant; it's somewhat hard to tell without a Lagrangian. My model is manifestly gauge invariant, which isn't hard to see. However, it has a higher order Lagrangian, and if you truncate the series, gauge invariance will only be approximate up to the order where you truncated.

    By introducing a universal cutoff near the Planck scale you seem to imply that this is the way to regulate Feynman integrals

    Huh? It certainly isn't, and I never said or "implied" anything like that. At least in 4 dimensions, the most common regularization scheme nowadays seems to me to be dimensional regularization. That however doesn't work in higher d, where it becomes somewhat arbitrary. The regularization with the momentum space deformation works in any number of dimensions.

    For all I know, proper lengths are always Lorentz invariant.

    Best,

    B.

