When I submitted the title for this talk, I actually expected a reply saying "Look. This is THE international conference on Quantum Gravity. We already have ten people speaking about phenomenology - could you be a bit more precise here?". But instead, I found myself joking I **am** the phenomenology of the conference. Therefore, I added a somewhat extended motivation to my talk which I found blog-suitable, so here it is.

The standard model (SM) of particle physics [1] is an extremely precise theory and has demonstrated its predictive power over the last decades. But it has also left us with several unsolved problems, questions that cannot be answered - that cannot even be addressed - within the SM. There are the mysterious whys: why three families, three generations, three interactions, three spatial dimensions? Why these interactions, why these masses, and these couplings? There are the cosmological puzzles; there is dark matter and dark energy. And then there is the holy grail of quantum gravity (see also: my top ten unsolved physics problems).

There are two ways to attack these problems. One is a top-down approach: starting with a promising fundamental theory, one tries to reach common ground and to connect to the standard model in a reductionist fashion. The difficulty with this approach is that one not only needs that 'promising candidate for the fundamental theory', but most often also has to come up with a whole new mathematical framework to deal with it. Most of the talks at the conference [2] were top-down approaches. The other way is to start from what we know and extend the SM in a constructivist approach. An example would be to take the SM Lagrangian and just add all kinds of higher order operators, thereby potentially giving up symmetries we know and like. The difficulty with this approach is to figure out what to do with all these potential extensions, and how to extract sensible knowledge about the fundamental theory from them.

I like it simple. Indeed, the most difficult thing about my work is how to pronounce 'phenomenology' (and I've practiced several years to manage that). So I picture myself somewhere in the middle. People have called that 'effective models' or 'test theories'. Others have called it 'cute' or 'nonsense'. I like to call it 'top-down inspired bottom-up approaches'. That is to say, I take some specific features that promising candidates for fundamental theories have, add them to the standard model, and examine the phenomenology. A typical example is just asking what the presence of extra dimensions leads to. Or the presence of a minimal length. Or a preferred reference frame. You might also examine what consequences it would have if the holographic principle or entropy bounds held. Or whether stochastic fluctuations of the background geometry would have observable consequences.

These approaches do not claim to be fundamental theories of their own. Instead, they are simplified scenarios, suitable for examining whether the realization of certain features would be compatible with reality. These models have their limitations: they are only approximations to a full theory. But to me, in a certain sense, physics is the art of approximation. It is the art of figuring out what can be neglected, the art of building models, and the art of simplification.

*"Science may be described as the art of systematic over-simplification."* ~Karl Popper

But.

One can imagine more beyond the standard model than just QG! So, if we are talking about the phenomenology of quantum gravity, we'll have to ask what we actually mean by that. To me, quantum gravity is the question of how we can reconcile the apparent disagreements between classical General Relativity (GR) and QFT. And I say 'apparent' because nature knows how quantum objects fall, so there has to be a solution to that problem [3]. To be honest though, we don't even know that gravity is quantized at all.

I carefully state we don't 'know' because we have no observational evidence whatsoever that gravity is quantized. (The fact that we don't understand how a quantized field can be coupled to an unquantized gravitational field doesn't mean it's impossible.) Indeed, one can be sceptical about whether it's observable at all. This is reflected very aptly in the quotation below from Freeman Dyson, which I think is deliberately provocative and basically says my whole field of work doesn't exist:

*"According to my hypothesis, the gravitational field described by Einstein's theory of general relativity is a purely classical field without any quantum behavior [...] If this hypothesis is true, we have two separate worlds, the classical world of gravitation and the quantum world of atoms, described by separate theories. The two theories are mathematically different and cannot be applied simultaneously. But no inconsistency can arise from using both theories, because any differences between their predictions are physically undetectable."* ~Freeman Dyson [Source]

Well. Needless to say, I **do** think there is phenomenology of QG that is in principle observable, even though we might not yet be able to observe it. And I **do** think that observing it will lead us on a way to QG.

However, there are various scenarios that could be realized at Planckian energies. Gravity could be quantized within one or the other approach. Also, higher-order terms in classical gravity could become important. Or there could be semi-classical effects coming into the game. One then tries to take some insights from these approaches, leading to the above mentioned phenomenological models. Already here one most often encounters a redundancy: various scenarios can lead to the same effect. Modified dispersion relations, for example, or the Planck scale being a fundamental limit to our resolution, are effects that show up in more than one approach. In addition, there's a second step in which these models are then used to make predictions. Again, various models, though different, could yield the same predictions. That's what I like to call the 'inverse problem': how can we learn something about the underlying theory of quantum gravity from potential signatures?
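To make the 'same effect from different scenarios' point concrete: a modified dispersion relation in which the photon speed picks up a Planck-suppressed energy dependence produces an accumulated time delay over cosmological distances. A back-of-the-envelope sketch (the linear suppression, the coefficient xi, and the numbers are illustrative assumptions, not predictions of any specific model):

```python
# Back-of-the-envelope: arrival delay of a high-energy photon if the
# speed of light picks up a Planck-suppressed energy dependence,
# v(E) ~ c * (1 - xi * E / E_Planck).  The linear suppression is an
# assumption; different scenarios predict different powers and signs.

C = 2.998e8           # speed of light [m/s]
E_PLANCK = 1.22e19    # Planck energy [GeV]
MPC = 3.086e22        # one megaparsec [m]

def time_delay(E_gev, distance_mpc, xi=1.0):
    """Delay relative to a low-energy photon [seconds]."""
    return xi * (E_gev / E_PLANCK) * (distance_mpc * MPC) / C

# A 10 GeV photon from a gamma-ray burst at ~1 Gpc:
dt = time_delay(10.0, 1000.0)
print(f"delay ~ {dt:.3f} s")   # of order a tenth of a second
```

Delays of this size are what makes gamma-ray-burst timing a candidate probe at all - and also why a single such measurement could never pin down which scenario produced it.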

In the figure below I stress 'new and old' phenomenology because a sensible model shouldn't only be useful to make new predictions, it should also reproduce all that stuff we know and like. I have a really hard time taking seriously a model that doesn't reproduce the standard model and GR in suitable limits.

Now here are some approaches in this category of 'top-down inspired bottom-up approaches' that I find very interesting (for some literature, see e.g. this list):

- Extra dimensional extensions of the standard model, which can lower the Planck scale to values soon accessible at the LHC
- Deformations of Special Relativity (observer independent?!?)
- Effects of a generalized uncertainty principle or, respectively, a minimal length scale
- Breaking of Lorentz invariance
- Decoherence effects from space-time fluctuations
- Imprints of an early quantum phase in the CMB/neutrino/graviton background

(And possibly we can soon add macroscopic non-locality to that list, an interesting scenario that Fotini, Lee and Chanda are presently looking into.)
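The first item on the list can be made quantitative: in models with large extra dimensions the observed Planck mass is related to the fundamental scale M_* by M_Pl^2 ~ M_*^(n+2) R^n for n compact dimensions of radius R. A rough sketch of the required radius (order-one factors and 2*pi's dropped; numbers mine):

```python
# Order-of-magnitude size of n compact extra dimensions needed to
# lower the fundamental gravity scale M_* to ~1 TeV, using the ADD
# relation M_Pl^2 ~ M_*^(n+2) R^n in natural units (O(1) factors
# deliberately dropped).

M_PLANCK = 1.22e19        # 4d Planck mass [GeV]
GEV_INV_TO_M = 1.973e-16  # hbar*c: 1 GeV^-1 expressed in meters

def radius_m(n, m_star_gev=1e3):
    """Compactification radius [m] for n extra dimensions."""
    r_gev_inv = (M_PLANCK**2 / m_star_gev**(n + 2)) ** (1.0 / n)
    return r_gev_inv * GEV_INV_TO_M

for n in (1, 2, 3):
    print(f"n={n}: R ~ {radius_m(n):.1e} m")
# n=1 comes out at solar-system scales (excluded), n=2 at sub-millimeter,
# n=3 at around ten nanometers.
```

This is the sense in which the Planck scale can be brought down to values 'soon accessible at the LHC': the fundamental scale is a TeV, and the huge observed Planck mass is a consequence of the volume of the extra dimensions.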

However, whenever one works within such a model one has to be aware of its limitations. The models with large extra dimensions, for example, are in my opinion a case in which what sensibly could be done has been done. And now we'll have to turn on the LHC and see. After the original ideas had been outlined, many people began to build more and more specific models with a lot of extra features. It's not that I don't find that interesting, but it's somewhat beside the point. To me it's like building a house and worrying about the color of the curtains before the first brick has been laid.

Now, all of the approaches I've mentioned above are attempts to get definitive signatures of QG, but so far none of these predictions on its own would be really conclusive. Take, e.g., a possible modification of the GZK cutoff: it could have been 'new' physics, but it would not have been clear which - or maybe just some not-yet-understood 'old' physics, like the showers not being created by protons from outside our galaxy, as generally assumed.

So, my suggestion to make progress in this regard is to construct models that are suitable to investigate observables in various different areas. In such a way, we could be able to combine predictions and make them more conclusive. Think about the situation with GR at the beginning of the last century: it predicted a perihelion precession of Mercury, but there were other explanations, like an additional planet, a quadrupole moment of the sun, or maybe a modification of Newtonian gravity. It took another observable - in this case light deflection by the sun - that was predicted within the same framework to confirm that GR was the correct description of nature [4]. And please note, a factor 2 mattered here [5].
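The factor 2 of footnote [5] is easy to make explicit: GR predicts a light deflection at the solar limb of 4GM/(c^2 b), twice the naive Newtonian estimate. A quick check of the numbers (standard solar constants assumed):

```python
# The 'factor 2' of footnote [5]: light deflection grazing the sun,
# GR value vs. the Newtonian estimate, which gives exactly half.

GM_SUN = 1.327e20   # G * M_sun [m^3/s^2]
R_SUN = 6.96e8      # solar radius [m], the impact parameter b
C = 2.998e8         # speed of light [m/s]
RAD_TO_ARCSEC = 206265.0

alpha_gr = 4 * GM_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC   # GR: 4GM/(c^2 b)
alpha_newton = alpha_gr / 2                               # Newtonian: 2GM/(c^2 b)

print(f"GR:        {alpha_gr:.2f} arcsec")   # ~1.75 arcsec
print(f"Newtonian: {alpha_newton:.2f} arcsec")
```

The 1919 eclipse measurements were precise enough to distinguish roughly 1.75 from 0.87 arcseconds, which is why the same framework's second prediction settled the question.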

I personally am very optimistic about the future progress in quantum gravity - and not only because it's hard to beat Dyson's pessimism. I think it doesn't matter where we start from, may it be a top-down approach, a bottom-up approach, or somewhere in the middle. I also think it doesn't matter which direction each of us starts into. The history of science tells us that there often are various different ways to arrive at the same conclusion. A particularly nice example is how Schrödinger's wave formulation and Heisenberg's matrix approach eventually turned out to be part of the same theory.

I think that as long as we listen to what our theories tell us, take into account what nature has to say, and are willing to redirect our research accordingly - and as long as we don't get lost in distractions along the way - we have good chances to find a way to quantum gravity. And this finally solves the mystery of the quotation on the last slide of my talk:

*'The problem is all inside your head' she said to me*

*The answer is easy if you take it logically*

*I'd like to help you in your struggle to be free*

*There must be fifty ways to [quantum gravity]*


~Paul Simon, 50 Ways to Leave Your Lover

[1] In my notation the SM includes General Relativity.

[2] The exception being the very recommendable talk on Effective Quantum Gravity by John F. Donoghue.

[3] Though 3 years living in the US have taught me there's actually no such thing as a 'problem' - it's called a challenge. One just has to like them, eh?

[4] Admittedly, what the measurement actually said was not as straightforward as one would have wished. I leave it to my husband to elaborate on this interesting part of the history of science.

[5] The resulting deviation can be reproduced in the Newtonian approach up to a factor 1/2.

TAGS: PHYSICS, QUANTUM GRAVITY, PHENOMENOLOGY

## 68 comments:

An assumed fundamental truth is not true. No axiomatic system is stronger than its weakest founding postulate. Euclid cannot navigate oceans given his Fifth (Parallel) Postulate.

Metric theory postulates and string theory (BRST invariance) demands the Equivalence Principle, as do all quantum theories. Affine, teleparallel, and noncommutative theories allow angular momentum EP violations: physical spin, quantum spin (magnets: aligned particle or orbital angular momenta), spin-orbit coupling (PSR J0737-3039A/B), and chemically identical opposite geometric parity mass distributions. The only allowed large amplitude violation is the last, where nobody has looked. The proper challenge of spacetime geometry is test mass geometry.

The only inarguable argument is observation.

Three, three, three...

Babylon 5! Sheridan: "A Vorlon said understanding is a three edged sword: your side, their side, and the truth."

Gruber Cosmology Prize

In 1997, the High-z team’s analysis of observations of distant exploding stars first showed evidence that the expansion of the universe is speeding up. That discovery flew in the face of common wisdom that the universe’s expansion should be slowing due to gravity.

Common Wisdom? based on what?

Bee, what would you consider to be physical proof of quantum gravity - not mathematical or theoretical, but actual unequivocal physical proof of QG?

That's exactly the question Quasar...

If someone could measure smallest areas and volumes and they'd come in 'quanta', I would consider this proof. If we could make scattering experiments and measure graviton cross-sections (or string excitations?!) I'd consider this proof. But this is very far off from realistic measurements that we can presently make. I see chances that QG effects left imprints in the CMB, but there are too many scenarios that result in similar signatures (e.g. running of the spectral index) and there is the issue that we haven't really pinned down the details of inflation, which would also affect the data. I also see the possibility that some of the decoherence effects might become measurable, but again I have to ask would this say gravity is quantized? One can have stochastic background fluctuations without quantizing. Maybe Lorentz invariance is indeed broken, this could be measured fairly soon - but what does this have to do with quantum gravity? Sure, there are scenarios that are argued to be motivated by QG, but no tight connection also here.

That's why I am saying unless we come up with something ingeniously new, we'll need to combine several observations to rule out other possible explanations. Take neutrino oscillations as an example: there have been other explanations for the data, which was originally just missing neutrinos from the sun. Later it's been confirmed (SNO) that the neutrinos are not missing, but arrive with a changed flavor. Still, they could have been decaying into each other. It took a whole lot of experiments to rule out alternative explanations, and to constrain the parameter space to the precise values that we have today. I think by now there is almost nobody who doubts that neutrinos oscillate. But that didn't happen from one day to the next with a single measurement.

Best,

B.

I think in your list of possible observational signatures of quantum theories of gravity there is one important thing missing: If you come up with some microscopic theory it is extremely important not only to check how quasar observations or the LHC or some even more powerful collider can see tiny effects, but at zeroth order to make sure that the theory does not screw up everyday observations.

For example, when I walk through life, the main source of breaking of isotropy is that the earth is in one singled out direction (I call it "down") and abstracting from that all directions are pretty much equal. Thus any theory that breaks SO(3,1) invariance should be able to explain why I do not see a sign of that every second.

Other constraints come from the fact that I can see pretty far (several km on a nice day along the surface of the earth, many many light years in the sky). This implies that photons should better not decay or do weird things in significant amounts over such distance scales.

The soccer-ball problem is of a similar flavour. In a recent paper, we argued that the simplest form of space-non-commutativity does not fulfill this criterion and in my second paper about the LQG string I argue that if the LQG method of quantisation is applied to the harmonic oscillator it typically does not pass this criterion.

I think the lesson is that it's far too easy to screw up physics at the 1m length scale if you meddle with the fundamental theory. And when it comes to gravity (or for example varying constants, today we had a colloquium about these) make sure you do not violate extremely strict bounds on fifth force/equivalence principle tests.

So before you start to predict new physics make sure you do not violate old physics!

Hi Robert:

Yes, I totally agree. As I wrote above:

I stress 'new and old' phenomenology because a sensible model shouldn't only be useful to make new predictions, it should also reproduce all that stuff we know and like. I have a really hard time to take seriously a model that doesn't reproduce the standard model and GR in suitable limits.And, as I say in my conclusion, we should take seriously what our models or theories try to tell us. To take your example, if there is a soccer-ball problem that means the SM limit can not be reproduced, one has to take that seriously. One can't just believe in one prediction of a model, and convinently neglect other 'predictions' (soccerballs don't exist) just because they are in apparent disagreement with nature.

Thanks for pointing out your works on the topic, I will have a look if I find a chance (presently on my way to the airport). You might also find my recent paper on this interesting: hep-th/0702016.

Best,

B.

Bee said: Maybe Lorentz invariance is indeed broken, this could be measured fairly soon...

Confused again - w/o Lorentz invariance there are many possible marginal and relevant interactions, whose coefficients are measured to be zero to exquisite accuracy, much better than the accuracy suggested in any future experiment.

So, unless I am missing something, we don't need any more data to falsify that idea. Am I missing something?

B,

I read your paper when it came out but my memory is not very fresh so what I am saying might be completely irrelevant to your paper. Please blame this on me not remembering properly!

My first recollection of what you did is that you found a way to modify DSR not to have the soccer-ball problem. However by doing this you had to shift parameters significantly such that a "discovery just around the experimental corner" is no longer viable.

Then there is another caveat of what one should not do (and again: I do not want to imply that you or anybody else did this): The soccer-ball problem arises because a naive tensor product of DSR representations is no longer a DSR representation (tensor products arise when you consider for example the momentum of a bound state of two particles in terms of the individual momenta. In the conventional theory, the tensor product is just the sum but, as the gentlemen Clebsch and Gordan explained to us in the case of SO(3) already, it's more complicated).

So, let's do something stupid: For an ordinary momentum vector p define

P = f(p)

with f(p)=p/sqrt(1+p^2/m^2). Then obviously the components of P are bounded by m. The capital P together with the energy E carry a non-linear representation of the Lorentz group and the tensor product is defined by

P1 "+" P2 := f( f^-1(P1) + f^-1(P2) )

Look what we have achieved: We now have a new, non-linear momentum space which lives in a sphere of radius m but for which the tensor product of representations still works fine and there is no soccer-ball problem...

...the only problem being that we are not doing anything new physically, we are just using unusual coordinates for ordinary momentum space, coordinates which hide the linear structure of momentum space as a vector space. But everybody is of course free to use whatever coordinates they like...
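Robert's toy construction is easy to check numerically. A minimal sketch (I take the plus sign under the root, f(p) = p/sqrt(1 + p^2/m^2), which is what makes P bounded by m):

```python
# The toy example above: map ordinary momentum p to P = f(p), which is
# bounded by m, and define a deformed 'addition' that is secretly just
# ordinary addition written in disguised coordinates.

import math

M = 1.0  # the would-be 'maximal momentum'

def f(p):
    """Nonlinear coordinate on momentum space; |f(p)| < M for all p."""
    return p / math.sqrt(1 + p**2 / M**2)

def f_inv(P):
    """Inverse map back to the ordinary momentum coordinate."""
    return P / math.sqrt(1 - P**2 / M**2)

def deformed_add(P1, P2):
    """P1 '+' P2 := f( f^-1(P1) + f^-1(P2) )."""
    return f(f_inv(P1) + f_inv(P2))

# 'Add' a thousand particles, each carrying the maximal momentum scale:
# the deformed total never exceeds M, yet nothing physical happened --
# in the p coordinate it is just the ordinary sum.
total = f(1.0)
for _ in range(999):
    total = deformed_add(total, f(1.0))
print(f"deformed total  = {total:.6f}  (< {M})")
print(f"undisguised sum = {f_inv(total):.1f}")
```

The point of the example survives the numerics: the bound on P is a coordinate artifact, since undoing the map recovers the plain linear sum of the constituents.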

Hi Moshe,

If you are missing something, I too am missing it. Yes, there are excellent constraints on Lorentz violation in the standard model. Some so-called 'test theories' with a modified dispersion relation evade these by just not writing down a Lagrangian and/or claiming there is no low energy effective description. I personally don't think the latter argument improves the situation, or makes the claim that there could be effects observable soon (that are not yet ruled out) more convincing. There still is the possibility though that we have some preferred frame effect, most likely aligned with the CMB, so effects would be tiny - well, you know, tightly constrained never means exactly zero... I find this possible, but I don't see why it must necessarily have something to do with quantum gravity, so I am not particularly excited about it.

Best,

B.

Hi Robert:

Thanks for the feedback. I was not aware my paper could be misread as an attempt to shift parameters out of experimental detection. This was most definitely not the point.

What I was saying in the paper is that if we want a field theory (well, we want to reproduce the SM), we should start with a field theory. That means we are dealing with densities rather than integrated quantities like total energy and momentum. There's no reason to expect QG to become important at an energy close to the Planck scale if that energy is not focused into a small region of space-time. That is, the relevant quantity is the energy density. In this case, there is no soccer-ball problem, as the density of objects (bound states) can drop below Planckian values easily, even though the total energy exceeds the Planck mass.

The reason why I wrote this paper is to make clear that the raised expectation that QG effects might be relevant for photons from gamma ray bursts (or the GZK cutoff, but I believe this is dead by now) is not reasonable.

To your remark above: The wavelength is a geometrical quantity, it can be defined through a function's periodicity. The basic principle of quantization is that the momentum is proportional to the wave number, p=\hbar k. This is an assumption, not a derivation. The point I start from (and I am not talking about DSR right now) is to allow a more general relation, which you might understand as \hbar being energy dependent or just as having a non-linear relation between both, p=f(k). This too is an assumption, not a derivation.

If you try to quantize that approach you end up with a higher order derivative theory. This theory is not equivalent to the usual theory.

If one has a Lagrangian formulation (higher orders or not), one can derive conserved quantities (provided one has figured out the symmetries in position space). This is pretty much straightforward. It turns out (unsurprisingly) that these are additive in the usual way. (In DSR they are not, I don't understand this.)

Differences between my approach and the DSR approach: I have no modification for free particles (only off shell); in my case the (physical) momentum transforms under the standard Lorentz transformations; in my case there is no energy-dependent speed of light.

*But everybody is of course free to use whatever coordinates they like...* The modification doesn't come from choosing coordinates but from modifying the geometry in momentum space. It's not a flat space, and choosing different coordinates doesn't change that.

Best,

B.

Thanks B, I think we agree with each other. I heard before the argument about not having an effective description, I find it a little funny. It is a little like me asking about the anomalous magnetic moment of the electron, just to be reassured that no, we don't have any electrons in the model. If your theory does not reduce to conventional EFT at low energies, Lorentz violation is the least of your problems...

OT

From the Reference Frame:

"my primary concerns are about the future, i.e. the postdocs coming after me."

-----hmmm what did you do to piss them off, Bee and why are they coming after you ;)

I can only assume Lubos is on dialup or internet cafe connection in Pilsen as usually he is quick to squelch nonsense, but, like you, I am not posting over there until he slams shut the Crackpot Pandora's box.

Physics is a reductionist science. The string is hypothesized to be the smallest particle of which everything is made. The goal of reductionist physics is to find a few simple principles that underlie complex phenomena. The string theorists invent astonishing physical and mathematical complexity as the end point of reductionism. Well, quite obviously, the end point of reductionism is a theory as Einstein stated that we can teach to the kids and quite obviously not a theory that nobody can understand. When the end point of reductionism is the greatest complexity imaginable it is just plain absurd.

In 2000 an independent scientist working alone sent a copy of his book, The N-particle Model, to all the physics graduate students at Berkeley. Now he's back and on the Reference Frame. He claims the universe is made of a single elementary particle that he now names the Ö particle and that particle is neither created nor destroyed. He claims its energy is 2.68138x10^-54 J. He claims the small size of the Ö particle is the reason electric, magnetic, photon and gravity fields appear continuous.

There is the question about lemmings when they get to the edge of the cliff. Do they choose to jump off or are they responding only to herd instinct or maybe aerodynamically drafted. It looks to me like the theorists are right on the edge.

Hi Gordon :-)

My husband and I, we just had a good laugh on my behalf about the above sentence. It's one of these typical things that happens if one literally translates German into English. I hope it was at least possible to figure that what I meant to say with 'postdocs coming after me' was the next generation of postdocs...

Regarding the Ref. Frame: yes, it badly needs moderation, the signal to noise ratio is approaching zero. But Lubos' post wasn't terribly original anyhow.

Best,

B.

Hi Bee,

*I think it doesn't matter where we start from, may it be a top-down, a bottom-up approach or somewhere in the middle. I also think it doesn't matter which direction each of us starts into. The history of science tells us that there often are various different ways to arrive at the same conclusion. A particularly nice example is how Schrödinger's wave formulation and Heisenberg's matrix approach eventually turned out to be part of the same theory.* Feynman stressed the importance of looking at the world from different viewpoints, crucial to his success in quantum electrodynamics. (I think he knew more ways to look at electromagnetism than anybody else.) This is from his Nobel Prize Lecture:

*I think the problem is not to find the best or most efficient method to proceed to a discovery, but to find any method at all. ...a good theoretical physicist today might find it useful to have a wide range of physical viewpoints and mathematical expressions of the same theory (for example, of quantum electrodynamics) available to him.... If every individual student follows the same current fashion in expressing and thinking about electrodynamics or field theory, then the variety of hypotheses being generated to understand strong interactions, say, is limited. Perhaps rightly so, for possibly the chance is high that the truth lies in the fashionable direction. But, on the off-chance that it is in another direction - a direction obvious from an unfashionable view of field theory - who will find it? Only someone who has sacrificed himself by teaching himself quantum electrodynamics from a peculiar and unusual point of view; one that he may have to invent for himself.* I'm sure Feynman would have agreed with you that pursuing many approaches is the sensible way to search for a "quantum gravitodynamics."

Good you're not trying to solve all the riddles of the universe, or you'd be frozen as though by a sphinx.

Say we take the lack of [observed] quantization of gravity at face value.

What if gravity isn't quantized? Could that be possible? I don't honestly think that it is true, I'm just throwing it out there because *everything else* is being looked at as far as I know.

Could one coherent theory adequately handle a continuum for gravity and discretized everything else?

*Could one coherent theory adequately handle a continuum for gravity and discretized everything else?* To the extent that quantum mechanics itself is coherent, yes. I don't think we know how to define a measurement without a classical world.

Dear Kris:

Thanks for pointing out that quotation, I didn't know it. In fact, when I was preparing the talk I discussed with Stefan what other examples there are in the history of science, and the first thing he came up with was QED. I'll try to encourage him to have a post on it... Another example that came into my mind was supersymmetry, which has essentially been discovered repeatedly and independently. There are more examples like this from during the cold war, but I eventually settled on the one with QM because it was the one that required the least amount of explanation.

Dear Arun:

I don't think we'll ever be able to completely understand the universe (the principle of finite imagination ;-) ). I for myself would be incredibly happy to understand a small part of it (e.g. my husband). I esp. like how Douglas Adams put it:

"There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened."~Douglas Adams

Dear Eric:

I think you deserve a prize for reading between the lines :-) Indeed, what I didn't say in the talk but tried to indicate: gravity is apparently a theory that so very clearly doesn't WANT to be quantized; maybe we should take that more seriously and consider that it doesn't have to be quantized. Regarding your question: I don't think one can do much about it without understanding the measurement in quantum mechanics. The problem is, if one writes down gravity (classical) coupled to the standard model it just doesn't make sense, because it's like adding apples and oranges. The one is a classical field, the other is operator valued and actually needs to 'act' on something (that being the purpose of an operator). Now the standard approach is to take the classical field and try to also quantize it, with all the problems that come along with it. What about 'unquantizing' the other side? Well, nobody knows how to do that. I admit it doesn't sound too convincing as a topic to work on, but as I also mentioned above: just because we don't know how to do it doesn't mean it's impossible. Best,

B.

Dear Bee,

While you're in question-answering mode, why is the Standard Model a renormalizable QFT?

Granted, along with gauge invariance, anomaly cancelation, asymptotic freedom, renormalizability played a big role in the search that resulted in the Standard Model.

But now we know better: QFT and the Standard Model are merely effective theories, embedded in a more complete theory such as string theory. An infinite number of terms in the effective theory Lagrangian should not faze us; in principle every single one of them is calculable from first principles from string theory.

So, why?

Best,

-Arun (stuck in a neverending story)

Dear Arun:

I have the impression I am not so much answering questions as raising new ones. I had to think about your question for a while; it's a good one, and I don't have a remotely satisfactory answer (well, admit it, you'd have been surprised if I had had one). To negate the question: if we lived in a universe where the SM was not renormalizable, could it describe a world that makes sense? Or, to make matters more complicated: one that makes sense to us?

Now that sounds pretty much anthropic and I don't particularly like it, so let me try a different approach. Indeed, I too think an infinite number of terms should not necessarily bother us a priori. If one adds non-renormalizable terms to the SM, the need to add ever more seems to me to rather indicate our non-understanding of where we live in the low energy limit. Take e.g. a function like exp(-1/x^2 - x^2) (it's no coincidence that I chose a function that is symmetric under the exchange x -> 1/x). Now consider that the low energy limit is somewhere around zero. The function is essentially flat and everything works fine up to highest energies (okay, in the SM you still have to renormalize, but at least it's doable, and uniquely so). Now you go somewhat further away and add a polynomial correction term. Suddenly, everything goes wrong with the asymptotic limit. Why? Because every polynomial with finitely many terms diverges at infinity. If you'd want to 'fix' that to get a good asymptotic limit, well, it would take an infinite number of terms.
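The claimed properties of this toy function are easy to verify numerically; a small sketch (function name mine):

```python
# Numerical look at the toy function g(x) = exp(-1/x^2 - x^2): it is
# symmetric under x -> 1/x, and near zero it is essentially flat
# (it vanishes there together with all its derivatives).

import math

def g(x):
    return math.exp(-1.0 / x**2 - x**2)

# symmetry under x -> 1/x:
for x in (0.1, 0.5, 2.0, 10.0):
    print(f"g({x}) = {g(x):.3e},  g(1/{x}) = {g(1.0 / x):.3e}")

# 'flat' near zero: already at x = 0.05 the value is below 1e-170,
# so any finite-order polynomial correction dominates it there.
print(f"g(0.05) = {g(0.05):.3e}")
```

The same flatness that makes the function well behaved near zero is what a finite polynomial correction destroys at large x, which is the point of the analogy.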

I'm not entirely sure myself what I want to say with that... Something like this: we are living in a low energy domain around a renormalizable section; there are a lot of bad (non-renormalizable) extensions one can make that look really messy and wrong, but the full description might again be very simple. The biggest problem I see with the infinitely many terms is one of ugliness.

Best,

B.

PS: where are you stuck?

Arun, this is an excellent question, I always talk about this issue when teaching QFT. Why are we so lucky that the simplest QFTs (the renormalizable ones) turn out to be the ones relevant for nature, and this is true not just in high energy physics...

This is all a prelude to a discussion of the renormalization group. If you start with any theory at all at some high energy, the different kinds of effects are classified by what they do as you go to lower energies, where we do measurements. The so-called relevant (or marginal) operators are the ones that grow (or stay the same, respectively) as you lower the energies. So at low enough energies compared to the fundamental scale they will be the dominant ones. As you guessed, relevant or marginal is more or less synonymous with renormalizable. This means that to first approximation you'll always end up with a renormalizable action, no matter where you start.
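In one schematic formula (tree-level dimensional analysis only; Λ is the fundamental scale, E the measurement scale):

```latex
% A coupling g_i multiplying an operator \mathcal{O}_i of mass dimension
% \Delta_i enters the d = 4 action as
%   S \supset \int d^4x \; \frac{g_i}{\Lambda^{\Delta_i - 4}} \, \mathcal{O}_i .
% Running down from the fundamental scale \Lambda to the energy E:
%   g_i(E) \sim g_i(\Lambda) \left( \frac{E}{\Lambda} \right)^{\Delta_i - 4}
% \Delta_i < 4 : relevant   (grows toward the IR)
% \Delta_i = 4 : marginal   (stays the same, up to logs)
% \Delta_i > 4 : irrelevant (dies off as E/\Lambda \to 0)
```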

As for non-renormalizable terms, they will always be there if you are sensitive enough to see them. For example in the SM neutrino masses come from non-renormalizable (dimension 5) terms in the action.
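Schematically (the scale and the numbers below are the usual seesaw estimate, not a measurement):

```latex
% Dimension-5 (Weinberg) operator, suppressed by a heavy scale \Lambda:
%   \mathcal{L}_5 = \frac{c}{\Lambda} (L H)(L H) + \text{h.c.}
% After electroweak symmetry breaking, \langle H \rangle = v \approx 246\,\text{GeV}:
%   m_\nu \sim \frac{c\, v^2}{\Lambda}
% For c \sim 1 and m_\nu \sim 0.1\,\text{eV}:
%   \Lambda \sim \frac{v^2}{m_\nu} \approx 6 \times 10^{14}\,\text{GeV}
```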

(This is also related to my comment above regarding Lorentz violation: even if it is extremely small at some fundamental scale, there are Lorentz-violating relevant terms - effects that grow towards low energies - which means whatever Lorentz violation you started with has to be ridiculously small to evade detection by current experiments.)

Hi Moshe:

Thanks for that interesting contribution... What exactly is the relation between renormalizability and marginal operators? But the question that remains for me then is whether every kind of theory needs to have marginal operators? Best,

B.

B., this is a huge subject, takes about a semester of QFT course. I'd recommend the discussion in Peskin and Schroeder or Amit's books. It is extremely important though to understand precisely Wilson's viewpoint on renormalization and EFT, because this is how almost all modern theoretical physics is organized.

(to your question, renormalizable by definition means an action which includes only marginal and relevant operators)

Wow. That was a very informative post for this non-physicist. Thanks for blogging.

Best regards, - Steve Esser

Moshe,

If we write an effective Lagrangian below the W-Z scale, it is non-renormalizable, and that was an indication that there is more physics.

The neutrino mass term must be a great relief, because otherwise the seeming lack of any non-renormalizable terms (experimentally speaking) would suggest that there is a desert above the Standard Model scale.

I guess it is a question answered somewhere - what non-(marginal,relevant) terms with what range of coefficients will the LHC be sensitive to?

Arun,

The fact that no non-renormalizable (aka "irrelevant") terms are easily seen is exactly the reason for the big desert scenario, where the next scale of new physics beyond that of electroweak symmetry breaking is way above the electroweak scale, for example at the GUT scale.

(there is still the physics of EW symmetry breaking itself, and what stabilizes it etc., all fascinating physics in its own right).

I am sure there is somewhere a model-independent analysis of the LHC and what irrelevant interactions it will probe. I am not a phenomenologist, so I am the wrong person to ask. As I understand it, the LHC will be more of a discovery machine and will do less measuring of (or setting bounds on) interactions directly, but the honest answer is that I don't know; maybe someone else could educate us.

Hi Bee,

You're welcome. Feynman's Nobel Prize Lecture is a wonderful read. It's here. He also gives this example:

Many different physical ideas can describe the same physical reality. Thus, classical electrodynamics can be described by a field view, or an action at a distance view, etc. Originally, Maxwell filled space with idler wheels, and Faraday with field lines, but somehow the Maxwell equations themselves are pristine and independent of the elaboration of words attempting a physical description.

As far as "quantum gravitodynamics" goes, there is an alternative based on a gravity theory by Yilmaz I find interesting. It's controversial, as discussed here. However, I've never seen a rebuttal to the methods he uses in this paper to renormalize his theory:

Hüseyin Yilmaz, "Gravity and Quantum Field Theory: A Modern Synthesis,"

Fundamental Problems in Quantum Theory, A Conference Held in Honor of Professor John A. Wheeler. Edited by Daniel M. Greenberger and Anton Zeilinger. Annals of the New York Academy of Sciences, Vol. 755 (New York, NY: The New York Academy of Sciences), 1995, p. 476.

(I have another alternative to general relativity, conceptually different from the Yilmaz theory, but mathematically similar. I hope the same basic renormalization methods can be applied.)

Lorentz violation need not contradict any observation at any scale in any venue - a mass sector chiral vacuum background. EM and achiral mass distributions are inert. Nobody has tested interactive cases.

Opposite parity crystallographic space groups P3(1)21 and P3(2)21 are calculated test masses. A parity Eötvös experiment opposing single crystal solid spheres of cultured P3(1)21 and P3(2)21 alpha-quartz (Adelberger or Newman) requires 90 days. Each hand of quartz vs. fused silica are the two controls.

Differential energy of insertion of P3(1)21 and P3(2)21 benzil into chiral space requires two days in two calorimeters, including controls. Somebody should look.

There may be fifty ways to leave your lover, but the number of ways to quantum gravity should fall well below the number of fingers on one's hand!

"I am sure there is somewhere model independent analysis of the LHC, and what irrelevant interactions it will probe"

There's always at least some glimmer of a 'model' behind things, but this has been the subject of quite a bit of work recently.

Nima has his Marmoset package that tries to abstract away the model dependence when analyzing potential LHC data (with mixed reviews). There are lots of other proposals as well, and it gets into quite an industry.

Bottom line: we'll know soon anyway.

-Haelfix

In the same vein as the physical significance (or lack of) of the renormalizability of the Standard Model, what are the lessons of the non-renormalizability of perturbative Einstein gravity?

------

Back to the Standard Model - one implication of renormalizability is that I can imagine regularizing Feynman graphs with a cut-off and then renormalizability says that I can make the physics of the model independent of the cut-off. All regularization details can be hidden in the "bare" coefficients in the Lagrangian.

However, the minute I think of the Standard Model as an effective theory only, I run into the problem of fine-tuning, hierarchy, etc. - the bare coefficients are very sensitive to the cut-off. Where I previously blithely absorbed an infinite series with each term infinite via renormalization, I'm suddenly worried about how sensitive this infinite series of infinities is to the (infinite) cut-off.

Is this a weakness of our mathematical formalism, or is there a genuine physical problem here?

Hi Moshe,

I was confused by your sentence "relevant or marginal is more or less synonymous with renormalizable", but I will look up Peskin and Schroeder on the matter. Though I still don't understand: is there a reason why every theory needs to have these operators?

Hi Arun,

regarding your latter question. To me it indicates a weakness of the maths rather than a physical problem. If one has reduced the theory to the effective limit then the parameters that enter are just measurable input, so a priori I wouldn't be worried about that as long as I knew the finetuning problem was absent in the full theory. However, we don't know that either, that is what worries me more.

Hi Cynthia,

Why do you think so?

Best,

B.

B., usually the renormalizable terms are taken to be those with dimension at most 4, which guarantees by dimensional analysis that they will be the most relevant at low energies. This is counting of dimensions in free field theory (aka the Gaussian fixed point). The notion of relevant and marginal generalizes this to a possibly strongly coupled fixed point.

Arun, I have no way of understanding QFT and renormalization that does not automatically make QFT an incomplete effective theory. Albeit an extremely useful and powerful one.

As for Einstein gravity, the fact that it is non-renormalizable strongly suggests that it is an effective description of something completely different. To paraphrase Ted Jacobson, you don't quantize the metric for the same reason you don't go about quantizing ocean waves. There are people out there who strongly object to this viewpoint, blaming the problem on the lack of background independence. Who knows, maybe they are right.

"I have no way of understanding QFT and renormalization that does not automatically make QFT an incomplete effective theory."

This is a very interesting statement. I am not sure what questions to ask to make this clearer, because my understanding is rather different. To me, renormalizability and asymptotic freedom make a QFT a complete standalone theory.

You're right Arun, there are some field theories that make sense all by themselves, but I was referring to the most general one, that is: I have no general understanding of QFT that does not rely on it being an EFT. Even understanding that asymptotically free theories are special in this regard relies on such understanding.

BTW, even for QCD, which in principle is complete, oftentimes it is most efficient to construct EFTs suited for specific processes (e.g. soft-collinear EFT, etc.), where one systematically classifies effects according to their relevance to those processes. Renormalization is then naturally understood as part of that classification. Any other interpretation of QFT and renormalization is (to me) not remotely as convincing.

I was going to add a few words about the renormalizability of the SM, but Moshe is doing a good job explaining things. Just a few remarks, perhaps on the physics too. The Fermi theory of weak interactions worked perfectly fine even though it wasn't renormalizable; the main problem was rather the loss of unitarity near the W pole. Similar to what happens around a TeV with longitudinal Ws: we need the Higgs. These things are somehow connected. Every field theory is to some extent an effective theory; the point is whether it is embedded in a well defined UV completion. For instance, if these higher dimensional (irrelevant) operators, whose effects at low energy are suppressed by the NP scale, have the *incorrect* sign, pathologies can emerge which make it impossible to fit them into a theory that respects the symmetries and properties of the low energy physics; unitarity, Lorentz invariance and causality might be broken. As you can see, renormalizability per se was a cornerstone that gave us some confidence we understood things a bit better; that wasn't obvious at first, and the need for gauge symmetries and the Higgs mechanism was crucial. However, these were introduced primarily for other reasons, and renormalizability turned out to be a side product. As Moshe is trying to explain, who cares if the SM weren't renormalizable, as long as the tower of higher dimensional ops is irrelevant at low energy scales. Now, to Sabine's question of why the SM has dim less or equal 4 operators: well, the *free* theory does (F^2, psi* d psi, etc.), and higher derivative theories are problematic; even though they are good for ensuring better convergence, they have ghosts and all sorts of pathologies.
Like Moshe explains, the interactions which are dim less or equal 4 (note: all terms in the action are dimensionless; I refer to the operator itself, which is actually crucial to understanding the whole business) are the relevant ones for the low energy dynamics, and thank goodness the SM has some, otherwise we wouldn't be here! :)
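To put numbers on the Fermi-theory remark (standard textbook estimates, order of magnitude only):

```latex
% Fermi theory: a dimension-6 four-fermion operator with a dimensionful coupling,
%   \mathcal{L}_F = -\frac{G_F}{\sqrt{2}} (\bar\psi \Gamma \psi)(\bar\psi \Gamma \psi),
%   \quad G_F \approx 1.17 \times 10^{-5}\,\text{GeV}^{-2} .
% By dimensional analysis the 2 -> 2 amplitude grows with energy,
%   \mathcal{M} \sim G_F E^2 ,
% so partial-wave unitarity (|\mathcal{M}| \lesssim 1) fails around
%   E \sim G_F^{-1/2} \sim 300\,\text{GeV}.
% Matching onto the full electroweak theory, G_F/\sqrt{2} = g^2/(8 M_W^2):
% the W (and ultimately the Higgs) must appear before that scale.
```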

Let me finish with a comment and a remark on QG.

First, the issue of divergences is still a source of conceptual tension, notoriously in the hierarchy problem. This problem regards the instability of the Higgs mass to quantum corrections, or in other words, the lack of "symmetry protection". We believe these quadratic divergences are a *real* problem, and many chapters have been written thus far, remarkably about a girl named SUSY ;)

That's the symmetry that protects the Higgs. But many other approaches have appeared, also XD (extra dimensions). Here I'd like to point out a quibble about the quadratic divergences: they do not appear in dim. reg.! So where did the hierarchy problem go? A few lines of computation tells you where it is hidden, but I will leave it to the reader as an exercise; it is actually a very cool realisation, or something really *desert* must be going on out there ;)

Now a remark. Einstein's theory isn't renormalizable, or in other words, G is dimensionful. Well, except for N=8 sugra, but let's put that aside for a moment. That isn't a big deal; gravity as an effective theory works quite well, as we all feel every day. So why QG then? The whole point is to understand the UV dynamics, black holes and the beginning of the universe, and the EFT knows nothing about that; one needs to 'match', and for that we need the full theory! That one had better be finite, and that's the holy grail...

Ok, this has grown more than I expected... good night...

Dear all,

while I had the privilege to know much of the content of my wife's post before she wrote it down in her unmatched manner, I'm really enjoying reading the comments, which are very informative and inspiring.

Thank you,

stefan

Regarding breaking of Lorentz invariance, there was a paper last week on the arxiv

New constraints on Planck-scale Lorentz Violation in QED from the Crab Nebula

which (as far as I can see) only considers astrophysical constraints on QED with breaking of LI, but has some useful figures on the constraints (Fig. 1, resp. Fig. 8 with their update). The parameters are normalized with respect to the Planck scale, and the remaining dimensionless constants are constrained to be smaller than ~ 10^-5.

I might have missed it, but since their model violates CPT, I'd have thought they would at least mention the particle physics constraints on CPT violation. For other constraints, see also Kostelecky's website.

That's a good example of my general confusion about this literature. They find constraints from dimension-5 operators in QED. What about dimensions 3 and 4? They certainly exist and would yield much stronger constraints, in effect falsifying the idea. Why are we ignoring those?

Hi Moshe: At least it is very clear what they are doing and what constraints they consider, in my perception it is one of the better papers on the matter. Still, for a work on the subject, I find it somewhat incomplete. I think they just don't consider relevant operators with local LI breaking? Is there a strict reason why it would be inconsistent to only have that violation in the higher order operators? Best,

B.

Sure, if some symmetry is violated at some high energy scale, the effects at low energy will include all possible terms with coefficients determined roughly by dimensional analysis; those will be automatically generated once you renormalize your theory. If the coefficients turn out to be much smaller than expected, this calls for an explanation.

For example the hierarchy problem for the Higgs mass and the cosmological constant problems are puzzles of this sort. If you allow violations of LI, you have one such puzzle for each possible relevant operator. There are many many such possible terms.

BTW, I am only referring to global LI, explicitly breaking a local symmetry is much more problematic, but we can ignore gravity for now.

In any event, as fun as this conversation has been, I'd have to decouple soon. Enjoy the rest of the weekend.

Done with Harry Potter, back to the Neverending Story!

I was always intrigued by the logo and just thought I should mention it :) Just type in lorentz in the search feature at the top of my site.

The two clocks depicted in the official logo for the CPT '04 meeting are related by the parity transformation (P). The inversion of black and white represents charge conjugation (C), while time reversal (T) is represented by the movement of the hands of the clock in opposite directions.

Visualization can sometimes bring a better clarity? Funny pictures like penguins? :)

SLAC's BaBar collaboration has discovered that CP violation—an asymmetry between the behavior of matter and antimatter—exists even in a very rare class of particle decays. This result offers the most sensitive avenue yet for exploring matter-antimatter asymmetries, with implications for the future understanding of physics beyond the Standard Model.

"BaBar has proven to be a fantastic instrument for exploring the origins of matter-antimatter asymmetries, allowing us to probe with exquisite precision very rare processes related to how the early universe came to be matter dominated," said David MacFarlane, BaBar Spokesperson and Professor at the Stanford Linear Accelerator Center.

Please let me rephrase Moshe's last comment as I think it is very important in many discussions of beyond standard physics: Yes, of course, you can write down lots of theories that lack certain terms in the action and thus are not immediately ruled out. For example you can write down theories that break Lorentz invariance only through dimension 5 operators whereas symmetry breaking operators of dimension 3 have zero coefficient.

However, very very generically this pattern is not preserved by quantum corrections, and typically these lower dimension terms have to be included in the form of counter terms. Now, you can either fine tune (huge bare + huge counter = tiny) or the breaking is transmitted to lower scales via quantum corrections.

This problem is suffered by many classical extensions of ordinary physics, such as varying-constant theories etc. The only known way to prevent this from happening is to invoke some symmetry which protects the zeros: for example, gauge symmetry can do this for vector boson masses, or chiral symmetry for fermion masses. Similarly, gauge symmetry links the coefficients of three- and four-point interactions of vector bosons.

Unless this happens, in general no simple pattern in the coefficients (some vanishing, some being related) survives quantisation as we know it.

Hi Robert:

Thanks for your explanation. Let me see whether I got this correctly: a theory that has Lorentz violation only in the higher (5+) order operators would need lower dimensional counter terms - unless they are explicitly forbidden, that is? And (sorry for the many questions) maybe you could elaborate somewhat on Moshe's comment regarding local/global symmetry. I am severely confuzzled about it. Best,

B.

Coincidentally, I got an email today from Stefano Liberati who is one of the authors of the above paper. He says he presently can't engage in the discussion, but here is his reply:

Hi,

I read your blog but had missed the comments on my paper. Actually, most of the comments are aiming at the well known "naturalness problem" in Lorentz violation (see e.g. the discussion in our astro-ph/0505267). I.e., a theory with dim 5 operators would generically produce dimension 3 LIV operators via radiative corrections, and there we have strong constraints because these terms are included in Kostelecky's SM extension (only LV renormalizable operators). Note these include also the generally stated constraints on CPT.

The problem has been discussed in several reviews (like the one cited above) and some ideas about how to get around this have been proposed (basically ideas about possible symmetries protecting against these effects, in particular SUSY). BTW, in defining the SM extension with dimension 5 operators, Pospelov and Bolokhov have recently argued that their operators are protected against transmutation to lower dimensions even at the loop level (hep-ph/0703291). Personally, I also tried to address the problem from a totally different perspective in gr-qc/0512139 (searching for lessons from analogue models).

In this last paper I simply wanted to stick to a well defined test-theory (anyway I'd say the most popular one after Kostelecky SME) and show how an astrophysical study can be used to get a very stringent constraint.

Oh, lots of information to absorb, thanks. I think I got my naive question answered in great detail.

As for my comment about local symmetry: if we couple to gravity, Lorentz invariance becomes a gauge symmetry, and then it is needed to protect unitarity. That is, it is pretty dangerous to explicitly break it, the unphysical polarizations of the graviton will re-appear in physical processes.

This basically means LI has to be broken spontaneously only, by starting with locally LI theory and choosing a background (which is not LI). This restriction I expect means that most LI violation schemes in flat space would be difficult to couple to gravity in a consistent way. This is not based on a whole lot of thought, so I may well be missing something.

Thanks again for the interesting discussion, I certainly am learning things.

Hey there,

Sabine: "A theory that had only Lorentz violation in the higher (5+) order operators would need lower dimensional counter terms - unless they are explicitly forbidden that is?"

A theory that is LV by 5+ operators can generate, via loops, lower dimensional operators in the effective action. Even if one starts with zero Wilson coefficients, the RG will run them away to non-zero values, unless they are *protected* at a fixed point, just as gauge symmetry prevents a mass for the photon and chiral symmetry for fermions.

Now the problem is that these d<4 ops are not Planck suppressed on dimensional grounds, and fine tuning is necessary. There is an interesting paper by Collins et al. regarding this issue and QG:

http://arxiv.org/abs/gr-qc/0403053
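Schematically, the problem looks like this (coefficients are order-of-magnitude only; the operators are illustrative, not from any particular model):

```latex
% Start with a purely dimension-5 Lorentz-violating operator, Planck suppressed:
%   \mathcal{L}_5 \sim \frac{c_5}{M_{\rm Pl}} \, \bar\psi \, (n \cdot \partial)^2 \, \psi
% A quadratically divergent loop with cutoff \Lambda feeds this into a
% dimension-3 operator with no Planck suppression left:
%   \delta c_3 \sim \frac{c_5}{16\pi^2} \frac{\Lambda^2}{M_{\rm Pl}}
% For \Lambda \sim M_{\rm Pl} this is of order M_{\rm Pl} -- ruled out by many
% orders of magnitude unless the bare coefficient is fine-tuned against it,
% or a symmetry (e.g. SUSY) forbids the lower-dimension terms.
```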

With respect to symmetries, SUSY does the job for the Higgs mass, and SUSY is also related to spacetime, so imposing its invariance constrains the new terms further. Not sure if it solves the problem though. With respect to SB of LI and the graviton, what about massive modes? We do have longitudinal Ws after all. I never understood why SUSY is broken but Poincaré isn't, by the way...

G

Bee, so you ask me: "Why do you think so?"

I suppose it might be alright to be in a funhouse of mirrors as you cook-up ways to leave your lover, but I just don't think it's optimum to be in such a loopy locale as you're concocting ways to quantum gravity! Just a humble hunch of mine, that's all...

G., spontaneous breaking of Lorentz invariance is pretty mundane, as every solid state physicist will tell you. The symmetry is hidden at low energies but becomes important just at high enough energy. No news there.

The situation regarding LV, as I understand it, is different. Here the scenario is that some high energy theory violates LI, perhaps related to some quantum gravity effects. In my mind the effects of this high energy LV will not be small at low energies, and in addition I would worry about basic consistency checks such as unitarity for the reasons I outlined before.

"spontaneous breaking of Lorentz invariance is pretty mundane, as every solid state physicist will tell you. The symmetry is hidden at low energies but becomes important just at high enough energy. No news there."

Sure, no need for CM; in EM one can set up a background field to break even translational invariance. Actually, it is pretty cool to figure out the counting when LI is broken, since Goldstone modes are no longer one to one. There was a paper by Aneesh Manohar a while ago about this. I myself figured out something cool about spin waves when I took CM a few years back; the counting is kinda interesting :)

My point is not just the background field but the excitations, and I agree with you that I am not sure about consistency either, in particular whether we get longitudinal massive gravitons (after all we *have* a gravitino), and we know decoupling doesn't occur; also ghosts. As you point out, and I agree, it doesn't matter whether LI is broken at Mpl: it will feed back into lower dim ops even if one starts with zero at some scale. I recall some papers where SUSY could prevent this from happening, also because we want *soft* SUSY breaking.

I like the idea of SB LI, although it is just hard to reconcile with what we observe, and we definitely need gravity to make it work (where are the Goldstones though? even though the counting doesn't properly work, we still get some massless modes). Nevertheless, it wouldn't surprise me, since SUSY, if there at all, is broken anyhow. On the other hand, it seems as if we don't want to mess with LI; it is a damn good symmetry :)

On the other hand, I think Lubos is confused about breaking LI and the discrete e-t hypothesis. If for instance one has an operator $L$ which measures length, and Lorentz transformations are unitarily implemented (on the manifold), then there is no issue with LI and a minimum length; the spectrum won't change. One has to be careful not to get confused with Lorentz contraction, since 'proper' distance is a well defined LI concept...

best,

Garbage

PS: Hey Sabine, I am having a problem somehow posting as 'other', hope you haven't banned me :)

Btw,

one thing which comes immediately to mind is somehow related to what Lubos says: the Lorentz group is non-compact (we can't reach the speed of light), so unitary representations are infinite dimensional. However, this is not *really* a problem; it didn't stop us from defining spinors, all we need is a different inner product where unitarity is well defined. This is however misleading, since we have rotational invariance for the angular momentum operator, whose eigenvalues are also discrete. I admit it isn't straightforward, but it doesn't strike me as incompatible... sorry, I was just thinking over whether discreteness necessarily implies LV.

G

Incidentally, Garbage, I'd like to hear your thoughts on that exercise you left to the reader, namely the disappearance of all unwanted mass scales when you use dimensional regularization with, say, the MS-bar scheme.

I had a few discussions several years ago about this topic with a few theorists, and their answers never really entirely satisfied me, or rather I didn't understand the details explicitly.

I really want to say the 1/epsilon divergences gobble up all the relevant terms with mass scales, so in essence you are pushing all that stuff to the next order in perturbation theory (a mirage) or into marginal operators.

-Haelfix

Err irrelevant operators

Hi Garbage:

"ps Hey Sabine, I am having problem somehow posting as 'other', hope you havent banned :)"

I haven't banned anything or anybody. (In fact, I wouldn't even know how to.) It might be a Blogger problem. Should you ever have trouble commenting, send me an email and I'll try to put it in. (My email is on my homepage.)

Hi Cynthia:

"I suppose it might be alright to be in a funhouse of mirrors as you cook-up ways to leave your lover, but I just don't think it's optimum to be in such a loopy locale as you're concocting ways to quantum gravity! Just a humble hunch of mine, that's all..."

Well, to add my humble hunch, I think you have no idea about my 'loopy locale', and if you had made the effort to read what I wrote, you might have noticed that I am far from 'concocting' ways to QG. What I am saying is: as long as one is willing and able to readjust a wrong direction, scientific approaches should eventually all lead towards the same conclusion. But it requires that we listen to each other.

Best,

B.

Imagine doing regularization with a cutoff. Now imagine doing the same integrals with dimensional regularization. The dimensions of the final answer cannot be any different. In cutoff regularization, the cutoff is used to provide the appropriate powers of mass. In dimensional regularization, we have to introduce a mass scale by hand. The bare parameters of the Lagrangian will exhibit the same sensitivity to either.

At least, so is my answer to Garbage's homework problem.

"The bare parameters of the Lagrangian will exhibit the same sensitivity to either."

Are you sure about that? How do we renormalize quadratic divergences in dim. reg.?

Try a simple example, a phi^4 theory: try dim. reg. and a cutoff, and check if you get the same dependence on 'Lambda' and 'mu'...

Hey Haelfix, I am not sure I follow your reasoning, I have another way of looking into this, maybe by doing the phi^4 example you could get a hint too :)

G

G, an integral, however regulated, retains the same dimensions, say, of [Mass]^2. It will be therefore quadratic in the cut-off or quadratic in whatever other mass scale you introduce.

C'mon, do you think it is the same if the mass scale of the integral goes as cutoff^2 or m_higgs^2?

Why do you think there is a hierarchy problem for the Higgs and not for let's say the electron?

In dim. reg. the quadratic divergences vanish, btw... the only thing we care about is the logs...
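Explicitly, the one-loop phi^4 comparison looks like this (Euclidean conventions; scheme-dependent constants suppressed where indicated):

```latex
% One-loop mass correction in \phi^4 theory, \mathcal{L}_{\rm int} = -\frac{\lambda}{4!}\phi^4:
%   \delta m^2 = \frac{\lambda}{2} \int \frac{d^4 k}{(2\pi)^4} \frac{1}{k^2 + m^2}
% Cutoff regularization:
%   \delta m^2 = \frac{\lambda}{32\pi^2} \left[ \Lambda^2 - m^2 \ln\frac{\Lambda^2}{m^2} + \dots \right]
% Dimensional regularization (d = 4 - 2\epsilon, MS-bar):
%   \delta m^2 = -\frac{\lambda\, m^2}{32\pi^2} \left[ \frac{1}{\epsilon} - \ln\frac{m^2}{\mu^2} + \text{const} \right]
% The \Lambda^2 piece has no counterpart in dim. reg.: only the logs survive.
% The quadratic sensitivity reappears physically as soon as heavy particles
% of mass M couple to \phi: then \delta m^2 \propto M^2.
```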

G

You are not regularizing Feynman graphs absent context; you are doing it within a renormalizable theory, which is the only place where this procedure makes sense.

What renormalizability means is that physical quantities are independent of your cutoff (or any other mass scale introduced as part of the regularization). In particular, you can send them to infinity. Therefore your integral is not going to go as M_Higgs^2, unless your theory doesn't care about infinite Higgs mass (i.e, the Higgs is not a part of your theory).

Now when I turn around and say that it is an effective field theory, and I'm not really sending the cut-off to infinity, the regularization of the integral doesn't suddenly change.

Here's a counter question to you, G: how do you write the running-of-the-coupling-constant RG beta function equation in a dimensional regularization scheme? The mass scale in there isn't the Higgs or the electron or any other physical mass.

I am afraid, Arun, you don't understand the RG equations; there is no running without logs (unless there is a strongly coupled theory), and hence in dim. reg. there are no quadratic divergences - in other words, they are *pure* counterterms. Since perturbatively the only non-analytic piece in momentum is given by the logs, dim. reg. sets all quadratic divergences to zero. Non-perturbatively, on the other hand, one can get other types of non-analytic behavior. You see, the important part of the RG is the non-analyticity; otherwise it is just a shift of a bare coupling.

The RG flow, as it was indeed originally presented in '54, is how physical observables, such as scattering amplitudes, change with external momentum. The 'mu' is just book-keeping, since the logs are tied to the momentum flow. In other words, the RG is just the sum of the logarithms.

To answer your question: how you write the running of beta is irrelevant. What matters is how the cross section runs with respect to momentum, and that's scheme and regularization independent.

Now, to talk about the hierarchy problem: imagine we have the SM coupled to NP at some scale between a TeV and the GUT scale. Calculate the contribution of these particles running in the loop to the Higgs mass and you'll see where the hierarchy problem lies; that's somehow equivalent to having some NP at the cutoff scale.

Unfortunately there is a big confusion about renormalization, RG flow and all that jazz regarding its *true* physical meaning. Look at Peskin-Schroeder 12.73 (old version) for instance. There is always a prescription, which is physical, to *match* the value at some point with an observable; that's what M stands for, nothing mysterious about that... By the way, this happens even classically; it is not due to QM at all...

G

G,

Yes, the passage of time and the death of neurons contribute to my fading understanding. The problem is I don't understand your reply either,

such as

"You see, the important part of RG is the non-analyticity, otherwise it is just a shift of a bare coupling."

:(

"In other words, the RG is just the sum of the logarithms."
