- Dangerous implications of a minimum length in quantum gravity
arXiv:0803.0749 [hep-th]
Abstract: The existence of a minimum length and a generalization of the Heisenberg uncertainty principle seem to be two fundamental ingredients required in any consistent theory of quantum gravity. In this letter we show that they naturally predict dangerous processes which somehow must be suppressed. For example, long-lived virtual super--Planck mass black holes may lead to rapid proton decay. Possible solutions of this puzzle are briefly discussed.
Since I've been working on quantum field theories with a minimal length and a generalized uncertainty principle for a while (for a brief intro, see here), reading the paper was somehow mandatory. It is an interesting examination, but I don't agree with the conclusions drawn in the paper. Here is, in a nutshell, what I read out of the paper:
Preliminaries
Heisenberg's usual uncertainty principle relates the measurement uncertainties in position and momentum space to each other. Within this context, it is possible to localize particles arbitrarily well in position space, but only at the expense of losing more and more information about their momentum. To measure the smallest distances you need to probe your sample with very small wavelengths, i.e. with large energies. In standard quantum mechanics, you can in principle measure arbitrarily precisely if you can only probe your sample with particles of high enough energies. This is why we build larger and larger particle colliders, and accelerate particles to higher and higher energies.
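As a reminder (standard quantum mechanics, not anything specific to the paper), the relation between resolution and probe energy follows directly from the uncertainty principle:

```latex
% Heisenberg's uncertainty relation
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
% The best achievable resolution with a probe of momentum p ~ E/c is thus
% roughly its (reduced) de Broglie wavelength:
\Delta x \;\sim\; \frac{\hbar}{\Delta p} \;\sim\; \frac{\hbar c}{E}
```

So, in the absence of gravity, Δx can be made arbitrarily small simply by raising the probe energy E.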
However, General Relativity tells us that all energy gravitates. If you use a probe with a very high energy density, spacetime around it will curve. But all particles have to move in that curved spacetime. Thus, if you bring the probing particle near the sample whose position you want to measure, the background becomes dynamical, and the sample will wiggle. This gravitationally induced motion causes an additional uncertainty.
This effect is strictly speaking always present, but since the gravitational interaction is so weak compared to the other interactions, one can neglect this additional contribution in all experiments we have ever done. One can however expect the effect to become relevant when enough energy is squeezed into a region small enough that gravitational perturbations are no longer negligible. This will typically happen somewhere at the Planck scale, and leads to a so-called 'generalized uncertainty' in which the position uncertainty has a minimum at the Planck length that one cannot get below, no matter what. If you go to even higher energies, the distortions only become worse. Thus, the generalized uncertainty typically increases with higher energies.
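A frequently used form of such a generalized uncertainty principle (the precise coefficient α is model-dependent; this is an illustrative first-order expression, not one taken from the Bambi-Freese paper) is

```latex
% generalized uncertainty principle with a gravitational correction term
\Delta x \;\gtrsim\; \frac{\hbar}{\Delta p} \;+\; \alpha\,\ell_{\mathrm{p}}^{2}\,\frac{\Delta p}{\hbar},
\qquad
\ell_{\mathrm{p}} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6\times 10^{-35}\,\mathrm{m}.
% Minimizing the right-hand side over \Delta p gives the minimal resolution
\Delta x_{\mathrm{min}} \;=\; 2\sqrt{\alpha}\;\ell_{\mathrm{p}}
\quad\text{at}\quad
\Delta p \;\sim\; \frac{\hbar}{\sqrt{\alpha}\,\ell_{\mathrm{p}}} .
```

For Δp below the Planck scale the first term dominates and one recovers the usual relation; above it, the gravitational term takes over and the resolution only gets worse again.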
This kind of thought experiment can be made somewhat more rigorous; for a very nice introduction see e.g. section II A of
- Quantum gravity and minimum length
By: Luis J. Garay
arXiv: gr-qc/9403008
- On Gravity and the Uncertainty Principle
By: Ronald J. Adler, David I. Santiago
arXiv: gr-qc/9904026
(Both papers are very readable.) One should keep in mind though that, besides the general sentiment being plausible, these arguments are not `derivations', since we don't yet have an observationally confirmed theory of quantum gravity to derive from. They are thought experiments - no more, no less - meant to investigate certain general aspects one can expect of quantum gravity. Different arguments lead to slightly different versions of generalized uncertainty principles, which indicates the limitations of such considerations - so far, there is no specific version of generalized uncertainty [1]. The Bambi and Freese paper uses another argument which says that if you increase the energy in a spacetime region to measure more and more precisely, you will eventually just form a black hole and not learn anything more precise than this. That's essentially what Giddings and Thomas dubbed so aptly "The End of Short Distance Physics" (hep-ph/0106219).
The consequences of such a generalized uncertainty principle have been examined extensively since the early nineties, most notably by Achim Kempf, see e.g. "On Quantum Field Theory with Nonzero Minimal Uncertainties in Positions and Momenta" (hep-th/9602085), who considers even more general generalized uncertainties.
Virtual Black Holes
Another ingredient to the paper are virtual black holes. In particle physics, virtual particles are not 'really' produced in scattering experiments; they only come into being in intermediate exchanges. Black holes can, at least in theory, be 'really' produced in particle collisions if a high enough energy is concentrated in a small enough region of space. Black holes with masses close to the Planck mass evaporate extremely fast, and in this process they can violate certain conservation laws, e.g. baryon number. This is because the black hole doesn't care what it was formed of; it just evaporates democratically into all particles of the Standard Model [2].
If black holes can be produced in particle collisions, one would expect them to also appear in virtual exchanges, where they could mediate baryon or flavor violating processes, e.g. proton decay. Since the black holes are rather heavy and short-lived, these processes are usually very suppressed though. This was for example examined in
- Proton Decay, Black Holes, and Large Extra Dimensions
By: Fred C. Adams, Gordon L. Kane, Manasse Mbonye, Malcolm J. Perry
arXiv: hep-ph/0009154
They used the virtual black holes to set constraints on the size of extra dimensions. In the presence of 'large' (meaning, larger than the Planck scale) extra dimensions, one can expect the production of black holes to no longer be suppressed by the usual Planck scale but by the new, lowered one. Then, these processes are significantly enhanced. This problem doesn't only occur for virtual black holes but essentially for all higher order operators, which are usually suppressed by powers of the far-off Planck scale and no longer are if that scale is lowered. Either way, the Bambi and Freese paper isn't concerned with extra dimensions.
Dangerous Implications
If I understand it correctly, Bambi and Freese essentially argue that the generalized uncertainty, which has a minimum position uncertainty, should also result in a minimum time uncertainty (see e.g. "Quantum fluctuations of space-time" by Michael Maziashvili, hep-ph/0605146). One can construct thought experiments similar to the one mentioned above by attempting to build more and more precise clocks to measure exactly when a particle will e.g. decay. Again, General Relativity puts a limit on these efforts. (By starting from the uncertainty principle instead of from the commutation relations, one elegantly circumvents the question of what a time operator is actually supposed to be; see e.g. John Baez on The Time-Energy Uncertainty Relation.) This argument usually employs some Lorentz symmetry (you bounce a particle back and forth over some distance to get a time measure), so one thing that springs to mind is whether one still has this Lorentz symmetry.
Either way, the point they are making in the paper is that with the generalized uncertainty, the lifetime of virtual states with masses above the Planck scale should not, as usual, become shorter and shorter, because it can never drop below the Planck time. In comparison to the usual scenario, these virtual contributions should then become more important. In the paper they provide some general estimates of what would arise in such a scenario, e.g. for proton decay, and find
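Schematically (my paraphrase of their argument, with order-one factors dropped), the point is:

```latex
% usual time-energy estimate: a virtual state of mass M lives for
\tau \;\sim\; \frac{\hbar}{M c^{2}},
% which goes to zero for M far above the Planck mass. With a minimal
% time, the lifetime instead saturates at the Planck time:
\tau \;\gtrsim\; t_{\mathrm{p}} \;=\; \frac{\ell_{\mathrm{p}}}{c} \;\approx\; 5\times 10^{-44}\,\mathrm{s}.
```

So super-Planck mass virtual black holes would no longer be suppressed by a vanishing lifetime, which is what enhances the dangerous processes.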
"that super–Planck mass virtual black holes predict naturally dangerous processes, clearly inconsistent with the observed universe. In particular, rapid baryon number violating processes may lead to predictions of proton decay lifetimes that are ruled out by experiment."
Question Mark
Well, if you've followed this blog for a while (e.g. this post on 'test theories' or this recent progress report), you probably know what I am going to say. This is all well and fine, but estimates based on dimensional arguments can't replace a consistent model that includes a generalized uncertainty relation. The uncertainty relation is derived from the commutation relations of position and momentum. If you 'generalize' it, you need to modify these commutation relations. This is actually the better starting point, and also the one most commonly used, and this modified commutator in quantum mechanics has its equivalent in quantum field theory.
Such a modification, however, has a couple of consequences. One is that you can't have a minimal length without also modifying special relativity. Another is that in some cases one also has a modified dispersion relation (though not necessarily so, since the modification can factor out). The most important one in this context is that one has a modification of the measure in momentum space. Essentially, high momentum states are less populated. This modification of the measure in momentum space is not optional; it is a consequence of the generalized uncertainty principle. This was probably first pointed out in the above mentioned paper by Kempf. You find some details on the relations between these ingredients in my paper hep-th/0510245 [3].
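In the simplest one-dimensional case (following Kempf's hep-th/9602085; β is a deformation parameter, of order one in units of the inverse Planck momentum squared), this chain of consequences looks as follows:

```latex
% modified commutation relation
[\hat{x},\hat{p}] \;=\; i\hbar\left(1+\beta \hat{p}^{2}\right)
% which implies the generalized uncertainty relation
\Delta x\,\Delta p \;\geq\; \frac{\hbar}{2}\left(1+\beta(\Delta p)^{2}+\beta\langle\hat{p}\rangle^{2}\right)
% and requires a modified measure in momentum space, so that
% \hat{x} and \hat{p} remain symmetric operators:
\langle\psi|\phi\rangle \;=\; \int \frac{\mathrm{d}p}{1+\beta p^{2}}\;\psi^{*}(p)\,\phi(p).
```

It is this 1/(1+βp²) factor, and its analogue in quantum field theory loop integrals, that depopulates super-Planckian momentum states.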
I do not see how the estimates in the Bambi-Freese paper take into account that, within a scenario with a generalized uncertainty principle, the measure in momentum space is modified, which naturally suppresses virtual particles with masses above the Planck mass. As one would expect from a theory with a minimal length, this essentially provides a natural regulator, and all virtual contributions above the Planck scale are very suppressed.
Relatedly, some years ago I calculated loop contributions in an extra-dimensional scenario with Kaluza-Klein excitations, using a model with a generalized uncertainty (see hep-ph/0405127). These excitations can become arbitrarily massive, which usually poses a problem, much like the massive virtual black holes considered by Bambi and Freese. In the scenario with the minimal length, these very massive virtual contributions are suppressed, and the result is naturally regularized (since it's a scenario with extra dimensions, the usual renormalization schemes don't work).
Besides this, I admittedly have a general problem with putting black holes with masses considerably above the Planck scale into Feynman loops. One should keep in mind that the quantum aspects of black holes become important only if the curvature at the horizon is in the Planckian regime, which is the case for Planck mass holes. If one increases the mass (and they are talking about a thousand times the Planck mass in the paper), the curvature drops and the black hole very quickly becomes classical. This is also the reason why the semi-classical treatment of Hawking evaporation is extremely good up to the very late stages of decay.
I wrote an email to one of the authors about these concerns, and I am curious to hear what they think. I will keep you updated on that.
Bottomline
So I agree with the conclusion in the paper that "this problem must somehow be addressed". But unlike what they suggest, I don't think it needs a new symmetry; it needs in the first place a consistent quantum field theory that incorporates a minimal length. Since the minimal length acts as a regulator at the Planck scale, I expect it to suppress these processes.
Post Scriptum:
As it seems, Lubos Motl has already commented on the same paper. Since his writing is as entertaining as usual, let me briefly summarize his criticism.
Lubos begins by doubting the authors' authority - "First of all, you can see that the authors are complete outsiders in quantum gravity." - an excellent starting point for a scientific argument. His next point, about the allegedly confused notation in the Bambi-Freese paper, probably means that he didn't even bother to check the first few references, which could have clarified his problem. Lubos then proceeds to his speciality, content-free ad hominem attacks:
"In the literature, most of the talk about the "minimum length" in quantum gravity is a vague sloppy babbling of incompetent people who don't have enough imagination to ever understand how Nature actually solves these things - think about profoundly and permanently confused authors such as Ng, Amelino-Camelia, Hossenfelder, and dozens of others."
Which he then attempts to explain with
"In reality, something new is indeed going on at the Planck scale, but to assume that 1.) it must be possible to talk about distances even in this regime and 2.) all the distances must always be strictly greater the Planck scale is a double naivite."
Sadly, this shows that all the effort I made explaining my model to him - on his blog, on this blog, and by email - was completely wasted. Otherwise he should at least have noticed by now that the minimal length in my model is a wavelength, and besides this the model has not very much in common with Ng and Amelino-Camelia's work. But this unfortunate limitation of his mental capability is explained by the following sentence
"If you actually look at any consistent realization of quantum gravity - and we have quite many setups to do so, including AdS/CFT, Matrix theory, perturbative string theory, and more informal descriptions of quantum gravity involving effective field theory - you will see that the "generalized uncertainty principle" in the strict Bambi-Freese sense is certainly wrong."
Which we should probably translate as "my head is full of string theory, and nothing else will fit in", the poor guy. This problem is further clarified in his statement
"Even if you are a bizarre, speculative alternative physicist who thinks that the reality is described by an entirely different theory of quantum gravity than the theory we still call "string theory", you must agree that string theory provides us with one or many realizations (depending how you count) of a physical system that satisfies all the consistency criteria expected from a theory of quantum gravity."
Well, as a scientist I don't have to agree on a description of reality as long as there is no experimental evidence whatsoever. I can't help but wonder which of us is bizarre and speculative. Lubos continues his argumentation in the usual manner ("I think that every competent person would agree with me ..." etc.). The only thing that's surprising is that neither the name Smolin nor Woit appears in his writing. So maybe this is an improvement, and it leads me to hope that the air in Pilsen might clear his mind.
The only reason why I am telling you this (and breaking my own no-link policy) is that, despite his disgusting way of writing, as far as I can tell the conclusions he draws seem to be compatible with mine: "very large black holes "in loops" cannot cause any very large violation of the baryon and lepton numbers."
[1] In most cases, a first order approximation is used, with an integer for the first power of energy over Planck mass that appears, and an expansion parameter of order one.
[2] At least that is what most people in the field believe today. Just to make sure I want to add that nobody has ever seen a real black hole evaporating.
[3] The paper was originally titled 'Self-consistency in Theories with a Minimal Length'. However, one of the referees didn't like the title, so it got published as 'Note on Theories with a Minimal Length', where the essential word 'self-consistency' dropped out.
There is of course no "observationally confirmed theory of quantum gravity", but we do have models of quantum gravity in simpler setups, and I think it is not wise to ignore the fact that none of them exhibits generalized uncertainty relations - certainly not if this is claimed to be a universal consequence of gravity and quantum mechanics.
Additionally, there is a pretty transparent confusion there - there are many situations where the Planck mass or length is reached without quantum gravity being important, since it is the energy density that is important for quantum gravity, not the total energy. Any uncertainty relation that depends universally on the Planck mass is expected to give wrong answers in some situations; the paper in question is a good example.
Dear Bee,
In "practice", unless the high energy probe is a purely gravitational probe, is it not going to be producing showers of other particles long before gravity becomes important? If by nothing else then by interacting with the CMBR photons or neutrinos?
Hi Moshe,
Good to see you around. I was wondering, could you comment on this remark from David Gross' paper (p. 14):
"In string theory, however, the probes themselves are not pointlike, but rather extended objects, and thus there is another limitation as to how precisely we can measure short distances. As energy is pumped into the string it expands and thus there is an additional uncertainty proportional to the energy. All together..."
which I think refers to one of his papers from the 80s. I always understood that to mean that 'effectively' string theory has a lower limit on the resolution. However, I recently got confused about this by a remark from Robert (atdotde).
"Additionally there is pretty transparent confusion there - there are many situations where the Planck mass or length is reached without quantum gravity being important, since it is the energy density that is important for quantum gravity, not the total energy.
Yes, I hope I was sufficiently careful with the above writing. One doesn't only need a large center of mass energy, but also a small impact parameter. (This is also the reason why I argued the 'soccer-ball' problem is a result of this dimensional confusion, see hep-th/0702016; I believe I briefly mentioned this in Morelia.) Best,
B.
Dear Arun,
Well, it's 16 orders of magnitude to go; all kinds of things can happen. E.g. you'd expect unification of the SM forces to come into play, and who knows what else. Yeah, I would guess that if one could build more and more powerful accelerators with increasing energy, the vacuum in the beam pipe would become a real problem, and as the GZK cutoff tells us, the CMB does set a very real limit on the energies with which particles can travel over long distances through what we like to call 'empty' space. Best,
B.
Yeah, there are some distances that cannot be measured with arbitrary accuracy, and some that can. So it is probably not a good idea to formulate a general principle and apply it blindly to every distance in sight.
I know that you appreciate the difference between energy and energy density and have written about this. Nevertheless the confusion is still a very common mistake, relevant for this discussion as well.
Hi Moshe,
Yeah, there are some distances that cannot be measured with arbitrary accuracy, and some that can. So it is probably not a good idea to formulate a general principle and apply it blindly to every distance in sight.
Could you maybe let me know what you mean? Under which circumstances and with which assumptions can measurements become more precise in the super-Planckian regime? What is wrong with the Gross+Mende analysis? I'd be fine with a reference. Best,
B.
An example is the center of mass coordinate: for the black hole it is discussed in the comments to Lubos' post, and for the fundamental string it is also an arbitrary continuous number. I think in general to measure any length/time with Planckian precision takes Planckian momentum/energy. However, such Planckian momenta/energies are not necessarily prohibited, it all depends on the context.
Dear Sabine,
Surely, the postscript could have been avoided. Not that I am a Motl sympathiser, don't get me wrong. This should stand as a clear, definite, elaborate demonstration that he should not be taken seriously anymore in many matters of physics research.
That his viewpoint agrees with yours on some subjects is very likely to happen many times since you are both trained as physicists, but this does not mean that he is completely serious, and has it right, about anything else.
I am sad to see you are referring to him, and engaging yourself and this otherwise informative blog in his psychological games.
Hi Moshe,
Sorry, I didn't read the comments at Lubos', and since I dislike the tone over there I have no inclination to make the effort (my browser keeps crashing when I attempt to open the site).
I think I roughly understand what you mean (and it has apparently nothing to do with what Robert was referring to). Neither in the context of the Bambi-Freese paper nor in my model are super-Planckian energies prohibited. The uncertainty they are referring to is one in the position measurement. (It was probably unwise to start with the black hole argument to motivate the generalized uncertainty; it is not a particularly good argument, though a brief one.)
I don't quite understand why you emphasise that the center of mass coordinate is an 'arbitrary continuous number'; none of the arguments in the paper refers to discreteness. The question would be whether it is possible to measure the center of mass position to an accuracy better than the Planck length (that being the delta x). Do you think this is possible? Best,
B.
Yes, I think there is no problem measuring the CM position with arbitrary accuracy, and the argument does not rely on quantum gravity in any way. Basically, in asymptotically flat space one can use asymptotic boosts to get a black hole with any mass or momentum, or use asymptotic translational invariance to translate it to any position. Gravity is arbitrarily weak asymptotically, so we really don't need to know anything more than we already do. If you try to construct the typical gedanken experiment to prohibit such measurements, you find that you'd need total energy/momentum above the Planck scale. But of course we know this is not a problem; one can easily get the total energy/momentum to be trans-Planckian.
Hi Theoreticalminimum,
Believe me, I hesitated to add the PS. Three reasons why I did so. First, new people come around (blogs have no memory), and every now and then the reason for ignoring him should probably be clarified. Second, his post on the same paper was earlier than mine, so I think it's appropriate to at least mention it. Third, I don't particularly like to be called a vague sloppy babbling incompetent person for no reason, and I reserve the right to self-defense. I understand you dislike that, and I am sorry if you are disturbed. I tried to keep it down, but I hope you understand that I occasionally want to mention that this person's remarks about myself (and others) are very inappropriate. It was a coincidence that I read his post (it came up in a Google search), and I assure you I have no intention to play any games, psychological or otherwise, with Lubos.
Best,
B.
Hi Moshe,
Sorry, I totally don't get it. A boost changes the momentum, but it doesn't change the mass of the hole. I also don't see how this has anything to do with measuring the CM. Are you saying that instead of making a scattering experiment at the black hole you could equally well measure the asymptotically very weak gravitational field to extremely high precision to determine the center of symmetry? Yes, one can get the energy to be trans-Planckian, but how good is that if you don't know precisely enough 'where' it is, i.e. if it isn't localized enough?
B.
Black holes at different center of mass locations are related by symmetry, the asymptotic translation invariance of asymptotically flat space. In order for those locations to be somehow discrete you'd have to break this symmetry. But asymptotically gravity is arbitrarily weak, so I have no rationale for such breaking.
(This discussion also has a Fourier transform involving the total mass being trans-Planckian, and that not really being an issue, but the above paragraph is sufficient).
What is wrong with looking at Planck scale lattices anyway? It's not like everybody is going to do it, but somebody should be.
Your blogging certainly seems assertively into lots of physics ideas. I can't imagine even by lousy male standards that you'd have any trouble employment-wise in physics. There's lots of female programmers and very few female engineers too. Can't be the math but can't be the ruthless factor either cause there was a decent number of ruthless female managers but almost no female engineers while I was at IBM. A few female quality engineers but zero female process engineers, not sure why?
Lubos believes perturbative string theory is mathematically rigorous and physically inescapable. Then, Lubos. OK, string theory is as rigorously derived as quantum field theory.
Metric and non-metric gravitation both accurately predict empirical reality. Both support BRST invariance uniting the effects of a massive body and an accelerated reference frame. Validating their vast areas of agreement is sterile.
Non-metric theory also allows violations of isotropic vacuum and the Equivalence Principle. String theory and GR may be falsified by suitable experiment, or not.
Lubos' universe is weak against demonstration that it is not our universe. Two classes of experiments, either 90 days or a week, to find out. Somebody should look.
Hi Moshe,
Black holes at different center of mass locations are related by symmetry, the asymptotic translation invariance of asymptotically flat space. In order for those locations to be somehow discrete you'd have to break this symmetry. But asymptotically gravity is arbitrarily weak, so I have no rationale for such breaking.
I am not actually sure whom or what you are criticizing. As I already mentioned in my above comment, neither the paper by Bambi + Freese, nor any of my papers, nor those that I've referred to above (or those by Amelino-Camelia for that matter) talk about discrete locations.
Hi John G,
I can't imagine even by lousy male standards that you'd have any trouble employment-wise in physics.
I have never had trouble finding employment, and despite my occasionally depressive writing I am optimistic I will find future employment next year as well. My recent writing was meant to describe what the postdoctoral job hunt looks like in practice, and that it is little surprise - at least to me - when even smart and talented people decide to leave the academic world for a decent work contract.
Best,
B.
I think this is a more general issue, but to be concrete: in the paper by Bambi and Freese, all the quantities referring to virtual black holes (time, energy, etc., say in equations 1-4) are center of mass quantities; they do not discuss any internal excitations of the black hole. I see no reason to have QG-related uncertainty in those quantities.
ReplyDeleteHi Moshe,
The argument they have doesn't make use of QG, but of classical gravity in the regime where one would expect QG to become important. To reiterate my question from above: are you saying it is possible to measure the CM position of a black hole to better precision than a Planck length? Do you mean to say this is the case for any object, or just for black holes? If the former, why do you think black holes are different? If the latter, what is wrong with the thought experiments that indicate such a precision cannot be reached because the probe you need will affect the sample? I am sorry, but this really did not become clear to me from your above comments.
Best,
B.
I am saying that in the process of trying to measure the center of mass position of any object (black hole, string, whatever), quantum gravity corrections are negligible. The known arguments involving the growth of the string or gravitational collapse refer to trying to measure the internal structure of the object (e.g. the "size" of it), which is a different issue altogether.
(This BTW is borrowed entirely from Lubos, I just feel an obligation to give credit where it is due).
Hi Bee,
All this is indeed interesting and yet, for me at present, totally incomprehensible. Therefore I took your advice and read what you referred to as the primer paper on all this, which is the one called “Quantum gravity and minimum length” by Luis J. Garay. This is a well written and clear overview which has left me with a lot to consider. It has also served to at least give me some idea of what is being commented on.
As a little summary of the points the paper made that stood out for me and will serve as the focal points of my further study, I quote the following from the paper:
“Quantum gravity, the yet-to-be-built quantum theory of gravity”
“any back-reaction can be neglected if the mass of the test body is sufficiently high; and finally, the borders of the test body are separated by a spacelike interval.”
“This means that the test body should not be a black hole, i.e. its size should not exceed its gravitational radius, and that both its mass and linear dimensions should be larger than Planck’s mass and length, respectively”
“In this sense, in the context of quantum gravity, Planck’s scale establishes the border between the measuring device and the system that is being measured.”
“We can see that the problem of measuring the gravitational field, i.e. the structure of spacetime, can be traced back to the fact that any such measurement is non-local, i.e. the measurement device is aware of what is happening at different points of spacetime and takes them into account. In other words, the measurement device averages over a spacetime region. The equivalence principle also plays a fundamental role: the measurement device cannot decouple from the measured system and back reaction is unavoidable.”
“The physical laws appear the same in all reference systems but the description of the physical reality may vary from observer to observer.”
“Therefore, as the particle is boosted, two different, simultaneous processes occur: first, the longitudinal information size of the particle decreases up to Planck’s length and, second, its transverse information size increases until it covers the stretched horizon.”
“For fluctuations in the gravitational field of the order of the gravitational field itself or, in other words, when the size of the probe is close to Planck’s scale, the uncertainty in the light cone slope is as large as the slope itself. This means that the distinction between spacelike and timelike separations is lost close to Planck’s scale.”
“The presence of a lower bound to the uncertainty of distance measurements seems to be a model-independent feature of quantum gravity. They can be summarized in the three following well-known statements: (i) the uncertainty principle, (ii) the speed of light is finite and constant, and (iii) the equivalence principle.”
Thanks,
Phil
Hi Moshe,
Neither the arguments in Garay's paper nor the ones in Adler's paper (or others, see references therein) involve black holes, gravitational collapse, strings, or attempts to measure internal sizes. I agree that measuring the size of an object is a different issue than measuring its CM location, and I noticed that Lubos criticized that lack of clarity in the Bambi+Freese paper (see my remark in the post).
The known arguments involving the growth of the string or gravitational collapse refer to trying to measure internal structure of the object (e.g the "size" of it), which is a different issue altogether.
Do you mean the known arguments about the growth of strings that measure internal structure imply a bound on the resolution with which one can investigate such internal structure of an object?
Best,
B.
Correction: Garay does discuss the black hole argument later on, and also mentions strings.
Hi Phil,
On a more philosophical level, I find it hard to decide what seems more or less 'natural'. Not being able to resolve structures below a certain limit, or being able to resolve everything better and better up to 'truly' infinitely small distances?
Best,
B.
I don't have time right now to read those two papers in detail. Let me just note that I don't believe the following sentences in the abstracts: "there is an absolute minimum uncertainty in the position of any particle" from the Adler paper, or "a lower bound to any output of a position measurement" in the Garay paper. Those statements refer in my mind to the position and not to any internal structure; relatedly, the resulting uncertainty principle involves the overall mass of the object and not just the energy density. Unless I'm completely mistaken about interpreting these statements, they don't make too much sense to me.
It just occurred to me that there might be a possible confusion about the argumentation in the introduction of the Bambi+Freese paper. The motivation for the generalized uncertainty with the black hole argument they provide (which I find, as mentioned, not the best motivation) has nothing to do with the fact that they later study specifically black holes within the context of a generalized uncertainty principle. I therefore don't see what the point is in insisting on whether or not that thought experiment is so particularly great. The generalized uncertainty principle as I use it (and as I thought it is used also in the Bambi+Freese paper) refers to a fundamental limit on the resolution of structures one can possibly reach, something that is effectively described by the generalized uncertainty/lower bound on the wavelength. Hope that clarifies some points?
Best,
B.
Okay, I think I get your point. I keep thinking in cross-sections, and preferably momentum space, so position isn't something that is prominently in my mind. I still don't know though in how far this is relevant to the Bambi-Freese paper, will think about it. Best,
B.
Hi Bee,
“On a more philosophical level, I find it hard to decide what seems more or less 'natural'. Not being able to resolve structures below a certain limit, or being able to resolve everything better and better up to 'truly' infinitely small distances?”
Now this I can identify with. So you wonder if there truly is a final Russian doll inside the one just before? I too have often wondered the same.
Years ago I was fascinated by chaos theory, as it relates to fractal geometry and the imaginary/complex number plane. If one takes the Mandelbrot set for instance, it should resolve to a limit and yet that limit cannot be a point. The complex structure is also repetitive, yet varies somewhat at every scale, and it remains somewhat uncertain from level to level. I'm not claiming that this should serve as our model for reality. It did however serve to convince me that if this simply defined mathematical structure can be so complex and seemingly endless, we should expect the mathematical structure of reality to be no less so. I also found it interesting that the first iteration of the set forms a circle, with the complexity found in each subsequent iteration, and increasing.
Best,
Phil
The de Broglie wavelength of a tennis ball is probably shorter than a Planck length; does the discussion here mean that objects of that size cannot exhibit quantum behavior?
Bee, as a non-scientist who nonetheless has an interest in physics, may I ask what may be a very dumb question? Does the paper in question suggest in any way actual potential danger from micro black holes that could be made at the LHC? I assumed when I read the paper that they meant "dangerous to current theory", not literally dangerous. Am I right?
Incidentally (and please don't ignore me for this) I have been interested in the issue of whether micro black holes are potentially dangerous, and note that very, very few scientists will even consider the possibility of Hawking radiation not existing. (Lubos had an aggressive post recently against this concern, but like you I don't find his style of argument helpful.) Anyway, even if HR doesn't exist, and slow moving micro black holes sank into the earth, I think Landsberg wrote once that they would take billions of years to cause any harm because of the very slow rate they can absorb any particle. I have never understood what would determine how long a micro hole would take to absorb something else, so any "simple" explanation you may offer for that would be welcome.
And yes, I am aware of the issue that if LHC can create black holes, then so can cosmic rays and none of them have destroyed a planet yet. Still, if thousands of slow moving ones settled into earth over a short period of time, I would like to know it was still safe.
Dear Arun,
The de Broglie wavelength of a tennis ball is probably shorter than a Planck length; does the discussion here mean that objects of that size cannot exhibit quantum behavior?
I am afraid I don't quite understand the question. What about the discussion here would lead you to conclude that objects that size do not exhibit quantum behavior? (Besides, I don't play tennis, but I can't recall a tennis ball exhibiting much quantum behavior.)
To make a guess (sorry if I misinterpret you), it might lead you to suspect that objects that size exhibit MORE quantum behavior than usual (since the effects for smaller than Planckian wavelengths get larger) - this would be the 'soccer-ball' problem of DSR (that was also discussed in the comments to this post). Indeed, I think it also appears in the Bambi-Freese paper, though it seems they don't know the literature well enough to make the connection; at least no references appear (bottom of the 1st page: "In principle eq (3) could permit classical objects to live for very long because their mass is much larger than the Planck mass..."), and they explain the problem by saying classical objects are 'tidy' states.
Well, in a certain sense this comes close to my argumentation (that I also mentioned in my comments to Moshe above). The relevant quantities for GR that dictate when QG effects become important are neither masses nor lengths, but densities. QG effects should become important if energy densities are close to m_p/l_p^3. Typically, if you go to macroscopic (bound / 'tidy') states (tennis balls/soccer balls) the density drops, so there is the appropriate limit you've been looking for. More on that matter in hep-th/0702016. Best,
B.
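To put rough numbers on this density argument, here is a back-of-envelope sketch (all values approximate and not taken from the paper): even nuclear-density matter sits dozens of orders of magnitude below the Planck density, and ordinary bound objects like tennis balls even further.

```python
# Back-of-envelope check of the density argument: even nuclear-density
# matter is vastly below the Planck density m_p / l_p^3.
hbar = 1.055e-34   # reduced Planck constant, J s
G = 6.674e-11      # Newton's constant, m^3 / (kg s^2)
c = 2.998e8        # speed of light, m / s

l_p = (hbar * G / c**3) ** 0.5   # Planck length, ~1.6e-35 m
m_p = (hbar * c / G) ** 0.5      # Planck mass, ~2.2e-8 kg
rho_planck = m_p / l_p**3        # Planck density, ~5e96 kg/m^3

rho_nuclear = 2.3e17             # nuclear matter density, kg/m^3 (approx.)
# ~57 g tennis ball of ~3.4 cm radius (illustrative values)
rho_ball = 0.057 / (4 / 3 * 3.14159 * 0.0335**3)

print(f"Planck density     ~ {rho_planck:.1e} kg/m^3")
print(f"nuclear / Planck   ~ {rho_nuclear / rho_planck:.1e}")
print(f"tennis / Planck    ~ {rho_ball / rho_planck:.1e}")
```

The nuclear case comes out roughly 80 orders of magnitude below the Planck density, which is the point of the comment above: the density, not the total mass, is what stays far from the quantum-gravity regime for macroscopic bound states.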
Hi Steve,
It's not a dumb question, and yes, you got that right. The paper claims to have discovered 'Dangerous Implications' that are 'Dangerous' because (if correct) they would point out a problem to existing theories, not an actual real 'Danger'. The argument is roughly 1) For very general reasons we expect there to be a Generalized Uncertainty Principle 2) This Generalized Uncertainty Principle would imply heavy virtual states to be longer lived than usual (i.e. without the Generalized Uncertainty), therefore 3) processes mediated by such heavy virtual states should contribute more than usual, this would then 4) imply that for instance the proton would decay faster than we have observed which is 5) incompatible with experiment.
The content of my post is roughly 2) is doubtful 3) is wrong, therefore 4) doesn't apply and 5) isn't a problem.
About the Black Holes at LHC. To begin with, please check this post for the basics. I didn't read Lubos' post, but as far as I recall he's had a very sensible opinion on that matter, so he probably made the relevant points.
Many people like to think black holes are dangerous because they attract lots of stuff in their surrounding. The black holes they typically think of are astrophysical ones. These have masses from the sun-mass up to millions of sun-masses. Yes, their gravitational attraction is high, and you don't want them in your backyard.
The black holes that would be produced at the LHC (which is unlikely, but anyway) would have masses about a TeV. In macroscopic numbers, that is an extremely tiny number. A TeV is about 10^{-21} grams. The gravitational attraction that these things have is completely negligible. The only way something can fall 'into' these mini black holes is by coincidence, i.e. if it comes into the way. You can picture the black hole to go through some matter and eating up everything along the way, thereby growing. However, as you probably know, the matter that surrounds us is in microscopic terms mostly empty. Most of it is clumped in the nucleus. So take the densest matter that we can create on Earth: the quark gluon plasma, which has about nuclear density, and place a black hole in it.
Two effects are important: a) occasionally something falls into the black hole, which depends on the density (temperature) of the medium and leads the black hole to grow, and b) the black hole evaporates. The question is which effect is larger. One can estimate very easily that the evaporation rate is several orders of magnitude larger, i.e. the black hole cannot grow even in such incredibly dense matter. Even if you accelerate the black hole to high velocities (Lorentz contraction becomes important then, so it 'feels' a larger density) you can not get the black hole to grow. (You find numbers for these estimates in the last section of this paper.)
The reason for this is that the temperature of a black hole is inversely proportional to its mass (with a dimension-dependent power), and these very tiny black holes are incredibly hot (some hundred GeV, or 10^16 Kelvin). The time they need to decay is roughly 1 fm/c, that's about 10^-23 seconds. They don't even reach the detector, and one could only investigate the decay products.
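As a sanity check on the orders of magnitude quoted above, here is a quick unit-conversion sketch (standard constants; the extra-dimensional temperature formula itself is not reproduced here):

```python
# Check the orders of magnitude quoted above for TeV-scale black holes.
e = 1.602e-19      # J per eV
c = 2.998e8        # speed of light, m/s
k_B = 1.381e-23    # Boltzmann constant, J/K

# 1 TeV expressed as a rest mass, in grams
m_tev_g = 1e12 * e / c**2 * 1000
print(f"1 TeV   ~ {m_tev_g:.1e} g")   # of order 10^-21 g

# a few hundred GeV expressed as a temperature, in Kelvin
T = 300e9 * e / k_B
print(f"300 GeV ~ {T:.1e} K")         # of order 10^15-10^16 K

# 1 fm/c as a time, in seconds
t = 1e-15 / c
print(f"1 fm/c  ~ {t:.1e} s")         # of order 10^-23 s
```

All three conversions land on the figures quoted in the comment: a TeV-mass hole weighs about 10^-21 grams, its temperature is in the 10^15-10^16 Kelvin range, and its lifetime is of order 10^-23 seconds.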
Besides this, I find it kind of funny that I occasionally come across this idea that these micro-black holes would 'sink' into the earth and collect at the earth's center. That most definitely wouldn't be the case - they would just go through and leave on the other side, even if 'slowly moving' or 'falling'. Why would they stop in the center of the earth?
I hope that helps.
Best,
B.
Hi Moshe,
Even after sleeping on it, I am still somewhat confused.
To begin with I think it would be helpful to disentangle the various issues.
1) The question I raised in the post above was whether or not the claim in the Bambi-Freese paper is correct, the claim I would summarize as "If there is a generalized uncertainty, then proton decay would be a problem". As I wrote, I think their conclusion, based on some estimates, is incorrect due to the lack of a consistent model. As it seems, so far nobody agrees with their reasoning anyhow.
2) The other question that was raised instead is whether or not one can expect a generalized uncertainty at all. From your above statement "there are some distances that cannot be measured with arbitrary accuracy, and some that can." it seems to me you don't argue there is no generalized uncertainty at all but that one
3) has to be careful to what 'distances' it applies. As I wrote above, the way in which I use the generalized uncertainty refers to resolution of structures (that must be a sentence which is in every one of my papers). I should also add that I don't actually start with a generalized uncertainty, but with a modified commutation relation - so the above mentioned thought experiments mostly serve as a motivation.
But despite that it isn't actually a central question to the Bambi-Freese paper, nor to the framework I use, I don't get your argumentation.
For one, I simply can't picture how you could measure the location of the CM of whatever to a precision better than the Planck length, if you can not resolve any structures better than the Planck length. Just take your object, divide it into Planck-size cells, and tell me how you do that.
Then I don't see what's wrong with the Adler/Garay arguments, but I understand you don't have the time to read them.
And finally, let me make a guess what you might mean (and excuse me if I am wrong). Are you saying the observable for the CM position has a distinct and sharp eigenvalue, and you see no reason for QG effects to play a role in that because the quantum effects would only be relevant for excitations? If so, obtaining the eigenvalue of an observable is a measurement that usually isn't explicitly described. It is this measurement and its implications that Garay/Adler/Mead etc. try to capture with their thought experiments. I don't know how one can evade this reasoning by pointing to an operator and saying it has a peaked eigenvalue, without taking into account some actual measuring process, typically a scattering experiment, which brings us back to the question of how well structures can be resolved.
Best,
B.
This is related, not quite the same issue but still about "dimensional" (in the LMT sense) issues in the physics of different spaces: It's true that some constants etc. must be different in spaces with other numbers of large space dimensions, but others do not have to be. For example, "c" can be L/T anywhere, even "h" can be ML^2/T. However, "G" has to vary with number of large space dimensions d, since g = GM/r^(d-1), from the Gaussian analog of gravity flux in other dimensions (at least low mass approx.)
That surely has implications when we have to understand special lengths etc. in physics that could pertain to various numbers of dimensions. I gave the example of gravitation, relevant here, but there is also the curious need for special constants when you try to extrapolate electromagnetism to other dimensionalities. The Coulomb law can be rather easily generalized to f = q1q2/r^(d-1), but just try to extrapolate the Larmor formula for radiated power and the simple Abraham-Lorentz eqn. for radiation self-reaction. They don't work out right because of the change in exponent of "L" in the dimensions of Q (when Q is dimensioned out directly, with no "k", as per Gaussian units.) So, EM radiation must be governed by peculiar rules, special constants etc, when d <> 3? Does that relate to gravity issues? tx
Neil',
Look, your questions are certainly interesting, and you're a nice guy that I like to have around. But I find it very annoying you cover my threads with completely unrelated questions. To repeat a phrase I must have said a thousand times, I am not a public ask-the-expert forum. If you have a general question about physics that has nothing to do with what I just wrote about (and am thus interested in discussing), please use one of the forums designed for that purpose. Thanks,
B.
Hi Phil,
Yeah, in fact, I too like the idea that our world has some kind of fractal structure, though I am not sure how that would in practice affect anything. I like to imagine if we look closer and closer into the structure of elementary matter we will eventually discover universes, that have galaxies that have stars, etc. Dolls in dolls in dolls, but they repeat after a finite order of magnitudes. (Consider that, a multiverse in every atom of your brain ;-)
(Besides, I think this kind of redundancy in structures is something that is very confusing about Tegmark's suggestion of the mathematical universe. I.e. I could imagine all kinds of mathematical 'structures' to have the same emerging features on larger levels, i.e. you'd have potentially huge equivalence classes, and I have no idea how one could sensibly tell them apart.)
Best,
B.
I'll try again. If there is a minimum length of the order of the Planck scale, then what does it mean when a tennis ball moving at 25 meters/second has a wavelength that is shorter than this minimum length? Compare to an object that has a wavelength of 10^(-10) meters.
Presumably one meaning is that all objects with a sufficiently high momentum (such as the tennis ball above) cease to behave in the familiar quantum way; not because of decoherence or many degrees of freedom or whatever, but simply because of the minimum length scale.
Is this an avoidable conclusion of having a minimum length scale? If the Planck scale was 10^-15 meters, would ordinary non-relativistic Schrodinger-equation quantum mechanics out in empty space still work?
Hi Arun,
from my limited understanding, you have exactly "reinvented" the soccer ball problem, in guise of a tennis ball problem ;-)
I was convinced that there is a post around here with a nice soccer ball, but I cannot find it...
Best, Stefan
I have just deleted a post from a person I do not want to see here anymore. Sorry.
Arun raises a very interesting point. As a practical matter, you can't do an interference experiment with tennis or soccer balls to measure their de Broglie wavelengths. But this has been done with "buckyballs" consisting of 60 carbon atoms. Always the de Broglie wavelength is there, for things much larger than elementary particles -- even as this decreases with the body's increasing mass. I think there is definitely a conflict if you want to create some minimum distance scale.
ReplyDeleteDear Arun,
So I think I did understand your question correctly, and if you re-read my above comment you will find that I already answered it (as Stefan also said). Yes, this is a paradox if you think about the relevant quantity being a mass, but not if you keep in mind that the relevant quantity that couples to gravity, and thus determines the strength of gravitational effects, is a density - not an integrated quantity.
Dear Stefan,
I too was looking for a post on that matter, but I think it only came up in the discussion to the thread I linked to in my above reply. The soccer-ball picture that you probably had in mind is from my slides.
Aside: Deleting Lubos' comment was unnecessary. Since I get the comments by email (and so, I believe, do those who are subscribed to this thread), let me summarize it as: he had nothing substantial to say (i.e. further insults, no scientific arguments).
Best,
B.
Hi Kris,
Please see my above reply to Arun. If you go to larger mass bound states (bucky-balls/tennis-balls/soccer-balls), the total energy can get larger than the Planck mass, but the density of these objects never even comes remotely close to m_p/l_p^3. That is the very reason why it is so incredibly hard to find any observables for quantum gravity. Best,
B.
Hi Bee,
I agree, if the issue is only the practical matter of what can be observed. (No way to observe quantum-mechanical interference in macroscopic balls.) But we are still faced with a possible violation of principle. We don't know at what specific mass or distance scale we should be allowed to disregard interference effects. (Another manifestation of the "measurement problem" in quantum mechanics where the boundary with classical mechanics is undefined.)
In this paper I've argued it may not be necessary to rule out gravitational interference effects in macroscopic bodies, which may exist even though we lack the means to measure them. (I think the Moon's de Broglie wavelength is less than the Planck scale. Its orbit may be quantized, although there is no way to check.)
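For a rough sense of the numbers in this exchange, here is a quick de Broglie estimate (illustrative masses and speeds, not from the thread). Interestingly, at the quoted 25 m/s a tennis ball's wavelength comes out a few tens of Planck lengths rather than below it, while the Moon's is indeed many orders of magnitude below:

```python
# de Broglie wavelength lambda = h / (m v), compared to the Planck length.
# Masses and speeds are rough illustrative values.
h = 6.626e-34      # Planck constant, J s
l_p = 1.616e-35    # Planck length, m

lam_ball = h / (0.057 * 25)        # ~57 g tennis ball at 25 m/s
lam_moon = h / (7.34e22 * 1.0e3)   # Moon at ~1 km/s orbital speed

print(f"tennis ball: lambda ~ {lam_ball:.1e} m, ~{lam_ball / l_p:.0f} Planck lengths")
print(f"Moon:        lambda ~ {lam_moon:.1e} m, far below the Planck length")
```

So the tennis ball at everyday speeds sits just above the Planck scale, and only much heavier or faster objects (like the Moon) fall clearly below it; either way, both are hopelessly beyond any interference experiment.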
Bee, this is a teaching day so this will be necessarily brief. Also, I have not read the relevant papers or thought about it in great detail, all I can offer is my knee jerk reactions, and anyone with more expertise is invited to ignore those as misguided.
So, for what it's worth I think Arun had his eyes on the ball. Suppose you measure its average position any way you like, for example getting better and better accuracy by making repeated measurements. We know that quantum mechanics is important when the accuracy reaches the de Broglie wavelength of the ball, we know that quantum gravity is important when the accuracy reaches the Schwarzschild radius of the ball, but I see nothing breaking down in the description of the ball as a classical non-gravitating object when you reach accuracy of the Planck length. It is simply not true that QG is important whenever anything with dimensions of length approaches the Planck length.
So, if for example you are getting better accuracy by making repeated measurements (over time, or in ensemble sense) the total energy involved in all those measurements will become trans-Planckian, but so what?
So, my knee jerk reaction is that any proposed quantum gravity effect that applies to a tennis ball sitting on my desk is likely already falsified. Sadly I don't have time for detailed debugging of all claims in the literature, so again this should all be taken with a grain of salt.
(I also know that you are well aware of the issue, but most of the generalized uncertainty models out there do apply to the tennis ball, and therefore don't pass my first filter).
This comment has been removed by the author.
Hi Moshe,
Well, as you say in closing, I am aware of the issue. I have nothing to add except what I already said above. Though e.g. considerations about virtual black holes in loops with a GUP (or likewise) are somehow interesting, it's too handwavy to get anybody anywhere.
Best,
B.
---------
Aside: after having had a brief look at Lubos' recent post, I just want to mention the crudest misrepresentations (that I have told him a dozen times, and that you find explained in all of my papers): a) I have no modification of the wavelength for free particles, photons or otherwise. b) Sabine teaches us "E = randomfunction(p, Planck scale)" - an equation which I have never used anywhere, and indeed argued against in several of my papers. c) "Analogously, new things appear when the energy exceeds the Planck energy but you must carefully interpret the term "the energy" in this sentence. For example, it can mean the total center-of-mass energy in a collision (and you start to create black holes when the energy is higher than that)." As I mentioned above, a center-of-mass energy above the Planck energy isn't sufficient, it also needs a small impact parameter.
Etc etc.
That tactic, btw, of pretending to 'reveal' constructed mistakes is a logical fallacy also known as the straw man:
"Straw man.
This is the fallacy of refuting a caricatured or extreme version of somebody's argument, rather than the actual argument they've made [...]"
It is somewhat more intelligent than argumentum ad hominem, but scientifically equally vacuous.
Dear Sabine,
Moshe has very patiently localized the focus of your problems, explained the wrong assumptions that lead you to these things, but you just decided not to listen and to forget everything.
I insist that my description of your ignorance about basic physics is fully accurate, you indeed do believe that there is e.g. a "lower bound on the wavelength" and you explicitly wrote this belief of yours in the thread above.
I have also added explicit links to your papers that contain the "randomfunction" to the text on my blog, to make clear that your comments about "straw men" are just pure lies.
It is really annoying to talk with someone who is ready to "deform" and introduce "uncertainty" to absolutely everything, including very explicit sentences that she has written herself.
You are not only a crank but a very dishonest one.
Cheers
Lubos
Dear Lubos,
I have also added explicit links to your papers that contain the "randomfunction" to the text on my blog, to make clear that your comments about "straw men" are just pure lies.
Your attempts to give even a minimal amount of credibility to your "arguments" are pathetic. The only instance in the paper you mention where I used an equation of the type you attribute to me is an example from somebody else's paper (eq (8), which refers to Amelino-Camelia, ref [49], and it is clearly marked as such).
Also, it seems to have escaped your attention, but the whole point of that specific paper was to clarify the differences of my model to other models that are used (e.g. those by Amelino-Camelia), so it is small surprise they are mentioned there. You find that essential piece of information for example in the first sentence of the abstract.
you indeed do believe that there is e.g. a "lower bound on the wavelength" and you explicitly wrote this belief of yours in the thread above.
The thread which is about somebody else's paper, that I started with a general motivation for the topic considered?
I have explained all the details of my model to you repeatedly. I admittedly can't decide whether you are indeed unable to recall it or whether you just chose to forget it when it seems to be to your advantage.
Best,
B.
Just in case somebody should actually be interested: the examination of the paper Lubos mentioned above was continued in this later paper, which studied the issue of whether or not DSR acts on free particles (in my model it doesn't, in AC 'test theories' it does). I'd be happy to answer open questions. There are some open questions that I can't answer; you find these usually in the conclusions of my most recent paper, which is currently this one.
I think there is a serious flaw in the assumption that a photon whose wavelength is equivalent to the Schwarzschild radius will suddenly turn into a black hole.
Since we define a black hole as an object with an event horizon, the implication is that we have a particle that can't escape itself.
So the photon that is a packet of information traveling at the speed of light can't escape itself by emitting a photon moving at the speed of light?
The next question is whether the high energy photon will energize pair production processes.
The answer is: Sure, why the heck not? That process can occur without violating any conservation laws, and happens all the time in high energy physics.
Finally, what is time to a photon? If you extend special relativity to objects traveling at the speed of light; time in that localized space appears to be at a standstill.
So time will be slower somehow? I'm not sure what that means, which makes me think that the paper in question is a gross misrepresentation of the situation in general, and arguing that it is impossible for a hypothetical theory to lead to nonsensical contradictions is somewhat ambitious on the part of the authors.
Hi All:
After some discussion with my husband on the recurring Lubos-issue, I thought some reader advice would be helpful.
Lubos' insults of my person and that of several of my friends and colleagues (and many others) are publicly available and show up in Google searches for papers, names, keywords - which is exactly his intention. I am not a lawyer, but I am sure he repeatedly violated laws of various countries, and Harvard made a very wise decision to get rid of him.
One of the reasons I added the post scriptum (as mentioned above) is that I think readers of his blog should at least occasionally have an opportunity to find out what is behind his accusations (usually: nothing), and we all know there is no point in commenting at his place, so the natural place for me to do that is here. I find it very funny, actually, to be criticized for defending myself.
I just do not want disregard to appear as tolerance of his writing, and I am not sure how to do that without occasionally pointing out that his accusations are as inappropriate as they are wrong. I do not think this is actually an issue in the scientific community itself, but it is possibly relevant for those who are not familiar with the topics, and who read blogs to get a taste of what theoretical physicists do today. And since there are always new people coming around (few of whom check the archives, as I had to notice), how can one assume they all understand the reason why nobody pays attention to his rants?
On the other hand, I have no specific wish to engage in discussions that seem to be inevitably attached with name-calling, and I also don't want to put off readers. (Besides this, it seems to upset my husband.)
Any advice is greatly appreciated.
Best,
B.
Hi Elias,
I think there is a serious flaw in the assumption that a photon whose wavelength is equivalent to the Schwarzschild radius will suddenly turn into a black hole.
Yes indeed, this assumption is complete nonsense, and I don't know where you got it from.
Best,
B.
Bee, first of all you have my sympathies about Lumo. His own writing looks brilliant enough (to my middle-brow view), but healthy smart people aren't inclined to bitterly rail against others the way he sometimes does. I think it relates to his rightist political slant (despises belief in AGW, enthused about nationalism), just look at the discourse we get from Ann Coulter, Limbaugh, et al. Yes many conservatives are quite polite, it's just a pattern we can see.
You have well handled your responses to him, being firm enough without being corrupted in like manner. It is then maybe better to go ahead and show his comments, since your replies serve to display your own character.
As for my own goof-up here, I'm sorry my question seemed/was OT, Bee - I'll tamp things down again (to use a now obscure word). I did think it might have relevance to the question of a minimum length (admittedly, not its "dangerous implications"). I shouldn't have tossed in EM issues, so aside from them, it is true that MLT means that the Planck length needs a different formula in different spaces. That is considered relevant in the very interesting and useful pdf at www.hep.caltech.edu/~phys199/lectures/zweibach.pdf. There I found (per my expectations) that the exponent on G in the formula for l_P varies with space dimensionality. Dimensional analysis enters into Zweibach's discussion of G and how the Planck length in our space is conditioned by the scales of the various other space dimensions. Here's what looks like a "money quote":
Although the Planck length l_P is an important length scale in four dimensions, if there
are large extra dimensions, the truly fundamental Planck length would be much bigger
than the effective four-dimensional one.
This may still not really be relevant to your point, so I'm staying away from this topic after this. I'm not sure whether I get what's relevant at the high-brow end.
Hi Neil',
Thanks for the sympathy, which is appreciated. Yes, the dimensionality of the gravitational coupling constant depends on the number of dimensions. The possibility that a higher dimensional 'truly fundamental' Planck length could be much larger than 10^-20 fm (or so) is the idea behind Large Extra Dimensions (that also makes a brief appearance in the above post). If that scenario turned out to be a description of nature, then I think one would also expect the 'minimal length' to be not as far off as presently thought. Was that what you meant to say?
Best,
B.
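The dimensional bookkeeping behind the quoted lecture notes can be sketched as follows (standard large-extra-dimensions reasoning, not taken verbatim from this thread): in D spacetime dimensions the Planck length is built from the D-dimensional Newton constant, and compactification relates it to the familiar four-dimensional value.

```latex
% Planck length in D spacetime dimensions:
\ell_P^{(D)} = \left( \frac{\hbar\, G_D}{c^3} \right)^{\frac{1}{D-2}}
% (for D = 4 this reduces to the usual \ell_P = \sqrt{\hbar G / c^3}).
%
% Compactifying the n = D - 4 extra dimensions on a volume V_n gives
% G_4 = G_D / V_n, and therefore
\left( \ell_P^{(D)} \right)^{D-2} = V_n \left( \ell_P^{(4)} \right)^{2}
```

If the compactification volume V_n is large, the fundamental length scale comes out much bigger than the effective four-dimensional 10^-35 m, which is exactly the scenario the quote and the reply above refer to.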
Advice: up to a certain point in his history, it was worthwhile to refute LM; after all he *was* a prof at Harvard. But things are changing fast. First, his posts on the arrow of time came as a shock to many people I know: not only were they wrong, they revealed a basic lack of understanding of simple statistical mechanics. Also, LM genuinely seemed not to realise that absolutely nobody in the community agrees with him. As AA wrote on TRF [before giving up posting]: "Is it a joke?". That made many people realise that his much-praised expertise in *physics* was lacking in some directions. Some of the things he has said about general relativity are also astonishingly amateurish.
Next, of course, came his departure from Harvard, combined with a failure to put anything substantial on the arXiv for a long time.
Finally we have the Bogdanov book.
In short, LM has gradually become so notorious as a *physics* crank --- I deliberately leave out any of his other quirks --- that being criticised by him no longer looks bad on your "record" on the internet; for some people it is a sign that you must be doing something right. So if that is your only motivation for mentioning him, then I suggest that you drop it.
Also, you will have noticed that nobody but his standard gang of strange people comment on TRF any more. LM is sinking into obscurity at a rapid rate. I mentioned his name to a famous person at MIT and that person obviously labored to recall who I was talking about.
The only justification I can think of for mentioning his stuff is that some of his errors and misunderstandings are in fact fairly widespread; because of his arrogance he is willing to parade these errors in a blatant way, for all to see. But precisely because these errors are so widespread it should be possible to find examples of them elsewhere.
In short, my advice is: go for a total ban.
Hi Dr. Who,
Thanks. I hardly read anything on TRF during the last year, and I never read the comments, so I wasn't really aware it even got worse. I only occasionally stumble across his writing when something shows up on Google, or if Peter makes a remark (for whatever reason I usually feel compelled to 'check sources'). If you think it is clear that his writings are of low quality then your advice to ban him is probably indeed the best idea. Since you mention it, I too noticed that the attention that is paid to him in the community has dropped to almost zero. There was a time when people would every now and then talk about some of his posts, but at least all of those who I know have stopped reading TRF.
Best,
B.
Hi Bee,
“Any advice is greatly appreciated.”
Unfortunately the advice I would like to offer cannot be acted upon in the context of your age and the place in time we exist. Plainly your antagonist is the lowest of creatures, and yet unfortunately is not unique or rare, for he is simply a bully. In the times I grew up, when one was a child this inevitably became a problem for two types of people: those who were vulnerable, and those who were exceptional. The first became the target of the bully because they posed no risk to the perpetrator; the second were assaulted out of envy. The attacks in both cases result from the same reason, which is the bully's fruitless attempt to lessen deep feelings of inadequacy and insecurity. My bully I confronted at the age of ten, and ended the threat in a pugilistic manner. However, shortly after I was overwhelmed with a personal sense of failure and feelings of remorse mixed with sympathy for my antagonist.
With the wisdom of that experience and the level of persons I’m talking about my advice would be to understand that your antagonist will still remain in pain despite any course of action you might take. So I suggest you then pity him and wish that he may soon be able to deal with his issues. I also wish that he realizes this before someone of lesser character and understanding takes action that would be of the sort when I was ten.
Best,
Phil
Well Bee I wasn't adept enough to know exactly what the point should be, but had the general idea. I am still not sure how Lisa Randall's ideas fit in with the now "old fashioned" ideas of tiny extra dimensions. In any case, the millimeter-size (or so) curled EDs are not viable, now that gravity is known to be 1/r^2 to a fraction of a mm, true?
As for disturbing commenters, one strategy is to "hold" their posts, and you can let them on if not so bad, some people also filter for certain phrases etc.
"this generalised uncertainty principle does not allow to resolve space time distances below the Planck length"
"Little is known about the fundamental theory valid at Planckian energies, except that it necessarily seems to imply the occurrence of a minimal length scale, providing a natural ultraviolet cutoff and a limit to the possible resolution of spacetime."
These are your and your colleagues words.
A region of space that prohibits a free exchange of information has some very distinct problems.
If your photon's wavelength approaches that minimum length, what happens? Is it absorbed into the minimal volume? Is it repelled? The only definition I find suitable in your schema is that of an event horizon.
The small regions of space would represent minuscule blackbodies of maximal local entropy. This sounds more and more like a black hole.
If we follow this line of reason, my statement seems perfectly natural under the circumstances.
I think the existence of minimally extended objects is a natural consequence of mathematics, and not unexpected. However, the role those objects play in physics needs to be carefully defined.
Another problem with minimal length is how it is applied to the notion of wave number (the inverse of wave length); does that have a minimal length too? (Implying a low energy cut-off?)
I think the notion that it is impossible to resolve objects below a certain threshold length is misleading. Our ability to resolve anything is strictly based on our ability to formulate strong cause and effect relationships. Dirac realized this and thus understood that physics relying on strictly physical probes was not a viable endeavor. Our adventures into the quantum realm rely heavily on the notion that certain inputs will result in certain outputs. It is the underlying mathematical entities (and how they are connected to the physical world) which act as probes, not the physical object itself.
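The resolution bound quoted above can be made concrete. In the commonly used form of the generalized uncertainty principle, the position uncertainty obeys Δx ≥ 1/Δp + L²Δp (in units with ħ = 1, with L the minimal length), so there is a momentum beyond which probing harder makes the resolution worse, not better. A minimal numerical sketch (the prefactors are conventions, not taken from any particular paper):

```python
import numpy as np

# Generalized uncertainty principle, common form in units with hbar = 1:
#   dx >= 1/dp + L**2 * dp
# where L is the minimal length scale (e.g. the Planck length).
L = 1.0  # minimal length in arbitrary units

dp = np.logspace(-2, 2, 2001)     # range of momentum uncertainties
dx = 1.0 / dp + L**2 * dp         # GUP lower bound on position uncertainty

# The bound has a global minimum: pushing dp beyond 1/L no longer
# improves the resolution, it degrades it.
i = np.argmin(dx)
print(f"dx is minimized near dp = {dp[i]:.3f}, with dx_min = {dx[i]:.3f}")
# analytically: dp = 1/L and dx_min = 2L
```

Minimizing analytically gives Δp = 1/L and Δx_min = 2L, which is the sense in which no structure below the minimal length scale can be resolved.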
Hi Bee! It seems we have been put in the same boat. A good theory can survive the criticism.
Hi Bee,
I can understand your frustration and desire to respond to Lubos' attacks,
but really I think you would be better off to just ignore him and his posts and delete any comments here (and without mentioning or responding to them). At this point the empirical evidence for the futility of trying to engage him in any remotely serious discussion is comparable to the evidence for Newton's theory of gravity in the solar system.
While his writings might come across as "brilliant" to some non-physicists, the unserious and strawman-like nature of his attacks should be pretty obvious to professionals. The opinion of anyone who is impressed by Lubos probably isn't worth caring about, so you shouldn't.
On the other hand, for some reason (being too easily amused) I find his ravings entertaining - minus the personal attacks on people like you and sexist/racist rubbish - so for selfish reasons I would like him to be encouraged in it... But it's not fun to see the frustration he causes you
here, so please just ignore/delete him. From an entertainment perspective it's best when he gets together with a kindred spirit, so I wish he would go back to a certain other blog and rant and rave with the proprietor there like in the good old days.
P.S. Good luck with your job search!
I have not had time to read all the comments, but let me remark one thing: One has to be really careful what kind of quantities one considers. In general it's not true that the Schwarzschild radius is the limit to resolution: for the sun it's about 3 km, but of course the cm position of the sun is known to much higher precision (I would guess), and also by scattering the sun off other things I would guess I can determine structures below that scale.
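The scale Robert refers to is easy to check: the Schwarzschild radius is r_s = 2GM/c², which for the sun comes out to roughly 3 km. A two-line sketch with rounded constants:

```python
# Schwarzschild radius r_s = 2*G*M/c**2, evaluated for the sun.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

r_s = 2 * G * M_sun / c**2
print(f"Schwarzschild radius of the sun: {r_s / 1000:.2f} km")  # about 2.95 km
```

This is indeed minuscule compared to the precision with which the sun's center-of-mass position is known, which is Robert's point.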
Hi Neil,
ReplyDeleteI am still not sure how Lisa Randall's ideas fit in with the now "old fashioned" ideas of tiny extra dimensions. In any case, the millimeter-size (or so) curled EDs are not viable, now that gravity is known to be 1/r^2 to a fraction of a mm, true?
I am sure you find answers to this question if you read the post I linked to above, or the one about the early extra dimensions. I don't like to repeat myself unnecessarily. What you say is true, that strongly disfavors the case with d=2, but doesn't affect the cases with more than two extra dimensions.
As for disturbing commenters, one strategy is to "hold" their posts, and you can let them on if not so bad, some people also filter for certain phrases etc.
I considered that, but it's too much effort. I don't want to spend more time on that blog than absolutely necessary. I.e. each time you are wasting my time, you add an argument on the side of blogging less.
Best,
B.
Hi Robert:
I have not had time to read all the comments but let me remark one thing: One has to be really careful what kind of quantities one considers. In general it's not true that the Schwarzschild radius is the limit to resolution:
Well, that is an interesting interpretation. I don't know who you think claimed this was the case.
Hi Louise,
Yeah, I noticed Lubos' confused mix-up included both of us. I hope you don't take his insults too seriously.
Hi Elias,
If your photon's wavelength approaches that minimum length, what happens? Is it absorbed into the minimal volume? Is it repelled? The only definition I find suitable in your schema is that of an event horizon.
The small regions of space would represent minuscule blackbodies of maximal local entropy. This sounds more and more like a black hole.
I think you misunderstand several points. For one, a black hole is a black hole is a black hole. If you take a particle and boost it, it doesn't become a black hole. Then, in my model a free particle (photon) behaves just like usual (the reason is that otherwise you could boost particles in and out of the qg regime, which is something I can't make sense of; I've even written a paper on that). The sentences you quote sound somewhat familiar to me, so I guess it was me who wrote them. If you re-read them you will note that I spoke very carefully about a 'resolution of space-time', please see my exchange with Moshe above on that matter.
Another problem with minimal length is how it is applied to the notion of wave number (the inverse of wave length); does that have a minimal length too? (Implying a low energy cut-off?)
At least in my model, yes, and this is the motivation for setting it up to begin with. You find all that in the previously mentioned posts. I am presently not in the mood to comment on other people's work, something that I have done in various previously mentioned papers. Feel free to download and read them, that's what they are there for.
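To illustrate the kind of relation at issue here, a bounded wave number can be pictured as a function k(p) that grows linearly with momentum at small p but saturates at the inverse minimal length 1/L. The tanh form below is a toy sketch chosen purely for illustration; it is not the specific relation used in the papers:

```python
import numpy as np

L = 1.0  # minimal length (e.g. the Planck length) in arbitrary units

def k(p):
    """Toy wave-number relation: linear for small p, bounded by 1/L."""
    return np.tanh(L * p) / L

p = np.array([0.01, 1.0, 10.0, 1e3])
for pi, ki in zip(p, k(p)):
    print(f"p = {pi:8.2f}  ->  k = {ki:.6f}")
# k never exceeds 1/L, so the wavelength 2*pi/k never drops below 2*pi*L:
# a UV cutoff on the wave number, however large the momentum becomes.
```

Any relation with this qualitative behavior implements a finite resolution without requiring spacetime itself to be discrete.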
Best,
B.
Hi Amused,
Thanks for letting us know your opinion.
I think you would be better off to just ignore him
My problem isn't only his attacks on my person, but that he covers a whole spectrum of my friends and colleagues in the same spirit (in many instances apparently for no other reason than that they know each other). Luckily, so far most of them don't seem to notice (I certainly don't tell them), and those who notice don't care. It is only a matter of time however until somebody comes along (new grad student? new postdoc?) who is engaged enough online (blogs?) to be bothered (preferably a woman and/or member of other minority groups). Lubos has a whole series of bloggers who have fallen victim to his insults, and I think this behaviour is far from tolerable. Even if one or the other reader finds his insults amusing, public unjustified name-calling isn't something that a working society or community can ignore. The tactic of looking away is easy, but doesn't always lead to the desired results. Do you know that quote:
“Sticks and stones may break my bones, but words will make me go in a corner and cry by myself for hours.” ~ Eric Idle
Best,
B.
Hello Sabine,
surely you must know by now that Moshe and Lubos are right and your work is factually wrong. I understand that can be very hard to accept, but what's the alternative? Do you want to follow the path of self-deception and lies to defend a lost position? Consider this: it will be a huge relief to acknowledge the mistakes you made. While perhaps difficult, it would certainly be the right thing to do in this situation. I wish you luck in finding the strength and honesty to face reality.
Best,
Michael
That's the obvious mess when dealing with this pathological Lubos guy: You instantaneously have to deal also with all his pathetic anonymous groupies :-)
Michael
Hello Michael,
surely you must know by now that Moshe and Lubos are right and your work is factually wrong.
I appreciate your concern about my wellbeing, and your kind wishes that I might have the strength to face my alleged self-deception. I for my part hope that your wisdom and insight will eventually help you to distinguish between the reality of science and paranoid fantasies.
Best,
B.
Bee,
First, I will give you props for taking the time to respond to my posts. I consider that a polite gesture on your part (a trait you oddly share with Mr Motl); I kindly appreciate your responses.
As for my concern, my comment is about the low energy consequences of a discrete space. I am still investigating this aspect, but I am primarily concerned with how this impacts our notions of absolute zero, and the approachability of that limit. I think a discrete space is inadequate for describing physics in this area. I will refrain from making any bolder statements than that until my arguments are more developed.
I still think you will eventually come to the realization that one needs to be very careful about how to use minimal length in physics.
"While his writings might come across as "brilliant" to some non-physicists, the unserious and strawman-like nature of his attacks should be pretty obvious to professionals. The opinion of anyone who is impressed by Lubos probably isn't worth caring about, so you shouldn't."
This is elitist snobbery. However, since you know how to juggle; I will cut you some slack as one professional to another.
I would think it would be interesting to have anyone ask if there is a relativistic approach to any superfluid produced in particle collisions?
Navier-Stokes equations?
If this were the case, then would it not reveal that new thinking must be considered, since we now know that the energies reached can and do provide new information (in continuum) with which to look at black holes, and to speak about "quantum gravity" in new ways?
If such an approach is in context of "a continuum," then such a feature might be related to how "genus figures" may be of use?
Sorry for the stupid questions.
Elias,
as a coauthor of this paper that started Bee's work on "minimal length" physics, I'd like to emphasize once more that this is not about a discrete space, as you seem to believe. I guess that's not the first time this has been mentioned in this thread.
So, before arriving at benevolent suggestions what to do or not to do in the future, could you (and your other concerned co-commenters) please at least try to pretend to have read the papers you are talking about?
Sorry if this sounds a bit harsh, but this comments section completely depresses me.
Thanks, Plato, for dropping in, cheers me up!
Best, Stefan
Hi Elias + All,
I was about to say the same as Stefan. This comment section is unfortunately a very good example of how poorly scientific discussions on blogs run. The only person who said something substantial that was actually related to the post I wrote was Moshe.
The large majority of the rest apparently feel they have to let us know about their idea of a minimal length and why they like or don't like something. Whenever somebody has a problem or a personal disliking - be it discreteness, particles collapsing to black holes, the multi-particle limit, measurements of the horizon location? or whatever - he apparently feels compelled to attack me for his own confusion. Even better are those who claim others have already made their point without apparently having read what was said, or about what.
The only reason why I didn't delete this comment by Michael is that it is utterly ridiculous. The 'scientific' level of these comments, my friends, is non-existent, and it is not a discussion I have any interest engaging in.
As an aside, I want to mention something that I noticed isn't obvious to everybody: typically, part of the discussion about the threads I write is led privately, outside the comment section. Especially when it comes to criticising other people's work (in case you haven't noticed, the post is about somebody else's paper) not everybody wants to publicly share his or her opinion - something that I can very well understand.
And if you are still entirely convinced you know everything better, why don't you just go and publish your 'insights' and 'facts' in a peer reviewed journal. Good luck.
Best,
B.
Dr. Who's remark from above about "fairly widespread errors and misunderstandings" just came back into my mind, so let me clarify this discreteness matter.
A discrete spacetime with a finite stepsize has a 'minimal length' (A), and because of this there is a fundamentally finite resolution that can possibly be achieved (B). Yet, if one has a fundamentally finite resolution (B), that does not mean spacetime is discrete (A).
To make that very clear: (A) => (B), does not imply (B) => (A).
At least I can picture maximally localized states in a continuous background, and I can imagine that every structure below a certain distance can never be uncovered even though the space it sits in and its location are completely continuous. (One might in this case maybe wonder whether it is meaningful to say that there 'is' a structure below that scale at all.)
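The asymmetry between the two implications can even be checked mechanically. A trivial sketch, taking A as "spacetime is discrete" and B as "resolution is fundamentally finite":

```python
def implies(a, b):
    # material implication: "a => b" is false only when a holds and b fails
    return (not a) or b

# B can hold while A fails: finite resolution in a continuous spacetime.
A, B = False, True

print(implies(A, B))  # True:  this case is consistent with A => B
print(implies(B, A))  # False: yet B => A fails in exactly this case
```

The case A = False, B = True is the counterexample: it never violates (A) => (B), but it falsifies (B) => (A).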
Maybe that helps?
Best,
B.
Bee, as an alternative to completely ignoring Lubos you could add a short postscript along these lines:
"Apparently Lubos has written some more charming things about me and my work on his blog. For the info of any new readers who arrive here from there: my policy is not to bother reading and responding to what he writes, since past experience has shown this to be a complete waste of time. The work he attacks has been published, so if he has any serious criticism to make he can go publish it in the scientific literature, then I'll respond to it there. Otherwise I won't bother with what he has to say and will continue to regard him as the joke that he is.
As usual, any comments by Lubos or his surrogates will be deleted here so that I don't have to waste my time responding to them."
I can understand your worry about the effect his attacks could have on others, but hopefully after discovering his track record in the blogosphere they will realize it isn't something to be bothered about. It's analogous to getting barked at by a dog in the street: somehow, something about you triggers an instinct of aggression in the creature, but it's nothing you need to care or worry about.
An FYI, tangent to this post: Scott Aaronson and Robin Hanson, in their respective blogs, are discussing a bias in scientific publications toward 'technical' results over 'conceptual' results. Scott says: People will often say, "sure, but as soon as you've asked the question / defined the model that way, the answer is obvious." They recognize, but don't sufficiently appreciate, the fact that before the paper in question no one had asked the question or defined the model that way.
He's right. And this could be an interesting topic to include in your Fall conference, Bee.
Bee points out some basic flaws or hidden assumptions in people's thinking and I'll continue in that vein.
Firstly it's silly to think of continuous versus discrete as some kind of either/or dichotomy. But that's not the dichotomy I want to talk about.
In most physical models there is a strict dichotomy between space (or spacetime) on the one hand, and fields, particles etc. sitting in that space, on the other hand.
This dichotomy is obvious in Newton's gravitation or in Maxwell's equations, but this dichotomy can also be clearly seen in GR and QFT (modulo caveats; but they certainly involve an underlying manifold).
But perhaps in an even more fundamental theory there simply cannot be such a dichotomy, so that one simply cannot talk about space (or spacetime) as an entity in its own right, except as an effective approximation. Thus the approximate dichotomy will break down in certain extremes and "short distances" are exactly one of those extremes where the concept of an underlying space simply becomes meaningless.
So it's really a mistake to assume that there must be some underlying space.
Hi Amara,
Thanks for pointing this out, this is interesting! I will keep it in mind for the conference.
Hi Amused,
That was about the content of this writing. See, I could ignore Lubos, but I've had to notice that some of my friends and family are seriously disturbed by reading public insults about me. And since I for my part am more bothered when he goes on about my friends, I can relate to this. As you can see from e.g. Michael's comment above, there are indeed people reading this stuff who are too far detached from the topic to have a basis to judge whether any of this makes sense. Or maybe they just don't care.
A friend of mine mentioned that his popularity arises mostly because he has become a publicly available freak show. This kind of seems to go along with what you say above.
As an aside, it seems Lubos has re-written his text again. After I pointed out above that the reference he came up with doesn't contain what he claimed it did, he now essentially says he couldn't find any reference that supports his accusation, but he is perfectly sure it must be in one of the papers that I wrote. Since he didn't read them, the reader should go look for themselves (comes with a link to part of my publ list on SPIRES). It's quite an astonishing demonstration of self-confidence combined with a completely distorted perception of reality.
Hi Phil,
Indeed, I find the story somewhat tragic admittedly. After all, Lubos is without doubt intelligent, and I don't understand why he has wasted his talent this way. On the other hand, I am really fed up with his bullshit. I was hoping that he'd calm down once out of Harvard, but recently it's gotten worse again. Now I have to wonder where this is going. Best,
B.
Bee
"To make that very clear: (A) => (B), does not imply (B) => (A)."
I agree entirely with this particular point. And that's my point.
I confess to being a pigheaded, stubborn, opinionated person (a positive trait in my line of work), and that invariably leads to a certain level of hypocrisy that is exhibited by ALL people; however, I will admit that I haven't read the papers in question in depth.
I will read them in depth; however my specific point is that there are very bold statements regarding our ability to obtain knowledge of events that occur below a certain scale. This is what I interpret as implying a discrete space.
If you say that it is impossible to gather information below a certain scale (a cutoff) it requires one to ask what are the implications of such a thing.
That low scale cutoff defines a boundary, and a boundary implies an object defined by its surface area. What is that object?
It seems arbitrary to simply insert the existence of such a thing into physics, regardless of cause.
Perhaps that is not what the intent is; perhaps that is not what is actually being said, but I am struggling to grasp another interpretation.
Off-topic - Lumos should have been luminous, instead collapsed into some type of dark matter.
How to help people not self-destruct - this is another challenge to our ingenuity gap.
Good day
I regularly read several blogs dedicated to HE physics, because blogs tend to become a new vector for disseminating information that competes with/completes the traditional channels (published papers).
In many ways it is a very free communication channel, which leads to the mechanical consequence that the proportion of very stupid AND very original ideas increases as compared to the traditional channels, where the "originality band" is extremely narrow.
However I very seldom post, for many reasons that are irrelevant to the issue here.
This time I will make an exception, not because I want to offer comment on the minimal length (I broadly agree with Moshe) but because you explicitly asked for opinions on how to handle "the Lubos problem", and that issue seemed interesting to me from the reader's perspective.
Both your and Lubos' blogs belong to the list of blogs I visit.
Obviously I am neither your nor Lubos' "fan", but I find that both blogs provide interesting comments, by which I mean interesting to my professional endeavours.
I am not much bothered by the fact that you consider Lubos vacuous and Lubos considers you a crank.
Personal hostilities among people always have a reason, and I do not think that most blog readers are interested in those reasons - not that they can know them anyway.
So how to handle the Lubos problem?
By simply stating that you don't like him and he doesn't like you, for a lot of personal reasons that are no one's business.
There is no law that everybody should like everybody, and this includes but is not limited to words like "crank", "vacuous" etc.
There is no rational way to distinguish between two dislikes, so readers who define themselves as mostly rational will not react to emotionally loaded words.
Perhaps you don't like Lubos and he doesn't like you because you have different political opinions or different philosophical preferences or whatever, but who cares?
So if the priority parameter that you have in mind in order to define your behaviour on the web is the readers of your blog (that's the natural parameter that springs to mind, but you might have others), then you should realise that most people who come here with a scientific interest don't care about personal likes or dislikes.
In other words, they don't really see the Lubos problem as a problem that might modify their reading habits here or elsewhere.
That is exactly the same opinion that I would offer to Lubos if he asked his blog readers one day about the "Bee problem".
Lubotomy n. An attempt to dissociate oneself permanently from Dr. Lubos Motl, normally following insults by the same. The operation is not always successful, often leading to severe headaches and paranoia attacks.
ReplyDeleteJust wanted people's thoughts
http://www.newcomensengine.com/2008/03/give-me-beautiful-hair-srlet.html
"a physicist", your lack of social awareness is appalling. There really are such things as predators and victims. A victim is not just someone who has exercised their optional personal opinion to enter into a mutual agreement with the predator that they will engage in a private dispute. A victim is a victim.
Anyway, I wish there was still some scientific dialog in this thread. So I will ask Bee, what do you think of my comment above, ending with "So it's really a mistake to assume that there must be some underlying space."
Hi Elias,
Thanks for the link, it is an interesting point of view you mention. However, if you read the first sentence of the Wikipedia entry you link to, you will find that it is not a 'celebrated result' but a 'conjectured equivalence'. Very possibly a 'celebrated conjectured equivalence', but still conjectured. It is not apparent to me why it would have been mandatory for Bambi+Freese to comment on Hawking's paper.
Hi Mathematician,
The reason why I didn't reply to your above comment is that you arrive at a 'Thus' as if it followed from a previous statement, but even if I try hard, there is no argument from which your conclusion "it's really a mistake to assume that there must be some underlying space" follows. I agree - and I think many people find the idea intuitively appealing - that one would hope there is on a fundamental level no distinction between 'space' (+time) and 'things in that space', and thus no 'underlying space'. It is however a construct that so far works extremely well. Many people argue that dissolving this distinction is the way to solve our problems with quantizing gravity (or the other way round, that quantizing gravity is the way to erase this distinction), and the idea that the concept of space is itself somehow emergent is certainly interesting and of some popularity, but I am not aware of any formal proof why this must necessarily be the case.
Best,
B.
Bee,
Thank you for the kind comment.
Regarding Bambi and Freese; I will always maintain that any exploration of the limits of a physical theory is a legitimate exercise. However, they relied heavily on an assumption that is, at a minimum, strongly disfavored by theories using the AdS/CFT correspondence.
As for the AdS/CFT correspondence "conjecture vs result" issue; guilty as charged, but awaiting appeal.
Just another twist; the Bambi/Freese paper might be used as an argument in support of AdS/CFT correspondence; they have pointed out that violations can occur under lesser theories, but Hawking has shown that this won't happen if AdS/CFT is true.
Bee,
In my defense, my use of "thus" occurred while a "perhaps" remained in force. This may have been unclear, as I don't know how to do an "end perhaps" symbol in plain English; maybe [/perhaps] or something. Perhaps if I had used "would" instead of "will" then the continuing perhapsedness of the paragraph would have been more apparent [/perhaps].
Also my last sentence essentially said
"It is not the case that there must be some underlying space."
which is not the same as the following sentence, (which I certainly would not assert unless I thought I could justify it)
"It is the case that there must not be some underlying space."
The words don't commute, of course. My assertion was merely that the possibility of there not being some underlying space is one that should not be overlooked or dismissed. (Also it is not unreasonable to believe this possibility may be real, and in fact I personally believe it. And it's just a good idea to be aware of the possibilities.) I'm sure you, Bee, don't overlook or dismiss it, but it is clear that some posters were overlooking or dismissing it, especially when they make unwarranted conclusions that something must be the case. That really gets my hackles up, Grrr!
Hi Mathematician,
Sorry for the misunderstanding. I'd agree that 'the possibility of there not being some underlying space should not be overlooked or dismissed'.
Hi Elias,
I think the point of the paper was to use as few assumptions as possible to draw a conclusion as general as possible. What you say could be one solution to the problem (if you think their conclusion holds), but not necessarily the only one.
Best,
B.