1. Entropy doesn’t measure disorder, it measures likelihood.

Really, the idea that entropy measures disorder is not helpful. Suppose I make dough: I break an egg and dump it on the flour, add sugar and butter, and mix until the dough is smooth. Which state is more orderly, the broken egg on flour with butter over it, or the final dough?
I’d go for the dough. But that’s the state with higher entropy. And if you opted for the egg on flour, how about oil and water? Is the entropy higher when they’re separated, or when you shake them vigorously so that they’re mixed? In this case the better-sorted state has the higher entropy.
Entropy counts the number of “microstates” that give the same “macrostate” (more precisely, it is proportional to the logarithm of that number). Microstates contain all details about a system’s individual constituents. The macrostate, on the other hand, is characterized only by general information, like “separated in two layers” or “smooth on average”. There are a lot of states for the dough ingredients that will turn into dough when mixed, but very few states that will separate into egg and flour when mixed. Hence, the dough has the higher entropy. Similar story for oil and water: easy to unmix, hard to mix, hence the unmixed state has the higher entropy.
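If you want to see this kind of counting in action, here is a little toy script (my own illustration, using a made-up lattice model rather than actual dough): it places k “egg” particles on N sites and compares how many microstates belong to the macrostate “all egg in the left half” with how many belong to “evenly spread”.

```python
from math import comb

# Toy counting model (illustrative only): N lattice sites, k "egg" particles,
# the rest "flour". A microstate is the exact set of sites holding egg; the
# macrostate only records how many egg particles sit in the left half.
N, k = 40, 20
left = N // 2

def microstates(egg_in_left):
    # ways to place egg_in_left eggs in the left half and the rest in the right half
    return comb(left, egg_in_left) * comb(N - left, k - egg_in_left)

separated = microstates(k)       # all egg on the left: the "unmixed" macrostate
mixed = microstates(k // 2)      # even split: the "well mixed" macrostate

print(f"unmixed microstates: {separated}")
print(f"mixed microstates:   {mixed}")
print(f"ratio mixed/unmixed: {mixed / separated:.3g}")
```

For non-interacting particles the mixed macrostate wins by an enormous margin, which is why mixing is the typical case; with interactions, as in the oil-and-water example, the counting of accessible states can come out differently.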
2. Quantum mechanics is not a theory for short distances only, it’s just difficult to observe its effects on long distances.
Nothing in the theory of quantum mechanics implies that it’s good on short distances only. It just so happens that large objects we observe are composed of many smaller constituents and these constituents’ thermal motion destroys the typical quantum effects. This is a process known as decoherence and it’s the reason we don’t usually see quantum behavior in daily life.
But quantum effects have been measured in experiments spanning hundreds of kilometers, and they could span longer distances if the environment is sufficiently cold and steady. They could even span entire galaxies.
3. Heavy particles do not decay to reach a state of smallest energy, but to reach a state of highest entropy.
Energy is conserved, so the idea that any system tries to minimize its energy is just nonsense. The reason heavy particles decay, if they can, is that the decayed state is by far the more likely one. If you have one heavy particle (say, a muon), it can decay into an electron, a muon-neutrino and an electron anti-neutrino. The opposite process is also possible, but it requires that the three decay products come together. It is hence unlikely to happen.
This isn’t always the case. If you put heavy particles in a hot enough soup, production and decay can reach equilibrium with a non-zero fraction of the heavy particles around.
4. Lines in Feynman diagrams do not depict how particles move, they are visual aids for difficult calculations.
Every once in a while I get an email from someone who notices that many Feynman diagrams have momenta assigned to the lines. And since everyone knows one cannot at the same time measure the position and momentum of a particle arbitrarily well, it doesn’t make sense to draw lines for the particles. It follows that all of particle physics is wrong!
But no, nothing is wrong with particle physics. There are several types of Feynman diagrams and the ones with the momenta are for momentum space. In this case the lines have nothing to do with paths the particles move on. They really don’t. They are merely a way to depict certain types of integrals.
There are some types of Feynman diagrams in which the lines do depict the possible paths a particle could take, but in this case too the diagram itself doesn’t tell you what the particle actually does. For that you have to do the calculation.
5. Quantum mechanics is non-local, but you cannot use it to transfer information non-locally.
Quantum mechanics gives rise to non-local correlations that are quantifiably stronger than those of non-quantum theories. This is what Einstein referred to as “spooky action at a distance.”
Alas, quantum mechanics is also fundamentally random. So, while you have those awesome non-local correlations, you cannot use them to send messages. Quantum mechanics is indeed perfectly compatible with Einstein’s speed-of-light limit.
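To make “quantifiably stronger” concrete, here is a minimal sketch (my own example, using the textbook CHSH setup rather than anything specific from this post): the correlations of a spin singlet reach 2√2, above the bound of 2 that any local non-quantum model can achieve, yet each individual outcome is random, so no message can be sent this way.

```python
import numpy as np

# CHSH correlations for the spin singlet (|01> - |10>)/sqrt(2).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin measurement along angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <psi| A(a) (x) B(b) |psi>."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(f"|S| = {abs(S):.4f}  (any local classical model: |S| <= 2)")
```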
6. Quantum gravity becomes relevant at high curvature, not at short distances.
If you estimate the strength of quantum gravitational effects, you find that they should become non-negligible when the curvature of space-time becomes comparable to the inverse of the Planck length squared. This does not mean that you would see the effect at distances close to the Planck length. I believe the confusion here comes from the term “Planck length.” The Planck length has the unit of a length, but it’s not the length of anything.
Importantly, the statement that the curvature gets close to the inverse of the Planck length squared is observer-independent. It does not depend on the velocity with which you move. The trouble with thinking that quantum gravity becomes relevant at short distances is that this idea is incompatible with Special Relativity.
In Special Relativity, lengths can contract. For an observer who moves fast enough, the Earth is a pancake with a width below the Planck length. This would mean we should long since have seen quantum gravitational effects, or Special Relativity must be wrong. Evidence speaks against both.
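To state the criterion a little more formally (my notation, nothing beyond the standard definition):

$$\ell_{\mathrm{Pl}} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\,\mathrm{m}, \qquad \text{quantum gravity becomes relevant when} \quad |R| \sim \frac{1}{\ell_{\mathrm{Pl}}^{2}},$$

where $R$ stands for a scalar built from the curvature tensor, on which all observers agree, and not for a distance measured in any particular reference frame.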
7. Atoms do not expand when the universe expands. Neither does Brooklyn.
The expansion of the universe is incredibly slow and the force it exerts is weak. Systems that are bound together by forces exceeding that of the expansion remain unaffected. The systems that are being torn apart are those larger than the size of galaxy clusters. The clusters themselves still hold together under their own gravitational pull. So do galaxies, solar systems, planets and of course atoms. Yes, that’s right, atomic forces are much stronger than the pull of the whole universe.
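To get a feeling for the numbers, here is a rough order-of-magnitude sketch (my own estimate, assuming a de Sitter-like expansion, so that the residual acceleration across a distance r is about H₀²r):

```python
# Order-of-magnitude comparison: pull of the cosmological expansion across
# one hydrogen atom vs. the electrostatic acceleration binding the electron.
H0 = 2.3e-18       # Hubble rate in 1/s (~70 km/s/Mpc)
r = 5.3e-11        # Bohr radius in m
e = 1.602e-19      # elementary charge in C
k_e = 8.988e9      # Coulomb constant in N m^2 / C^2
m_e = 9.109e-31    # electron mass in kg

a_expansion = H0**2 * r                  # ~ 3e-46 m/s^2
a_coulomb = k_e * e**2 / (m_e * r**2)    # ~ 9e22 m/s^2

print(f"expansion: {a_expansion:.1e} m/s^2")
print(f"coulomb:   {a_coulomb:.1e} m/s^2")
print(f"ratio:     {a_coulomb / a_expansion:.1e}")
```

The atomic force beats the cosmological pull by nearly seventy orders of magnitude in this estimate.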
8. Wormholes are science fiction, black holes are not.
The observational evidence for black holes is solid. Astrophysicists can infer the presence of a black hole in various ways.
The easiest way may be to deduce how much mass must be combined in some volume of space to cause the observed motion of visible objects. This alone does not tell you whether the dark object that influences the visible ones has an event horizon. But you can tell the difference between an event horizon and a solid surface by examining the radiation that is emitted by the dark object. You can also use black holes as extreme gravitational lenses to test that they comply with the predictions of Einstein’s theory of General Relativity. This is why physicists are excitedly looking forward to the data from the Event Horizon Telescope.
Maybe most importantly, we know that black holes are a typical end-state of certain types of stellar collapse. It is hard to avoid them, not hard to get them, in general relativity.
Wormholes, on the other hand, are space-time deformations for which we don’t know of any way they could come about by natural processes. Their existence also requires negative energy, something that has never been observed and that many physicists believe cannot exist.
9. You can fall into a black hole in finite time. It just looks like it takes forever.
Time slows down as you approach the event horizon, but this doesn’t mean that you actually stop falling before you reach the horizon. The slow-down is merely what an observer in the distance sees. You can calculate how much time it takes to fall into a black hole, as measured by a clock that the infalling observer herself carries. The result is finite. You do indeed fall into the black hole. It’s just that your friend who stays outside never sees you falling in.
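A sketch of that calculation (the standard Schwarzschild result, written in my notation): for radial free fall from rest at radius $r_0$ the proper time to reach the center is

$$\tau = \frac{\pi}{2}\sqrt{\frac{r_0^{3}}{2GM}}\,,$$

which for a stellar-mass black hole and a starting point near the horizon is of order $10^{-5}$ seconds. The Schwarzschild coordinate time assigned by the distant observer, in contrast, diverges as the horizon is approached.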
10. Energy is not conserved in the universe as a whole, but the effect is so tiny you won’t notice it.
So I said that energy is conserved, but that is only approximately correct. It would be entirely correct for a universe in which space does not change with time. But we know that in our universe space expands, and this expansion results in a violation of energy conservation.
This violation of energy conservation, however, is so minuscule that you don’t notice it in any experiment on Earth. It takes very long times and long distances to notice. Indeed, if the effect were any larger, we would have noticed much earlier that the universe expands! So don’t try to blame your electricity bill on the universe, but close the window when the AC is running.
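A standard way to see the effect (my own addition, not part of the original argument): for freely propagating radiation the energy density dilutes faster than the volume grows,

$$\rho_{\mathrm{rad}} \propto a^{-4}, \qquad V \propto a^{3} \quad\Rightarrow\quad E_{\mathrm{rad}} = \rho_{\mathrm{rad}}\,V \propto \frac{1}{a(t)}\,,$$

so the total energy in the photons decreases as the scale factor $a(t)$ grows. Each photon simply redshifts, and in an expanding space-time there is in general no conserved total energy to which the difference could be booked.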
*minuscule*
Huh, thanks for #3. I'm thinking more of nuclear fission, and not muons, but still... energy is conserved. My fav physics fact is that it's the Pauli exclusion principle that keeps you on top of your chair. (And not falling through it.)
ReplyDelete"10. Energy is not conserved in the universe as a whole, but the effect is so tiny you won’t notice it. “
Conserved mass-energy requires homogeneous time plus Noether's theorems. GPS falsifies that boundary condition. SR slows and GR speeds satellite atomic clocks versus ground atomic clocks deeper in the gravitational potential well. Net ppb overall offers no net gain cyclic process (engine). Black hole mergers - the most extreme gravitational potential wells - are classical physics.
Exact postulates excluding goo and dribble (e.g., baryogenesis) from beautiful maths suggest Einstein accepting Newton's convenient infinite lightspeed. One microwave spectrometer-day heals physics. Look.
#6. For an observer moving so fast that the Earth was a Planck thickness pancake, wouldn't the Earth appear to have an Energy so great that spacetime would be crumpled about it?
In fact, in observations (measurement) spherical symmetry occurs inevitably due to delays. So you won't see any pancake or shortened Planck lengths. Length contraction is a coordinate tool in the theory for making right predictions.
No. 6 needs a bit of clarification. What is meant by high curvature?
I have a complaint about six... Quantum gravity becomes relevant at high curvature, not at short distances... Well, curvature is indeed the reciprocal of the curvature radius. So, they are "dual". I cannot understand how that fits with your sixth idea... It is just nonsense to me... Perhaps you had something else in mind. But I think it is wrong...
@capitalistimperialistpig
ReplyDelete"#6. For an observer moving so fast that the Earth was a Planck thickness pancake, wouldn't the Earth appear to have an Energy so great that spacetime would be crumpled about it?"
Why would you think that?
Daniel,
Thanks for pointing out, I fixed that :)
CIP,
No, it would not.
Space Time,
High curvature means that the curvature scalar is close to the inverse of the Planck length squared.
J, it becomes relevant when r_curvature is small, but the misunderstanding people may be tempted by is that it becomes relevant when the distance scale of a measurement is small.
Regarding "quantum mechanics is non-local," I tend to think this either indicates that something important is non-local and the "information is still local" is a dodge, a little caveat. That, bottom line, Einstein should still persist in feeling spooked. Instead I go with something more like "quantum states are a-local, they live in Hilbert space, not our nice familiar R^3." This is a lot less satisfying but I think it is less wrong? Does this seem like a worthwhile distinction? There are these pictures in textbooks of an "electron cloud model," people get impressions that quantum mechanics just makes things "kind of like normal, but fuzzy and smeared out."
- Doug
Dr. H. You said: "Alas, quantum mechanics is also fundamentally random". Did you really mean random, or perhaps "indeterminate"?
Which curvature scalar? The scalar curvature? In a vacuum spacetime it is zero.
Thank you for #7! I had always wondered about that one, and had not seen it addressed in the lay explanations I can follow.
The oil-and-water example was a problem for me, so probably I'm about to learn something. I had previously analysed it as a case of an external source (gravity) acting on a non-closed system to counteract entropy (as when the Sun's energy is said to have produced thermodynamic results on Earth which would not have happened if the Earth were a closed system). That is, in the absence of gravity, oil and water in a closed system (no external influences) would mix thoroughly in its highest-entropy condition. With gravity, they can't maintain that condition.
Maybe there is a way to include gravity in the entropy calculation, but for example, if I were to spend a lifetime separating dough back into egg and flour molecules, wouldn't we say I had used my energy to counteract entropy, rather than including me in the entropy calculation?
It seems like there should be one correct way to describe such situations. Up to now, it has seemed typical to me to say that a particular sub-system (the oil and water) is not at its maximum entropy (when separated) even though the larger system of which it is a part (the solar system) has increased its entropy to create that condition.
Re#5
ReplyDelete"Historically, the most important of these misconceptions has been that Bell's theorem implies that entangled quantum systems violate Einstein's criterion. As discussed in Deutsch & Hayden (2000), Bell's theorem is about correlations (joint probabilities) of stochastic real variables and therefore does not apply to quantum theory, which neither describes stochastic motion nor uses real-valued observables."
David Deutsch's paper says it is local but needs a physicist to understand it!
http://rspa.royalsocietypublishing.org/content/468/2138/531
Re #6 once more: Unless I misunderstand this paper: https://arxiv.org/PS_cache/gr-qc/pdf/9909/9909014v1.pdf by S. Carlip, it seems clear that the stress-energy tensor of the relativistic "pancake" is indeed changed by the kinetic energy, as is the deflection of the test particle. This would seem to be a GR effect. Or are you claiming Carlip is wrong?
I think CIP is right, because of the uncertainty principle. It's Bronstein's argument (cf. Covariant Loop Quantum Gravity by C. Rovelli and F. Vidotto, page 17).
ReplyDeleteDear Sabine,
#9: If, seen from our frame, it takes infinitely long for "stuff" to cross the event horizon of a black hole, does not all of the stuff then sit at the horizon (for us)? How does stuff then get into the singularity in a finite time (for us)? Do black holes then exist?
Best,
-Mogens
Re six: this is slightly more complicated. Of course, saying a tensor component is large is not frame independent. But large Ricci scalar is not the only quantum gravity indicator, it could be any scalar that you can form from covariant derivatives of the Riemann tensor.
How does falling into a black hole work with Hawking radiation? For a far-away observer, it would seem that the person falls towards the black hole, slows down to a stop (because of time dilation), and then later the black hole itself evaporates. Does that mean that you can't actually cross the event horizon because the hole would evaporate before you hit it, or what?
I'm confused about #9. Will not the black hole evaporate (and the universe as we know it cease to exist) before the person falling in crosses the event horizon? Will the person falling then actually have time to cross the horizon, even in its own reference frame?
What I never understood about #9 is: our evidence for black holes is from the outside, so for us observers anything takes infinite time to get in, so for us the BH would never even form...?
What's the entropy of a single particle?
ReplyDeleteI believe n. 6 is still a matter of discussion.
First of all, the notion of "length" requires an operational definition. There is really no operational definition of "length", "geometry" and so on, that would not involve wave-like scattering once we get to the length-scales of Angstroms and shorter. In this sense, statements such as "this and that becomes relevant at this and that length" are just a way of saying "this and that will become relevant for scattering at this and that center-of-mass energy". And quantum gravity should certainly become relevant if the center-of-mass energies of collisions will be around the Planck energy. And there is no problem with special relativity, because the frame is fixed by the center of mass.
However, I do get what you are saying, the classical vacuum field theory should get modified at high curvature by quantum effects. This would look like the Maxwell Lagrangian getting modified by an F^4 term due to the "fermion box" diagram from QED. Nevertheless, I have doubts that this will be the case, because unlike in Maxwell theory and other field theories, any additional term in the GR Lagrangian adds new degrees of freedom to the classical theory (more derivatives in the equations of motion). In principle, this can lead to an infinite number of new degrees of freedom and a theory that is completely unpredictable. This leads me to the intuition that for gravity things will work very differently - and that the key will be in the details of the gravity-matter interaction.
One last point is that curvature scalars are tricky and using them in a "trustworthy" specific criterion for the onset of quantum gravity is not a good idea. For instance the Kretschmann scalar will generally vanish all around Kerr even *really* close to the curvature singularity (see gr-qc/0302095).
If grad school counts as “school” — then I pretty much had all these bases covered.
I disagree with the oil example in #1. The entropy is higher in the mixed state but so is the energy, and the E term wins over TS in the free energy, F = E - TS.
ReplyDeleteLong time reader, first time commenter.
Regarding #1, I'm used to thinking about entropy in the information-theoretic sense. If I have a book written in a language of your choice and ask what is the 'likelihood' describing the next letter, the 'book' system has a lower entropy than another text compiled from randomly selecting letters from that alphabet by whatever randomness measure you like. The entropy is a measure of our information about the system and the likelihood that our information specifies it. Maybe you could cast this into the thermo macro/micro language by the "microstate" of how much we know about a particular language configuration given the "macrostate" of the book we have.
I'm not sure how your oil and water example leads to a higher entropy state when it is unmixed. You have a system of two parts of incredibly "low" entropy - the likelihood inside those regions of the system being a given element of your alphabet (oil or water) is incredibly high. Compare that to the vigorously mixed case (just like your dough example) - we have less information about the actual configuration of the oil/water system.
It's why we often like to use entropy in discussing things like entanglement between two parts of a system - the higher this entanglement entropy, the more information (or measurements, if you'd like) is needed to specify the state you've found yourself in. Just looking at one subspace or the other is no longer sufficient information. In the dough case, that's clear. In the water / oil case, your 'water' and 'oil' subspaces when unmixed are easily distinguished and characterized with little information.
I'm probably misunderstanding your point, but I'm having difficulty understanding why oil and water unmixed has the higher entropy here.
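As a quick sketch of the letter-counting comparison made above (a toy i.i.d. letter model only; real language has correlations that lower the entropy of text even further):

```python
import math
import random
from collections import Counter

# Shannon entropy per letter: structured text vs. uniformly random letters.
def entropy_per_letter(text):
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

structured = "the cat sat on the mat and the dog sat on the log " * 20
alphabet = "abcdefghijklmnopqrstuvwxyz "
random.seed(0)
scrambled = "".join(random.choice(alphabet) for _ in range(len(structured)))

print(f"structured text: {entropy_per_letter(structured):.2f} bits/letter")
print(f"random letters:  {entropy_per_letter(scrambled):.2f} bits/letter (max log2(27) ~ 4.75)")
```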
"...as measured by a clock that the observer herself carries." I think you mean the "victim" (who falls and dies), not the observer for whom the falling-in is more a "fading" that takes "forever"...
ReplyDeleteinteresting. sean s.
In your point 8, "Wormholes are science fiction, black holes are not", you said that wormholes need negative mass. Perhaps you missed this: https://arxiv.org/abs/1807.04726
ReplyDeleteBee, hi
I am your fan & appreciate what you are doing, but as a chemist I believe the example you gave in point 1 - the separation of an oil/water emulsion without detergent, supposedly being driven by entropy increase in the separated system - is not quite correct. I think there is a significant enthalpy change when you force oil into tiny droplets with lots of surface area that does not like to be in contact with water - and that stored form of energy is released when oil and water separate, and it is the driver of the process. You can lower that contact surface energy by adding a detergent, and then the emulsion becomes far more stable, although you still have to do some work and agitate your laundry for the emulsion-forming process to proceed efficiently...
As you know, water molecules hold together quite strongly by hydrogen bonds, and the hydrocarbon chains of fats stick together by a weaker interaction called LDF which is not actually that weak (compare for example the boiling points of neopentane with pentane and cyclopentane to see what I mean), and these greasy chains don't have a good way to interact with water molecules at the interface. It is true that at the interface there is an increase of entropy because of the higher degree of ordering of molecules along the interface - they definitely are losing degrees of freedom by being at the border - but the enthalpy factor (of missing interaction with neighbor molecules) is probably more important...
I think I learned half of them reading your blog. Thanks.
ReplyDeleteSpace Time,
Any scalar contraction of the curvature tensor. As Robert says above, if the first contraction vanishes, you look at higher orders. Second order is typically enough.
Milkshake,
Thanks for the clarification. Do you mean to say that in this case entropy decreases?
Peter,
Yes, the observer who falls in.
I agree with the comments above of milkshake and Vyacheslavs Kashcheyevs about the oil and water example: in the kitchen the oil and water are coupled to a heat bath (and probably pressure bath) - the energy (and probably volume) are not conserved, and there is no reason for the entropy to reach a maximum. Indeed, the mixed state has a higher entropy. So to answer your question to milkshake: yes, when the oil and water separate the entropy decreases. But this is irrelevant in this case, the important thermodynamic potential for processes with constant temperature (and pressure) is the free energy (or enthalpy), and these decrease when the oil and water de-mix (at low enough temperatures).
A related comment about #3, which has nothing to do with particle decay: The phrasing "the idea that any system tries to minimize its energy is just nonsense" might be quite misleading, as many no-nonsense systems do indeed minimize their energy. While energy is of course conserved (almost, I guess...), the energy you keep track of might not be. So there are thermodynamic processes which lead a system to minimize its internal energy (the energy in the degrees of freedom you do keep track of): e.g. when the system is connected to a heat bath at zero temperature, or when the system undergoes a process in which entropy is conserved (https://en.wikipedia.org/wiki/Principle_of_minimum_energy).
And if you're looking to expand your list, you may add #11: Gravity does not always pull objects together - it can also pull them apart (I was shocked when I recently learned this, a few years after completing a phd in physics).
Re: Oil and water:
I had the same first instinct as many here (minimizing F, not maximizing S), but I think it would still separate in an isolated system, which means that criterion doesn't work. (And "external gravitational potential" is still isolated, because external potentials don't matter so long as our molecules don't act on the base matter, contrary to the assumption of JimV. I think.)
Instead, a deeper explanation (or at least alternative model; I don't actually know that this is accurate to oil and water, I guess): note that the number of "states" depends not only on the mixing, but on the number of energy states available.
If energy goes into the "mixing" (in any way), that's energy that can't be kinetic energy, and therefore that lost energy loses microstates. Sufficiently many microstates lost, and separated is the higher-entropy state.
My own nomination for the list: "Rest mass is not additive." E.g., a box of massless things can have mass (see also: protons getting most of their mass from gluons).
Took me a surprisingly long time to figure out that one, given working with invariant mass for research purposes. But as soon as I read someone explicitly stating that a box of photons can have mass, it clicked.
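A quick numerical check of the "box of photons" statement (my own toy numbers): two photons of equal energy flying in opposite directions have zero total momentum, so the pair has invariant mass 2E/c², even though each photon is individually massless.

```python
import numpy as np

c = 299_792_458.0            # speed of light in m/s
E = 1.0e-13                  # energy of each photon in joules (arbitrary choice)

p1 = np.array([+E / c, 0.0, 0.0])   # momentum of photon 1
p2 = np.array([-E / c, 0.0, 0.0])   # momentum of photon 2 (back to back)

E_tot = 2 * E
p_tot = p1 + p2
m_inv = np.sqrt(E_tot**2 - (c * np.linalg.norm(p_tot))**2) / c**2

print(f"invariant mass of the pair: {m_inv:.3e} kg  (expected 2E/c^2 = {2 * E / c**2:.3e} kg)")
```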
Daniel,
There are no charged, massless fermions.
You caught me - I actually don't know how to model the entropy change of an oil and water emulsion on the molecular level - and as with most real-life scenarios the calculations will get out of hand quickly - but I would say that the enthalpy change prevails over entropy, so separation of the emulsion might not be the best example of an entropy-driven process.
A cleaner example of an entropically driven process would be the cooling effect of dissolving some liquids (acetonitrile) or solids (ammonium nitrate) in water - you don't gain enough in solvation energy in water for the interactions lost in the process of dissolution, so it gets colder - but it still proceeds readily because the gain in entropy pushes it over.
sorry I made a typo - of course there is some entropy decrease on the oil-water interface, because of loss of degrees of freedom close to the interface, resulting in better ordering of the molecules
ReplyDeletebee:
thanks for pointing out that entropy is NOT a measure of disorder. this misconception is fundamentally wrong and yet it is endemic in physics, chemistry, and biology. Relating entropy to 'likelihood' connects it directly to probability, as it should be. i am appalled that university students are invariably introduced to the concept of entropy through classical thermodynamics as if atoms didn't exist. i think Rovelli's paper "Relative information at the foundation of physics" (https://arxiv.org/pdf/1311.0054.pdf) is a better way to think about entropy, in terms of objective information, which avoids the trap of subjectivism (another sort of misunderstanding of entropy).
naive theorist.
At what scale is your dough really dough though?
I get myself in a knot when thinking about breaking down phase space into cells. A macroscopic description of states relies on a scale-dependent definition of interlocking cells, such that each cell is suitably represented by a coarse-grained macroscopic descriptor, temperature or virial energy say. This means that at some finer scale its inhomogeneities are so significant that the cell needs to be broken down into smaller islands of relative tranquility. But these are not tranquil islands. Not because, by thermal contact with other islands, they are exchanging energy and moving towards (away from?) equilibrium, but because at the outset we have asked of them to be inhomogeneous (more structured - see below). It seems to me, as Eddington says of all definitions, a "cyclical definition", but in this instance a contradictory, tautological one?
I note the attempts to use the Bel-Robinson tensor, Weyl.bar(Weyl) in spinors lingo to pin down entropy in Cosmology:
"If the second law is valid in the presence of gravity, such that entropy increases monotonically into the future, then the current state of the universe must be considered more probable than the initial state, even though it is more structured. For this to be true, the gravitational field itself must be carrying entropy."
http://garra.iar.unlp.edu.ar/journal/Ellis-Grav-Entropy-Proposal-1303.5612v1.pdf
Re. 6: In special relativity, there are 3D lengths (distances between points in a 3D slice of spacetime), which are subject to Lorentz contraction, and 4D lengths (distances between spacetime events) which are not subject to Lorentz contraction.
When people say that quantum gravity becomes important at the Planck length, I gather they mean that you have to take quantum gravity into account when you discuss spacetime events which are one Planck length apart, or smaller. This is a Lorentz-invariant thing, no contraction happens when you change the reference frame. In such circumstances, the theory of quantum gravity should predict whether curvature is small or large at that scale, it's not automatically large.
On the other hand, when curvature is known to be large enough, one can deduce that distances between given spacetime events can become small enough, so that quantum gravity effects are certainly important at high curvature. But the opposite does not need to hold --- even when curvature is zero (or close to zero), quantum gravity generically may be important when you look at spacetime events separated by one Planck length or so. I guess it depends on the quantum gravity theory you discuss.
"You do indeed fall into the black hole. It’s just that your friend who stays outside never sees you falling in." Why then do we "see" matter falling into black holes, at least through the X-rays, etc that are generated as a consequence of this falling in? If you don't have the time to answer, or the answer is too complicated, its fine, but in this case, even a link to a suitable answer would be great, as I have never found a satisfactory answer. Thanks.
2. Quantum mechanics is not a theory for short distances only, it's just difficult to observe its effects on long distances.
Totally agree. As a corollary, there is no such thing as a classical observer - every physical observer is a quantum object. She may be in an incoherent superposition of many quantum states, which makes her appear approximately classical, but fundamentally every observer is quantum. I really cannot reconcile this with decoherence, since the system plus observer is a quantum universe that must evolve unitarily.
Daniel,
Also, that paper is about a long wormhole. These are sort of neat, but when people say "wormhole," they generally mean wormholes which are shortcuts. Certainly in science fiction, it's the shortcuts that are interesting, and those are what are not allowed. In principle there's no reason why a long wormhole shouldn't exist. But a specific construction of one hasn't happened, and so the paper is interesting, but not earth-shattering.
What about non-flat space-times which have all curvature scalars equal to zero? Is quantum gravity not relevant there?
ReplyDeleteMilkshake's argument hereby supersedes mine; I think the tendency of oil to glob together and water to glob together is more of a factor than gravity for the tendency of a homogeneous oil and water mixture to separate; but in either case it seems to me to be an energy-minimizing process rather than an entropy-maximizing process.
My argument would work I think for a homogeneous mixture of hydrogen and nitrogen in a container. Gravity would cause them to separate, whereas without gravity a homogeneous mixture would have higher entropy.
(For my part, yes, I think, based on what I learned about entropy in school, that a near-homogeneous mixture of two or more different particles has higher entropy - more indistinguishable microstates - than a completely separated mixture. The example one hears is slightly different - a vacuum and a set of particles, with the particles being uniformly distributed for maximum entropy - but it seems analogous to me.)
>9. You can fall into a black hole in finite time. It just looks like it takes forever.
Oh, I always wanted to ask someone about this.
We say that the observer never sees you falling below the event horizon. That's clear.
But what if the observer sends a light ray to the (seemingly forever) falling object? Will the reflected photon return to the observer (heavily redshifted, indeed) or not?
Space Time,
In flat space-time there's no gravity, hence also no quantum gravity. Let me emphasize that flat means the curvature *tensor* vanishes, not merely the curvature scalar.
Mohsin,
We don't actually see it crossing the horizon, we see it dimming and vanishing and that's that.
CIP,
Of course the stress-energy tensor changes when you boost it; it's not a scalar. And an active boost is not the same as a coordinate transformation. If you bang an observer into Earth at high enough momentum, you will eventually create curvature large enough to make quantum gravitational effects relevant. If that's what you are trying to say, I agree. But the point is, if you have any physical effect that depends on the choice of coordinate system, you have thrown out general covariance, and that's highly problematic. There is no reason to expect this to happen.
"But you can tell the difference between an event horizon and a solid surface by examining the radiation that is emitted by the dark object."
Name one object, astrophysical or otherwise, where this difference has been observed. There have been many conjectures about how advective accretion flows in low mass x-ray binary systems might provide examples of luminosity differences between neutron stars and black hole candidates, but none have been confirmed.
I like the point #10 (GR doesn't conserve energy) and would add that this is a simple answer to a simple question: if light gets redshifted and red photons have less energy than blue photons, where does that energy go?
I am familiar with #9 (infalling coordinates for Schwarzschild black holes) but I am not sure that you never see the mass fall in. Certainly that's true for point particles, giving a nice interpretation of the entropy ~ area formula: the info is "smeared out on the surface." But I do wonder if the principle changes when a continuous distribution of mass rather than a point particle falls in.
I got a lot out of the #6 (quantum gravity is about high curvature), and it will be something I have to think a lot more about.
I like to introduce people to #5 (QM is nonlocal but doesn't transfer info) with a cooperative game based on GHZ states that I call "betrayal". A team of 3 people works together to beat 100 trials. Each trial, they go into relativistically separated rooms which have: two buttons labeled 0, 1, a screen giving instructions, and a timer counting down. They press exactly one button after the instructions, before time runs out, and the 3 numbers are summed.
Once separated, 1/4 of the time we run a "control" trial: we flash the instruction "make the sum of your numbers even," the team wins if that sum is even. The rest of the time we choose one at random to be a "traitor": we flash to them a false goal "make the sum of your numbers even," and to their two friends, we give the true goal "make the sum of your numbers odd." The classical analysis: your maximum win rate is 75%, you can pass 90/100 trials with probability < 0.005%. With a GHZ state shared, a 100% win: |+++〉 + |−−−〉 gives even terms, |+++〉 − |−−−〉 gives odd, two map {|+〉→|+〉,|−〉→i|−〉}. With 5% decoherence, still a 97% chance to pass 90/100.
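A compact way to check the quantum strategy behind this game (my own sketch, written for the conventional GHZ state (|000⟩+|111⟩)/√2 and Pauli measurements, which matches the strategy above up to basis conventions):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)          # (|000> + |111>)/sqrt(2)

def product_expectation(A, B, C):
    op = np.kron(np.kron(A, B), C)
    return np.real(ghz.conj() @ op @ ghz)

# Control round: all three measure X -> product of outcomes is always +1
# (an even number of -1 results). Traitor rounds: XYY, YXY, YYX -> always -1.
print("XXX:", round(product_expectation(X, X, X)))   # +1
print("XYY:", round(product_expectation(X, Y, Y)))   # -1
print("YXY:", round(product_expectation(Y, X, Y)))   # -1
print("YYX:", round(product_expectation(Y, Y, X)))   # -1
# Because these products are certainties rather than averages, the team wins
# every round; the best any local classical strategy can do is 75%.
```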
I like your point #3 (think of particles seeking minimum energy as particles seeking maximum entropy)... but the minimum energy principle as a heuristic is not bad, it just comes from the fact that your degree of freedom is in contact with an environment, "all DoFs with energy > kT decrease, all with < kT increase." We have a lot of experience with nonthermalized things. But yes, I'd always like to discuss environments and temperature and the dissipation/fluctuation connection.
For #1 I guess that's where I would take a real, if slight, disagreement. The likelihood to be in a state comes from our uncertainty about the microscopic world, and this must be communicated to students otherwise they will fail to truly appreciate what we mean by "spontaneous," something like "our uncertainties multiply across calculations, so as it does things that we cannot observe, we must become less certain about internal state, but this causes macroscopic changes that we call spontaneous." This is also why you get a notion of entropy in information theory -- information is precisely anything which reduces our uncertainty about something.
With that said I do think it's very helpful to articulate this idea that "disorder" or "uncertainty" is more involved than its most obvious interpretation. My favorite way is by a classroom experiment that the students themselves can run: you buy small cheap plastic boxes and lots of cotton swabs as thin sticks, ask some students to create a big jumbled disordered "mess" of cotton swabs on a desk/table, put them into a box in this "disordered" state, and then shake the box to reveal a liquid crystal phase where everything lines up. And I think it's good to get them to think about the fact that there is both position and momentum uncertainty and that by maximizing position uncertainty they minimized momentum uncertainty, so that everything "just lines up" as the system travels towards overall uncertainty.
Sabine,
I wrote non-flat.
The question is about those space-times that are non-flat, where the Riemann tensor is not zero, but which have all curvature scalars equal to zero. Not just the scalar curvature but all scalars that you can get, not just from contractions of the Riemann tensor but from any polynomials of it and any polynomials of its derivatives.
Hi Sabine,
On your last point, if energy increases, does that create new particles? Or does that affect the energy of the existing ones?
If the answer to the second question is yes, then Brooklyn should shrink. Where am I wrong?
Best,
J.
Sabine, see section 8 of this paper by Maldacena et al https://arxiv.org/abs/1807.04726 for an explicit realization of this wormhole solution within the standard model
I suppose that calling entropy not to be "disorder" must depend on the meaning of the word "disorder". I taught physical chemistry for decades, and did indeed call it disorder. The example I used was the microcanonical ensemble of a small number (<15, for computer time reasons on my 1971 minicomputer) of excited quantum harmonic oscillators (indistinguishable). It seemed to me, and to my students, that the arrangement with all of the oscillators in one state (the average energy state) was less disordered than the most probable arrangement (approximating a Boltzmann distribution, and equalling it in the limit of a large number of oscillators).
The least ordered state is more probable, and in the true thermodynamic limit becomes the "only" occupied state. Homework: calculate the relative probabilities by hand for the most probable arrangement, the least probable one, and one selected at random.
The actual oil-water case, as mentioned, is complicated by surface tension, and, for even an asteroid-sized ball of oil-glycerin (so they don't evaporate), you must include gravity. That's not even considering the kinetic aspects of reaching equilibrium... something real chemists know is often the most important aspect.
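Here is a rough sketch of the homework above (for simplicity it treats the N oscillators as distinguishable, which is the usual Boltzmann counting for this classroom example; W = N!/(n_0! n_1! ...) where n_j is the number of oscillators holding j quanta):

```python
from math import factorial
from collections import Counter

N, Q = 6, 6   # 6 oscillators sharing 6 quanta

def partitions(total, max_part=None):
    """Yield the ways to split `total` quanta into non-increasing parts."""
    if max_part is None:
        max_part = total
    if total == 0:
        yield ()
        return
    for first in range(min(total, max_part), 0, -1):
        for rest in partitions(total - first, first):
            yield (first,) + rest

def multiplicity(part):
    occ = Counter(part)           # n_j for j >= 1
    occ[0] = N - len(part)        # oscillators holding zero quanta
    W = factorial(N)
    for n in occ.values():
        W //= factorial(n)
    return W

for p in sorted(partitions(Q), key=multiplicity, reverse=True):
    quanta = p + (0,) * (N - len(p))
    print(f"{quanta}: W = {multiplicity(p)}")
# The uniform arrangement (1,1,1,1,1,1) has W = 1, while the most probable,
# Boltzmann-like arrangement (3,2,1,0,0,0) has W = 120; summing W over all
# arrangements gives C(11,6) = 462 microstates.
```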
Space time: I would be surprised if you could have a non-zero curvature tensor but all invariants vanishing. If it were a matrix, that would be impossible, and my guess is the same is true for a four-index object. Maybe check the pp wave as iirc there was something special there.
And only if that is your condition for flatness. Of course globally that need not be Minkowski space-time, a torus being a counterexample.
Thomas Larsson: any state (even a thermal one expressed by a density matrix) can be realized as a pure vector state in a possibly enlarged Hilbert space. But the observables needed to detect the coherent information (leading to interference) become more and more complicated. The point of decoherence is that by interacting with many degrees of freedom that you choose not to observe in the following, your system is effectively described by a density matrix that is diagonal in a preferred basis and is classical in that sense.
Gerard 't Hooft has a theory that "the other end of the wormhole" is the opposite side of the same black hole the observer falls into.
As far as I understand it (which is not much), he argues that for us, at the outside, there is no inside of the black hole and particles falling in the black hole simply appear again eventually (very long time) at the polar opposite side, but CPT mirrored. What the observer falling in sees is, for him, not relevant.
I am very sure I have missed some very important stuff here.
I'm a bit taken aback by the oil-water example. It does seem that, in most cases, the simple number of microstates corresponding to a given macrostate governs the probability of transitioning from one macrostate to another. And if this is applied to the oil-water example, there does seem to be more microstates in a mixture, irrespective of the tendency of water to stick to water and oil to stick to oil. Yet, you know that the transition probability from unmixed to mixed is low; and therefore you translate this into relative microstate cardinalities. But don't we here have a case of some microstate transitioning to other microstates with uneven probabilities due to the oil's hydrophobic tendency? My impression is that the implicit definition of entropy here is dynamic (number of microstates times transition distribution between microstate types, where those types are macrostates). It looks to me like a random walk with very uneven probabilities of going in one direction rather than the other; but then, the simple number of microstates tells you nothing; you still have to look at the distribution of microtransitions.
It sounds legitimate. But this doesn't look any more like Boltzmann entropy.
Regarding no. 10 (claimed non-conservation of energy due to the expansion of the universe within the context of General Relativity) and the link to your other post that discusses Noether's theorem:
Noether's theorem does not know or care about symmetries of geometry (whatever that might be), but only symmetries of the Lagrangian density. (Noether's paper is here in translation: physics/0503066 [physics.hist-ph]) General Relativity has symmetries of the Lagrangian in abundance. Hence it has far _more_ conserved currents than do pre-GR theories---which is more or less the opposite of not conserving energy. When collected as a gravitational energy-momentum pseudotensor, the currents depend on the coordinate system in a way that one wouldn't expect if they are all faces of the same energy-momentum. But with infinitely many symmetries, why shouldn't there be infinitely many momenta? Thus the claim that there is no energy conservation in General Relativity is based on an interpretation that is used to ignore the actual Noether-based mathematics of conservation laws rooted in symmetries of the action.
Hi Dr H., regarding "7. Atoms do not expand when the universe expands. Neither does Brooklyn", I just asked Dr. M (see comments in https://tritonstation.wordpress.com/2018/06/19/the-acceleration-scale-in-the-data/) about the relationship between a0 and the CC. What are your thoughts - could that be the scale where expansion's effects become manifest - and we see it as modified gravity?
ReplyDeleteVery happy to see your take on entropy. Entropy is the most credible observble process, all systems tend to dissipate, no exceptions.The formula is dead simple as youstate, log of microtsates. The notion of entropy as a concept is useful in a number of different applications. For me the original thermodynamic explanation is the most useful i.e. energy becomes less concentrated and so less useful as kinetic energy microstates greatly increase. The ability and will of living organisms to constrain
ReplyDeleteentropy is a marvelous mystery even though this is denied with unconvincing arguments. Nothing magical/sacred, just that we do not know. It seems popular for eminent scientists to expalain entropy with silly examples such as mixing coffee and cream.That seems either insulting to the reader or disengenious. Same for claims about about characterisics/origin of life.What exactly explains the difference between inanimate and living has not been even begun to be explored. Entropy may welll be a central part. Intelligent design explains nothing here.
As for the black hole, won't that mean we should still see all of the matter that fell in appear outside the black hole? Why don't we see 1 billion solar masses still around a 1-billion-solar-mass black hole?
The Einstein-Rosen bridge is a wormhole. This means not all the solutions to the equations of general relativity are science. It's science if observable. If only theoretically possible, it's science fiction?
The energy-time uncertainty principle allows the violation of energy conservation. This is how virtual particles can appear out of nowhere, thus creating energy. Of course if we include vacuum energy in the calculation, energy is conserved. But we don't even know how deep the "vacuum well" is, a.k.a. the vacuum catastrophe. Is dark energy also vacuum energy?
Space Time,
Sorry, I misread your comment! I don't know if there is a theorem, but I suspect what Robert says is correct... If it is correct, there really should be a theorem.
Milkshake,
I thought I had seen another comment from you, but now I can't seem to find it. Sorry about this, the comment feature on blogger has gotten so crappy I am thinking of moving the blog elsewhere entirely. In any case, thanks for your feedback and next time I'll think of a better example. Best,
B.
Well, pp waves are examples of such spacetimes.
A few comments on these. Largely these are not disagreements.
1. Entropy is largest where there is a large macrostate where the reshuffling of microstates leaves the macrostate the same. This is of course somewhat qualitative, and what saves us with statistical mechanics is that entropy is a logarithm over the region in phase space a system occupies. Because of this, errors are not that significant. Because the largest macrostate is not changed by changes in microstates, this tends to be a highly disordered state. If you were to throw some bombs in a junkyard full of inoperative cars you don't change the operating status of the cars. Doing the same in a lot of new cars will.
3. Sort of. Quantum mechanics does not directly admit entropy, and we tend to compute QFT amplitudes that way. It is the case that a massive particle decays into primary particles that in turn often decay into many secondary particles. In our measurements there is entropy. This of course gets into the matter of quantum measurements.
5. This is true; quantum nonlocality has nonsignalling property.
6. Quantum gravitation is relevant at high curvature, but also at small distances. The reason is we can think of metric fluctuations as δg ~ δL/L, and as a result the fluctuations in the Ricci curvature are δR ~ δL/L^3. Here we think of δL as the uncertainty in the position of a particle in the sense of Heisenberg's microscope argument. Clearly the fluctuations in the curvature become relevant as L approaches the Planck length. We may of course think of the fluctuations in the Ricci curvature as δR ~ (δL/L)(1/L^2) ~ δg×R. So it can also be seen as what occurs at extreme curvatures.
8. Traversable wormholes are probably fiction. We really do not know this for sure. Kip Thorne demonstrated how an opening of a wormhole, when accelerated outwards and back as in the twin paradox, results in closed timelike curves. This is a sort of time machine. As a result an observer may duplicate a quantum state. A quantum state possessed by an observer can appear in a duplicated state from the wormhole, but of course later the observer must throw the quantum state into the time machine. So a wormhole can duplicate a quantum state. It is not hard to show this is not a unitary process and thus can't occur by quantum mechanics itself. If the equivalence principle and unitarity hold, then this would preclude the existence of wormholes.
I had to split this because I overshot the word limit
10. Energy conservation in general relativity is problematic. In order for energy conservation to be proven there must be a timelike Killing vector. Killing vectors have eigenvalued relationships with the Weyl tensor, and there are certain Petrov types, such as type D for black holes, where there are timelike Killing vectors. In this case the Killing vector in a Noether theorem gives conservation of energy. Another way to think of this: with black holes there is an asymptotically flat region. This means we can set up a Gaussian surface to define mass-energy. Other spacetimes, such as type O for cosmologies, do not have a timelike Killing vector.
The ADM approach to general relativity gives the odd constraint H = 0. What this really means is there is no general way to get us a Gaussian surface to define mass-energy. A spacetime with a uniform distribution of galaxies or particles does not admit such a surface. However, you can for H = 0 define a kinetic term and a gravitational potential energy term. This defines a sort of energy conservation on the Hubble frame in the FLRW equations. This is though largely frame dependent. I think this is a frame with a hidden dimension reduced to zero by being highly boosted, such as what might happen near the null boundary separating a causal wedge in AdS_5.
This energy conservation is approximate. The de Sitter vacuum is probably not absolutely stable, and we appear to exist in an approximate de Sitter spacetime. Think of event horizons, whether of black holes or of the null surface separating a causal wedge in AdS_5, as emitting radiation and not eternal. In around 10^(10^(10^70)) years the dS spacetime of the universe may transition into another state.
#5:
ReplyDelete"Quantum mechanics is indeed perfectly compatible with Einstein’s speed-of-light limit."
Not according to Einstein himself. The reason he so strongly rejected Copenhagen quantum mechanics was exactly because of the spooky-action-at-a-distance. He was not concerned about the indeterminism of the theory in itself, it is just that he saw immediately that regarding the theory as complete (which Bohr definitely did) meant that the sort of indeterminism in the theory demanded spooky action-at-a-distance. And Einstein did not see how to reconcile that with Special Relativity (or with the more general principle that as systems get further and further apart from each other in space their mutual interactions and influences become negligible.)
As for "Einstein's speed-of-light limit", you can search his 1905 paper up and down and you will find no mention of any such limit. The fact that one cannot use quantum theory to superluminally signal was—to my knowledge—first proven by Bell with the so-called no-Bell-telephone theorems. There is no way to translate the property of not allowing superluminal signaling into the property of fundamental Lorentz invariance, so there is no theorem of the form that a theory prohibiting superluminal signaling must be "Relativistic" in any recognizable sense.
But it is good to acknowledge that quantum mechanics is a non-local theory. Confusion on that point has led to the widespread misunderstanding of what Bell proved.
#7
ReplyDelete"The expansion of the universe is incredibly slow and the force it exerts is weak. Systems that are bound together by forces exceeding that of the expansion remain unaffected."
The usual definition of an "expanding universe" does not entail that the "expansion" produces *any* "force" or that you need "any" force, no matter how weak, to resist it. For example, Milne space-time is commonly cited as an example of an expanding universe, and is one according to the usual definitions, but Milne is a piece of Minkowski, i.e. is completely flat. In Milne, there would be no tendency of objects to drift apart "due to the expansion of space". Because in Milne the relative acceleration of geodesics is uniformly zero.
So one would have to think through carefully exactly how the "expansion of the universe" is being defined in order to know whether any force at all is needed to keep objects from separating. This is not to say that your comment is incorrect (the particular structure of the universe with a positive cosmological constant may indeed display relative acceleration of geodesics), but rather a warning that the term "expansion of the universe" tends to be used in what could easily be a misleading way.
Regarding #1, oil and water:
The Boltzmann definition of entropy (which is the right one to use) does not mention anything either about order/disorder in any everyday sense, nor about probability. Boltzmann defined the entropy of a system as proportional to the log of the volume of phase space compatible with a certain (generally macroscopic) description of the system. So there are as many Boltzmann entropies as there are possible macroscopic descriptions or, even more generally, as there are partitions of phase space. If you want the Boltzmann entropy that corresponds to the classical thermodynamic entropy, then you have to use a partition according to the appropriate thermodynamics magnitudes, such as temperature, volume and pressure for a gas. When you start taking chemical processes into account, as with oil and water, the situation becomes much more complicated.
But for sure 1) as the oil and water separate, the Boltzmann entropy of the system goes up, despite the fact that the system becomes more "ordered" to casual inspection. 2) the process is aided by gravity, but would happen even in zero gravity for the reasons that milkshake gives. And from beginning to end in both cases the Boltzmann entropy will increase.
In response to Lawrence Crowell: A time-like Killing vector allows one to define a conserved energy that is inferred solely from material stress-energy and that is not coordinate-dependent. But the symmetries of the action imply that there are always pseudotensor conservation laws, whether or not there are Killing vectors. That is Noether's first theorem as applied to GR. Thus energies and momenta are always conserved (in the sense of a continuity equation) in General Relativity as a consequence of Einstein's equations. These do not have the coordinate-independence and uniqueness properties that people usually want. But if our standards are utopian, it might make sense to reconsider the standards, especially once the coordinate dependence makes sense as a way to accommodate infinitely many distinct energies and momenta. The alternative, denying conservation laws in GR, undermines what little people remember from high school chemistry (that energy is conserved) and at least verbally licenses the conclusion that energy non-conserving processes (such as spirit-to-matter causation) are facilitated by GR (which is the reverse of the truth). People do in fact draw such inferences. Others, like Soviet and Russian Academician A. A. Logunov, reject GR instead. If one takes the Noether mathematics at face value, neither sort of conclusion is tempting.
Doing a bit of googling reveals that my conjecture above was true only for Euclidean signature but is violated for Lorentzian signature, with pp waves being indeed the prominent example, see https://en.wikipedia.org/wiki/Vanishing_scalar_invariant_spacetime
See also https://arxiv.org/pdf/0806.2144.pdf
The point is that the only non-zero components of the curvature with all indices down have two u indices, and the g^{uu} component of the inverse metric you need to contract the indices vanishes. So there is no non-zero scalar to form (the same holds when including covariant derivatives, as only those in the u direction are non-zero).
There are, though, non-polynomial invariants (formed from components along the null eigen-directions of the curvature) that don't vanish.
But this leaves the interesting question of whether pp-waves receive quantum corrections (from the effective field theory point of view, where you add higher derivative terms to the action). Clearly they contain physical curvature: geodesics in these space-times converge towards the center thanks to their energy, and you could ask what happens when this energy (density) eventually becomes of Planck size.
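For concreteness (a sketch in standard Brinkmann coordinates, with signs and factors suppressed): a pp-wave can be written as

    ds^2 = 2 du dv + H(u,x,y) du^2 + dx^2 + dy^2,

whose only non-vanishing curvature components are, up to index symmetries, R_{uiuj} ∝ ∂_i ∂_j H with i, j = x, y. Since the inverse metric has g^{uu} = 0, every polynomial contraction of these components vanishes, exactly as described above, even though H is non-trivial and the space-time is genuinely curved.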
Re Tim Maudlin's comments: Causality is independent of Lorentz invariance. One way to see this is to look at a higher derivative theory with
L = f BOX f + a (f BOX f)^2
with f being a real scalar field, BOX the d'Alembertian and a some constant. When you work out the equations of motion you see that any solution of BOX f = 0 is also a solution of those. So, in particular, you can take f = v_m x^m for some vector v. Then you can look at small fluctuations around those solutions and see that they obey a wave equation with a speed of light (or better: sound) that depends on a and |v|. There is no obstruction to this velocity being larger than 1 even though the theory is manifestly covariant.
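A quick check of the first claim (a sketch of the variation in the notation above, up to surface terms): varying the action gives

    δS/δf = 2 BOX f + 2a (f BOX f) BOX f + 2a BOX[ f (f BOX f) ] = 0,

so any f with BOX f = 0 makes every term vanish and automatically solves the full nonlinear equations; the modified propagation speed only shows up at the level of fluctuations around such a background.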
Regarding the question of locality I do not agree. Quantum physics is only non-local (according to any definition of the term which does not already render a classical theory with correlations non-local) if you insist that it is realistic. But that discussion has already been had many times. Let me just point out that Haag's book ("Local Quantum Physics") shows clearly that you can build the whole framework of QFT on locality from the start.
@ J. Brian Pitts
Energy is widely considered to be the conserved quantity associated specifically with translation along a time-like Killing vector. For example, translating only along a time-like direction is not a symmetry of the action of a free Klein-Gordon field on an FRW background. The statement is not based on an incorrect understanding of Noether's theorem.
I'm curious about your statement "General Relativity has symmetries of the Lagrangian in abundance. Hence it has far _more_ conserved currents than do pre-GR theories". Do you mean the action of GR, i.e. the integral of the Ricci scalar, or do you mean considering fields on a non-flat background metric? I was unaware that there is an abundance of symmetries in GR; do you have a reference?
@akidbelle
Non-conservation of energy in GR does indeed mean that particles are created. It was demonstrated long ago that the expansion of the universe causes particles to be produced.
@Sabine Hossenfelder
Great post!
@ Maudlin
Think of an oil drop on water. The shape it assumes is circular, which minimizes the tension it has with the water. If you start an oil drop on the surface with a very irregular shape it will quickly wiggle around until it reaches the circular shape. Since it assumes a final shape with minimal tension, the tension is larger when it starts out shaped with lots of lobes and so on. The excess energy is dissipated away.
LC
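A 2D toy comparison (my own illustration, assuming the interfacial energy is simply proportional to the length of the oil-water boundary): for a fixed enclosed area the circle has the shortest boundary, so any lobed shape carries excess interfacial energy, which is what gets dissipated as the drop relaxes.

    import math

    # Interfacial energy ~ (surface tension) x (boundary length) in this 2D toy picture.
    A = 1.0
    print("circle perimeter:", 2 * math.sqrt(math.pi * A))  # ~3.545
    print("square perimeter:", 4 * math.sqrt(A))            # 4.0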
Space Time, Robert,
That's an interesting case indeed. As Robert says, the somewhat more formal formulation of the statement in my blogpost is that you have to estimate which terms in the Lagrangian become relevant in which limits. Now I don't know what you'd do in this case; maybe the conclusion is that you need some non-trivial interaction to see physical effects. It seems to me this should have been discussed in the literature somewhere, I'll keep an eye out for it.
Robert Helling
Every single-world understanding of the quantum formalism is non-local according to Bell's precise criterion, which of course does not make every "classical theory with correlations" non-local, if I am understanding that somewhat obscure phrase. As for theories that are not single-world theories, such as Everett's, they are also non-local, but for more subtle reasons.
QFT, if you understand it as a single-world theory (i.e. if you just sort of brute-force stipulate a solution to the measurement problem), is manifestly non-local already via the EPR argument. The idea that it is not is a common error. QFT is non-signaling, as the ETCRs imply, but signaling was never the issue. Einstein refused to accept the Copenhagen understanding of quantum theory for his entire life due to its "spooky action-at-a-distance", but he was never under the impression you could use the theory to send superluminal signals. If he had been under that impression, he would have demanded an experimental demonstration.
The phrase "if you insist that it is realistic" applied to a theory is not even grammatically well-formed. According to standard usage, theories are neither "realistic" or "non-realitistic". It is a person's attitude toward a theory that is either realistic or non-realistic. And that has nothing at all to do with Bell's theorem. The theorem does not care what attitude you adopt toward a theory, and cannot be circumvented by adopting some attitude toward the theory at issue.
Since you want to use this phrase in some non-standard way, it would be most productive if you start by giving a clear definition of what it is supposed to mean.
Sabine,
I have some questions concerning #5:
1) What are non-local correlations? How do they differ from local correlations?
2) What does it mean that these correlations are quantifiably stronger than correlations of non-quantum theories?
3) How is randomness connected to non-signalling? The de Broglie–Bohm theory is deterministic and prohibits signalling.
4) Can you clarify how there's spooky action at a distance in quantum mechanics and at the same time quantum mechanics is compatible with Einstein's speed of light limit?
@ J. Brian Pitts,
You are right that there are pseudotensor conservation laws. The universe at large has, from the Hamiltonian constraint H = 0, the Friedmann equation
(a'/a)^2 - 8πGρ/(3c^2) + k/a^2 = 0.
This applies in the Hubble frame, where galaxies are all at rest with respect to the comoving frame of the approximately de Sitter spacetime. This is a case of a pseudotensor conservation law, for it is not correct in general: if galaxies were moving around with a huge range of relative velocities, they would not serve as objects that could properly define the scale factor a.
LC
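A concrete sanity check of that constraint (a minimal sketch, taking k = 0, matter domination and units with c = 1, with the normalization of ρ chosen by hand):

    import sympy as sp

    t, G = sp.symbols('t G', positive=True)
    a = t**sp.Rational(2, 3)              # matter-dominated scale factor, k = 0
    rho = 1 / (6 * sp.pi * G * t**2)      # rho ~ a^(-3), normalization chosen to fit
    H2 = (sp.diff(a, t) / a)**2
    print(sp.simplify(H2 - sp.Rational(8, 3) * sp.pi * G * rho))  # -> 0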
@Sabine: Sorry if this is a repeat; my browser crashed, so I am not sure if my previous effort went through.
@Lawrence Crowell: Thank you.
@Unknown:
That there are infinitely many symmetries of the _action_ of General Relativity is just general covariance in its original mid-1910s form, but it tends to be viewed from other angles. See Peter Bergmann, "Conservation Laws in General Relativity as the Generators of Coordinate Transformations," _Physical Review_ 112 (1958) pp. 287-289. I have in mind the Hilbert action, or Einstein's \Gamma-\Gamma, or Moeller's tetrad action, or any other one that is quasi-invariant (or invariant) under coordinate transformations---which is all of them. Let the vector field \xi^{\mu} describe an infinitesimal coordinate transformation. If \xi is timelike, there is a coordinate system such that \xi's components are (1,0,0,0). (This works at least in a large-ish neighborhood, rather like our data.) Thus this is a 'rigid' translation, formally, the same everywhere componentwise. Of course if \xi is my favorite vector field but some other vector field \psi is your favorite, then yours will look wiggly to me and vice versa; they won't look rigid in the same coordinate system. Taking the math literally, I say that we are both right and infer infinitely many conserved energies. (I say more about this in "Gauge-Invariant Localization of Infinitely Many Gravitational Energies from All Possible Auxiliary Structures," _GRG_ 42 (2010) pp. 601-622, 0902.1288 [gr-qc]). Most of the reason that my energy doesn't transform into yours (pseudotensor behavior) is that we are talking about different energies---much as "John is short" and "Juan no es pequeño" ["John is not short" if my rusty Spanish suffices] are not equivalent under translation if John and Juan are two different people. At a minimum, one at least formally has infinitely many conserved currents with many energy-like properties.
The reason that it is incorrect to say that there must be a time-like Killing vector field for energy to be conserved is that the metric in GR has Euler-Lagrange equations. Noether's theorem as given by her assumes that every field has Euler-Lagrange equations; it doesn't assume anything much about what is or isn't geometry. In pre-GR theories, 'fields' that one would count as geometric (if one bothered to count them as fields at all, rather than hiding them as collections of 1's and 0's in their adapted coordinate systems, as people actually did!) had symmetries---Killing vectors such as the Poincaré group, affine collineations, or whatnot. With such a formally generally covariant formulation of a not-substantively generally covariant theory, the fields that lack Euler-Lagrange equations (flat metric tensors, etc.) have symmetries instead. The more such 'absolute' fields, the fewer vector fields leave the Lagrangian density (quasi-)invariant, typically. In GR there are no such fields, so every vector field is a symmetry of the Lagrangian.
Andrzej Trautman considers the possibility of some fields that lack Euler-Lagrange equations here: "The General Theory of Relativity," _Soviet Physics Uspekhi_ 89 (1966), pp. 319-339. What one sees there is that every field in the action is a _threat_ to conservation, and this threat can be met in one of two ways: a given field can have Euler-Lagrange equations OR have symmetries. As long as for every field one condition or the other is satisfied, then there are conservation laws. It is not necessary to have both (which is the case of GR with Killing vectors). In that very special case, there is a NICE conservation law that meets the extra standards that many people wish to impose. But one always has pseudotensor conservation laws, and they are better than nothing and not as bad as most people think. On that last point, James Nester's linking of pseudotensors to quasilocal quantities is noteworthy.
I'm also curious about point 5. It seems an overinterpretation, to me, to claim that quantum mechanics is non-local. Ultimately, what Bell inequality violation demonstrates is that a set of events (measurements) cannot have a joint probability distribution. That the events influence one another is one possible way for this to come to pass, but not the only one.
A Bell inequality is a facet of a polytope constructed by taking convex combinations of measurement outcomes, including those outcomes that can't be simultaneously observed. So the whole polytope depends on the assumption that it's meaningful to talk about outcomes of measurements that can't be simultaneously performed. If you deny this, then you can't construct the polytope in the first place, and consequently, the theory's correlations won't be bounded by Bell inequalities. But this doesn't incur any non-locality, at least not as far as I can see.
Jochen
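To make the comparison concrete (a minimal sketch of my own, not part of either comment): the CHSH inequality bounds every strategy in which all four outcomes are pre-assigned, while the singlet-state correlations E(x,y) = -cos(x-y) exceed that bound.

    import itertools, math

    # Local deterministic strategies: pre-assigned outcomes A(a), A(a'), B(b), B(b') in {+1, -1}.
    # CHSH = E(a,b) + E(a,b') + E(a',b) - E(a',b').
    best_local = max(
        abs(Aa*Bb + Aa*Bb2 + Aa2*Bb - Aa2*Bb2)
        for Aa, Aa2, Bb, Bb2 in itertools.product((+1, -1), repeat=4)
    )
    print("local deterministic bound:", best_local)   # 2

    # Singlet-state correlations for measurement angles x, y: E(x,y) = -cos(x-y).
    E = lambda x, y: -math.cos(x - y)
    a, a2, b, b2 = 0.0, math.pi/2, math.pi/4, -math.pi/4
    S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
    print("quantum value:", abs(S))                   # 2*sqrt(2), about 2.83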
The notion that Bell's argument has anything at all to do with polytopes or joint probability distributions or (to use Reinhard Werner's favorite canard) state spaces that are simplexes is a fiction. Bell's 1964 argument starts from the correct conclusion of the 1935 EPR argument: if you want to save locality in an EPR setting then the theory cannot be indeterministic (as Bohr insisted upon) but must be deterministic. Therefore 1) the quantum-mechanical description of a system (i.e. the wavefunction) is not a complete physical description (cf. the title of the EPR paper), and 2) if you want to save locality you can immediately narrow your search to deterministic theories.
Bohm's publication of his paper in 1952 alerted Bell to the possibility of recovering all the non-relativistic quantum phenomena from a deterministic theory. But of course Bohm's theory is manifestly non-local, so it would not have appealed to Einstein, and didn't. (A straight historical refutation of the myth that what Einstein really cared about was determinism. What he really cared about, as he said over and over, was locality.)
Bell set about asking the question whether a local theory could possibly recover all of the phenomena predicted by quantum theory *without any sort of non-locality such as appeared in Copenhagen and in Bohm.* And via his theorem he concluded (correctly) "no". The *only* assumptions of his theorem are locality (i.e. what is called Bell-locality or Einstein-locality) and the Statistical Independence assumption that underlies all scientific approaches to understanding the world. That's it. Nothing else. So if Bell's inequality is violated by experiments done at spacelike separation (it is), there are only two options: abandon locality or try to wiggle your way out of the statistical independence assumption. But since that assumption underlies all scientific thinking, abandoning it is a rather Pyrrhic victory: you get to keep locality at the price of the entire scientific method. So the only acceptable conclusion is that the world is non-local, and we should try to understand how to incorporate that non-locality into our fundamental physics.
SH: "Quantum mechanics is indeed perfectly compatible with Einstein’s speed-of-light limit."
To be clear, you're presumably talking about relativistic quantum mechanics, because obviously non-relativistic quantum mechanics is not relativistic and doesn't respect the speed of light limit. For example, the non-relativistic Schrödinger equation is not Lorentz invariant, and it allows one to accelerate a particle to arbitrary speeds. The constant c doesn't appear in the non-relativistic equations of motion. Once we stipulate that we're talking about relativistic quantum mechanics, the fact that it is relativistic is not surprising.
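A one-line way to see the "arbitrary speeds" point (standard dispersion relations, nothing beyond textbook facts): the free non-relativistic theory has

    E = p^2/(2m)   =>   v_group = dE/dp = p/m   (unbounded),

whereas the relativistic dispersion E = sqrt(p^2 c^2 + m^2 c^4) gives v_group = p c^2 / E, which is always below c.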
TM: Not according to Einstein himself. The reason he so strongly rejected Copenhagen quantum mechanics was exactly because of the spooky-action-at-a-distance.
After about 1927 Einstein conceded that quantum theory didn't violate special relativity in any operational way, and people like Bohr never thought that it did. (Recall that Bohr used general relativity to refute one of the last of Einstein's attempts to prove some operational conflict.) If anyone had ever thought that EPR correlations enabled faster than light signaling, it would have overturned all of physics. Einstein's mature objection was to the completeness of quantum theory, not to its correctness. If he had thought it actually violated special relativity in any operational way he would have denied its correctness... and so would have everyone else. Even regarding completeness, he conceded that the "extra" completion might involve things that are in principle unobservable, hence a matter of interpretation.
TM: As for "Einstein's speed-of-light limit", you can search his 1905 paper up and down and you will find no mention of any such limit.
I don't think that's true. For example, in section 4, discussing length contraction, he says "The greater the value of v, the greater the contraction. At v=c all moving objects - observed from the system at rest - shrink into plane structures. For superluminal velocities our considerations become meaningless: we shall see in the considerations that follow that in our theory the velocity of light physically plays the part of infinitely great velocity."
I would say that statements like that can rightly be interpreted as "mentioning the speed of light limit". He went into more detail in his 1907 review article, in which he explained that any putative causal effect propagating faster than c in terms of any system of inertial coordinates would be going backwards in time in terms of some other system of inertial coordinates, and hence "the effect would precede the cause". He carefully says
"Even though this does not contain any contradiction from a purely logical point of view, it conflicts with the character of all our experience to such an extent that this seems sufficient to prove the impossibility of the assumption v > c".
One might say, well, Einstein didn't know about EPR correlations (in 1907), but the most important point is that even EPR correlations respect the causal restriction to v < c. Spacelike observables still commute, and there is no superluminal action (transfer of energy, momentum, or information).
By classical non-locality I mean the following: in the dark I choose a pair of socks from my drawer, so I don't know their color. When later that day I observe the color of the sock on my left foot to be red, does the information travel instantly to the right foot? Before looking there was a "non-local" correlation between the sock colors at the two feet. That is, states, even in the classical theory, are global objects. The things for which it makes sense to discuss locality are not the states but the dual objects, the observables. And the point of Haag-Kastler is that you can build all of QFT from local observables.
What is different in the quantum theory is that for classical socks you can always say "I did not look, but the socks actually have a color". In the quantum theory you are not allowed to do that if you actually do a measurement that is not compatible with the color measurement. Or in more common terms: it makes no sense to speculate about the spin of a qubit in the x direction if you happen to measure the spin in the z direction. This means it is not realistic: it does not allow you to reason about all values of observables simultaneously, as is allowed classically. Or more mathematically: the classical state space is a simplex, every state has a unique decomposition into pure states. This makes it a probability distribution. The quantum state space is still convex but in general not a simplex, the Bloch sphere for the qubit being the simplest example.
Dropping the “all things secretly have values even when it is possible to observe them” assumption, which I refer to as realism but one could also call classical prejudice makes violation of bell inequalities or ghz no longer being in tension with locality.
Hi Sabine,
Regarding your #9, if you observe some VERY distant stars (such that MOND-like behavior dominates) falling into a VERY VERY distant MOTHER OF ALL MOTHERS black hole, would their photons appear redshifted in proportion to their distance?
Water-oil experiment in space
https://www.youtube.com/watch?v=EXH7mR_b21g
The stress-energy does obey a conservation law. The integral of the stress-energy tensor over a closed surface gives ∫T^{ab}dA_{ab} = ∫∇_c T^{ab}dV^{abc}, by Stokes' rule. This is a deep symmetry of relativity, because the Bianchi identities for the Riemann curvature together with the Einstein field equations ensure this is zero. So stress-energy crossing a surface is guaranteed to pass back out again.
This is a bit different from the energy E = e_aT^{a0}; when you now apply Stokes' rule you find there is a derivative of the basis element e_a. Now integrate between two spatial surfaces (three-dimensional volumes in four dimensions), dS^{abc} and dS'^{abc}. The differential must then be with respect to time, and for a Killing vector K = K_t∂_t the covariant derivative is constant on the e_a and we no longer have these troublesome connection terms.
This is why, in a funny way, the continuity equation is not quite the same as energy conservation. It is not something to lose a lot of sleep over, though it becomes a bit tricky with quantum gravity. A related issue with quantum gravity is that a wave function may not have a finite normalization.
Correction: in my concluding statement I should have said that a sum over geometries does not sum to a finite value.
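For completeness, the one-line version of why the covariant conservation law mentioned above is automatic (the standard argument, in units with c = 1): the contracted Bianchi identities give ∇_a G^{ab} = 0, so the Einstein equations G^{ab} = 8πG T^{ab} immediately force ∇_a T^{ab} = 0. Whether and how this integrates up to a globally conserved energy is then exactly the Killing-vector versus pseudotensor issue discussed in the comments above.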