- (Information) Paradox Lost

Tim Maudlin

arXiv:1705.03541 [physics.hist-ph]

Here is the problem. The dynamics of quantum field theories is always reversible. It also preserves probabilities which, taken together (assuming linearity), means the time-evolution is unitary. That quantum field theories are unitary depends on certain assumptions about space-time, notably that space-like hypersurfaces – a generalized version of moments of ‘equal time’ – are complete. Space-like hypersurfaces after the entire evaporation of black holes violate this assumption. They are, as the terminology has it, not complete Cauchy surfaces. Hence, there is no reason for time-evolution to be unitary in a space-time that contains a black hole. What’s the paradox then, Maudlin asks.
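A quick sketch of the standard argument being summarized here (my notation, not the post's): reversibility makes the evolution operator invertible, and probability preservation plus linearity makes it norm-preserving, which together force unitarity:

```latex
% Probability preservation for every state:
\|U\psi\|^2 = \langle U\psi,\, U\psi\rangle
            = \langle \psi,\, U^\dagger U\,\psi\rangle
            = \|\psi\|^2 \quad \forall\,\psi
\;\Longrightarrow\; U^\dagger U = \mathbb{1}.
% Reversibility supplies the inverse, so U U^\dagger = \mathbb{1} as well,
% i.e. U is unitary. Note the argument presupposes that states on one
% complete Cauchy surface evolve to states on another complete Cauchy
% surface -- exactly the assumption that fails after black hole evaporation.
```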

First, let me point out that this is hardly news. As Maudlin himself notes, this is an old story, though I admit it's often not spelled out very clearly in the literature. In particular the Susskind-Thorlacius paper that Maudlin picks on is wrong in more ways than I can possibly get into here. Everyone in the field who has their marbles together knows that time-evolution is unitary on "nice slices" – which are complete Cauchy hypersurfaces – *at all finite times*. The non-unitarity comes from eventually cutting these slices. The slices that Maudlin uses aren't quite as nice because they're discontinuous, but they essentially tell the same story.

What Maudlin does not spell out, however, is that knowing where the non-unitarity comes from doesn't help much to explain why we observe it to be respected. Physicists are using quantum field theory here on planet Earth to describe, for example, what happens in LHC collisions. For all these Earthlings know, there are lots of black holes throughout the universe and their current hypersurface hence isn't complete. Worse still, in principle black holes can be created and subsequently annihilated in any particle collision as virtual particles. This would mean, according to Maudlin's argument, that we'd have no reason to even expect a unitary evolution because the mathematical requirements for the necessary proof aren't fulfilled. But we do.

So that's what irks physicists: If black holes violated unitarity all over the place, how come we don't notice? This issue is usually phrased in terms of the scattering matrix, which asks a concrete question: If I can create a black hole in a scattering process, how come we never see any violation of unitarity?

Maybe we do, you might say, or maybe it’s just too small an effect. Yes, people have tried that argument, which is the whole discussion about whether unitarity maybe just is violated etc. That’s the place where Hawking came from all these years ago. Does Maudlin want us to go back to the 1980s?

In his paper, he also points out correctly that – from a strictly logical point of view – there’s nothing to worry about because the information that fell into a black hole can be kept in the black hole forever without any contradictions. I am not sure why he doesn’t mention this isn’t a new insight either – it’s what goes in the literature as a remnant solution. Now, physicists normally assume that inside of remnants there is no singularity because nobody really believes the singularity is physical, whereas Maudlin keeps the singularity, but from the outside perspective that’s entirely irrelevant.

It is also correct, as Maudlin writes, that remnant solutions have been discarded on spurious grounds, with the result that research on the black hole information loss problem has grown into a huge bubble of nonsense. The most commonly named objection to remnants – the pair production problem – has no justification because – as Maudlin writes – it presumes that the volume inside the remnant is small, for which there is no reason. This too is hardly news. Lee and I pointed this out, for example, in our 2009 paper. You can find more details in a recent review by Chen *et al*.

The other objection against remnants is that this solution would imply that the Bekenstein-Hawking entropy doesn’t count microstates of the black hole. This idea is very unpopular with string theorists who believe that they have shown the Bekenstein-Hawking entropy counts microstates. (Fyi, I think it’s a circular argument because it assumes a bulk-boundary correspondence ab initio.)

Either way, none of this is really new. Maudlin’s paper is just reiterating all the options that physicists have been chewing on forever: Accept unitarity violation, store information in remnants, or finally get it out.

The real problem with black hole information is that nobody knows what happens to it. As time passes, you inevitably come into a regime where quantum effects of gravity are strong and nobody can calculate what happens then. The main argument we are seeing in the literature is whether quantum gravitational effects become noticeable before the black hole has shrunk to a tiny size.

So what’s new about Maudlin’s paper? The condescending tone by which he attempts public ridicule strikes me as bad news for the – already conflict-laden – relation between physicists and philosophers.

I ask: what reason do we have to prefer the usual SM Lagrangian over one in which the top Yukawa coupling constant varies in time at a rate of 1 part in 10^1000? Is it because of some "fundamental" requirement? If so, please spell that out. The correct answer is that the theory with constant Yukawa coupling has more symmetry, and it seems to be a good rule of thumb that nature chooses the more symmetrical equations when possible. I.e. nature prefers symmetry (beauty). Without such a symmetry/beauty rule of thumb to follow, you would have to assign equal "probability" to both theories, since they fit the data equally well. It's just a question of identifying your assumptions.

No. If the top Yukawa coupling constant being constant, or varying in time at a rate of α * 10^(-1000) where α ~ O(1), yields the same physics, practically speaking, then I don't introduce α because it is an unnecessary and indeterminate constant, and not because of beauty or because of nature allegedly preferring symmetrical equations. It is a matter of logic, not of beauty.

The absurd example of two competing theories - the standard model of physics as the first theory and the second theory being (the standard model of physics and there are unicorns on a planet in the Andromeda galaxy) - that are experimentally equivalent today and for the indefinite future perhaps should make the point clear. It is not beauty or nature's alleged preference for symmetrical equations that makes me choose the first theory over the second; nor do I assign equal probabilities to the two theories. This is where the clarity brought by philosophers, which is different in many aspects from the clarity brought by mathematicians, is of value.

Physphill,

There are QFT versions of hidden variable theories that work perfectly well. How do these not count as counterexamples?

For example: https://arxiv.org/abs/0707.3487

For what it's worth, I'm not a Bohmian, I think they have their own problems in terms of their potential for pointing us towards deeper theories, but it's just untrue that you can't reproduce all of relativistic QM with hidden variable theories. It has been done explicitly in multiple ways.

Physphill,

I already know that all of the claims I'm making are false in many worlds. I said that at the beginning. The point is that in all single-world theories, there must be nonlocality. So your answer to what happens after measurement, if we assume single-world, must be that the wavefunction collapses. How can you get the right predictions without assuming that the collapse happens instantaneously?

Bee,

you write "I have never heard anyone say that the constants in the standard model have an unknown time-dependence. But leaving aside the nomenclature, maybe more importantly if the constants had an undetermined time-dependence the model would be useless."

Unlike you I have heard many times about such ideas. For instance, this PRL has 800+ citations: "A Search for Time Variation of the Fine Structure Constant" https://arxiv.org/abs/astro-ph/9803165

For AdS/CFT I do not understand your worry. Suppose you want to test black hole evaporation properties in a lab. So, you build a box to hold your black hole. Then you create one by shooting in some energy from the walls of the box. Now, you can leave it alone (with walls reflective) until some later time when it has evaporated, when you detect the radiation. Or, you can try other experiments, like shooting more stuff in after it formed, or letting some radiation out before it has fully evaporated, etc. Obviously you can do and control all these experiments, and you might want to perform them all to test whether the BH is preserving information. Or, maybe you are satisfied with just the first. Either way, what is the problem? This control helps you figure out physics, it does not hinder you.

Bee,

I am just not seeing what point you are trying to make here. Every QFT known to humanity has the property that you can deform it by adding source terms. It's true of the Standard Model Lagrangian and it's true of the CFTs that appear in AdS/CFT. So any critique you level at AdS/CFT based on the fact that we can add source terms to the CFT can be equally well applied to the Standard Model Lagrangian. As everyone who has studied the SM knows, we restrict the couplings by demanding symmetry -- there is simply no other reason. Similarly, we can fix the CFT Lagrangian by imposing symmetry. The situation is entirely parallel, and I just don't see what more there is to say about this. It seems absurd to regard this as some failure of determinism.

I certainly don't advocate adding spacetime dependent sources to the Standard Model, but the difference between us is that I recognize that this is not because of some "fundamental" reason as you put it (what that means, I have no idea), but simply that experience tells us to use symmetry as a guide to the laws of nature. You haven't explained what this "fundamental" reason is -- please do so. If not symmetry, what is this principle that makes you so sure that there are no spacetime dependent couplings?

As a historical point, I suspect you are in fact aware of people talking about the fact that the couplings in the Standard Model could have some time dependence: there is a rather famous story about the fine structure constant involving Dirac regarding this. I am of course not endorsing this; quite the opposite.

Anyway, having fixed the CFT Lagrangian in AdS/CFT, the evolution is unitary, the information comes out in the Hawking radiation, a state at time t can be evolved backwards to some earlier time in a unique fashion etc. What more do you want? I don't understand your "cheating" comment. We can put the equations on a computer and run them forwards and backwards to your heart's content.

Tim will be very pleased to hear that you agree with him. The last time I recall you coming down on his side was when you told me that I was confused about the meaning of Gauss' law, and that you were sure Weinberg's book supported this. This year long exchange was basically repeating this time after time. That said, if you have managed to distill anything from him that is both new and correct, I would be happy to hear about it, because I sure don't see it.

bhg,

Sure I know that people have studied the possibility that constants have a time-dependence. But that's usually considered new physics, not part of the standard model.

Sure you can add sources to the standard model. What do you think these sources come from if not the standard model? That's what my comment about fundamental is about.

I haven't followed most of your exchange with Tim, I was commenting on the statements in his paper.

So I mistakenly thought you were referring to Gauss' law when you were referring to Gauss' theorem (or the other way round, I've already forgotten). And you'll probably walk around for the rest of your life making proclamations about it. Wish you much fun with that, I hope you realize how silly that is.

"Anyway, having fixed the CFT Lagrangian in AdS/CFT, the evolution is unitary, the information comes out in the Hawking radiation, a state at time t can be evolved backwards to some earlier time in a unique fashion etc."

You told me earlier you can't do that, so what are you referring to now? I give you the state at time t_end, can you evolve it back to t_i? My understanding was the answer is "no". See earlier example about throwing in a book. You don't have this information at t_end. I have to give it to you for you to be able to evolve the state back.

Anonymous Black Hole Guy

After an entire year of comment, the very, very worst thing you can cite to show how terribly incompetent I am is a confusion about the term "Gauss's Law", which happens to be used ambiguously? That's it? Meanwhile, you, the supposed "expert" in AdS/CFT are completely unaware of the role that the Hamiltonian constraint plays in specifying the time evolution of the bulk theory (insisting, absurdly, that the *entire time evolution* of the theory is determined by the boundary Schrödinger equation, which is only there because of the timelike boundary, which is exactly what renders the theory non-globally-hyperbolic and therefore not even a candidate for the sort of determinism that we believe could implement information conservation). Sabine has been correctly pointing out this trivial point. It has been a standard assumption in the history of physics that the fundamental laws of nature are fixed and independent of time. Hence the observation of the present world can provide information about what those laws are, which can in turn be used for both prediction and retrodiction. If you are of a mind to question that be my guest, but Sabine's point is that in that case you are not in the business of postulating information preservation from the get-go.

Among the many things you seem to be terminally confused about, here is one that may help. We often model a subsystem of the universe as if it were the entire universe, with its own Hamiltonian, boundary values, etc. And that fictional Hamiltonian can vary in time because it really represents the rest of the universe, the bit we are not trying to explicitly model. Thus, if the system of interest is an atom, and I have put it in a time-varying magnetic field, I will build the field and its time variation into the fictional Hamiltonian that I apply to the atom. Similarly, if the experiment calls for irradiating that atom with x-rays, I deal with that by changing the boundary conditions of the little fictional universe that starts out containing just the atom. But all of this is just manifest idealization, simplification and fiction. In principle, I should be able to expand the system under analysis to include not just the atom but also all of the laboratory equipment. Then the Hamiltonian no longer varies in time: the variation of the magnetic field is accounted for by how the lab equipment has been constructed, operating under a fixed Hamiltonian. And similarly (and even more obviously) the injection of the x-rays, which I treated by simply changing the boundary conditions of the little atomic system, is now instead produced by the physical x-ray source, operating by the single fixed Hamiltonian. Ideally, we expand the system under analysis to a point where it is an *isolated system* FAPP. Then the Hamiltonian is unaffected by the rest of the universe and nothing passes either way through the boundary.

Cont'd

Of course, this is always just a fictional idealization: there is a larger universe that influences my system, at least gravitationally, no matter what precautions I take in the lab. But in a globally hyperbolic space-time, the physics should be able to treat of *the entire universe* as the system under investigation. Then there is no *environment* that has been left out which needs to be compensated for in the Hamiltonian, and there are no timelike boundaries to deal with. You have a fixed Hamiltonian (which could have a fixed time dependence, but that would be revolutionary) and a universal boundary condition and the rest is analysis. That is a straightforwardly Laplacian deterministic system if the dynamical laws are deterministic (e.g. no random collapses). That is the setting in which one discusses "information conservation". Going to a non-globally-hyperbolic setting with timelike boundary conditions, like AdS, is already messing the entire situation up. What you can try to do there is fix the timelike boundary conditions (e.g. put in Dirichlet or Neumann conditions). This is absolutely essential to what you are trying to do, although I don't recall you even stating explicitly what the boundary conditions you are using are. Instead, you just wave your hands and talk about “putting gravity in a box” (which, of course, cannot be done: you don’t confine gravitational interactions by a potential well the way you can confine electromagnetic interactions). In fact, the overall impression one gets reading your posts is that you have only the vaguest mathematical and technical understanding of what you are doing, and are guided by flawed analogies and metaphors rather than any sort of strict analysis at all.

I think this is what Sabine is getting at, but she is too polite to come out and say it. But she can correct me if I am off here.

Bee,

"Sure I know that people have studied the possibility that constants have a time-dependence. But that's usually considered new physics, not part of the standard model.

Sure you can add sources to the standard model. What do you think these sources come from if not the standard model? That's what my comment about fundamental is about."

Again, I can't make any sense of this whatsoever. Your argument now appears to be that the SM is the way it is because someone once named a specific Lagrangian "the Standard Model", as opposed to "new physics". This is not a question of arbitrary names. The question is on what grounds do you think that nature is described by this Lagrangian as opposed to one in which some "constants" have a spacetime dependence below observational limits? The correct answer is that you have no reason to think this apart from noting that one specific choice has greater symmetry and so is a "natural" choice. I have no problem making that assumption, but I don't dress it up in bogus arguments.

"You told me earlier you can't do that, so what are you referring to now? I give you the state at time t_end, can you evolve it back to t_i. My understanding was the answer is "no". See earlier example about throwing in a book. You don't have this information at t_end. I have to give it to you for you to be able to evolve the state back."

I am not sure how many times I have to say this, but (as in every other QFT) you first need to tell me what the CFT Lagrangian is and only then can I run the equations back in time. This is the same situation as in every physical system. How do you expect me to solve the equations until the equations are specified?

"Wish you much fun with that, I hope you realize how silly that is."

That was precisely my point. There has been so much silliness in this thread, and that just happened to be the most comical example.

Travis:

"The hidden variables themselves don't propagate across spacelike distances..."

Okay, we're in agreement that there is no superluminal propagation of hidden variables. (Previously you said if anything was propagating superluminally it was hidden variables.)

"…it's just that the particles in one location influence the particles everywhere, so there is a superluminal influence."

The word "influence" still smuggles in a sense of action and directional propagation, but the events are symmetrical -- each precedes the other in some frame -- so we can't assert any directionality.

"Again, the situation isn't directionally symmetrical because in Bohmian mechanics (and probably in all deterministic hidden variable theories) there is a preferred frame…"

Relativistic theory has no *physically meaningful* preferred frame. (Bohmian mechanics is inherently non-relativistic -- even the so-called "relativistic" versions -- so I don't consider it viable.)

"It's just not true that we can claim that in a relativistic context we have no superluminal signaling, if by relativistic context you mean QFT."

I think it's true. The so-called causality condition -- spacelike separated observables commute -- is a cornerstone of QFT, and it prohibits superluminal signaling. Of course, this does not conflict with the correlations in quantum entanglement.
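For reference, the causality condition invoked here is standard microcausality; in my notation it reads:

```latex
% Microcausality: local observables at spacelike separation commute,
[\,\mathcal{O}_1(x),\, \mathcal{O}_2(y)\,] = 0
\quad \text{whenever } (x-y)^2 < 0
\quad \text{(spacelike, signature } {+}{-}{-}{-}\text{)}.
% Consequently the measurement statistics in one region are unchanged by
% any operation performed in a spacelike separated region -- no signaling --
% even though outcomes in the two regions can be correlated (entanglement).
```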

"Bell's theorem shows that superluminal signaling of some kind is going on…"

I don't think so. Bell just made more explicit what Einstein had already discerned, i.e., that certain spacelike separated events which, based on certain assumptions, one might expect to be statistically independent are not actually statistically independent, so something about the assumptions is wrong. One assumption is the so-called "Einstein causality condition", which implies no superluminal signaling, but it's well known that quantum entanglement does *not* violate that assumption.

"Superdeterminism really is unscientific as an explanation…"

I'm not here to advocate superdeterminism, but I don't think it's inherently unscientific. We don't regard Laplace's deterministic block universe as unscientific, even though it has no contemporary free will, no counterfactuals, etc. The difference with superdeterminism, in addition to the much more constrained initial conditions, is closely analogous to spatial non-separability. People feel more comfortable with Laplace determinism because they can block off the causal past beyond some point and apply simple proximate laws of physics to just the local conditions (position, first and second derivatives), without worrying about the distant history of these memoryless particles, and it works fine (except for those EPR correlations!). So Laplace determinism seems safe and "separable" in this sense, whereas superdeterminism entails "non-separability", i.e., particles could be "entangled" with their more distant pasts (perhaps even involving some subtle laws not yet discerned that would causally account for the statistical correlations as Einstein hoped). Now, your argument is that if we allow such temporal non-separability, science itself is impossible, just as Einstein argued about spatial non-separability. In each case, the counter-argument is more or less the same.

Travis,

"There are QFT versions of hidden variable theories that work perfectly well. How do these not count as counterexamples?

For example: https://arxiv.org/abs/0707.3487"

This is not a counterexample, as the authors themselves seem to agree: "Our pilot-wave model is not Lorentz covariant at the fundamental level. However, since the models reproduce the predictions of standard quantum field theory, Lorentz covariance is regained at the statistical level."

Lorentz covariance is an exact statement in relativistic QFT, not a statistical one. So, the predictions of this theory do not coincide with QED. Not a counterexample. There are other fundamental disagreements. This theory cannot reproduce many well-measured predictions of QFT, such as the renormalization of couplings. It cannot handle pair production, or even uncertainties in particle number. It cannot handle any fermions at all.

"So your answer to what happens after measurement, if we assume single-world, must be that the wavefunction collapses. How can you get the right predictions without assuming that the collapse happens instantaneously?"

It can happen instantaneously. When I observe a classically correlated variable, my knowledge of the distant state may change instantaneously. Obviously that is not non-local. You may object that PBR proved the wavefunction cannot be "epistemic" like that. That is false. They "proved" only that the wavefunction cannot represent uncertainty over local (classical) hidden variables. The actual world is not classical. The actual world is not described by real probabilities.

There actually exist interesting and new questions for philosophers. What does "ontic" mean and what is "epistemic" in the actual quantum world? What are the implications of the superposition principle? Few seem to even understand those questions. Instead they chase dust, assuming the world is classical and trying to reconcile QM predictions. That is precisely backwards.

This blog was going on for over one year more or less under the premise of ‘information conservation’ or as Tim just said “That is a straightforwardly Laplacian deterministic system if the dynamical laws are deterministic (e.g. no random collapses). That is the setting in which one discusses ‘information conservation’”.

The main statement in Tim's paper "(Information) Paradox Lost" is:

"... is a Cauchy-to-non-Cauchy transition. That means that there never has been any grounds to expect the transition to be either retrodictable or unitary. Quantum theory does not imply, and never has, that such a transition must be retrodictable or unitary or 'preserve information'. There were no grounds in 1975, and remain no grounds today, to expect information to be preserved. There was never any information loss paradox based in fundamental properties of quantum theory and relativity."

As I already said, Tim rightly points out in "What Bell Did" that EPR is primarily about non-locality. There I also found: “… Einstein’s aim was to restore causality (i.e. determinism) and locality, …. Determinism can, in fact, be restored while keeping the same predictions, as the pilot wave theory shows.”

Thus let's talk about determinism and realism.

Since Tim is busy, I just googled for "is not so much the question of causality but the question of realism" and e.g. here are some options for what Einstein could have meant with realism:

1. His belief that a reality exists independent of our ability to observe it.

2. His belief in separability and locality.

3. His belief in strict causality, which implies certainty and classical determinism.

I simply cannot understand what is so attractive about determinism, or better an *exclusively* unitary evolution. You immediately run into e.g. the BH information loss problem. Wouldn't it be much better to break this determinism, break the symmetry between the past and the future?

To close the “freedom of choice loophole” in EPR we of course assume we are no puppets. But we are just part of physics and if physics is completely deterministic – well, this would be depressing.

So, where does the randomness enter? At the very initial conditions, an ensemble (that even nature does not know about … hmm)?

Is not the QM measurement calling, for almost 100 years now: "Pick me"? (It provides randomness that is unitarily calculated, the best we can get, adds up nicely to one.) Also, an observer independent triggered measurement would be nice, because we want that "reality exists independent of our ability to observe it". And yes, I want one reality. Nature is the boss, not us. But we are no puppets, therefore we need a bit of randomness.

There seems to be a lot of talking past each other. Correct me if I'm wrong, but this seems to be the case based on what BHG has said, and there doesn't seem to be anything in principle wrong with it, as far as adding sources is concerned:

It is typically the case that we evolve the state in the bulk by assuming reflecting boundary conditions. Having specified that the boundary is reflective, the evolution is perfectly well-defined. These reflecting boundary conditions are implemented in the CFT by adding sources in a very specific way. We could add different sources which represent different boundary conditions or the injection of matter at various times, but we don't need to.

Tim's point is a good one though... does the reflective boundary condition refer only to "electromagnetically reflective" or "gravitationally reflective"?

bhg,

"I am not sure how many times I have to say this, but (as in every other QFT) you first need to tell me what the CFT Lagrangian is and only then can I run the equations back in time."

For all I can see I'd actually have to tell you what the Lagrangian is back in time so you can run the equations back in time. I am not sure whether you really believe that is how QFTs normally work or whether you just claim this to be so for the sake of your argument. But really it doesn't matter; what matters is only that this is indeed what you claim.

Let me be honest here, until a few days ago I wasn't really aware of this. I never really bothered to look into how you throw stuff in from the boundary and such because I didn't think it matters. Now that you say it though, it makes of course perfect sense that you have to change something about the CFT if you have stuff reaching the boundary. So, well, thanks for the clarification. I'll have to think about what to make out of it.

I basically agree with what Tim said above about closed systems and such. Jim btw made the same point earlier.

In any case, I'd still appreciate a reference for how to encode a bulk-delta on the boundary. I can't get this to match with what I know, or believed to know, about expanding the bulk states around their boundary values and such. Best,

B.

Physphill,

QED, like all current QFTs, is an effective field theory. It's not valid up to arbitrarily high energies. So choose some really high energy cutoff. Then a Bohmian QFT will give the correct empirical results up to that cutoff. Yes, Lorentz invariance is only statistically recovered, and I actually agree with you that that's a point in favor of MWI over Bohmian mechanics. But the idea that Bohmian mechanics has been empirically refuted is not true. Your point about renormalization is a good one, and again I agree it's a point in favor of MWI over Bohmian mechanics. But any given finite loop order can be incorporated into Bohmian mechanics, so again it hasn't been empirically refuted. However, your statement that it can't handle pair production or uncertainties in particle number is just wrong. There are many different models, some of which incorporate fermions as fundamental entities and some which don't. The paper I linked to doesn't; the fermions are emergent entities of the theory (or maybe it would be better to say they are implicit entities in the theory). But it still gives the same empirical results for the process that we dub pair production.

You seem to be operating under the principle that there is nothing wrong with having probabilities interfere with each other. I don't know what to say, other than that's not how probability works. If you're going to claim that nothing non-local is happening, you have to be careful to separate out what parts of the mathematical apparatus represent your own imperfect knowledge and what parts represent actual things in the world independent of your own mind (because surely you're not going to claim that your own mind is having an effect on quantum particles). That's why it's useful to deal only with macroscopic objects, which I hope we can all agree exist independently of our own knowledge, since we don't agree about the wave function. And I hope we can also agree that our probabilities about the state of macroscopic objects don't interfere; they follow the usual calculus. The direction of the Stern-Gerlach magnets in the GHZ experiment is a macroscopic variable. Similarly, the experimenters can write down on a piece of paper the observed spin in that direction, so the marking on the paper is a macroscopic variable. So have three rooms with Stern-Gerlach magnets be separated by a light-year. Fix two of the rooms to have horizontal magnet directions. One minute before detection is about to take place, have the experimenter in the third room choose a direction. Now assume that you know everything there is to know about the current state of the universe except for whatever is contained in a sphere of radius two light-minutes surrounding the third experimenter (and by everything I mean absolutely everything, not just macroscopic knowledge). Give your probability distribution, based on that knowledge, for what will be written on the two pieces of paper in the other two rooms. If physics is local, your probability distributions shouldn't change in the next minute because you already know everything that could possibly influence it within that time. 
However, the fact of the matter is that the macroscopic direction of the Stern-Gerlach magnet and the marking on the piece of paper inside the two light-minute sphere of the third researcher will completely determine what the sum of the markings on the other two papers will be. Hence, physics is not local.
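The GHZ correlations being invoked here can be checked directly; below is a minimal numerical sketch (my own illustration, using standard Pauli-matrix conventions, not code from anyone in this thread) verifying that the GHZ state is a +1 eigenstate of XXX and a -1 eigenstate of XYY, YXY and YYX, which is exactly the combination no pre-assigned local values can reproduce.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    """Tensor product of three single-qubit operators."""
    return np.kron(np.kron(a, b), c)

# GHZ state (|000> + |111>)/sqrt(2) as a vector in the 8-dim Hilbert space
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

expval = lambda op: float(np.real(ghz.conj() @ op @ ghz))

xxx = expval(kron3(X, X, X))   # quantum prediction: +1
xyy = expval(kron3(X, Y, Y))   # quantum prediction: -1
yxy = expval(kron3(Y, X, Y))   # -1
yyx = expval(kron3(Y, Y, X))   # -1

# A local hidden-variable account assigns definite values x_i, y_i = +/-1
# to each particle in advance. Since y_i^2 = 1, the product
# (x1 y2 y3)(y1 x2 y3)(y1 y2 x3) equals x1 x2 x3, so three outcomes of -1
# would force x1 x2 x3 = -1, contradicting the +1 that QM gives for XXX.
print(round(xxx), round(xyy), round(yxy), round(yyx))  # 1 -1 -1 -1
```

Note that, as the comment says, no probabilities enter: all four products are eigenvalue statements, certain on every single run.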

Travis,

you wrote "... a Bohmian QFT will give the correct empirical results up to that cutoff. Yes, Lorentz invariance is only statistically recovered, and I actually agree with you that that's a point in favor of MWI over Bohmian mechanics. ...Your point about renormalization is a good one, and again I agree it's a point in favor of MWI over Bohmian mechanics. But any given finite loop order can be incorporated into Bohmian mechanics, so again it hasn't been empirically refuted."

Where is the paper that shows any version of Bohmian mechanics can reproduce running of couplings? I have never seen anything like that. The papers on Bohmian QFT I have seen are like the one you linked to, very vague and with zero calculations.

"You seem to be operating under the principle that there is nothing wrong with having probabilities interfere with each other. I don't know what to say, other than that's not how probability works."

It is how amplitudes work in QM. Why should I care about classical probabilities? My world is not classical.

"Now assume that you know everything there is to know about the current state of the universe ...Give your probability distribution, based on that knowledge, for what will be written on the two pieces of paper in the other two rooms. If physics is local, your probability distributions shouldn't change in the next minute because you already know everything that could possibly influence it within that time."

Obviously this cannot be explained by local classical physics. That is well known and has been known for many years. The correct reaction is to abandon classical physics.

There is no probability distribution (until you specify what you measure). Before that, there is a complex amplitude. That does not mean the world is non-local. It is not: all interactions are local. It only means the world is QM.

Bee,

you write "For all I can see I'd actually have to tell you what the Lagrangian is back in time so you can run the equations back in time. I am not sure whether you really believe that is how QFTs normally work or whether you just claim this to be so for the sake of your argument."

That is true in QFT (or just QM). It is standard QFT, nothing special to CFT or AdS/CFT. The QFT time-evolution operator is U = T(exp{i \int H(t) dt}). Of course you must know H(t) (or L(t)) to find U. Why is that surprising?
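The point that the time-ordered exponential requires knowing H(t) for all t can be made concrete numerically; here is a minimal sketch (my own toy two-level example, hbar = 1, with an arbitrary drive function, not from anyone in this thread). The ordered product of short-time exponentials approximates U = T exp(-i ∫ H(t) dt); each step is unitary, yet the final state depends on the whole history of H(t), not just on psi(0).

```python
import numpy as np
from scipy.linalg import expm

# Toy two-level system: H(t) = sz + f(t) * sx, hbar = 1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(psi0, f, T=2.0, steps=2000):
    """Approximate U = T exp(-i \\int H(t) dt) as an ordered product
    of short-time evolutions exp(-i H(t_k) dt)."""
    dt = T / steps
    psi = psi0.copy()
    for k in range(steps):
        H = sz + f(k * dt) * sx
        psi = expm(-1j * H * dt) @ psi
    return psi

psi0 = np.array([1, 0], dtype=complex)

# Two Hamiltonians that agree at t = 0 but differ later:
psi_a = evolve(psi0, lambda t: 0.0)            # constant H
psi_b = evolve(psi0, lambda t: np.sin(3 * t))  # time-dependent drive

# Each evolution is unitary (the norm is preserved)...
print(np.linalg.norm(psi_a), np.linalg.norm(psi_b))
# ...but the final states differ: knowing psi(0) is not enough,
# you must also know H(t) for all intermediate times.
print(np.abs(psi_a.conj() @ psi_b))
```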

Travis wrote:

Physphill,

QED, like all current QFT's, is an effective field theory.

. . .

However, the fact of the matter is that the macroscopic direction of the Stern-Gerlach magnet and the marking on the piece of paper inside the two light-minute sphere of the third researcher will completely determine what the sum of the markings on the other two papers will be. Hence, physics is not local.

Travis, that was a masterful post, and I hope Physphill reads and thinks about it carefully. He'll need to clearly understand the GHZ setup, however, so I hope that if he is not familiar with it he looks it up.

It occurs to me that we can use your example also to stress to Amos how nuts Superdeterminism is. In order to avoid the result that the setting + outcome of the third experimenter, a light-year away, determines the sum of the markings in the other two labs, Superdeterminism posits that the initial conditions in the far past (the causal past of all 3 measurement events) were finely tuned in just such a way as to make the 3rd experimenter choose just the right direction for his measurement, so as to allow the markings on the papers in the other 2 experiments to (a) be locally caused and (b) come out the way that QM says they must. And this has to work out whether the 3rd experimenter makes his choice by flipping a coin, or measuring the polarity of a cosmic-microwave photon, or by playing rock-paper-scissors with his assistant. This is not scientific, but not because "it makes doing science impossible", but rather for the same reason that Creationism plus the hypothesis that God deliberately made dinosaur fossils to challenge our faith is not scientific.

Carl3,

of course I am aware of GHZ.

Let me try one more time. What Bell/GHZ/PBR rule out is this: that the QM wavefunction represents a state of (uncertain) knowledge about classical hidden variables in a world with local physics. That is all. It does not rule out the possibility that the wavefunction represents a state of knowledge in some other kind of world. One such world has non-local classical physics (but other considerations do rule out such worlds).

Another world Bell/GHZ/PBR does not rule out is local but quantum (complex probability amplitudes instead of real probabilities, local interactions). The latter is what everyone who actually understands QM already believed in. That is why these theorems are not interesting.

“‘superdeterminism’ … Of course, such a purely abstract proposal cannot be refuted, but besides being insane, it too would undercut scientific method.” Tim in "What Bell Did".

If you believe in determinism then maybe only a possible (superluminal) inflation after the big bang is a little problem on the way to superdeterminism. But besides this it is a perfectly valid solution to EPR.

Of course, it is as nuts as Creationism.

As I said long ago “Superdeterminism is definitely a tenable CHOICE ;-)”, but not a choice that a super depressed puppet on strings can make. Again, we need randomness, otherwise no free will.

Isn't there also some randomness, or probability distribution, in Bohmian mechanics? If so, where does the distribution come from? This is a serious question to Carl3, Travis or Tim. I simply do not know it.

“…will completely determine what the sum of the markings on the other two papers will be. Hence, physics is not local.”

To avoid any chance of confusion, I’d replace that last sentence with something like “What I just described is an example of what I mean when I say quantum entanglement is ‘not local’, while acknowledging that it doesn’t violate ‘locality’ in the sense of ‘no superluminal signaling, spacelike observables commute’, etc.” In other words, your last sentence isn’t really a conclusion, it’s a definition.

“It occurs to me that we can use your example also to stress to Amos how nuts Superdeterminism is... And this has to work out whether the 3rd experimenter makes his choice by flipping a coin, or measuring the polarity of cosmic-microwave photon...”

I think any attempt to give a “mechanistic” account of how quantum correlations are produced is going to seem fantastically implausible, i.e., “nuts”. For example, imagine that some kind of hidden energyless information propagates superluminally from a coin toss at event A to the measurement of the polarity of a cosmic microwave photon at event B, light-years away, and somehow affects the outcome (without exerting any action) in just the right way to cause the correlation… and don’t forget that B happens before A in some frames, so the information should really be going the opposite way (photon measurement to coin toss). The events are causally symmetrical at the ends of a spacelike interval, so there is no Lorentz invariant way of deciding which way the indescribable “influence” is supposed to indescribably “propagate” (unless we impose some prior directionality, such as saying that the experimenter at A went to Harvard and the experimenter at B went to Yale, so the information flows from A to B). This is barely intelligible, let alone plausible.

The only point of mentioning superdeterminism (as Bell did), however implausible, is to acknowledge that one of the assumptions in his analysis is independence of the measurement “choices”, which is not a priori guaranteed by locality (in the sense of spacelike observables commuting, etc.), because of the common causal past. Yes, superdeterminism seems fantastically implausible – but so does every other ‘mechanistic’ account, which is why I advocate just saying the wave functions are not separable, and acknowledging the different meanings of the word “local”, and not telling people that quantum entanglement proves that “something is going faster than light”.

“This is not scientific, but not because ‘it makes doing science impossible’, but rather for the same reason that Creationism plus the hypothesis that God deliberately made dinosaur fossils to challenge our faith is not scientific.”

If I understand your point, you’re saying that there can be many different interpretations (e.g., creationism, many worlds, etc.) of the same set of observational facts (assuming we agree on what those are), so interpretations are not scientific. I agree that it’s easy for interpretations to slide into the non-scientific realm. On the other hand, I think interpretations are really just attempts to rationalize the facts, and the effort to rationalize our knowledge has sometimes led to real advances in science (and at other times has led to nonsense). I think science makes use of both empiricism and rationalism. I don’t think it’s inherently unscientific to consider the timelike and lightlike paths from the common causal past along which information might flow, to see if it’s possible to account for the apparent lack of statistical independence between spacelike events in EPR experiments. Most people conclude that it’s difficult to categorically rule it out, but without grasping the details it doesn’t seem much different than just saying “the correlations exist”, i.e., the wave functions are not separable.

Carl,

Yeah, totally agree. I wrote a little parable a while back that uses a similar setup to stress how insane superdeterminism is:

https://www.facebook.com/tim.maudlin/posts/10155717609538398

Just search the post for "Travis Myers's Parable".

I screwed up the scenario here though, see below.

Physphill,

I agree that Bohmian mechanics can't reproduce the running of the couplings except by the usual methods, and furthermore agree that this is a point against the theory. But if the couplings are given for a given energy scale, it can reproduce (in principle, though not in practice) the running of the couplings at all energies below that.

For the GHZ scenario, you say:

"There is no probability distribution (until you specify what you measure). Before that, there is a complex amplitude."

I did specify what you're measuring. You're measuring the sum of what's written on the pieces of paper in the other two rooms. That being said, I screwed up the scenario (the scenario as described would allow FTL communication between the experimenters). My attempt to fix it just made it too complicated.

In any case, since you deny hidden variables, the basic EPR scenario shows the same nonlocality (which was of course the whole point of that paper). If you know absolutely everything there is to know about the universe except for a small sphere around one of the experimenters, you only have a 50% chance of predicting what the other experimenter writes down. You also know everything there is to know about the entire past of the universe too. The only thing you don't know is whatever is contained in the small region of spacetime consisting of a sphere of radius two light-minutes around one of the experimenters starting one minute before he writes his result on his piece of paper. A person who knew this tiny extra bit of extremely space-like-separated knowledge would be able to predict what the other experimenter writes down, and you wouldn't be able to even though you know the entire rest of the history of the universe and all of the laws of physics. That's a strange definition of locality.

It's quite a stretch to change how probability itself works in order to accommodate quantum mechanics. Why not go all the way, as some have, and change how logic works in order to accommodate quantum mechanics?

Bee,

First, let me apologize for sounding exasperated. I am frustrated that we are talking past each other, and at my failure to convey a particularly simple and beautiful story. The beautiful story is basically all contained in Witten's original AdS/CFT paper.

Think of AdS as a box. Like with any physical system in a box, we would like to be able to inject energy into the box, let it evolve in time, and then analyze the result. The way we do this in GR is to allow the boundary conditions for, say, a scalar field, to change with time. If we change the boundary conditions in some finite time interval we can inject energy into the system during that time interval, and after that we can revert to the standard (maximally symmetric) boundary conditions. This can all be made as rigorous as you please. Similarly, in QFT the way we compute correlation functions is that we add time-dependent sources to the Lagrangian. These inject energy/particles into the system, and then we study the subsequent time evolution. Note that this does no violence to unitary time evolution since all we are doing is making the Hamiltonian time dependent -- it is still Hermitian. A *beautiful* fact about AdS/CFT, explained in detail in Witten's paper, is that there is a very simple relation between the AdS boundary conditions and the CFT sources. This is a big part of what makes AdS/CFT so powerful. For example, it means we can relate CFT correlation functions to the response of the bulk to an infinitesimal change in the boundary conditions. Once again, I cannot understand why you are casting aspersions on the fact that we add time-dependent sources to the CFT action -- is this not done in every single modern QFT textbook?
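The relation being described here, time-dependent sources in the CFT generating correlation functions and matching AdS boundary conditions, is the standard GKP-Witten dictionary; schematically (textbook notation, not a quote from anyone in this thread):

```latex
% Correlation functions from sources coupled to an operator O(x):
Z_{\mathrm{CFT}}[J]
  = \Big\langle T \exp\Big( i \int d^{d}x \, J(x)\,\mathcal{O}(x) \Big) \Big\rangle,
\qquad
\langle \mathcal{O}(x_1)\cdots\mathcal{O}(x_n) \rangle
  = (-i)^{n}\,
    \frac{\delta^{n} Z_{\mathrm{CFT}}[J]}{\delta J(x_1)\cdots\delta J(x_n)}
    \bigg|_{J=0}.

% The GKP-Witten relation: the CFT source J is the boundary value
% of the bulk field phi, so boundary conditions and sources are identified:
Z_{\mathrm{CFT}}[J]
  = Z_{\mathrm{grav}}\big[\,\phi\big|_{\partial\mathrm{AdS}} \to J\,\big].
```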

If we were ever able to produce black holes in the lab we would proceed as above. We would inject energy into our lab, wait awhile, and then measure what comes out. The mathematical description of the injection and measurement is via source terms added to the action.

I did several times point you to a reference on how to map a bulk delta function state to the CFT: google "Harlow TASI lectures" and you will find a nice pedagogical discussion with references. If you have any questions I will be happy to answer them, as this is actually quite straightforward.

"For all I can see I'd actually have to tell you what the Lagrangian is back in time so you can run the equations back in time. I am not sure whether you really believe that is how QFTs normally work or whether you just claim this to be so for the sake of your argument."

I am sorry, but are you claiming the converse: that you *don't* need to know the Lagrangian for all times to run the equations forward and backward in time? If so, you are mistaken.

Travis

"Tim's point is a good one though... does the reflective boundary condition refer only to "electromagnetically reflective" or "gravitationally reflective"?"

Trust me, Tim's point is not a "good one". If you are interested in the true story rather than misinformation from someone who knows nothing about the subject, I direct you to Witten's original paper where he gives a precise and rigorous mathematical statement of the fact that AdS is like a box with "gravitationally reflective" properties, based on the work of the mathematicians Fefferman and Graham. AdS gravity, with Fefferman-Graham boundary conditions, has a conserved symplectic form, which is simply a precise way of stating that it is a self-consistent Hamiltonian system. Who to believe: Tim or Edward Witten? It's a tossup...

Physphill,

As I (and Tim and Jim) have pointed out a few times already, normally you can evolve the system forward from an initial state. The initial state gives you the full information. You seem to actually insist on simply denying this. Of course you can integrate out part of the system and get a limited part of the initial state that no longer contains the full information, but that's irrelevant to the causal structure.

bhg,

Thanks for the reference, I must have missed it earlier (the issue with the comment notifications still hasn't been resolved).

Regarding the forward and backward evolution. As I said earlier, if you want to maintain that all functions in your laws of nature can have an in-principle-unpredictable time-dependence you are throwing out science in its entirety. You can't so much as solve a harmonic oscillator if you conjecture the constants have an unknown time-dependence. The only way you normally get such a time-dependence is by an interaction with another system, i.e. you must have a cause for that time-dependence (and that cause better be within the universe). Of course you can integrate this out and effectively describe it by a time-dependence in the coupling (say, if you have a driven oscillator). But if there's no external system, the evolution is fully determined by the initial state.

In any case, I'll see that I do some reading.

Best,

B.

bhg,

Tim probably meant that if you put a box in Minkowski space it will not reflect gravitational waves, in contrast to AdS.

Bee,

"Regarding the forward and backward evolution. As I said earlier, if you want to maintain that all functions in your laws of nature can have an in-principle-unpredictable time-dependence you are throwing out science in its entirety."

Again, you are missing the point. As a mathematical fact, of course they can in principle have some as yet unknown time dependence -- do you have a counterargument for this? Is allowing for the possibility that maybe the fine structure constant evolves with time "throwing out science"? The whole point of science is that we frequently have to make assumptions that are based not on mathematical necessity but on choosing the simplest or "nicest" set of equations that fit the data. Science is not math. It is wise to accept that we have to make these kinds of assumptions rather than pretending they don't exist, particularly since sometimes these assumptions turn out to be false.

A General Note on the Rhetorical Strategies of People Who Won't Acknowledge That They Are Just Blowing Smoke

What do you do if you have taken a public stance as someone who deeply understands an issue, but it is becoming obvious to everyone that you are terribly confused, and you are too egocentric to admit that you are just repeating things that you don't really understand? We are being treated to a wonderful display of the fog-production and attempts at bullying that are used. I particularly want to signpost this for philosophers, who can all too often be intimidated by the tactics employed by physicists who are so far out of their depth that they could not touch bottom if you handed them an anchor.

Travis mentions a particular fact about AdS. This fact is not in dispute in the least: AdS is not globally hyperbolic as it has a timelike boundary. So if you want to even begin to discuss predictability and retrodictability in AdS you have to address the issue of the boundary condition. Anonymous Black Hole Guy has taken to analogizing AdS as "gravity in a box". This appears to be a very serious analogy in his mind, because he apparently thinks that all sorts of physical characteristics of AdS can just obviously be inferred from familiar cases of "physics in a box", i.e. an electromagnetic system confined by an infinite square well. In fact, the discreteness of the spectrum of the Hamiltonian is supposed to be a main example of such an inference. It is the supposed discreteness and low degeneracy of the spectrum in AdS that has driven most of Anonymous Black Hole Guy's arguments here.

So let's ask some really simple questions to see if Anonymous Black Hole Guy really has any clue about what he is talking about or if he can really answer some of the obvious questions about this supposed analogy of gravity in AdS and electromagnetism confined by an infinite square well.

The confining of a free particle to an infinite square well does indeed change a continuous spectrum to a discrete one. How does it do that? By enforcing Dirichlet boundary conditions there. In particular, it enforces that the wavefunction be zero at the boundaries. From that, it is easy to see why only particular energy eigenstates are going to be allowed: the size of the box must be a multiple of half the wavelength of the solution. But are we using Dirichlet boundary conditions in AdS? Only if we are insane or being completely non-physical. So whatever the supposed analogy to "in a box" for AdS is supposed to be, it is not a strong enough analogy to account for the supposed discreteness of the spectrum of AdS. But it seems as though Anonymous Black Hole Guy does not understand any of this in any other way than through this obviously flawed analogy. For if he did, he could have explained to Travis what the relevant boundary conditions are. It's a simple question. To say "go read Witten" rather than "This is the answer" shows beyond all doubt that Anonymous Black Hole Guy is blowing smoke. He does not have a clue. He is just repeating the same BS that his little in-group repeats to each other, none of them having a vague notion what they are talking about.
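The Dirichlet mechanism described above is easy to exhibit numerically; here is a minimal sketch (my own illustration, units hbar = m = 1, grid size chosen arbitrarily): diagonalizing the free Hamiltonian on an interval with the wavefunction forced to zero at the walls reproduces the discrete spectrum E_n = n^2 pi^2 / (2 L^2).

```python
import numpy as np

# Free particle on [0, L] with Dirichlet walls (psi = 0 at both ends),
# hbar = m = 1. Finite-difference Hamiltonian on interior grid points only;
# dropping the boundary points is exactly what enforces Dirichlet conditions.
L = 1.0
N = 800                                 # interior grid points (resolution)
h = L / (N + 1)
main = np.full(N, 1.0 / h**2)           # diagonal of H = -(1/2) d^2/dx^2
off = np.full(N - 1, -0.5 / h**2)       # off-diagonal
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]           # three lowest eigenvalues
exact = np.array([n**2 * np.pi**2 / (2 * L**2) for n in (1, 2, 3)])

# A discrete, quadratically spaced spectrum (ratios 1 : 4 : 9):
print(E)
print(exact)
```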

Con't.

bhg,

Yes, if you allow for an *undetermined* time-dependence in the constants, you are throwing out science. It is correct (as I said earlier) that we cannot rule out that the laws of nature will change tomorrow in an unpredictable way, but if you want to pull that argument you can't do any science.

Con't.

There are other straightforward reasons to see that Anonymous Black Hole Guy is just pulling things out of his posterior. The AdS theory is supposed to be a theory of gravity, and to reproduce GR at least in the appropriate limit. But the idea that the spectrum of the Hamiltonian is discrete and (nearly) non-degenerate contradicts basic features of GR. I have several times mentioned the gluing theorems, and Sabine, seeing the point, has repeated this. Has Anonymous Black Hole Guy ever even pretended to respond to this observation? No. Instead Anonymous Black Hole Guy insisted that all discussions take place under the assumption that AdS/CFT—which has not even been sharply formulated, much less proven—be unquestioned. These issues about the boundary conditions and about the gluing theorems raise powerful arguments questioning AdS/CFT itself. And instead of having a single word to say in response, all Anonymous Black Hole Guy can do is keep saying that some people did some calculations in some regime and some answers matched. That's it. That's how pitiful this situation is. And now he waves Ed Witten's name around like a magic talisman.

So is there any response to the powerful argument "Who should you trust, Ed Witten or Tim?". How about this: don't blindly trust anyone. Instead, if someone makes a claim, demand some argument or evidence to believe that claim, and then evaluate the strength of that argument or evidence. (As a side note: when I corrected Feynman's account of the Twins Paradox, you could have asked: who should you trust, Feynman or Maudlin? However you are inclined to answer that question—which, as I said, is a dumb question—in fact in that case Feynman was wrong and Maudlin was right. Similarly for the error about rotation in GR that Weinberg made, which I keep mentioning. If you are surprised that a philosopher can be correcting Nobel Laureate physicists on basic physical questions, you damn well ought to be. This should not be happening. The fact that it does proves how ignorant and incompetent you can be on basic foundational and conceptual questions and still do excellent work in physics. And how a physics training is systematically producing people ignorant of and incompetent in foundations. How could it be otherwise, when their professors are ignorant and incompetent?)

How about Anonymous Physphil? If possible, his obfuscation is even more pitiful than Anonymous Black Hole Guy's. All the mysteries go away if you just replace probabilities with probability amplitudes. Then—Hey Presto!—everything is local again, violations of Bell's inequality mean nothing, etc., etc. The fact that the GHZ example requires no mention of probabilities or statistics? A mere trifle! The answer is so obvious that it does not even have to be stated.

No one in their right mind could take this kind of stuff seriously. But this is the tradecraft in modern foundations of physics as practiced by the typical physicist. And to such a group, maybe the catchphrase is appropriate: shut up and calculate. You are wasting the rest of our time.

Tim,

The boundary condition is that there is a CFT on the boundary. Or not, if you throw something in. The difference is that in AdS there is no outside of the box. In Minkowski space there is. So, in Minkowski space if you have a box the boundary condition is deterministic, at least in principle: it comes from the outside of the box. In AdS it just isn't, because it's not globally hyperbolic. You really have to distribute the information over all times at the boundary.

Now my problem here is that if you do this you cannot, of course, evolve any state in the boundary CFT back without knowing the boundary conditions at all times (which by construction are unknowable at any one moment in time). So, if I merely give you a state in the boundary CFT at time t_0, you have no idea whatsoever where it might have come from. Not unless I tell you all that I have thrown in from the boundary. But if I tell you what I've thrown in, it's not surprising you know what I've thrown in.

How does this solve the black hole information loss problem, then? I don't get it. Does it make any sense to you?

Carl3,

“But I am also puzzled about why you think that in QM there is no violation of Einstein locality (defined as faster-than-light transfer of energy/momentum). Suppose I send electrons, one at a time, through some appropriate beam-splitter that directs them to opposite ends of my lab where I have detector screens in place. The QM story is that the electron is not fully on one side or the other until detected. So if we ask "where was the mass/energy of the electron prior to detection?", the honest answer would have to be: on both sides of the lab, equally, in the instants just prior to detection. (The mass/energy expectation, you might prefer to say, is equal on both sides of the lab just before impact. Massage the language however you like.) But once the detection happens, instantaneously we must say that all of the mass/energy is on one side. That sounds an awful lot like an instantaneous transfer of mass/energy from one side of the lab to the other, if you ask me.”

You are correct about the non-locality, but it happens not with respect to the “transfer” of the electron (or photon). Here you have to be careful.

[Because otherwise you could also state that in the HBT effect you and the star are connected instantaneously, which would kill the locality of SR. Admittedly, the HBT effect is a tiny bit different (you just take into account that photons are bosons) – this was just meant to exaggerate the “non-local transfer”.]

The difference between a beam splitter (or Mach-Zehnder device or even a double slit or any other optical device, like my glasses, that influences probability amplitudes) and EPR is that in a beam splitter source and sinks are connected by a path (here more or less two propagators almost on mass shell) and in EPR there is no particle “travelling” between A and B. In EPR, A and B are just part of an entangled state, once entangled and then brought apart. (I was just about to use the word separated, but this is already loaded with the meaning of locality ;-)

Thus, in the beam splitter when the measurement acts (here we should go into detail about the measurement problem…) the result is that the electron (energy/momentum) is “being transferred” with the speed of light (on mass shell) between source and one of the sinks.

Of course, and here you are correct, if it shows up here or there and if here and there are lightyears away, then you know (and this is not meant to be epistemic ;-) here that it is not there in our single reality and this is instantaneous (non-local).

Thus, the EPR non-locality is analog (just wanted to say correlates ;-) to the beam splitter´s here or there, but not the transfer from source to sink.

I guess in Bohmian mechanics this has something to do with the superluminal pilot wave… In QM this is just the state being a superposition of 2 paths and then reduced with the QM randomness, nicely unitarily calculated.

If this “superluminal transfer” worked, I would put on my glasses (which actually puts many paths into superposition), predict the future and get rich. A future that in an exclusively deterministic world would already be fixed … I am getting depressed again.

Bee,

psi(t) = U(t) psi(0). So no, the initial state does not give "full information". You must also know U(t).

I think in AdS/CFT, if Bee threw in a book from the boundary of AdS aimed at the black hole that is forming somewhere in the interior, there are conceptually two copies of the book. One is in the CFT and the other is in the AdS. The claim is that (the information about) the book in the CFT is in correspondence with the one in the AdS, and secondly, the fate of the book in the CFT is as knowable as anything in quantum field theory, while that in the AdS is subject to the black hole information mystery. Since the book is not lost in the CFT, it is not lost in the AdS black hole; we just do not know how exactly that works in AdS.

Physphil,

Yes, exactly. Your point being what?

I think that the issues of interpretation of quantum mechanics are entangled up with issues of language; perhaps necessarily so, because language itself is classical, not quantum.

You wrote that normally you can evolve forward and that the initial state gives full information. That is not correct. You must know the initial state and also you must know U(t) for all t.

Bee,

"Yes, if you allow for an *undetermined* time-dependence in the constants, you are throwing out science. It is correct (as I said earlier) that we cannot rule out that the laws of nature will change tomorrow in an unpredictable way, but if you want to pull that argument you can't do any science."

Come on. You are saying that we should not allow for the possibility that, say, the fine structure constant is actually slowly varying in time, because if we do so we are "throwing out science"? You have it exactly backwards, and I am sure you don't really believe this. As we all know, the proper strategy is to assume the simplest set of equations consistent with the observed facts, but be alert to the possible failure of such an assumption and indeed think of ways to test it.

To put it concretely, suppose someone came to you with a new experimental method to check the time dependence of the fine structure constant to far greater accuracy than currently possible. Would you: a) say "no, you mustn't allow for an unknown time dependence or you are throwing out science", or b) say "great, let's see if our assumption of no time dependence fails in this new regime"?

I find it hard to believe you are making this radical a claim, and would prefer to believe that we are simply miscommunicating. "Throwing out science" means dogmatically insisting that all "constants" must be truly constants, and refusing to contemplate otherwise.

Bee,

Thinking about this a bit more, I think part of the issue is that you are misconstruing what BH info loss means. In general, to retrodict a QM state I of course need to know not just the state now but also the equations of motion that govern it. Obviously, I always need to make assumptions about what those equations are, or else I can't do anything. On the other hand, the BH info loss scenario is the statement that *even if I know everything about the equations of motion and boundary conditions at infinity*, if I am only given the state of the Hawking radiation then I will still be unable to retrodict the state. AdS/CFT shows that, on the contrary, you can retrodict the state given this information. So the only meaningful version of information loss is ruled out in AdS/CFT. The issue of "uncertainty" about boundary conditions is a side issue that has no bearing on black hole physics of the sort we are discussing.

If you are still, for reasons I do not comprehend, bothered by the freedom to choose boundary conditions, then you can just demand that conformal symmetry is preserved, in which case there is a unique CFT action and choice of boundary conditions, and all ambiguity is removed. This is no different than saying that you will disallow any spacetime dependence of coupling constants by demanding exact translation invariance -- they are both restrictions based on symmetry.

Physphill,

In light of your statement:

"Let me try one more time. What Bell/GHZ/PBR rule out is this: that QM wavefunction cannot represent a state of (uncertain) knowledge about classical hidden variables in a world with local physics. That is all. It does not rule out possibility that wavefunction represents a state of knowledge in some other kind of world."

What part of Bell's theorem do you consider to be the assumption that the hidden variables are classical? They can be matrices, complex numbers, the outputs of hypercomputable functions or NP-complete oracles, anything. Whatever they are, so long as they don't depend on the polarizer settings (i.e. so long as the world is local), the observed correlations cannot occur. So what else could the state of knowledge be about?

bhg,

You still misunderstand the point I make about the variation of the constants. You keep trying to answer it by using examples in which the time-variation is determined. This is not what I am talking about. You earlier tried to accuse me of using an argument from beauty in keeping the constants constant. That is not so. You can of course make them time-dependent, but you will only have a theory *if you have a law for that time-dependence*. My point is that if you do not have a law for the time-dependence, you have no theory by which you can make predictions. You do not have a law - you can't have one. The very structure of AdS doesn't allow it.

Look, suppose I give you the eom for a harmonic oscillator but with time-dependent constants. I then tell you that the time-dependence of the constants could be anything. Can you make a prediction? No, you can't. It's not possible. It's a totally useless theory.
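Bee's oscillator example is easy to make concrete. Here is a minimal numerical sketch (the specific choices of omega(t) are illustrative assumptions, not anything from the thread): two time-dependences that agree with all "data" up to t = 5 give different predictions afterwards, starting from identical initial conditions.

```python
# A sketch of the point above: x'' = -omega(t)^2 x with the same initial
# data, but two different laws for omega(t) that agree for t < 5.
import numpy as np
from scipy.integrate import solve_ivp

def x_at(omega, t_end=10.0, x0=1.0, v0=0.0):
    # integrate x'' = -omega(t)^2 x and return x(t_end)
    rhs = lambda t, y: [y[1], -omega(t) ** 2 * y[0]]
    sol = solve_ivp(rhs, (0.0, t_end), [x0, v0], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

omega_const = lambda t: 1.0                                        # constant law
omega_drift = lambda t: 1.0 if t < 5.0 else 1.0 + 0.3 * (t - 5.0)  # drifting law

x_a = x_at(omega_const)
x_b = x_at(omega_drift)
print(abs(x_a - x_b) > 1e-3)  # True: same past, different futures
```

Any number of omega(t) fit the "observed" region equally well, which is exactly why an undetermined time-dependence yields no prediction.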

As I said earlier, you can of course integrate out some part of the system and get an effective time-dependence for a boundary-condition, but that time-dependence will still be deterministic (in principle).

Now you say you could just go and assume the boundary to obey conformal symmetry and thus avoid the ambiguity. Ok, fine. But if I understand correctly, in this case you have no in- and out-states. So where's the S-matrix? Best,

B.

Physphil,

Well, if you really think this is the case for all theories we currently use then no one would be able to make any prediction for anything. If you want to make a prediction, you need a law for the time-evolution of U(t) and of course the theories we use have such a law.

Bee,

Actually, I am pretty sure I do understand what you are claiming, but I disagree with it. It is a simple fact that on the purely mathematical level the "constants" appearing in the SM Lagrangian can in principle be spacetime dependent. So why does this not render the theory useless, as you say? Because we do measurements over some period of time and observe that the "constants" seem to indeed be constant up to experimental uncertainty. We then make the reasonable assumption that this will continue to hold in the future. I fully endorse this approach, but appreciate it for what it is: a series of reasonable guesses that might turn out to be wrong. Hence I think it's useful to continue doing experiments to test for spacetime dependence, because we really don't know if that's the case or not, and I vehemently disagree that this approach is "throwing out science".

Now imagine you are an observer in AdS. You do a bunch of experiments and notice that conformal symmetry seems to be exact up to experimental accuracy, and the boundary conditions are compatible with this. Just as in the last paragraph, it is perfectly sensible for the observer to make the assumption that this will continue to hold unless contrary evidence presents itself. Again, this is not "throwing out science", but is rather a good example of how science is actually done.

Why can't you admit that at bottom you really don't *know* if the fine structure constant is time-dependent? I can accept this, because I am perfectly happy to make provisional assumptions based on incomplete knowledge and inference.

"But if I understand correctly what you are saying in this case you have no in- and out-states. So where's the S-matrix?"

You lost me. There is of course no S-matrix in AdS. But there are initial and final states, and unitary evolution between them. As far as I can tell, this setup satisfies all of your desiderata, and I don't know what more you could ask for.

I hope you read my comment about what the BH info paradox really means, as I think this will clear up some confusion.

Arun

I am at this moment at a conference on foundations of physics with both physicists and philosophers. Yesterday, there was a certain amount of talk about how to determine to what extent a theory is "classical" or "quantum". (This is not the particular example that was under discussion, but for illustrative purposes consider the question: Is Bohmian Mechanics a "classical" theory or a "quantum" theory?) It became apparent that although people would readily make some judgments of this form, there was no agreement whatever—and indeed not even any clear proposed definition—about what the "classical" vs. "quantum" distinction even means. By one proposal ("Classical physics is what is in Goldstein's book *Classical Mechanics*"), Maxwellian electrodynamics is not classical physics. Similarly by the definition that the fundamental dynamical equation of a classical theory is F = ma. But by another definition (a quantum theory is a theory that uses Planck's constant and a classical theory is any non-quantum theory), not only is Maxwell's theory classical, so is General Relativity.

So before trying to diagnose whether "language itself is classical rather than quantum", try to formulate what the adjectives "classical" and "quantum" mean. I think you will find the exercise instructive.

black hole guy,

Could you please point to your comment “what the BH info paradox really means”, because it has been a lot of comments so far.

Quantum is the physics with probability amplitudes and non-commuting physical quantities. Perhaps that is too naive?

Travis,

"Whatever they are, so long as they don't depend on the polarizer settings (i.e. so long as the world is local), the observed correlations cannot occur."

False. They could be complex probability amplitudes, where the results are determined by the Born rule.

Bee,

"you need a law for the time-evolution of U(t)"

Good, I am glad you agree with that. Now, U(t) follows from the Hamiltonian H(t). If I give you H(t), are you still claiming that the theory is not predictive or not science?
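For concreteness, here is a minimal sketch of what "U(t) follows from H(t)" means in practice (the qubit Hamiltonian below is an arbitrary illustrative choice, not anything from the thread): the evolution operator is built as a time-ordered product of short-time exponentials, and it is unitary no matter what H(t) is.

```python
# Build U = T exp(-i \int_0^T H(t) dt) for a qubit with time-dependent H,
# then check unitarity. The specific H(t) is an assumption for illustration.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    return sz + 0.5 * np.sin(t) * sx   # some given law for H(t)

def U(t_end, steps=2000):
    dt = t_end / steps
    u = np.eye(2, dtype=complex)
    for k in range(steps):
        # time-ordered product of short-step exponentials (midpoint rule)
        u = expm(-1j * H((k + 0.5) * dt) * dt) @ u
    return u

u = U(5.0)
print(np.allclose(u.conj().T @ u, np.eye(2)))  # True: evolution is unitary
```

Given the law for H(t) (plus the initial state) the state at any later or earlier time is fixed; without that law, U(t) is simply undefined.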

Just like with quantum phenomena, our language has a problem with relativity. Looking at the whole of a space time, our verb tenses of past, present, future tend to mislead us. In particular, the present tense.

To quote Tim,

“What about the phrase “X still exists”? Relativity makes a dent in our classical intuitions here. If the only three options we have are “X no longer exists”, “X exists at present” and “X has yet to exist”, and if the first means “The entirety of X is in or on my past light cone” and the last (by parallel reasoning) means “The entirety of X lies in or on my future light cone”, then “X exists at present” must mean “Some part of X is space-like separated from me”. This does a little bit of violence to our usual intuitions, since it means that “what exists at present” has temporal thickness.”

bhg,

First let me say that I very much like your phenomenological approach. If you were serious about it, though, why would you work with string theory to begin with? You'd just take perturbative qg until evidence speaks against it. So I conclude you are not consistent about your philosophy.

Second, as you correctly say in your comment, you assume that the constants are constant. You seem to say that you do this after making an observation but of course you need this assumption already to make a prediction that you can compare with observation to begin with. Be that as it may, if you want to even be able to compare your theory with observation, you need to make such an assumption.

Now my point is that if you postulate a theory in which the time-dependence could be whatever, that theory isn't predictive, hence not science.

About the in and out states. You want to carry over the AdS-calculation to a box in Minkowski-space, isn't this what you are saying? The scenario that you laid out earlier (the usual scenario) is that you throw stuff into the box, let it form a black hole, wait for it to evaporate, and measure what comes out. For all I can tell, if you want to do this, the CFT-evolution on the boundary won't do. And if it doesn't, then it looks like to retrodict the initial state you need to know the boundary values at all times, including the initial state (on the boundary). At least that's what your earlier comments seem to say. Best,

B.

Physphil,

"If I give you H(t), are you still claiming that theory is not predictive or not science?"

If you give me H(t) then you cannot claim that your theory predicts any information that is in H(t). Or let me put this differently. You sit at t_e and you have the state at t_e and I ask you what was the state at t_i < t_e; then where do you get H(t) from, if I don't tell you the state at t_i already?

Bee,

What you are describing bears no relation to how science is actually done, because you speak in an abstract language of "postulating" theories. In pretty much every theory ever considered, there are an infinite number of unknown parameters. E.g. in Einstein's theory of GR there are an infinite number of higher derivative terms with a priori unknown coefficients. We then use observation and guesses based on symmetry etc. to narrow down the theory. An example of where you are wrong is your statement: "Second, as you correctly say in your comment, you assume that the constants are constant. You seem to say that you do this after making an observation but of course you need this assumption already to make a prediction that you can compare with observation to begin with." No: one can of course do computations in which the fine structure constant, say, has some time dependence and then see if that fits the data better. You are making the strange claim that you need to "postulate" the value of all parameters once and for all and are not allowed to do computations with free parameters and then fit the results to experiment. It is definitely not "throwing out science" to study a possible time variation of the fine structure constant.

Anyway, this discussion has very little to do with black holes. Back to the main event:

"You want to carry over the AdS-calculation to a box in Minkowski-space, isn't this what you are saying?"

No, I didn't say anything about Minkowski space (why is that relevant? Remember we definitely do not live in an asymptotically Minkowski space). Let's just consider AdS on its own terms so as to not get sidetracked. We demand conformal symmetry is respected, which fixes the time evolution. We start with some initial state (prepared in some way that is not relevant) consisting of collapsing dust. We then use the CFT to uniquely run the state forward and backward in time as we please. Unitarity is preserved and the information comes out in the Hawking radiation. Tim's scenario is shown to be wrong. Now, doesn't this accomplish everything you want?

Reimond,

"could you please point to your comment “what the BH info paradox really means”, because it has been a lot of comments so far."

I will summarize it here for convenience. First let's make sure we understand what Hawking actually claimed, since that tends to get mangled. It's not only Tim's paper that does this -- many physicists also get this wrong. The following quote from Hawking concisely captures his position:

"The situation changed, however, when it was realized that black holes evaporate by emitting particles with a thermal spectrum [1]. Suppose that one started from an initial pure quantum state which could be described in terms of a complete set of commuting observables on a space-like surface in the past. The same quantum state could also be described in terms of observables in the future only in this case one had to have two sets of observables, observables at infinity which described the outgoing particles and observables inside the black hole which described what fell through the event horizon. The system would still be in a pure quantum state but an observer at infinity could measure only part of the state; he could not even in principle measure what fell into the hole. Such an observer would have to describe his observation by a mixed state which was obtained by summing with equal probability over all the possible black hole states. One could still claim that the system was in a pure quantum state though this would be rather metaphysical because it could be measured only by an angel and not by a human observer."

Tim's paper (and countless other similar ones) contains nothing that is not in this paragraph. To flesh this out some more, suppose you knew all the laws of physics and all the boundary conditions and you have arbitrarily powerful computers. We ask: given the Hawking radiation resulting from black hole evaporation, can you retrodict the initial state that formed the black hole? According to Hawking, as described above, the answer is no. In AdS/CFT the answer is yes. The "paradox" is that the "yes" answer coming from AdS/CFT is rather indirect, and at face value seems to conflict with nominally trustworthy approximations in semiclassical gravity. So two different, but apparently trustworthy, computations give a different answer --- paradox!

Physphil,

"False. They could be complex probability amplitudes, where the results are determined by Born rule."

The Born rule probability for one of the measurements depends on the polarizer setting and outcome of the other measurement, or else you won't get the observed correlation (even though it's an interesting fact that you get to choose which measurement depends on the other without changing the results. But you do have to choose one or the other; you can't generate the correct correlations using only the local Born rule). So it doesn't satisfy the restriction that it be independent of the space-like separated polarizer setting. Any kind of variable at all, "hidden" or not, whose value is set by any process which doesn't depend on the value of the distant polarizer setting will not give the right answer.
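The standard numbers behind Travis's argument can be checked in a few lines (this is the textbook CHSH setup, not anything specific to this thread): the singlet-state correlation E(a,b) = -cos(a-b), evaluated at the usual settings, gives |S| = 2√2, above the bound of 2 that any setting-independent mechanism must satisfy.

```python
# CHSH with the quantum singlet correlation E(a,b) = -cos(a-b):
# the combination |S| reaches 2*sqrt(2), exceeding the local bound 2.
import numpy as np

def E(a, b):
    return -np.cos(a - b)   # quantum prediction for a spin-1/2 singlet

a, a2 = 0.0, np.pi / 2          # Alice's two settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the local bound of 2
```
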

Bee,

"If you give me H(t) then you cannot claim that your theory predicts any information that is in H(t)."

Of course, I agree.

"Or let me put this differently. You sit at t_e and you have the state at t_e and I ask you what was the state at t_i<t_e, then were do you get H(t) from if I don't tell you the state at t_i already?"

You cannot.

What you seem to miss is that H independent of t is just one specific example of H(t). So, all these statements and questions apply equally well to H independent of t as they do to non-trivial H(t). In all cases, there are some H(t) that are consistent with experiment and some that are not. Usually, we take the "simplest" one that works and then try to test it.

For the BH info paradox in AdS, the question is whether evolution is unitary according to the H(t) we choose on the boundary. According to AdS/CFT, the answer is yes. According to Tim, the answer is no, because a pure state evolves to a mixed state on the outside, which is not possible no matter what H(t) we choose.
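The statement that a pure state cannot evolve to a mixed one no matter what H(t) is chosen is just the invariance of the purity Tr(ρ²) under ρ → UρU†. A small sketch (the random state and Hamiltonian below are purely illustrative):

```python
# Purity Tr(rho^2) is invariant under rho -> U rho U^dagger for any
# unitary U, i.e. for any choice of H(t). Random 3-level example.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())      # pure state: Tr(rho^2) = 1

M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = M + M.conj().T                   # an arbitrary Hermitian "Hamiltonian"
U = expm(-1j * H)                    # the corresponding unitary

rho_t = U @ rho @ U.conj().T
purity_before = np.trace(rho @ rho).real
purity_after = np.trace(rho_t @ rho_t).real
print(round(purity_before, 10), round(purity_after, 10))  # 1.0 1.0
```

A mixed final state would have Tr(ρ²) < 1, which no H(t), however contrived, can produce from a pure initial state.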

ReplyDelete. Eg. in Einstein's theory of GR there are an infinite number of higher derivative terms with a priori unknown coefficients. We then use observation and guesses based on symmetry etc. to narrow down the theory.My understanding is, classical physics-wise, which is what Einstein’s GR is, anything beyond second order partial differential equations get us into a lot of difficulties.

Arun,

“Just like with quantum phenomena, our language has a problem with relativity. Looking at the whole of a space time, our verb tenses of past, present, future tend to mislead us. In particular, the present tense.”

Yes, language itself is a minefield of misunderstandings; it's a burden to convey Shannon information via this classical channel (and the subadditive von Neumann entropy does not help).

Since QM is not yet married with GR, the language of the block universe, I would simply say, does not yet correspond to our reality with respect to the flow of time.

I am absolutely sympathetic about a deeper philosophical delve into the matter … and gravity. Careful thinking is needed right from the basics.

You once absolutely correctly pointed out

“Ouch. Lorentz invariance relates the observations of observers with different 4-velocities at a point in space-time; i.e., Lorentz invariance applies in the tangent space. To relate observations at different points, one must use the connection given by the space-time metric. Think of the two-metric on the surface of a balloon. It is Euclidean everywhere.”

Yes, one of the symmetries of GR is local Lorentz invariance, local SO(3,1). Let us use your example of the balloon, i.e. the central extension from E(2) to SO(3).

The two revolutions in physics, SR and QM can be regarded as central extensions of the Galilean algebra.

Einstein resolved two tensions

1.) Maxwell (c=const) and Galileo ==> SR

2.) atomistic, granular matter and Maxwell (smooth field) ==> light quanta, QM

In SR, in the Poincaré algebra the generators for boosts and translations no longer commute. And this is precisely a central extension, with the “central charge” being 1/c times the time translation, or just the mass. Now c is the parameter instead of the radius above.

This is also reflected in QM in Schrödinger algebra where the “central charge” is the mass. (Since Schrödinger eq. is Klein-Gordon eq. in slow motion.)

When we go from SR to GR then the group SO(3,1) of Lorentz transformations becomes a local symmetry.

When we go from QM to QFT the gauge groups U(1), SU(2), SU(3) pop up as local symmetries.

We also have to distinguish local (metric or better curvature) and global (topology) aspects.

This is reflected in the groups: SO(3,1) is the analytic continuation of SO(4), which is (SU(2)⊗SU(2))/Z2.


The “squares” are plenty in this area, e.g. the Dirac eq. is the “square root” of the Klein-Gordon eq. (spinors are a representation of SO(3,1)). Or the 1/2 in the Einstein field equation.

Further, between Yang-Mills and Elie Cartan’s formulation of curvature (using tetrads, i.e. the “square root” of the metric, and you need tetrads to finally incorporate fermions) there is a remarkable analogy between the curvature, the spin connection under transformations of the non-compact real group SO(3,1), and the Yang-Mills field strength, the gauge connection under a compact complex group of transformations (U(1), SU(2), SU(3)).

But the analogy apparently fails at the level of the actions. The Einstein-Hilbert action is proportional to the scalar curvature and the Yang-Mills action is *quadratic* in the Yang-Mills field strength.

“Quantum is the physics with probability amplitudes and non-commuting physical quantities.”

This is just the most prominent “square”: you “square” amplitudes to get probabilities.

Doesn't all this tell us that, to join QM and GR, the measurement is the central point?

We have had for almost 100 years a tension between GR's non-quantized spacetime, not being able to be in superposition, and QM particles being in superposition. I never “saw” a Schrödinger cat that is fatter than a tiny fraction of a Planck mass (0.02 milligrams). To explain chemistry of course we need Schrödinger cats, but we only need tiny ones.

I guess nature has an observer-independent triggered measurement implemented to keep its spacetime smooth and safe above the Planck scale.

The message is: Do not quantize spacetime, but use the tension.

This predicts null results for these kinds of experiments.

The central extension of SR, the non-commuting of boost and translation generators, led to the Lorentz transformation of space and time into each other.

The next central extension will transform QM fields and spacetime into each other and the parameter will be the total mass/energy of the superposed particles of small entanglements, also providing a cutoff for renormalization.

This will finally explain why we need all this non-commuting.

Physphil,

"What you seem to miss is that H independent of t is just one specific example of H(t). So, all these statements and questions apply equally well to H independent of t as they to to non-trivial H(t). In all cases, there are some H(t) that are consistent with experiment and some that are not. Usually, we take the "simplest" one that works and then try to test it."

All that is fine with me, but it's not what you are doing. To begin with you're not testing anything. Maybe more importantly, the simplest H(t) will not do, see my above example about throwing in information. Maybe let me repeat this again. Say, I throw a book into a black hole from the boundary at time t^*, and give you the boundary state at time t^end > t^*. Can you tell me what was in the book using the information you have at t^end? That's a simple question that you should be able to answer with yes or no. So what is it?

bhg,

You misconstrued my statement about assuming that the constants are constant. This was referring to the example that you yourself offered. As I have said several times above, you can of course make any other assumption about the time-dependence of those constants (or any other constants, like those for higher-order couplings and so on). The point is if you do not make such an assumption, you have no prediction.

You can of course instead (as you suggested above) take the phenomenological approach and just say let's collect observations and find the best-fit result. Again, that's all fine with me, but clearly not what you are doing. If that was what you wanted to do, you'd wait until you measure a black hole evaporating and then see what theory fits it.

As to my question about Minkowski, I asked this because I don't understand what you are trying to achieve.

"We demand conformal symmetry is respected, which fixes the time evolution. We start with some initial state (prepared in some way that is not relevant) consisting of collapsing dust. We then then use the CFT to uniquely run the state forward and backward in time as we please. Unitarity is preserved and the information comes out in the Hawking radiation. TIm's scenario is shown to be wrong. Now, doesn't this accomplish everything you want?"

I already told you above what my problem is with this. What if I throw in a book from the boundary between the initial and end time? Can you or can you not recover the book's information from the boundary state at t_end? Yes or no? Best,

B.

Bee,

"Say, I throw a book into a black hole from the boundary at time t^*, and give you the boundary state at time t^end> t^*, can you tell me what was in the book using the information you have at t^end? That's a simple question that you should be able to answer with yes or no. So what is it?"

I already answered: no, if you do not tell me H(t) in between.

To expect the contrary is absurd. If I tell you the state of the BH at some time, but I do not tell you what I threw into the BH at earlier times (that is information encoded in the boundary H(t) in AdS), of course you cannot reconstruct the state at some earlier time. You cannot even know what the energy was at an earlier time. That is true classically, quantum mechanically, in AdS, in flat space, and with or without information loss. It is obvious.

Travis,

"The Born rule probability for one of the measurements depends on the polarizer setting and outcome of the other measurement, or else you won't get the observed correlation (even though it's an interesting fact that you get to choose which measurement depends on the other without changing the results. But you do have to choose one or the other; you can't generate the correct correlations using only the local Born rule). So it doesn't satisfy the restriction that it be independent of the space-like separated polarizer setting. Any kind of variable at all, "hidden" or not, whose value is set by any process which doesn't depend on the value of the distant polarizer setting will not give the right answer."

You already agreed that is not true in MW. But you insist it is true in "one world". Maybe you need to think about what "one world" is, and what the precise definition of "non-local" is. My definition is very simple and clear (operators commute outside the lightcone). What is yours?

"it's an interesting fact that you get to choose which measurement depends on the other without changing the results."

It is more than an "interesting fact". It proves there is no causal influence or true non-locality. It is no different from gaining information in classical correlation.

Physphil,

ReplyDeleteYou write:

"If I tell you state of BH at some time, but I do not tell you what I threw into BH at earlier times (that is information encoded in boundary H(t) in AdS), of course you cannot reconstruct state at some earlier time. You cannot even know what energy was at earlier time. That is true classically, QM-ly, in AdS, in flat space, and with or without information loss. It is obvious."You may be saying one of two things. Either you try the same move as bhg, in that you proclaim you can never know the laws of nature with certainty hence there aren't any and you can't make any predictions. Or what you say is just wrong. If you don't have the AdS boundary, there's no boundary (duh), hence it's sufficient if I give you the Hamiltonian at the initial time, or an evolution law for the Hamiltonian with an initial condition at the initial time, and the initial state. The rest is determined forwards and backwards.

Besides this, if you really believed what you said why would you worry about black hole information loss to begin with? You seem to believe you can't say anything about the initial state anyway. Best,

B.

Arun,

"Quantum is the physics with probability amplitudes and non-commuting physical quantities. Perhaps that is too naive?"

It's futile.

Physphil,

In the context of an experiment where we're performing a measurement, it's very clear what one world means: there are only single outcomes, not multiple outcomes as in MWI. Then the question of locality is whether there can possibly be any mechanism for generating the outcome at one location which doesn't need to know the polarizer settings at the other locations. There isn't any possible mechanism.

It is *absolutely* different from gaining information in classical correlation. Classically, if you know everything about the past light cone of an event, you can't possibly make any better prediction if you also know something outside the light cone. There is always a mechanism that can generate the outcomes without needing to know facts about distant regions. The information which generates the outcomes is always locally present. In single-world QM, this is no longer true.
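The "no possible mechanism" claim has a sharp finite version that is easy to verify by brute force (a standard exercise, sketched here under the usual CHSH conventions): every deterministic local strategy, where each outcome is fixed without reference to the distant setting, reaches at most |S| = 2, short of the quantum value 2√2.

```python
# Brute force over all deterministic local strategies: outcomes A0, A1
# (Alice's two settings) and B0, B1 (Bob's) are +/-1, fixed locally.
# The CHSH combination never exceeds 2 for any such assignment.
import itertools

best = 0
for A0, A1, B0, B1 in itertools.product([-1, 1], repeat=4):
    S = A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1
    best = max(best, abs(S))
print(best)  # 2 -- no local assignment reaches the quantum 2*sqrt(2)
```

Mixtures of deterministic strategies (i.e. arbitrary local hidden variables) are averages over these 16 cases, so they obey the same bound.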

Physphil,

As far as causal influence, we can talk about that using counterfactuals: if you keep everything else the same and only change one thing, then anything else that changes as a result of that is an effect of that one thing. In our case, if we keep everything about the universe the same but change the polarizer setting and measurement outcome at a distant location, then the outcome at our local location changes. In classical physics, you can change anything you like outside the light cone and it won't change anything within the light cone.

Bee,

"If you don't have the AdS boundary, there's no boundary (duh), hence it's sufficient if I give you the Hamiltonian at the initial time, or an evolution law for the Hamiltonian with an initial condition at the initial time, and the initial state. The rest is determined forwards and backwards."

Now you have added "an evolution law for the Hamiltonian". I guess you mean U(t) or H(t). Yes, that (plus initial state) will suffice, just as it does in AdS or QM.

"you proclaim you can never know the laws of nature with certainty hence there aren't any and you can't make any predictions"

I did not say that. I did not read bhg to say so either.

"Besides this, if you really believed what you said why would you worry about black hole information loss to begin with? You seem to believe you can't say anything about the initial state anyway."

I did not say that either.

First comment: a pure state cannot evolve to a mixed one no matter what H(t) is. That is the major difference between Tim and the prediction of AdS/CFT.

Second comment: I do not understand your objection to specifying the boundary H(t). Consider an alternate scenario: forming a BH inside a large spherical detector/emitter in Minkowski spacetime. You can ask, given initial ingoing particles emitted by the detector at time t_i that will form a BH, and given control over anything else emitted by the detector later that will fall into the hole (that is, like H(t) on the AdS boundary), what radiation will come out later. If I can answer that, I have resolved the BH info paradox in favor of predictability. If some saboteur invades my detector/emitter and sends energy in after the BH forms and I do not know what they did, I cannot predict the final state. That is simply obvious. It does not mean BH evaporation is not unitary, it just means I have incomplete information to predict the outcome. But if I know what went in (maybe nothing after the initial pulse, maybe something), then I can predict.

black hole guy,

Thanks, so you referred to this one. I made my very first comment 5 days later - I only became aware of Sabine's great blog “backreaction” this year.

You are citing Hawking from 1982, where he describes how a pure density matrix becomes a mixed one by tracing out the interior of the BH. He continues

“… One could still claim that the system was in a pure quantum state though this would be rather metaphysical because it could be measured only by an angel and not by a human observer.”

You said:

“… can you retrodict the initial state … According to Hawking, …, the answer is no. In AdS/CFT the answer is yes.”

No kidding, my very first thought was: ok, then Hawking is the human and AdS/CFT is the angel – what's the problem? A seraph, alias messenger, has a hell of a retrodictive power.

But I was too hasty, now comes your

“what the BH info paradox really means”:

“The "paradox" is that the "yes" answer coming from AdS/CFT is rather indirect, and at face value seems to conflict with nominally trustworthy approximations in semiclassical gravity. So to two different, but apparently trustworthy, computations give a different answer --- paradox!”

I just tried to extract the essence: “two different … computations give a different answer”. And you use *approximations in semiclassical gravity*, i.e. an approximation of an approximation. Ok, well … I would not be worried too much.

And I would not call semiclassical gravity trustworthy. It just mixes QM and GR at will and assumes some kind of backreaction. This will be far from quantum gravity.

It becomes less valid, or even invalid, when masses or a single mass get into superposition. Actually, exactly this hints at something very interesting. And I looked through the following comments and found something from Tim and Arun that connects to this breakdown of semiclassical gravity when it comes to superposition of mass. Ishibashi and Wald of course can only describe a *static* spacetime, because how the backreaction works is not yet known; we simply have no valid quantum gravity yet. But we know how Fermion and Boson fields evolve on a static curved spacetime.

Maybe when the “tension” between quantized matter fields and non-quantized spacetime reaches a certain threshold, a measurement will be triggered. Then this tiny entangled state will be reduced and becomes a product state, with masses *distributed*, but not in superposition anymore. Here it would be reasonable to talk about a backreaction on a non-quantized intermediary static curved spacetime. This tiny state itself is just a product with the state of the world (a huge product of tiny entangled states). The particles from this tiny state, now on mass shell, start to get entangled again with whatever is in their neighborhood and form a new tiny state … and so on. This happens everywhere all the “time” and exchanges energy/momentum when reduced. (And these tiny states would also transfer the right particles to different Rindler observers.)

By the way since we were talking about mixed density matrices, we also could talk about real thermal entropy. Maybe you know the reason why temperature T is equivalent to cyclic imaginary time t. What has AdS/CFT to say about this?

And I am not talking about the derivation of Hawking temperature in the Schwarzschild metric, and also not about the fact that e^(iHt) becomes e^(-Hß) with the Wick-rotated Euclidean t -> ß=1/kT, also not that this is analytically continued. I am talking about the fact that:

Euclidean quantum field theory in (D+1)-dimensional spacetime, 0 ≤ t < ß,

with cyclic boundary condition, becomes

quantum statistical mechanics in D-dimensional space.

The question is: why? And especially why this cyclic boundary condition? What has AdS/CFT to say about this?
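For what it's worth, the cyclic boundary condition has a standard origin independent of AdS/CFT: it is forced by the trace in the partition function. A schematic sketch of the textbook derivation, for a single field φ:

```latex
Z \;=\; \operatorname{Tr}\, e^{-\beta H}
  \;=\; \int d\phi_0 \, \langle \phi_0 |\, e^{-\beta H} \,| \phi_0 \rangle
  \;=\; \int_{\phi(0)=\phi(\beta)} \mathcal{D}\phi \;\, e^{-S_E[\phi]} .
```

Slicing e^(−βH) into N factors e^(−εH) with Nε = β and inserting complete sets of field eigenstates turns the middle expression into a Euclidean path integral over imaginary time 0 ≤ τ < β; the trace identifies the bra and ket at the endpoints, which is exactly the cyclic condition φ(β) = φ(0).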

And now I am looking forward to reading “Lost in Math” soon!

Will the Pain Never End?

Anonymous Physphil:

"You already agreed that is not true in MW. But, you insist it is true in "one world". Maybe you need to think about what is "one world", and what is precise definition of "non-local". My definition is very simple and clear (operators commute outside lightcone). What is yours?

"it's an interesting fact that you get to choose which measurement depends on the other without changing the results."

It is more than "interesting fact". It proves there is no causal influence or true non-locality. It is no different from gaining information in classical correlation. "

Can there be a more incompetent comment about Bell? (Please, no one construe this as a challenge!) The issue about non-locality and Spooky Action-at-a-Distance was never, ever, ever, ever about the ETCRs! Everyone knows that the ETCRs hold. Everyone knows that you can't use the violations of Bell's inequality to signal. It was Bell himself who proved the no-Bell-telephone theorem. When Einstein was complaining, since at least 1927 and probably earlier, about the non-locality in quantum theory according to Copenhagen, he was never claiming that you could signal using the quantum surface correlations! If he had thought that, then he would have demanded that the relevant experiment be done, fully expecting it not to show signals. If Einstein had thought that the predictions of quantum theory allowed for the ability to signal (as a violation of the ETCRs would imply) then he never would have thought that you could account for the predictions of quantum mechanics with an Einstein-local theory. He knew you could account for the EPR correlations with a local theory, but only if the theory were deterministic and hence if the quantum description (the wavefunction) were incomplete. The sheer, complete, total, unspeakable incompetence of this comment should shame Anonymous Physphil off the internet forever, except of course, shielded by his anonymity, he suffers no real consequences for his nonsense.

The remarks about using probability amplitudes rather than probabilities are fundamentally no better, but there is at least a little room to get confused about that.

As for the comment about proving there is no causal connection between the two sides by reference to the independence of the surface statistics from the order in which the distant experiments are made, could a more direct proof of the importance of learning Bohmian mechanics be possible? Of course it returns the usual surface statistics, and it clearly has superluminal causal connections. So satisfying the surface statistics cannot possibly prove that there are no causal connections.

And this litany of idiocy is topped off by likening the case to that of Bertlmann's socks. Bell's whole point in bringing up the example was to insist that it can't possibly be like that. That is what he proved. And the GHZ case demonstrates it beyond any doubt.

Bell proved his result over half a century ago. I can explain it to a class of freshmen with zero background in physics and zero advanced mathematics in about 20 minutes. Any of my students understands this better than Anonymous Physphil, yet he prates on about it as if he understood it. Is there no shame left in physics, and no intellectual integrity? Or are most physicists like this, with all the self-regard coupled with all the ignorance and incompetence of Trump?

Bee,

"What if I throw in a book from the boundary between the initial and end time? Can you or can you not recover the book's information from the boundary state at t_end? Yes or no?"

Yes, provided I know the Hamiltonian for the times in question. The situation is no different than in any other QM system. If you hand me a state at one time I can tell you the state at another time provided I know the Hamiltonian for all intermediate times; if I don't know the Hamiltonian then I can't say anything about time evolution.

I have to say, I don't see the relevance of this for the questions at hand. As I've said, the interesting part of the BH info paradox is the statement that even if you know all the equations and all the boundary conditions you can't (according to Hawking) retrodict the QM state from the Hawking radiation.

bhg,

You write

"Yes, provided I know the Hamiltonian for the times in question. The situation is no different than in any other QM system. If you hand me a state at one time I can tell you the state at another time provided I know the Hamiltonian for all intermediate times; if I don't know the Hamiltonian then I can't say anything about time evolution."

We are going in a circle here. You answer "yes, provided I know the Hamiltonian", so please let me know whether you know the Hamiltonian?

The situation *is* different from other QM systems because in your case the Hamiltonian contains information about that which you want to predict, i.e., the initial state.

Besides, as I have said several times before, if you want to insist that you cannot know the Hamiltonian for all times you can't do any science. If you want to go with this argument, fine, then we can end the discussion right here. If not, then why do you keep repeating it? Best,

B.

Physphil,

Yes, I added the evolution law for the Hamiltonian because bhg keeps repeating the false assertion that I insist the Hamiltonian does not have a time-dependence, and I was trying to avoid having to say yet one more time that this is false. Of course the Hamiltonian can have a time-dependence, provided it has an evolution law. No, an evolution law is not the same as a time-dependence. An evolution law is an equation that can be evolved from an initial state.

"you proclaim you can never know the laws of nature with certainty hence there aren't any and you can't make any predictions"

I did not say that. I did not read bhg to say so either.

Wonderful. Then we seem to agree. If you sit at t_end and you have the state at t_end, you cannot tell me what was the initial state at t_in. Case settled, information still lost. Do we agree on that?

"Consider alternate scenario, forming BH inside large spherical detector/emitter in Minkowski spacetime. You can ask, given initial ingoing particles emitted by detector at time t_i that will form BH, and given control over anything else emitted by detector later that will fall in to hole (that is, like H(t) on AdS boundary), what radiation will come out later. If I can answer that I have resolved BH info paradox in favor of predictability."

You wouldn't expect the evolution to be unitary in your detector, so there's no paradox to solve here.

"If some saboteur invades my detector/emitter and sends energy in after BH forms and I do not know what they did, I cannot predict final state. That is simply obvious. It does not mean BH evaporation is not unitary, it just means I have incomplete information to predict outcome. But if I know what went in (maybe nothing after initial pulse, maybe something), then I can predict."

Huh? You are not supposed to predict the final state from an (incomplete) initial state, you are supposed to predict the initial state from the (complete) final state. In Minkowski space of course the detector boundary does have an evolution law. By evolution law I do *not* mean H(t), I mean a differential equation by use of which you obtain H(t) from an initial state.

Best,

B.

Typo: It should be 0≤τ<ß, where τ is the time parameter in the integral of the Lagrangian in the path integral. Anyway, best look here. And the equality of the partition function with the path integral under cyclic boundary condition is on page 288, eq. (5).

Bee,

In the hope that another voice might help clarify--

"Wonderful. Then we seem to agree. If you sit at t_end and you have the state at t_end, you cannot tell me what was the initial state at t_in. Case settled, information still lost. Do we agree on that?"

I don't want to speak for anyone else here, but concluding that information is lost seems unjustifiable to me, since one can explicitly construct the initial state from the final state if given the Hamiltonian at all intermediate times. The construction just follows from the rules of ordinary quantum mechanics: since the CFT is an ordinary quantum system and evolves in time according to the Heisenberg equation, the initial state is given by psi_in = U† psi_end, where U = T exp(-i * (integral from t_in to t_end of H(t) dt)) and T is the time-ordering operator. If either psi_in or psi_end is a pure state then the other will be too.

Knowing the full time-dependence of the Hamiltonian is exactly the same thing as saying we know how the environment (including the experimenter) interacts with the system as a function of time. Let's suppose the system of interest is a rock, which is kept in a perfectly sealed box, which is filled with a magnetic field that the experimenter can adjust by turning a knob that is outside of the box. Turning the knob of course modifies the Hamiltonian under which the rock time-evolves. Now the key point: in this situation, perfect knowledge of the initial state of the rock does not suffice to determine its state after some amount of time has passed, since the experimenter could always decide to turn the knob -- or not -- at any stage of the experiment. One can determine the final state from the initial only if also told exactly what the experimenter plans to do, which amounts to knowledge of the Hamiltonian at all intermediate times. This is the same as BHG's forced oscillator example. Let's suppose you don't know what the experimenter is going to do. Do you believe that information will be lost during the experiment, i.e. that the time evolution from psi_in to psi_end will be non-unitary? I would say that in this case you just don't know the unitary, though maybe you have a different definition of "information loss".
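To make the construction concrete, here is a toy numerical sketch (ordinary finite-dimensional quantum mechanics in numpy, nothing AdS-specific; the Hamiltonians and state are randomly made up): evolve a state through a sequence of different time-dependent Hamiltonians (the experimenter "turning the knob") and recover the initial state from the final one.

```python
# Toy check: unitary evolution with a time-dependent Hamiltonian is exactly
# invertible, so knowing H(t) at all intermediate times lets you retrodict
# the initial state from the final state.
import numpy as np

rng = np.random.default_rng(0)
dim, steps, dt = 4, 10, 0.1

def u_step(h, dt):
    """Unitary exp(-i h dt) for a Hermitian matrix h, via eigendecomposition."""
    w, v = np.linalg.eigh(h)
    return v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T

# H(t) as one Hermitian matrix per time slice (the "knob settings")
hams = []
for _ in range(steps):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    hams.append((a + a.conj().T) / 2)

psi_in = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi_in /= np.linalg.norm(psi_in)

# Forward, time-ordered evolution: psi_end = U psi_in
U = np.eye(dim, dtype=complex)
for h in hams:
    U = u_step(h, dt) @ U
psi_end = U @ psi_in

# Retrodiction, knowing H(t) at all intermediate times: psi_in = U^dagger psi_end
psi_back = U.conj().T @ psi_end
print(np.allclose(psi_back, psi_in))  # True -- unitarity, no information lost
```

Without the list `hams` (i.e. without knowing what the experimenter did), the final state alone of course fixes nothing, which is the point of the rock-in-a-box analogy.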

dark star,

"one can explicitly construct the initial state from the final state if given the Hamiltonian at all intermediate times"

Right... as I said earlier, you can reconstruct what was in the book if I tell you what was in the book. Congrats. My point being that at the final time you don't have the Hamiltonian at all intermediate times. And, no, this is not the same situation as you have in Minkowski space (as bhg and Physphil seem to believe) because in your case the Hamiltonian does not have an evolution law and it actually must contain the very information you claim to be able to reconstruct.

Bee,

I might be misinterpreting you, but that analogy seems misleading to me. By "what's in the book", are you referring to having knowledge of the initial state? The standard (Heisenberg) construction of U(t) is completely independent of the initial state -- no knowledge of "what's in the book" is used at any step. The only information required to completely construct the time evolution is the same info we always need to specify in order to specify the time evolution of any experiment, namely the initial (or final) state together with the way the experimenter acts on the system over the course of the experiment.

Sorry if you've already been specific above, but I'm really not sure what you mean by an evolution law for the Hamiltonian. If it's a rule for how the Hamiltonian depends on time then that is determined by the actions of the experimenter, both in a Minkowski box and in AdS. In AdS we do experiments by changing the boundary conditions, which corresponds to changing the CFT Hamiltonian, exactly as we would do if we were performing experiments on a box in flat space.

travis,

"In the context of an experiment where we're performing a measurement, it's very clear what one world means: there are only single outcomes, not multiple outcomes as in MWI."

In the context of performing an experiment, you cannot determine if there are multiple outcomes or not. That is why I am agnostic about collapse versus MW, and why this definition does not make sense.

"The information which generates the outcomes is always locally present. In single-world QM, this is no longer true."

You did not define "single world QM" in a satisfactory way.

Here is a definition. Each observer keeps only the branch of the MW wavefunction relevant to them, when keeping other branches is not necessary for sufficiently accurate prediction. That is actual spirit of Copenhagen. Physics is local because QFT/MW is local.

"In our case, if we keep everything about the universe the same but change the polarizer setting and measurement outcome at a distant location, then the outcome at our local location changes."

Of course this is not true. The fact that operators commute outside the lightcone proves that changing detector setting has no effect on outcome outside lightcone.

Tim,

ReplyDelete"The sheer, complete, total, unspeakable incompetence of this comment should shame Anonymous Physphil off the internet forever, except of course, shielded by his anonymity, he suffers no real consequences for his nonsense."

You are very angry. I observe this is common with you. Paul Hayes' link above is relevant. https://arxiv.org/abs/1411.2120

"And this litany of idiocy is topped off by likening the case to that of Bertlmann's socks. Bell's whole point in bringing up the example was to insist that it can't possibly be like that. That is what he proved. And the GHZ case demonstrates it beyond any doubt."

You are angry and wrong.

Bell/GHZ proves only that "it can't possibly be like that" if world is classical. Bell assumes world is described by local hidden variables that parametrize "real" state of world. But state of world is QM, not classical. Observables like position do not have "actual" values.

As for Bohm, it does not reproduce relativistic QFT and never will (as already discussed here), so is pointless waste of time.

"I can explain it to a class of freshmen with zero background in physics and zero advanced mathematics in about 20 minutes. Any of my students understands this better than Anonymous Physphil, yet he prates on about it as if he understood it."

I expect your students can see what you are.

Bee,

"Huh? You are not supposed to predict the final state from an (incomplete) initial state, you are supposed to predict the initial state from the (complete) final state."

For me it works both ways. But, OK, please replace what I wrote before:

If some saboteur invades my detector/emitter and sends energy in after BH forms and I do not know what they did, I cannot postdict initial state from final state. That is simply obvious. It does not mean BH evaporation is not unitary, it just means I have incomplete information to postdict initial state. But if I know what went in (maybe nothing after initial pulse, maybe something), then I can postdict.

Example of this point. Suppose I measure electric field at boundary at final time. Can I then postdict what was total charge inside at some earlier time? Obviously, only if I know how much charge passed through spherical detector in between those times. That is like knowing boundary H(t).
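The charge version of the point is just bookkeeping. A minimal sketch with made-up numbers (nothing here comes from the discussion itself; only the arithmetic is being illustrated):

```python
# Gauss's-law bookkeeping with made-up numbers: the boundary E-field fixes
# the enclosed charge at t_end; retrodicting the enclosed charge at t_i
# requires knowing the net charge that crossed the shell in between --
# the analogue of knowing the boundary H(t).
q_end = 3.0        # enclosed charge at t_end, read off the boundary field
q_through = 5.0    # net charge recorded passing inward through the shell
q_initial = q_end - q_through
print(q_initial)   # -2.0
```

Without the recorded flux `q_through`, the final boundary field alone underdetermines the earlier enclosed charge, which is the saboteur scenario.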

"In Minkowski-space of course the detector boundary does have an evolution-law. By evolution law I do *not* mean H(t), I mean a differential equation by use of which you obtain H(t) from an initial state."

I think from your comments, that you are imagining initial state to include all of Minkowski, including region beyond detector, while I am only considering region inside. If that is correct, please replace "Minkowski" with "manifold with boundary". It can be ball times time (like large region of Minkowski bounded by large sphere near infinity).

Of course, you must specify boundary conditions on fields else physics is ill-defined. Those can be Dirichlet BCs, with time dependence or not. You can specify any BCs and no evolution law restricts them, they are part of specification of math problem on manifold with boundary.

Then, you can ask if BH formation/evaporation exterior state evolution is unitary or not, where U(t) depends on BCs you specified. You are not assuming answer by specifying BCs. Maybe BH is hole in space and information falls in, so exterior state does not evolve unitarily. Or, maybe not. AdS/CFT gives answer to that question (which is, it is unitary). What is the problem there?

Physphil,

Of course you can't determine whether there are multiple outcomes. The point is that you can assume that only one outcome occurred and draw conclusions from that, or assume that multiple outcomes occurred (unbeknownst to you) and draw conclusions from that. If multiple outcomes occur, it's possible to maintain locality because the multiple outcomes at each detector location don't have to line up with each other until their light cones intersect. If only one outcome occurs at each detector location, then the universe has to decide right then and there which outcome to go with, and it can't decide the right one without using the information about what's happening at the other detector location.

"Of course this is not true. The fact that operators commute outside the lightcone proves that changing detector setting has no effect on outcome outside lightcone."

Let me be more precise in what I was saying. Suppose that, in your local reference frame, the measurement at the distant detector is performed first (but still at space-like separation), and then you perform yours and get an outcome. Then go back in time (relative to your reference frame) and change the detector setting and outcome of the distant location. Your outcome will change too, and this never happens in any classical theory: in any classical theory, changing something at space-like separation never has an effect.

Bee,

I don't understand what you are asking. You ask "so please tell me whether you know the Hamiltonian?". AdS/CFT does not address the question of what is the One True Hamiltonian. There is a space of holographic CFTs and a space of quantum gravity theories in AdS, and the claim is that there is a precise correspondence between elements of the two spaces. Let's take a particular example, say N=4 Super Yang-Mills at the conformally symmetric point. Each state in this gauge theory corresponds to some state in AdS_5 and vice versa. So if you prepare a state in AdS_5 corresponding to a collapsing star, we can map this to a state in the CFT, and then run it forwards and backwards in time by unitary evolution. The evolution is uniquely specified with no ambiguity. Furthermore, if you wish you can include a bulk observer who at some time throws in a book towards the black hole; this is governed by the same unitary evolution as before, so doesn't change anything. So having specified which example of AdS/CFT we are talking about -- N=4 SYM in this case -- the answer to your original question is "yes". Since time evolution is governed by the N=4 SYM Hamiltonian, given the full state at one time I can completely determine the state at any earlier time.

All of this is just to say that the CFT is a standard QM system and so obeys all the properties we are used to.

I don't know if perhaps you are unhappy with the fact that AdS/CFT allows for dualities between many different pairs of CFTs and AdS quantum gravity theories, rather than there just being One True Example? I apologize if I am missing your point.

“"non-local". My definition is very simple and clear (operators commute outside lightcone).”

This is absolutely correct.

“The fact that operators commute outside the lightcone proves that changing detector setting has no effect on outcome outside lightcone.”

This is simply wrong, if Alice and Bob live in the same world and are able to compare their results later.

They will see the correlation even if they were spacelike separated at the measurement.

The “non-locality”, the distant spacelike correlation in EPR and the equal time commutation relation (ETCR) seem at first glance indeed to be a contradiction.

I hope you see that something is missing.

As Sabine says “nonlocality of quantum mechanics is subtle” – at least as subtle as this cross promotion for her book ;-)

Physphil

"You are very angry"

I congratulate you on your perspicacity.

The phrase "He doesn't suffer fools gladly" will be of some assistance in understanding this.

"I expect your students can see what you are."

Indeed they can. That is why I just got several very thoughtful gifts and e-mails of thanks after my class in philosophy of science.

I can only hope that you have no students to mislead. Of course, for all I know your actual training is in, say, comparative literature. That would explain a lot.

Physphil, Travis,

I guess that non-locality in EPR can be circumvented in many worlds (MWI) – see e.g. here: “we conclude either that valid physical theories are nonlocal, or they are local, deterministic, and counterfactually indefinite.” Here Tim certainly could provide much more detail and explain it in depth, maybe including the measure problem (how probabilities are handled over different branches …), or refute it.

I also was for many years a strong believer in MWI, not so much to circumvent non-locality in EPR or the measurement problem, but without MW I could not grasp how it could be possible to evaluate all the amplitudes, e.g. in Feynman diagrams, in “real time”. I was influenced by David Deutsch, who imagines that all these calculations can only be done in the joint effort of many parallel worlds. His prime example is that already a fully entangled state of 400 qubits described by about 10^120 complex numbers exceeds the number of real (on mass shell) particles of our single universe – and so he asks: where should all the calculation power reside?

But I changed my mind. And I am not at all agnostic about this problem, on the contrary – the last 5 years I focused on how the measurement problem can be solved.

Again as Sabine says: “Another unappealing aspect of quantum mechanics is that by referring to measurements, the axioms assume the existence of macroscopic objects— detectors, computers, brains, et cetera— and this is a blow to reductionism. A fundamental theory should explain the emergence of the macroscopic world and not assume its presence in the axioms.” (Lost in Math)

Tim, Travis,

I want to ask you about the form of the following equation, upon which Bell's inequality is based:

(*) Prob(A,B|x,y)=int d lambda Prob(lambda) Prob(A|x,lambda) Prob(B|y,lambda)

where lambda can be any sort of variable, x and y are Alice’s and Bob’s measurement device settings and A and B are the outcomes of Alice’s and Bob’s measurements, respectively.

It seems to me that

(1) the locality assumption is used when writing down the product of two probability distributions

(Alice’s distribution independent of Bob’s setting y, Bob’s independent of x)

(2) a different assumption is used when adding those products of probabilities, weighed by the probability of lambda, to get the total probability.

It is this second assumption that most physicists would call “classical” because it uses the classical way of calculating probabilities: to get the probability of obtaining the result (A,B), add the probabilities for all different possible ways (described by different possible values of lambda) to get that same final result (A,B).

In contrast, no matter what lambda is (even if it is a wave function or a density matrix), one does not calculate the probability for (A,B) that way within quantum mechanics. Instead, one adds probability amplitudes first, and then one takes the absolute value squared of that sum.

It seems you disagree with there being a second assumption underlying (*). Why?

Steven
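The gap between Steven's factorized form (*) and the quantum rule can be seen numerically in the CHSH version of the argument. A quick sketch (illustrative only; the angles are the standard CHSH choices, and E(a,b) = -cos(a-b) is the spin-singlet correlation):

```python
# CHSH sketch: the factorized form (*) caps the CHSH combination at 2,
# while the quantum singlet correlation E(a,b) = -cos(a-b) reaches 2*sqrt(2).
# Deterministic assignments suffice for the classical maximum, since (*)
# is a convex mixture over lambda of such assignments.
import itertools, math

# Classical bound: enumerate all deterministic values A(x), B(y) = +/-1
best = max(
    abs(a0 * b0 + a1 * b0 + a1 * b1 - a0 * b1)
    for a0, a1, b0, b1 in itertools.product([1, -1], repeat=4)
)
print(best)  # 2

# Quantum value with the standard CHSH angles
def E(a, b):
    return -math.cos(a - b)  # singlet-state correlation for settings a, b

a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, 3 * math.pi / 4
S = abs(E(a0, b0) + E(a1, b0) + E(a1, b1) - E(a0, b1))
print(round(S, 6))  # 2.828427, i.e. 2*sqrt(2)
```

The enumeration is exactly assumption (2) made mechanical: each lambda fixes definite values, and probabilities of joint outcomes are weighted sums over lambda.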

Reimond,

No, no valid argument has ever been given that MWI is a local theory. And many hardcore MWI advocates (e.g. Sean Carroll) freely admit that the theory is highly non-local. (I published a response to Blaylock's paper in the AJP when it was published. As a referee, I made that a condition of publication. I thought that there could be some pedagogical value in having the paper and its refutation available side-by-side.)

The deal with MWI is this: since it is not a single-world theory, it is not at all obvious what the notion of a "correlation between space-like separated outcomes" even means. If Alice and Bob do spin measurements on both sides of a Bell test, and if both outcomes occur on both sides, then what could it even mean to claim that the distant outcomes are correlated in some way? In order to get a correlation you have to somehow pair up particular outcomes on one side with particular outcomes on the other.

Many MWI advocates, having not really thought anything through, just assert that the correlations only come into existence once Bob and Alice have communicated with each other, and that communication cannot take place faster than light. Therefore, they think, there never are violations of Bell's inequality for events at space-like separation. But that ignores the critical question: in MWI, what determines which of the various Bobs (who each saw a different outcome) gets to communicate with which of the various Alices? Is that crucial fact only determined *when Alice and Bob communicate or get together*? Or is it already determined *before* they communicate or get together?

One could imagine, I suppose, some non-superluminal mechanism which determines who gets to communicate with who. But MWI does not employ any such non-superluminal mechanism: like all quantum theories, it makes use of the wavefunction or quantum state. And the quantum state, as a non-local beable, effects the pairing up superluminally and instantaneously. For example, in a straight EPR set-up, there is nothing that one needs to wait for to determine which Bob will communicate with which Alice: that information is encoded in the entanglement structure of the global quantum state. So MWI actually has non-locality up the wazoo.

Sloppy MWI advocates have been saying forever that MWI is local, but they never give any good arguments. Because they can't.

There is an even more fundamental problem with MWI. Most advocates claim that in MWI all there is is the quantum state. But the quantum state is not a local beable (as Bell insisted), so if that's all there is in the theory then there are no local beables at all. And if there are no local beables at all, it is very obscure what it could even mean to say that any pair of events happen at space-like separation. And that's because it is very obscure what it could mean to say that in MWI anything happens anywhere in space-time. But if nothing happens anywhere in space-time, then the question is how the theory could predict any of the sorts of events we take to happen in laboratories at all. So what it could possibly mean to say that MWI is empirically adequate is basically incomprehensible.

Travis,

"in any classical theory, changing something at space-like separation never has an effect."

Of course. QM is not a classical theory.

Again. Say "collapse" means that you decide to keep only branch of MW that describes you, because it is unnecessary to keep other branches in order to make accurate enough predictions in future. This physics is identical to MW, which you agree is local. I am just deciding to ignore other branches because that is a very good approximation. So, that version of Copenhagen (which is original version in new words, I think) is local.

Reimond,

"“The fact that operators commute outside the lightcone proves that changing detector setting has no effect on outcome outside lightcone.”

This is simply wrong, if Alice and Bob live in the same world and are able to compare their results later.

They will see the correlation even if they were spacelike separated at the measurement."

No, it is not wrong. MW is simple proof by example, or collapse as I have defined it above. Correlations over distance generated in past do not constitute non-locality.

Tim,

I think you are example of something I read about here: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect.

Anonymous Physphil,

I read the article with great interest, although somehow I have the feeling that I had heard of the effect before. Since each of us appears to believe that we have a complete and thorough understanding of these issues, and that the other one is either a troll, an outright fraud, or incurably stupid, it seems as though one or the other of us does suffer from this syndrome. This awakened in me a truly frightening bout of extreme anxiety and self-doubt. What if, among the two of us, it is me who is incompetent and just completely unaware of it? In a blind panic, I rushed to check my Curriculum Vitae.

2017 Townsend Visitor at Berkeley

2015 Elected to the American Academy of Arts and Sciences

2015 American Council of Learned Societies Fellowship

2014 “What Bell Did” chosen as 2014 Highlight article in Journal of Physics A: Mathematical and Theoretical

2011 Invited to deliver the Shearman Lectures at University College London

2011 Member of Foundational Questions Institute (FQXi)

2008 Guggenheim Fellowship

2007 Elected to Academie Internationale de Philosophie des Sciences

As we used to say at the beginning of a game of tag:

You're It.

Dear Steven,

We could go into a long discussion of why this line of thought—which we have seen advocated here by some anonymous poster—is completely off-target, but the simplest thing to do is just side-step the issue altogether. The two places where probabilities and the magnitudes of probability amplitudes completely agree are the two real solutions to the equation x-squared = x, namely 1 and 0. That is, if the magnitude of the probability amplitude is 1 then the probability is 1 and vice-versa. And if the magnitude of the probability amplitude is 0 then the probability is 0 and vice-versa. So in the extreme cases of probability 1 and probability 0 there is no difference between the probability amplitude and the probability.

Unlike Bell's original proof, which concerned probabilities and hence statistics between 0 and 1, the Greenberger-Horne-Zeilinger example uses only probabilities of 1 and 0. So no appeal to probability amplitudes can possibly make any difference there. You can read about the GHZ experiment, if you are not familiar with it, in my paper "What Bell Did", or my book Quantum Non-Locality and Relativity, or on Wikipedia. If you have any questions about it, just ask and we can clear them up.

By the way, one of the reasons that there is so much confusion in the literature is because people use the terms "classical" and "quantum" in many different ways, and much too often having nothing at all precise in mind. For example, there are things that have been called "classical logic" and "quantum logic", and things that have been called "classical probability" and "quantum probability", and people often say of some theory that it is a "classical theory" or a "quantum theory". At a recent conference in the foundations of physics I raised the question of what people even mean when they say that a physical theory is "classical", and I think it is fair to say that many people were taken aback at the fact that no one could offer any sort of adequate definition. One person went so far as to suggest that "classical physics" meant exactly those topics covered in Herbert Goldstein's book Classical Mechanics. But by that criterion even Maxwellian electro-dynamics is not classical! Is Special Relativity classical? (It certainly is not quantum.) Is General Relativity? Is Bohmian Mechanics? (Some people have complained about Bohmian mechanics that it is too classical, and other that it is not classical enough! You can't win for losing in this business.) So you might take a moment to ask yourself what you even mean by "classical" and "quantum" when you suggest that Bell might have made some tacit classical presupposition that is denied by quantum theory. In fact that suggestion is just flat wrong.

Physphill,

Your deciding to worry only about the branch of the wave function that affects you is a different question from whether the universe itself only keeps that branch. If collapse really does occur when you perform a measurement, it must be a nonlocal process because otherwise there is no way for the universe to know which branch of the outcome to keep at the distant location; the universe must coordinate between your side of the experiment and the other side.

In many worlds, the correct way to describe branching (although this is often not understood even by Everettians) is that branching is a local process. The entire universe doesn't branch at once (basically because decoherence requires interaction, and interactions propagate at the speed of light or slower). Rather, your side of the experiment will branch into various outcomes, and the other side of the experiment will branch into various outcomes, and those branches will spread out at the speed of light, and when they come into contact with each other the branches consistent with the correct results will line up with each other and not interact with the other branches. But in order for this story to remain local, it is absolutely necessary that multiple outcomes are kept at the distant location until that distant location's light cone intersects with yours. In other words, when you say that you choose to keep only your branch: if you mean you only choose to keep your local branch, then you must still believe that multiple outcomes occurred at the distant location and only collapsed down to one outcome after their light cone intersected with yours. But if you mean that you choose a branch across the entire universe at once, then such a description of the universe doesn't afford a local account of how events take place. It really does matter whether instantaneous collapse actually occurs or not.

Physphill,

Look, if you want to insult other people, at least put your name behind it. Your behavior is as disgusting as it is cowardly.

bhg,

I'm stuck on the point of throwing stuff in from the boundary. See, you keep saying AdS is like a box and you want to think of what you can do in AdS as similar to that of a box in asymptotically flat space. But clearly in the latter case you can throw stuff through the boundary. Now you say that in these cases you can change the boundary-theory in AdS to account for stuff that falls in, fine. But then you give the information about what has fallen in to the observer at the final time, because otherwise you cannot evolve the state backwards. That's why I say it looks like cheating to me.

The similar situation in Minkowski-space would be that you, say, start throwing something from outside the box at the initial time. Then at some intermediate time it crosses the box-wall. Indeed, that might be the only thing you want to let into the box to begin with. And what falls in, of course, follows from the initial state. In the box, you have your interactions, you form your black hole, it evaporates, you get the end state at t_e. Now you have to reconstruct the initial state from that end-state. Of course for this you do *not* get the information that has fallen in at the intermediate time. There wouldn't be anything in need of reconstruction if you had this information.

And, thinking about it, if you think AdS/CFT can describe a box in Minkowski-space, why would you want the evolution to be unitary to begin with? Best,

B.

Bee,

I did not start the insults, nor did I come close to the level of Tim. Just look. But I will stop posting on your blog if you do not want me here.

Steven,

Nice try but I'm not surprised it didn't work. Werner's tried the simplex approach and I've tried the "possessed values" and "existence of joint sample space" approaches, but it seems Werner's right that these people's thinking is trapped by a "metaphysical assumption" that they can't even see is an assumption. That's why precise explanations of what "classical" means go in one ear and out the other; GHZ is invoked in naive, P(X)=1 ≡ X, mode; referrals to the math. phys. and phil. literature are ignored / dismissed; etc. I'm afraid the basic concepts are simply beyond them and the "bad philosophy and bad physics" is irremediable.

Tim,

I've shown you this paper before, but it's relevant because you're claiming that MWI advocates *never* give any good arguments that MWI is local:

https://arxiv.org/pdf/quant-ph/9906007.pdf

In particular, you say "But MWI does not employ any such non-superluminal mechanism: like all quantum theories, it makes use of the wavefunction or quantum state."

This isn't true of every formulation of QM. The formulation of quantum mechanics that is local is not the Schrodinger picture in which you have a quantum state that evolves in time, but the Heisenberg picture in which the quantum state is fixed once and for all (and therefore is only "nonlocal" in the sense that a global constant is nonlocal), and only the local operators evolve in time. The paper above steps through Bell's experiment showing how the local operators need only interact locally with other operators to give the right outcomes. I've also reproduced the results in the paper myself for the GHZ experiment; it's not hard to do for any quantum experiment once you understand the basic trick.

You ask: "in MWI, what determines which of the various Bobs (who each saw a different outcome) gets to communicate with which of the different Alice's? Is that crucial fact only determined *when Alice and Bob communicate or get together*? Or is it already determined *before* they communicate or get together?"

In a sense, it's already determined before they get together. Believe me, working through the above paper is worth it, it really does provide some insight into this. The short answer, which you have to work through the math to really grok, is that information is sort of encoded in the local operators in a kind of key pair combination, and only the branches that should interact with each other based on our expectations end up having keys that match up.

All of this being said, I agree with you that much more work needs to be done in clarifying what the local beables are in MWI, and honestly I think this is one of the most important problems for people to be working on. Using the operator formalism only gets you so far; at some point you still have to apply an interpretation to the operator in terms of some outcome of a measurement, which is unacceptable.
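The locality claim about the Heisenberg picture is easy to illustrate concretely: conjugating by a unitary that acts only on Alice's qubit changes Alice's operators while leaving Bob's operators exactly as they were. A minimal sketch (assuming numpy; the Hadamard is just an arbitrary choice of local unitary):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# A unitary acting only on Alice's qubit (Hadamard, chosen arbitrarily)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
U = kron(H, I2)

# Heisenberg evolution of operators: O -> U† O U
def evolve(O):
    return U.conj().T @ O @ U

Z_alice = kron(Z, I2)   # observable local to Alice
Z_bob   = kron(I2, Z)   # observable local to Bob

# Alice's operator changes (the Hadamard swaps Z and X)...
assert np.allclose(evolve(Z_alice), kron(X, I2))
# ...while Bob's operator is completely untouched:
assert np.allclose(evolve(Z_bob), Z_bob)
```

Nothing here proves the full Deutsch-Hayden story, of course; it only shows the basic mechanism that local unitaries move only local operators.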

Briefly to black hole guy (having rediscovered this discussion through a narcissistic google search for discussion of my work):

1) Thanks for the call-out to my paper. I hope the paper is reasonably clear that it's giving exegesis of the literature rather than saying something profoundly new; in the case of recurrence I couldn't find a statement of the result in the literature that had quite the right form to respond to Unruh and Wald so I rolled my own, but I don't intend to claim any deep originality for it.

2) Drop me an email sometime (dmwallac@usc.edu) if you'd be interested in discussing some of the conceptual issues further. (I don't want to try to enter this 1500-element thread!) Pseudonymously if you like, though of course I'm also happy to protect your anonymity.

Paul Hayes,

Do macroscopic objects have "possessed values"? For Bell's theorem to go through, we just have to assume that it makes sense to say that a macroscopic pointer on an instrument is pointing this way or that way. We don't have to assume anything about whether subatomic particles have a definite position or spin or anything like that.

I mean by a classical theory one in which the observables have counterfactual definiteness; and by quantum theory my naive view of having amplitudes and non-commuting observables.

Travis,

So let me probe your understanding of locality and branching in MWI a bit. We certainly disagree, so let's start by trying to bring the disagreement into focus.

Alice and Bob are stationed in their labs, many, many light years apart. A pair of electrons in the singlet state is created, with one being sent to Alice and the other to Bob. Aside from the electrons, Alice and Bob are separated by a huge interstellar vacuum, through which nothing passes. Alice and Bob each measure the x-spin of their electron. Since this is MWI, we end up with 2 Alices and 2 Bobs, each having seen one of the two possible outcomes. And let's suppose that Alice and Bob never, ever communicate with each other afterwards. The vacuum between them remains a vacuum.

As far as I can tell, there are only three possible ways to describe what happens from a Many Worlds perspective.

1) Instantaneously—i.e. with respect to any Cauchy surface that cuts to the future of both Alice's and Bob's experiments—there are only two branches in this world: one inhabited by the up-outcome-Alice and the down-outcome-Bob and the other by the down-outcome-Alice and the up-outcome-Bob. That is my understanding of the implications of MWI.

2) Neither Alice gets paired with either Bob until they send some classical message to each other announcing their outcomes. On this model, up-outcome-Alice never comes to share a branch with either of the Bobs, and there is no fact of the matter about whether the EPR correlations ever obtain.

3) Each Alice eventually gets paired with the appropriate Bob, but not through any communication. The entanglement spreads at the speed of light independently of there being any communication between the sides.

Can you explain which of these three scenarios you endorse, and why?

Bee,

I don't want to think of AdS as a box in Minkowski space in the sense you are describing it (I may have chosen my words poorly). A better analogy is to say that AdS is a sealed box which has some knobs that allow you to adjust the boundary conditions. You aren't allowed to literally "throw stuff into the box" -- all you can do is adjust the knobs (boundary conditions). In AdS, the dynamical equations and the boundary conditions are on the same footing, as both are required to evolve states in time. Specifying the theory means specifying both. Of course, when I say that "you can adjust the boundary conditions" I am referring to some external "god"; a person inside AdS cannot do this, and for them the boundary conditions are just taken to be what they are, just as for the values of the coupling constants in the field equations.

So if you hand me a state at some time and ask me to evolve it back in time, I need to know the equations and boundary conditions. I don't see why this is cheating. For example, Hawking's arguments still go through in full force, since it's the horizon that is purportedly causing the info loss, not the AdS boundary. The info paradox is the statement that even if you know the field equations and the boundary conditions, the state of the Hawking radiation is insufficient to evolve a state back in time.

Tim,

Thanks for your reply! It doesn’t really answer my question, though, which is specifically about this equation:

(*) Prob(A,B|x,y)=int d lambda Prob(lambda) Prob(A|x,lambda) Prob(B|y,lambda)

Again, I’m asking you why you think there is no other assumption necessary than locality to write down something like (*).

[If I have time I may come back to GHZ.]

The equation (*) is meant to model a certain aspect of the world, and I’d say that the model uses, apart from locality, the following:

(a) a variable that really (i.e., it’s real in the model of the world we use to posit (*)) takes on a certain value lambda, whether we know it or not, whether we measure it or not, whether we can measure it or not. If it has a particular value lambda, then that determines the probability to get outcome A when setting x is used, namely

Prob(A|x,lambda). (And similarly for B|y.)

(b) The variable takes on particular values according to a certain probability distribution. Prob(lambda) d lambda is the probability to find a value in an infinitesimal “volume” d lambda around the value lambda. Then we sum all probabilities Prob(A,B|x,y,lambda) corresponding to all possible values lambda, to get the total probability Prob(A,B|x,y), the quantity we can estimate in an experiment.

I see only one reason why we would legitimately add up such probabilities to get the total probability, and it is what I said in (a): there is one particular value of lambda *actually* occurring, and different values of lambda are expected to occur with particular frequencies in many repeated experiments (and frequencies add up!).

(Again, I think it’s this sort of picture most physicists have in mind when talking about “realism” or a “classical model” in the context of (*). However, I, too, most definitely do not want to argue about words like that, which are indeed too vague and are used with too many different meanings.)

So then, without using the words “classical” and “realism” my question for you is, what assumptions do you think are needed to write down (*)?

Or to be more explicit:

What do you think Bell meant by Prob(A|x,lambda): isn’t there the underlying assumption that something is actually the case, namely that the variable actually has a value lambda?

And what did Bell assume that allows him to conclude one sums the probabilities appearing on the right-hand side of (*)? Again, isn’t this assuming that certain quantities actually have values? [And we average over those actual values because we only know their probability of occurrence]

Steven
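For what it's worth, any model of the form (*) can be simulated directly: sample lambda, let each wing's outcome depend only on its own setting and lambda, and average over many runs. A sketch with a made-up response function (the specific model is arbitrary; the point is that the CHSH combination of such correlations never exceeds 2, while quantum mechanics reaches 2*sqrt(2)):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local model: lambda is a uniformly random angle; each wing's
# +/-1 outcome depends only on its own setting angle and lambda.
def correlation(a, b, n=200_000):
    lam = rng.uniform(0, 2 * np.pi, n)
    A = np.where(np.cos(a - lam) >= 0, 1, -1)
    B = np.where(np.cos(b - lam) >= 0, 1, -1)
    return (A * B).mean()

# CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, -np.pi / 4
S = (correlation(a, b) + correlation(a, b2)
     + correlation(a2, b) - correlation(a2, b2))

# This toy model happens to saturate the classical bound (S is close to 2),
# but no model of the form (*) can exceed it.
assert abs(S) <= 2.02
```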

bhg,

But what's the relevance of this construction? Why would you even think it has something to do with the asymptotically flat case?

To Steven and any others who think some "classicality" assumption is being smuggled into lambda:

Forget lambda. Here's a proof that doesn't involve lambda:

Set up the GHZ experiment with three experimenters A, B, and C at separated locations. The only things we care about are the macroscopic configurations of the detectors and the macroscopic instrument pointers. When experimenter A's instrument responds, the pointer points one way or another. Since the direction it points in is not supposed to be influenced by the detector settings at B and C (i.e., since locality holds), there must be some function F_A(a) which sets the direction, where "a" is the setting of the detector, which doesn't depend on the detector settings at B and C. This function can be absolutely anything you like: it can be the quantum state with the born rule applied to it, for instance. It can be stochastic or deterministic. Doesn't matter. The only requirement is that this function can't depend on b and c, the settings of the distant detectors. Similarly, there are also functions F_B(b) and F_C(c) for the other detectors. Now, the assumption that the detectors have a common past means that we can set all three functions however we like. We can make each of them have access to the same quantum state, or even to the entire universe if we want. The only restriction at all is that F_A(a) can't depend on b and c, F_B(b) can't depend on a and c, and F_C(c) can't depend on a and b.

The GHZ paper shows that there is no set of three functions such that the right results are produced for any combination of a, b, and c. Therefore, locality must be violated. At least one of the functions must depend on the setting of at least one of the distant detectors.

Now, where in the above description did I implicitly assume "classicality"?
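In fact the impossibility can be checked by brute force: with two settings and outcomes of +/-1, there are only 4^3 = 64 deterministic assignments of local response functions, and none reproduces the four GHZ predictions (stochastic strategies are probabilistic mixtures of deterministic ones, so they fail as well). A sketch:

```python
from itertools import product

# Each local response function assigns an outcome +/-1 to each of the two
# settings x and y: 4 choices per site, 4**3 = 64 deterministic strategies.
strategies = list(product([1, -1], repeat=2))  # (F(x), F(y)) pairs

def satisfies_ghz(FA, FB, FC):
    return (FA[0] * FB[1] * FC[1] == 1 and    # F_A(x) F_B(y) F_C(y) = +1
            FA[1] * FB[0] * FC[1] == 1 and    # F_A(y) F_B(x) F_C(y) = +1
            FA[1] * FB[1] * FC[0] == 1 and    # F_A(y) F_B(y) F_C(x) = +1
            FA[0] * FB[0] * FC[0] == -1)      # F_A(x) F_B(x) F_C(x) = -1

winners = [s for s in product(strategies, repeat=3) if satisfies_ghz(*s)]
assert winners == []   # no local assignment reproduces all four predictions
```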

Bee,

We already covered this, so I will be brief and then drop this topic unless you have something new to add.

1) First a question: why are you obsessed with asymptotically flat spacetime, given that this does not (apparently) describe our Universe? Personally, I seriously doubt that the behavior of solar mass (say) black holes depends in any significant way on the asymptotic structure of the spacetime it is in. If it does, then we apparently have, in principle, a remarkable tool for ascertaining the structure of spacetime at arbitrarily large distances from ourselves, far beyond our causal horizon...

2) Hawking's argument goes through essentially unchanged for a black hole in Anti-de Sitter as it does in Minkowski, hence it seems quite reasonable to suppose that the solution might be essentially the same in the two cases.

3) Taking the large radius limit of AdS/CFT reproduces gravity in asymptotically flat spacetimes in the regimes where the latter has actually been tested. So it provides a nonperturbative completion consistent with known facts about gravity in asymptotically flat spacetime. To most people, this makes it obvious why "it has something to do with the asymptotically flat case".

It sounds like unless we incorporate every single feature of our universe down to the last planet and galaxy, you will just say "why do you think this has anything to do with black holes in our universe?". I find this attitude defeatist and frankly depressing.

Tim,

#3. Of course #2 can't be right: it would be weird if the pairing of the branches depended on actions performed by macroscopic beings. And #1 makes everything nonlocal which defeats the whole point. I agree that people like Sean Carroll tend to think in #1 terms, but I think they're wrong to do so. As soon as we go to QFT, the most natural thing to do is to use operators parameterized by spacetime points, not nonlocal wave functions.

Of course, for the entanglement to spread, you need *some* kind of interaction, and that interaction might be considered a classical communication of some kind. So if that's all you mean by communication, then #2.

travis,

A "macroscopic" object such as a measuring instrument admits a classical description but - obviously - "this does not mean that the information held on the instrument, in the numbers indicated by the dials, [must] obey classical statistics." (p. 26)

Arun,

"Counterfactual definiteness" is just code for determinism, and determinism cuts orthogonal to the quantum/classical divide on any way of understanding it. Bohmian mechanics, for example, is deterministic but certainly must count as a quantum theory in any reasonable sense of the term. And on the other hand, there is no problem with a "classical" indeterministic theory that uses a local stochastic process, and hence does not support counterfactual claims. As for non-commutativity, that is a completely unsurprising consequence of causal structure that also happens all the time in classical physics. If I have an incoming beam of horizontally polarized light, then passing the light through a diagonally oriented polarizing filter and a vertically oriented polarizing filter do not commute. This is unsurprising and rather trivial, and does not reveal anything non-classical about the physics.
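The polarizer example is easy to make quantitative with Jones calculus, treating each ideal polarizer as a projector onto its transmission axis (a sketch, assuming numpy):

```python
import numpy as np

# Jones-calculus projector for an ideal linear polarizer at angle theta
def polarizer(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

P_diag = polarizer(np.pi / 4)   # 45-degree filter
P_vert = polarizer(np.pi / 2)   # vertical filter

h = np.array([1.0, 0.0])        # horizontally polarized input beam

# Diagonal filter first, then vertical: a quarter of the intensity survives.
out1 = P_vert @ (P_diag @ h)
# Vertical filter first, then diagonal: nothing gets through.
out2 = P_diag @ (P_vert @ h)

assert np.isclose(np.dot(out1, out1), 0.25)
assert np.isclose(np.dot(out2, out2), 0.0)
# The two projectors do not commute, entirely within classical optics:
assert not np.allclose(P_vert @ P_diag, P_diag @ P_vert)
```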

Travis,

Excellent! As I suspected, we are making quick progress here. But I wonder if you could flesh out your version of 3 for the precise example I described, where after the passage of the electrons, and forever after, there is a vacuum between Alice and Bob. Is it really your contention that in such a case the branch structure spreads along the light cone from Alice to Bob and Bob to Alice through *some* interaction? Since there is a vacuum state between them, I can't figure out what the interaction that goes along the light cone is supposed to be. I take it that you envisage the formation of a single branch with the up-outcome-Alice and the down-outcome-Bob to start where the future light cones of Alice's and Bob's measurements first intersect? Is that the right picture?

My own intuition, if I were an advocate of MWI, would be to go with 1 and Sean: the non-locality must be absolutely ubiquitous to keep the decoherent branching structure intact. But that's part of why I am not an advocate of MWI: the absence of any clear local beables makes understanding the theory difficult, to say the least.

travis,

"To Steven and any others who think some "classicality" assumption is being smuggled into lambda [..] Here's a proof that doesn't involve lambda:"

The irony of all this is that GHZ more clearly exposes (and shoots down) the assumption of classicality than it does the assumption of locality in the conjunction of the two. Using your notation and substituting the spin measurement directions {x, y} for a, b, c as appropriate, QM predicts the correlations

F_A(x)F_B(y)F_C(y) = 1

F_A(y)F_B(x)F_C(y) = 1

F_A(y)F_B(y)F_C(x) = 1

The classicality assumption, in its possessed-values-independent-of-measurement guise, allows multiplication of the above three equations so that (after removal of squares)

F_A(x)F_B(x)F_C(x) = 1

But QM predicts F_A(x)F_B(x)F_C(x) = -1.
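These four correlations can be checked numerically; which GHZ state realizes these particular signs is a matter of convention, and (|000> - |111>)/sqrt(2) does (a sketch, assuming numpy):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(A, B, C):
    return np.kron(np.kron(A, B), C)

# GHZ state (|000> - |111>)/sqrt(2), matching the sign conventions above
ghz = np.zeros(8, dtype=complex)
ghz[0], ghz[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)

def expect(O):
    return (ghz.conj() @ O @ ghz).real

assert np.isclose(expect(kron3(X, Y, Y)), 1.0)
assert np.isclose(expect(kron3(Y, X, Y)), 1.0)
assert np.isclose(expect(kron3(Y, Y, X)), 1.0)
assert np.isclose(expect(kron3(X, X, X)), -1.0)  # the classical product says +1
```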

Tim, Arun,

Let us add some more classical non-commutators.

Instead of throwing another poor book into a BH, this time we just rotate it around the x-axis and then the y-axis, e.g. 90 degrees each, and then y, x, which looks different, i.e. rotations do not commute, [Rx,Ry] ≠ 0, as do the generators, [Jx,Jy] = Jz ≠ 0, the algebra of SO(3).

(In Arun's balloon example we could this time let the radius go to infinity, i.e. a contraction instead of an extension, and end up in flat E(2) with one rotation, e.g. around z, and 2 translations, x and y.)

Another classical example is curvature as the non-commuting of covariant derivatives, [∇.,∇.] = R....

But now we go to QM. Here we have Heisenberg's [q(t),p(t)] = iħ. This is an equal-time commutation relation (ETCR). Without SR there are no worries about equal time, but even in QFT it works fine for creation and annihilation operators. It works so well that it explains indistinguishable particles, since they are just created from a field, which, it turns out, must be grouped into bosons, [a,a+] = 1, and fermions, {b,b+} = 1.

Even hidden in the path integral, [q(t),p(t)] = iħ can be reconstructed, as Feynman showed.

I do not know it, but I guess that in Bohmian mechanics the [q(t),p(t)] = iħ somehow must be circumvented, because there are – as far as I remember – real, classical particles which have position and momentum at the same time. Thus, somehow Heisenberg must be integrated into the superluminal pilot wave that guides the classical particles … here better Tim takes over and corrects me …

I recently found Bender's discrete-time approach. There it is nicely described that in solving a differential equation we specify the initial condition as usual. But when p and q become operators and no longer commute, the [q(t),p(t)] = iħ serves as "initial condition" and even resolves the operator-ordering ambiguity.

By the way, for massive particles SO(3), the normal rotation group, is also called the "little group" of SO(3,1). For massless particles the "little group" of SO(3,1) becomes O(2): left/right polarization for a photon, topologically quantized. Very interesting is that in the massless case a Lorentz boost becomes a gauge transformation U(1).
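The rotation example can be verified in a few lines, both for finite 90-degree rotations and for the generators (a sketch, assuming numpy and the real antisymmetric convention for the so(3) generators, in which [Jx,Jy] = Jz holds without a factor of i):

```python
import numpy as np

# Finite rotations about the x- and y-axes
def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

t = np.pi / 2   # the 90-degree rotations from the book example
# Rotating about x then y differs from y then x:
assert not np.allclose(Rx(t) @ Ry(t), Ry(t) @ Rx(t))

# Infinitesimal version: [Jx, Jy] = Jz for the real so(3) generators
Jx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]])
Jy = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
Jz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])
assert np.allclose(Jx @ Jy - Jy @ Jx, Jz)
```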

Counterfactual definiteness merely says it is meaningful to talk about physical quantities that I haven’t measured. Nothing about determinism there.

Agreed that the term non-commutativity doesn’t quite capture the intuition. The right word that describes the difference between polarization of a Maxwell EM wave from that of a photon escapes me.

AdS is unstable - generic small perturbations grow into black holes. This is different from asymptotically flat or de Sitter space. So yes: sit around for a while, no need to peer out to infinity, and some local behavior can in some cases reveal global structure.

Ok, I am out of date. Bell-type experiments can’t distinguish between classical and quantum.

https://www.osapublishing.org/DirectPDFAccess/645117D1-F764-B5EB-691E7D60786C210B_321243/optica-2-7-611.pdf?da=1&id=321243&seq=0&mobile=yes

Paul Hayes,

The outputs of the functions represent the directions of the instrument pointers. Are you saying that by assuming that the instrument pointers have definite directions, I am assuming classicality?

Tim,

could you please elaborate on

“determinism cuts orthogonal to the quantum/classical divide on any way of understanding it.”

It seems to contain the very essence, but I could not yet grasp it. I tried to interpret it, but please correct me: Bohmian mechanics (BM) is observer independent and does not need a measurement and thus has no measurement problem, thus no “quantum/classical divide”. And BM evolves deterministically with initial conditions for all particles’ positions and momenta, thus all that will ever happen is already set at the very beginning of time. Is this correct?

I also want a realist description of nature, independent of us as observers. And I also want no blow to reductionism: “… by referring to measurements, the axioms assume the existence of macroscopic objects … and this is a blow to reductionism.” (Lost in Math)

Tim,

If no radiation or anything travels between Alice and Bob, then I think the best way to describe it would be two branches on Bob's side and two branches on Alice's side, forever disconnected. Completely noninteracting systems may as well be in separate universes after all. Note that, in reality, gravitational radiation at the very least will travel between Alice and Bob. I suppose they could be beyond each other's cosmic horizon though.

If there is some radiation or something that travels between Alice and Bob, then I think the best description is what you said: Bob's and Alice's branches will connect when their light cones intersect.

I completely agree that the absence of local beables is a problem. I've come to believe that MWI in its current formulation is basically Copenhagen without collapse: we can't assign any meaning to the ontology without talking about outcomes of measurement. A deeper theory is needed.

There are two reasons why I think many worlds has a good chance of being true:

1) I think relativity is actually telling us something deep about the structure of spacetime, and it can't survive without many worlds.

2) The hidden stuff of reality seems to behave very similarly to the visible stuff of reality, suggesting that they should be the same type of ontological thing. In Bohmian Mechanics and GRW, the hidden stuff is the wave function, and the visible stuff is particles and flashes respectively. This is a very sharp dichotomy between two different kinds of stuff. But consider the double slit experiment with a single electron. The electron is definitely affected by something hidden which goes through both slits; but that something can be manipulated in ways very similar to the electron itself: it is stopped by the same types of materials, and reflected by the same types of materials. There is every reason to believe that the hidden stuff is also just electrons, existing in parallel worlds.

On the other hand, the reason most Everettians seem to favor MWI is just the simplicity of keeping the bare formalism and nothing more, as exemplified by Sean Carroll's "Mad Dog Everettianism". I think this is a mistake, but under this view it makes sense why one would no longer care about using a nonlocal wave function and simultaneous branching across the entire universe.

Paul Hayes,

Just replace the term "classicality" with "Einstein locality" and you will finally understand why quantum mechanics is necessarily non-local, no matter how it is interpreted. Just directly apply the Einstein locality criterion from the EPR paper.

Arun, your link does not work.

I also had a typo: Wigner's “little group” (massless) is E(2) instead of O(2).

Reimond,

There is an interesting story here about why there are only fermions and bosons possible in Bohmian Mechanics, while standard QM allows for the possibility of anyons, which have never been observed. Here is how it goes.

The wavefunction is defined as a complex function on the configuration space of a system. If there are no identical particles, the topology of the configuration space is simple: every closed curve can be continuously shrunk to a point. But if there are qualitatively identical particles, this is no longer true. Let A and B be two identical particles. Now consider a motion in configuration space that takes A continuously to the initial location of B and B continuously to the initial location of A, i.e. A and B exchange locations. Since they are identical particles, this motion brings us back to the original (unlabeled) configuration of the system.

Now working with topologically non-trivial spaces is hard, so the usual thing to do is to pass to the universal covering space. But if you work on the covering space, then there have to be extra constraints that are put on the allowable wavefunctions. What should those extra constraints be?

In standard QM, you demand that the exchange of labeled identical particles yield the same ray in Hilbert space, i.e. the same wavefunction up to an overall phase. This yields bosons and fermions, where the change under exchange is +1 and -1 respectively, but it also allows for anyons, which change the phase in any other way you like. Anyons have never been observed, of course.

In Bohmian mechanics, though, the constraint on acceptable wavefunctions on the covering space is stronger. Since you have the guidance equation, you can demand that the detailed particle trajectories be preserved, not just the magnitude of the wave function. And that demand can only be met by bosons and fermions: anyons are ruled out.

You can see a powerpoint about this here: https://www.math.uni-tuebingen.de/arbeitsbereiche/maphy/personen/roderich-tumulka-dateien/munich11.pdf

This: https://arxiv.org/pdf/1506.01305.pdf needs some thought.

Shifting the Quantum-Classical Boundary:

Theory and Experiment for Statistically Classical Optical Fields

“One naturally asks, how are these results possible? We know that a field with classically random statistics is a local real field, and we also know that Bell inequalities prevent local physics from containing correlations as strong as what quantum states provide. But the experimental results directly contradict this. The resolution of the apparent contradiction is not complicated but does mandate a shift in the conventional understanding of the role of Bell inequalities, particularly as markers of a classical-quantum border. Bell himself came close to addressing this point. He pointed out [2] that even adding classical indeterminism still wouldn’t be enough for any type of hidden variable system to overcome the restriction imposed by his inequalities. This is correct as far as it goes but fails to engage the point that local fields can be statistically classical and exhibit entanglement at the same time. For the fields under study, the entanglement is a strong correlation that is intrinsically present between the amplitude and polarization degrees of freedom, and it is embedded in the field from the start (as it also is embedded ab initio in any quantum states that violate a Bell inequality). The possibility of such pre-existing structural correlation is bypassed in a CHSH derivation. Thus one sees that Bell violation has less to do with quantum theory than previously thought, but everything to do with entanglement.”

David Wallace,

I enjoyed reading your paper, and I encourage anyone interested in the black hole information paradox to read it. For one thing, it serves as a useful corrective to Tim's paper.

Thanks for the offer to discuss further offline, which I may take you up on. My main goal in commenting on this thread was to try to convey that the black hole information paradox really is a paradox in the best sense, and as such gives us a foot in the door in our quest to understand quantum gravity. The worst thing that could happen is for people to believe that there really is no problem, and so it's important to counter papers and statements that give this impression.

bhg,

Sorry you find my attitude "defeatist and frankly depressing"; I would say I am being realistic. In any case, I thought we had already debunked your claim that taking the large radius limit reproduces Minkowski space - this is an assumption you make, not a conclusion you draw, so why do you again state it as if it were a conclusion?

Having said this, you are right of course that we don't live in asymptotically flat space. Are you trying to say that it's easier to transfer the AdS argument to dS than to Minkowski? And if so, why don't you say so? Best,

B.

Bee,

I don't think you are being "realistic". Instead, I think you are looking for the most negative possible slant in order to confirm your prior biases.

You are making the assumption, with no apparent justification that I can see, that the physics of a solar mass black hole depends sensitively on the asymptotic structure of spacetime. This is a pretty radical assumption that goes against everything we have learnt about physics so far. You are free to make that assumption, but you need to be honest about how radical it is.

"debunked the claim"? No, exactly the opposite is true. As I explained, it is known from explicit computation that the S-matrix in Minkowski space is recovered from the large radius limit of AdS/CFT. I noted that several times, in order to debunk your claim to the contrary. All available evidence points against your claim.

I don't care much about transferring the argument to dS since I don't think black hole physics depends on the asymptotic structure of spacetime. This is your point of view not mine. I just brought this up to note the irony of you bringing up Minkowski space given that this is not the world we live in.

Tim,

this is a good explanation by Bohmian mechanics (BM) of why

“anyons are ruled out”, but I hope that BM permits anyons with fractional (topological) angles in 2D, which occur in the fractional quantum Hall effect and are, by the way, related to the Aharonov-Bohm phase or, more generally, the Berry phase. Do you know the reason why particles get entangled in the first place? Might it be that it has something to do with them being indistinguishable?

bhg,

"I don't think you are being "realistic". Instead, I think you are looking for the most negative possible slant in order to confirm your prior biases."

I can think of many more negative possible slants than the one I am putting forward here (like the ones Tim is putting forward...) But yeah, I almost certainly have a confirmation bias. That's why I'm trying to have a conversation with you.

"As I explained, it is known from explicit computation that the S-matrix in Minkowski space is recovered from the large radius limit of AdS/CFT. I noted that several times, in order to debunk your claim to the contrary. All available evidence points against your claim."

Gosh, do we have to chew this through again? Is there like zero memory effect in this thread? I thought we had agreed that to solve the black hole information loss problem the very point is that you carry over non-local effects from AdS to Minkowski. The local ones won't do.

"I don't care much about transferring the argument to dS since I don't think black hole physics depends on the asymptotic structure of spacetime. This is your point of view not mine. I just brought this up to note the irony of you bringing up Minkowski space given that this is not the world we live in."

As I said earlier, this is because I think the limit \Lambda \to 0 from above is probably continuous while the limit from below isn't. You don't have to believe this of course.

In any case, to get back to the point, I am still perplexed why you want the time-evolution in a box to be unitary to begin with and why you think it's ok to put God-given boundary conditions on the boundary of the box. Best,

B.

Arun

That passage does not require a lot of thought to see that, as is typical, it is confused. Here are a few points.

Once again, the terms "classical" and "quantum" are tossed around without any clear meaning. Bell nowhere observes that "even adding classical indeterminism still wouldn’t be enough for any type of hidden variable system to overcome the restriction imposed by his inequalities", because he nowhere uses the obscure phrase "classical indeterminism". It is of course true—rather trivially—that adding *local* indeterminism does not help, and indeed makes the situation worse if you are trying to replicate the quantum-mechanical predictions with a local physics. For if you add *local* indeterminism, you can't even replicate the perfect EPR correlations! That was exactly Einstein's point with the example. That was why Einstein paired "God plays dice" with "and uses telepathic methods" when describing "the present quantum theory". Einstein did not reject indeterminism, he rejected telepathy—spooky action-at-a-distance—and saw that the two went hand-in-hand in the Copenhagen Interpretation.

But that is not even the main point. There are two linked fundamental errors in this passage the first of which is quite common. The common one is the misuse of the term "correlation". It is an endemic error of speech to say, for example, that a particular pair of entangled particles are "correlated". Given only a single pair of items, it makes no sense to either assert or deny that they are correlated. Example: right now, take a coin out of your pocket and flip it. I will do the same. will the outcomes of our two little experiments be "correlated"?

Well, one of two things will happen: either we get the same outcome (both heads or both tails) or we get different outcomes (one heads and one tails). Those are the only possibilities. Does either of these outcomes count as the coins being "correlated"?

Of course not! Now: if we each flip our coin 1,000 times and we always get *the same* sort of outcome, e.g. always both come up heads or both come up tails, or else always one comes up heads and the other comes up tails, *then* we can say that the outcomes of these experiments are correlated. In such a case observing the outcome of my flip provides information about the outcome of yours, even though you are far away and we seem not to be in causal contact. And of course we would also say that the outcomes are correlated if the same sort of pairing happens only 90% of the time, or 80%, or 70%. That is what statistical analysis is all about. We can calculate the statistical significance of the outcome of 1,000 flips. If after 1,000 flips we get different outcomes (one of us gets heads and one tails) 100% of the time (as quantum mechanics predicts for EPR) and also each individual coin falls heads only about 50% of the time with a random distribution (as quantum mechanics also entails), then we can be highly confident that the two coins are not fair coins! Highly, highly, highly confident, since the chance of such a result for fair coins is 1 out of my-calculator-just-broke. So let's try a string of only 100 flips that all come out different: the probability is 1 out of 2^100, i.e. 1 out of 1,267,650,600,228,229,401,496,703,205,376. But on a trial run of only one flip, the chance of getting different outcomes is 50%. So in our original case of only flipping once, the concepts of being "correlated" or "not correlated" are completely meaningless.
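The arithmetic in the coin-flip example is easy to verify; a quick check of my own (the function name here is mine, not from the comment):

```python
# Probability that two independent fair coins disagree on every one of n flips.
# Each flip-pair disagrees with probability 1/2, so n flips all disagreeing
# has probability (1/2)**n.
from fractions import Fraction

def prob_all_different(n):
    return Fraction(1, 2) ** n

# One flip: disagreement is a 50% event, so nothing can be inferred from it.
assert prob_all_different(1) == Fraction(1, 2)

# 100 flips all disagreeing: 1 chance in 2**100.
assert prob_all_different(100) == Fraction(1, 2 ** 100)
print(2 ** 100)  # 1267650600228229401496703205376
```

The denominator for 100 flips is about 1.27 x 10^30, which is the "my-calculator-just-broke" figure in the paragraph above.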

Cont'd.

When physicists say that a pair of observables are correlated for an entangled system, what do they mean? The locution is, on the face of it, very odd. For a physicist will say things like this: "in a singlet state, neither particle has any value for its z-spin but the z-spins of the two particles are perfectly anti-correlated". On the face of it, this is just gibberish: if neither particle *has* a value for its z-spin, what could it possibly mean to say these non-existent values are anti-correlated? What they mean, of course, is that *according to quantum mechanical predictions, if you were to prepare many pairs of particles entangled in this way and then were to measure (e.g.) the z-spin of each particle, the outcomes of such experiments would be perfectly anti-correlated (i.e. always different). *That* counterfactual assertion about what the outcomes of certain experiments would be *is* endorsed by quantum mechanics.

Finally, the passage makes this extremely silly error: "For the fields under study, the entanglement is a strong correlation that is intrinsically present between the amplitude and polarization degrees of freedom." This is again complete gibberish. What is the "polarization degree of freedom" in a singlet state? The "amplitude degree of freedom"? This just makes no coherent sense at all.

Travis

Thanks for the clear reply. You see how fast we can make progress if everyone is just straightforward and clear!

2 comments.

First, I understand what you are saying about my case where there is just a vacuum between Alice and Bob (and yes, let them pass out of each other's horizons while the electrons are en route). I think this is a necessary consequence of saying that the decoherence that forms branches spreads locally at the speed of light. I also think that this answer is highly heterodox in the MWI community, and runs counter to the sorts of analyses of decoherence that are actually offered. But I'll let that community speak for itself.

Second, after more than a year and hundreds of comments, let me say something actually controversial! There is, as you say, a great difficulty trying to reconcile a single-world theory with the account of space-time offered by Special and General Relativity. I agree. But my view is: all the worse for Relativity! In order to implement the required non-locality of quantum predictions in a Relativistic setting, the easy (some might say "cheap") thing to do is to add a preferred foliation to the Relativistic space-time structure. That would mean concluding that Einstein was wrong, not in any of the spatio-temporal structure he did postulate (Lorentzian metric) but in his belief that the Relativistic amount of spatio-temporal structure is complete. In one of the great ironies of all time, Einstein, who railed against Bohr's insistence that quantum mechanics is complete, himself fell prey to the error of believing that the General Theory of Relativity is not merely true but complete!

But why in the world should we believe that the General Theory *is* complete? Einstein was only tasked with accounting for gravity and classical electromagnetism: both local theories. He was unaware of the violations of Bell's inequality. So he reasonably stuck with locality since he had no empirical reason to reject it. But once we know that locality is wrong, all bets are off.

So instead of struggling with all the conceptual and technical obscurity of Everett, just throw in a preferred foliation and go Bohmian. You get to live in a single world, and everything is comprehensible again.

Reimond

Let's just be concerned for the moment about the predictions of non-relativistic quantum theory, leaving QFT and quantum gravity aside. The non-relativistic quantum mechanics that we all learn surely counts as a quantum theory rather than a classical theory. So it is reasonable to count any precisely formulated theory that either exactly or FAPP replicates the predictions of the standard accounts of non-relativistic quantum mechanics as "quantum". My point is that on this way of thinking about things there are both deterministic non-relativistic quantum theories (Bohmian mechanics) and indeterministic non-relativistic quantum theories (GRW). And on the other side, there are deterministic non-quantum theories (Maxwellian electromagnetism) and one could easily formulate a classical indeterministic theory using a local stochastic process. So the determinism/indeterminism divide is literally orthogonal to the quantum/non-quantum divide.
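The claim that a classical, local, indeterministic theory is easy to write down can be illustrated with a toy model (this sketch is my own, not anything proposed in the thread): a one-dimensional stochastic cellular automaton in which each cell updates from its immediate neighbours plus an independent coin flip. It is indeterministic but manifestly local, so a disturbance far away cannot affect a given cell in a single time step:

```python
# Toy local stochastic dynamics: each cell's next state depends only on its
# immediate neighbours and an independent coin flip. Classical, indeterministic,
# and local by construction.
import random

def step(state, rng):
    n = len(state)
    new = []
    for i in range(n):
        left, right = state[(i - 1) % n], state[(i + 1) % n]
        noise = rng.randint(0, 1)  # the local coin flip
        new.append(state[i] ^ left ^ right ^ noise)
    return new

rng_a, rng_b = random.Random(42), random.Random(42)  # identical noise histories
a = [0] * 101
b = [0] * 101
b[50] = 1  # the two runs differ only at one distant cell

a1, b1 = step(a, rng_a), step(b, rng_b)
# Locality: after one step, only cells within one site of the disturbance differ.
spread = [i for i in range(101) if a1[i] != b1[i]]
print("disturbance spread after one step:", spread)  # [49, 50, 51]
```

With the same noise history, the two runs can only disagree where the light-cone of the disturbed cell has reached, which is the sense of locality the single-world/many-worlds discussion above is trading on.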

Reimond

Entanglement is related to indistinguishability in the way I mentioned. The main point is that the configuration space of N identical particles in Euclidean space is just the space of all N-tuples of points in Euclidean space, while the configuration space of N distinguishable particles in Euclidean space is the set of all *ordered* N-tuples of points. In the notation used by Shelly Goldstein, Nino Zanghì, Detlef Dürr and Roddy Tumulka, the configuration space of N identical particles is represented by ^NR^3 and the configuration space of N distinguishable particles by R^3N. And once you try to represent a wavefunction on ^NR^3 by a wavefunction on the universal covering space, it will of necessity be entangled and satisfy permutation invariance.

I have not thought through the treatment of the quantum Hall effect, but I feel certain that a proper Bohmian treatment just falls out with no difficulty. I can ask around.

travis,

I'm afraid I've no idea what you're talking about now. I've given an explanation of where the classicality assumption comes in to the GHZ argument so please address that directly if you think there's some problem with it. Not least because I can't see any contact with GHZ in what you've written: at first I thought your 'proof' was just meant as a sketch of the well-known argument, and you'd not intended e.g.

"F_A(a) which sets the direction, where "a" is the setting of the detector"

to be taken literally, but now you've repeated it ("The outputs of the functions represent the directions of the instrument pointers"). Functions from {x, y} - the detector settings - to itself instead of to the {1, -1} outcomes?!

Tim Maudlin,

If I didn't already know it's futile I might suggest, as Werner did, that you just "read the basic texts" so that you can finally understand what "classicality" means and stop making risibly false claims.

Reimond, the arxiv link that I posted later gives a preprint of the published paper.

Paul Hayes

So here is something I find interesting. I thought at first that at least Paul Hayes is posting under his own name. And I guess that is still possible. But a little searching does not reveal any Paul Hayes who is actually employed as a physicist. That by itself is not any argument against your claims, but it is something of a curiosity, especially given the extreme self-confidence of your tone. There appears to be a blog entitled "Not Even Psiontology" started by one Paul Hayes in May 2012, but it contains zero posts. And the author of that blog has declined to offer even a single sentence about himself in his author profile. So that's also intriguing.

Be that as it may, I have gone round with Werner more than once, and he never came out ahead. If you want to specify what these "basic texts" are, then there might be a basis for some discussion. Werner's claim that to say that a theory is "classical" is to say that the state space is a simplex is—what is the word you used—risible. Do you want to try to defend it?

Paul Hayes,

I thought I was addressing your argument. I guess I don't understand what you were saying. It seems like you were just showing the proof that no set of three such functions can exist which actually give the outcomes of the experiment, which is the whole point.

The functions are not from the detector setting to itself. They are functions from the detector settings to the outcomes, where the outcomes are the direction of the instrument pointer (which is different from the direction of the detector setting). So the point is that the instrument pointer is a macroscopic object with a definite position. I'm just trying to make it clear that the outcomes themselves are perfectly definite. We don't have to assume anything about how the outcomes are generated. So long as we assume that the mechanism which generates them does not depend on the distant detector settings, the proof goes through.

So where do you think I am making an assumption other than locality that you want to reject?

Tim,

Why do you say that it runs counter to the decoherence analyses that are usually offered? In order for decoherence to occur, there must be an interaction between an environment and a system. If there is no such interaction, then there can be no decoherence. I grant you that very little work has been done on understanding how decoherence in a relativistic context occurs and so talk of decoherence spreading at the speed of light is rare, but I feel like it's just a consequence of the usual understanding of decoherence.

Adding a preferred foliation doesn't really seem like just completing relativity; it runs counter to its entire spirit. It would mean that the relativity of space and time is an illusion. In any case, locality is the main thing we should be trying to preserve if we can, not just the relativistic metric. We should only abandon locality when we're absolutely forced to, and there is still an option open. Moreover, if the world is local but there's just a lot going on around you that you don't have access to, then we have an immediate explanation for why we can't signal faster than light but can nevertheless coax some weird correlations in situations like GHZ. On the other hand, it would be just plain weird if fundamental interactions are instantaneous but coordinated, as in Bohmian mechanics, in such a way that macroscopic beings can't utilize them to transmit instantaneous signals. I understand that reality can be weird, but something just doesn't seem right about it.

Tim, Travis,

Your answer to my question about what Bell may have presupposed by writing probability distributions involving lambda was: Forget probabilities! Forget lambda!

Do you agree this can't quite be an actual answer to my question?

About GHZ: As Tim explains in one of the replies above [June 3, 6:15am], if you use 3 computers (but not quantum computers!) in 3 different locations you cannot reproduce the GHZ results. I agree. But what are the properties of computers that prevent them from reproducing the GHZ results? I would have thought that one of the properties is that every single bit in a computer really has a value 1 or 0 at all times. We could even, at least in principle, keep track of the values of all those bits during a computation (for example one in which we are trying to reproduce the GHZ results) without disturbing that calculation.

(This is one aspect in which classical computers differ from quantum computers.)

Are you denying you need this sort of assumption in the GHZ proof?

Steven

Bee,

"I thought we had agreed that to solve the black hole information loss problem the very point is that you carry over non-local effects from AdS to Minkowski. The local ones won't do."

Good, this clarifies what you are thinking, and I think we can now resolve this confusion. Consider a solar mass black hole in an AdS space whose radius R_AdS is vastly bigger than the radius of the black hole. In AdS/CFT any nonlocality is occurring at the black hole horizon scale; the physics on the AdS scale is entirely local. This latter statement is something one can essentially prove (to be honest, this has not been totally proven, but will be before long). Now, if the nonlocality were instead occurring at the AdS scale your objection would make sense and we would have no reason to expect a smooth limit to flat space. But since the nonlocality is occurring at a fixed scale as we take R_AdS -> \infty, the implication is that physics at the black hole scale goes over smoothly. Again, this circle of ideas has been carefully tested by explicit computations: by computing AdS scattering amplitudes in which the particles are aimed to collide in a fixed region, then taking the AdS radius to infinity, and checking that one correctly recovers flat space S-matrices. No discontinuity arises. It's not just words. So this procedure definitely produces *a* flat space theory of quantum gravity in which locality is respected in the absence of black holes. I don't see why you are expecting there to be a discontinuity when every computation ever done says otherwise. I simply see no rational basis for this claim.

"In any case, to get back to the point, I am still perplexed why you want the time-evolution in a box to be unitary to begin with and why you think it's ok to put God-given boundary conditions on the boundary of the box."

Quantum gravity in AdS with specified boundary conditions is unitary. So is physics in a box with specified boundary conditions. But it's just an analogy, and if you don't find it helpful feel free to ignore it (or blame Hawking and Page). All I really care about is that QG in AdS correctly describes local physics in the regimes where we have actually observed it. Why "is it OK to put god-given boundary conditions"? I don't understand the question. In AdS the boundary conditions are on the same footing as any other coupling constants, as they are both needed to define time evolution. If we lived in AdS we would try to measure them just like we would the other couplings.

Tim,

“Entanglement is related to indistinguishability”

- I could not agree more, but to entangle particles something is still missing. E.g. to entangle photons in a teleportation experiment, you need to bring them “close for a while”. This is typically done in an optical fiber. Or the 2 electrons of para/ortho helium are entangled because they are part of the same atom. Thus “locality” is also needed to entangle particles. Here I put “locality” in quotes, because it is not as strong as the locality of SR or spacetime coincidences in GR or the ETCR of creation and annihilation operators in QFT, but particles just need to be in a proximity or close neighborhood. (I also always quote “non-locality” to code spooky action-at-a-distance and no faster-than-light signaling.)

I am not sure whether this is already thought through in BM. In QFT it also enters indirectly via the path integral and diffeomorphism invariance.

Tim,

it is just a thought out of the blue and I am not at all sure, but maybe to bring back anyons in BM for the 2D case, the generalized Gauss-Bonnet theorem might help. It relates topology (global) and curvature (local) and works only in even dimensions.

BHG wrote:

"But since the nonlocality is occurring at a fixed scale as we take R_AdS -> \infty the implication is that physics at the black hole scale goes over smoothly. Again, this circle of ideas has been carefully tested by explicit computations, by computing AdS scattering amplitudes in which the particles are aimed to collide in a fixed region, then taking the AdS radius to infinity, and checking that one correctly recovers flat space S-matrices."

BHG, can you give one or more references for these computations that confirm that what goes for big AdS goes for asymptotically flat spacetimes?

Steven,

Letting the computers have bits without a definite 1 or 0 won't help you. Ultimately, the *output* has to be a definite 1 or 0. We can be as agnostic as we like about the mechanism which generates the output. Similarly, I will grant you that lambda represents all information that has a definite value in the past light cone of the two detectors. Having this information can only help you maintain locality, not hurt you. If there is no such information with definite values, then just set lambda equal to the null set and the proof goes through the same.
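The impossibility being discussed here can be checked by brute force. In Mermin's version of GHZ, quantum mechanics predicts the products X1·Y2·Y3 = Y1·X2·Y3 = Y1·Y2·X3 = +1 but X1·X2·X3 = -1. A short search over every assignment of definite ±1 values (this sketch and its variable names are mine, not from the comment) shows that no assignment of definite local outcomes reproduces all four:

```python
# Brute-force check of the GHZ/Mermin contradiction: no assignment of
# definite +/-1 outcomes to the six local quantities (X_i, Y_i for each of
# the three particles) can satisfy
#   X1*Y2*Y3 = Y1*X2*Y3 = Y1*Y2*X3 = +1  and  X1*X2*X3 = -1.
from itertools import product

consistent = 0
for x1, x2, x3, y1, y2, y3 in product([-1, 1], repeat=6):
    if x1 * y2 * y3 == 1 and y1 * x2 * y3 == 1 and y1 * y2 * x3 == 1:
        # Multiplying the three constraints gives x1*x2*x3 * (y1*y2*y3)**2,
        # so x1*x2*x3 is forced to +1 -- the opposite of the QM prediction.
        assert x1 * x2 * x3 == 1
        consistent += 1

print("assignments matching the three +1 constraints:", consistent)  # 8
print("of those, assignments also giving X1*X2*X3 = -1: 0")
```

Exactly 8 of the 64 assignments satisfy the three +1 constraints, and every one of them forces X1·X2·X3 = +1, so the count of assignments matching all four quantum predictions is zero, no matter how the outputs are generated locally.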

Reimond

No, local interaction is not needed to get entanglement. In QFT the vacuum state is highly entangled, and not because of any local interaction terms in the Hamiltonian. Indeed, trying to produce a product state in QFT requires a lot of energy. That is the source of the so-called "firewall" scenario: AMPS demands that the exterior region to the event horizon be pure, so the total state must be a product state of the exterior with the interior with no entanglement, and that is supposed to produce the firewall. It is sort of a ridiculous scenario, but there it is.

Travis

I disagree with you about the "spirit" of relativity. There are a lot of phenomena concerning light that have to be explained, and that can't be explained in a neo-Newtonian space-time. The most obvious is that the trajectory of light emitted in a vacuum is fixed by the point of emission and completely independent of the physical state ("state of motion") of the emitter. Newton could explain this in Absolute space and time by there being a fixed absolute speed of light. But there is no prospect of explaining it in a neo-Newtonian space-time, which has no absolute velocities. So if you take the "Principle of Galilean Relativity" to essentially say that there is no absolute space that persists through time, the Relativistic Lorentzian metric shows how to account for the facts about light without absolute space. What is objective (in both SR and GR) is the metric, which contains a lightcone structure, which accounts for the behavior of light. Adding a foliation on top of that does not change that explanation one bit. For phenomena that can be accounted for using the metric alone, without the foliation, use the metric alone. Adding the foliation cannot result in a loss of explanatory power.

Suppose that the foliation is needed to account for non-locality. Then it is hardly a surprise that Einstein missed it: he was only concerned with phenomena that do not make reference to the foliation.

As for why the non-locality cannot be used to signal, that too has a very natural explanation: quantum equilibrium. You can't send signals in a classical system at thermal equilibrium: the system has to have a non-maximal entropy in order to support signaling. So the existence of quantum equilibrium accounts for the peculiarity that violations of Bell's inequality still do not allow for signaling in a very, very natural way. You do not want to put the premium you seem to have on preserving locality at all costs: that way leads to superdeterminism, as you know!

Steven,

I was just trying to avoid the whole issue of lambda by giving a proof of non-locality without even mentioning it. If you accept that proof, then you have to acknowledge that Bell's result (that quantum mechanical predictions cannot be recovered by any local theory) cannot hinge on lambda.

But if you want the fuller story it is this: Bell makes zero substantial assumptions in writing down lambda. He says explicitly that lambda can stand for whatever mathematical item you like: a complex function, a matrix, a fiber bundle, a collection of all three, etc. So the only presupposition is that the physical state of the system can be represented somehow or other mathematically. If you don't think that then you should just quit doing mathematical physics altogether. If, with Bohr, you think the wavefunction is complete then fine, let lambda be the wavefunction of the system. To deny Bell's "presupposition" is to deny the possibility of doing physics at all. In particular, there is exactly zero presupposition about counterfactual definiteness or anything like that in the formalism. That is determined by the dynamics of the theory.
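To see concretely what writing down a lambda commits you to, here is a toy local hidden-variable model, entirely my own illustration: lambda is a uniformly distributed angle, each wing's outcome is a deterministic ±1 function of its own setting and lambda, and the resulting CHSH combination never exceeds the local bound 2, short of the quantum value 2√2:

```python
# Toy local hidden-variable model for CHSH. The hidden variable lambda is an
# angle; each side's outcome depends only on its own setting and lambda.
# Bell's theorem: the CHSH combination S is at most 2 for any such model.
import math

def outcome(setting, lam):
    # deterministic local response function, returning +/-1
    return 1 if math.cos(setting - lam) >= 0 else -1

def E(a, b, n=100_000):
    # correlation averaged over an evenly spaced grid of lambda values
    total = 0
    for k in range(n):
        lam = 2 * math.pi * k / n
        total += outcome(a, lam) * outcome(b, lam)
    return total / n

a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(f"S = {S:.3f}  (local bound 2, quantum value 2*sqrt(2) ~ 2.828)")
```

This particular model saturates the bound (S comes out at 2 for these settings), but no choice of response functions or lambda-distribution can push S past 2; whether that fact rests on a further "substantial assumption" is exactly what the exchange above is disputing.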

travis,

"The functions are not from the detector setting to itself. [...]"

Okay that's clear now. As, I think, is the reason you haven't understood what I was saying.

"So the point is that the instrument pointer is a macroscopic object with a definite position."

It isn't [/ they aren't]. The outcomes of the 4 combined system operators - (tensor) product triples of individual system spin operators - are definite in the GHZ state. That of course is because the GHZ state is a simultaneous eigenstate of those (U, V, W, X) operators with eigenvalues (-1, 1, 1, 1).

But the outcomes of their factors - in particular the individual spins - are not definite. The assumption that they are is the assumption of possessed values that leads to the wrong deduction of the definite value of U in the GHZ state (by multiplication of V, W and X and cancellation of squares) which I gave.

Still can't see it? Try Mermin's video, where it's even clearer because the assumption is made at the outset.

Tim,

I talked about how to entangle two photons or electrons in the lab, far away from any BH. In good old non-quantized spacetime. No entangled vacuum in sight and not needed, no "firewall" scenario, no AMPS, no Wheeler-deWitt, no strings, no AdS/CFT. Smooth and easy in the lab e.g. of Harald Weinfurter here in Munich.

And you are absolutely right:

“local interaction terms in the Hamiltonian”

are not needed, because, since photons are indistinguishable, it is enough to bring them “close for a while” in an optical fiber.

And to slow down a bit, let me just say thank you: I appreciate it very much that you take your time to discuss the various questions with us.

Tim,

You misunderstand what the substantial assumption concerning lambda is. It does not hinge on what kind of object lambda is [as I wrote explicitly before], but that, whatever it is, it (really!) takes on a particular value with some particular probability (in the model of the world that will satisfy Bell’s inequality). And again, within Bell’s model, such probabilities are used to find probabilities for the different ways (corresponding to different values for lambda) in which we may get a specific final measurement result. Those probabilities are summed to get the total probability of the measurement result. That’s a substantial assumption, one that quantum mechanics denies. And so the fact that the model fails to agree with quantum mechanics could be blamed on that substantial assumption.

Paul Hayes mentions a video of Mermin’s Oppenheimer lecture. Where exactly do you disagree with Mermin that there is a substantial assumption (one that one could deny) about “things having colors” even when we don’t measure them?

Steven

Paul Hayes,

Clearly we have a different understanding of what definite means. Do you agree or disagree that at say time t = 0, when all measurements have been performed, the instrument pointers are either pointing one way or the other, and that we can make a definite statement about which way the instrument pointers are pointing?

Tim,

I understand why quantum equilibrium means you can't signal faster than light. It just still seems somewhat ad hoc; it's sort of unexplained why there is an emergent speed limit. In relativity, the speed limit is explained by locality: things can only affect their immediate surroundings, and it follows directly from that that there must be a finite propagation time for interactions. So locality to me is essential to the spirit of relativity. In any case, like I said, we shouldn't give up on locality until we're forced to; it's a very deep principle, as Einstein recognized. Do you think that locality leads to superdeterminism even in many worlds? That would be an extremely interesting thing to know, if it could be proven.

Steven,

Your "substantial assumption" is just that there is some physical state of every experiment at the beginning, and therefore a statistical distribution of such states over a course of experiments. If you don't believe that, you don't believe in physics at all.

Reimond,

Thanks so much for the compliment. I really enjoy discussing things with anyone who really wants to listen and have a conversation. If I am mistaken in my own beliefs, I can do no better than express them as clearly and forcefully as I can so the mistake can be pointed out. I know some people react badly to that, but it is essential to how I have come to understand what I do understand.

travis,

I think maybe the problem is that we have a different understanding of what is and isn't valid inductive logic here (and perhaps I should've been clearer). The 3 individual pointer directions composing each of the measurements U, V, W and X will of course be definite (for some suitable observer) after they're made but won't (all) be definite before they're made in each of those "whole-system" measurements.

Tim, Travis,

About the GHZ paradox: for each relevant joint measurement setting for devices A,B,C, there are 8=2^3 logically possible outcomes, but quantum mechanics tells us that only 4 of those occur with nonzero probability. In fact those 4 outcomes occur with probability 1/4 each. Even quantum mechanics does not manage to assign a probability of 1 to a single one of those 4 specific outcomes. Instead, each probability of 1/4 can be written as the square of a sum of 2 amplitudes [writing everything in the standard basis |0>, |1> for each qubit], that sum being +/- 1/2. [For each of the four zero-probability outcomes, the 2 amplitudes cancel each other.]

Importantly [here comes the point!], we only use that non-classical (quantum) procedure for calculating the probability of one *single* outcome [where each of the 3 individual outcomes is specified], whereas we simply add up such probabilities for a set of *different* outcomes to calculate the total probability of getting one of the outcomes in that set. That is why your argument (that probability 1 can't possibly depend on the quantum way of getting probabilities) fails. That argument would apply only if a specific outcome had probability 1, but in a GHZ experiment there is no such probability-1 specific outcome for each of the three qubits (please see below for more on this!).
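These numbers are easy to check numerically. Here is a minimal numpy sketch (my own illustration, not from the comment above), assuming the GHZ state (|000> - |111>)/sqrt(2) and three local X measurements, each implemented by rotating a qubit with a Hadamard:

```python
import numpy as np

# Hypothetical illustration: joint outcome statistics when Alice, Bob and
# Charlie each measure X on the GHZ state (|000> - |111>)/sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # rotates a qubit to the X basis
U = np.kron(np.kron(H, H), H)

ghz = np.zeros(8)
ghz[0], ghz[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)

amps = U @ ghz        # the 8 outcome amplitudes (for this setting, each 0 or 1/2)
probs = amps ** 2     # 4 outcomes with probability 1/4, the other 4 with 0
```

The four probability-1/4 outcomes are exactly those whose three +/-1 results multiply to -1.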

You might object that the essence of the GHZ paradox can, nonetheless, be understood by just looking at the probability-1 events, i.e., by throwing the 4 actually possible different outcomes together on one heap and summarizing those results as, for example, "an odd number of "+1" outcomes occurs among the three specific outcomes." However, if you go through the details, you'll see that the paradox arises only when you try to construct specific individual outcomes to go with the specific settings. The quantum description shows this very clearly:

The GHZ state is an eigenstate of the 4 commuting operators XXX, XYY, YXY, YYX [in short-hand notation] with eigenvalues -1, +1, +1, +1, respectively.

So, Alice, Bob, and Charlie could in fact measure all 4 of these observables (sequentially) and would find with probability 1 the values -1, +1, +1, +1, respectively. These measurements are non-local and Alice, Bob, and Charlie would have to use 3-party entanglement to perform those measurements or else get together and have their qubits interact with each other. [Such measurements would not demonstrate anything about nonlocality.]
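Steven's eigenvalue claims are straightforward to verify. A sketch (my own, assuming the conventional choice (|000> - |111>)/sqrt(2) for the GHZ state, which reproduces exactly the eigenvalues quoted above):

```python
import numpy as np

# Illustrative check: (|000> - |111>)/sqrt(2) is a simultaneous eigenstate of
# the four commuting operators XXX, XYY, YXY, YYX with eigenvalues -1, +1, +1, +1.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def triple(a, b, c):
    return np.kron(np.kron(a, b), c)

ghz = np.zeros(8, dtype=complex)
ghz[0], ghz[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)

eigs = {}
for name, ops, ev in [("XXX", (X, X, X), -1), ("XYY", (X, Y, Y), +1),
                      ("YXY", (Y, X, Y), +1), ("YYX", (Y, Y, X), +1)]:
    assert np.allclose(triple(*ops) @ ghz, ev * ghz)  # eigenstate with eigenvalue ev
    eigs[name] = ev
```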

However, the paradox arises only if Alice, Bob, and Charlie instead perform very different local measurements, namely where each of them measures either X on their own qubit, or Y. They thus force specific outcomes (+1 or -1) for their individual qubits [and then they could multiply those three individual outcomes to get one number, either +1 or -1, which will agree with the appropriate eigenvalue]. That’s how GHZ experiments are done, NOT by measuring XXX or XYY, etc. Having individual outcomes (4 different possibilities occurring with probability 1/4 each) is absolutely crucial.

So, while there are probability-1 events connected to performing non-local measurements of observables, e.g., XXX, the actual experiment is done (and the paradox arises only) with three local measurements of, e.g., X_A and X_B and X_C, which do NOT have probability-1 outcomes.

And so it is only in a local model of the world where such individual values exist and occur with nontrivial probabilities that you reach a contradiction with the quantum-mechanical result.

Steven

Steven

Your error lies in these sentences:

"The GHZ state is an eigenstate of the 4 commuting operators XXX, XYY, YXY, YYX [in short-hand notation] with eigenvalues -1, +1, +1, +1, respectively.

So, Alice, Bob, and Charlie could in fact measure all 4 of these observables (sequentially) and would find with probability 1 the values -1, +1, +1, +1, respectively."

It is a little subtle, so I will let you think about it and find it yourself. If you can't, I'll explain it. But see if you can work it out first.

Tim

Steven,

"And so it is only in a local model of the world where such individual values exist and occur with nontrivial probabilities that you reach a contradiction with the quantum-mechanical result."

But such individual values do exist. The instrument pointer at t = 0 is either pointing this way or that way. So suppose it is not determined yet at t = -1 which way it will point. Whatever mechanism it is that decides which way it will point at t = 0, whether it's a random or deterministic mechanism or whatever, that mechanism cannot depend on the settings of the distant detector. That is what locality means. Do we agree on this? This is all that I'm trying to capture with my functions F_A(a), F_B(b), and F_C(c). I really don't understand what assumption you are rejecting here.

Paul Hayes,

The proof only requires that they be definite after they're made. The three functions represent any type of mechanism whatsoever which doesn't depend on the settings of the distant detectors. Somehow, we have to go from the instrument in the ready state to the instrument pointing this way or that. F_A(a) just represents whatever mechanism causes the instrument to do that. It can be any mechanism at all, so long as it doesn't depend on the distant detectors. So the inputs of the functions are definite (the settings of the detectors) and the outputs are definite (the directions of the pointers). The actual functions can be anything at all that you want (I'm using the term function here in a broader sense than the mathematical definition, because I'm allowing it to be a random function if you want so that I can't be accused of having things be too definite). Since no such local functions exist, locality must be violated. What assumption do you reject here?

What is the meaning of the term "quantum equilibrium"?

Thanks in advance!

Arun,

I'll take a stab at defining quantum equilibrium for you, to save Travis & Tim the trouble (although they are both more qualified to do so).

Consider, in the context of the de Broglie-Bohm theory, an ensemble of identically prepared systems. As we often do in statistical physics, we can imagine there are so many of these systems in the ensemble that we can model the state of the ensemble as a continuous "ensemble density function" on configuration space.

Then the Born rule means that the ensemble density must be equal to the squared magnitude of the wavefunction at all times. Since the ensemble density and the squared magnitude of the wavefunction obey the same differential equation (a continuity equation in which the flow velocity is given by the usual guidance equation), it is sufficient to require that they are equal at any one time (we usually make this an initial condition, but any choice of time works).
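That continuity-equation property ("equivariance") can be illustrated numerically. A toy sketch with my own choices throughout (hbar = m = 1, a particle in a box on [0, pi], a superposition of the two lowest eigenstates): the exact time derivative of |psi|^2 should equal minus the spatial derivative of the probability current j = Im(psi* dpsi/dx), which is |psi|^2 times the guidance velocity.

```python
import numpy as np

# Toy check of equivariance (hbar = m = 1, box on [0, pi]); all the choices
# here are my own illustrative assumptions, not from the comment above.
x = np.linspace(0, np.pi, 4001)
t = 0.3
phi = lambda n: np.sqrt(2 / np.pi) * np.sin(n * x)   # box eigenfunctions
E = lambda n: n**2 / 2                               # box energies

psi = (phi(1) * np.exp(-1j * E(1) * t) + phi(2) * np.exp(-1j * E(2) * t)) / np.sqrt(2)
dpsi_dt = (-1j * E(1) * phi(1) * np.exp(-1j * E(1) * t)
           - 1j * E(2) * phi(2) * np.exp(-1j * E(2) * t)) / np.sqrt(2)

d_rho_dt = 2 * np.real(np.conj(psi) * dpsi_dt)       # exact d|psi|^2/dt
j = np.imag(np.conj(psi) * np.gradient(psi, x))      # probability current
# continuity: d|psi|^2/dt = -dj/dx (checked away from the endpoints)
assert np.allclose(d_rho_dt[2:-2], -np.gradient(j, x)[2:-2], atol=1e-4)
```

So a |psi|^2-distributed ensemble transported by the guidance velocity stays |psi|^2-distributed, which is the equilibrium property described above.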

Unlike in most interpretations, in dBB (and also in e.g. Nelsonian stochastic mechanics) we can ask about what happens when the Born rule is not satisfied - in other words we can ask about the time evolution of ensembles whose initial density is different from the squared magnitude of the initial wavefunction. Since an ensemble which _does_ satisfy the Born rule will continue to do so forever, that is a kind of equilibrium state. There is a growing body of work which suggests that the generic behavior for such an ensemble is a kind of "relaxation to equilibrium", wherein the dynamics of the guidance equation lead to the Born rule quickly becoming true for all practical purposes in typical systems. Just like in ordinary statistical mechanics, there is a "quantum H-theorem" which describes this equilibration.

So "quantum equilibrium" is a kind of thermalization that leads to the Born rule being true, and to the impossibility of signaling. As Tim mentioned earlier, we should not expect to be able to send a signal through a system at equilibrium. In the dBB view, you can think of hbar as setting a scale below which the world is essentially thermalized and therefore impossible to exploit. Here are two papers to read for more information:

https://arxiv.org/abs/0911.2823

https://arxiv.org/abs/1310.1899

I hope the real experts here will correct any mistakes I have made!

- Mark Gomer

Arun

Just as in the standard statistical mechanical reduction of thermodynamics there is an equilibrium state for a collection of particles in a box, namely the maximum entropy state, so there is an equilibrium state for Bohmian particles relative to the wavefunction. And in the standard case, it would be impossible to signal if the universe were at equilibrium, because signaling requires the entropy of a system to increase. In just the same way, quantum equilibrium explains the no-signalling phenomenon in Bohmian Mechanics. The non-locality of the theory is such that one could superluminally signal if there were systems known to be out of quantum equilibrium, to have low entropy. But if the universe has come to quantum equilibrium—which is the state that occupies by far most of the configuration space of the particles—then there will be no signaling possible.

Tim,

Did you read on after reading the sentences you quoted? I added that those measurements are nonlocal and that Alice, Bob and Charlie either have to use 3-partite entanglement or get together to do a joint measurement. Let me assume they simply get together (that’s a bit easier to assume).

Here is what they can do in more detail:

Suppose they first measure XYY on the three-qubit system (a system described by an 8-dimensional Hilbert space). This measurement projects onto two 4-dimensional subspaces, one corresponding to the eigenvalue +1, the other to the eigenvalue -1. They will find the eigenvalue +1 (since the GHZ state is an eigenstate of XYY with eigenvalue +1). The measurement is assumed ideal so that coherences within the 4-d subspace are preserved.

(The GHZ state lies already in that subspace, so the ideal measurement won’t change it. That shows clearly why the measurement of XYY is really a single joint measurement on the 3 qubits, not a combination of 3 single-qubit measurements.)

Suppose they then measure YXY, which commutes with XYY. The state will now be projected further down to a 2-dimensional subspace, the one corresponding to eigenvalue +1.

(Again, the GHZ state lies already in that subspace, so the ideal measurement won’t change it.)

Suppose they then measure YYX, which commutes with both XYY and YXY. This projects the 3 qubit state down to a 1D subspace, the one corresponding to eigenvalue +1. (Again, the GHZ state lies already in that subspace, so the ideal measurement won’t change it.)

Finally, they can measure XXX and find the eigenvalue -1, because that’s the eigenvalue for the GHZ state.

Please enlighten me by telling me what’s wrong about this.

No Travis,

We're talking about the values of the quantities we did NOT measure. You have to assume those values exist in order to get a contradiction.

travis,

I don't understand your difficulty with this. Without any assumptions whatsoever the attempt to find suitable F_A(a) etc. is just an attempt to fill in the "Mermin table" with 1s and -1s so that they match the parity constraints. It fails. In its physical context it's a failed attempt to preassign values of x and y spin to 3 qubits in the GHZ state so that they match QT predictions.
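The failed table-filling is easy to make concrete. A brute-force sketch (my own encoding of the constraints, using the sign conventions from earlier in the thread: xyy = yxy = yyx = +1 while xxx = -1):

```python
from itertools import product

# Try every preassignment of +/-1 values to the six local quantities
# (x- and y-spin for each of qubits A, B, C) and keep those matching
# all four GHZ predictions. None survive.
solutions = [
    (xa, ya, xb, yb, xc, yc)
    for xa, ya, xb, yb, xc, yc in product([+1, -1], repeat=6)
    if xa * yb * yc == +1 and ya * xb * yc == +1
    and ya * yb * xc == +1 and xa * xb * xc == -1
]
print(len(solutions))  # prints 0: the "Mermin table" cannot be filled in
```

The reason is the parity argument from upthread: multiplying the first three constraints forces xa*xb*xc = +1, contradicting the fourth.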

Even from the bare, general probability-naive, QM perspective it appears immediately obvious why it fails: that assumption of preassignable - "possessed" - values is wrong. For example, suppose you measure A's x-spin (as part of the V measurement) and then you measure its y-spin (as part of the W measurement). Oops! You just dispossessed A of its definite x-spin value.

Nevertheless, a non-obvious reason why it fails has also been proposed: maybe the assumption that separated, non-interacting systems can't somehow confer values of observables on each other so as to respect the QM predictions is wrong.

Fine. Choose that explanation if you like. Just don't claim it's the only choice.

Paul Hayes,

I don't understand why you think we need to make the assumption that the values are preassigned in order to get the contradiction. Assume that the values have no meaning until time t = 0. Still, whatever value they get assigned at that time, so long as the mechanism which assigns it doesn't depend on the distant detectors, we will get a contradiction with QM. What part of this do you disagree with?

Steven,

We don't need to assume that the quantities we don't measure have a value. We just need to assume that whatever value is assigned to the measurement that is performed, the mechanism which assigns it doesn't depend on the distant detector. It only depends on the local detector. We just need to assume that the movement of the instrument pointer from the ready state to one of the outcomes isn't influenced by the distant detector. That's all. So long as we assume this, we get a contradiction with QM.

Steven

You're right. I quoted the wrong sentences, although why you go on about measuring observables of which the GHZ state is an eigenstate and then talking about "projecting the state down" to a subspace that it is already in is beyond me. What's the point of all that?

So the error lies here:

"So, while there are probability-1 events connected to performing non-local measurements of observables, e.g., XXX, the actual experiment is done (and the paradox arises only) with three local measurements of, e.g., X_A and X_B and X_C, which do NOT have probability-1 outcomes.

And so it is only in a local model of the world where such individual values exist and occur with nontrivial probabilities that you reach a contradiction with the quantum-mechanical result."

The simple fact is that there are different ways to measure the observable XXX. One way is the fancy way you mention (that Alice, Bob and Charlie cannot do without already sharing an entangled auxiliary) and another is the obvious way: Alice, Bob and Charlie each measures the x-spin of their particle and then they multiply the results together. They are both perfectly acceptable ways to "measure the operator XXX". The fact that they are so different, and that one is a disturbing measurement on the GHZ state and the other isn't is neither here nor there. (And just to make the point, the fancy use-entangled-auxiliaries method is disturbing on the product eigenstate |x-up>_1|x-up>_2|x-up>_3 while the simple method of each measuring the x-spin and multiplying is not.) What all this shows is that the phrase "measuring an operator" is already unfortunate, and suggests the position that Shelly Goldstein, Detlef Dürr and Nino Zanghì call "Naive Realism About Operators".

Now: the point is very simple. The GHZ state is an eigenstate of XXX, and of XYY, and of YXY, and of YYX *no matter how these operators are measured*. It is an eigenstate of all four if you happen to use your method of measuring (which you would only choose because you already knew the state the particles are in) and it is an eigenstate of all four if you happen to measure them in the old-fashioned way, by measuring either the X-spin or the Y-spin of each particle and multiplying. With respect to either method, the probability of getting a particular outcome is 1. So there are no probabilities that are not either 0 or 1 in the problem. And, as I said, in exactly these two cases the probabilities are the same as the probability amplitudes, so trying to make a distinction between the two is pointless.

And what we know is that *no local theory can make all of these predictions with certainty*. But the outcomes do hold 100% of the time. Ergo, no local theory can accurately describe the world. Ergo the world itself is not local. QED.

Contrary to what you suggest, there is no escape from this conclusion. No choice that avoids the non-locality exists.

Tim,

As I already explained in the note you attacked (with your wrong remark), the probability-1 event involving the measurements that are actually performed in a GHZ experiment, namely individual measurements X_A and X_B and X_C, concerns 4 different measurement outcomes, each occurring with probability 1/4. Each of those probabilities is (and must be) calculated in the quantum way. Your argument

“So there are no probabilities that are not either 0 or 1 in the problem. And, as I said, in exactly these two cases the probabilities are the same as the probability amplitudes, so trying to make a distinction between the two is pointless.”

is simply wrong. There clearly are probabilities that are neither 0 nor 1.

I only went into detail about a different way of measuring XXX to show that while the probability of the outcome -1 of that particular measurement is indeed simply 1 [and there are no probabilities that are neither 0 nor 1, so that you could apply your argument to it], that measurement is very different indeed: it cannot be used to argue anything about locality, and it does not lead to a paradox either because there are no individual outcomes for the 3 qubits in that measurement.

Steven

Tim,

You write:

“The simple fact is that there are different ways to measure the observable XXX.”

and then you proceed to describe the two measurements I described. However, by this statement you show what point you have missed. It is the crucial distinction between measuring the observable XXX, and measuring three observables, which in the same notation can be written as XII, IXI, and IIX (with I the identity). The latter is NOT a measurement of XXX.

You also wrote:

“The fact that they are so different, and that one is a disturbing measurement on the GHZ state and the other isn't is neither here nor there.”

That misses what the relevant difference is: it’s not that one disturbs the GHZ state and the other one doesn’t.

It’s that XXX commutes with XYY and with YXY and with YYX (so one could measure all 4 of them, without one disturbing the outcome of the others), whereas the triple of operators

XII, IXI, and IIX

do NOT all three commute with the analogous triples

XII, IYI, and IIY,

YII, IXI, IIY, and

YII, IYI, IIX

So you cannot measure all 4 triples and thus find out all values of both X and Y for all 3 qubits.
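This commutation structure can be confirmed mechanically. A sketch (my own translation of the shorthand into matrices, where e.g. XII means X on qubit A and identity elsewhere):

```python
import numpy as np

# Illustrative check of the commutation relations described above.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2)

def triple(a, b, c):
    return np.kron(np.kron(a, b), c)

def commute(A, B):
    return np.allclose(A @ B, B @ A)

# The four joint observables XXX, XYY, YXY, YYX commute pairwise...
joint = [triple(X, X, X), triple(X, Y, Y), triple(Y, X, Y), triple(Y, Y, X)]
assert all(commute(A, B) for A in joint for B in joint)

# ...but the single-qubit observables behind different settings do not all
# commute, e.g. IXI (X on qubit B) fails to commute with IYI (Y on qubit B):
assert not commute(triple(I2, X, I2), triple(I2, Y, I2))
```

Observables acting on different qubits, e.g. XII and IYI, do still commute; the obstruction is exactly the non-commutativity of X and Y on the same qubit, as the Mermin reference below says.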

As Mermin wrote in his AJP article about the GHZ paradox, it’s nice that the paradox in the end boils down to the non-commutativity of X and Y. And his conclusion there is also that that is exactly the reason why you cannot so simply assume that both X and Y have a value, even if you can measure only one of them. That is, in order to arrive at a contradiction between the quantum and the classical descriptions of the GHZ experiment, there is the substantial assumption involved that unmeasured quantities do have a value.

But you always deny that that is an assumption that one can deny. I think we see here the (wrong) reason why you deny this.

Travis,

You write

"whatever value they get assigned at that time, so long as the mechanism which assigns it doesn't depend on the distant detectors, we will get a contradiction with QM"

Do you see there are *two* assumptions here? Apart from locality (on which assumption everyone agrees) you also assume that a value is assigned (to an unmeasured quantity).

Tim,

Let me repeat the point about the 4 probabilities 1/4 that add up to 1 in the GHZ experiment.

Both in quantum mechanics and in classical physics we add up those 4 probabilities to get the total probability. Why? Because we have 4 different outcomes, and we ask what the probability is to get any one (we don't care which) of those 4 outcomes. That is, the “quantum” procedure used here is exactly the same as the classical procedure.

BUT, to get each individual probability of 1/4, we do have to use quantum rules. And here the classical procedure may well give a different answer. So, the difference between the classical and quantum descriptions of the GHZ paradox does depend on the way probabilities are calculated. (Which is what you denied.)

travis,

I don't disagree. Surely it is clear in what I wrote that the assumption is simply that the values are preassignable in the sense of being (potentially) predictive? And, obviously, the salient fact is that no such assignment can reproduce the predictions of QM. No "mechanism" of the F_A(a) type is possible.

Paul Hayes,

Ok, it seems we agree on all the facts of the matter. I'm baffled as to why we don't agree that locality is violated. To me, no local mechanism, random or deterministic, for producing the experimental results simply means no locality. What does it mean for locality to hold other than that the laws which govern the behavior of objects are local laws? Or do you agree that the laws are not local, but you just want to maintain that there is locality in a different sense?

Steven,

I don't understand what you mean. Assigning values to the unmeasured quantities can only help you maintain locality, since without doing that you can't guarantee that the absolutely perfect correlations are maintained without communication. But we don't have to assume that if you don't want to (it will only hurt you to not assume it). We can assume that there is simply no meaningful sense in which the unmeasured quantities have a value. Then when we measure, a value is randomly assigned to the measured quantity and not to any of the other unmeasured quantities. If you do this assignment without communication with the other detectors, you will get a contradiction with the QM results.

Steven,

Try reading over your own posts and see if you can make sense of them. Is it your contention that the physical procedure of Alice measuring her particle's Y-spin, Bob measuring his particle's X-spin, and Charlie measuring his particle's Y-spin, then multiplying the results (or equivalently counting whether the number of "up" outcomes is even or odd) is *not* a measurement of the operator YXY? Since this is what is actually done in the experiment, if you are right then all of Mermin's discussion of the operators XXX, XYY, YXY, and YYX is completely pointless, since those operators have nothing at all to do with the actual experimental protocol.

Or are you denying that given the experimental procedures outlined the only possible outcomes of each of these three measurements are either +1 or -1 (or, in the other way of describing it, "odd" or "even")?

Or are you denying that the probabilities for the particular outcomes in each of these cases for the experiment being carried out on the GHZ are either 0 or 1?

If you are not denying any of these three propositions, then the additional remarks you are making are simply beside the point.

I guess that one could also make the following remark. Suppose what gets measured in a particular run is XXX, and that the outcome, as it must be, is -1 (or "even"). Your further (irrelevant) remark is that that outcome can come about in 4 distinct fine-grained ways: Up,Up,Down; Up,Down,Up; Down,Up,Up; and Down,Down,Down. And each of these has a ¼ chance of occurring. Correct. And what this gives us is an instance of perfect EPR-type correlations, correlations that can convey information about an arbitrarily distant event. Thus, if Alice and Bob stick together, by pooling their results they can make a prediction with certainty about Charlie's result (either supposing that they already know what he was going to measure, or conditional on his making the measurement). And now comes the EPR reality criterion: if Alice and Bob did not in any way disturb Charlie's particle by making their measurements, then it must all along have had a surefire disposition to respond a certain way to a particular measurement. But of course, the GHZ total experimental possibilities and certain predictions about those possibilities rule out Charlie's (and by parity of reasoning Alice's and Bob's) particles from having such surefire dispositions that are independent of what the other two happen to measure. So it can't be like Bertlmann's socks, and there must be spooky action-at-a-distance. QED.

I suppose we can make one more observation. As you say, the probability of each of the four possible outcomes is ¼: all of the other outcomes have probability 0. But further, the probability of each possible outcome of YII, IXI and IIY is ½. But of course these outcomes are correlated, otherwise the probability of any particular set of three outcomes would be ⅛, whereas for four of them it is ¼ and for the other 4 it is 0. Hence, as Einstein said, if the wavefunction is complete then there must be spooky action-at-a-distance. The fact that the probabilities are calculated from amplitudes is simply irrelevant to Einstein's and then Bell's arguments. All that matters is what the probabilities...or really the observed frequencies...are.
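Those marginals can be checked with the same toy setup as before (my own illustration: the GHZ state (|000> - |111>)/sqrt(2), with local measurements in the YXY setting, each qubit rotated to the relevant eigenbasis):

```python
import numpy as np

# Joint and marginal statistics for the YXY setting (Y on A, X on B, Y on C)
# on the GHZ state (|000> - |111>)/sqrt(2). Index 0 <-> outcome +1, 1 <-> -1.
H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # X-eigenbasis rotation
My = np.array([[1, -1j], [1, 1j]]) / np.sqrt(2)   # Y-eigenbasis rotation

ghz = np.zeros(8, dtype=complex)
ghz[0], ghz[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)

p = np.abs(np.kron(np.kron(My, H), My) @ ghz) ** 2
p = p.reshape(2, 2, 2)             # p[a, b, c] = joint outcome probability

marg_A = p.sum(axis=(1, 2))        # each single outcome: probability 1/2
```

Each joint probability comes out ¼ or 0, never the ⅛ that three independent ½ marginals would give; that gap is exactly the correlation being pointed to above.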

Tim, Travis,

With apologies to Bee (for not staying on-topic), here is my last post:

You could say that a classical model of a quantum experiment is obtained by replacing operators by their eigenvalues. For example, in the GHZ experiment we replace operators X by eigenvalues x (+1 or -1) and the identity operator I by the number 1, etcetera etcetera. That’s what you do when you fill in the Mermin table.

In a classical model, whether you measure xxx or x11, 1x1 and 11x doesn’t really make any essential difference. But, there’s a big difference (*) between measuring the observable XXX (in what you call the “fancy” way but it’s simply the “proper” way) and measuring the observables XII, IXI, and IIX. And so physicists find that assigning values to observables (especially those not being measured) involves a substantial assumption (one that could fail miserably). But you see no essential difference and no substantial assumption.

Anyway, that’s just my diagnosis of what lies behind this frequently encountered scenario: a physicist arguing there are two assumptions underlying Bell’s theorem or the GHZ paradox (locality plus realism or classicality) and you claiming there is only one (locality).

(*) To see the big difference, and how that difference can be exploited, consider cluster-state quantum computing. Measuring (many) observables like XXX (more precisely, the set of all stabilizer generators) creates a cluster state, while subsequently measuring (adaptively) many single-qubit observables like XII and IXI (just to keep the same notation as for the GHZ paradox) then implements an algorithm and in the end gets the (classical) information out.

You seem to think the only point of performing a measurement is to find a particular eigenvalue and that, therefore, different ways of finding that eigenvalue must be the same measurement. Again, go study cluster-state quantum computing to get rid of that idea.

This difference is essential to your “probability-1 argument”. As I said before, that argument applies to measuring XXX and the like, but not to the measurement of XII, IXI, and IIX; yet it’s the latter type of measurements that are performed in the GHZ experiment (and, as I said before too, one does that to force each individual qubit to produce a value +1 or -1).

Now please go think about this yourself, dig a bit deeper, you can do it.

Adieu,

Steven

travis,

I made it as clear as I could that what I'm maintaining is that the (QM) laws which govern the behavior of objects are local unless you choose to add a mysterious nonlocal law to them. And if you do choose to add this nonlocality (in an attempt to preserve classicality) you're still left with a mysterious but manifestly local law (y-spin measurements making definite x-spin values indefinite again etc.). That's one of the reasons why, in my opinion, it's a very poor choice.

Either way, there's no good reason for any bafflement. You just need to see past the mindless dogma - "if it ain't classical, it ain't physics" etc. - promulgated by a few obtuse and arrogantly ignorant cranks. Even those quantum foundations researchers who'd prefer to keep a naive, classical probability-friendly realism understand that it's either “locality” or “realism” that must go.

One last (I promise) try:

You attempt to write each of the 4 eigenvalues of the commuting operators XXX, XYY, YXY, YYX as a product of the eigenvalues of the individual X’s and Y’s, only some of which you can measure (for one given set of three qubits) by measuring, e.g., XII, IXI, IIX.

You say, this attempt fails because of locality.

Most physicists ask, why do you even attempt to do that in the first place? It’s silly to try to replace operators by their eigenvalues. You shouldn’t try to go back to classical physics and assume those eigenvalues really exist, even when you don’t measure them.

Paul Hayes,

The QM laws as usually stated are explicitly nonlocal. Just try writing down what the QM laws predict for the outcomes. After one of the measurements is performed, you have to collapse the entire state so that the outcome of the second measurement depends on the outcome of the first. Without doing this, you won't get the right answer. If the laws were truly local, you would be able to write down laws which generate the outcomes using only locally available information. That can't be done in a single world model, period.

Steven

Your post to Travis and me is a perfect example of why physics is in the mess it is in now, and why it will take a revolution in physics to get it out. It is not a long post, but it has all the generic characteristics.

The main characteristic is a simple lack of intellectual integrity. Travis and I have been making very specific, sharp, precise arguments, and in my last post I asked you a series of specific, sharp, precise questions. I believe (whether you are consciously aware of it or not) that it is your inability to answer these sharp, precise questions with sharp, precise answers—or with that wonderful straight admission: "I hadn't thought about that. I have to think about it for a while"—that has led to your sudden decision to bid us adieu, under the cover that you have suddenly had a crisis of conscience over not being "on-topic". If you sincerely wished to get at the truth then the very first thing that you would have done when presented with a series of precise questions is answer them. What philosophical training teaches is the ability to stay on topic, not get deflected by distractions, and focus in on the heart of an argument. And what people who are unprepared to deal with clear thinking do is what you are now doing: leave an investigation in the middle when they can't answer a Socratic line of questioning and walk off, muttering things to try to camouflage what has just happened. (People sometimes complain I am too blunt. I will defend everything I just claimed with specific examples below. If you think it is false or unfair, you have every opportunity to respond.)

The very first sophistical blather that you are using to cover your retreat is a favorite of physicists: the "we physicists" vs. "you non-physicists" garbage. Look at what we find in your response to Travis and me and your response to just Travis:

"Anyway, that’s just my diagnosis of what lies behind this frequently encountered scenario: a physicist arguing there are two assumptions underlying Bell’s theorem or the GHZ paradox (locality plus realism or classicality) and you claiming there is only one (locality)."

"Most physicists ask, why do you even attempt to do that in the first place?"

For your information: *Travis is a physicist*. How do I know? Because I know who he is, and who Carl3 is and who Arun is. I would prefer that they also post under their full real names, but despite their not doing so I know who they are, their background and training. Maybe Travis has a good practical reason to avoid using a name by which he can be identified, because of people like you (I assume you are a physicist, because you self-present as one): a physicist can put their whole career in peril by doing foundations, and especially by doing it in a clear, rigorous way that reveals the cluelessness of most physicists. Sabine might have a word to put in here as well.

(As mentioned above, I had a moment of admiration for "Paul Hayes" for the courage to post his nonsense under his real name, but after some Googling I now suspect that "Paul Hayes" is a pseudonym. The only web footprints of a physicist "Paul Hayes" are comments in other blogs, and a blog started 6 years ago on foundations of physics by one "Paul Hayes" who has never posted a single thing in it and who discloses no personal information in the section about the author.)

"Physicists think..." or "physicists say..." or "Most physicists ask..." carry no evidential weight on foundational issues. Standard physics training contains zero course work in or exposure to foundational questions, and is antithetical to doing foundational work. Students are constantly asked to accept a bunch of hogwash, non-answers and deflections in their classes, and are given cautionary tales about how "trying to understand quantum mechanics" is the kiss of death for an aspiring physicist. I can back this assertion up with specific evidence if you like.

Con't.

The second standard rhetorical move is posing the "why not go study?" question without having any actual evidence that the interlocutor has not, in fact, studied. "Paul Hayes" did this with a sort of pathological chutzpah by recommending that I study the "basic texts" without mentioning what they even are. You have this:

"Again, go study cluster-state quantum computing to get rid of that idea."

and this:

"Now please go think about this yourself, dig a bit deeper, you can do it."

Your evidence that I have not studied cluster decomposition or quantum computing or stabilizers? Zero. So this is a deflection cum posturing cum assuming an air of superiority that we have seen over and over from BHG, the banned physphil, dark star, etc.

The third rhetorical dodge is to ascribe to the interlocutor motivations, beliefs or arguments they have never espoused and then attack them, i.e. the classic straw man argument. Example:

"You could say that a classical model of a quantum experiment is obtained by replacing operators by their eigenvalues."

Well, yes, someone could say that, but I sure as hell never said it. It would be a pretty ridiculous thing to say. So go find someone who says it and respond to them.

Now let's watch all the little rhetorical diversions work together symbiotically in one sentence:

"And so physicists find that assigning values to observables (especially those not being measured) involves a substantial assumption (one that could fail miserably). But you see no essential difference and no substantial assumption."

Note the "find" in "physicists find". "Find" is what philosophers call a "success verb": you can only find things that are actually there. So the appeal to supposed expertise in "physicists" is followed by a disguised claim to correctness. This is akin to saying "Fuchs teaches us....", where "teaches" has the air of a success verb: you can only "teach" correct claims. If we erase the rhetoric we get "physicists claim to find..." and "Fuchs asserts...", which are plainly true and plainly prove nothing by themselves. As a homework problem, take your post and cleanse it of all these rhetorical tricks and see what is left.

Con't.

The fourth, and key, rhetorical move made in your farewell address is the most obvious one: you fail to answer my straightforward questions. I asked:

"Is it your contention that the physical procedure of Alice measuring her particle's Y-spin, Bob measuring his particle's X-spin, and Charlie measuring his particle's Y-spin, then multiplying the results (or equivalently counting whether the number of "up" outcomes is even or odd) is *not* a measurement of the operator YXY? Since this is what is actually done in the experiment, if you are right then all of Mermin's discussion of the operators XXX, XYY, YXY, and YYX is completely pointless, since those operators have nothing at all to do with the actual experimental protocol."

Did you answer my question and the follow-on point? You did not. Why not? Because you can't. So you turn tail and run, leaving a trail of rhetorical smoke, straw man assertions, irrelevancies, and smug condescension to mask the retreat.

More of my questions, verbatim:

"Or are you denying that given the experimental procedures outlined the only possible outcomes of each of these three measurements are either +1 or -1 (or, in the other way of describing it, "odd" or "even")?

Or are you denying that the probabilities for the particular outcomes in each of these cases for the experiment being carried out on the GHZ are either 0 or 1?

If you are not denying any of these three propositions, then the additional remarks you are making are simply beside the point."

If I am confused *then answer my questions and show how I am confused*! Don't produce all this "I am a physicist" bullshit and straw man bullshit and use of success verbs bullshit. And worst of all, the quantum computation bullshit. If a usual training in physics is bad, a training in quantum computation is even worse for doing foundational work. Quantum computation is largely focussed on practical issues that have exactly zero foundational interest, like doing error correction and making machines stable against perturbations. Those are perfectly fine and serious issues for an engineer—which is what quantum computation is, a form of engineering—but they are of no foundational interest at all. And quantum computation people have the illusion that they understand things because they stick all the important questions in some box of a flow diagram and think they are solved. Want to "solve" the measurement problem? It's a snap: just invent a "circuit element", give it the name "measurement gate", and stipulate that it accomplishes a von Neumann collapse measurement in the "computational basis" (try to give a clean physics definition of the "computational basis"). Meanwhile, be sure to deal only with spin degrees of freedom, even though the spatial degrees of freedom are essential to actual computer design, to produce the illusion that you know what you are talking about.

So it's your choice. Answer the questions or run away. It's all the same to me. There are lessons to be learned either way.

Steven,

There is a further aspect of your recent posts that Tim did not cover, which has been bothering me: the repeated invocation of Mermin's perspective on GHZ. Paul Hayes brought Mermin in first, but you have referred to him too, approvingly. So I went hunting today for Mermin's wisdom. I was unable to be sure which AJP paper by Mermin you refer to, nor which video (mentioned by PH), so I just read through a PDF of some slides of a talk Mermin gave at Stanford, precisely on GHZ and why Bohr was right and Einstein wrong. I suppose I got the line clear enough from these slides. (If anyone wants to look, they are at http://www.lassp.cornell.edu/mermin/spooky-stanford.pdf )

Mermin runs very nicely through the upshot of the GHZ, namely: since there is no way to consistently fill in the table with pre-possessed values, the systems must not have had them prior to measurement. Now, Mermin also mentions the obvious inference to make, for the experimental context where two photons have been measured and the 3rd is still in flight (but we now know what the results will be when it is measured, conditional on, or assuming we know, what direction will be measured):

"1. One has the option of learning either the 1-color or the 2-color of any one of the things, by testing only stuff it leaves behind it. [the other two photons]

2. But all three things cannot have both a 1-color and a 2-color.

3. So the act of testing the stuff it leaves behind it must give a thing its 1-color or 2-color."

At this point I thought to myself, "Huh, Mermin does get the point! I wonder why Steven and PH were holding up Mermin as their spokesman?"

But then I read onward, and all was revealed. Mermin rejects the inference to 3. In its place, he offers this:

"3. So a thing cannot have 1-color or 2-color unless an actual test establishes that that color is R or B."

and the following re-working of a famous obscure phrase by Bohr, to clarify the perspective he is recommending:

"BOHR:

Messing up the stuff left behind [2 photons] does mess up the thing [third photon], because it messes up “the very conditions which define the possible types of predictions regarding the future behavior of” the thing."

The revised '3.' is meant, I guess, to be the way of speaking that lets us evade admitting that measuring two of the GHZ systems has non-locally affected the 3rd system. And it fits with things you have said in recent days, e.g.,

"That is, in order to arrive at a contradiction between the quantum and the classical descriptions of the GHZ experiment, there is the substantial assumption involved that unmeasured quantities do have a value."

Do you endorse Mermin's lines here, and own them as your view too? It seems to me that they are nothing more than smoke and mirrors, but I would be happy to hear your thoughts in response to the following comments on these bits from Mermin.

First, notice that the revised '3.' in no way follows from '1.' and '2.' - it is a blatant non-sequitur, more of a flat-footed refusal to draw the obvious conclusion. But let that pass; let's give Mermin what he insists on: that the 3rd system has no definite color until measured, even if the other two have been measured. Nonetheless, once the other 2 are measured, something has changed, and changed about either the whole system (all 3 systems - spread out in space, i.e. a non-local object), or about the 3rd system itself (if we are permitted to talk about it alone), because now we are able to predict with certainty the result of measuring the 3rd, which would not have been the case if we had not done that. Notice that the revised Bohr quote admits this very clearly: messing up (measuring) the 2 systems "left behind" DOES mess up the "thing" [3rd system], because it messes up "[blah blah]". Even Mermin here admits that the 3rd system has been messed with!
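The conditional certainty being appealed to here can be checked directly. The following sketch (my own, not from the thread; |0⟩ = up, |1⟩ = down, state and measurement choices as in the GHZ discussion) projects the GHZ state onto definite outcomes for the first two measurements (Y on particle 1, X on particle 2) and asks what probability remains for the third particle's Y-outcome:

```python
import numpy as np

# GHZ state (|000> - |111>)/sqrt(2), with |0> = up, |1> = down
ghz = np.zeros(8, dtype=complex)
ghz[0], ghz[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def proj(op, outcome):
    # projector onto the +1 or -1 eigenspace of a Pauli observable
    return (np.eye(2) + outcome * op) / 2

# Particle 1: Y-measurement gives +1; particle 2: X-measurement gives +1
P12 = np.kron(np.kron(proj(sy, +1), proj(sx, +1)), np.eye(2))
post = P12 @ ghz
post = post / np.linalg.norm(post)

# Probability that a Y-measurement on particle 3 now gives +1
P3 = np.kron(np.eye(4), proj(sy, +1))
p_plus = np.linalg.norm(P3 @ post) ** 2
print(round(p_plus, 6))  # 1.0: the 3rd outcome is fixed in advance
```

Before the first two measurements, each Y-outcome on particle 3 had probability 1/2; after them, it is 0 or 1, which is exactly the change Carl is asking about.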

CONT.

CONTINUED:

But it has not been messed up, Mermin wants to say, in the sense of having acquired a definite value of color. Fine, if you and Mermin want to say that values, properties etc. don't exist until we look at them, go right ahead. Now please explain to us what determines the outcome of the measurement of the 3rd system, when that IS done. Because we know what the outcome's gonna be in advance under the specified conditions (measurements on the other 2 having been done, and such-and-such results obtained); and under those exact conditions, the 3rd system always does the same thing. It "knows how to behave", in some sense. Or, the whole 3-particle system "knows how to behave", if you prefer; I don't care, because that is a thing that extended across a spacelike region, and if you are saying that THAT has been affected by measuring 2 of the 3, that is admitting all the non-locality that I need.

If you believe in a 'reality out there' in any sense at all, then you have to admit that something changes when the measurements on the other two particles have been done, and that something is relevant to (indeed: determines) the outcome of the 3rd particle measurement. So what is it that changes, and how is it a change that determines the outcome of the 3rd measurement without anything deserving the title 'non-locality' going on?

Let's now come back to the Bohr line: what changes is "the very conditions which define the possible types of predictions regarding the future behavior of the thing." Let's go with this: are those "conditions which define" things that are out there in the world? If you admit they are, then you will not be able to evade the conclusion that those "conditions" have been changed non-locally, because the conditions have to do with what happens in the 3rd measurement. I assume you are not going to argue that all the change happens only in the region of particles 1 and 2, and the 3rd particle just always does the right thing by magic or coincidence. But if not magic or coincidence, why does the 3rd particle always do the right thing when measured?

What I guess you will say is that what changes, and what Bohr was really talking about, is just in our heads: it's a change in what we can predict about the 3rd particle, nothing more. But "nothing more" doesn't explain why the 3rd particle always does the right thing, nor does the epistemic change in the heads of the experimenters who see the outcomes on the first 2 particles. So: what is your explanation?

Please note that in stating my challenge, I have not made a hidden assumption that I am blind to and you guys keenly spot. I am granting you that the 3rd particle doesn't have a value until measured. Nonetheless, somehow it always does the right thing, always the same (in the specified conditions). Please explain how this is the case without any non-locality.

Sincerely,

Carl Hoefer = Carl3

Philosophy, University of Barcelona

"Because I know who he is, and who Carl3 is and who Arun is. I would prefer that they also post under their full real names, but despite their not doing so I know who they are, their background and training."

Just click on the "Arun" that appears on top of this comment; you will see my blogger profile and from there you can see who I am, or perhaps more importantly, much of what I think about.

Sigh. If I post two envelopes randomly to two addresses, one containing a picture of a heads-up coin and one a picture of a tails-up coin, then when you open one of the envelopes you have a prediction of what the other envelope holds. That works, but it doesn't bring up non-locality, does it? So the "non-locality" is more subtle than that challenge.

travis,

The QM laws are explicitly local, explicitly nonclassical, and single-world (but multi-observer). The nonclassicality is explicit in the non-simplex structure of quantum probability state spaces: in the GHZ context Alice, Bob and Cindy are each given one of three "completely unpolarised" qubit systems prepared in the correlated state (↑⊗↑⊗↑ - ↓⊗↓⊗↓)/√2.

The locality is explicit in the structure of the observable algebras: Alice's algebra of observables is the (tensor) product M²⊗id⊗id. Bob's is id⊗M²⊗id and Cindy's is id⊗id⊗M². Every operator in one commutes with every operator in the others. They're local operators corresponding to local measurements.

The operators corresponding to the overall correlations / the steps in the GHZ experiment sequence are compositions of those local operators (σ₁⊗σ₂⊗σ₂ is the (algebra) product of σ₁⊗id⊗id, id⊗σ₂⊗id and id⊗id⊗σ₂ etc.) They're nonlocal w.r.t. Alice, Bob and Cindy but obviously could become the appropriate local operators for some observer, as could the pairs tensored with one identity operator. They leave the GHZ state unchanged but, more generally, whether a 'collapse' occurs, meaning that a measurement result is used to update a probability state accordingly, depends on which observer you're talking about.
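The algebraic claims here are easy to verify numerically. A quick sketch (my own, with numpy) checking that σ₁⊗σ₂⊗σ₂ is the (algebra) product of the three local factors, and that those factors mutually commute:

```python
import numpy as np

id2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)     # sigma_1 (Pauli X)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)  # sigma_2 (Pauli Y)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

A = kron3(s1, id2, id2)  # Alice's local operator
B = kron3(id2, s2, id2)  # Bob's
C = kron3(id2, id2, s2)  # Cindy's

# The global observable is the matrix product of the local ones...
assert np.allclose(A @ B @ C, kron3(s1, s2, s2))
# ...and the local operators mutually commute
assert np.allclose(A @ B, B @ A) and np.allclose(B @ C, C @ B)
print("checks pass")
```

Whether this algebraic locality settles the dispute is, of course, exactly what the thread is arguing about; the code only confirms the tensor-product identities themselves.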

So again, it's local noncommuting operators acting in the both definiteness- and indefiniteness-inducing manner I described, e.g. when Alice's σ₁⊗id⊗id is followed by her σ₂⊗id⊗id, that's the (obvious) reason QM is able to "get the right answers" where the imposition of naive and, I would argue, perverse classicality assumptions isn't able to. No spooky nonlocality and no multiple worlds are needed, and I can see nothing to be gained by invoking them. Period.

For what little it is worth, what I get out of what has been said on the current issue is something like this:

Go back to the classical experiment with red and blue marbles. Bob has a 50-50 chance of getting either. A marble arrives and it turns out to be red, so he immediately knows, across space and time, that Alice has received or will receive the blue marble. This is accepted without cause for surprise because it is considered that the marbles carried their colors with them as they were separated in space and time.

In the quantum cases, the separated particles carry their entanglement with them across space and time.

Sabine, Tim, Travis and all who care about this non-locality puzzle,

Tim's "determinism/indeterminism divide" of BM (deterministic) and GRW (indeterministic) is interesting, since both BM and GRW are observer independent with respect to the measurement. SR told us long ago: the laws need to be observer independent - well, this got a bit messed up in Copenhagen.

But BM and GRW are non-relativistic, and I agree with Travis that "Adding a preferred foliation doesn't really seem like just completing relativity; it runs counter to its entire spirit". Einstein's deep insight to save the very determinism in GR was that only spacetime coincidences are real. Particles and the metric finally should not refer to coordinates, especially not to a preferred frame – GR is diffeomorphism invariant.

What about determinism?

Sure, we know that the unitary evolution in QM is deterministic and linear. And it must be like this; without unitarity it would mess with probabilities. But is the evolution of big things, or the universe itself, entirely deterministic? (And linear??)

Look at all the complexity on our planet. Do you really think that all this is already encoded in the very initial conditions at the beginning of time? I do not think so. This would render Darwinian evolution completely nonsensical. Another argument in favor of a not completely deterministic theory is that the BH info loss paradox dissolves immediately (well, as well as some hopes for new physics beyond the SM).

To generate complexity Nature uses feedback (non-linearity) and a bit of randomness.

If the linearity of QM were not broken, e.g. in the measurement, there would be no chance to get complex structures like clouds, plants or us.

Linearity/unitarity of QM:

The linearity of QM clearly signals, at least for me, that it is only one component of a more complete process. I would roughly compare QM with the differentials in tensor calculus of GR. Only the differentials dx'/dx are linear, but the coordinate transformation x→x' itself is in general non-linear.

We return to classical physics in the limit ℏ→0: because of e^(iS/ℏ) in the path integral, only the "classical path", i.e. δS=0, survives. For 1/c→0 we go non-relativistic, and in the limit G→0 we switch off gravity. But our world needs both QM and SR/GR (ℏ, c, G).

The unitarity of QM and the squaring of amplitudes to get probabilities clearly signal, at least for me, that the measurement is another important component, but it must be observer independent to comply with the spirit of SR.

"non-locality" of QM:

I will argue that the key to resolving the "non-locality" puzzle is a clear distinction between the unitary evolution (U) and the reduction (R), separated by an observer independent triggered measurement.

Here I already mentioned that the QM state is a non-local entity - its very job description is to connect spacelike separated points. Within a path integral, the creation and annihilation operators are local, because they create/destroy particles at a single spacetime point, but the virtual particles they create can be way off mass shell on their path, which will be summed up (superimposed), and only a resonance at their rest mass reminds them of SR.

But almost all non-locality will be encapsulated in the unitary evolution (U).

I will argue that the square of the Planck mass, ℏc/G, has something to do with the threshold beyond which an observer independent measurement will be triggered. Spacetime at the Planck scale becomes granular, but not quantized.

CONT.

I will further argue that our world is the result of myriads of tiny process steps. Each step is a unitary evolution (U) followed by a triggered reduction (R).

Separability:

Here we have to go slowly. Let us assume we start with a bunch of unentangled particles (e.g. electrons), i.e. a product state |Ψ>=|1>⊗|2>⊗…⊗|n> (***). The state will unitarily evolve, and those in close proximity (e.g. |2> and |3> in a helium atom) get entangled, forming e.g. for parahelium a singlet |↑>|↓>-|↓>|↑>≡|E2>, and thus |Ψ>→|Ψ'>=|1>⊗|E2>⊗|4>⊗…⊗|n>. Soon the state will evolve into a product of tiny entangled states |Ψ>→|Ψ''>=|E1>⊗|E2>⊗…⊗|Em>, but, and this is very important, it is a product of states and not an entangled state. (Remember: an entangled state is a state that cannot be written as a product state.)
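The reminder in parentheses has a simple operational test: a two-qubit state |ψ⟩ = Σ c_ij |i⟩|j⟩ factorizes as |a⟩⊗|b⟩ exactly when its 2×2 coefficient matrix (c_ij) has rank 1. A small sketch (my own illustration, not part of the commenter's model):

```python
import numpy as np

def is_product(psi):
    # rank-1 coefficient matrix <=> the state factorizes as |a>⊗|b>
    return int(np.linalg.matrix_rank(psi.reshape(2, 2))) == 1

up_up = np.array([1.0, 0, 0, 0])                # |↑>⊗|↑>: a product state
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)  # (|↑↓> - |↓↑>)/√2

print(is_product(up_up), is_product(singlet))  # True False
```

The rank here is just the Schmidt rank, so the same test generalizes to any bipartite cut.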

Also, e.g. |E1>⊗|E2>→|E1E2> can get entangled and form a bigger entangled state containing more particles, which I will call a U-patch (U for unitary; it is basically a path integral).

Measurement:

Each U-patch (tiny state) will get bigger (more real particles from the environment, or virtual particles). It must get bigger, since the only mechanism we know that can unentangle stuff is a reduction (R). Each U-patch grows until the superposed mass/energy would start to curve non-quantized spacetime. But non-quantized spacetime cannot be in superposition. The assumption is now that beyond a certain threshold a reduction (R) will be triggered (inspired by Penrose). But this affects only this U-patch (this single tiny state), since the state of the world is a product of tiny states.

One U-patch could e.g. be the singlet in an EPR experiment. The measurement is triggered once Alice and/or Bob get their apparatus (just made of QM particles) close, so additional particles (the pointer) get entangled with the EPR pair. In this case the observer just plays the role of bringing additional particles close, which would have happened anyway, observer independently, by particles from the environment if not sufficiently shielded – this saves reductionism as Sabine requires (see the quote at the end), because now the measurement is triggered by just another bunch of QM particles.

The assumptions for the dynamics of this U/R-process are:

1.) Spacetime is non-quantized. The very tension between non-quantized spacetime and mass/energy in superposition (a tiny fraction of the Planck mass) triggers a reduction (R).

2.) The state of the universe is a product of tiny states (U-patches), each independently reducible. (And NOT a wholly entangled mess, including spacetime.)

3.) Once reduced, and not in superposition any more, a tiny backreaction on the non-quantized spacetime curvature happens.

Once a tiny state is reduced (R) its particles get entangled again, i.e. form new U-patches and evolve unitarily (U) until a threshold is reached, get reduced (R) … and so on…. This happens everywhere and all the “time”.

Because our world needs some rough balance, with enough unitary evolution (U) before a reduction (R) is triggered, this would also explain why gravity (G) needs to be so weak.

With every tiny backreaction of a reduced U-patch, spacetime is glued together (smooth above the Planck scale) like in a sheaf cohomology, thus also allowing changes in topology, i.e. adding and removing holes (BHs) in the spacetime.

-----------------

(***) It is really cumbersome to write this here in the comment (and I skip the ⊗ where it is obvious, like in |↑>|↓>), therefore:

I put a link in my profile name and you can find all this in the pdf under ‘Locality and QM’ and 'Entanglement, EPR and its "non-local" aspect'.

CONT.

When Alice and Bob are halfway across the universe it is obvious that a U-patch also needs a curved spacetime patch, otherwise it would not be clear what the notion of “parallel polarizers” should mean.

With every reduction (R) a bit of QM randomness is added and converts the block universe into a not exclusively deterministic one, where the symmetry between the future and past is broken - the flow of "time".

Now back to “non-locality”.

As I said above, within a U-patch there is pure QM non-locality. And this is needed in the unitary evolution for the amplitudes to come out right. Only after the reduction (R) does a real, on mass shell photon/electron "travel", respecting SR. [Also, since a U-patch lives on a curved spacetime, if it touches a horizon this also includes Hawking radiation.] And unlike p or q, time is not an operator in QM. Time in the unitary evolution (U) is used to get the amplitudes right, especially on a curved spacetime, so that all complies with SR/GR after the reduction (R). All? Not quite. Travis said: "I think relativity is actually telling us something deep about the structure of spacetime, and it can't survive without many worlds."

The U/R-process is a simple alternative. Almost all non-locality is encapsulated in the U-patches; only EPR spacelike correlations survive, but no superluminal signaling. Thus it is relativity and QM together that tell us how events in spacetime, and spacetime itself, behave.

MWI is absolutely capable of "solving" the measurement problem and the "non-locality" of QM. But by doing so MWI can no longer break the linearity of QM.

Also, if we tried to quantize spacetime we would just extend the linearity of QM onto spacetime, and thus would again lose the feedback (non-linearity) and the QM randomness (calculated in U and realized in R) needed to generate complex structures in myriads of observer independent triggered measurements/reductions (R).

The U/R-process simply uses the tension between QM and non-quantized spacetime, without changing QM (QFT, SM) or GR. It is non-local realism (observer independent, psi-ontic). And in short:

GR breaks the determinism of QM and QM randomness breaks the determinism of the block universe.

If we regard our world as the result of a process (in contrast to, e.g., exclusively integrating a differential eq.), a lot of problems and paradoxes immediately dissolve.

-----------------

Under my profile name there is further detailed information.

JimV

The disanalogy between the marble case and the quantum case is, of course, that while the properties of being red and being blue are local properties—that is, the redness of the red marble is a physical feature of it that can be specified without any reference at all to the blue marble, and the blueness of the blue marble can be physically specified without reference to the red marble (or to anything else outside the vicinity of the blue marble)—the entanglement of the two particles is not a logical consequence of the local physical states of the two particles.

If you want to ascribe a local physical state to each individual electron in an entangled pair, then the only mathematical objects you have available are the reduced density matrices of each. These matrices are identical for the two particles and, crucially, are also identical to the reduced density matrix of each electron in a pair in the m = 0 triplet state (just replace the minus sign in the definition of the singlet with a plus sign). This reduced density matrix assigns a .5 chance of getting an "up" outcome for a spin "measurement" in any direction. But since the density matrices of the electrons are precisely the same for the singlet and the m = 0 triplet, just knowing the local state of each particle gives you zero information about what correlation there might be if spin "measurements" are carried out on both in some given directions. For example, if two x-spin "measurements" happen to be carried out on the electrons, the pair in a singlet state are certain to give opposite outcomes and the pair in the m = 0 triplet state are certain to give the same outcome (and in neither case is it determined which particular outcome either particle will give). And there are other states in which each particle has the same reduced density matrix and the correlation predicted for a pair of spin "measurements" is whatever you like.
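The density-matrix point can be verified in a few lines. A sketch (my own, with numpy) computing each particle's reduced state for the singlet and the m = 0 triplet, together with the x-spin correlation that nonetheless distinguishes the two joint states:

```python
import numpy as np

def reduced_first(psi):
    # partial trace over the second qubit of a two-qubit pure state
    m = psi.reshape(2, 2)
    return m @ m.conj().T

s = 1 / np.sqrt(2)
singlet = np.array([0, s, -s, 0])  # (|↑↓> - |↓↑>)/√2
triplet = np.array([0, s,  s, 0])  # (|↑↓> + |↓↑>)/√2, the m = 0 triplet

# Identical local states: both reduce to the maximally mixed matrix I/2
assert np.allclose(reduced_first(singlet), np.eye(2) / 2)
assert np.allclose(reduced_first(triplet), np.eye(2) / 2)

# ...yet the joint x-spin correlations are opposite
sx = np.array([[0, 1], [1, 0]])
xx = np.kron(sx, sx)
print(round(singlet @ xx @ singlet, 6), round(triplet @ xx @ triplet, 6))
# -1.0 1.0
```

So the local states carry literally zero information about the joint correlations, which is the whole point of the paragraph above.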

The red marble/blue marble case is the case of Bertlmann's socks. Each sock has its color all along and experimenting on or observing one sock has zero physical effect on the other. There is no non-locality, but by observing one sock and determining its color (and knowing something about Bertlmann!) you acquire information about the other sock. That is completely local and non-mysterious, as Bell says: that is why he brought up the example of Bertlmann—to clearly make the point *that the quantum mechanical case simply cannot be like that*! Here is the opening paragraph:

"The philosopher in the street, who has not suffered a course in quantum mechanics, is quite unimpressed by the Einstein-Podolsky-Rosen correlations. He can point to many examples of similar correlations in everyday life. The case of Dr. Bertlmann's socks is often cited. Dr. Bertlmann likes to wear two socks of different colors. Which color he will have on a given foot on a given day is quite unpredictable. But when you see the first sock is pink, you can already be sure that the second sock will not be pink. Observation of the first, and experience of Bertlmann, gives immediate information about the second. There is no accounting for tastes, but apart from that there is no mystery here. Is not the EPR business just the same?"

And of course, Bell's whole point is that *according to the Copenhagen interpretation* the whole business is *not * the same because Bohr and company insisted on the completeness of the wavefunction and the wavefunction does not specify the z-spin of either particle.

A few pages later:

(Con't)

"It is in the context of ideas like these [i.e. Copenhagen] that one must envisage the discussion of the Einstein-Podolsky-Rosen correlations. Then it is a little less unintelligible that the EPR paper caused a fuss, and why the dust has not settled even now. It is as if we had come to deny the reality of Bertlmann's socks, or at least of their colors, when not looked at. And as if a child has asked: How come they always choose different colors when they *are* looked at? How does the second sock know what the first has done?

"Paradox indeed! But for the others. Not for EPR. EPR did not use the word 'paradox'. They were with the man in the street in this business. For them the correlations simply showed that the quantum theorists had been hasty in dismissing the reality of the microscopic world...."

Years later, Gell-Mann would have the audacity and chutzpah to showcase his incomprehension of Bell's point by saying that all the talk of non-locality in quantum theory is just "flapdoodle" and the whole thing is just like the case of Bertlmann's socks, apparently blindly unaware that that very example was used by Bell to insist that the quantum case *cannot* be like that! That is the level of understanding you can expect from a Nobel-prize-winning physicist when it comes to foundational questions.

Paul Hayes

Since you are channelling Reinhard Werner here with this nonsense about classicality meaning that some space or other (The state space? A probability space, i.e. the space of convex sums of states weighted by a probability measure? No, a "quantum probability state space", whatever the hell that is.) is a simplex, the right thing to do is respond with the same response I gave to Werner, and that he could never coherently answer: locate, precisely, where in the EPR argument this assumption about some space or other being a simplex is to be found.

Answer: nowhere.

The EPR argument runs off of one assumption: the locality assumption that is implicit in the EPR deployment of the criterion of physical reality:

"The elements of the physical reality cannot be determined by a priori philosophical considerations, but must be found by an appeal to results of experiments and measurements. A comprehensive definition of reality is, however, unnecessary for our purpose. We shall be satisfied with the following criterion, which we regard as reasonable. If, without in any way disturbing a system, we can predict with certainty (i.e. with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity. It seems to us that this criterion, while far from exhausting all possible ways of recognizing a physical reality, at least provides us with one such way, whenever the conditions set down in it occur. Regarded not as a necessary, but merely as a sufficient condition of reality, this criterion is in agreement with classical as well as quantum-mechanical ideas of reality."

The EPR deployment of this criterion uses locality in the following way: they take it as so obvious that no one would deny it that experimental operations carried out on one system cannot in any way (i.e. faster than light) disturb the physical state of a system arbitrarily far away. That's it. Period. End of story.

Granting that locality premise, the EPR argument for the incompleteness of the wavefunction, and hence for the falsity of the Copenhagen Interpretation, goes through without a hitch.

It also follows—and this is what Bell naively thought anyone could see just by reading EPR—that retaining locality requires postulating a deterministic theory, which was precisely Einstein's conclusion.

Now: point out where EPR are assuming that anything is a simplex.

Dear Reimond

There is quite a lot in your last post to discuss. It is interesting and very useful to see how you regard all of these different pieces of interpretation hanging together, so thanks for taking the time to put it all down in one place. But commenting on all of it at once would tend to lose focus. So let me start with three observations.

First: yes, the only theories I think have any chance of being taken seriously are Quantum Theories Without Observers, a phrase that Shelly Goldstein has used so much that we just use the acronym QTWO. Indeed, the three important conferences that Shelly and Detlef Dürr and Nino Zanghì organized, the first one shortly after Bell's death, were QTWO, QTWO II and QTWO III. Any attempt to somehow require the existence of observers in order to make sense of physics must wind up with Wheeler's bizarre U-with-an-eye self-excited retrocausal circuit which is—to be blunt—madness. If there have to be observers for there to be physical states, and the observers are themselves physical things (which they must be to interact with the non-observers), then the theory ends up in a vicious regress paradox. Each observer needs another, physically prior observer to bring it into existence. Giancarlo Ghirardi would often quote Bell, I believe from a personal correspondence, saying that physics should neither require, nor be embarrassed by, the existence of observers. As usual, Bell was right on target.

Second: The division into fundamentally deterministic and fundamentally stochastic theories, into Bohm and GRW if you will, is the natural outcome of worrying about locality. Einstein saw immediately that the Copenhagen Interpretation, being committed to both fundamental chance and to the completeness of the wavefunction, was committed willy-nilly to spooky action-at-a-distance. This predates even the 1927 Fifth Solvay conference, and was a theme in Einstein's critique of Copenhagen from the beginning. This connection of chanciness with action-at-a-distance was made quite explicit by Einstein, and has been systematically repressed. When I was writing "What Bell Did" I did an internet search and was shocked to find that in the vast majority of cases Einstein's statement

"It seems hard to sneak a look at God’s cards. But that he plays dice and uses “telepathic” methods (as the present quantum theory requires of him) is something that I cannot believe for a moment."

was shortened by ellipsis to read

"It seems hard to sneak a look at God’s cards. But that he plays dice ... is something that I cannot believe for a moment."

This mutilation of Einstein's clearly expressed thought serves no purpose except to misrepresent the historical record, making it appear that Einstein just could not accept fundamental chance. Nothing could be further from the truth. He says explicitly in his letters that he has no problem with indeterminism: it is the non-locality, the spooky action-at-a-distance of quantum theory he could not abide. Any objective collapse theory, of the sort you have in mind, is immediately committed to non-locality if it is to recover the plain facts about the perfect EPR correlations.

(Con't)

Third: You are quite right that adding a preferred foliation to space-time violates the spirit of Relativity! I don't at all suggest otherwise. The question is: so what? The only phenomena that Einstein had to worry about replicating were those treated by Maxwellian electrodynamics and Newtonian gravitational theory. Maxwell's theory is explicitly local: it makes no use at all of the notion of distant simultaneity. Einstein repeats this point clearly and explicitly. Newtonian Gravitational theory is a different problem. One would have thought that if you can get a fully Relativistic local electrodynamic theory that yields Coulomb's inverse-square law as the electro-static limit, you could just follow exactly the same steps to get a fully Relativistic local gravitational theory that yields Newton's inverse square law as the gravo-static limit. And I am embarrassed to say that it was only very, very late in my career that I learned from a historian of GR that of course that was exactly the first thing everyone tried, and it just fails empirically. (I think it gets the anomalous advance in the perihelion of Mercury wrong, but I'm not certain of that.) So it took Einstein (and Hilbert) another decade to come up with the General Relativistic Field Equation, which is (of course) also fully Relativistic and local. It also has no use for any notion of distant simultaneity. And Einstein concluded from these results that physics as a whole was headed in the direction of complete locality of both the physical states of systems and of the dynamical laws, with no reference to absolute simultaneity used. Ockham's razor would do the rest.

But of course neither Maxwellian electro-dynamics nor Newtonian gravitational theory was tasked with accounting for the violation of Bell's inequality for experiments done arbitrarily far apart! So of course neither Special nor General Relativity has anything like a preferred foliation: they didn't need one to recover all of the empirical data that anyone had to hand. The problem is when the rejection of a preferred foliation is elevated into some sacrosanct principle that can never be violated, even once Bell's result has been proven and violations of Bell's inequality have been observed for experiments done at spacelike separation! Then you are in a whole new ballgame, one that Einstein never even vaguely imagined. I would like to think that had Einstein known of Bell's theorem and of the relevant experimental results, he would have been the first to recant, and consider adding in a preferred foliation if that could yield a clearly articulated rational theory (as opposed to the Copenhagen "tranquilizing philosophy", as he called it). But whatever Einstein would have done, it is clear what we should do: open the books again and open our minds to the possibility that Relativity left some space-time structure out of account because when it was developed there was no empirical evidence of non-locality. Adding back in a preferred foliation should be a reasonable hypothesis to consider. Of course, that raises the question of why the preferred foliation cannot be empirically discovered (as far as we know). This is intimately related to the question of why, despite the necessity of non-locality, we cannot send superluminal signals. And the answer to both these questions in Bohmian mechanics is simple and compelling: quantum equilibrium. The answer in GRW is very, very different. I will leave you a while to ponder the situation.
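For readers who want to see the violation mentioned here concretely: it can be verified in a few lines. This is just the textbook CHSH setup for the singlet state with the standard optimal angles, nothing specific to the foliation discussion; all names are mine.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def spin(theta):
    """Spin observable along direction theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state (|01> - |10>)/sqrt(2)
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def E(a, b):
    """Correlation of spin measurements at angles a (Alice) and b (Bob)."""
    return singlet @ np.kron(spin(a), spin(b)) @ singlet

# Standard optimal CHSH angles.
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, above the local bound of 2
```

Any local theory (with or without a preferred foliation) is bound by |S| ≤ 2; the quantum prediction, confirmed at spacelike separation, exceeds it.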

I would love to hear your reaction to these observations before going further through your post.

Carl,

Thanks for your comments! Here’s my belated reply (in two parts!)

If Alice and Bob have just measured, say, X and Y on their respective qubits, and they know those outcomes, then they can indeed predict the measurement outcome for Charlie’s measurement of Y. The big jump (for me) is to assume that even property X of qubit C must exist, just because Alice could have measured Y instead, and then if Alice and Bob had known her outcome for that measurement, they could have predicted Charlie’s outcome for an X measurement. That’s the conclusion that needs a nontrivial assumption, that unperformed measurements (on a qubit) really do have outcomes that in turn correspond to real properties of that qubit.
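The GHZ reasoning at issue here can be checked directly. A sketch in numpy, using the common sign convention in which the XYY, YXY and YYX outcome products are each certainly -1 (some presentations flip the signs); the variable names are mine.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(A, B, C):
    """Three-qubit observable A (Alice) ⊗ B (Bob) ⊗ C (Charlie)."""
    return np.kron(np.kron(A, B), C)

# GHZ state (|000> + |111>)/sqrt(2)
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

for ops, label in [((X, Y, Y), 'XYY'), ((Y, X, Y), 'YXY'),
                   ((Y, Y, X), 'YYX'), ((X, X, X), 'XXX')]:
    val = np.real(ghz @ kron3(*ops) @ ghz)
    print(label, val)
# XYY, YXY, YYX each equal -1 with certainty: two outcomes fix the third.
# But if each qubit carried pre-assigned values x_i, y_i = ±1, then
# (x1 y2 y3)(y1 x2 y3)(y1 y2 x3) = x1 x2 x3, so the XXX product would
# have to be (-1)^3 = -1, while QM predicts +1 with certainty.
```

So the "big jump" described above, assigning an outcome to the unperformed measurement as well, is exactly the assumption that the XXX run then refutes.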

You quoted Mermin and objected:

“First, notice that the revised '3.' in no way follows from '1.' and '2.' - it is a blatant non-sequitur, more of a flat-footed refusal to draw the obvious conclusion.”

Mermin did not simply conclude 3 from 1 and 2: instead he said there’s a choice, you must either accept non-locality or accept 3. The point (Mermin’s, and I agree with it) is NOT to deny the possibility of non-locality, but to deny that non-locality is the ONLY conclusion one can draw at this point. [I’m referring to his Oppenheimer lectures given at Berkeley.]

But we get to the crux in your second post, where you consider just the measurements Alice, Bob and Charlie actually perform, say, X and Y and Y respectively.

[continued]

I think everyone agrees on what QM says about what measurement outcomes I should expect with what probabilities, given my description of any specific experimental setup. What’s really hard, and where large disagreements arise, is what picture to use for what’s going on before the measurement outcome is recorded. I think there are different stories to tell about what happens below the surface and how what happens there leads you or me to expect certain observations above the surface.

Your below-the-surface story, in which the particle knows how it should behave when it is going to be measured and really does one thing or another, leads inescapably to non-locality. I agree.

But there are other stories possible, too (stories less relying on equating reality with how you or I describe or observe it).

I like to take as a starting point the way we actually calculate probabilities within QM. Then one can try to reformulate that calculation in ways that mimic classical probability calculations as much as possible. You can think of Wigner functions or, what I like better, look at what was done here: https://arxiv.org/pdf/1805.08721.pdf

One thing I like about the result [compare Eqs. (6) and (9)] is that the difference between classical probability calculus and quantum probability calculus shows up very naturally [naturally, because all that is done there is to relate standard quantum probabilities to other standard quantum probabilities] through the appearance of negative probabilities. Importantly, those negative probabilities never show up as the probabilities of measurement outcomes, but they do show up in calculations of intermediate quantities. That is, they lie below the surface and never float above. I could say, the offensive part (what do negative probabilities mean physically??) is hidden by Nature, very much like offensive non-local effects like signaling are hidden, too, even if you think that what happens below the surface is non-local. So to me, there is a real choice to be made here between different stories.
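To make the "below the surface" picture concrete, here is a small illustration of the same general phenomenon (note: this is not the construction of the linked paper, just the standard discrete Wigner function of a single qubit, with phase-point operators and state chosen by me): the quasiprobabilities can go negative, while every measurement probability stays non-negative.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def A(a, b):
    """Phase-point operators of a standard single-qubit discrete Wigner function."""
    return 0.5 * (I2 + (-1)**a * Z + (-1)**b * X + (-1)**(a + b) * Y)

def wigner(rho):
    """2x2 table of quasiprobabilities W[a, b] = Tr(rho A(a,b))/2."""
    return np.array([[np.real(np.trace(rho @ A(a, b))) / 2
                      for b in (0, 1)] for a in (0, 1)])

# A state whose Bloch vector (1, -1, 1)/sqrt(3) lies outside the
# "classical" octahedron, so its Wigner function has a negative entry.
n = np.array([1.0, -1.0, 1.0]) / np.sqrt(3)
rho = 0.5 * (I2 + n[0] * X + n[1] * Y + n[2] * Z)

W = wigner(rho)
print(W)              # one entry equals (1 - sqrt(3))/4 ≈ -0.183
print(W.sum(axis=1))  # Z-measurement probabilities: non-negative
print(W.sum(axis=0))  # X-measurement probabilities: non-negative
```

The negative entry lives only in the intermediate bookkeeping; every marginal, i.e. every probability of an actual outcome, is a proper probability, which is the sense in which the offensive part stays hidden below the surface.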

There are other stories too: a third story involves many worlds below the surface that somehow lead to one world above the surface (maybe I should say, lead to there apparently being just one surface).

A fourth story involves waves traveling from the future below the surface, but canceling in just the right way against waves traveling towards the future so as to be invisible above the surface.

I personally find all these stories fine and I even encourage them, and I would love there to be even more stories, as long as whatever story you accept (perhaps merely tentatively) leads you to new interesting questions, new interesting experiments, or even new applications of QM. I wish I had time to work on all of these stories, but I have plenty of other things to do (which I also enjoy, fortunately) that do not involve any such stories. So, in short, I just object to the idea that there is just one story to tell.

Carl, I’m happy to take this discussion with you offline if you wish.

Steven