
Wednesday, December 06, 2017

The cosmological constant is not the worst prediction ever. It’s not even a prediction.

Think fake news and echo chambers are a problem only in political discourse? Think again. You find many examples of myths and falsehoods on popular science pages. Most of them surround the hype of the day, but some of them have been repeated so often they now appear in papers, seminar slides, and textbooks. And many scientists, I have noticed with alarm, actually believe them.

I can’t say much about fields outside my specialty, but it’s obvious this happens in physics. The claim that the bullet cluster rules out modified gravity, for example, is a particularly pervasive myth. Another one is that inflation solves the flatness problem, or that there is a flatness problem to begin with.

I recently found another myth to add to my list: the assertion that the cosmological constant is “the worst prediction in the history of physics.” From RealClearScience I learned the other day that this catchy but wrong statement has even made it into textbooks.

Before I go and make my case, please ask yourself: If the cosmological constant were such a bad prediction, then what theory was ruled out by it? Nothing comes to mind? That's because there never was such a prediction.

The myth has it that if you calculate the cosmological constant using the standard model of particle physics, the result is 120 orders of magnitude larger than what is observed, due to contributions from vacuum fluctuations. But this is wrong on at least five levels:

1. The standard model of particle physics doesn’t predict the cosmological constant, never did, and never will.

The cosmological constant is a free parameter in Einstein’s theory of general relativity. This means its value must be fixed by measurement. You can calculate a contribution to this constant from the standard model vacuum fluctuations. But you cannot measure this contribution by itself. So the result of the standard model calculation doesn’t matter because it doesn’t correspond to an observable. Regardless of what it is, there is always a value for the parameter in general relativity that will make the result fit with measurement.
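
Schematically (and with all numerical factors suppressed), what can be measured is only the sum

Lambda_observed = Lambda_bare + Lambda_vacuum ,

where Lambda_bare is the free parameter of general relativity and Lambda_vacuum is the calculated standard-model contribution. Whatever value the calculation produces for Lambda_vacuum, there is a Lambda_bare that absorbs it.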

(And if you still believe in naturalness arguments, buy my book.)

2. The calculation in the standard model cannot be trusted.

Many theoretical physicists think the standard model is not a fundamental theory but must be amended at high energies. If that is so, then any calculation of the contribution to the cosmological constant using the standard model is wrong anyway. If there are further particles, so heavy that we haven’t yet seen them, these will play a role for the result. And we don’t know if there are such particles.

3. It’s idiotic to quote ratios of energy densities.

The 120 orders of magnitude refer to a ratio of energy densities. But not only is the cosmological constant usually not quoted as an energy density (but as the square of an energy), in no other situation do particle physicists quote energy densities. We usually speak about energies, in which case the ratio goes down to 30 orders of magnitude.
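
The reason it is a fourth root: in natural units an energy density has dimension of energy to the fourth power, so a ratio of densities is the fourth power of the ratio of energies,

rho_1/rho_2 ~ (E_1/E_2)^4 ,

and 10^(120/4) = 10^30.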

4. The 120 orders of magnitude are wrong to begin with.

The actual result from the standard model scales with the fourth power of the masses of particles, times an energy-dependent logarithm. At least that’s the best calculation I know of. You find the result in equation (515) in this (awesomely thorough) paper. If you put in the numbers, out comes a value that scales with the masses of the heaviest known particles (not with the Planck mass, as you may have been told). That’s currently 13 orders of magnitude larger than the measured value, or 52 orders larger in energy density.
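
For orientation, here is roughly where the numbers come from (order of magnitude only; the prefactors, and hence a few orders, depend on conventions). The oft-quoted version cuts the vacuum energy off at the Planck scale, rho_vac ~ M_Planck^4, which compared to the observed rho_obs ~ (10^-3 eV)^4 gives

(10^19 GeV / 10^-3 eV)^4 ~ (10^31)^4 ~ 10^124 ,

ie the famous ~120 orders, give or take. Use instead the heaviest known particle masses (a few times 10^2 GeV, against an observed scale of a few times 10^-3 eV) and the ratio drops to roughly 10^13 in energy, hence about (10^13)^4 ~ 10^52 in energy density, as quoted above.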

5. No one in their right mind ever quantifies the goodness of a prediction by taking ratios.

There's a reason physicists usually talk about uncertainty, statistical significance, and standard deviations. That's because these are known to be useful to quantify the match of a theory with data. If you'd bother writing down the theoretical uncertainties of the calculation for the cosmological constant, the result would be compatible with the measured value even if you set the additional contribution from general relativity to zero.
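
To say the same thing with a formula: the goodness of a prediction is quantified by something like

|rho_predicted - rho_observed| / sigma_theory ,

where sigma_theory is the theoretical uncertainty (schematic notation, but that's the structure of any significance estimate). Since sigma_theory includes the unknown contributions of arbitrarily heavy particles (see point 2), it is at least as large as the predicted mean itself, and the mismatch comes out at about one standard deviation or less. Which is to say: no mismatch.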

In summary: No prediction, no problem.

Why does it matter? Because this wrong narrative has prompted physicists to aim at the wrong target.

The real problem with the cosmological constant is not the average value of the standard model contribution but – as Niayesh Afshordi elucidated better than I ever managed to – that the vacuum fluctuations, well, fluctuate. It’s these fluctuations that you should worry about. Because these you cannot get rid of by subtracting a constant.

But of course I know the actual reason you came here is that you want to know what is “the worst prediction in the history of physics” if not the cosmological constant...

I’m not much of a historian, so don’t take my word for it, but I’d guess it’s the prediction you get for the size of the universe if you assume the universe was born by a vacuum fluctuation out of equilibrium.

In this case, you can calculate the likelihood for observing a universe like our own. But the larger and the less noisy the observed universe, the less likely it is to originate from a fluctuation. Hence, the probability that you have a fairly ordered memory of the past and a sense of a reasonably functioning reality would be exceedingly tiny in such a case. So tiny, I'm not interested enough to even put in the numbers. (Maybe ask Sean Carroll.)
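
(Schematically, that estimate is the usual Boltzmann one: the probability of fluctuating out of equilibrium into a macrostate of entropy S goes like

P ~ exp(S - S_equilibrium) ,

and since the entropy of anything resembling our observed universe is vastly below the equilibrium value, the exponent is a stupendously large negative number.)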

I certainly wish I’d never have to see the cosmological constant myth again. I’m not yet deluded enough to believe it will go away, but at least I now have this blogpost to refer to when I encounter it the next time.

234 comments:

  1. Thank you Dr. H. Always an interesting topic, well expressed and informative. Do not quite understand the last part but I think you are saying our universe is very unlikely to have emerged from a vacuum fluctuation... Indeed! Again thanks for an interesting post

  2. A quite enjoyable blog post. My next question is what is a "fluctuation" as used in physics? The common definition as given by Google's dictionary is "an irregular rising and falling in number or amount; a variation." Is that the same meaning physicists have in mind? In addition, what is a "vacuum fluctuation" or a "quantum fluctuation"? I'd appreciate it if you could clarify or refer me to a not too technically demanding reference.

    I often hear those terms used in laymen and serious discussions but nobody has really defined what they mean. Are they even well-defined concepts?

  3. Alexander,

    A fluctuation is a temporary deviation from the average.
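
    In formulas: if a quantity x has average <x>, the fluctuation is delta x = x - <x>, and what one typically quotes is its mean square, <(delta x)^2> = <x^2> - <x>^2.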

  4. Sabine

    (a) I have ordered your book. I will read it when it arrives.

    (b) If I didn't know, I would have said that this post was written by a grumpy old guy.

    (c) These are my thoughts on your five points.

    (1) You say that the standard model (SM) can only contribute to the cosmological constant (CC), which is true. But if you assume the entire CC comes from the SM, how do these differ? Yes, they are different things, but they are similar things, and if one contributes to the other, then this doesn't sound like a silly statement.

    (2) Yes, of course it is silly to trust the SM at all energy scales. But everyone (well everyone who knows science) knows that.

    (3) I guess I don't see that it is idiotic to take ratios of densities. I mean, it's not obvious that ratios of densities are the best thing, but at least it normalizes over volumes, which is a plus. And, even if you're right in your assertion, isn't 30 orders of magnitude still a bit of a problem?

    (4) Ditto 13 or 52.

    (5) I don't see why the ratio thing is a problem. Yes, obviously uncertainties are crucial. And if the uncertainties are such that the uncertainty spans 120 orders of magnitude, then the whole thing is silly. But the ratio tells you that there is a missing idea, calculation, or concept, somewhere in the connection between the two, correct?

    There are tons of predictions that eventually fail. The one you mention here will likely be one. The metastability of the universe due to the mass of the Higgs & top is another. The taming of the quadratic divergences of the Higgs mass is a third. But your posts seem to boil down to "we don't know anything for sure, so we can't say anything." And of course we are ignorant. That's why people like you and me still have jobs.

    I do look forward to receiving your book, so I can better understand your point. I'm trying to see something deeper and more insightful than the obvious.

  5. "I’d guess it’s the prediction you get for the size of the universe if you assume the universe was born by a vacuum fluctuation out of equilibrium.". the problem with deciding which prediction is the worst, is that you should only consider predictions made by reasonable theories. there are lots of bad predictions made by theories that are not reasonable. also, one could say that the 'prediction' that Schrodinger's cat is half dead-half alive is bad but is this a prediction rather than an interpretation? i am tempted to add the prediction that accepting Christ as your savior results in your getting into heaven is a bad prediction because, like string theory and the multiverse, it is untestable and must be taken as an act of faith. LOL

  6. Hi Sabine,

    I thought Lambda = 2 Pi/(3 R_U^2).

    Best,
    J.

  7. Don,

    I sound like a grumpy old guy? Must be making progress with my writing!

    1) Well, don't assume it. Why would you?

    2) Then better not forget mentioning it.

    3) What's the point of normalizing something if you take a ratio anyway?

    4) Who is grumpy here?

    5) As I tried to get at later, the actual problem is the size of the fluctuations. I really think it's important to be clear on what the problem is. If you zoom in on the wrong problem, you'll be looking for the wrong "missing concept."

    (Note that I didn't say there is no problem here.)

  8. "its value must be fixed by measurement" Accepted theories’ testable predictions are empirically sterile: proton decay, neutrinoless double beta-decay, colossal dark matter detection attempts. A 90-day experiment in existing bench top apparatus falsifies accepted theory. But, the Emperor cannot be naked!

    https://www.forbes.com/sites/startswithabang/2017/12/05/how-neutrinos-could-solve-the-three-greatest-open-questions-in-physics/
    https://i.pinimg.com/736x/45/5c/84/455c847510f593d06efef3e83a0aac06.jpg
    ...Ars gratia artis

    The answer is where it should not be. Look. The Emperor is naked.

  9. Ah, what to do! "Demiurge" was always known to be a mostly cunning entity (to clear out all those fluctuating theological anxieties.)


  10. Sabine said… “Many theoretical physicists think the standard model is not a fundamental theory but must be amended at high energies.”

    Sabine,

    I enjoyed the post. I wanted to ask you to consider discussing the quoted issue in a future post. It's something I was unaware of and would like to read your thoughts on.

  11. Hi Sabine,

    The difference is that I >>actually am<< a grumpy old guy, not a smart, charming, and lovely young theorist. And if it ever came down to a grump contest, I'd win because I have you beat by years...even decades. Practice and all that....

    To your points.

    1.) Nobody is assuming. There may be other mechanisms. Clearly there are other mechanisms. But those only exacerbate the situation. Or...and I know this is antithetical to your current thinking...there is a large unknown component with an opposite sign that nearly-perfectly cancels a very large number. Yes, you don't like naturalness-based arguments, but it seems to be a compelling argument that the thinking isn't completely foolish. Not religiously perfect, of course...but not completely foolish.

    2.) I always mention that bit. Ditto with the vacuum stability and the quadratic divergences and a myriad of other tensions.

    3.) I think the normalization is just a clarity thing. But, you're right, it isn't necessary. And it dispenses with arguments on how to define the volume.

    4.) I never claimed to be anything but. On the other hand, you rejected the 120 orders of magnitude and substituted 10 and 50, give or take. At some level, once you get above 2 - 3 orders of magnitude, it's all the same, crisis-wise. They all say "ummmmmm...you forgot something."

    5. I would like to hear more about this point. The rest of the points you have made all seem pretty obvious to me and much ado about nothing. But I don't understand this point in enough detail to say much. However, I think it is likely that I could learn something from a discussion of this point.

    And, as far as your closing point, I didn't think you made such a claim. I think my point is that the way you have written it conveys a point that isn't what you intended.

    The way I read what you wrote is more along the lines of arguing about various orders of magnitude and somehow showing that, since people can't even agree on the numbers, maybe there is nothing to this. In contrast, I would flip this around and say that no matter how you decide to cast the problem, they all show that something is wrong. Accordingly, we should embrace the wrongness and instead focus on why to pick one way of looking at it so as to make it more likely to advance our knowledge.

    While we might disagree on some points, I think we agree on the big ones. This is mostly (for me) a discussion about distilling your concerns in a way that makes it clear that there is a problem and doesn't confuse the reader into thinking that physicists are just so clueless as to not even be sure if there is a problem at all.

    Or something like that.

  12. "If you’d bother writing down the theoretical uncertainties of the calculation for the cosmological constant, the result would be compatible with the measured value even if you’d set the additional contribution from general relativity to zero."

    I see you saved your big gun for last. Given this, do you even need to mention the other points? (BTW, I hope you expand on this point in some future blog post.)

    Reminds me of this talk, "Dissolving the Fermi Paradox":

    http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf

  13. This is probably an odd point to get nitpicky about but you said that

    "Many theoretical physicists think the standard model is not a fundamental theory but must be amended at high energies"

    I'm curious about the converse: are there people who think that the standard model suffices to arbitrarily high energies?

  14. Sabine,
    I often enjoy your crotchety posts, but this one is too much. You are doing a disservice to science if you imply that the cosmological constant is not a problem worthy of scientific consideration.

    I can make a rigorous prediction of the contribution to the cosmological constant linear in the up quark mass which is valid to within 20%. (For details read my papers or ask me.) The result is 43 orders of magnitude bigger than the final answer. Yes, we all know that there are other contributions. But if this does not sound like it identifies a problem worthy of further study, I do not know what else to say.

    Look, all of these issues are really just motivations for our work. You have chosen to devote a large fraction of your scientific career to quantum gravity phenomenology. This choice is despite the fact that the only reliable predictions in quantum gravity – via the effective field theory – give results 50-100 orders of magnitude below observations. But you are willing to bet your life-work on some highly speculative ideas which might possibly overcome this barrier. Great. I respect your choice and I would love it if you are correct in this bet.

    Others look at the great disparity between the magnitude of known contributions to the cosmological constant and the measured value and view that as a motivation for a problem worthy of study. For the life of me, I cannot see why this does not sound like a good problem to you. It looks like a great issue to study as an indicator of new physics.

    So I think that, as a writer about science who connects to many audiences, you do a disservice to the way that we look for new physics when you say that this is not a problem. It is a significant puzzle and may be important. Scientists who take it seriously and look for new physics associated with it are trying to do good science. Hopefully in the end we will find out whether their choices or your choices are fruitful.

    Best,
    John Donoghue
    (Yes, Google says Jack as the identity, but it is John from UMass.)

  15. John,

    The problem is not the prediction of the average value that's supposedly wrong by 120 orders of magnitude, because there is no such prediction. It's an ill-posed problem, and if you think otherwise, then please explain. I wrote at length elsewhere why naturalness arguments are not good arguments to make if you don't have a probability distribution. You end up trying to solve what's not a problem and waste a lot of time.

    I am doing a "dis-service" to science by pointing out that this argument is flawed and many people are wasting their time? Now that's an interesting claim.

    "The result is 43 orders of magnitude bigger than the final answer. Yes, we all know that there are other contributions. But..."

    Anything that comes after the "but" is irrelevant. "We" all know there are other contributions, hence the result of the calculation isn't meaningful. End of story. Why talk about it?

    The reason for writing this post is that "we" doesn't include most of the people who have been told this myth.

    Best,

    B.

  16. Kevin,

    I explained that in point 2. If there are any particles heavier than the ones we have seen so far, these will make the largest contribution. You can't neglect them. Hence the uncertainty is larger than the average value.

  17. Don,

    1) "it seems to be a compelling argument that the thinking isn't completely foolish. Not religiously perfect, of course...but not completely foolish."

    You can only solve a problem with mathematical methods if you have a mathematically well-posed problem to begin with. And this is not such a problem. I really think theoreticians should try somewhat harder. I know you are speaking about your gut feelings here, but go and try to make it formally exact. Just go and try. You'll figure out that it's not possible. If you look at the math, there is no problem.

    2) Good for you. You are great. You are wonderful. You are excused from everything.

    4) One can get very philosophical about this. You know, there are a lot of numbers floating around in theories that you can clump together to get fairly large factors. Here's an example: (2*Pi)^10 is about 10^8. Why an exponent of 10? Well, because there's ten dimensions in case you haven't heard. More seriously, who says how large is too large?

    In any case, are you saying that 120 orders of magnitude is as bad as 13 but 13 is much worse than 3? I can't follow the logic if there's any.

    5) Not sure what you are referring to here. To the fifth point in my list? This merely refers to the uncertainties mentioned in point 2) with the addition that if you'd accurately state them, it would be clear there isn't any wrong prediction because the calculation allows for pretty much any result.

    In case that refers to the problem with the fluctuations, well, the thing is that while you can get rid of the average value by using the free constant that GR contains anyway, the fluctuations around the average will still be of the same order of magnitude (ie, huge) and wobble your spacetime. Now, since gravity is a very weak force, this isn't as bad as it sounds at first, but it's not entirely without consequences. (Niayesh in his paper worked out what goes wrong.)

    "This is mostly (for me) a discussion about distilling your concerns in a way that makes it clear that there is a problem and doesn't confuse the reader into thinking that physicists are just so clueless as to not even be sure if there is a problem at all."

    Well, I explicitly wrote "The real problem with the cosmological constant is..." etc. It's hard for me to see how this could be mistaken for "no problem at all."

    Best,

    B.

  18. Louis, yankyl,

    I'll discuss that in a future post.

  19. The QFT calculations of vacuum "fluctuation" contributions to the CC seem potentially dubious to me for another reason. And apparently it's not just psi-epistemicists who might suspect that uncertainty is being mistaken for fluctuation there.

  20. Regarding (1) I'd be more nuanced. Yes, according to (2) and (5) we can't calculate the contribution of vacuum fluctuations. But if we could, and if it ended up being very different from the observed value, there would be two possible conclusions:
    * The cosmological constant is an integration constant, and therefore can be anything. It almost cancels all other contributions, the end.
    * This is a prediction that the cosmological constant is in fact not an integration constant, and some extension of GR might explain it.

    To be honest, I'd have sympathy for people exploring the second option. I'm curious too.

  21. "(...)
    This is a prediction that the cosmological constant is in fact not an integration constant, and some extension of GR might explain it.

    To be honest, I'd have sympathy for people exploring the second option. I'm curious too."

    Or this one (https://arxiv.org/abs/1303.4444v3, Buchert et al.)? It doesn't pretend to "extend" GR, but takes into account the fact that a homogeneous-isotropic-at-every-scale space-time might be an over-simplified model of our universe, and predicts in its framework the emergence of an effective cosmological "constant".

    @Sabine: this is a little off-topic here. But if you have already posted something about this approach (which I know is subject to debate among specialists of GR), I would be interested in reading your opinion.

  22. It's funny that you start your own post from 2016 with
    "The cosmological constant is the worst-ever prediction of quantum field theory, infamously off by 120 orders of magnitude."
    http://backreaction.blogspot.de/2016/02/much-ado-around-nothing-cosmological.html
    and here you say
    "(...) And many scientists, I have noticed with alarm, actually believe (this kind of statement)."

    To me this statement always appeared to be given with a wink.
    Yet if you do a calculation that naively should contribute to the effective cosmological constant, although it could of course be cancelled by another coupling, it is somewhat surprising that it turns out to be almost exactly cancelled.
    This is like observing two particles that have almost the same mass.
    Just imagine the relative difference of their masses were 10^-8.
    This would be a curious observation that asks for understanding.
    Of course we don't know the distribution of particle masses over all possible universes and it could just be by chance.
    But this would not be your default assumption, I hope.

    "If you’d bother writing down the theoretical uncertainties of the calculation for the cosmological constant, the result would be compatible with the measured value even if you’d set the additional contribution from general relativity to zero."
    Can you provide a source for this calculation? I am genuinely interested.

  23. That was a bit snarky. I may have to walk back my claim of being able to "out grumpy old guy" you by virtue of my extra years.

    I do not desire to be excluded from anything. All I meant is that I always say that there are substantive limitations to the calculations, most obviously that the Standard Model cannot apply up to the unification scale (if, indeed, the unification scale exists).

    I look forward to reading your book. Perhaps there I'll find out if I think your points are substantive and thoughtful or just a bit on the pedantic and banal side. I'm bummed that I have to wait until June.

    For whatever it's worth, I am also displeased when I read science popularizations (usually by theorists) that put forward the results of a prediction as if they were gospel and without mentioning the (well known to experts) limitations. To do that is a disservice to one's readers.

  24. Cobi,

    It didn't really register in my brain until recently how much harm the oversimplified stories have begun to do in the community. It's one thing to write this in a blogpost, knowing that it kinda captures the essence; another thing to put it in a textbook or teach it to students, leaving them to think that's the full story.

  25. Don,

    I think I am somewhat scared because I have now talked to quite a number of students and more senior people not in the foundations (cond mat, mostly) and they tend to think that the cosmological constant is an actually wrong prediction. (As opposed to: there is something here that we don't really understand.)

  26. I have not encountered this. But, I live very deeply embedded in the particle world. It's both the good and bad side of being a researcher at a national lab.

    On the other hand, I have no difficulty imagining that condensed matter types don't understand this well. But I forgive them, as my mastery of condensed matter is rather tenuous as well. My deep grasp of your particular discipline is also not what I might like it to be.

  27. Don,

    I don't blame condensed matter physicists for this at all. I blame myself for not realizing earlier that such myths are hard to contain and eventually will affect the community, even though it's patently obvious.

    See, the people who understand the issue don't see anything wrong with making somewhat sloppy or inaccurate statements, because they think everyone knows what they mean anyway, right? It's just some kind of little joke about the worst prediction ever, haha. Then they start printing it in textbooks and in papers and students read the textbooks and the papers and you end up with a fraction of the community believing it actually is a mathematical problem. Same thing happened with the flatness problem and with naturalness (and, more recently, the bullet cluster). These stories have been repeated so often, a lot of physicists actually believe them. And whether or not they themselves believe it, they use it as a motivation for their research.

    So, well. I don't consider myself pedantic. You can't write at the frequency that I do and be pedantic. Words happen. But I think every once in a while it's good to remind people of what the real story was, so I'm here to remind them.

  28. I have understood that the "dark energy" concept was brought into cosmology to explain the observed acceleration of distant galaxies in order to avoid talking about the "forbidden" cosmological constant, which is equivalent as far as I understand.

  29. Dear Sabine,

    I believe that you are missing some crucial points about naturalness and why it is an important criterion. I will not go into all of them, but there is always one very simple way to shut down such attacks on naturalness, which I describe below.

    There is a very simple way to understand naturalness, say in the context of the hierarchy problem for example. Try to write down a physics theory at 10^16 GeV which gives the Higgs mass to within an order of magnitude, or even two, three, four orders of magnitude. I guarantee that you simply can not write down this theory: which numerical values do you take for the parameters? Say you choose some parameter value M1=1.52324...5657, but then you remember that you forgot to account for one (of 10,000) 3-loop diagrams. This would completely change M1. The only way to get the Higgs mass from such a theory, in the absence of some principle or symmetry which makes it natural, is to put the Higgs mass in by hand in some way. But then your theory is completely useless - its only requirement is that it gives us the Higgs mass and it has failed to do so. Exactly the same argument can be made for the cosmological constant.
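
    Schematically, and ignoring numerical factors, the quantum corrections tie the Higgs mass to the heaviest scale M in the theory,

    m_H^2 = m_0^2 + # (g^2/(16 Pi^2)) M^2 + ... ,

    where # stands for some order-one coefficient that shifts with every additional loop diagram. With M ~ 10^16 GeV, getting m_H = 125 GeV requires the terms on the right to cancel to roughly one part in 10^26, which is why no neglected diagram is negligible.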

    Naturalness is not some abstract hand-wavy contentless requirement as you seem to be suggesting, but a concrete in-your-face problem that you can not avoid if you ever try to actually calculate anything. And after all, calculating things is what physics is supposed to do.

  30. Cosmological constant is not a prediction. I agree. Why did Einstein say his theory predicted an expanding universe after Hubble discovered it? He added the cosmological constant to make a static universe model, and later said it was his biggest blunder. The cosmological constant is a free parameter. It doesn’t predict anything. I don’t know what he meant. Matter must be moving away from each other to counteract gravity from pulling them all together in one big lump. Was that his “expanding universe?”

    I say the worst prediction in the history of physics is Einstein’s “prediction” of an expanding universe.

  31. Leon Lederman made the claim of "worst prediction" in his popular book The God Particle.

    I'm slightly confused by your points 1) and 2). I thought the 120 orders claim was being used as a justification for thinking there was something beyond the Standard Model.

  32. Ari,

    Various types of dark energy fields try to address various types of supposed problems. Need I add that there's no evidence any of these fields exist? See why I am trying to say maybe it's a good idea to figure out what the problem is in the first place?

  33. Unknown,

    Yes, right. It's used as a justification. That doesn't make it correct.

  34. Physics Comment,

    You seem to live under the impression that the Higgs mass is actually calculable and not put in by hand. That is not so.

  35. Alexander McLin & Sabine:

    What particle physicists—and people using the quantum theory—mean by a "fluctuation", and particularly a "vacuum fluctuation"— is a great question. Probably they don't mean what they seem to mean, namely that there is this physical state or thing, the quantum vacuum, that is fluctuating, i.e. changing its value. Why not? Because the standard view is that the wave function of a quantum system is complete. That means that every real physical feature of a system is reflected somehow in its wave function, so two systems with the same wave function are physically identical. If the wave function is complete, then the only way for anything in a quantum system to fluctuate or change is for the wave function of the system to change. But the quantum vacuum state does not change. It is what it is at all times. Since the quantum state is not changing nothing in the system is changing or fluctuating.

    Now one way out of this conclusion is to deny that the wave function is complete. If you do, then it is at least possible for something in the system to be fluctuating even though the wave function is not. Any theory according to which the wave function is not complete is called a "hidden variables" theory. (Ignore the word "hidden": that is another linguistic error that physicists have made.) But if you ask a standard issue physicist whether she or he accepts a hidden variables account of quantum theory, she or he will deny it up and down. (BTW, the whole point of the famous Einstein, Podolsky and Rosen "paradox" was to argue that the wave function is not complete.) So any physicist who renounces hidden variable simply cannot consistently mean by a "fluctuation" or a "vacuum fluctuation" that there is anything actually fluctuating or changing.

    So what in the world does it mean? Well, as we all know, the predictions of quantum theory are typically just statistical or probabilistic: they say that *if you perform a certain experimental intervention on a system* then there are various possible outcomes that will occur with different probabilities. Of course, that says exactly nothing about what is going on if you don't perform the experiment. And if you take these various possible outcomes and square them (so the positive and negative outcomes don't cancel out) and weight them by their probability, then you get a number that is called a "fluctuation". It is a sort of average squared deviation from zero when you do the experiment on the vacuum state. And on any state, the "fluctuation" of a quantity is the average squared deviation from the average result. If the average tells you approximately where you can expect the result to lie, the fluctuation tells you how much noise—how much deviation from that expected value—the data will have.
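
    In symbols: for an observable A and state psi, the "fluctuation" is just the variance

    (Delta A)^2 = <psi|A^2|psi> - <psi|A|psi>^2 ,

    which is non-zero for a field in the vacuum state (<0|phi^2|0> > 0 even though <0|phi|0> = 0), despite the fact that the state itself never changes.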

    In sum, *if you do a certain kind of experiment*, the fluctuation tells you how much variation in the outcomes to expect. But on the "standard" understanding of quantum theory, this does not and cannot reflect any actual fluctuations in the system when it is not measured. The noise is the product of the measurement itself.

    Probably quite a few physicists will deny what I have just written, and also deny that they think the wave function is incomplete. But they are just confused: such a position is self-contradictory.

  36. Hello,

    Are you claiming that it is not possible to predict the Higgs mass, at least to within order-of-magnitude estimates, from a given natural theory? I'm sorry but this is just wrong. Take the MSSM at 10^16 GeV and set all soft masses to 1.0 TeV - this predicts a Higgs mass m ~ 1 TeV to some order-of-magnitude accuracy. Loop corrections do not change this prediction significantly - and that is precisely the point. Take an unnatural theory at 10^16 GeV and try to do the same. Forget about why the underlying parameters take certain values - statistics, distributions, etc. - pick them as you like - you simply can not even calculate what those values need to be to get even close to a prediction for the Higgs mass. Any prediction you get would completely change if you didn't include the 10,456th 3-loop diagram into your analysis.

    You can compare this to a technically natural parameter, such as the Yukawa couplings. There are many theories of why the Yukawa couplings are hierarchical. For example, Froggatt-Nielsen models. We can propose theories in the UV, calculate the resulting Yukawa couplings, and test the predictions against experiments. You simply can not do this for parameters like the Higgs mass unless you address the problem of naturalness (for example by a symmetry).

  37. While I haven't read all the posts, as it's early AM, the clarity of Don Lincoln's writing and thinking, especially the 2:21 PM, Dec. 6 post, really impressed me. It reminded me of the way our 19th century president, sharing the same surname, addressed and handled contemporary issues in the political sphere of his day.

  38. Physics,

    I am saying that currently the Higgs mass is a free parameter and no one can calculate it. Whether there is any underlying theory that predicts it, nobody knows.

    As to the rest of your comment, you seem to be saying that a theory in which you don't know how to calculate the Higgs mass is a theory you think shouldn't exist. Are you seriously trying to tell me that this is a scientific argument?

  39. The Higgs mass in the SM is 125 GeV.
    There is a problem between the SM theoretical prediction and the LHC results.

  40. The standard model does not predict the Higgs mass. The Higgs mass (as all the other particle masses) is a parameter determined by measurement.

  41. Whether there is an underlying theory that is correctly a part of nature and predicts the Higgs mass, indeed nobody knows. But whether a given theory, be it in nature or purely a construction of our mind (as most theoretical physics naturally is), can predict the Higgs mass we certainly do know - I just gave you an example of one (the MSSM with SUSY breaking at 1 TeV).

    A theory whose aim is to explain the Higgs mass, but in which you can not calculate the Higgs mass (even to within multiple orders of magnitude), is not a scientific theory at all. It contains no more information (regarding the Higgs mass) than the sentence: 'The Higgs mass is 125 GeV.' Of course nature is not obliged to us to be calculable or understood, but if we are to make scientific progress in understanding the Higgs mass we inevitably have to construct a theoretical framework that will in one way or another address the naturalness problem. This is why it is such a central problem scientifically and practically.

    The philosophical type arguments usually adopted in the literature about the likelihood of a given choice of parameters in the UV, I believe, are correct. But certainly one can debate them - as you do. But my point is that if you want to avoid such debates you can focus on the fact that there is no way to make practical scientific progress in understanding the Higgs mass (or the cosmological constant) without addressing the issue of naturalness in some way. Even anthropic arguments are addressing naturalness in some way (whether you like them or not). But not worrying about naturalness at all is the same as giving up on scientific progress - and that is fine if you are happy to sit and philosophise about it - but if you are actually a working scientist who has not given up on understanding the universe this is not an option.

  42. Physics,

    That you believe a theory more fundamental than the standard model must allow you not only to calculate the mass of the Higgs, but must allow you to do so a) easily and b) using currently known techniques for quantum field theories is just that, a belief.

    "Of course nature is not obliged to us to be calculable..."

    You can stop right there. This is sufficient to demonstrate there is no scientific argument to support your conviction.

  43. Thanks David...

    Sabine...

    Reading Physics Comment's comments, I think you might have been hasty. Reading the general tenor of his or her comments there was an implied missing word. It would appear that they meant "easily calculable."

    BTW...you have >>WAY<< better educated commenters than I get on my posts on my own internet contributions. But, then again, you write for a much more academically-gentrified audience.

  44. Don,

    Well, in case that was what he or she meant, it was clearly not what they wrote. I am not in the business of reading other people's thoughts.

    Even so, yes, one can rephrase "sensitivity to UV parameters" as "sucks to make any IR calculation," but that doesn't mean anything is wrong with a theory that has such properties. It also sucks to have commenters who hide behind anonymity to talk down to me.

  45. Perhaps I am obtuse. I didn't read it as talking down. I read it as disagreeing.

    I disagree with you on some of your core points too. But that doesn't mean I disrespect you.

    Regarding anonymity, well those are the breaks. You voluntarily write a blog, post it on the internet and solicit comments. If anonymity and criticism bother you, I question your choice to do what you do. These are relatively tame and informed. You should see the comments I receive to some of my efforts. I never knew some of the things that have been claimed about my mother...

  46. Don,

    Well, if someone comes here and proclaims "you are missing some crucial points about naturalness" and then goes on to tell me some standard folklore, I consider that talking down. How about "what do you think about..." Or "have you considered the argument..."? Or maybe just, to put it bluntly, "I chose to believe in naturalness because..."

    I have no problem with criticism per se, but it is exceedingly tiresome to have to repeat the same things over and over again because the "critics" don't take the time to think about what I am saying.

    None of which is to say that I have a problem with your comments. I get the impression you're not seeing the problems I see. That's unfortunate, but I think it's my fault.

    I am generally not in favor of anonymity in public debate unless a safety concern calls for it. I consider it pure cowardice. Whoever is this commenter clearly has a background in particle physics, yet they don't have the guts to sign their naturalness obsession with their name. (It's a different issue with peer review. I am not in favor of anonymity in peer review either, but the way academia works right now I think it's the lesser evil.)

  47. Rigorous disciplines triumph at will. Structure and predictions are mathematical. The world's organic bulk is synthetic chemistry whose quality is %-yield. Theory fails.

    Managerial fine-grain control and mathematical certainties exclude serendipity (insubordination). Embrace rule enforcement, produced units, and minimized risk (real costs/modeled gains). Fifty years of sublime theory replaced tangibles with advertising. Unending parameterizations rationalize mirror-symmetry failures. Pure geometric chirality tests are ridiculous versus empirically sterile accepted theory excluding them.

    0.1 nm³ chiral emergence volume enantiomorphic Eötvös (U/Washington) or microwave rotational temperature (Melanie Schnell) contrasts are quickly executed in existing apparatus. Test mirror-asymmetric vacuum toward hadrons. Accepted theory is already ridiculous versus observation. %-Yield look.

  48. Enjoyed reading Sabine versus Don (and Physics-person). I am reminded of Bohr's parable on how to embrace Complementarity:

    In an isolated village there was a small Jewish community. A famous rabbi once came to a neighbouring city to speak and, as the people of the village were eager to learn what the great teacher would say, they sent a young man to listen. When he returned he said,

    'The rabbi spoke three times.
    The first talk was brilliant - clear and simple. I understood every word.
    The second was even better - deep and subtle. I didn't understand much, but the rabbi understood all of it.
    The third was by far the finest - a great and unforgettable experience. I understood nothing, and the rabbi himself didn't understand much either'.

    How can the consensus of the less informed ("the Wise crowd" indeed) ever fairly assess the competing theories of those blessed with clearer insight?

  49. Well, we each have a level that we will tolerate and a level we don't. I have teenagers, so I have remarkably thick skin.

    I agree with you on some of your points, but the points I agree with you on, I think are blindingly self-evident. Then there are the points I don't understand that you are making. I vacillate between thinking they are of the first category, or disagreeing with you. And, occasionally, I am simply perplexed.

    I >>DO<< believe that any prediction needs an uncertainty. Ditto measurement. And, to a much lesser degree, the probability distribution function you mention. Since that particular goal is likely out of reach, I put it aside as being unachievable for the moment and therefore not a reasonable thing to ask for.

    Maybe it's just that I come at it from an experimental point of view. I view all theories as suspect to a greater or lesser degree. Nearly all predictions these days come from perturbative expansions, thus they are inherently flawed, although we have some hope that we can assign uncertainties to these problems.

    Then there is the problem that we are pretty damned sure that there are new phenomena not currently in our theory. I'm essentially certain on that. (Yes, it is a faith statement, but it's not a faith statement that requires boldness.) Accordingly, I realize that all calculations have this huge limitation as we push into realms not constrained by experiment. That doesn't much bother me. I'm used to being confused and cautious about extrapolations. I presume you are as well.

    But, that said, I see some value in the guidance that the aesthetics of such things as naturalness bring to the conversation. (And, I should be cautious as the term naturalness can have a breadth of meanings depending on context and I am not being particularly careful here.) No matter the answer, there is a problem inherent in the significant disagreement between the cosmological constant (if, indeed, it is a constant) and calculations from QFTs about latent energy. Or in the quadratic divergences of the Higgs, etc.

    You (properly) pointed out that the 120 orders of magnitude number is arbitrary and depends on how you cast the problem, but your other options also had large discrepancies. These point to the fact that there is an unexplained problem and the naturalness aesthetic suggests paths forward. Obviously, those suggested paths might be wrong. The answer could be something not-yet-imagined. In fact, it probably is.

    So I am not offended to the degree that you are. Maybe I just don't understand or maybe I'm just not as philosophically...oh...I don't know...rigid, for the lack of a better word. But, then again, I haven't read your book. I reserve the right to change my mind when I receive it.

  50. I note the early response in this thread defining a fluctuation as a temporary deviation from the average. I was wondering whether the fact that such a definition requires a probability measure renders it problematic in the same sense in which you argue against naturalness.

    ReplyDelete
  51. What a pity that our Washington politicians nixed the Superconducting Super Collider in Waxahachie, Texas, as Uncle Al mentioned in the linked thread on Niayesh Afshordi's analysis of the cosmological constant issue. If it were ever to be revived, advances in technology might even allow boosting of its originally planned energy of 40 TeV to even higher values. This is one area of government spending that I don't mind paying taxes for.

  52. Don,

    My point is this. Arguments from beauty - and that includes naturalness - have not historically been successful. In the cases where they have been successful (think: SR, Dirac equation, GR), it turns out the underlying problem was actually a problem not of ugliness but of actual mathematical inconsistency. I conclude that we are more likely to make progress if we make an effort to formulate problems that are mathematically well-posed.

    I think the problem with the fluctuations in the CC is a well-posed problem (also an understudied one). The problem with the absolute value is not. I have a hard time seeing how you can think it's unproblematic to subtract infinities from infinities and just define the residual to comply with measurement when you renormalize, eg, bare masses, but that when you do the same for the CC, the argument that the terms which you subtract from each other are not individually measurable somehow doesn't carry weight.

    Yeah, sure, there's always a chance that when you follow your sense of aesthetics you'll hit on a good idea. But it doesn't seem to be terribly successful and it certainly doesn't help if theorists erroneously believe the problem they work on is any more than an aesthetic itch.

    Let me not forget to add that in my book it counts as deception to let members of the public think these studies are based on any more than aesthetic misgivings. If particle physicists had been honest, they'd have said "We believe that the LHC will see new particles besides the Higgs because that would fit so well with our ideas of what a beautiful theory should look like."

  53. (Read "in my book" as "in my opinion". Seems a figure of speech I should stop using.)

  54. @Don Lincoln Arguments cannot exceed their postulates. Noether's theorems plus evident universal intrinsic symmetries evolve physical theory. General relativity lacks conservation laws. Intrinsic symmetries are inapplicable (arXiv:1710.01791, Ed Witten).

    Physics fundamentally denies a left-handed universe. Emergent symmetry geometric chirality transforms beautiful equations into horrors. Parameterize after beauty derives a beast.

    Baryogenesis’ Sakharov conditions are a left-handed universe toward hadrons. Test for a vacuum trace left foot with divergence of fit of calculated maximally divergent paired shoes. Crystallographic enantiomorphs measurably violate the Equivalence Principle, on a bench top. But that is impossible! - as is a triangle whose interior angles sum to 540°, yet there we live.

  55. “Another real-world manifestation of implicit memory is known as the “illusion-of-truth effect:” you are more likely to believe that a statement is true if you have heard it before – whether or not it is actually true. […] The illusion-of-truth effect highlights the potential danger for people who are repeatedly exposed to the same religious edicts or political slogans.”

    http://www.eagleman.com/incognito

    David Eagleman is a neuroscientist; he runs the neurolaw lab at Baylor College of Medicine and has written a number of really good books!

  56. I do think that renormalization is an ugly business and represents some dodgy thinking.

  57. Hello Sabine,

    Been a quiet follower of your blog for ages and finally motivated to talk... thank goodness an expert, on a well-respected blog, decided to finally bring this forward. Cannot tell you how appalled I was as a graduate student to see this kind of claim about some estimate of lambda creeping in - even into textbooks - which shall remain nameless because I liked them otherwise, and I would reveal my age! When Sean Carroll these days, whom by the way I also like very much as a public expositor, uses this item as if it could ever be a serious talking point with audiences, I turn green once again. I think my best QFT teachers had the wisdom, looking back now for decades (when dinosaurs were still roaming!), to keep a safe distance between all discussion of a nice, visible Casimir effect just appearing then in actual precision lab measurements, and a wild grab at Einstein's lambda that only a desperately optimistic view of a "real vacuum scale" of QFT as we know it might very naively permit!
    signed - a finally content surfer DKB

  58. Don
    All theories are wrong, some are useful. The problem is naturalness, aesthetics, symmetry. They are not physical theories but mathematical tools to aid calculation. They are useful to experimental physicists but to call them a theory is just silly. Theoretical physicists without any new theory but plenty of mathematical tricks to calculate experimental data are like theologians without a god.

  59. Sorry for the late comment - I was at the Texas Symposium on Relativistic Astrophysics in South Africa - but it is never too late to remind people to read the wonderful paper by Bianchi and Rovelli, which does a good job of pointing out what a non-problem the cosmological-constant problem is. In general, physicists would probably be better off by listening more to Carlo. :-)

    What bothers me most is that those who think that it really is a prediction and that it is bad hardly ever assume that there is anything wrong with the basis of the prediction, but rather that it points to something wrong with GR, some new physics, etc.

    My impression is that this is often touted as a problem in the (semi-)popular literature but, although too many still believe it, among people who actually work in cosmology there is a growing consensus that this is no real problem. Something similar is happening with respect to the flatness problem, but progress there has been somewhat slower.

  60. It's funny that you start your own post from 2016 with
    "The cosmological constant is the worst-ever prediction of quantum field theory, infamously off by 120 orders of magnitude."
    http://backreaction.blogspot.de/2016/02/much-ado-around-nothing-cosmological.html


    Later on in the post referenced above, Sabine writes:

    "Regarding the 120, it's a standard argument and it's in its core correct"

    If you read the whole thing in context, though, this is more of a teaser used as a jumping-off point for other things, but to be fair this might not be completely clear to all.

  61. "I have understood that the "dark energy" concept is brought to the cosmology to explain observed acceleration of distant galaxies in order to avoid talking about "forbidden" cosmological constant, which is equivalent as far as I understand."

    No. Observations indicate there is accelerated expansion. The idea of the cosmological constant has been around for literally more than 100 years. There is no a priori reason that it should be zero, and many arguments for the fact that it should not be zero. Fit the parameters to the observations. One gets a value for the cosmological constant which explains them, and one also gets the same value for the cosmological constant which is required from other observations. That is why the current standard model of cosmology is called the standard model.
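
    (In terms of the Friedmann equation this is utterly mundane: the cosmological constant is just one more term whose coefficient gets fit to the data,

    H^2 = (8 pi G/3) rho - k c^2/a^2 + Lambda c^2/3 ,

    on the same footing as the matter density and the curvature.)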

    This should be the end of the story, at least until some observations indicate that something is missing. But none do.

    Some people don't like the cosmological constant because they don't know what it is. Interestingly, none of these people claim not to like gravity because we don't know why the gravitational constant is non-zero. So various other explanations, such as "quintessence", are invented. "Dark energy" (a really stupid name; as Sean Carroll pointed out, essentially everything has energy and many things are dark; sadly, his much better "smooth tension" never caught on) is a generic term for these other ideas (perhaps including the cosmological constant as well); in general, the value of this additional term, unlike the cosmological constant, can vary with time, can be coupled to ordinary matter and/or dark matter, etc.

    But there is no observational evidence that the cosmological constant is not sufficient to explain all observations. While it might be interesting to investigate other ideas, there is no hope of finding observational evidence for them as long as the cosmological constant fits the data.

    Somewhat related to this is the question as to which side of the Einstein equation dark energy belongs on. Is it a property of space, essentially an integration constant, as in Einstein's original idea? Or is it some substance with an unusual equation of state (pressure equal in magnitude but opposite in sign to the energy density, in the appropriate units) which has the same effect (an idea first suggested, as far as I know, by Schrödinger)?

  62. Cosmological constant is not a prediction. I agree. Why did Einstein say his theory predicted an expanding universe after Hubble discovered it? He added the cosmological constant to make a static universe model, and later said it was his biggest blunder. The cosmological constant is a free parameter. It doesn’t predict anything. I don’t know what he meant. Matter must be moving away from each other to counteract gravity from pulling them all together in one big lump. Was that his “expanding universe?”

    I say the worst prediction in the history of physics is Einstein’s “prediction” of an expanding universe.


    This is wrong on many levels.

    Einstein originally introduced the cosmological constant with a specific value because he wrongly thought, as did many if not most at the time, that the universe is static on large scales. After the expansion was discovered, he thought it better to drop the term. In that sense, GR would have predicted expansion (or contraction), because there are no static models without a positive cosmological constant. However, before the discovery of expansion, others had presented models based on GR with a cosmological constant (with a generic value) which do expand. Whether Einstein really said that it was his biggest blunder is not clear, and even if he said it, it is not clear what he meant by it. At the time, observations were not very good, so Einstein later favoured models without the cosmological constant, especially the Einstein-de Sitter model since it is mathematically simple, but also one with a large matter density since it is spatially closed.

    Sabine is talking about the value as predicted from particle physics. This has essentially nothing to do with the cosmological constant as Einstein used it.

    Yes, matter must be moving apart or contracting unless there is a special value of the cosmological constant. Yes, that is the expanding universe. It is only accelerated expansion which needs a cosmological constant, though.

    ReplyDelete
  63. "GR would have predicted expansion (or contraction) because there are no static models without a positive cosmological constant."

    GR would have predicted static or expansion or contraction or accelerating expansion or decelerating expansion depending on the value of the cosmological constant - positive, negative or zero. That's why it's a free parameter.

    "Yes, matter must be moving apart or contracting unless there is a special value of the cosmological constant. Yes, that is the expanding universe."

    Moving matter is not the expanding universe. Newtonian mechanics predicts moving matter without spacetime expansion. Moving matter can be red- or blueshifted; an expanding universe gives only redshift.

    ReplyDelete
  64. Tim M. - I think you are confusing yourself with your talk of the 'completeness' of the wave function, even though you clearly realize that the wave function does not completely determine the result of a measurement of the system. If you want to say that the wave function is complete, you have to realize that this doesn't mean it tells you everything you might want to know about a quantum system's future; it means it tells you all you CAN know about the system's future.

    The reality of quantum fluctuations, though, as demonstrated in measurements and calculations, has been known at least since the 1940s.

    ReplyDelete
  65. "GR would have predicted static or expansion or contraction or accelerating expansion or decelerating expansion depending on the value of the cosmological constant - positive, negative or zero. That's why it's a free parameter."

    Yes, but the value for stability must be infinitely fine-tuned. This was pointed out by Eddington. Technically, in the language of dynamical systems, it is an unstable fixed point. Perturbing the solution slightly leads to expansion or contraction. (Of course, in a perfect Einstein universe, there is no way one can perturb it.)
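    For concreteness, here is a minimal numerical sketch of that instability (my own toy model, in units where the matter density and cosmological constant are tuned to Einstein's static values, so the static radius is a = 1 and, for dust, a'' = a - 1/a^2):

    ```python
    # Eddington's instability of the Einstein static universe: the acceleration
    # a'' = a - a**-2 vanishes at a = 1, but the fixed point is unstable, so a
    # perturbation of one part in a million runs away.
    a, v, dt = 1.0 + 1e-6, 0.0, 1e-3   # start a hair above the static radius
    for _ in range(20000):             # integrate to t = 20
        v += (a - a**-2) * dt
        a += v * dt
    print(a)                           # far above 1: the universe has run off
    ```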

    The Einstein-de Sitter model is an unstable fixed point in exactly the same sense. This fact was used against the static Einstein model, but in favour of the Einstein-de Sitter universe ("the universe must be exactly Einstein-de Sitter because if not it would evolve away from it"---an argument often made about our real universe, though of course it is only approximately described by any Friedmann model).

    Yes, your description of the expanding universe is technically more correct, but my simpler version was meant to put the original poster on the right track rather than confuse him more. :-) By the way, though I am in the "expanding-space camp", there are colleagues who don't think that this is a valid way of looking at things. Of course, at the end of the day, what matters is whether one calculates correctly.

    ReplyDelete
  66. "GR would have predicted static or expansion or contraction or accelerating expansion or decelerating expansion depending on the value of the cosmological constant - positive, negative or zero. That's why it's a free parameter."

    Just to be clear (I am not implying that this is what you meant), it is not the case that there is a simple relationship between "static or expansion or contraction or accelerating expansion or decelerating expansion" and "positive, negative or zero". (In all cases I am assuming expansion now; the equations are completely time-symmetric.)

    If the cosmological constant is negative, there is always deceleration, and the initial expansion is always followed by contraction.

    If the cosmological constant is zero, there is always deceleration (unless there is no ordinary matter nor radiation), and the ultimate fate depends on the value of the density parameter or (equivalently, but only in this case) the spatial curvature.

    If the cosmological constant is positive, there is always deceleration at first after the big bang (again, unless there is no ordinary matter nor radiation). If the constant is low enough, there will be no acceleration and the universe will collapse; the limiting case is that it reaches the Einstein static model after an infinite time. If it is larger, the deceleration changes to acceleration and the universe expands forever. (There are also the non-big-bang cases: a universe that was in the static state infinitely long ago and then accelerates forever, and models which contracted in the past to some minimum size, possibly zero if there is no matter, before expanding forever.)
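    To make the three cases concrete, here is a rough sketch (my own, with illustrative density parameters) that integrates the Friedmann equation (da/dt)^2 = H0^2 (Omega_m/a + Omega_Lambda a^2 + Omega_k) forward from today:

    ```python
    # Fate of the universe vs. sign of the cosmological constant (toy values).
    # Units: H0 = 1 and a = 1 today; Omega_k = 1 - Omega_m - Omega_Lambda.
    import numpy as np

    def fate(Omega_m, Omega_L, dt=1e-3, t_max=10.0):
        Omega_k = 1.0 - Omega_m - Omega_L
        a, sign, t = 1.0, 1.0, 0.0
        while t < t_max and a > 1e-4:
            E2 = Omega_m / a + Omega_L * a**2 + Omega_k   # (da/dt)^2
            if E2 <= 0.0:       # turnaround: expansion halts, contraction begins
                sign, E2 = -1.0, 0.0
                a -= 1e-6       # nudge off the turning point
            a += sign * np.sqrt(E2) * dt
            t += dt
        return "recollapses" if a <= 1e-4 else "still expanding, a = %.0f" % a

    for OL in (-0.7, 0.0, 0.7):   # negative, zero, positive cosmological constant
        print("Omega_Lambda = %+.1f:" % OL, fate(Omega_m=0.3, Omega_L=OL))
    ```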



    ReplyDelete
  67. CapitalistIP-

    It is not, of course, "my" notion of completeness, but the terminology introduced in the EPR paper. It is a perfectly clear notion for purposes of foundational analysis, and should not be confused with what you seem to have in mind, which has to do with either predictability-in-principle or predictability-in-practice.

    In a complete theory, as EPR defined it, every real physical feature or aspect of a system at a time is reflected somehow in its theoretical description. In this case, we are considering the theoretical description to be its wave function. If a theory is incomplete, then the theory is missing something, and needs to be completed by postulating a more refined theoretical description, one that captures the missing physical features. Every presently existing physical theory is of course incomplete in this sense, since we have no Theory of Everything. But a theory could also be incomplete even in a restricted domain. For example, a theory of electromagnetism that uses only the electric and magnetic fields in its electromagnetic description of a system is incomplete, as the Aharonov-Bohm effect demonstrates.

    What has this to do with predictability, i.e. whether the theory can be used to predict the outcomes of measurements, or in general the future evolution of the system? Nothing direct.

    A complete theory can fail to yield predictability because the fundamental dynamical laws are stochastic rather than deterministic. In that case, two physically identical systems at a time in physically identical circumstances can evolve differently. That is the usual picture of quantum theory, as codified by von Neumann, with a fundamentally indeterministic and hence unpredictable collapse of the wave function, or, in short, "God plays dice". The standard picture, as Einstein saw, was that QM is complete but indeterministic, with the indeterministic bit somehow associated with "measurement". This leads to the "measurement problem" in one guise (i.e. what physically distinguishes these special indeterministic “measurement” interactions from plain vanilla everyday deterministic interactions described by a Hamiltonian?). The GRW collapse theory has this dynamics, but with no measurement problem because the collapses are not particularly associated with “measurements”. That's why Bell was so impressed with it.

    What EPR showed was the very thing that had bugged Einstein all along, namely that the standard Copenhagen approach not only had to posit indeterminism (God plays dice), but also non-locality ("spooky action-at-a-distance"), since in their experimental situation the collapse not only changes the physical state of the local particle but that of the distant particle as well. Their whole argument is that "the quantum-mechanical description is *not* a complete description", because completeness would entail non-locality. It is a bit buried in the paper because they did not think anyone would actually endorse non-locality.

    So a stochastic dynamics can produce a theory that gives only probabilistic predictions but is still complete, in Einstein's sense of “complete”. What the EPR argument shows is that it also must be non-local. To preserve locality and get rid of spooky action-at-a-distance, you have to have a deterministic theory, at least in the EPR setting. This is where Bell takes up the question: is it possible to preserve locality even granting the incompleteness of the wave function, which is the only hope you have.

    (Con't)

    ReplyDelete
  68. A deterministic theory, on the other hand, must give non-probabilistic predictions from a complete initial description of a system. Two systems that end up differently (e.g. giving different outcomes of a “measurement”) must start out differently. Any practical inability that we have to predict the outcome must be explained by our inability to determine the exact initial state. This is illustrated by Bohmian mechanics or pilot-wave theory. There, the wave function (of the universe) evolves deterministically, linearly, unitarily, and predictably all the time, but we can't predict what the system will do from just that information, because that information is incomplete. According to this theory there is another physical fact about the system (in fact a manifest rather than a "hidden" one)—namely the actual particle positions—that is needed to fill out the theoretical description. This is by far the cleanest, most highly developed, and most precise understanding of quantum theory, but most physicists reject it. In this theory, if a system such as an atom is in its lowest energy state and the wave function is stationary, then the particles are not "fluctuating" or "jumping around": they too are static.

    The attempt to keep the wave function as a complete description but get rid of the collapse leads to Many Worlds, as Schrödinger pointed out (and rejected as absurd) in his "cat" paper, his response to EPR.

    In no case—collapse, Bohm, or Many Worlds—is anything in the vacuum state fluctuating. That was my point. The talk of "fluctuations" is not about any physical process going on in the vacuum state, but about the character of the interaction of the vacuum state with a very non-vacuum physical apparatus we call a "measuring device". But as long as no such physical system is around, e.g. in interstellar space, nothing is fluctuating.

    Now I don't know if this clears up your confusions, because I'm not sure what they are. You said that I was confused without specifying anything wrong in my post. What is wrong in yours is equating the question of the completeness of a theory with the sorts of predictions it makes, on the one hand, and with the sorts of predictions we can make with it (which depends on how much of the physical state we can actually know) on the other. If you can explain why you think I am "confusing myself", or point out an error in this post or the last, that might help us sort this out.

    ReplyDelete
  69. @Tim M. When you say that "the talk of fluctuations is not about any physical process going on in the vacuum state", does that imply that, in your view, the solution to the cosmological constant problem proposed by Unruh is not correct? In his recent paper he literally takes the "vacuum fluctuations" as a form of fluctuating energy that gravitates.

    ReplyDelete
  70. Tim M. - Perhaps instead of saying that you were confusing yourself, I should have just said that I don't understand your point. In any case, I think that you are misleading in attributing the indeterminacy to the measurement process. An electron orbiting a proton in a hydrogen atom experiences the vacuum fluctuations of the electromagnetic field, and they produce the Lamb shift. Many other examples exist that show the reality of vacuum fluctuations.

    The wave function is a useful idealization, but in practice physicists deal with Lagrangians and what can be calculated in approximations.

    Perhaps you would like to put QM into the Procrustean bed of your deterministic philosophy, but it's not a good fit.

    ReplyDelete
  71. @Tim M. - You should think about what the wave function says about the vacuum field, and what it says is that at small scales that field changes very rapidly in space and time. If that doesn't sound like a vacuum fluctuation to you, then I give up.

    ReplyDelete
  72. CapitalistImperialistPig,

    Tim is right about vacuum fluctuations but wrong about EPR and 'the' CI. And pilot waves may be "by far the cleanest and most highly developed and precise" psi-ontic 'understanding' of quantum mechanics but it's still spooky and it's still psi-ontic.

    ReplyDelete
  73. Gabriel- Since I have not read Unruh's recent paper I cannot comment. The point I am making is a simple logical point. The vacuum state itself is stationary: it does not change with time. If it is complete, which most physicists would insist on, then it just immediately follows that nothing is fluctuating. That's a simple, one-step argument. If Unruh really requires something to fluctuate, then either he has adopted a very idiosyncratic interpretation of quantum theory, which I can't even describe, or he is out of luck.

    ReplyDelete
  74. Tim, Gabriel,

    Unruh indeed talks about fluctuations around the average - the same fluctuations that I am referring to (and that Niayesh is referring to). I think you have run into a problem of vocabulary here. The vacuum expectation value is often attributed to "fluctuations" but that's just a fancy way of referring to certain types of sums. There isn't really anything fluctuating here - the result is a constant. But this constant is only an average value around which there should be actual fluctuations.

    Note that this constant does *not* appear in usual QFT (which has been referred to above): not only can you renormalize it away, but even if you don't, it has no effect if space can't curve in response to it.
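    To make the vocabulary concrete, here is a minimal toy sketch (mine, in units hbar = m = omega = 1): the harmonic-oscillator ground state, the simplest "vacuum" there is, has a distribution with nonzero width, but nothing about it changes in time.

    ```python
    # The ground state is stationary: <x> and <x^2> are constant in time, yet
    # the variance <x^2> - <x>^2 is nonzero. That nonzero spread is what the
    # word "fluctuations" refers to; nothing is oscillating.
    import numpy as np

    x = np.linspace(-10, 10, 4001)
    dx = x[1] - x[0]
    psi0 = np.pi ** -0.25 * np.exp(-x**2 / 2)      # ground-state wave function

    for t in (0.0, 1.0, 5.0):
        psi_t = psi0 * np.exp(-1j * 0.5 * t)       # E0 = 1/2: only a phase evolves
        rho = np.abs(psi_t) ** 2
        print("t=%.1f  <x>=%+.3f  <x^2>=%.3f"
              % (t, np.sum(x * rho) * dx, np.sum(x**2 * rho) * dx))
    # Output: <x> = 0 and <x^2> = 0.5 at every time: a static distribution
    # with nonzero width, not a time-dependent jiggling.
    ```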

    ReplyDelete
  75. PS: I wrote about Unruh's paper here. I've been thinking about this paper a lot after writing the post and haven't changed my mind.

    ReplyDelete
  76. Sabine,

    Now you have introduced yet another concept into the discussion: the expectation value. Of course that is "constant", but as an average you get that for free. To note that the expectation value for a given state does not fluctuate is to note a triviality.

    The question is really very simple: take a system in the vacuum state over some period of time. Is there any physical magnitude in that system that is fluctuating, i.e. changing, over that period of time? If you think that the wave function is complete, then the answer trivially must be "no". So if you think the answer is "yes", as you appear to, then you must think that the wave function is not complete, i.e. you endorse (consciously or unconsciously) a "hidden variables" theory.

    Since most physicists deny that they hold a hidden variables theory, they are perforce required to say that there is nothing actually fluctuating or changing in the vacuum state. This is just simple logic.

    If you want to insist that something is really fluctuating, as you seem to, then please answer these diagnostic questions:

    1) Do you regard the wave function of a system as a complete physical description of it, that is, as not leaving out any physical property that the system has?

    2) Does the wave function of the quantum vacuum change with time?

    Just "yes" or "no" for each will do. If you think that either questions is somehow vague or ambiguous, point that out and I sharpen it up.

    ReplyDelete
  77. Tim,

    The vacuum energy (that this post is about) is time-independent. It is an average value. Yes, that's trivial. Please excuse my futile attempt at being polite.

    I think of quantum mechanics (and qft) as an effective theory in which degrees of freedom (call them hidden variables) have been neglected. So, no, I do not regard the (usual) wave function as the complete description. I add usual because it may well be that whatever is the more fundamental theory also has a wave-function.

    As to your question 2): the vacuum state isn't uniquely defined, so it's impossible to answer.

    ReplyDelete
  78. Sabine,

    This is getting very interesting, then. Of course, we all know that present theory is not the final theory, and so is not, in that sense, complete, but we can still get at an important part of how you are thinking here. So granting that there are more degrees of physical freedom than are represented in present theory (so this is, I guess, part of the "unknown sector" of physics we haven't figured out yet), you seem very confident that in what we call the vacuum state (or a vacuum state, if that is degenerate) these unknown degrees of freedom are actually fluctuating. Do you have any idea of what drives these fluctuations? Is the underlying dynamics linear? Unitary? Do you think of the probabilistic nature of quantum predictions as due to these fluctuations? Or is all that just a guess without any real grounds? Do you think it is actually impossible that these unknown physical degrees of freedom are stationary too?

    The EPR paper called into question the completeness of the quantum-mechanical description, and was roundly rejected by Bohr et al. Do you see your view as therefore more on Einstein's side of that debate? Just curious about how you are thinking about all this.

    ReplyDelete
  79. Tim,

    I don't really see the relation of your comments to the argument of my post. There are certain calculations that you can do in qfts and these give certain results and all I am saying is that these don't predict the cosmological constant, never have, and never will.

    I think the dynamics underlying qfts is neither linear nor unitary. Yes, I think the probabilistic nature of quantum mechanics is due to the unresolved degrees of freedom. I wouldn't call them fluctuations, I am not sure how that helps. I don't know what you mean by 'stationary' in that context, so can't answer that question.

    It's mostly a guess. I never looked into the Bohr-Einstein debate so I can't tell to what extent I'd agree with whose argument about what.

    ReplyDelete
  80. Tim
    In real EPR experiments, physicists claim they disproved Einstein. Entanglement of distant particles can be explained by a theory that is probabilistic and non-local or a theory that is deterministic and local. What is ruled out is probabilistic and local. How did they disprove Einstein when he was arguing for a deterministic and local theory? They say they have proven non-locality but that is only true if they also assume the theory is probabilistic, and quantum mechanics is probabilistic. But that is circular reasoning. Non-locality is true if QM is true. But the position of Einstein et al was QM and non-locality are false or incomplete.

    ReplyDelete
  81. Sabine,

    So here is one source of misunderstanding: I have not been responding to your post, just to McLin's question. There is a lot of confusion about what "quantum fluctuation" means, so I thought it was a nice question quite apart from the Cosmological Constant.

    Enrico-
    Once you take account of Bell as well as EPR, then no local deterministic theory can work. You are stuck with non-locality. Could be indeterministic non-locality (GRW) or deterministic non-locality (Bohm), but there must be non-locality.

    ReplyDelete
  82. Tim
    Experiments testing Bell's inequality are inconclusive. They don't rule out locality, as pointed out by Franson et al. (see link). Are there more updated experimental results?
    http://math.ucr.edu/home/baez/physics/Quantum/bells_inequality.html

    ReplyDelete
  83. Enrico

    The link you have is from the early 90s! Of course there are updated results, and all the experimental loopholes have been closed. Try this:

    https://phys.org/news/2017-07-probability-quantum-world-local-realism.html

    The chance of a local explanation is less than 1 in a billion.
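    For anyone who wants to see what is actually being tested, here is a small simulation (my own illustrative sketch, not the experiments' analysis). The CHSH combination of correlations stays within |S| <= 2 for a local hidden-variable model (the model below is one concrete example; Bell's theorem covers the general case), while the quantum singlet prediction reaches 2*sqrt(2):

    ```python
    import numpy as np

    def E_quantum(a, b):
        return -np.cos(a - b)                 # singlet correlation E(a,b)

    def E_local(a, b, n=200000, seed=0):
        lam = np.random.default_rng(seed).uniform(0, 2 * np.pi, n)
        A = np.sign(np.cos(a - lam))          # Alice's outcome from shared lambda
        B = -np.sign(np.cos(b - lam))         # Bob's outcome, locally determined
        return np.mean(A * B)

    a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4   # CHSH settings
    for name, E in (("quantum", E_quantum), ("local model", E_local)):
        S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
        print("%s: |S| = %.2f" % (name, abs(S)))
    # quantum: |S| = 2.83 (violates the bound); local model: |S| = 2.00
    ```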

    ReplyDelete
  84. Tim
    Sorry but I think this report is hyped. Below is the most important sentence in the report.
    “This means that the quantum world violates either locality (that distant objects cannot influence each other in less than a certain amount of time) or realism (that objects exist whether or not someone measures them), or possibly both.”

    This means they have not falsified Einstein et al., because EPR were arguing for locality AND realism. The experiments cannot determine whether locality or realism or both was violated. By the way, it is not a choice between either “hidden variables” or entanglement. Hidden variables are a proposed explanation for what is perceived as “entanglement” in QM.

    ReplyDelete
  85. Sabine,

    As usual I agree with almost everything you say in your post. I do not believe that the Standard Model of particle physics has anything to say about the Cosmological Constant. I wasn't aware of Martin's paper (arxiv:1205.3365), and I haven't read it in detail, but I share the unease with equation (515) in that I have no idea what the parameter mu in that equation means physically.

    But I would like to add my ha'pence on the Standard Model as an effective field theory. Sure it's an effective field theory: from an observational point of view we know we have to add something to get neutrino masses (not difficult, there are a number of feasible models on the market, though it is not yet clear which is the best one), we know we have to add dark matter (completely unknown) and, of course, gravity. But I maintain that it is a very unusual effective field theory. The Standard Model has 19 parameters (I'm including Theta-QCD here but not the cosmological constant). Of these 19 parameters one is relevant (the Higgs mass) and 18 are marginal (not exactly marginal of course, there are logarithmic corrections, but still marginal in the usual terminology). This is unprecedented in the history of physics. In my opinion this is probably significant and Nature is giving us some hint here. We've had effective field theories in particle physics before: the Fermi 4-point weak interaction was an effective field theory, it involved an irrelevant operator, and everyone knew it had to break down --- there had to be new physics at 100 GeV. The Standard Model has no irrelevant operators (in old-fashioned terminology it was called renormalisable). Because all operators are either relevant or marginal, there is no internal evidence of any need for new physics for a very large range of energies, at least up to 10^9 GeV, as determined by the calculations on the instability of the electroweak vacuum (though this is very sensitive to the top quark mass: https://arxiv.org/pdf/1704.02821.pdf).
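    To see how an irrelevant operator announces its own breakdown, a one-line estimate (my numbers, using the standard value of the Fermi constant) recovers the weak scale:

    ```python
    # An irrelevant operator has a coupling of dimension [mass]^-n with n > 0;
    # the Fermi constant has dimension [mass]^-2, so the 4-point interaction
    # must fail near G_F**-0.5.
    G_F = 1.166e-5                                    # Fermi constant, GeV^-2
    print("breakdown near %.0f GeV" % (G_F ** -0.5))  # ~293 GeV: the weak scale
    ```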
    So there is no internal evidence in the Standard Model for any need of new physics from current energies (10^3 GeV) up to 10^9 GeV, six orders of magnitude!
    This is unprecedented in physics. I am not a condensed matter theorist, but realistic models in condensed matter with no irrelevant operators are not usual, and marginal operators are certainly not generic.

    To end my rant, I do not believe that the Standard Model of particle physics is "just another effective field theory" (I am aware that this is a very unorthodox point of view!). It is a very non-generic effective field theory with no irrelevant operators, 18 marginal operators and 1 relevant operator. This is unprecedented in the history of physics and I feel that Nature is probably giving us a hint here that whatever underlies the Standard Model must be something very unusual.

    Brian

    ReplyDelete
  86. Enrico

    You asked about whether we can say with very high confidence from experiments that Bell's inequality is in fact violated (for measurements done at spacelike separation). The answer is "yes". Period. The so-called experimental loopholes have been closed.

    I didn't even read the article, because I was looking for some report about the best experiments. The stuff about "realism" is patent nonsense, as you can see, although admittedly widespread nonsense. Do objects actually exist even when no one is looking at them? Of course they do. No physical theory could possibly suggest or imply that they don't. Adding "realism" as a presupposition of the experiment is literally even sillier than reporting every experimental result like this: "The experiment provides strong evidence that the Higgs boson exists, or that the experimenters were all hallucinating, or are pathological liars". If the physical world does not exist when no one is looking, then physics itself just is not possible.

    As silly as it is, there really are physicists who talk like this. They say ("argue" would be too strong) that we should interpret the result of these experiments as indicating a violation of realism rather than locality. Or, as I like to put it, "Well, nothing actually exists, but thank God it's local".

    ReplyDelete
  87. Enrico:

    Your "logic" is failing you.

    If one finds that either A must not be true, or that B must not be true, or that both A and B must not be true, then, a fortiori, A AND B must not be true.

    What one has is (NOT A) OR (NOT B) is TRUE (where the OR is not an exclusive or, but an inclusive or, so both could be false).

    The negation of this yields A AND B is NOT TRUE.

    It's simple logic.
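    (A four-line exhaustive check of that equivalence, my addition, for anyone who wants it:)

    ```python
    # De Morgan: NOT(A AND B) is equivalent to (NOT A) OR (NOT B).
    for A in (False, True):
        for B in (False, True):
            assert (not (A and B)) == ((not A) or (not B))
    print("the equivalence holds in all four cases")
    ```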

    David

    ReplyDelete
  88. Tim:

    The most general form of Bell's Theorem, as I recall, is that Locality ("that distant objects cannot influence each other in less than a certain amount of time") and Counterfactual Definiteness, together, lead to Bell's inequality (which is violated by empirical evidence, as you know).

    Counterfactual Definiteness is similar to "realism (that objects exist whether or not someone measures them)", but is more narrow and refined, and speaks more specifically to ("hidden") variables or degrees of freedom that have definite values even if you don't try to measure them.

    This is not about "reality" being "really real", or not, even when one doesn't "look".

    For instance, non-relativistic Bohmian mechanics has Counterfactual Definiteness (with the "actual" "particle"), but violates Locality.

    Other interpretations of Quantum Mechanics (QM) take the opposite tradeoff, maintaining Locality (even at the relativistic level) but "giving up" Counterfactual Definiteness (even though they do not "give up" "realism" in a sense that reflects our actual experience).

    In fact, even the integrals that are used within QM, need not be on "surfaces" of "simultaneity", as they are usually formulated, but can involve effective "velocities" that are far smaller (at least down to arbitrarily close to the speed of light), as I showed within my Ph.D. Dissertation. (Unfortunately, I still have yet to obtain an electronic version I can share. I've got to get on that!)

    (I think it can be formulated actually on light-like surfaces, and, maybe, even surfaces involving only effective "speeds" less than that of light, but I haven't proven that, Mathematically, yet.)

    Anyway, maybe this will "shed some light".

    David

    ReplyDelete
  89. David:

    This is a common misconception. Neither Bell nor EPR presupposes counterfactual definiteness. Nor do they presuppose determinism. Rather, as Bell insists, determinism is *derived* from locality and the EPR perfect correlations. And once you have determinism you get counterfactual definiteness for free. So it is not a presupposition that can be denied in order to save locality. Further, the CHSH inequality does not use the perfect EPR correlations, and so never even implies counterfactual definiteness.

    Standard QM is clearly non-local in virtue of the collapses. That was what bothered Einstein about standard QM all along: the spooky action-at-a-distance. That's the whole point of the EPR paper. What Bell showed is that you can't escape the non-locality and still get the right predictions.

    Tim

    ReplyDelete
  90. Standard QM is clearly non-local in virtue of the collapses.

    To avoid confusion, it's good to be clear about the meanings of "standard QM" and "locality". It would be better (in my opinion) to talk about quantum field theory and separability. It goes without saying that non-relativistic quantum mechanics is non-relativistic, meaning it is not Lorentz invariant. Also, the word "locality" is (or should be) reserved for the proposition that no energy or information propagates faster than light, which amounts to Lorentz invariance. So, standard (non-relativistic) quantum mechanics obviously does not satisfy locality (Lorentz invariance), which is why we talk instead about quantum field theory, which IS Lorentz invariant and hence satisfies locality according to our definition (which is the right definition!).

    With that said, quantum entanglement does not entail non-locality, it entails non-separability, where "separable" is a more subtle concept that refers to a degree of (statistical) independence between the results of measurements in one region and the (free?) choice of measurements in another (space-like separated) region. If quantum coherence is maintained, phenomena are not separable in this sense. Notice that Einstein didn't complain that entanglement implies no real existence for the separate entities until measured, he complained that it implies no real independent existence, i.e., that space-like separated physical systems are not separable. He was right about that, but his reasons for regarding this as unacceptable were sketchy.
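    A concrete way to see non-separability (my own sketch, standard linear algebra, not anything from this thread): a two-particle state factors into independent single-particle states exactly when its coefficient matrix has one nonzero singular value, and the singlet has two.

    ```python
    import numpy as np

    singlet = np.array([[0.0,  1.0],
                        [-1.0, 0.0]]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)
    product = np.outer([1.0, 0.0], [0.6, 0.8])       # |0> x (0.6|0> + 0.8|1>)

    for name, c in (("singlet", singlet), ("product", product)):
        print(name, np.linalg.svd(c, compute_uv=False).round(3))
    # singlet [0.707 0.707] -> Schmidt rank 2: entangled, non-separable
    # product [1. 0.]       -> Schmidt rank 1: separable
    ```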

    ReplyDelete
  91. If we assume that reality is both left-handed and right-handed for every one and both negative and positive charged for every one but only inverse makes difference, then entanglement and Bell logic need no nonlocality. It's all due to spatial environment that preserves inversions.

    ReplyDelete
  92. Amos:

    "Also, the word 'locality' is (or should be) reserved for the proposition that no energy or information propagates faster than light, which amounts to Lorentz invariance."

    This is a bold assertion, which is also not correct. Lorentz invariant descriptions of tachyons have been around forever, and tachyons certainly would transmit both energy and information superluminally. So your proposed "definition" of "locality" is certainly not acceptable: it is self-contradictory.

    In any case, to understand what "locality" means in the context of Bell's theorem one obviously has to look at the condition Bell used in his proof, and discussed in his works. You may want to use the term in some other way, but that is just the wrong way if you are discussing Bell.

    Further, it is not clear what you mean by "information": does that mean signals or Shannon information? It is well known that violations of Bell's inequality need not allow for signaling. Indeed, even stronger violations of locality can be non-signaling (see: Popescu boxes). And if you mean no signaling faster than light, then the condition is too weak to capture what Bell (or Einstein) had in mind. When Einstein correctly accused quantum theory of "spooky action at a distance" he was certainly not claiming you could superluminally signal using it!

    Bell has many clear discussions of what he means by locality, and in Bell's sense QM, and just as well QFT, is clearly non-local. And if you think that QFT is Lorentz invariant, think again. In particular, figure out how you plan to solve the measurement problem. Only then can one have a real discussion.

    ReplyDelete
  93. Amos,

    Quite right but that's the trouble with arguing about "nonlocality": it is just nonseparability, but because nonseparability is a distinctly non-classical feature, those who wish to spook themselves and others with it can always 'call that dog by a bad name'. It's a regrettable practice and should be resisted - along with the simply false claims about QM.

    ReplyDelete
  94. Paul Hayes,

    PBR proves that you must have a psi-ontic theory or else violate the basic statistical independence assumptions that underlie all experimental methodology. That is not a false claim of any sort: it is a mathematical theorem.

    ReplyDelete
  95. To understand what "locality" means in the context of Bell's theorem one obviously has to look at the condition Bell used in his proof, and discussed in his works.

    Yes, and what Bell’s inequality rests on is actually not what most people call “locality” (no faster-than-light-signaling) but rather separability. Bell himself was muddled about this. He said things like “The reason I want to go back to the idea of an aether here is because in these EPR experiments there is the suggestion that behind the scenes something is going faster than light”. But of course he recognized that this implies that “things can go backward in time”, so it’s a big problem, unless the “things” convey no energy or information (signaling), in which case at most we are talking about separability, not locality.

    In Bell's sense QM, and just as well QFT, is clearly non-local.

    If the non-relativistic Schrodinger equation were true, locality would be violated, as seen by the fact that it is not Lorentz invariant. (The possibility of tachyons is neither here nor there… literally.) QFT, on the other hand, is manifestly Lorentz invariant (see below), so it satisfies locality in the sense of no faster-than-light signaling. As noted above, “Bell’s sense” of locality was muddled, but his inequality involves what I’m calling separability, not locality.

    If you think that QFT is Lorentz invariant, think again. In particular, figure out how you plan to solve the measurement problem.

    I think QFT is manifestly Lorentz invariant, but I suppose it’s conceivable that the resolution of the measurement problem might result in the overthrow of QFT and replacement with a theory that is not Lorentz invariant. However, QFT already entails quantum entanglement, and yet it is Lorentz invariant, hence the need to distinguish between the distinct concepts of locality and separability.

    ReplyDelete
  96. Amos,

    Lorentz invariant tachyons, even as a theoretical or mathematical possibility, are a flat counter-example to your claim. Your suggestion about the meaning of "local" is provably incorrect. You can't simply wave a counter-example away.

    Again you are not distinguishing signaling (which requires a level of controllability and observability) from information transfer. If you regard the wave function as complete, then QFT, just like QM, is manifestly non-local in the sense of superluminal information transfer, as Einstein saw. The EPR argument works as well for QFT as it does for QM.

    Locality "in the sense of no faster-than-light signaling" just is not the sense that either Einstein or Bell had in mind. Which I already pointed out.

    If you have no resolution to the measurement problem then you have no theory, certainly not one that can predict the results reported by Aspect, Zeilinger, etc.

    Lorentz invariance is neither necessary nor sufficient for any sort of locality. If you want all the details of various properties (no superluminal energy transfer, no superluminal signaling, no superluminal information transfer, with detailed calculations of how much transfer of Shannon information is required to violate the inequalities) it is all in my "Quantum Non-locality and Relativity", with explicitly constructed examples.

    ReplyDelete
    According to all measurements, locality - the speed of causality being c - has never been broken.

    It's bullshit to speak about nonlocal phenomena; we must instead research spatiality as the conservator of inversions.

    ReplyDelete
  98. Amos, Tim,

    I side with Tim: Lorentz invariance is neither necessary nor sufficient for locality. It's not necessary because there are clearly examples which are local and not Lorentz invariant. A set of entirely disconnected points will do - it doesn't get more local than that. It's not sufficient because you can write down Lorentz-invariant theories that are not local.

    As to the example with tachyons, however, it is not at all clear they can indeed be used to transfer information. See e.g. here.

    ReplyDelete
  99. You are not distinguishing signaling from information transfer. ..QFT is manifestly non-local in the sense of superluminal information transfer…

    I’ve distinguished between signaling versus non-signaling, and called one locality and the other separability. By the way, the word “transfer” when applied to non-signaling correlations is problematic, because two space-like separated events may exhibit (over multiple trials) some correlation, but each precedes the other in some frame, so the idea of “transfer”, which tends to imply direction, is misleading. We need to distinguish between signaling (which has a clear sense of directional “transfer” of information) versus spacelike-separated correlations (which don’t have a clear Lorentz-invariant sense of directional transfer). This is the distinction that is designated by the words locality and separability.

    The EPR argument works as well for QFT as it does for QM.

    Sure, but we don’t even need the EPR argument to show that QM violates not only separability but also locality (using the words in my sense), because the non-relativistic Schrodinger equation is not Lorentz invariant, so it would imply a preferred frame and superluminal signaling. Einstein insisted that we should not violate special relativity (the content of which is Lorentz invariance, along with a few other tacit but generally conceded assumptions, such as that tachyons cannot permit superluminal signaling), but non-relativistic quantum mechanics explicitly violates special relativity. QFT fixes this problem, although Einstein never had much interest in learning about QFT (and in his day it was sort of a mess anyway), but as you said the EPR argument still applies. But EPR does not imply superluminal signaling nor even directional transfer of information, it implies spacelike separated correlations, which is what I’m calling non-separability, to distinguish it from superluminal signaling and directional transfer of information.
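    To make the signaling/correlation distinction concrete, here is a tiny calculation (my own sketch, using the standard singlet probabilities): the joint statistics depend on both settings, but Alice's marginal is 50/50 whatever Bob chooses, so no signal reaches her.

    ```python
    import numpy as np

    def joint(a, b):
        """Singlet outcome probabilities for spin measurements along a and b."""
        same = np.sin((a - b) / 2) ** 2 / 2     # P(++) = P(--)
        diff = np.cos((a - b) / 2) ** 2 / 2     # P(+-) = P(-+)
        return {(1, 1): same, (-1, -1): same, (1, -1): diff, (-1, 1): diff}

    a = 0.3
    for b in (0.0, 1.0, 2.0):                   # Bob varies his setting freely
        P = joint(a, b)
        print("b=%.1f  Alice's P(+1) = %.3f" % (b, P[(1, 1)] + P[(1, -1)]))
    # Alice's marginal is 0.500 on every line: she cannot detect Bob's choice.
    ```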

    Lorentz invariant tachyons, even as a theoretical or mathematical possibility, are a flat counter-example to your claim. Your suggestion about the meaning of "local" is provably incorrect. You can't simply wave a counter-example away.

    I’m not trying to wave anything away, I’m saying that “tachyons” that permit superluminal signaling would violate special relativity, and tachyons that do not permit superluminal signaling are not directional transfers in any Lorentz invariant sense, and hence (at most) are just another word for non-separability. If you want to conceive of spacelike-separated correlations as being somehow facilitated by “tachyons”, you’re free to do so, but that would not constitute superluminal signaling nor even directional transfer of information (without violating special relativity). The point is to clearly distinguish between directional transfers of information (signaling) versus non-directional spacelike-separated correlations.

    Lorentz invariance is neither necessary nor sufficient for any sort of locality.

    Taking “locality” to mean no superluminal (directional, in the Lorentz invariant sense) transfer of energy or information (meaning signaling), I think Lorentz invariance is sufficient for locality, though not of course for separability.

    It's not sufficient because you can write down Lorentz-invariant theories that are not local.

    Do you mean “not local” in the sense that they permit superluminal signaling, i.e., Lorentz invariant directional transfer of energy or information? Is this referring to achronal cylindrical universes, or some such thing? Or are you using the word "local" with some other meaning?

    ReplyDelete
  100. Amos

    Communication here is simply going to be impossible if you insist on using words in an idiosyncratic way. You certainly cannot make any contact at all with Einstein's and Bell's concerns.

    You write:
    "I’ve distinguished between signaling versus non-signaling, and called one locality and the other separability."

    Yes, and that is creating nothing but confusion. The distinction between theories that allow superluminal signaling and theories that don't already has a name: the signaling/non-signaling distinction. That simply has nothing to do with locality or separability. The fact that this leads to a complete mess is illustrated by the following sentence:

    "But EPR does not imply superluminal signaling nor even directional transfer of information, it implies spacelike separated correlations, which is what I’m calling non-separability, to distinguish it from superluminal signaling and directional transfer of information."

    No, the EPR correlations don't require non-separability in any sense at all. The EPR correlations can be perfectly accounted for using a completely local and separable theory. This is the sort of Bertlmann's socks explanation that Bell contrasts with the violation of his inequality. The EPR correlations, of course, do not violate the Bell inequality. So the fact that by your definition EPR requires a failure of separability shows only the incorrectness of your definition, nothing else.

    The example of a completely Lorentz invariant theory with superluminal signaling is in my book. It is easy to construct: you allow signaling from the transmitter to a locus at constant Spacelike Invariant Interval from the emitter. Since the Invariant interval is, well, Invariant the theory is Lorentz invariant but allows superluminal signaling. It is a flat disproof of your "definition".

    ReplyDelete
  101. We, as a society, seem staggeringly drawn to short term profits instead of long term “profits.” (Quotes because, in the case of science, the profits are a deeper understanding of Nature.) That is an absurd solution to anything.

    In this case, the problem is that the emphasis of science has morphed from understanding to “measurables” like number of citations. Monetizing it will only distort the focus even further.

    ReplyDelete
    Communication here is simply going to be impossible if you insist on using words in an idiosyncratic way.

    The concept of 'locality' in physics arose as an alternative to “instantaneous action at a distance”, and was refined in special relativity to the idea that no action can propagate faster than light. It is not idiosyncratic to define the word ‘locality’ in this way. Also, the key concept of separability (as distinct from locality) is well recognized in discussions of quantum entanglement. For example, the Stanford Encyclopedia of Philosophy article says “EPR make two critical assumptions… The first assumption (separability) is that at the time when the systems are separated, maybe quite far apart, each has its own [independent] reality… They need this assumption to make sense of another. The second assumption is that of locality. Given that the systems are far apart, locality supposes that “no real change can take place” in one system as a direct consequence of a measurement made on the other system. They gloss this by saying ‘at the time of measurement the two systems no longer interact.’” (Note “consequent” and “action”.) In these terms, as Bohr and others argued, quantum entanglement violates separability but not locality. There is no action at a distance.

    The example of a completely Lorentz invariant theory with superluminal signaling is in my book… you allow signaling from the transmitter to a locus at constant Spacelike Invariant Interval from the emitter. It is a flat disproof of your "definition".

    I don’t think definitions can be disproved. Your example of superluminal signaling across all spacelike intervals entails action propagating faster than light, so you have causal loops, etc., which would indeed violate locality, but all the evidence is that no action ever propagates faster than light. The traditional association of Lorentz invariance with locality is based on the tacit assumption (or observation) that action does not propagate along spacelike intervals.

    You certainly cannot make any contact at all with Einstein's and Bell's concerns.

    Einstein never used the word “locality”, he talked about the independent existence of space-like separated entities, which is separability. I’ve already quoted Bell on his belief that we need an aether because the EPR experiments suggest something is moving faster than light. Bell himself acknowledged that this worried him, because (he said) it conflicted with Lorentz invariance. (“Behind the apparent Lorentz invariance of phenomena there is another level that is not Lorentz invariant.")

    The EPR correlations can be perfectly accounted for using a completely local and separable theory. This is the sort of Bertlmann's socks explanation that Bell contrasts with the violation of his inequality. The EPR correlations, of course, do not violate the Bell inequality.

    You lost me. Obviously quantum entanglement violates the Bell inequalities, so when you say the EPR correlations do not violate those inequalities are you trying to make some point about the specific example in the original EPR paper, e.g., pointing out that it isn’t fully general, or are you really claiming that quantum entanglement in general does not violate the inequalities, and can be explained like Bertlmann’s socks?

    ReplyDelete
  103. Amos

    Of course you can disprove a proposed definition, unless you mean it as a stipulation. If you want to stipulate that you, yourself, are going to use "local" as a synonym for "Lorentz invariant", then just say that. It is a weird stipulation, and will certainly cause all sorts of confusions, because that is not what "local", as used by everyone else, means. I thought you were rather proposing this as an account of what "local" means as the term is actually already used. As such, it is subject to counterexample. I just gave you one.

    We have a perfectly good term for Lorentz invariance, namely "Lorentz invariance". If that is all you mean by "local" then just drop the term "local" altogether: you will lose nothing but the extra confusion.

    Bell did worry, having proven that we need non-locality, whether it can be implemented in a Lorentz invariant way. He was encouraged by his analysis of the non-relativistic GRW collapse theory (see the end of "Are There Quantum Jumps?"), and his intuition there was right: a completely Lorentz-invariant non-local version of GRW was constructed later by Roderich Tumulka. I can give the citation, if you want. But in order to appreciate the theory, you have to recognize that a "non-local but Lorentz invariant theory" is not a contradiction in terms, as your stipulation would require.

    I can't imagine why you continue to insist on this account of "local", which is simply false as a claim about how the word is used. Why can't you just acknowledge that?

    If you have not done so, start with "Are There Quantum Jumps?", and we can go on from there. But we won't get anywhere with your bizarre insistence on misusing language.

    ReplyDelete
  104. Amos-

    Oh, and yes, by the EPR correlations I mean those discussed in the EPR paper, which do not violate Bell's inequality.

    ReplyDelete
  105. Of course you can disprove a proposed definition, unless you mean it as a stipulation.

    I suppose someone might propose a definition that is self-contradictory or non-sensical, although in that case I wouldn’t say their definition was “disproved”, I would say it was self-contradictory, etc. But the definition we’re discussing is simply the standard definition of “locality” found in countless (though admittedly not all) references throughout the literature, i.e., no superluminal action (which entails no superluminal signaling). I don’t see how this definition can be disproved.

    If you want to stipulate that you, yourself are going to use "local" as a synonym for "Lorentz invariant" then just say that.

    No, I want to stipulate the most common formal definition of "locality" in physics, which is no superluminal action. It is distinct from, and not to be confused with, separability. (The proposition “no superluminal action” is not identical to the concepts of Lorentz invariance and special relativity without some further stipulations, but they are obviously related.)

    You have to recognize that a "non-local but Lorentz invariant theory" is not a contradiction in terms, as your stipulation would require.

    If “locality” (as distinct from separability) is defined as “no superluminal action”, and if you agree that superluminal action would violate Lorentz invariance, then “non-local but Lorentz invariant” is indeed a contradiction in terms. This doesn’t imply that we have the "wrong" definition of locality, it just implies that quantum entanglement doesn’t violate locality (nor Lorentz invariance, nor special relativity).

    I can't imagine why you continue to insist on this account of "local", which is simply false as a claim about how the word is used.

    I’ve just checked several references, and whenever the word “locality” is formally defined, the definition usually amounts to “no superluminal action”. I wouldn’t claim that this is how the word is always defined, but I would say it’s the most common definition. I’m also advocating what I think is the most common definition of separability, and in these terms quantum entanglement does not violate locality (nor Lorentz invariance, nor special relativity), but it does violate separability.

    Oh, and yes, by the EPR correlations I mean those discussed in the EPR paper, which do not violate Bell's inequality.

    Sure, but their belief that this kind of situation entails non-separability of the wave function was vindicated by Bell’s more detailed analysis of different possible measurements.

    By the way, I don't think Bell's "Are There Quantum Jumps" would be a good starting point for any discussion. Spontaneous collapse based on non-relativistic quantum mechanics, which manifestly violates Lorentz invariance, doesn't work, so we would have to immediately go to your follow-up reference, in which you say Lorentz invariance of this spontaneous collapse model is achieved... but we already have a Lorentz invariant quantum field theory, i.e., a theory in which there is no superluminal action, and yet there is quantum entanglement, which implies that the wave function of the two spacelike-separated components is not separable... even though there is no superluminal signaling or action. How does your preferred Lorentz invariant spontaneous collapse model differ from standard quantum field theory?

    ReplyDelete
  106. A completely Lorentz-invariant non-local [sic] version of GRW was constructed later by Roderich Tumulka.

    The paper begins by remarking that the stochastic spontaneous multi-time collapse model

    “turns quantum mechanics into a completely coherent theory. It resolves all paradoxes of quantum mechanics, in particular the measurement problem, and accounts for all phenomena of quantum mechanics in terms of an objective reality governed by mathematical laws… “

    At the conclusion, when considering predictions of the model, it says

    “A difficulty with obtaining any predictions at all from the model is that it does not involve any interaction, and thus does not support the formation of macroscopic bodies such as observers or apparatuses.”

    In this regard, it comments that " the difficulty of including interactions... is encountered by every kind of relativistic quantum mechanics".

    ReplyDelete
  107. Amos

    I really think you have to try to be more precise in what you are writing. Here is a quote from above:

    "Sure, but we don’t even need the EPR argument to show that QM violates not only separability but also locality (using the words in my sense), because the non-relativistic Schrodinger equation is not Lorentz invariant, so it would imply a preferred frame and superluminal signaling. Einstein insisted that we should not violate special relativity (the content of which is Lorentz invariance, along with a few other tacit but generally conceded assumptions, such as that tachyons cannot permit superluminal signaling), but non-relativistic quantum mechanics explicitly violates special relativity."

    You make a bunch of non-equivalent claims here, and treat them as if they say the same thing, so it is not possible to even know what you mean.

    Claim 1:"the non-Relativistic Schrödinger equation is not Lorentz invariant".

    Claim 1 is true, of course.

    Claim 2: "so it would imply a preferred reference frame and superluminal signaling". Claim 2 is false. It does presuppose a unique temporal foliation, but not at all superluminal signaling. Superluminal (or instantaneous) signaling is not possible in standard non-relativistic QM so long as there is no interaction term in the Hamiltonian between sufficiently separated systems. And nothing in QM requires such a term.

    Claim 3: "Einstein insisted that we should not violate special relativity (the content of which is Lorentz invariance, along with a few other tacit but generally conceded assumptions, such as that tachyons cannot permit superluminal signaling)". The "content" of SR is exactly the issue here. If you take it to be just Lorentz invariance, then SR is consistent with both superluminal action and superluminal signaling. Again, see examples in my book. If it is something more than Lorentz Invariance, then spell out what you mean.

    "No superluminal action" is not at all obviously equivalent to "no superluminal signaling" or "no superluminal information transfer." That depends on what goes in to defining an "action". So this is yet another concept floating around that you have not distinguished from the rest.

    I should warn you that if you do make fine and exact distinctions between superluminal signaling, superluminal action and superluminal information transfer, then it will not be at all obvious how to categorize a theory if it is not crystal clear how it solves the measurement problem. The difficulty with standard QFT is that it is completely unclear how it confronts that problem. So if you want to keep using QFT as an example, please explain how, in your view, it solves the problem. GRW and Bohm do not have this difficulty, because it is clear how they solve the problem. And both theories are non-local.

    con't

    ReplyDelete
  108. con't

    Let's define a relativistically local theory this way: a theory is relativistically local if an intervention at one location can have no effect on the physical state at any space-like separated location. That is, the state at the distant location is unchanged before and after, or with or without, the intervention. What Bell proves is that no such theory can be empirically adequate. So the actual world is not relativistically local in this sense.

    It is true that standard QM achieves this non-locality through entanglement. But that does not make it any less non-locality. To call entangled distant states non-separable is fine. But accepting non-separability does not negate non-locality. Since we know by PBR that the wave function represents a real physical feature of a system, the world also contains non-separable states. So we have both non-separability and non-locality in standard QM.

    Tumulka's paper, including the parts you cite, is correct. The toy theory violates the Bell inequalities with a completely Lorentz invariant, non-local theory with entangled states. The problem of adding an interaction Hamiltonian without making everything mathematically undefined haunts every QFT, and so is not peculiar to Tumulka's theory and cannot be held against it.

    ReplyDelete
  109. "[the non-relativistic Schrodinger equation] does presuppose a unique temporal foliation, but not at all superluminal signaling."

    Yes, I mis-spoke. What I had in mind was Bell’s neo-Lorentzian belief (expressed as late as 1986) that a preferred frame somehow legitimizes superluminal signaling and shields us from the associated causal loops etc. I don’t agree with any of that, so I (sloppily) tossed in “superluminal signaling” along with “preferred frame” to critique Bell’s entire proposal, not meaning to suggest that the non-relativistic Schrodinger equation actually facilitates superluminal signaling.

    The "content" of SR is exactly the issue here. If you take it to be just Lorentz invariance, then SR is consistent with both superluminal action and superluminal signaling… If it is something more than Lorentz Invariance, then spell out what you mean.

    Again, I take special relativity to include the proposition that there is no superluminal signaling, which rules out your counter-example.

    "No superluminal action" is not at all obviously equivalent to "no superluminal signaling" or "no superluminal information transfer." That depends on what goes in to defining an "action". So this is yet another concept floating around that you have not distinguished from the rest.

    I’m using the word “action” with the standard physics definition, as it’s traditionally used in phrases such as “no action at a distance”. It has units of [energy]*[time], or sometimes [momentum]*[length], and the statement that there is no superluminal action signifies that there is no superluminal flow of energy or momentum. I take this to imply no superluminal signaling, since any signaling entails some flow of energy or momentum.

    The difficulty with standard QFT is that it is completely unclear how it confronts [the measurement] problem. So if you want to keep using QFT as an example, please explain how, in your view, it solves the problem. GRW and Bohm do not have this difficulty, because it is clear how they solve the problem.

    I don’t think the measurement problem is resolved by either GRW or Bohm. But even more fundamentally, neither of them is Lorentz invariant, and the attempted relativistic version of GRW unfortunately cannot model any interactions (so it can’t model any observers or apparatuses, etc). I don’t see how we can claim resolution of the measurement problem without being able to model interactions. (A measurement is an interaction.) I think QFT has a framework for modeling interactions, although I don’t claim that it resolves the measurement problem. But I don’t think we need to resolve the measurement problem before we can agree on the standard definitions of locality and separability.

    Let's define a relativistically local theory this way: a theory is relativistically local if an intervention at one location can have no effect on the physical state at any space-like separated location.

    From my cursory scan of the literature, it appears that physicists most often define locality as “no superluminal action”, whereas philosophers sometimes agree with the physicist’s definition and other times give the word a variety of meanings, such as the one you’ve proposed above. From my perspective, your proposed definition conflates two key concepts. The most striking thing about quantum entanglement is that the correlations seem (by normal classical principles) to require superluminal action, and yet there is no superluminal action. (I’m using the word “action” in the physicist’s sense). So, using words with their original meanings, quantum entanglement doesn’t violate “locality” (there is no action at a distance), but it violates something else, that has been called “separability”. We can’t factor the wave function of two entangled particles into two independent wave functions, even if they are spacelike separated. Thus Einstein complained that the separate particles don’t have independent existence.

  110. Dan: OK, let's clear up some confusions here, so we can make progress. Here is an absolutely key mistake in your reasoning:

    "I’m using the word “action” with the standard physics definition, as it’s traditionally used in phrases such as “no action at a distance”. It has units of [energy]*[time], or sometimes [momentum]*[length], and the statement that there is no superluminal action signifies that there is no superluminal flow of energy or momentum. I take this to imply no superluminal signaling, since any signaling entails some flow of energy or momentum."

    You know, I never even considered taking the notion of "action at a distance" to have anything to do with the "action" defined in Newtonian physics! The ordinary concept of action obviously predates Newton by millennia. Anyway, if this is what some physicists have in mind, then better to call it "superluminal energy transfer" or "superluminal momentum transfer" or, really, "superluminal action transfer" if you want to get the units right. But however you sort that out, it is just false that signaling requires it! That is easy to see.

    First, actual information transfer or actual signaling does not require any actual energy or momentum transfer. In my book, I use the example from Sherlock Holmes: Holmes gains information about a crime from the fact that the guard dog did not bark. But the non-barking did not transfer any energy or momentum to anything. It does transmit information, and can be used to signal. For example, I want to be able to signal you that the bad guy has arrived at my home without in any way alerting him. So we agree that I will call you every five minutes as long as he is not there. After he comes, I signal you that he has come by not phoning, which clearly transmits no energy or momentum.

    Now these cases do depend on the *possibility* (but not actuality) of energy transfer. But it is easy to see that this is not at all conceptually necessary. There are atoms that spontaneously decay, that fire off beta particles from time to time. These decays do not violate energy or momentum conservation and require no input of energy or momentum or action. But suppose that by turning a knob in my room, I could make it so that the decays all occur in one direction, and by turning it the other way, that they all occur in another direction. Then I could signal you without sending any energy or momentum. So the "no energy or momentum transfer" requirement does not entail "no superluminal signaling" or "no superluminal information transfer". This is all spelled out in detail in my book.

    The physicist's definition you mention is therefore useless. No one advocates that sort of action at a distance, but pretty much nothing about either Lorentz invariance or information transfer or signaling follows from it anyway.

    Try to absorb these distinctions and see if they clear things up.

  111. Dear Tim,

    "PBR proves that you must have a psi-ontic theory or else violate basic statistical independence assumptions that underlie all experimental method."

    In your paper "What Bell did", in regards to superdeterminism, you say something similar:

    "Of course, such a purely abstract proposal cannot be refuted, but besides being insane, it too would undercut scientific method. All scientific interpretations of
    our observations presuppose that they have not have been manipulated in such a way."

    Can you please point me to a paper/material presenting a good argument that the statistical independence assumption "underlies all experimental method"? As far as I can tell one only needs to exclude conspiracies (correlations caused by some intelligent mind, like an alien, or a god) but one should allow correlations caused by known physical interactions between objects.

    If you look at the sky, the objects you see are not moving independently of one another. If you stay inside our galaxy, pretty much everything orbits the galactic center. This is a non-trivial correlation. But even if you look at other galaxies, their motion cannot be described as "independent". And this has a very good reason. In both Newtonian gravity and GR the motion of an object is a function of position/momenta of all other objects. So, the assumption that some objects or group of objects are independent is a denial of those theories.

    In Bell's theorem we have Alice and Bob. Their "choices" are determined by the motion of their internal particles (mostly electrons and quarks). If the theory describing that motion is a field theory (like Maxwell's electrodynamics or GR) it follows that it is actually impossible for them to be independent. Of course, a correlation between the motion of their internal particles does not necessarily imply the correlation required by Bell, but I see no good reason to deny it either. It seems to me that the most logical position would be agnosticism. We have a clear physical mechanism that could lie at the basis of the statistical dependence between the systems Alice and Bob (the electromagnetic interaction between the charged particles inside them), but we just cannot perform the required calculations to see if this mechanism actually leads to the Bell correlations. But the burden of proof should be on Bell's side.

    Regards,

    Andrei

  112. Tim Maudlin: Superluminal signaling is not consistent with Lorentz invariance. In the example below, if the deactivation device is able to send a faster-than-light signal to the bomb, then special relativity ends up in a contradiction: the bomb explodes in the frame of the train but does not in the frame of the tunnel:

    http://www.people.fas.harvard.edu/~djmorin/chap11.pdf
    pp. 41-42: "11.6. Train in a tunnel. A train and a tunnel both have proper lengths L. The train moves toward the tunnel at speed v. A bomb is located at the front of the train. The bomb is designed to explode when the front of the train passes the far end of the tunnel. A deactivation sensor is located at the back of the train. When the back of the train passes the near end of the tunnel, the sensor tells the bomb to disarm itself. Does the bomb explode?"

  113. Pentcho,

    This argument is wrong because it implicitly uses, but does not state, the presence of an arrow of time. This is unfortunately a very common confusion and I would appreciate it if you stopped spreading it. Superluminal signalling is perfectly consistent with Lorentz invariance provided you consistently take into account that our universe - for better or worse - does have an arrow of time.

  114. A theory is relativistically local if an intervention at one location can have no effect on the physical state at any space-like separated location. That is, the state at the distant location is unchanged before and after, or with or without, the intervention.

    The word “intervention” is loaded with conceptual baggage, as is the phrase “have an effect”. If A and B are two spacelike-separated events, they have no Lorentz invariant temporal ordering, so how do we decide if A “had an effect on” B, or if B “had an effect on” A? You might say “well, B is affected by the intervention that happened at A”, so the directionality of the “effect” is determined by where we place the “intervention”, the effect of which then propagates to other spacelike-separated events. The intervention is thus conceived as some free external… well… intervention, which is not a sound basis for a definition of locality. The alternative, respecting the lack of directionality and invariant temporal ordering, is to say “well, suppose A and B affect each other in a reciprocal way”. That’s fine, but then it isn’t a directional propagation, it is simply a correlation, which brings us back to separability.

    You know, I never even considered taking the notion of "action at a distance" to have anything to do with the "action" defined in Newtonian physics!

    That’s interesting.

    I want to be able to signal you that the bad guy has arrived at my home without in any way alerting him. So we agree that I will call you every five minutes as long as he is not there. After he comes, I signal you that he has come by not phoning, which clearly transmits no energy or momentum.

    That’s just a crude “shadow” signal wave conveyed by a carrier wave. That isn’t an example of signaling without energy or momentum transfer.

    Try to absorb these distinctions and see if they clear things up.

    I think we have a clear disagreement about whether superluminal signaling is possible. You say it is, and I say it isn’t. I think all the empirical evidence is on my side. In particular, even experiments involving entangled particles do not exhibit any superluminal signaling. The fact that a shadow can “move faster than light” is well known, and it is not an example of superluminal signaling.

    In summary, the first of my two comments at the start of this thread was that when discussing quantum entanglement it’s best to talk in a Lorentz-invariant context, so we should talk about quantum field theory rather than non-relativistic quantum mechanics (even though quantum entanglement is present in both). You responded that quantum field theory is not Lorentz invariant, at least not without a resolution of the measurement problem, and you then offered what you regard as a solution of the measurement problem: a stochastic multi-time spontaneous collapse model… which, however, tells us nothing about interactions (including measurements!). I don't believe your preferred theory resolves the measurement problem (to the extent that a problem exists).

    My second comment was that the word “locality” in physics commonly is (and ought to be) defined as “no superluminal action”, and according to this definition quantum entanglement does not violate locality, although it does violate what is called separability. You said this definition of locality is easily disproved, and this led you to explain that you believe in superluminal signaling. This seems to be the core of our disagreement, as I don’t believe in superluminal signaling.

  115. I think we have a clear disagreement about whether superluminal signaling is possible.

    To clarify, I meant we disagree about whether signaling without energy transfer is possible. Since energy can't propagate faster than light, this means (to me) no superluminal signaling, but since you dispute that signaling requires energy, I need to make the distinction.

  116. "After he comes, I signal you that he has come by not phoning, which clearly transmits no energy or momentum."

    For me, this confuses information (data) or the lack thereof, with the conclusions that can be drawn from it. To say that no information (no signal) is information is like saying zero is a positive number. Conclusions can be drawn from a value of zero, but that does not make zero a positive number.

    Further, conclusions can only be drawn from the lack of a signal if the possibility of a signal exists. If your phone doesn't work, the lack of a call from you does not permit a valid conclusion.

  117. Pentcho,

    There is nothing paradoxical or self-contradictory about the problem as stated: the difficulty is that it is underdescribed for our purposes. The answer to the question as given is of course "yes" so long as all signaling is at best luminal. If we want to postulate superluminal signaling, then we have to provide physical details of how it works. If you use the simplest form of manifestly Lorentz invariant superluminal signaling (what I call a Case-2 Superluminal Signal in "Quantum Non-Locality and Relativity") then you have to provide some parameters to answer the question: the exact physics of the signal, the length of the tunnel, the relative speed. It could be that the bomb explodes and it could be that it does not. Of course you can't answer questions about superluminal signals without physically specifying how they work. Case-2 signals don't use tachyons, and are transparently Lorentz invariant. If you are interested, this is all spelled out in painful detail in my book.

  118. Dear Andrei,

    Thanks for the chance to be clearer about this. There is a matter of definition here that seems to trip people up a lot, and I would like to get it cleared up.

    The sense of "statistical independence" required for both Bell's proof and the PBR theorem is this: set up an experimental technique designed to make a "random selection" from an ensemble. For example, given a stream of incoming electrons, "randomly select" some to have their x-spins measured and others the y-spins. Or given a set of boxes, all with the letter psi written on them, "randomly select" two. Or given a set of experimental subjects, "randomly assign" half to an experimental group and half to a control group.

    The statistical independence assumption is just that the obvious kinds of methods for doing this—coin-flipping, or use of random number generators, or by the parity of the digits in the decimal expansion of pi, or even "the whim of the experimenter" (although the last is the most problematic)—are methods that yield statistically random subgroups. That is, as the number of members of the subgroup grows, it becomes more and more certain that the selected groups are statistically like the whole ensemble and hence statistically like one another with respect to all physical characteristics, known and unknown. This is the basic method of a randomized trial.

    Suppose we want to test a flu vaccine, so we take a group and "randomly" divide them into the control and experimental group, give the vaccine to the experimental group, then expose both to the flu virus. If no one in the experimental group gets sick and everyone in the control group does, then we conclude that the vaccine works. But it is logically possible that the difference is not due to the vaccine at all: there is some gene that protects from the flu, and all the people with the gene just happened to end up in the experimental group, and all the people without in the control. We would reject such a possibility as not worthy of consideration. And if we really were worried about it, we could switch from one "physical randomizer" (a coin flip) to another (the digits of pi) to check. The assumption of statistical independence is just the assumption that what we call physical randomizers do in fact yield such statistically representative subgroups. To deny it is basically insane. And you see how it would undercut all the experimental methods of randomized studies.
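    A minimal sketch of this randomized-trial logic in Python (all numbers hypothetical): a hidden protective gene is scattered through a population, a coin flip assigns each subject to a group, and both groups come out statistically like the whole ensemble with respect to the hidden trait.

        import random

        random.seed(0)

        # Hypothetical population: 1 = carries the protective gene, 0 = does not.
        population = [1 if random.random() < 0.3 else 0 for _ in range(100_000)]

        # "Physical randomizer": a coin flip assigns each subject to a group.
        experimental, control = [], []
        for subject in population:
            (experimental if random.random() < 0.5 else control).append(subject)

        def gene_frequency(group):
            return sum(group) / len(group)

        print(gene_frequency(population))    # ~0.30 in the whole ensemble
        print(gene_frequency(experimental))  # ~0.30 in the experimental group
        print(gene_frequency(control))       # ~0.30 in the control group

    As the groups grow, the subgroup frequencies converge on the ensemble frequency for the hidden trait too; that convergence is what the statistical independence assumption asserts of real physical randomizers.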

    Bell and PBR do assume that physical randomizers choose their subgroups in a way that is statistically independent of the physical differences among the members of the ensemble, so the chosen groups are statistically like the whole ensemble and hence like each other. The assumption is so natural, and the worries about it so conspiratorial, that you usually don't even mention the assumption. Denying it is basically crazy.

  119. Dan:

    We are making some progress here, but there are still some very basic confusions. One is your claim that I believe in superluminal signaling. Since I never said that, nor implied it, nor suggested it, there is a real breakdown in communication here. What I did say is that Lorentz invariance is not at all the same condition as no-superluminal-signalling. One demonstrates that by constructing a manifestly Lorentz-invariant theory which permits superluminal signaling. Whether the theory is physically plausible or realistic is neither here nor there: you are just showing that the content of the one concept is distinct from the content of the other.

    There are several ways to construct such theories. One is by postulating tachyons, where the physics of the tachyons is given in a Lorentz invariant way. Another (what I call Case-2 Superluminal Signals in my book) does not use tachyons but uses the relativistic metric directly instead. You can see the details in Quantum Non-Locality and Relativity, chapter 4. These constructions are proofs of the difference in the two concepts. It is not that I believe that there actually are Case-2 signals in nature! That is completely beside the point.

    If I have a certain kind of robust correlation between spacelike separated events—suppose the correlations even allow one to signal—how do I decide which is the cause and which the effect? You suggest that the only method is by temporal order: the earlier is the cause and the later the effect. But spacelike separated events have no objective temporal order. So there is no objective distinction between cause and effect.

    But you are wrong about how we could decide what is the cause and what is the effect. If I have control over one of the events—if it is, say, pushing a button—and the correlations are strong with some other event such as a light going on, then the button-pushing is the cause because that is effectively a free variable. That is, I can hook the button up to whatever I like—a random number generator or a coin flipper or my own whim—and if the light flashes always in the exact sequence that the button is pushed then the button is the cause and the light the effect. This is independent of anything about time order. This point is made in "Waiting for Gerard", available here: https://www.facebook.com/tim.maudlin/posts/10155641145263398

    Your response about a "crude carrier wave" is incomprehensible. There is no energy or momentum transfer in the case described. Zero. But Shannon information is transmitted, and in a controllable way. A signal is sent.

    You have missed the point about interactions in Tumulka's theory. He has not rigorously defined how to incorporate an interaction term into the Hamiltonian. But violations of Bell's inequality do not require such a term. And it is not a knock on his theory that he has not rigorously defined how to incorporate an interaction term: no QFT has done so at the level of mathematical rigor he requires. But the theory clearly has no measurement problem.

  120. Jim V.

    In order to send Shannon information, the information channel must be operational. Of course. And in order to send a signal it is sufficient to send Shannon information using a controllable transmitter to an observable receiver. That is, you have to be able to tell which state the receiver is in. But given these constraints, one can send information, and even signal, without a speck of energy or momentum being sent, as my examples illustrate.

  121. Dear Tim,

    First, thanks for taking the time for this discussion. I admire your honesty and clarity of thought and I am honored to be able to speak to you.

    Now, I fully agree that if superdeterminism implies some sort of selection bias (we only detect the particles confirming QM’s prediction even if that prediction fails in the general case), then it is not a concept worth speaking about. However, while some superdeterminists might believe in such an approach, this is certainly not my intention (and not ‘t Hooft’s intention either).

    So let me make my view clear:

    1. I do think that the observed correlations are true in all situations. They are representative of how physics works; they are a fact of our universe.
    2. It is possible that these correlations are a result of the “history” of the system (by system I understand the sum of all experimental parts – in our case the two detectors, Alice and Bob, and the source of the entangled particles).

    Let me now give you two examples of how I imagine these correlations to appear in a completely local way without assuming any new physics.

    Example 1.

    It is a fact that all observed planetary systems reveal a non-trivial correlation: all planets orbit in the same plane and in the same direction. Clearly, physics does allow for perpendicular orbits or for planets orbiting in alternate directions, so this observation requires an explanation. It is certainly possible that we are only able to see a biased sample due to a fine-tuning at the Big Bang or whatever, but this would be foolish. The true explanation has nothing to do with any conspiracy or fine-tuning; it comes from looking at the history of the system. All planetary systems start as gas clouds, and because of the way the fundamental particles interact gravitationally and electromagnetically, the clouds take the shape of a disk. The planets are formed from the material in the disk, so they are expected to display the observed correlation.

    In an EPR experiment we also have a system of particles interacting electromagnetically and gravitationally (all the electrons and quarks in A+B+S: Alice, Bob and the particle source). What are the possible states of such a system? Is it possible to start with such a system and arrive in a configuration that would contradict QM, assuming that Maxwell’s EM and GR are true? I am not sure about that. Just as, in the absence of any knowledge of how planets form, you would assume that planets should orbit in all possible directions, you also assume that a classical theory should predict a different result from QM. I see no good argument for this. I think the only possible way to know is to actually simulate an EPR test (as simplified as possible), starting from different initial states, plugging in the equations of the theory to be tested (say classical EM), and seeing what the results are.

    -continuation in the next post-

  122. -continuation-

    Example 2:

    In this example I would also use the classical theory of electromagnetism (as a local, hidden-variable theory) to make my point. Assuming classical EM is true, the following statements are true (just forget for now about the stability of atoms and other issues like that):

    1. The detector setting at Alice is a function of the electric and magnetic fields acting at Alice’s position.
    2. The detector setting at Bob is a function of the electric and magnetic fields acting at Bob’s position.
    3. The spins of the entangled particles are functions of the electric and magnetic fields acting at source’s position.

    But the following are also true:

    4. The electric and magnetic fields acting at Alice’s position are a function of position/momenta of all particles (Alice + Bob + Source).
    5. The electric and magnetic fields acting at Bob’s position are a function of position/momenta of all particles (Alice + Bob + Source).
    6. The electric and magnetic fields acting at source’s position are a function of position/momenta of all particles (Alice + Bob + Source).

    So, we see that the supposedly “independent” subsystems (Alice, Bob, source) evolve according to a very complicated function of the same stuff (the positions/momenta of all particles of the combined system). In fact, the function would be the same in all three cases except for the value of one coordinate (if we imagine a linear disposition of the three experimental parts). It doesn’t seem that the independence assumption is likely to be true. Again, just as in the first example, I cannot claim that classical EM actually explains the correlations, but it has the required mechanisms in place.
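    A toy illustration of points 4-6 above, sketched in Python (a one-dimensional Coulomb model with illustrative numbers, not a claim about any actual experiment): the field at any one location is computed from the positions of all the charges, however distant.

        # Toy 1-D Coulomb model: the field at a point depends on *every* charge.
        K = 8.9875e9  # Coulomb constant, N m^2 / C^2

        def field_at(x, charges):
            """charges: a list of (position in m, charge in C) pairs."""
            E = 0.0
            for xq, q in charges:
                r = x - xq
                E += K * q * r / abs(r) ** 3  # inverse-square law with a sign
            return E

        # Alice at the origin; Bob and the source a million kilometers away.
        # Their charges still enter the function that fixes the field at Alice.
        print(field_at(0.0, [(1.0e9, 1.6e-19), (2.0e9, -1.6e-19)]))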

    In my understanding ‘t Hooft’s approach also makes use of a discrete version of a classical field theory (his cellular automaton), and the reason why it might work is exactly the same as in my examples above. The experimental parts are not independent of each other because the physics is described by a field theory of unlimited range.

    Best Regards,

    Andrei

  123. Dear Andrei,

    Let me point out some absolutely critical points that are being overlooked in your examples.

    The planetary orbit example is what Bell illustrated with the story of Bertlmann's socks. Because Bertlmann has an odd sense of style, he always chooses to wear different colored socks in the morning. This creates a correlation: the colors of the socks are not statistically independent of each other. His right sock is, say, red 10% of the time and his left sock also red 10% of the time (assuming he has no right/left bias), but both socks are red 0% of the time rather than 1%, which is what would happen if there were no correlation between the sock colors. So seeing that one sock is red provides information about the other: it isn't red. This is similar to the planets: seeing the orbit of one planet provides information about the rest because they are correlated. This correlation demands an explanation, and it is easy to give one. It is a common cause explanation: in the case of the socks the common cause is Bertlmann's choice in the morning, in the case of the planets the process of disk accretion and planet formation. This is exactly how we expect these sorts of persistent correlations between separate systems to be accounted for: by common causes. Let's call this a Bertlmann's socks explanation.
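    A minimal simulation of these sock statistics in Python (a sketch with a hypothetical ten-color wardrobe; all numbers illustrative): the marginals come out near 10% each, while the joint frequency of two red socks is exactly zero rather than the 1% that independence would give.

        import random

        random.seed(0)
        colors = ["red", "blue", "green", "grey", "pink",
                  "gold", "cyan", "plum", "teal", "ecru"]

        days = 100_000
        right_red = left_red = both_red = 0
        for _ in range(days):
            right, left = random.sample(colors, 2)  # always two different colors
            right_red += (right == "red")
            left_red += (left == "red")
            both_red += (right == "red" and left == "red")

        print(right_red / days)  # ~0.10
        print(left_red / days)   # ~0.10
        print(both_red / days)   # exactly 0, not the 0.01 of independent socks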

    Now Bell's whole point is that *violations of Bell's inequality cannot be accounted for by this kind of explanation*. His point in bringing up Bertlmann is exactly that the violations of Bell's inequality *cannot* be recovered by this sort of account. So if you are thinking that the planetary orbits provide a good model of how to handle violations of Bell's inequality, you are off on the wrong foot. They can't be explained that way.

    Note that I have said violations of Bell's inequality can't be explained that way, not that the EPR correlations can't be explained that way. The perfect correlations noted by EPR can trivially be accounted for by a common cause, and that is just what EPR assume accounts for them. Their point is that the common cause explanation, the Bertlmann's socks explanation, requires that the theory be deterministic. As Bell put it, if neither sock even *has* a color until it is measured (someone looks at it), and if each sock is sometimes red but both socks are never red, *how does one sock "know" which color the other sock became when measured?* You make this question acute by having the measurements take place far away from each other, ideally at spacelike separation. If the sock-colors are not already predetermined at the source (Bertlmann's choice in the morning), if the outcome of the sock-color "measurement" is really probabilistic rather than deterministic (as standard QM says), then how does one sock know what the other sock did so they can always avoid being the same color? That, EPR says, would be spooky action-at-a-distance or, as Einstein also said, *telepathy*, or, as Schrödinger said, "magic". EPR and Schrödinger simply refused to accept this action-at-a-distance, which follows from an insistence on indeterminism. Einstein thought that there must be a local, non-spooky, and hence deterministic account of these correlations.

    Con't.

  124. Con't

    So how does Bell prove the impossibility of a local common-cause explanation? His proof treats the choices of which experiment to perform on each of the separated particles as "free parameters" whose settings are statistically independent of the physical state of the particles when they were created. Note that the planet example has no corresponding free parameters at all: there are no choices about what to measure in that case.

    Why think that the choice of what to measure should be statistically independent of the state of the particles at the source? Well, we can use what Bell calls a "physical randomizer" such as a coin flip, or a random number generator, or the parity of the digits of pi to determine the measurement made. Note that this has nothing at all to do with "free will": the "free" in "free parameter" is just code for the statistical independence assumption. After all, if the parity of the digits of pi is being used, how could there possibly be a common cause explanation: no matter what happened in the past, that makes no difference to the parity of the digits of pi! The only way to try to break statistical independence of the measurement setting and the initial state of the particles in this case is by *backwards causation* rather than *common causation*: somehow the (later) setting of the apparatus must determine or influence the *earlier* state of the particles! 't Hooft explicitly denies such retrocausation, and no one has figured out really how to make a theory with it.

    So your second example is really not to the point. The pseudo-random number generator calculating the parity of the digits of pi may well be electromagnetic, but that is not relevant. All that is relevant is that it is accurately calculating the digits of pi, so there is no room for any common cause to influence the settings. There just is no common-cause, Bertlmann's-socks explanation available. Contrary to Einstein's belief, there is spooky action-at-a-distance.

    You should read Bell's little paper "Free Variables and Local Causality" that covers this. Also, of course, "Bertlmann's Socks and the Nature of Reality". Both are in Speakable and Unspeakable in Quantum Mechanics. Just read slowly and carefully: Bell is always right on target, but he does not belabor points the way I do.

  125. ..there are still some very basic confusions. One is your claim that I believe in superluminal signaling.

    You may have missed the follow-up message where I clarified that you claim signaling without energy or momentum transfer is possible, whereas I claim that it is not. This is a fundamental disagreement, although it’s only tangentially related to the original discussion. For anyone (like me) who believes that signaling is not possible without energy transfer, and that energy cannot propagate faster than light, it follows that superluminal signaling is not possible, which is the original point. With this, in combination with the physicist’s (and some philosophers’) definition of locality (no superluminal action), it follows that quantum entanglement does not violate locality, although it does violate separability, i.e., the wave function of entangled systems is not factorable.

    …a manifestly Lorentz-invariant theory which permits superluminal signaling. Whether the theory is physically plausible or realistic is neither here nor there…

    As I keep saying, it goes without saying that formal Lorentz invariance is not the full content of special relativity, because the latter relies on several other commonly-granted assumptions necessary for an empirically viable theory, such as no superluminal signaling! We could have a separate discussion about whether Lorentz invariance combined with superluminal signaling (and action) is actually Lorentz invariant in a physically meaningful sense, but that would just lead to a debate over what “physically meaningful” means, and it’s irrelevant anyway, because we are in agreement that superluminal signaling is impossible, which is all I need.

    But you are wrong about how we could decide what is the cause and what is the effect. If I have control over one of the events—if it is, say, pushing a button—and the correlations are strong with some other event such as a light going on, then the button-pushing is the cause because that is effectively a free variable.

    Aside from the preface “But you are wrong”, the rest of your paragraph is merely repeating what I just finished explaining to you. Remember, you talked about an “intervention”, which you now call a free button-pushing, and I identified this as what you imagine to be the source of the directionality. But I also explained that this is not sound, because it begs the question of what counts as an “intervention” (much as the measurement problem asks what counts as a measurement). You see, things are happening at both ends of a spacelike interval, so why do you claim that what happens at one end of the interval is a “free intervention”, whereas what happens at the other end of the interval is just a response? You talk about flipping a coin or a random number generator or “my own whim”, but none of these represent the introduction of an asymmetry between the ends of the spacelike interval (random is random), except possibly for your whim, if you are invoking free will. In that case I will stand back while our blog host pummels you to smithereens for basing your physics on free will. I’ll just say that I see no evidence that free will plays an essential role in any of the phenomena we are discussing. Absent free-will interventions, I would say your approach entails “asymmetries that do not seem to be inherent in the phenomena”.

    Cont.

  126. Your response about a "crude carrier wave" is incomprehensible. There is no energy or momentum transfer in the case described. Zero.

    Not true. To understand what’s going on, just increase the frequency. Instead of one (light-like) pulse every five minutes until the bad guy arrives, send pulses at a frequency of 10000 Hz, and then turn off the signal when the bad guy arrives. Thus the signal wave is just a step function that terminates the 10000 Hz carrier wave. You could superimpose Mozart or anything else onto that carrier wave, but you will not achieve superluminal signaling, nor energy-free transmission of Mozart. Note that the agreement to regard the ceasing of pulses as the time of arrival must have been conveyed between source and receiver in advance, by a means that involved energy transfer. (If we agree in advance that when I text you the letter W, this signifies the entire contents of Wikipedia, have we achieved tremendous data compression when I text you “W”?)

    Each received pulse signifies to the receiver at reception time T that the bad guy hadn’t arrived by the time T – D/c, where D is the distance. This is all the receiver knows until T+dt, when he doesn’t receive a pulse, at which time he knows (by previous energetic communication) that the bad guy had arrived by the time T+dt–D/c. The uncertainty equals dt, the period between pulses. You have a very crude carrier wave with 5 minutes between pulses, so the receiver has a 5 minute uncertainty as to when the bad guy arrives, and of course this has the lightspeed delay (D/c). But the point is that the “information” at T+dt when the pulse doesn’t arrive actually came from a previous energetic message saying “when you don’t get a pulse, that means the bad guy arrived”. This required energy.
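    The arithmetic of that inference window, sketched in Python (all values illustrative):

        # Receiver's inference from a missing pulse.
        c = 3.0e8    # signal speed, m/s
        D = 3.0e11   # source-receiver distance, m
        dt = 300.0   # period between pulses, s (one call every five minutes)
        T = 1000.0   # time at which the last pulse was received, s

        # The last received pulse says the bad guy had NOT arrived by T - D/c;
        # the first missing pulse says he HAD arrived by T + dt - D/c.
        earliest, latest = T - D / c, T + dt - D / c
        print(earliest, latest)  # arrival bracketed in a window of width dt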

    You have missed the point about interactions in Tumulka's theory… it is not a knock on his theory that he has not rigorously defined how to incorporate an interaction term… the theory clearly has no measurement problem.

    Isn’t a measurement an interaction? How can you be confident that a theory has no measurement problem if it can’t rigorously handle interactions? Isn’t this the same basis you have for saying quantum field theory has a measurement problem? If quantum field theory was able to handle interactions in a rigorous way to your satisfaction, would you still say it had an unresolved measurement problem? The Tumulka paper itself acknowledges that there’s a difficulty extracting any actual predictions since it can’t handle interactions, so it can’t model observers or apparatuses.

  127. If you define no information as information, then you can prove anything you like with that postulate, but I was pointing out that it seems mathematically inconsistent. Better to separate data from conclusions in your postulates, I think. Otherwise the fact that I have no milk in my refrigerator proves I have milk.

    To clarify my other point, if one claims to have a signalling system which requires no energy transfer, it is insufficient, even if granted, to say that no signal requires no energy and that no signal is a signal. Otherwise I am continuously getting signals from my twin in the Andromeda galaxy at greater than the speed of light. (The signals consist of no signal.)

    Note also, in your example the lack of a phone call only meant something because the information [if A, I will call; if B I won't] had previously been sent - using energy. So again, no energy transfer (now or previously), no signal.

    This is of course off-topic and I can't and won't complain if it is moderated out of existence and will try to restrain my something-is-wrong-on-Internet reflex for the rest of this thread.

  128. Jim V

    I am just applying Shannon's definitions to the case. His definitions make no mention of energy transfer. The informational content has to do with what can be inferred about the state of the source from the state of the receiver, given the proper operation of the information channel. The pauses in a Morse code signal carry as much information as the sounds, even though a pause transmits no energy. Just review Shannon to see the point.
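    A sketch of the Shannon point (using nothing beyond the standard entropy formula): the information content of a source depends only on the symbol probabilities, so a silent symbol carries just as much as an energetic one.

        import math

        # A two-symbol source: "call" and "no call", each used half the time.
        probabilities = {"call": 0.5, "no call": 0.5}

        # Shannon entropy H = -sum(p * log2(p)); energy appears nowhere in it.
        H = -sum(p * math.log2(p) for p in probabilities.values())
        print(H)  # 1.0 bit per symbol: the silence is as informative as the call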


    Your twin on Andromeda is not sending a message to you now by doing nothing because the following counterfactual is not true: if the twin had been doing something now, your physical state would have been different.

  129. Dan,

    I'm afraid that this is just getting more tangled up. Let me start from the beginning.

    When I talk about the concept of "signaling", for example, I mean the properties in virtue of which something counts as a signal. In that sense, it is just obvious that signaling requires no energy transfer or action (in your sense) at the conceptual level. If I push a button and a light goes on, always in exact correlation with the pushing, then I conclude that I can signal using such an apparatus. I do not need to inquire about whether any energy or action or whatever has been transmitted from the button to the light. As long as the subjunctive conditional holds—if the button should be pushed the light would go on—then I have a signaling device.

    This is again just an application of Shannon's basic information-theoretic definitions. The definitions do not mention energy or action.

    Maybe you believe that in fact there cannot be any physical signaling apparatus without (at least possible) energy transfer. But even if that is physically true, it plays no role in the definition of what a signal is.

    Since you try to insist on building transmission of energy or action into the very notion of a signal, we just can't have a clear discussion. All I can ask you to do is reflect on the examples to see that even if we discovered that no energy was transferred, we would still insist that it is a signaling device that can be used to transmit information.

    Let's leave free will aside. It is not relevant to the physics here and calls up some very bad philosophy. But you are not right about there being some difficulty defining an "intervention" or a "free variable". Buttons can be treated as free variables in most experimental circumstances because they can be hooked up to physical randomizers that determine their state. Go back to our old friend, the parity of the digits of pi. I can hook the button up to a computer that calculates these parities and pushes the button in that pattern. If whenever the button is so hooked up the light flashes out the same pattern, then the button is the cause and the light the effect. Again, see my little play. Even backwards causation can't reverse that inference.
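    A sketch of such a hookup in Python (the digits of pi are hard-coded here rather than computed, but any correct computation yields the same pattern):

        # First 40 decimal digits of pi, a fixed mathematical constant that no
        # physical environment can alter.
        PI_DIGITS = "1415926535897932384626433832795028841971"

        # Parity pattern driving the button: 1 = odd digit (push), 0 = even.
        button_pattern = [int(d) % 2 for d in PI_DIGITS]
        print(button_pattern)

        # Any computer, anywhere, running this produces the identical pattern.
        # If the light always flashes in exactly this pattern when the button
        # is driven by it, the button is the cause and the light the effect.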

    con't

  130. Your attempt to show that the dog not barking or my not phoning transmits energy from my location to some other location is just invalid. Showing that some other sort of transmitting device sends energy does not show that the one I describe does.

    What do I mean by a measurement problem? Well, if a theory actually employs the word "measurement" in its basic axioms, then it is in conceptual trouble. The von Neumann collapse theory suffered this problem: the collapses were real, and triggered by making a measurement. But what counts as a measurement? That becomes a problem. The GRW collapse theory cuts through that problem by postulating a fundamentally stochastic collapse process. Since collapses are not triggered by measurements, we have no problem of needing to define a measurement.

    The other thing sometimes called the "measurement problem" is really the problem with Schrödinger's cat. If my basic physical ontology is particles and the particles don't split, then the cat ends up dead or alive depending on where the particles end up. And the same for the flashes in the flash theory. So in this sense, there is no measurement problem in Bohm or GRW.

    Many Worlds is a different kettle of fish. It does not require any definition of "measurement", but denies the basic premise of Schrödinger's argument, namely, that the only cat at the end of the experiment is the one you started with and it is either alive or dead. But Many Worlds inherits other problems because of this denial.

    In this sense, I don't need a strict exact definition of "intervention" here. The term does not appear in the language of the basic laws or ontology of the theory. But if the button gets hooked up to the computer in the way I have described then that is a case of the button being the free variable and the light the dependent variable. The correlation between the two shows that information is being sent from the button to the light.

    The problem with QFT is that it is unclear if it is a collapse theory after all. If it isn't, and the basic equation for the wave function is linear, then we end up with Schrödinger's cat. Unless you add extra "hidden" (i.e. manifest) variables like particles.

    So the questions about QFT are: do you think the wave function represents a real physical item or property of the system?
    If not, then what is?
    If it is, does it ever collapse?
    If it doesn't collapse, are you accepting Many Worlds?

    Once I get clearer how you think of QFT, we can talk better.

  131. Dear Tim,

    You say:

    "So your second example is really not to the point. The pseudo-random number generator calculating the parity of the digits of pi may well be electromagnetic, but that is not relevant. All that is relevant is that it is accurately calculating the digits of pi, so there is no room for any common cause to influence the settings. There just is no common cause, Bertlmann's socks explanation available."

    I disagree with you. No matter how a pseudo-random generator appears to behave, it is still just a large collection of charged particles. And it must obey the laws of electromagnetism. And these laws assure us that the evolution of this system must take into account the existence of all other charged particles, no matter how far away, including those in the other detector and those in the source. So, I think that the independence assumption fails.

    Let me try to present this argument from a different perspective. Assume you start with an empty universe. Place in it three charged particles (say 1 proton and 2 electrons) very far from each other, say 1000000 light-years away. Let's call these three subsystems Alice, Bob and Source. Do you think they are independent? I think they are not. The motion of each particle is a function of the electric and magnetic fields at its location, which in turn are a function of position/momenta of the other two particles. Do you agree with this?

    Now, imagine adding another three particles, one at each location. Are the new two-particle systems independent? Again, I think they cannot be. No matter how many particles you have, they still obey the laws of electromagnetism. Add even more particles until you create your random number generator using pi or whatever. Are they independent now? If so, when do you think this independence pops in?

    P.S.

    I have read Bell and EPR and pretty much all significant literature on this issue. I just don't agree with this statistical independence assumption. Sure, it looks intuitive, but when thinking about QM, intuition does not help you much. Sure, it seems strange that distant systems might be correlated in a non-trivial way, but this seems to be the implication of any infinite-range field theory.

    Andrei

  132. Maybe you believe that in fact there cannot be any physical signaling apparatus without (at least possible) energy transfer. But even if that is physically true, it plays no role in the definition of what a signal is.

    The fact that signaling requires energy transfer isn’t part of the definition of a signal (that would make it tautological), but one consequence of the laws of physics in our universe (to the extent that we have been able to discern them) is that signaling requires energy transfer, and hence no signal propagates faster than light. By Fourier analysis of any pattern of energy flux we can determine the phase, group, and signal velocities. The phase or group velocities can each, in some circumstances, exceed the speed of light, but the signal velocity never exceeds the speed of light, and is always associated with a flow of energy. For example, between spacelike-separated entangled particles there is no energy transfer and no signaling.

    All I can ask you to do is reflect on the examples to see that even if we discovered that no energy was transferred, we would still insist that it is a signaling device that can be used to transmit information.

    Your examples don’t show that at all. Examine the Fourier expansion of a step function (the signal) superimposed on a carrier wave consisting of a sequence of discrete pulses. When the sequence is terminated, there is uncertainty as to precisely when the sequence was stopped. The coarser the carrier wave, the less fidelity we have in the signal wave (step function), i.e., the greater the period of uncertainty as to when the carrier wave stopped. By making the carrier wave extremely coarse, e.g., with five minutes between pulses, we can delay the recognition for a long time after the last pulse, but the energy of that last pulse was the last signal, and everything after that is inference and extrapolation based on prior agreement, etc. If I transmit the first three seconds of a Mozart song and then stop transmission, you may be able to whistle the rest of the song, but that doesn’t mean I transmitted the rest of the song without energy.

    Let's leave free will aside, It is not relevant to the physics here and calls up some very bad philosophy. But you are not right about there being some difficulty defining an "intervention" or a "free variable". Buttons can be treated as free variables in most experimental circumstances because they can be hooked up to physical randomizers that determine their state.

    Buttons? As you know, a key ingredient in the derivation of Bell’s inequality is the assumption that “buttons” can be treated as “free variables”. As Bell himself said “In the analysis it is assumed that free will is genuine, and as a result of that one finds that the intervention of the experimenter at one point has to have consequences at a remote point, in a way that influences restricted by the velocity of light would not permit. If the experimenter is not free to make this intervention, if that also is determined in advance, the difficulty disappears”. The use of “random” interventions has also been thoroughly discussed in the literature, and doesn’t preclude superdeterminism… but even without superdeterminism, the fact remains that there is no natural asymmetry inherent in the phenomena (absent super-natural free will), so there is no directionality in the correlations, and hence it is not properly called signaling, it is non-separability, i.e., the wave functions of entangled systems are not factorable, even when spacelike separated.

    Regarding Bell's use of the word locality, bear in mind that he was a neo-Lorentzian, who suspected that Lorentz invariance was just an illusion, and that superluminal signaling was in fact responsible for the correlations.

  133. Your attempt to show that the dog not barking or my not phoning transmits energy from my location to some other location is just invalid.

    That isn’t what I showed. I showed that the signal corresponds to energy transfer, and the rest is just extrapolation and delayed inference from a very coarse carrier based on prior shared information. You are simply delaying the recognition time by using a coarse carrier, not transmitting a signal without energy.

    What do I mean by a measurement problem? Well, if a theory actually employs the word "measurement" in its basic axioms, then it is in conceptual trouble…

    Sure, but a theory that satisfactorily handles all interactions (without an ill-defined distinguished class of interactions called “measurements”) doesn’t have that problem. Conversely, a theory that doesn’t model interactions at all, and can’t represent observers or apparatuses, cannot claim to have resolved the measurement problem, since a measurement is an interaction.

    The other thing sometimes called the "measurement problem" is really the problem with Schrödinger's cat. If my basic physical ontology is particles and the particles don't split, then the cat ends up dead or alive depending on where the particles end up… So in this sense, there is no measurement problem in Bohm or GRW.

    I don’t think you are coming to terms with the measurement problem. The whole point is that we can’t just blithely say the cat quickly collapses to either alive or dead, because as long as it remains in a superposition (which we can maintain indefinitely, unless you think gravity or some such prevents maintaining isolated coherence for long) we can have interference effects, incompatible with the idea that the cat’s wavefunction had collapsed (in conventional terms). This is easier to see in two-slit experiments, and in delayed-choice experiments. But all these things involve interactions, and your relativistic GRW can’t model interactions, so I still don’t see how it can possibly resolve the measurement problem. In fact, it isn’t clear to me how any spontaneous collapse model can agree with things like delayed-choice experiments – unless the spontaneous collapse is so effectively disguised and delayed that it no longer functions like spontaneous collapse, and perfectly mimics QFT.

    Once I get clearer how you think of QFT, we can talk better.

    I'm not advocating a novel understanding of QFT. I'm just saying that when discussing quantum entanglement and the meanings of locality and separability and signaling, etc., we ought to work in the context of special relativity, which entails Lorentz invariance and no superluminal signaling, etc. QFT (and even QM) gives a serviceable account of all the phenomena, including delayed-choice experiments, although I don't claim that the measurement problem has been resolved to everyone's satisfaction. I don't know enough about your relativistic spontaneous collapse model to even know how it accounts for all the phenomena, but I suspect that if it does, it is essentially equivalent to QFT. Admittedly I'm saying this without having studied your theory... I'm basing my opinion mostly on the fact that it's extremely difficult to match all the verified predictions of QFT without being QFT... and also on the fact that the paper on your model says it has difficulty actually making any predictions at all, because it can't model interactions.

  134. Andrei,

    Please just think the example through. Of course a computer calculating the parity of the digits of pi is not influenced in the least in that calculation by the disposition of distant electro-magnetic objects! If it were so influenced, then the actual parities it outputs would depend on those other objects, and hence two different computers computing the same thing would give different answers! The output is determined by the structure of pi, and that is immune from any physical situation, so as long as the computers compute the digits of pi, the details of the rest of the physical situation beyond the computer are irrelevant.

    It is your intuitions about the significance of the fact that a computer operates electro-magnetically that are leading you astray here. The point is really quite unassailable. If you read "Waiting for Gerard", that is just the point made there. Pi is pi. And computers can calculate pi.

  135. Dan,

    We really can't make any progress if you don't try to stop mixing things up. No signaling is no signaling and no energy transfer is no energy transfer. They are just completely different conditions. Separate them in your mind and in your comments. Once again, Shannon information theory nowhere makes any mention of energy transfer. Anywhere. But it is the very theory that defines information transfer: rates, etc. We have no reason to suspect there is any superluminal energy transfer. We know for sure there is superluminal information transfer (the argument is trickier if you are a Many Worlds person, but trivial if you aren't). And it was an open question whether quantum mechanics allowed for superluminal signaling. Bell himself settled that, under certain presuppositions, with his "no-Bell-telephone" theorems. In my book, there are separate chapters on energy transfer, information transfer, signaling, causation, and Lorentz invariance. These are all different conditions, and the relations between them are subtle. You have to stop running them together.

    Do you think that there is a physical correlate to the wave function in QFT?
    Do you think, if there is, that it ever collapses or always evolves linearly?
    Do you think that there is anything besides the physical correlate to the wave function?
    If you don't think there is any correlate to the wave function, what is there? And how do you avoid PBR?

    If you can just answer these questions I will know how to proceed in the discussion of QFT.

    Bell is almost always consummately careful, but that one quote is a slip. Maybe because it was an interview and not something he wrote. Read "Free variables and Local Causality". The issue is what he calls "physical randomizers", of which the computer calculating the digits of pi is an example, and the computer is obviously effectively deterministic with no "free will" at all. The fact that "free will" and "free variable" share the word "free" has caused no end of trouble. I always use the pseudo-random number generator just because it makes the point in a clean way. Nothing to do with free will at all.

    Maybe this will help. Every existing exact form of quantum theory—Bohm, GRW, Many Worlds—takes the wave function seriously as representing a real physical item. And hence takes entanglement seriously, and non-separability. And they are all also non-local theories. Again, for Bohm and GRW it is obvious, and it is a trickier question for Many Worlds. So there is not a choice here: non-locality *or* non-separability. Rather each theory has non-locality *and* non-separability. You could, in principle, violate Bell's inequality without any non-separability, but no one is trying that. (Well, Travis Norsen has.)

    When you use the pi-calculator to set the measurement, the issue of symmetry is gone. Just think the case through.

  136. Dear Andrei

    One more thing. In your example with the three particles, the issue of statistical independence does not even come up. This is another common confusion. What is or is not statistically independent are *sequences*. I have two (or more) *sequences*, with the members paired up. Each sequence will display certain statistical characteristics: how often a certain value occurs in the sequence, whether there are correlations *within* the sequence (a 1 is always followed by a 0, for example), and there may or may not be correlations *between* the sequences. If there are no such correlations, then the sequences are statistically independent. The physical systems producing the sequences can interact as much as you like. If I shake two dice in my hand, they interact like crazy, but still the sequence of results of one die will be statistically independent of the sequence of results of the other if I throw them many times. What you seem to be thinking (and 't Hooft does too) is that the mere fact that two systems have *interacted* in the past makes it likely that their behavior will be statistically correlated. But that is far from the truth. Indeed, most sequences show no such correlations. We think it demands a physical explanation when they do. If the two dice, no matter how many times we threw them, never showed a 12 even though each die comes up 6 about 1/6 of the time, we would demand an explanation. And "well they interacted in your hand" would not count as an explanation.

  137. Tim,

    Your post at 9:37 AM, Dec. 16th, mentions "PBR" in response to Paul Hayes's previous comment at 4:19 AM, same day. At first I thought it was a mistype of "EPR". But opening up Paul Hayes's links I discovered it was a new theorem, similar to EPR, that came out in 2011.

    That confusion aside, I wanted to make sure I understood a most important point that you've been trying to get across in multiple posts. It seems like you're quite explicitly saying that for space-like separated quantum entities, say two entangled photons a light year apart, the measurement of, say, the polarization of one, resulting in the instant determination of the other photon's polarization, does not entail any energy transfer, and thus does not violate Special Relativity (I realize it would take a whole year, at light speed, for a scientist to confirm the polarization of the distant photon).

    And, not wanting to distract you too much from the ongoing conversation, I wanted to ask your opinion of this piece written by Patrice Ayme titled: "Quantum Entanglement: Nature's Faster Than Light Architecture". I found it a rather enjoyable read, as it's written in a semi-humorous vein, and thus more digestible for someone at my level of understanding of QM. Does this article conform to your knowledge of QM?

    https://patriceayme.wordpress.com/2014/11/22/quantum-entanglement-natures-faster-than-light-architecture/

  138. Tim,

    thanks for your efforts and strong desire to clarify our (or my) muddled thinking, but I have a problem with your examples about signaling information with no interactions. Previous interactions allow one to set up hypotheses and assign probabilities to different events, but in non-deterministic processes, only posterior interactions can confirm or reject the hypothesis.

    So I would identify your scenario with |psi> = |ring=1>|bad guy=0> + |ring=0>|bad guy=1>.

    You need to have an interaction and measure the ring (or its absence, but you certainly need the interaction; look at your cell phone, a working telephone company linking your phones, etc.) to have new information regarding the bad guy.

  139. Tim,

    I don't know if my previous post will show up, but I posted a link to a paper by Patrice Ayme discussing quantum entanglement, asking your opinion of it. I had read to page 3 assuming it was standard QM. But I didn't realize that the author digresses into his own non-standard theory in the last page and a half. My blunder, I apologize for that. However, is the conclusion I drew in the 2nd paragraph of my previous post, about what you are trying to convey, correct?

  140. Madmadera,

    I am sincerely happy to try to clear this up, and will keep trying. So let me begin again by reminding you that we have a bunch of nearby concepts here that have to be kept apart, and carefully distinguished. Concerning what is *physically forbidden*, we have these possibilities:

    1) Superluminal energy transfer.
    2) Superluminal signaling.
    3) Superluminal causation.
    4) Superluminal information transfer.
    5) Violation of Lorentz invariance by the theory.
    6) Empirical violation of Lorentz invariance (observational Lorentz invariance).

    There are many subtle relations among these, but they are all different conditions. About the only actual logical relation is that if superluminal signaling is possible, then of course superluminal information transfer is, as is superluminal causation. For every other pair (I think), you can cook up a theory that fulfills one condition and violates the other. You can practice by trying it: can you describe a scenario that would satisfy one of these conditions but not another?

    Note: I left Superluminal interaction off the list, because I can't think of a precise meaning of "interaction" that does not reduce to one or another of these.

    Now you asked about my examples of "signaling information with no interactions". So that's got a bunch of different key concepts in it! I have not been trying to give examples like that, so that's the first point. I gave trivial examples of actual signals from one place to another (in this case, *not* spacelike separated!) which contained no energy transfer from the one place to the other. That was to help distinguish signaling from energy transfer. In the case of the phone *not* ringing and the dog *not* barking, information about one place is clearly sent to the other place with zero energy transfer. It is trivial that no energy goes from one location to the other, but still information does. Because the dog didn't bark, Holmes gets the information that the thief was not a stranger (the dog would have barked). Because the phone doesn't ring, I get the information that at the other location the bad guy has arrived. The second case is even intentional signaling: that's how we set up the signal to work. So this is information transfer and signaling between the locations without any energy transfer between the locations. As I have said already, of course this is only information transfer and signaling in this case given the physics of the situations because energy *could have been* transferred (if the dog had barked or the phone had been dialed). But possible energy transfer is not actual energy transfer, and in these cases there is zero actual energy transfer. Is that clear now?

    I did not attempt to claim that there are no interactions anywhere in these cases! Since I have not tried to define "interaction", whether you say there is any interaction *between the two locations in space-time* would depend on what you have in mind. I would say there is none, myself. And remember, this is all classical, not QM. I am making a conceptual point here.

    So I would say that both a signal and information are sent from one location in space-time to the other with no interaction at all between the locations. Agreed? There is certainly no energy transfer.

    Con't

  141. Con't

    All of the classical physics here is deterministic. You bring up indeterminism, presumably in a collapse scenario. That is a more subtle problem. But Einstein's whole point with the EPR was that *if there is indeterminism on either side, then there is superluminal information transfer between the sides*. That is again trivial! Since the theory predicts the perfect correlation between the sides, what happens on one side carries information about what happens on the other. That could be due to pre-arrangement in a *deterministic* physics: both outcomes are already determined at the source, so nothing superluminal happens at all. That is like the case of Bertlmann's socks, or the dollar bill torn in half, etc. Nothing spooky. But if the physics is *indeterministic*, and the outcome on each side was *not* determined at the source, then the persistence of the perfect correlation implies superluminal information transfer: the way one side behaves is sensitive to how the indeterministic process came out on the other side. As Einstein put it, "God plays dice and uses telepathic methods". What really bugged him was not the playing dice (indeterminism) but the telepathic methods (superluminal information transfer). The EPR correlations do not demand that anything superluminal happen *so long as the theory is deterministic*. If it is *indeterministic*, then EPR is already spooky. What Bell then proved is that while you can get EPR correlation with nothing spooky, as is obvious, you can't recover all the predictions of QM that way.

    I hope that helps.

  142. David,

    Yes, exactly. In your example, if you take the collapse of the wave function seriously, as reflecting a physical change of state due to an interaction ("measurement") on one side, then that collapse sends information to the other side without sending any energy at all. The fact that no energy is transferred follows (roughly) from the fact that there is no "interaction term" in the Hamiltonian. (I say "roughly" because to be really precise here we would have to get into the exact meaning of "energy", which is a whole different problem!). Anyway, the absence of an interaction term in the Hamiltonian would normally be considered to guarantee no energy transfer. But the collapse obviously sends information in Shannon's sense.

    If we understand the theory as indeterministic, then before any polarization experiment (let's not call it a "measurement") is done on either side there simply is no physical fact about how either experiment will come out. Before any experiment is done, neither of the entangled photons *has* a definite polarization in any direction because the entangled state is not an eigenstate of the polarization operators. So obviously no one anywhere has information about what the outcome of these experiments will be. The information simply does not exist. Now suppose the set-up is that the polarizers on the two sides are perfectly aligned in the same direction. So quantum theory does not predict how either experiment will come out (it's 50/50 on both sides). But quantum theory does predict that the results will not be statistically independent of each other: in this case the results on the two sides will certainly be the same. So the outcome of the experiment on one side carries information about the outcome on the other. If there is an indeterministic collapse, it alters the physical situation on both sides, even though they are space-like separated. And if you want to avoid the superluminal information transfer, then you have to abandon the indeterminism. That was Einstein's point: give up the idea that God plays dice in order to expunge the spooky action-at-a-distance. But Bell later proved that this strategy won't work: the spooky action-at-a-distance is here to stay.

    But in no case is there ever any question of energy transfer between the sides. The polarization does not affect the energy at all.

    All I can say about the Ayme post is please do not take it seriously. It would take days to correct things in that post.

  143. No signaling is no signaling and no energy transfer is no energy transfer. They are just completely different conditions.

    Again, we have a fundamental disagreement about whether signaling requires energy. I say it does, and you say it does not. You offered what you thought was an example of signaling without energy, consisting of a step function on a coarse carrier wave, and I explained why it does not represent signaling without energy. Having disposed of your purported example, if you still think signaling without energy transfer is possible, let’s hear an example.

    We know for sure there is superluminal information transfer…

    Quantum entanglement does not entail any superluminal signaling. Are you really claiming that it does? You switch here from signaling to information transfer, but we’ve been over this before: the word “transfer” smuggles in a sense of directionality (signaling) that is not inherent in the phenomena, i.e., there is no physically meaningful directionality for spacelike correlations with quantum entanglement. So don't say "information transfer", say "non-separable".

    If you can just answer these questions I will know how to proceed in the discussion of QFT.

    We stipulated at the start that quantum field theory does not resolve the measurement problem. All your questions seem aimed at getting me to explain how quantum field theory resolves the measurement problem, but as you know, I do not make that claim. I claim that signaling requires energy, and I claim that quantum entanglement does not entail superluminal signaling. As for definitions, I also claim that the word “locality” in physics is commonly defined as meaning no superluminal action (and hence no superluminal signaling).

    Bell is almost always consummately careful, but that one quote is a slip.

    It isn’t just that one quote. The article is filled with very revealing quotes about his neo-Lorentzian views, and of course this fits with his famous paper on how to teach special relativity, in which he explicitly champions the neo-Lorentzian interpretation.

    When you use the pi-calculator to set the measurement, the issue of symmetry is gone.

    Transmitting the parities of the digits of pi with light or electrical signals is not an example of signaling without energy along a spacelike interval; it is signaling with energy along a lightlike (or timelike) interval. It goes without saying that this transmission is asymmetrical, and there is a clear cause and effect (in conventional terms) propagating from transmission to reception. This doesn’t do anything to support any of your radical claims, i.e., that signaling without energy is possible, and that quantum entanglement entails signaling.

    As an aside, transmitting the digits of pi is not a great example, because in principle the “receiver” could compute the digits of pi as well (just as they could know the Mozart song, or when the bad guy shows up in relation to the last pulse), so the entire infinite string of bits can be conveyed simply by saying “the parities of the digits of pi”. But more importantly, this kind of example (timelike or lightlike signaling) does nothing to support your claims.

  144. Dear Tim,

    Regarding your first post, let me know if I understand you correctly. You claim that because a subsystem (the detector connected to the computer calculating Pi) can function predictably and autonomously to a good approximation, it must be independent. I disagree with this. Counterexample:

    The Earth-Moon system, the Jupiter-Io-Ganymede-Europa-Callisto system, and the Pluto-Charon system also have a great deal of autonomy. You can determine their future, to some approximation, ignoring the fact that they are subsystems of the Solar system. Are these subsystems independent of the Solar system? I think not. The same is true about the Solar system itself. We can predict Earth's orbit millions of years into the future (or postdict it), but this is not a proof that the Solar system is independent from the galactic one, and so on. You can also think about a group of wheels in a clockwork mechanism. They can be treated on their own but, obviously, their motion is correlated with the rest of the mechanism.

    About your second post, I also disagree. Let's take your example with the dice. You claim that even if the dice interact the results are independent. Why would you say that? As far as I can tell there is no way to know if there is a correlation between two data sets other than actually computing the correlation.

    Bell's independence condition boils down to the hidden variable not being dependent on the detector's settings. Now let's say that you have three dice. The first die determines the hidden variable, the second the setting of the first detector, and the third the setting of the second detector. Then you take the initial state of the combined system (positions/momenta of the dice) and simulate their interaction on a powerful computer. Obviously the result you get is a function of how the dice collide. You just cannot determine the hidden variable without taking into account the detectors. So it is a matter of certainty that a correlation would be there, albeit a very complex one. So, in this case the independence assumption fails.

    Regards,

    Andrei

  145. Bell is almost always consummately careful, but that one quote is a slip.

    It wasn’t a slip. Even in the Bertlmann socks paper, when he lists four possible explanations for the quantum correlations, it’s clear that his sympathy is with the third explanation: “It may be that we have to admit that causal influences do go faster than light. The role of Lorentz invariance in the completed theory would then be very problematic. An ‘aether’ would be the cheapest solution. But the unobservability of this aether would be disturbing. So would the impossibility of sending ‘messages’ faster than light, which follows from ordinary relativistic quantum mechanics in so far as it is unambiguous and adequate for procedures we can actually perform…” You see, he echoed these same ideas as his personal view in the 1986 interview. So this definitely wasn’t a slip.

    The other options he listed were (1) quantum mechanics is wrong, i.e., experiments will eventually show a conflict with the QM predictions (which of course has not happened), (2) superdeterminism, and (4) no independent reality to entangled systems, which amounts to non-separability, just as Einstein explained it. Bell’s option (3) is essentially non-locality in the physicist’s sense, and he clearly recognizes the conflict with special relativity and the manifest absence of superluminal signaling.

    In summary, Bell's four options were
    (1) QM fails (didn't happen)
    (2) superdeterminism (can never be ruled out)
    (3) non-locality (superluminal signaling - conflicts with observation)
    (4) non-separability (wave functions of entangled systems can't be factored)

    Bell identified (4) as Bohr's interpretation, and I think it is still the majority view, although (2) is always a possibility.

  146. Andrei

    I will make this short so we can bear down on this one issue and resolve it. A computer correctly calculating the successive parity of the digits of pi does not "function predictably and autonomously to a good approximation" with respect to its output. It functions perfectly autonomously and predictably with respect to its output. Again, this point is made in "Waiting for Gerard". That is why Estragon and Vladimir can have the digits pre-printed to check the output. The pseudo-random sequence produced is completely and totally independent of all physical events outside the computer so long as the computer is functioning according to design specifications. It does not make a speck of difference what the computer is made of, whether it is electromagnetic, whether it is in a gravitational field, etc., etc., etc. The existence of any such physical interactions is completely irrelevant to the sequence of apparatus settings that the computer produces. And it is that sequence that we are testing for statistical independence to something else—in the case of "Waiting for Gerard" the sequence of light flashings. Those two sequences are clearly not statistically independent! And because of the computer, that failure of statistical independence *cannot* be explained either by a common cause or by a causal arrow running from the light to the switch. It must run from the switch to the light.

    And I can only say one more time that the relevant statistical independence holds or fails to hold among *sequences*. No matter how carefully and exhaustively you analyze the interactions between your three dice on a single throw, that has no significance at all. They obviously strongly interact. But one throw simply does not make a sequence to be analyzed. Maybe this will make the point. Toss two coins once. If you ask "Are the outcomes of the tosses correlated?", then you have not even asked a sensible question. Either they both fell heads, they both fell tails, or one fell one and the other the other. None of these results is any more "correlated" than another. The very concept of statistical independence does not apply to one toss. Whether the coins interacted or not is neither here nor there.

  147. Dan.

    OK, let's try and resolve these one by one. I'll start just with 2 and then we can go from there.

    I did offer you several examples of signaling with zero energy transfer: the dog not barking and the phone not ringing. You responded with some strange analysis of the cases in terms of a "carrier wave" and a "step function". There is no carrier wave and a fortiori no step function on it. There is no energy transfer from one location to the other, or if there is it has nothing to do with the signal. But just to make this crystal clear, let me modify the phone example. I want to know (approximately) when the bad guy arrives at your apartment, but the signal has to be sent without alerting him at all. You have a laser pointed out the window aimed at my apartment across the street. Your windows are double-glazed, with a perfect vacuum between the sheets of glass. Every 5 minutes exactly you turn on the laser and send a pulse across if the bad guy is not there. When not in use, the laser is turned off and unplugged. When the bad guy comes, you do not touch the laser in any way. At the first 5-minute mark after the bad guy comes, no pulse arrives, and that very absence is the signal: I know that he is there; information has been transmitted. The signal has been sent.

    Now: if you want to try to maintain that any energy was sent from your apartment to my apartment in sending that signal, energy that would have to pass through the vacuum between the sheets of glass, do your best. But be serious about it. No physicist in the world will agree. And this is perfectly good Shannon information and Shannon signaling.

    Second point. You write this:

    "We know for sure there is superluminal information transfer…

    Quantum entanglement does not entail any superluminal signaling. Are you really claiming that it does?..."

    I literally do not know how to be more clear about this. The concept of signaling and the concept of information transmission are different. I have been repeating this over and over, with various examples. Look at the list of concepts above I sent to Madmadera. Yet I write about superluminal information transfer and you read "superluminal signaling". For the last time, because I have no recourse after this, *information transfer and signaling are different concepts*. It is impossible to violate the Bell inequalities at spacelike separation without superluminal information transfer. It is certainly possible to do it with a theory that does not allow superluminal signaling. If you really cannot keep these concepts separate in your mind, there is no way to clear anything up. Your response shows that you have not made this first conceptual step. Do you acknowledge the error in your reaction?

  148. Dear Tim,

    I read your discussion with 't Hooft but I would rather try to explain superdeterminism in my own way.

    You want a computer to work perfectly; I am OK with that, as it is not important to my point. I still think you are making a false assumption. The fact that a subsystem works reliably does not make it independent of other subsystems or of the large system. I'll try another example.

    Let's assume the entire universe is built of Planck-sized cogwheels. Everything is built out of them, and there are no disconnected parts. That should be a good image of a Laplacian universe. Your computer calculating Pi is just a stable group of cogwheels. A randomizer using radioactive atoms is another group of cogwheels. As long as you stay inside this universe everything is such a group of cogwheels and, of course, you and I are such groups of cogwheels as well.

    Imagine you are a god looking at this mechanism and you focus on a Bell test. You see the cogwheels in the source rotating and arriving at a configuration that allows the emission of a pair of entangled particles with spin components (+1/2, +1/2, +1/2) and (-1/2, -1/2, -1/2) respectively. Can the groups of cogwheels we call "detectors" be in any state? No, they cannot because they are connected to the cogwheels making up the source. You wait a little bit and the source produces another pair, say (+1/2, +1/2, -1/2) and (-1/2, -1/2, +1/2). This corresponds to a different state of the detectors. Is the fact that the detectors are connected to some other groups of cogwheels we call "computers calculating Pi" relevant? I don't see any reason. The source will just be unable to emit particles with other spins given its state. As the universe evolves the source will produce all types of entangled pairs, but each of them corresponds to a specific setting of the detectors. So, the statistical independence assumption fails for such a mechanism.

    If you change the cogwheels to Planck-sized cubes, or cells, you get to 't Hooft's theory. You also have an interconnected system, each cell changing its state as a function of the states of the adjacent cells. Just like in my above example the source cannot emit a specific entangled pair unless the detectors are in a certain state. So, again, the independence assumption fails.

    Coming back to my example, you can see that the independence assumption (the source should create particles of any spin for a specific detectors' configuration) requires one to break the mechanism, in other words to violate the physical laws in the universe.

    Let me try to say a few words about your Vladimir/Estragon common cause joke. I don't think an event-based view is very helpful when dealing with field theories, cellular automaton included, or my cogwheel example. The correlations between source and detectors cannot be traced back to some unique, localized event in the past, but to the specific way the beables of the theory interact with each other. Try for example to explain the correlations between the motion of the planets in our Solar system by a common cause. What would be that cause? I think the correct way is to just look at the consequences of the way the physical laws act upon the entire system. The common cause, if you want, is the state of the entire system in the past.

    Andrei

  149. Dear Andrei,

    Please try to take in these comments before responding again.

    In 't Hooft's theory and your cogwheel theory, there just is no such thing as an "entangled state" or "entangled particles". Entanglement is a property of some wave functions or some quantum states. 't Hooft's theory and your cogwheels have no quantum state, and hence no entanglement. (Schrödinger writes in 1935, having introduced the term "entanglement" that year in his "cat" paper: "When two systems, of which we know the states by their respective representatives, enter into temporary physical interaction due to known forces between them, and when after a time of mutual influence the systems separate again, then they can no longer be described in the same way as before, viz. by endowing each of them with a representative of its own. I would not call that one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought. By the interaction the two representatives (or ψ-functions) have become entangled.")

    In a local and separable physics, like the cellular automaton or the cogwheels, things interact but are never entangled.

    Second, the cogwheel image is very bad because it would seem to require instantaneous action reaching to arbitrary distance. You can't move one of a set of interconnected cogwheels without moving all of them at the same time. At least the cellular automaton has a finite rate of causal influence: one space-step per time-step. So the dynamics of the automaton has a limited velocity of influence; the cogwheels don't. Thinking in terms of cogwheels is thinking in terms that ignore the velocity of light as a limit.

    But most of all, you again confuse interaction with statistical dependence. These are just completely different concepts. To use an example I did with 't Hooft: suppose I am looking out my window watching traffic go by and counting the leaves as they fall off the trees in the park. The leaves and the traffic interact: certainly the passing cars can create breezes that knock off leaves. And suppose I count, in each minute, how many cars go by and how many leaves fall. I keep a log for a full day, with all 1440 minutes, and record whether the respective numbers were even or odd. That is all I record. So I have two sequences of 1440 entries, each either an E or an O. And now I ask whether these sequences are statistically independent or statistically correlated. We have granted that the leaves and the cars interact, and certainly there are many common causes in their past. But still, what we expect is that these sequences will be completely uncorrelated. Zero statistical correlation, or at least small enough to happen by chance. If we were to find a strong correlation—much less a perfect correlation—between these sequences we would be more than astonished. A perfect correlation would mean either that whenever an even number of cars passed an even number of leaves fell and vice versa—every one of the 1440 minutes—or that every time an even number of cars passed an odd number of leaves dropped and vice versa! It would mean that by counting leaves I could perfectly predict the parity of the number of cars! We would be gobsmacked. We would not have a clue how to explain that. And just noting that the cars and the leaves interacted—which they certainly did—would be of almost no help. So the idea that "interaction makes the failure of statistical independence unsurprising" is 100% wrong.

  150. Con't

    Now we push it even further. Instead of the parity of the number of cars passing, we start calculating the digits of pi, and write down their parities as a sequence in our list. And we find that the parity of the number of leaves dropping is not statistically independent of *that* sequence! By counting the number of leaves, I can guess with better than 50% accuracy what the parity of the next digit in pi will be! Or, the other way around, by figuring out the parity of the next digit of pi, I can predict with better than 50% accuracy whether the number of leaves to fall in the next minute is even or odd! We would be even more gobsmacked! Pointing out that the leaves and the computer are both electromagnetic, or that they interact gravitationally, would do exactly nothing to explain this. In fact, we don't expect that the sequence of parities of the number of leaves falling is statistically correlated with *any* sequence we could derive in some other way. Even though the leaves are part of the great cellular automaton or great cogwheel universe. The failure of statistical independence would be considered miraculous and inexplicable. It is this sort of failure of statistical independence that 't Hooft has to posit. With no explanation.


    One last little comment:
    The common cause of the sense of the orbits of the planets was the angular momentum of the accretion disk. If you have in mind some other sense of statistical dependence among the planets, please explain what sequence you are testing for statistical dependence. Don't just say the systems interact. We know that. Say what sequences fail to be statistically independent. In particular, is each of the sequences, all by itself, statistically random?

  151. Dear Tim,

    “In 't Hooft's theory and your cogwheel theory, there just is no such thing as an "entangled state" or "entangled particles". Entanglement is a property of some wave functions or some quantum states. 't Hooft's theory and your cogwheels have no quantum state, and hence no entanglement.”

    I see no justification for your requirement that any hidden variable theory (HVT) should use the same mathematical formalism as QM. The only necessary condition is for that HVT to reproduce QM’s predictions regarding the experimental results. For example, if a “cogwheel theory” is able to predict that for the orientations a and b of the polarizers the source generates the same spins with a probability of (1/2)cos²(a,b), it should be enough. The same holds true for ‘t Hooft’s theory or for any other potential candidate.

    If you think you can prove that no such theory could ever produce this result, I am looking forward to see that proof.

    “Second, the cogwheel image is very bad because it would seem to require instantaneous action reaching to arbitrary distance.”

    My fault here, I forgot to mention that the cogwheels are rotating at a constant speed (say one cog/time unit). The mechanism does not accelerate or slow down. The states are changing as a result of the cogwheels changing their relative positions, not because they accelerate. We may imagine that such acceleration has taken place at the Big-Bang, when the mechanism started, but this should not be of any concern for now. So there is nothing non-local about this model.

    “But most of all, you again confuse interaction with statistical dependence. These are just completely different concepts.”

    I am not saying that interaction is the same thing as statistical dependence, but interactions could in certain situations eliminate the possibility of independence.

    Consider the case of a system that consists of only two connected (interacting) cogwheels, A and B. Each has 6 cogs corresponding to 6 possible states. The possible states of the cogwheel A are A1, A2, A3, A4, A5, A6, and for B we have B1, B2, B3, B4, B5, B6. We say that A is in the state A1 if the cog we named 1 points up. Say that the cogs are numbered in such a way that only states with the same number (cogs) can touch (because the wheels rotate in opposite directions we will number the cogs in opposite directions as well). So, the mechanism can pass through 6 possible states: A1-B1, A2-B2, A3-B3, A4-B4, A5-B5, A6-B6 and then it rotates back into the initial state A1-B1 and so on. Do you agree with me that the states of A and B are not independent? If they were, you could expect any combination to occur like A1-B5 or A3-B4, etc. Two non-interacting (disconnected) wheels rotating at variable speeds could display such a behavior. Let me know if we are in agreement about this example, and I would try to answer your more elaborate examples with cars and leaves later.

    Now, about the common-cause thing. Take two massive objects of any kind (forget about planets and how they form) that are rotating around their common center of mass. The correlation consists in the direction they accelerate (toward their center of mass). Would you say that there was a time in their past when they were just moving randomly, then a certain event (the common cause) occurred and only after that they started orbiting their center of mass?

    Andrei

  152. Andrei

    Your first remark completely misunderstands the point I was making. "Entanglement" is a very particular sort of mathematical/physical property, first named by Schrödinger, as I mentioned above. It is the condition sometimes called "non-separability". If we have a physical state and there is space-time, we can ask about the physical condition in different regions (or of different regions) of space-time. In a separable theory, these physical states of different regions can be specified independently of each other. For example, in 't Hooft's cellular automaton theory, the state of each cell can be specified independently of the state of any other cell. Each cell has N possible physical states, and I inform you what that state is by specifying which of the N obtains. I specify the global state of the automaton by specifying all of the local, separable states, i.e. by giving the state of each cell. Similarly in your "cogwheel" theory. Each cog has its own state, specified by which way it is rotated, and the global state is nothing over and above these local states.

    Entanglement is a different thing altogether. When the quantum state of two distant particles is entangled, you simply cannot specify the physical state of the pair by specifying the state of the particles separately. Insofar as each particle has its own state, it is a reduced density matrix, and from the pair of reduced density matrices one cannot recover the joint, global pure state. The reason I brought this up is that you write " You see the cogwheels in the source rotating and arriving to a configuration that allows the emission of a pair of entangled particles with spin components (+1/2, +1/2, +1/2) and (-1/2, -1/2, -1/2) respectively". But what you describe are not entangled particles at all. You have somehow described the state of the pair by giving separately the state of the particles (although I really don't understand what state you have in mind). But you certainly have not described an entangled pair. I am not requiring that your theory use entanglement, I am pointing out that it can't, so you should not talk about entangled pairs in this theory. Because entanglement plays such a central role in QM, you should remain aware that you have cut yourself off from that resource.

    In your cogwheel theory, the states of the cogwheels are obviously not treated as free variables in Bell's sense, exactly because of the mechanical constraints. But the choice of what to "measure" on the two sides of an EPR experiment or in the three labs of a GHZ experiment is treated as a free variable, and all combinations of experimental choices actually occur. The sequences of the choices can be effectively random and statistically independent of each other. The claim is that any sequence that is not statistically independent of them is an effect of the free choice. So if the initial state of the emitter particles is not statistically independent of the later settings, the setting must be a cause of the earlier state of the particles, and the theory is retrocausal. Since 't Hooft explicitly denies retrocausation, the state of the particles should be statistically independent of the later choice of settings. But it can't be, and he has no explanation of why it can't be. That's the problem.

    I have no idea what you are getting at with the orbiting body example, since it can't yield violations of Bell's inequality. But yes, they can start out not orbiting and end up orbiting due to some event. The event has to dissipate some kinetic energy in order that they come to orbit each other.

  153. Dear Tim,

    I have finally noticed my mistake. I didn't realize this was the source of confusion, sorry about that. I did indeed say that the source produces a pair of entangled particles. I should have said it emits particles that are described by QM as "entangled". They are not in fact "entangled"; they are classical particles. So, (+1/2, +1/2, +1/2) represents an electron with the spin components on X, Y and Z equal to +1/2, while (-1/2, -1/2, -1/2) represents an electron with all spin components -1/2.

    "In your cogwheel theory, the states of the cogwheels are obviously not treated as free variables in Bell's sense, exactly because of the mechanical constraints."

    Exactly. This is why such a theory could potentially reproduce QM while being local.

    "But the choice of what to "measure" on the two side of an EPR experiment or in the three labs of a GHZ experiment is treated as a free variable, and all combinations of experimental choices actually occurs. The sequences of the choices can be effectively random and statistically independent of each other. The claim is that any sequence that is not statistically independent of them is an effect of the free choice. So if the initial state of the emitter particles is not statistically independent of the later settings, the setting must be a cause of the earlier state of the particles, and the theory is retrocausal. Since 't Hooft explicitly denies retrocausation, the state of the particles should be statistically independent of the later choice of settings. But it can't be. and he has no explanation of why it can't be. That's the problem."


    If I understand you correctly you seem to believe that 't Hooft's CA allows "free parameters". Well, it does not. It is exactly as constrained as the cogwheel toy model with the exception that the constraints are not mechanical in nature. The rigidity of the cogs gets replaced by the CA rule. A cell will change its state in a very specific way depending on the state of the surrounding cells. This has the effect that the states of the source and the states of the detectors are not independent.

    There is another issue I'd like to make clear. We are discussing deterministic theories, right? So, are we in agreement that there is no such thing as "choice"? The so-called "experimental choices" represent just the state of the CA in the region the detectors/experimenters are located. That state is uniquely determined by the past state of the CA. So there cannot be any retro-causality.

    -will continue

  154. -continuation

    I'll try to come up with a simple CA model. Our "universe" has 10 cells in 1 dimension. Each cell can have two states, 0 and 1. I will make the cell at position 4 detector 1; at position 5 we have the source, and at position 6 the second detector. We start with the following initial state:

    Time 0: 000[101]0000

    So, each detector is in state 1 and the source in state 0 (I put the important region in brackets). Let the rule be that the new state of each cell is determined by averaging the adjacent cells and rounding (halves round up). So the CA evolves like this:

    Time 1: 001[010]1000
    Time 2: 010[101]0100
    Time 3: 101[010]1010

    We can see that the detectors never share the same state with the source and they always have the same state. So, the states of the detectors and source are not free variables. For a realistic CA (Planck-sized cells) the source and detectors will have a huge number of cells and a huge number of states and those states may appear to you, due to their complexity, to be unrelated (free variables).

    "I have no idea what you are getting at with the orbiting body example, since it can't yield violations of Bell's inequality."

    This is not related to Bell but to the concept of past causes. OK, I didn't think about a collision; the orbiting bodies are the only ones in that universe. I would say that in such a case the cause of the current state of the system is the past state of the system, not some event. In the CA model for example there are no events, just a lot of cells changing states according to some rule. An experimenter pressing a button is only a significant event from a human, emergent, non-fundamental perspective. For the CA it just means that a certain pattern shifted to another location as a result of the CA rule. That shift in pattern was caused by the previous state of the CA in that region. This shift in pattern may be correlated to another change in some other region, like a light bulb going on. But I don't think you can pinpoint one to be the cause and the other the effect. You may very well say that the state of the light bulb prior to the button being pressed was the cause and the movement of the button the effect.

  155. Andrei,

    Deterministic theories still can treat variables as free parameters. Newtonian mechanics, for example, or Bohmian mechanics or a cellular automaton. Let the cellular automaton state represent three physical objects—the source and two detectors—in a vacuum. The vacuum is represented by long stretches of cells in the 0 state. And suppose the source emits a pair of particles in opposite directions. These particles are local disturbances in the CA vacuum state that propagate in opposite directions. All of this is determined by the CA dynamical rule. Eventually, each particle reaches one of the detectors and will interact with it.

    Let's also suppose that each detector can be in one of two states, corresponding to making two different measurements. Now clearly, it is perfectly legitimate to treat the detector settings as free variables. That is, each detector can be set in either of two ways without requiring any change at all outside the detector. And clearly one can ask in a perfectly sensible way what the final outcome would be for each of the four possible global apparatus settings for a given pair of emitted particles. Since the dynamics of the CA is deterministic, there will be definite answers to such questions. And, finally, it is absolutely clear that such a physics cannot possibly violate Bell's inequality for the detectors. The theory is local in its ontology and local in its dynamics. But we can analyze what would happen under various detector settings nonetheless.

    In order for such a theory to display violations of Bell's inequality, there would have to be a sort of pre-arranged harmony between the states of the emitted particles and the states of the detectors. In the original Bell proof, that harmony is a bit hard to pin down because one has to produce enough data to test a probabilistic prediction. In the GHZ scheme it is very sharp: for any given emitted triple of particles, a local theory has to somehow exclude the possibility of one of the four global settings of the apparatuses. And there is just no plausible way to do that dynamically. It has to happen just by fiat, somehow. But there is no physical explanation and no prospect of one. That's why your cogwheel analogy is misleading: it does not have a local dynamics.

  156. Tim,

    You have chosen a CA that corresponds to a Newtonian theory (billiard balls only interacting by direct collision). You say that the particles are disturbances in the vacuum, represented by 0's. I agree with you that such a CA allows for free parameters.

    A realistic CA, however, would use fields; the vacuum state at each cell would not be 0, but would contain the local magnitude/direction of the fields.

    "Let's also suppose that each detector can be in one of two states, corresponding to making two different measurements. Now clearly, it is perfectly legitimate to treat the detector settings are free variables. That is, each detector can be set in either of two ways without requiring any change at all outside the detector."

    Not true in the field scenario. A change in detector orientation would require a corresponding change in the entire CA in order for the local values of the fields to correspond to the new distribution of charges. If you only change the charge distribution you will not have a physically possible initial state. You may point out that the fields do not change instantaneously everywhere with the change of the charge distribution. True, but you still need to change the entire CA because the distant fields need to correspond with the past charge distribution. But a changed distribution in the present implies a change in that past distribution.

    So, I think that the "pre-arranged harmony" is ensured by the constraints imposed by the field and particle states.

    Andrei

  157. Dan Couriann,

    I assume we are on the same page about energy transfer and signaling. Now about the rest:

    "In summary, Bell's four options were
    (1) QM fails (didn't happen)
    (2) superdeterminism (can never be ruled out)
    (3) non-locality (superluminal signaling - conflicts with observation)
    (4) non-separability (wave functions of entangled systems can't be factored)"

    No, this is wrong. First, although in a logical sense superdeterminism (or better "hyperfine tuning") can't be ruled out, the principle of statistical independence that Bell appeals to underlies all scientific method. And no known theory exists that recovers the quantum-mechanical predictions this way.

    (3) non-locality is *not* the same as superluminal signaling. What is needed to violate Bell's inequality at spacelike separation is superluminal information transfer, not signaling. Einstein always complained that quantum theory employed "spooky action-at-a-distance", but he never claimed one could send signals with it. Every existing precise "interpretation" of quantum mechanics is non-local, and none provide for superluminal signaling.

    Most important. (4) is not on Bell's list, and certainly was not Bohr's position. Non-factorability into a product state just is the definition of entanglement, and every theory that takes the wavefunction seriously as representing a physical state must accept entanglement and non-separability. Bohmian mechanics certainly does, GRW certainly does, and Everett certainly does. But accepting entanglement alone does not solve the measurement problem, and so does not explain how violations of Bell's inequality can occur. For Bohm, you have to add particles, for GRW you have to add collapses. Many Worlds is trickier to discuss, because there is no standard way to understand the theory. So simply accepting entanglement does not resolve Bell's problem. You have to solve the measurement problem if you are going to discuss the correlations among experimental outcomes. And entanglement alone does not solve the measurement problem.

    Bohr said that experimental apparatus had to be described in classical language, and entanglement is not part of classical language. So Bohr never cited entanglement as the key to anything. And simply having an entangled wave function does not produce violations of Bell's inequality. Since the correlations are correlations between measurement outcomes, you have to solve the measurement problem.

    That is why I kept asking you about QFT. Once you admitted that you don't have a solution to the measurement problem you thereby admitted that you don't have a theory that can account for violations of Bell's inequality of the sort found in the lab. And every solution to the measurement problem brings along with it a way to implement the non-locality of the theory. There is no place in Bell's works where he suggests that your option (4) alone can account for violations of his inequality without some sort of non-locality. If you think otherwise, please cite the passage.

    In summary, (1) didn't happen, (2) he never took seriously, which leaves only (3) (although non-locality does not at all mean superluminal signaling). Every serious theory has (4), of course, but different theories implement the non-locality in different ways.

  158. Andrei,

    What you write is false for 't Hooft's CA. Its dynamics is time-reversible and local, with no constraints. You can change any cells any way you like, and the dynamics will generate a unique future and past. The CA that 't Hooft has defined simply has no forbidden global instantaneous states.

    He himself admits that he has to rule out some initial states by fiat. In the end, most will have to be ruled out, with no principle or explanation.

  159. I have thought of another example which I hope (but doubt) will clarify the difference between a non-signal signal and an inference from prior information - which apparently I must share with the world.

    Is there a planet, somewhere in the Andromeda galaxy, with earth-like conditions, which hosts some form of biological life (not necessarily DNA-based)? I think that there is, based on my past accumulation of knowledge. Let's say I am right and there is. Does that mean I have received some information from Andromeda across unimaginable space and time? No, it is an inference.

    Similarly, if I tell someone "If A, phone me at time C; if not-A, don't phone me," and I get no call at time C, I may infer that not-A is true, but not with certainty. He (or she) could have forgotten to call, might have lost their phone, or been robbed of their phone by a mugger; the nearby cell towers could be damaged, the battery could have run down, and so on. Probably (but not certainly), not-A is true; and probably there is that earth-like world somewhere in Andromeda: inferences, not signals.

    Similarly, if I have one of a pair of entangled particles and measure some property, I may infer that the other would have the complementary property, based on past experiments; I have not instantaneously received any signal which confirms it. (And there is some remaining probability, say at -20 sigma, depending on the number of past experiments, that the other particle does not have that property.) Now suppose we have done infinite experiments and the probability of disagreement is now zero - is there a discontinuity in our definitions of signal and inference at zero?

    May you all have a Happy New Year, everyone except Trump and his emulators (not referring to anyone here - I hope) .

  160. I assume we are on the same page about energy transfer and signaling.

    Are we? I say that signaling requires energy transfer, and that your purported examples of signaling (or "information transfer") without energy transfer are invalid.

    “superdeterminism (can never be ruled out)…” No, this is wrong… in a logical sense superdeterminism (or better "hyperfine tuning") can't be ruled out…

    Aside from the opening phrase “No, this is wrong”, it sounds like we’re in agreement that superdeterminism can never be ruled out.

    …the principle of statistical independence that Bell appeals to underlies all scientific method.

    Since we agree that superdeterminism isn’t (and can’t be) ruled out, it can’t conflict logically with “the scientific method”. The patterns, symmetries, etc., all still exist to be discerned, by whatever method, so there is no conflict with methods of discovery, regardless of your concept of the “flow of causality” (static block universe versus dynamically evolving Cauchy surfaces, etc).

    And no known theory exists that recovers the quantum-mechanical predictions this way.

    I don’t understand what you mean. The only theory here is quantum field theory, which obviously yields all the quantum-mechanical predictions. If you mean superdeterminism, I think we’ve agreed that it can’t logically be ruled out, and doesn’t logically conflict with quantum field theory. Superdeterminism is an interpretation that yields all the results of quantum field theory, and you’ve agreed it’s a possibility.

    …non-locality is *not* the same as superluminal signaling.

    I think we’ve been around this before. The principle of locality in physics is most commonly defined as no superluminal action, which implies no superluminal signaling (see above). Admittedly this is a matter of definition, and some philosophers define the word locality to mean what others call separability. Setting aside the semantic differences, we hopefully agree that the wave function of two space-like separated entangled particles can’t be factored. I say this violates separability but not locality, because there is no superluminal action. You say it violates “locality” due to “information transfer”, even though the word “transfer” smuggles in an asymmetry that is not inherent in the phenomena – barring some supernatural intervention – and even though there is no signaling. Sure enough, you invoke “interventions” to support your assertion of asymmetry, and you claim that the existence of such “interventions” is essential to the scientific method. We have a fundamental disagreement about all of this. It may all come down to what the word “action” means.

    Most important. (4) is not on Bell's list, and certainly was not Bohr's position.

    Here’s the actual quote: “Fourthly and finally, it may be that Bohr’s intuition was right – in that there is no reality below some classical macroscopic level.” But Bell immediately goes on to admit that he doesn’t understand Bohr’s position at all. And what Bohr actually said, and what Einstein responded to, was not that the entangled particles have no reality, but that they have no independent reality, because their wave functions cannot be factored, meaning they are non-separable. So, if Bell’s fourth alternative is Bohr’s position, it is that quantum theory violates separability. (We’ve already explained why the neo-Lorentzian Bell used the word locality for this, because he really suspected that special relativity and Lorentz invariance were violated by quantum entanglement, i.e., his option 3.)

  161. JimV
    Shannon's analysis of information transfer involves three components: a transmitter, a receiver and a channel. The question Shannon asks is this: how does the state of the receiver depend on the state of the transmitter, given that the channel is operating according to design specifications? The analysis considers subjunctive conditionals: if the transmitter were in state A, what are the probabilities that the receiver would be in various of its possible states; if the transmitter were in state B, what are the probabilities; etc. From this information we can determine what we can believe about the state of the transmitter given the state of the receiver, all assuming the channel is operational.

    In this case, the state of the transmitter is binary: bad guy present or no bad guy present. The state of the receiver is visible spot of light or no visible spot. The channel is the whole set-up. Granting that the channel is operational, at each five-minute mark the state of the receiver yields one bit of information about the state of the transmitter. The first five-minute mark with no light is a perfectly reliable signal that the bad guy has come.

    The difference with your Andromeda case is easy: the state on the earth will be exactly the same whether or not your inference is correct. The state of the receiver (Earth) is not sensitive in this way to the state of Andromeda. So Shannon gets the right result: an inference of that kind is not a signal.

    Entangled particles cannot be used to send signals: we all agree with that. But if there is a real collapse of the wavefunction, that does transmit information. Clearly, by observing the outcome on one side, one acquires information about the other, information that was unavailable to anyone before the collapse. Namely information about how the distant experiment came out.
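
    (To make this concrete, here is a minimal sketch of the Shannon calculation for the binary set-up described above — my own illustration, assuming a uniform prior on the transmitter state and a deterministic, operational channel; all names are invented for the example:)

        import math

        def mutual_information(p_x, p_y_given_x):
            # p_x: prior over transmitter states; p_y_given_x: (x, y) -> channel probability
            p_y = {}
            for (x, y), p in p_y_given_x.items():
                p_y[y] = p_y.get(y, 0.0) + p_x[x] * p
            mi = 0.0
            for (x, y), p in p_y_given_x.items():
                joint = p_x[x] * p
                if joint > 0:
                    mi += joint * math.log2(joint / (p_x[x] * p_y[y]))
            return mi

        # Operational channel: a spot of light appears iff no bad guy is present.
        p_x = {"bad_guy": 0.5, "no_bad_guy": 0.5}
        p_y_given_x = {("bad_guy", "no_spot"): 1.0, ("no_bad_guy", "spot"): 1.0}
        print(mutual_information(p_x, p_y_given_x))  # 1.0 bit per five-minute mark

    Note that the calculation is indifferent to which receiver state is the energetically excited one: the "no spot" outcome carries exactly as much information as the "spot" outcome.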

  162. Dan,

    I only now realize that I sent another response that seems not to have been published. It contained the following refinement of the example. You are in your room waiting for Bad Guy to come. Your associate is across the street. Your windows are double-paned, with a vacuum between the panes. You have a laser. Every five minutes, you plug the laser in, flash a spot of light through the window to your associate, unplug the laser and put it away. At the first five-minute mark after Bad Guy arrives you do nothing, the laser is unplugged, no spot appears on the wall across the street, and a signal is sent to your associate that Bad Guy has arrived. This is a perfectly good Shannon signal. And if you want to argue that any energy is transmitted from your room to your associate's room, give it a shot. No physicist in the world will agree.

    The method of drawing consequences from controlled experiments relies on statistical independence. This is obvious. If I flip a coin to sort a population into an experimental and control group, then treat the experimental group and get a statistically significant difference in the result I am interested in, I infer that the treatment caused the difference. That depends on accepting statistical independence, which implies a high probability that the experimental and control groups are statistically similar to each other. If, by chance, all of the individuals who were already predisposed to show the result happened to end up in the experimental group, then the treatment might have no influence at all on the result. That is, of course, logically possible. But we ignore the possibility. You seem to think that scientific inferences logically guarantee their conclusions. Of course they don't. They rely on presuppositions that we do not seriously doubt. Among these are statistical independence in a case like this.
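
    (A quick simulation of this point about randomized assignment — a sketch of mine, not anything from Bell's papers; the "latent predisposition" numbers are invented for illustration:)

        import random

        random.seed(0)
        population = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # latent predispositions

        treated, control = [], []
        for trait in population:
            (treated if random.random() < 0.5 else control).append(trait)  # coin-flip sorting

        mean = lambda xs: sum(xs) / len(xs)
        print(mean(treated), mean(control))  # nearly equal: the groups are statistically similar

    It is logically possible that the coin flips conspire with the predispositions, but we ignore that possibility; this is the (weak) statistical independence assumption in action.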

    Since you do not specify how QFT/QM as you understand them solve the measurement problem, that "theory" makes no empirical predictions at all and a fortiori cannot predict violations of Bell's inequality. Your idea that you can make claims of any sort for this "theory" is misguided. How a theory implements non-locality depends on how it solves the measurement problem. So you can't use QFT/QM as an example of anything, much less a theory that violates statistical independence. No interpretation of the theories I am aware of denies statistical independence.

    Of course entanglement violates separability. But violating separability alone does not provide resources to violate Bell's inequality for events at spacelike separation. Any theory that takes the wavefunction seriously takes entanglement seriously. That includes Bohm's theory and GRW. But these theories implement non-locality and account for violations of Bell's inequality in very different ways because they solve the measurement problem in different ways. Bell never lists entanglement alone as an escape from his theorem because it isn't. Bell's lambda can include an entangled wavefunction. The theorem goes through just the same.

  163. Tim,

    I don't agree with what you are saying. I remember some paper of 't Hooft where he was exploring CAs with information loss (so non-reversible), and I also know that he insists a lot on the rich structure of the vacuum. I'll try to find those papers and point you to them.

    But, regardless of what 't Hooft is working on now, do you agree that the field-like CA I was describing in my previous post does not allow for statistical independence?

    Andrei

  164. I only now realize that I sent another response that seems not to have been published. It contained the following refinement of the example…

    I don't see any refinement; it's the same as before. Again, your signal is a step function superimposed on a crude carrier wave. The lower the frequency of the carrier wave, the less the fidelity of the signal, i.e., the greater the uncertainty as to when the step function actually occurred (the turning off of the signal). The actual signal (information transfer) is the energy transfer. To see this, ask yourself: Why is the knowledge of the bad guy’s arrival delayed by the transport time for the light pulse to traverse the distance between houses (plus the period of the carrier wave)? For example:

    Suppose the houses are 100 light years apart. At the first missing pulse, the receiver will know that either his friend fell asleep and forgot to send a signal or else the bad guy arrived at some time between ‘100 years ago’ and ‘100 years plus 5 minutes ago’. The five minutes of uncertainty is due to the crudeness of the carrier wave (which can be reduced arbitrarily by using a higher frequency carrier wave), but the unavoidable 100 year delay is due to the propagation time for the energy of the last received pulse. The speed of the information transfer is strictly limited by the speed of the energy transfer. You’re just adding a 5 minute uncertainty (i.e., delaying the end point of the receiver’s uncertainty interval) by using a low frequency carrier wave. So this is not an example of energyless transfer of information.
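
    (To make the bookkeeping explicit, a toy calculation of the receiver's inference — the function and variable names are mine, purely illustrative:)

        def arrival_window(t_last_pulse, period, distance, speed):
            # The receiver sees a pulse at t_last_pulse and nothing one period later,
            # so the sender stopped transmitting within one period after emitting
            # that last pulse.
            earliest = t_last_pulse - distance / speed
            return earliest, earliest + period

        # Houses 100 light-years apart, one pulse every 5 minutes (time in years):
        five_minutes = 5.0 / (60 * 24 * 365.25)
        print(arrival_window(t_last_pulse=100.0, period=five_minutes,
                             distance=100.0, speed=1.0))
        # -> (0.0, ~9.5e-6): a 5-minute window of uncertainty, 100 years after the fact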

    The method of drawing consequences from controlled experiments relies on statistical independence.

    Well, quantum entanglement entails a violation of some possible assumptions about statistical independence of spacelike-separated actions (and re-actions) for entangled systems, so if we design a controlled experiment that relies on Bell’s inequality being respected for entangled systems, our experimental basis will indeed be compromised. As for situations in which our assumptions about statistical independence remain valid, well, they remain valid. Maybe what you’re trying to say is that if we invoke superdeterminism to account for quantum entanglement, we might as well invoke it to explain everything. That’s a standard criticism of superdeterminism, but the counter-argument is that the degree of statistical independence between events is whatever it is, and is characterized by whatever recognizable attributes apply, making the results observationally equivalent, regardless of how explained. But I’m not here to argue superdeterminism, I’m just saying this was Bell’s second option and everyone agrees that (like solipsism, etc.) it can’t be logically refuted.

    No interpretation of the theories I am aware of denies statistical independence.

    I don’t understand that statement at all. Quantum theory denies statistical independence between spacelike-separated entangled systems, i.e., the wave functions can’t be factored.

    …violating separability alone does not provide resources to violate Bell's inequality for events at spacelike separation.

    I think it does, at least by the standard interpretation of quantum theory. It used to be taught that quantum mechanics was a theory of how quantum systems interact with classical systems, and there was some wisdom in that description, side-stepping the measurement problem altogether. In practice, this is invariably how the theory is deployed, which is why we don’t bump into the measurement problem in practice. In this context, the non-separability of the wavefunction of a quantum system fully explains all experimental results related to quantum entanglement – and everything else. When we get bored with that, and decide that we want to treat everything as a quantum system, well, things get murky. I’ve already said I don’t know of any theory that definitively resolves the measurement problem (or even settles the question of whether there is a problem).

  165. Guys,

    If a thread has a lot of submissions, I sometimes miss a comment and it keeps hanging in the queue among the junk. Sorry about that. If you think this happened, the easiest is to post a brief comment notifying me of it. Please make sure to address this comment directly at me. You can also send an email to hossi at fias.uni-frankfurt.de. Best,

    B.

  166. Dan,

    So let's make this crystal clear. You maintain that at the five minute mark once the Bad Guy has arrived, when you leave the laser unplugged and send no light to the room across the street, thereby signaling, some energy is transmitted from your room to the room across the street? And to back this extraordinary claim up all you have are the words "crude carrier wave" and "step function"?

    Then there is no point continuing. You appear to know no physics at all. If you think there is energy transmitted at that time, identify the physical source and where the physical flow occurs. Through the vacuum between the window panes? Around the window? Out through the ceiling?

    Either take the discussion seriously or stop engaging.

  168. Andrei,

    No, your considerations are clearly insufficient. We do, as a matter of fact, employ the assumption of Statistical Independence all the time. That assumption is much, much, much weaker than a claim that there are no interactions between things. To advert to my other example, the parity of the number of leaves to fall off the trees in the park in a minute is statistically independent of the parity of the number of cars that go by. We make that assumption without questioning it, and you are free to check: it is true. But the cars and the leaves interact. You always equate interaction with the denial of statistical independence. They are just very different conditions. Indeed, as Bell remarks, things that interact a lot, like atoms in a box of gas, tend to become statistically independent. That is the condition that Boltzmann assumes in his H-theorem. Please try to separate the concepts, or we cannot make progress. Think about the cars and the leaves.

    TM, if you can postulate an imaginary Shannon channel (at a time when there is no physical channel between the two characters in the phone scenario), I don't see why I couldn't (although I don't) postulate an imaginary channel between myself and Andromeda, which similarly changes the state of my imaginary receiver to make me positive that my conjecture is true. In other words, I feel that is the sort of thing your system leaves you open to: how do you distinguish among imaginary channels (by which I mean, in less-loaded terms, channels whose operation no physical test could detect)?

    In any case my system (signal = physical signal, conclusion based on previous information = inference or more generally, algorithmically-determined action) makes sense and seems consistent to me, as no doubt yours does to you. I am happy to resist further engagement on these terms.

    So let's make this crystal clear. You maintain that at the five minute mark once the Bad Guy has arrived, when you leave the laser unplugged and send no light to the room across the street, thereby signaling, some energy is transmitted from your room to the room across the street?

    No, I maintain that information is not transferred at that time, i.e., I dispute your phrase “thereby signaling”. It’s important to distinguish between new information (received external signals) versus extrapolations and deductions based on previously received information. (This parallels the vital distinction, well known to philosophers, between sense impressions versus internal thoughts.) Each pulse received at time t signifies that the bad guy had not arrived at the transmitter by the time t - L/c. If the next expected pulse does not arrive at t + T (where T is the period between pulses), the receiver may infer that the bad guy arrived some time between t - L/c and t + T - L/c. In other words, the bad guy could have arrived immediately after the last energetic pulse was transmitted, so the last energetic pulse is the last actual transfer of information. The rest is deduction based on previously received information and the agreed communication protocol.

    Let me explain it this way: Suppose a 10 MHz beacon is shuttered until the bad guy arrives, at which time it is unshuttered. Clearly this arrival signal is conveyed with energy. Now consider the converse, i.e., the beacon is unshuttered until the bad guy arrives, at which time it is shuttered. This arrival signal too is conveyed with energy. In the first case the signal is a leading edge of an energetic carrier wave, and in the converse case the signal is the trailing edge of an energetic carrier wave. Note that, in both cases, the receiver has the same uncertainty as to when the bad guy arrived, because he can’t resolve the time more precisely than the interval between pulses. But with a 10 MHz carrier wave this uncertainty is negligible.

    Now suppose we slow down the pulses, i.e., use a lower frequency carrier wave. At what frequency do we begin to produce your purported “signaling without energy”? The answer is that, regardless of frequency, the signal is always conveyed by energy, whether it is a leading edge or a trailing edge signal, and in both cases we can’t resolve the bad guy’s arrival time more precisely than the period of the carrier wave. And, of course, in both cases the speed of the arrival signal is limited by the speed of the energy conveying it. The first (or last) received pulse anchors one end of the uncertainty interval, and the size of the interval was determined by prior agreement (which of course was communicated by energetic signals). So there is no energyless signaling.

  171. JimV

    You seem to think that what I am saying is somehow idiosyncratic. It isn't. I am just applying Shannon's theory to this case.

    There is not an "imaginary" Shannon channel, whatever that might mean. There is a real, physical channel. The conditions for the channel to be operational are physical conditions (the glass must be transparent, there can be no obstructions between the buildings, etc.). If the channel fulfills these conditions, then the state in the room across the street at five-minute intervals (spot or no spot) carries information about the state of affairs in my room (Bad Guy or no Bad Guy). There is nothing "imaginary" about any of this. What is important are the subjunctive connections: if there should be no Bad Guy in the room right now, there would be a spot on the wall. These are all plain physical claims. The difference with Andromeda is that there is no condition on Andromeda such that, had it been different, your physical state would be noticeably different. That claim is also backed by physics.

    The problem with your imaginary transmitter from Andromeda is just that: it is imaginary, not real. The transmitter in my case is perfectly real.

    If you are not familiar with Shannon's definitions, then you should familiarize yourself with them.

  172. Dan,

    Your problem is that you insist on trying to understand everything using the model of a "carrier wave". But not everything is a carrier wave. A letter is not a carrier wave. And my pulse system does not use carrier waves. It is only accidental that it uses any wave (i.e., a light wave) at all: instead we could have agreed that every five minutes I would throw a pebble at the window across the street until Bad Guy arrives. No waves ever. But the first 5-minute mark without a pebble is a signal. A plain, vanilla Shannon signal, that satisfies Shannon's definitions exactly the same way that each 5-minute mark with a pebble does. At the 5-minute marks, and only at the 5-minute marks, the state of the receiver, across the street, contains information about the state of the room: whether at that time (minus the transit time for the pulse or pebble) my room contained or did not contain Bad Guy. Every 5 minutes one bit of information is transmitted. And since I can control the transmission, this is a case of signaling. When there is no pulse or no pebble there is zero energy transfer. That is neither here nor there.

    In every case, there is an inference: given the state of the receiver, and under the assumption that the transmitter and transmission channel are in order, what can be inferred about the state of the transmitter? That is the question Shannon asks and answers. And in this set-up, the validity of the inference does not require that every signal transmit energy. The pulses and pebbles do; the absence-of-pulses and absence-of-pebbles don't.

    If Bad Guy arrives in the first five minutes, so no pulse or pebble is ever sent, then no energy is ever sent. But it is a signal all the same.

    Again, if you can't accept this (and I can't for the life of me see what is hanging you up here), then go read Shannon's original paper, or some other presentation of his theory. What I am saying is not in the least controversial. If we can't settle this then there is no prospect of sorting out entanglement, which is a trickier business.

  173. Tim,

    "No, your considerations are clearly insufficient. We do, as a matter of fact, employ the assumption of Statistical Independence all the time. That assumption is much, much, much weaker than a claim that there are no interactions between things."

    Yes, but I didn't just say that statistical independence fails because things interact, but because once you shift from a Newtonian, "billiard-ball"-like theory, where objects only interact by direct collisions, to a field approach, the system becomes rigid. Any change in a part of a system requires a corresponding change everywhere. Example:

    You have a CA universe of finite size with a charged particle in the middle. The particle is stationary. For consistency with EM laws you need to add in each cell an electric field component with a magnitude decreasing with the square of the distance and with a direction pointing towards the particle. If you want to change the system by adding a speed to the particle you will need to add a magnetic field component in every cell. As the particle changes position both fields change, and so on. Can you find two cells in this universe with independent parameters?
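
    (A toy rendering of this rigidity — my own sketch, not 't Hooft's model; the grid size and field normalization are arbitrary:)

        import math

        N = 11  # toy CA grid, particle at the center cell

        def required_field(center):
            # Coulomb-like field each cell must carry for consistency with the charge:
            # magnitude ~ 1/r^2, direction pointing toward the particle.
            cx, cy = center
            field = {}
            for i in range(N):
                for j in range(N):
                    dx, dy = i - cx, j - cy
                    r2 = dx * dx + dy * dy
                    if r2 == 0:
                        field[(i, j)] = (0.0, 0.0)
                    else:
                        r = math.sqrt(r2)
                        field[(i, j)] = (-dx / (r2 * r), -dy / (r2 * r))
            return field

        before = required_field((N // 2, N // 2))
        after = required_field((N // 2 + 1, N // 2))  # particle shifted by one cell
        print(sum(before[c] != after[c] for c in before), "of", N * N, "cells change")

    Moving the particle by one cell forces a correlated change in essentially every cell, which is the sense in which no two cells carry independent parameters.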

    "To advert to my other example, the parity of the number of leaves to fall off the trees in the park in a minute is statistically independent of the parity of the number of cars that go by. We make that assumption without questioning it, and you are free to check: it is true."

    How would you recommend me to check it? How can you prove statistical independence experimentally?

    "Indeed, as Bell remarks, things that interact a lot, like atoms in a box of gas, tend to become statistically independent. That is the condition that Boltzmann assumes in his H-theorem."

    Statistical independence is an approximation that works in that specific case. This does not prove that the assumption is true. It is also worth mentioning that Boltzmann deals with statistical quantities, not with correlations between properties of individual particles such as in the case of Bell's theorem.

    "Please try to separate the concepts, or we cannot make progress, Think about the cars and the leaves."

    I think I did that, but let me comment a little bit about the cars and leaves. The correlations that are established in a field theory relate particle distribution and motion with local field components. This is the level where the statistical independence fails, and where the relevant physics (like the emission of particles by the source) takes place. Leaves and cars are just non-fundamental, subjective descriptions of more-or-less random collections of particles that happen to have a meaning for humans. It is not cars and leaves that interact, but their internal particles. Counting the cars or the leaves tells you virtually nothing about their internal state, so I would not expect a meaningful correlation to occur.

    So, I think this is how we stand:

    1. I claim that you cannot establish statistical independence between any properties of any cells in a CA universe obeying the laws of electromagnetism. If you think you can then just give an example.


    Thanks,

    Andrei

  174. Tim,

    I've been following your discussion with interest, but like Dan and JimV I'm having trouble with your explanation/examples of energy-free information transfer. It seems like you're ignoring the information content implicitly encoded in your assumptions, your communication channel setup and information transfer prior to "the event" that you claim requires no energy to signify. Please read the following from the standpoint of principle, not as nit-picking over details.

    First example: Assume a laser-based transmitter/receiver setup that continuously sends light until an event occurs. Let's say you open the channel by a dark period of at least five seconds, followed by exactly one second of light after which the communication channel is fully active. By prior agreement the sender turns off the laser when the event occurs, which the receiver detects to "know" the event has occurred. How does the receiver decide the laser has turned off? Does the algorithm that detects "dark" contain implicit information ("memory" about what constitutes an event, which was set up sometime in the past) that is activated by transition from light to dark? Can that algorithm be created/employed in an energy-free way? Alternatively, might we instead use a sensitive calorimeter that slowly absorbs the light energy while a sensitive thermometer records the temperature samples, so that the elapsed time from "channel reset" (five seconds of dark) to the light-to-dark event can be known sometime later? Would you argue the rising temperature before the event was irrelevant to the detection, i.e. it contained no information about the occurrence and time of the event itself?

    Your pebble example seems problematic. First, don't you need to tell the receiver to start listening for pebbles, so that the case of "no pebble heard" can be distinguished from a closed/nonfunctional channel? Doesn't that notification constitute information, and doesn't it require energy? If you were the receiver, would you bet a year's salary that not hearing a pebble after five minutes means a "bad guy" showed up, or might you also consider the possibility of transmitter or channel failure?

    It seems your assumption of transmitter, transmission channel and receiver all being in order is highly nontrivial, and implicitly contains a great deal of information that allows you to receive and trust a signal. How do you establish the channel and verify its continued operation without energy transfer? It seems you can't simply treat "channel is operating correctly" as a simple boolean, but must account for the many ways noise can enter the channel (e.g., a bird momentarily blocks the beam) or the channel can fail outright (e.g., someone bumps the laser without knowing it) -- all those possibilities must be accounted for by sophisticated encoding techniques and continuously monitored by auxiliary devices. And even then, you need the sender and receiver to meet (and exchange information) at the end of the experiment to verify that the results are trustworthy.

    I suspect pretty much any example you can come up with, no matter how complex, will have problems with performing energy-free information transfer. The "real world" is always messier than mathematical abstractions, although abstractions can give a best-case bound. At least that's the way it seems to me, although I could be overlooking something...

  175. Andrei,

    I appreciate that you are trying to think this through, but somehow you end up asking very strange questions. You ask, for the leaves and cars example, "How would you recommend me to check it? How can you prove statistical independence experimentally?" But that is trivial. Count and check. Easy as pie. Or, to give an easier and faster experiment (now that the leaves are down): take two different-colored dice. Put them in a cup. Shake and roll the dice. Record the outcome. Repeat 1,000 times. Prediction: the outcomes of the two dice will be statistically independent of each other. You are free to alter the dice however you like: shave the edges, attach weights, whatever. Just don't glue them together: they should move freely. If you doubt this, please do the experiment. If you don't think you even have to do the experiment, think about why you are so confident. Get back to me on this: Do you doubt that the sequences will be statistically independent (up to usual errors)? If so, did you do the experiment and check? If not, why not? The dice are interacting like crazy. Maybe if we settle this simple observation we can make progress.
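
    (For what it's worth, here is a simulated version of the experiment — a sketch only, since real, physically interacting dice are the point of the exercise; the expected statistics look like this:)

        import random
        from collections import Counter

        random.seed(42)
        rolls = [(random.randint(1, 6), random.randint(1, 6)) for _ in range(1000)]

        n = len(rolls)
        joint = Counter(rolls)
        first = Counter(a for a, _ in rolls)
        second = Counter(b for _, b in rolls)

        # Independence: joint frequency ~ product of marginals for every pair of faces.
        max_dev = max(abs(joint[(a, b)] / n - (first[a] / n) * (second[b] / n))
                      for a in range(1, 7) for b in range(1, 7))
        print(max_dev)  # small, of order 1/sqrt(n): consistent with independence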

  176. Marty,

    Our question is simple: does sending a signal from point A to point B require transmitting any energy from point A to point B? And the answer, as my examples show, is "no". Pretty trivially.

    Since I have constructed my examples to make my point, please don't just alter them and ask about the altered examples. Of course signals can transmit energy. They just don't have to.

    As far as Shannon's definitions go, it is irrelevant whether there is anyone in the room to hear the pebbles. The question is just about correlations between the state of the transmitter and the state of the receiver. The room itself can be the receiver, even when no one is home. I just added the person to make the point more vivid.

    Shannon's definition requires a characterization of the transmitter states, the receiver states and the channel conditions. The definition requires just that the channel be operational, not that anyone know it is. So your worries about whether the channel is working are beside the point. You can always raise skeptical questions. If you try to check the channel, you can raise skeptical questions about whether the checking mechanism worked as designed. That is not relevant.

    Anyway, put in all the channel-monitoring devices you like, devices that check for birds or table-bumps, or whatever. Let the monitors sound an alarm and flash a bright light if and only if a flaw is detected: otherwise, they sit quietly. Then if there is no problem they all sit quietly and transfer zero energy into the room. And you can be as secure betting your salary as you wish to be.

  177. "There is not an "imaginary" Shannon channel, whatever that might mean."

    That's discouraging because I thought I clearly defined it as a channel whose *operation* (key word) cannot be detected by any physical test. Shannon was of course free to make any definition he wanted, but in science (and I suspect in Shannon's mind) things have to be empirically verified (over competing systems, such as mine) by physical tests. Shannon's work, to the extent that it can be applied to imaginary channels, seems like a red herring to me. (Although of course Shannon's work is very important as applied to actual, physical systems, such as the phone systems it was developed for and tested in.)

    I should however have added the element of local evidence to make my system general:

    Signal = physical signal, detectable by a (blinded) third party by physical test.

    Conclusion based on prior knowledge *and* local (non-transferred) evidence = inference, or algorithmically-determined action.

    For instance, when you measure some property of one of a pair of entangled particles, you have your prior knowledge of QM and the local evidence of the measurement to convince you that the other particle, if measured, would have the complementary property. Someone who did not know QM would not have received your (in my view, imaginary) information transfer over your imaginary Shannon channel.

    (I suppose one could say something like, "reality knew what the other particle's property would be, even if the experimenter did not"; however in that sense reality knows everything, even to the farthest neutrino in Andromeda, instantaneously, which does not seem to be a useful distinction to me.)

    Similarly with the phone example, absence of a call is local information, not transferred information, which together with previous (physically-transferred) information, may warrant a conclusion (albeit a risky one, for reasons previously mentioned).

    This system makes sense to me. I don't see any contradiction with Shannon's work in it. I feel free to continue using this model, pending further (physically-transferred) information. (I have skimmed Dr. Hossenfelder's link but not yet absorbed it.)

  178. I have not followed your discussion, but here is information transfer without energy.

    Ironically, that referenced blog article fully supports the “no energyless signaling” side of the discussion we’ve been having. Here are the relevant points:

    In a flat space-time with three spatial dimensions, the field only correlates with itself on the lightcone [and hence no energyless transfer] but in lower dimensions this is not so… Okay you might say then, fine, but we don’t live in 2+1 dimensional space-time. That’s right, but we don’t live in three plus one dimensional flat space-time either: We live in a curved space-time. This isn’t further discussed in the paper but the correlations allowing for this information exchange without energy can also exist in some curved backgrounds. The interesting question is then of course, in which backgrounds and what does this mean for our sending of information into black holes? Do we really need to use quanta of energy for this or is there a way to avoid this? And if it can be avoided, what does it mean for the information being stored in black holes?

    So, this confirms that, in the flat 3+1 spacetime that we’ve been discussing, energyless transfer of information is not possible. (Of course, due to the elementary fact that Huygens’ principle applies only in spacetimes with an odd number of space dimensions, it isn’t surprising that off-cone propagation occurs in 2+1 dimensional spacetime.) It also acknowledges that it is speculative whether energyless transfer of information might be possible in some curved spacetimes resembling our own, and, if so, who knows, this might solve the black hole information paradox. The reference concludes by predicting we will hear more of this in the future. Possibly. But this clearly refutes the claim that energyless signaling is possible in flat 3+1 dimensional spacetime, even by the most exotic means, let alone with an ordinary flashlight out the window to signal the arrival of a bad guy. So, again, just to be clear, in flat 3+1 dimensional spacetime there is no energyless signaling, and whether this fundamental principle is somehow violated in some curved spacetimes resembling our own is highly speculative… and in any case not relevant to our discussion. So, ironically, that reference is very helpful in clarifying that there is no known violation of the principle of “no energyless signaling” in our universe.
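
    (For reference, the dimensional contrast behind this can be made explicit with the standard retarded Green's functions of the massless wave equation:

        $$G^{(3+1)}_{\mathrm{ret}}(t,r) = \frac{\delta(t - r/c)}{4\pi r}, \qquad G^{(2+1)}_{\mathrm{ret}}(t,r) = \frac{c}{2\pi}\,\frac{\theta(ct - r)}{\sqrt{c^2 t^2 - r^2}}.$$

    The first is supported only on the light cone, which is Huygens' principle; the second has a tail inside the cone, which is what permits the off-cone correlations in 2+1 dimensions.)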

  179. Dan

    I have killed your claim with a clear counter-example. Can you just accept that and try to figure out where you have gotten so confused?

    If you insist on complicating issues, at least look at the really interesting ones. The Elitzur-Vaidman bomb problem is actually a *measurement interaction* that transmits zero energy to the measured system, and obviously conveys information about that system. It allows quantum theory to do something that classical theory can't: no classical theory can solve the bomb problem. But that is a subtle point. The point about energyless signaling is quite trivial. That you refuse to accept something so obvious means that there is no hope in having a useful conversation.

  180. JimV

    There is an actual precise theory of information transfer due to Shannon. Add controllability of the transmitter to information transfer and you get a signal. Such a signal does not require any energy transfer, as my examples illustrate.

    You seem to be trying to construct some alternative set of definitions to Shannon's. I have really no interest in trying to investigate or critique your alternative: Shannon is the gold standard here. Just apply his account to the example I have given and you get the result I claimed. Stop trying to muddy the waters.

    …not everything is a carrier wave… my pulse system does not use carrier waves. It is only accidental that it uses any wave (i.e., a light wave) at all: instead we could have agreed that every five minutes I would throw a pebble at the window across the street…

    But that’s the carrier wave! When I refer to a "carrier wave", I’m not talking about the EM fluctuations comprising an individual light pulse, I’m speaking conceptually about an abstract carrier signal that can consist of a sequence of pulses or pebbles or postal letters or anything you like. These entities carry energy, and this sequence of pebbles (say) is the carrier wave of your signal, which is stopped at some point (step function), so you are using a trailing edge signal. And your carrier wave is crude because it consists of discrete pulses (rather than a continuous wave form) with a very low frequency, which gives poor bandwidth and poor resolution. Of course, if you throw pebbles at 10 MHz, you will have good resolution of when the bad guy arrives, and if you throw pebbles only once per day you will have poor resolution, but the pebbles are the carrier wave and the step function is the signal (stop throwing pebbles), and the rest is deduction based on previously communicated protocol. There is no energyless transfer of information here. This is a fundamental principle of physics in 3+1 dimensional flat spacetime, and, I’d venture to say, in all realistic spacetimes (although energy becomes a tricky concept in curved spacetimes, but that’s not relevant to our discussion here). From time to time, people re-discover the fact that Huygens' Principle doesn't apply in 2+1 dimensional spacetime, but don't be distracted by that.

  182. TM, from my point of view, you seem to be applying Shannon's Information Theory, which I have no problem with, in a way which I doubt was ever intended, by defining no-signal as a signal, which seems like an obvious contradiction in terms, and therefore a sign of a bad application. My system simply shows that the conclusions you claim to be the result of transferred information can easily and clearly be explained as the result of previously-transferred information (using non-zero signals and energy), plus evidence which is available locally without any transference. At a minimum, this alternate explanation stands in the way of any supposed proof that information can be transferred by no signal and no energy - in the same way, for example, that James Randi's duplication of Uri Geller's spoon-bending showed that no magic or psychic phenomena are needed to bend spoons. It didn't prove that Geller did not bend spoons with his mind, and I haven't shown that your imaginary Shannon channels don't exist - just that they aren't necessary to explain anything.

    For my part, those who wish to believe that Geller bends spoons with his mind can continue to think that way, but I am satisfied with Randi's alternate explanation. I am sorry if this muddies the water (as indeed it does, for Geller fans).

    Having read Dr. Hossenfelder's link twice now, I focus on the part which says "While the switching of the detectors requires some work, this is a local energy requirement which doesn’t travel with the information." So the claim of sending information without [expending any] energy seems a bit over-hyped. No energy, no signal still seems to be true.



  183. Dan,

    Really, for the last time: if you think that, when I don't use the laser or don't throw a pebble, some energy gets transmitted across the street, then identify the source, quantity and trajectory of the energy. If you can't do that, then just concede. I'm not going to continue this. I am trying hard, but there is only so much I can do. Either concede the case or identify the energy transmission.

  184. JimV

    We can never make any progress if you don't pay attention to what is at issue. A signal can be sent from space-time location A to space-time location B without any energy being transmitted from A to B. I never said that no energy is ever used anywhere! So you are in heavy battle with a straw man.

    Information is sent from my room to the room across the street. Information about the state of my room becomes available across the street. That information cannot be the result of any of our prearrangements because *we did not have the information at the time we made those arrangements*. So the relevant information cannot have been transferred at that time.

    The most egregious thing you write is "by defining no-signal as a signal". This is what is known as petitio principii or begging the question. The only grounds you have for saying that it is no-signal is because it is clearly no-energy. But whether there can be signals with no energy is precisely the question we are debating. So you beg the question in how you state the situation. Shannon would have no problem with anything I have said.

  185. Tim,

    My question regarding the experimental proof of statistical independence is based upon the fact that it might be difficult to evaluate data without any model in mind. If you have a linear relationship it is easy to find it. But more complex situations might pose more problems. Every statistical test has some limitations, etc.

    But this is not an issue here, just a side-question. We may ignore it, as I have acknowledged in my previous post that I would expect cars and leaves to be independent:

    "Counting the cars or the leaves tells you virtually nothing about their internal state so I would not expect a meaningful correlation to occur."

    Now, even if you can find some things that are independent (like cars and leaves, or dice if you want) you still cannot justify the independence assumption in Bell's theorem because you cannot control the hidden variable and you cannot even know it. And this takes us to the main point of my argument:

    "the correlations that are established in a filed theory relate particle distribution and motion with local field components."

    and my example:

    "You have a CA universe of finite size with a charged particle in the middle. The particle is stationary. For consistency with EM laws you need to add in each cell an electric field component with a magnitude decreasing with the square of the distance and with a direction pointing towards the particle. If you want to change the system by adding a speed to the particle you will need to add a magnetic field component in every cell. As the particle changes position both field change, and so on. Can you find two cells in this universe with independent parameters? "

    So, what is your answer to the above question?

  186. "Information is sent from my room to the room across the street."

    In my model this does not happen. The fact that no call is received, or no pebble, or whatever, is purely local information. I am not getting phone calls a lot of the time! (Not quite so much as I would like.) I can determine this locally, instantaneously (the way I obtain all local information), without anything being sent.

    Of course, by being sent you mean not being sent, and no physical signal is a signal. (In claiming this assessment is a logical fallacy on my part, it seems to me you commit a meta-level fallacy of that same type.) In your model I am constantly and forever connected by imaginary Shannon channels to everyone who could be (but isn't) calling me on the phone or shining light at me. Imaginary friends cannot be detected in operation by a third party. Neither can your channels when they are supposedly operating by not sending a signal.

    As I said, if you want to believe in these imaginary channels that is your prerogative but it is not convincing to me, since I have an alternate explanation for your scenarios which does not require them.

    (I apologize to Dr. Hossenfelder for all this waste of good Internet space. I feel impelled to answer charges against my reasoning with which I do not agree, but I recognize they add to your moderation burden.)

  187. If you think that when I don't use the laser or don't throw a pebble that some energy gets transmitted…

    But I don’t. See the previous messages. Again, my claim is that the only actual signaling is by the pulses or pebbles, and that when no pulse or pebble is sent, there is no signal and no transfer of information. And I haven’t just claimed this, I’ve explained it in detail, including the delay due to the propagation speed of the energy, and the uncertainty due to the low frequency of the pebbles, etc.

    Remember this exchange from several messages ago: Tim: “You maintain that at the five minute mark some energy is transmitted...” Dan: “No, I maintain that information is not transferred at that time.” You see, we’ve been around this before. I’m not sure how to get past the mental block that seems to be stopping us from communicating. Let me try this: Consider four possibilities for what happens at the five minute mark:

    (1) energy, signal
    (2) no energy, signal
    (3) energy, no signal
    (4) no energy, no signal

    You claim (2) and I claim (4). But for some reason you are certain that I’m claiming (1), even though I keep telling you I'm not claiming (1), I'm claiming (4). Now you say that if I don’t provide a defense of (1) forthwith you will terminate the discussion. This is very strange, isn't it?

    I’ve explained in detail why (4) is correct and (2) is fallacious. In summary, your “bad guy” scenario just consists of a trailing edge signal on a very low frequency pulse carrier wave. This is not energyless information transfer, because the pebbles carry the information, and everything else is deduction based on previously received information. Also, I’ve explained why this applies to any purported energyless information transfer. We’ve even noted a paper acknowledging (as common knowledge) that energyless information transfer is impossible in flat 3+1 spacetime, even by quantum entanglement, let alone by tossing pebbles.

  188. Andrei,

    What is my response? The same as ever. The statistical independence postulated by Bell is between the *apparatus settings* and the *initial state of the particles*. Suppose you postulate that these are not statistically independent? Okay, why not: as in all such cases we would seek a mechanism. If you want a causal mechanism, it can't be that the initial state of the particles is the cause and the apparatus setting is the effect. Why not? Because if we set the apparatus by the parity of the digits of pi, the apparatus setting cannot have the initial state of the particles as a determining factor: it is determined by the parity of the digits of pi. The apparatus would be set that way no matter the particle state. So the causal arrow would have to go from the setting backwards in time to the particle creation. Now 't Hooft explicitly denies retrocausation, so that is out for him. So all he can do is say that there is some otherwise completely unexplained restriction on the initial data that sets up this statistical dependence of the initial state on the parity of the digits of pi in exactly those situations where the digits of pi are later used to set the apparatus. But that sort of constraint with no explanation (except: otherwise I don't get the right predictions!) just is not an acceptable scientific theory. I have been repeating this over and over, so maybe you can point out which claim or inference you are objecting to.
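
    (A small simulated check of this point — my sketch, assuming the mpmath library for the digits of pi; the "hidden variable" stream is an invented stand-in, purely illustrative:)

        import random
        from mpmath import mp

        mp.dps = 1005
        digits = str(mp.pi)[2:1002]              # first 1000 digits after the decimal point
        settings = [int(d) % 2 for d in digits]  # parity of each digit -> apparatus setting

        random.seed(1)
        hidden = [random.randint(0, 1) for _ in settings]  # stand-in for the particle state

        agreement = sum(s == h for s, h in zip(settings, hidden)) / len(settings)
        print(agreement)  # ~0.5: the settings carry no trace of the "hidden" stream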

  189. JimV,

    I said I would give this up, so I will. You are confused and will not answer a simple question. Yes, you are not getting phone calls all the time, and that does of course give you real, precise, definite information that you would not have if the phone were not present or were ringing. For example, from my not getting a phone call right now, I get the information that Donald Trump, whatever he is doing, is not calling my number right now. If the phone were not here, I would not have that information. If the phone were not here, for all I know he could be calling my number right now. And similarly for every person with access to a working phone. This is just a plain fact. You may not like it, for some reason, but your dislike is neither here nor there. Apply Shannon's definitions and you get that result, and furthermore it is the correct result. If you deny this then you either have to say that 1) I don't actually have that information (which I evidently do) or 2) the phone not ringing is not essential to my getting the information (which it evidently is). The phone system is not an "imaginary Shannon channel", whatever that is supposed to mean. It is a very real, very physical Shannon channel, designed and constructed at great expense to be just that, and it does a splendid job. It informs me that there are literally billions of people in the world who are not presently dialing my number. That is a restriction of the space of physical states they could be in. It is a small restriction, but a real restriction. Without the actually existing, functioning phone system I would not have that information and anyone could be calling my number for all I know.

    I have run out of both words and patience, since you refuse to helpfully engage. If you insist on posting again, then start by saying what you mean by an "imaginary" channel, and why the phone system is an "imaginary" (rather than perfectly real) channel. And either why you do not have the information I have just said you have or, if you have it, how you got it. Those are sharp questions. Give sharp answers or just don't bother.

  190. Dan,

    You are just directly contradicting yourself. I will try one more time because you have laid things out, so maybe we can clear this up. I claim (2). You say you claim (4). But if you claim (4), then you claim that no signal and no information arrives at the five minute mark. But it evidently *does* arrive exactly then: exactly then, and not before, the associate becomes apprised of the fact that Bad Guy has arrived! At exactly that time he gains information of a very specific form that he previously did not have. Just say directly: do you want to deny this? If so, how? Are you really denying that he does, at that very moment (which of course takes into account the transit time that the pebble *would have taken* had it been thrown) acquire a brand new piece of information about whether Bad Guy is in the room across the way? Just answer: Does he get new information or not?

    In your post you show that you really do not want to deny this, that is, you really do not want to adopt position (4). Because you write: "In summary, your “bad guy” scenario just consists of a trailing edge signal on a very low frequency pulse carrier wave." Now whatever a "trailing edge signal" is supposed to be (I have asked that you stop using this obscure vocabulary), it is a *signal*. So when you say that my case is a "trailing edge signal" you are acknowledging the plain fact that it is some sort of a signal! So you do not actually hold (4) after all: you admit there is a signal (which cannot be denied), and since you demand that every signal carry energy you are committed to (1) willy-nilly. That, or you agree with the correct view, namely (2).

    I will make two more remarks. If you want to write again, please address yourself to these, and do not just repeat this stuff about "carrier waves" without making perfectly clear what that is supposed to mean. First remark: it seems as if you want to claim that in the pebble case the "carrier wave" consists in the initial stream of pebbles which do (of course) transmit energy from one room to the other. No pebbles thrown, no carrier wave. Now suppose we agree to our protocol, but Bad Guy arrives before the first five-minute mark. Then there is never a pebble thrown, never any energy transmitted from room to room, but still a signal is sent. Just as in the other cases, the associate comes to know when Bad Guy arrived (within the five-minute window). Second remark: You write: "This is not energyless information transfer, because the pebbles carry the information, and everything else is deduction based on previously received information". So here is an obvious point. Anything I can deduce from previously received information *can be so deduced as soon as I have that information*. Deduction is deduction: once the premises are established the conclusion follows. So if I can deduce that Bad Guy arrives in a certain five-minute window from previously received information, then I can deduce it *before* the five minutes has passed. But I can't. That would be some sort of pre-cognition. I can't draw the conclusion until five minutes has passed, so I don't have all the information I need until five minutes has passed. So some information becomes available only once five minutes has passed. And that information arrives accompanied by zero energy.

    Do not answer again unless you address these points. Do you really deny that the accomplice gets the information, or when he gets the information? Then make that plain, and leave the "carrier wave" talk out of it unless you can give some strict definition of what that means.

  191. Tim,

    "What is my response? The same as ever. The statistical independence postulated by Bell is between the *apparatus settings* and the *initial state of the particles*."

    Agreed.

    "Suppose you postulate that these are not statistically independent? Okay, why not: as in all such cases we would seek a mechanism. If you want a causal mechanism, it can't be that the initial state of the particles is the cause and the apparatus setting is the effect. Wy not? Because if we set the apparatus by the parity of the digits of pi, the apparatus setting cannot have the initial state of the particles as a determining factor: it is determined by the party of the digits of pi. The apparatus would be set that way no matter the particle state. So the casual arrow would have to go from the setting backwards in time to the particle creation."

    I disagree with you here. A correlation between the detector settings at the time of measurement and the initial state of the particles can have three possible explanations:

    1. The detector settings at the time of measurement determine the initial state of the particles (retrocausality).

    2. The initial state of the particles determines the detector settings.

    3. There is a common cause that makes the initial state of the particles and the detector settings correlated.

    It is this option 3 that I am arguing for and you did not address it.

    What is this common cause? We need to go back to:

    "the correlations that are established in a filed theory relate particle distribution and motion with local field components."

    The meaning of this is that the state of the source even before the particles are emitted and the state of the detectors even before the measurements are performed are correlated, not independent. By "state" I do not mean the macroscopic appearance of the source/detectors but the complete description in terms of positions/momenta of their internal particles (electrons and quarks).

    So, at the beginning of the experiment we start with the source and detectors in a correlated state. Now, we need to remember that the theory is deterministic. This correlated initial state will evolve into another, still correlated state. So, the state of the particles and the detector settings will be correlated.

    So, my argument is:

    Premise 1: The initial states (positions/momenta of electrons and quarks + the electric/magnetic fields' magnitudes/directions) of the source/detectors are not independent, due to the constraints imposed by the laws of electromagnetism.

    Premise 2: The initial state of the particles emitted by the source is determined by the initial state of the source.

    Premise 3: The states of the detectors at the time of measurement are determined by the initial states of the detectors.

    Premise 4: correlated states evolve into correlated states.

    If Premises 1, 2, 3, and 4 are true, it follows that the initial state of the particles and the detector settings at the time of measurement are not independent.

    I have justified Premise 1 in my earlier post (the example of a moving/stationary charge in a CA). Premises 2 and 3 follow from the fact that the theory is deterministic. You may disagree with Premise 4, but I'll wait to see your opinion.
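
    To make Premise 4 concrete, here is a toy Python sketch. It is only an illustration, and only of the extreme case in which the source and detector share initial data outright (the update rule and the seeds are invented):

    import hashlib

    def evolve(state):
        # An arbitrary deterministic update rule: hash the current state.
        digest = hashlib.sha256(state.to_bytes(8, "big")).digest()
        return int.from_bytes(digest[:8], "big")

    def run(seed, steps=1000):
        s, bits = seed, []
        for _ in range(steps):
            s = evolve(s)
            bits.append(s & 1)        # record one bit of the state per step
        return bits

    shared = 987654321                # initial data common to source and detector
    a = run(shared)                   # sequence on the detector side
    b = run(shared)                   # sequence on the source side
    c = run(123)                      # an independently prepared system

    agree = lambda x, y: sum(p == q for p, q in zip(x, y)) / len(x)
    print("shared initial data:", agree(a, b))   # 1.0: the correlation persists
    print("independent data:  ", agree(a, c))    # near 0.5: uncorrelated

    Deterministic evolution never erases the correlation between the first two sequences; whether realistic electromagnetic constraints actually produce such shared data is of course the point under dispute.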

    Andrei

  192. Sharp answers:

    "Imaginary Shannon channel" = one whose operation cannot be physically detected by any third party (as when an actual Shannon channel is not sending a physical signal). Admittedly a bit pejorative, which I do regret, but it seems such an apt description (analogous to imaginary friends) and so much easier to type that I have become addicted to it.

    How do I get local information as to when I am not getting phone calls = directly from the local environment via my senses, as could any third party who shared that local environment. No need for any transmission from a second party far removed from my local environment.

    How do I know what the lack of a call in my local environment signifies = by inference from previously-obtained information plus local information.

    Did Shannon believe in local information gathered with the senses rather than transmitted from afar? Did he believe that inferences (with some chance of being mistaken - e.g., there could be multiple reasons why a second party does not or cannot call me on the phone) could be drawn from such information? If he did, I don't see how he could object to my model.

    You have decided to model these simple interactions with non-physical transmissions, which of course happen instantaneously at the given time with no transmission lag. That may be an interesting academic exercise, but as long as a simple alternative model exists which does not require these arcana, you have no proof of energy-less transmission. I.e., a proved relationship must be not only sufficient, but necessary.

    Proposal: since we keep saying the same things over and over (nothing in the above could not be gleaned from my previous comments), should we not make a donation to the site, say $10, each time we do from now on, to offset our host's time and trouble moderating them? I will start it off with this post.

  193. You claim that no signal and no information arrives at the five minute mark. But it evidently *does* arrive exactly then: exactly then, and not before, the associate becomes apprised of the fact that Bad Guy has arrived!

    “Becomes apprised” is too vague. Again, each pebble striking the window at time t constitutes a signal that the bad guy’s arrival time must be subsequent to t - L/v where v is the speed of a pebble (signal) and L is the distance between houses. Notice that each signal propagates with the speed of the energy (pebble) that conveys it. Now, suppose at time t + T no new signal (pebble) arrives. This doesn’t tell us the earliest that the bad guy could have arrived, it just signifies that the bad guy’s arrival time was not later than t + T – L/v. So, at the time t + T the neighbor “becomes apprised” that the bad guy arrived at some time between t – L/v and t + T – L/v. You see, the uncertainty band is bounded by the last received pebble (energy), and the size of the band (T) is just the agreed inter-pebble period that was communicated (also by energetic signals) in advance. So there is no energyless transfer of information.
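
    To make the arithmetic above explicit, a minimal Python sketch with invented numbers:

    L = 10.0     # distance between houses, meters
    v = 5.0      # pebble speed, m/s
    T = 300.0    # agreed inter-pebble period, seconds
    t = 1000.0   # time the last pebble struck the window

    # A pebble heard at t was thrown at t - L/v, so the bad guy had not
    # arrived by then. Silence at t + T means no pebble was thrown at
    # t + T - L/v, so the bad guy had arrived by then.
    earliest = t - L / v
    latest = t + T - L / v
    print("arrival window: (%.0f s, %.0f s], width %.0f s"
          % (earliest, latest, latest - earliest))

    The width of the band is exactly T, the pre-agreed period, and both of its edges are anchored to energetic signals.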

    At exactly that time he gains information of a very specific form that he previously did not have. Just say directly: do you want to deny this? If so, how?

    “Gains information” is too vague. At each instant there are infinitely many signals that we do not receive, and hence, if you believe that the lack of a signal constitutes information, an isolated system continuously “gains an infinite amount of information” at each instant. But these are really just deductions that a system can make based on actually received information. ("There is nothing in the mind that was not in the senses.")

    As an example, suppose you tell a friend on Jan 5 that you will be flying to Hawaii on Jan 31 unless your plans change. On Jan 31, having not heard that your plans have changed, your friend “becomes apprised” of your departure for Hawaii. But this doesn’t mean that your friend received this information via a signal from you on Jan 31. At any time between Jan 5 and Jan 31 you might have sent them another signal to say your plans have changed, and at any time during that interval (even if they have not heard of a change of plans), there is still a chance that your plans may yet change, so it isn’t until Jan 31 that they can conclude that indeed you are departing on Jan 31. But this is what you told them on Jan 5, and it remains their expectation until/unless new information arrives. So, even though they cannot conclude that you are departing on Jan 31 until Jan 31, this does not mean that they are receiving new information on Jan 31. To the contrary, it means they have NOT received new information.

    Just answer: Does he get new information or not?

    “Get” is too vague. To refer to the lack of new information as “new information” is obviously fallacious. This relates to the well-known fallacies associated with treating shadows as if they are signals. For example, a shadow can propagate superluminally, but a signal cannot.

    You write: "In summary, your “bad guy” scenario just consists of a trailing edge signal on a very low frequency pulse carrier wave." Now whatever a "trailing edge signal" is supposed to be (I have asked that you stop using this obscure vocabulary), whatever it is supposed to be, it is a *signal*.

    Yes, but it’s the trailing edge of an energetic signal with very low resolution. You have confused yourself by not recognizing that the signal and information align with the trailing edge of an energetic signal, and the other end of the uncertainty band is part of the communication protocol. See above (and each of the previous messages!) for the detailed explanation of this.

    By the way, the terms “carrier wave”, "signal wave", "trailing edge signal", "step function", etc., are not obscure in the field of signal analysis.

  194. Andrei,

    Let me repeat one more time: correlations are only defined between *sequences*, not between individual states. Put it this way: think of the Game of Life CA. If you just pick two cells at some time and ask "are they correlated?", you have not even asked a question with any content. At any given time either they are in the same state (both on or both off) or in different states (one on and one off). Neither of these possibilities is more "correlated" than the other. But if we track the state of the two cells over some period of time and get a sequence for each—off, on, on, on, off, off, on, etc. for one and another sequence for the other—then we can sensibly ask whether the sequences display any correlations. That is a well-defined mathematical question. It can be asked of any pair of sequences, irrespective of the causal structure of the CA.

    Furthermore, we can ask the same question about more generic properties. Take a huge collection of cells, and ask whether the total number of cells on in that collection is even or odd. Call this the parity of the collection. As time goes on, there will be a sequence of parities: +, −, −, +, +, +, ... etc. You can ask whether that sequence is correlated with the sequence of on/off in some cell, or with the sequence of parities of some other collection of cells.
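
    For concreteness, here is that well-defined mathematical question as a minimal Python sketch (the two histories are invented):

    def correlation(xs, ys):
        # Pearson correlation of two equal-length sequences.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
        vx = sum((x - mx) ** 2 for x in xs) / n
        vy = sum((y - my) ** 2 for y in ys) / n
        return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

    cell_a = [0, 1, 1, 1, 0, 0, 1, 0, 1, 1]   # on/off history of one cell
    cell_b = [1, 0, 1, 0, 1, 1, 0, 0, 1, 0]   # history of a distant cell
    print(correlation(cell_a, cell_b))

    The same function applies, unchanged, to sequences of group parities; a single time-slice gives it nothing to work with.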

    Now I claim that in the Game of Life, as long as there are a lot of cells active and no large-scale exact symmetries, both the states of individual cells that are far apart and the parities of groups of cells that are far apart will almost always be statistically independent of each other. And if they are not—if correlations persist—then there must be some structure that accounts for that. Since the Game of Life is not retrocausal, and there is no instantaneous action, the only explanations are common cause or pre-established harmony. The latter is also what we have called hyperfine-tuning: the initial states are restricted, without further explanation, to a very special set of states. You are rejecting that. So all that is left is, as you say, common cause.

    Now to appeal to a common cause explanation here, it is not enough just that the two sequences have *some* common causes in the overlap of the past light cone. Those common causes must, all by themselves, *determine the states that display the correlations*, e.g. determine the state of the individual cell or the parity of the group of cells. And that almost never happens. And then add on top of all that the example I have gone back to over and over: setting the apparatus by the parity of the digits of pi. In that case, the "common cause" can do no work at all: the sequence of apparatus settings is determined by the parity of the digits of pi, and nothing in the past light-cone of the computer can influence that at all. So there can be no common cause.
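
    The pi example can be made completely concrete (the first 40 digits are hardcoded, and the setting labels "A"/"B" are arbitrary):

    PI_DIGITS = "3141592653589793238462643383279502884197"

    # Apparatus setting for each run, read off the parity of a digit of pi.
    settings = ["A" if int(d) % 2 == 0 else "B" for d in PI_DIGITS]
    print("".join(settings))

    That sequence of settings is fixed by arithmetic alone, whatever the state of the particles.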

    I think you are getting stuck on two points. One is that you are thinking about individual pairs of states rather than sequences of states, and the other is that your simple example of the single charged particle is too simple. There may be correlations in distant sequences in such a simple world exactly because there is a common cause that *alone determines* the states of the distant cells. But in a complex world this just isn't the case, and your example does not hold up.

    It is not so much that I reject Premise 4 as that it is ill-formed. Individual states are not the kinds of things that can even be correlated. Sequences are (or aren't). So Premise 4 doesn't even state a coherent proposition.

  195. JimV-

    So there is something new in this post, or at least something not obvious in the others, that reveals one of your confusions. Maybe this will help.

    You write: "You have decided to model these simple interactions with non-physical transmissions, which of course happen instantaneously at the given time with no transmission lag." No, of course I have not. If you were paying attention, I explicitly said that the transmission time has to be taken into account. If the pebbles are supposed to be thrown at exact 5 minute intervals, on the minute: 12:00, 12:05, 12:10 etc. and the pebble takes one second to get from the window where it is thrown to the window across the street, then the signal always arrives at 12:00:01, 12:05:01, 12:10.01, etc. This is obvious in the cases where the pebble is actually thrown, because that is when the sound of it hitting window is heard. And it is equally obvious in that case where the pebble is not thrown: it is only the lack of sound at 12:00:01 that carries the information that Bad Guy has arrived. The lack of sound at 12:00:00 is informationless, because that is the state that would obtain whether or not Bad Guy has arrived. This is all trivial.

    Whether A "third party" can detect the "operation" of a channel is, to say the least, a completely irrelevant as well as unclear question. What does "operation" mean? What does the third party have access to? How much about the protocols does the third party know? (If the third party knows the protocols in my case, then the third part knows that the channel is transmitting information between 12:00 and 12:00:01 (when exactly in that window depends on which part of the channel is at issue) no matter what its physical state is, and so the third party does not need to do anything to conclude that the channel is "operating". If you are thinking that by definition a third party must be able to *monitor* a channel, then you are really out of luck. That is what secure quantum communication is all about: information channels that cannot, from fundamental principles, be eavesdropped on. But that quantum communication is still communication, and information and signals can be sent.

    You seem not to be even familiar with Shannon's theory and definitions. Maybe you should go learn it before posting again. I don't want to have to write a tutorial on the basics of information theory to carry on this conversation. And that will give Sabine a break.

  196. Dan

    You are grasping at straws, but if you insist I will remove them from your fingers.

    "gets", "becomes apprized" and "gains information" at time t all mean precisely this: at times less than t the receiver does not have the information and at time greater than t the receiver does have the information.

    The signals in my case contain one bit of information: whether or not Bad Guy has arrived within a given 5-minute window. You worked all that out. When the pebble arrives, receiver obviously does not have that information before the pebble hits the window and obviously does have it after. When no pebble is thrown, receiver obviously does not have the information before the time the pebble would have arrived if thrown according to protocol, and obviously does have the information after that time. In each case the information arrives—receiver "gets", "becomes apprised" and "gains" the information—at a precise time, namely the scheduled throw time for that window plus the transit time the pebble would have had.

    If no pebble is thrown, the information arrives, and receiver gets it, becomes apprised of it, and gains it (which are all synonyms) at that exact time *without the transmission of one erg of energy* from transmitter. Period. End of story.

    Signal analysis is not the relevant theory here: Shannon information theory is. If you can translate the plain truths I have just written into your vocabulary, feel free to. If you can't, so much the worse for the concepts of "signal analysis" theory. The truths remain true, and obviously true.

    It is your example that is too vague. Is it part of the protocol that your friend will call if his plans change? If so, you only come to know that he is in fact going on Jan. 31 when Jan. 31 arrives, despite what he told you on Jan. 5. Obviously. Because on Jan. 5 he did not tell you he was going on Jan. 31; he told you a conditional. You don't get the information that the antecedent of the conditional is true until Jan. 31. If it is not part of the protocol that your friend will call you if his plans change, then you never receive the information that he has gone on Jan. 31. Ever. You just don't know whether he has gone or not. Period.

    TM, the given time at which your transmissions arrive is arbitrary, defined by previous agreement, which seems a hint that they are not physical, and could, if so arbitrarily defined, arrive instantaneously. But I did not read thoroughly the comments of yours that were not addressed to me, and I did miss that; an error on my part, for which I am sorry. You may strike that from my previous comment.

    Radio signals can be intercepted, phone lines can be tapped. The transmission may not be decrypted and understood, but the fact that it exists can be detected with some non-zero probability. Communications from ghosts cannot be detected by physical means. Your claim that quantum mechanics can be used to send signals which no one except the designated receiver could possibly detect will give great comfort to spiritualists everywhere, but I suspect it is a bit over-hyped. It is an interesting claim which does not appear in a few minutes googling for "undetectable quantum signals" or "secure quantum communication", but suppose you are correct: I amend my statement to, "a classical communication channel whose operation cannot be physically detected by any third party." (You are not, I take it, claiming that your imaginary Shannon channels use quantum technology?)

    I don't go into Shannon Information Theory for the same reason I don't go into Sturm-Liouville Eigenfunction Theory: they are irrelevant to my argument, and red herrings.

    An agreement has been made that no transmission will be sent by time t under condition A. Local knowledge (not the receipt of any signal) tells me no transmission has been sent. I conclude that either condition A holds, or that the other party has lost the capability of sending a transmission for reasons unknown, or has not transmitted for some other reason. I tentatively decide to behave as if condition A holds. You like to claim that no transmission is a transmission. I see no reason to add this complication to a simple model. Whether or not Shannon theory can be stretched to include non-transmission transmissions has no relevance to whether such contortions are necessary to understand a simple situation. The example which counters any supposed proof that Shannon-stretching is necessary is that I have demonstrated, above, my understanding of this simple situation without having to invoke any Shannon theory.

    (And another 10 Euros goes to this site. I don't count quibbling over definition terminology as any significant new addition.)

  198. Let me try again to make this clear, without Fourier transforms, etc. (although I urge you to google “fourier energy density signal” and read the first search result). First, suppose we agree to leave the porch light ON until the bad guy arrives, when we will immediately turn the porch light OFF. So the signal is a step function like this:

    (fixed width font, please)
    ________________
                    |_______________________

    Is this an energyless signal? You see, the signal consists of the edge, i.e., the transition from ON to OFF. We are modulating the energy emitted by the porch light. Someone might say this is an energyless signal, because the signal consists of turning the energy OFF, but obviously this is just a very simple modulation of the energy, so it is not an energyless signal.

    Now consider the same step function, but instead of modulating a constant continuous energy carrier, it modulates a discrete toothed carrier:
     ___     ___     ___     ___
    |   |___|   |___|   |___|   |__________________________

    For this situation you claim that the signal (transition from ON to OFF) is actually energyless, because the receiver can’t resolve the edge between ON and OFF more precisely than the period of the carrier. You say the awareness of the stopping of the pulses doesn’t actually “arrive” until some time AFTER the last pulse, and therefore (you think) we have achieved energyless signaling… with just a simple porch light!

    But that reasoning is invalid. Even in the first scenario, with the “continuous” carrier, the physical signal is quantized and discrete, and hence there is some non-zero time required to recognize that the modulation has occurred; i.e., we can never resolve information in a signal at a higher frequency than that of the carrier wave, which is always finite.

    Is it possible to transmit such a crude signal that the receiver needs a long time to be sure the modulation has taken place, i.e., to distinguish modulation from the baseline fluctuations of the carrier? Of course. Do we thereby achieve energyless information transfer? Of course not. The information impinging on us flows with the energy, and the rest is internal extrapolation and deduction on the part of the receiver.
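
    Here is the resolution point as a minimal Python sketch, with invented numbers:

    P = 5.0                            # carrier period
    pulses = [0.0, 5.0, 10.0, 15.0]    # times the carrier actually pulsed
    t_off = 17.0                       # true time the shut-off occurred

    last_pulse = max(p for p in pulses if p < t_off)
    decided_at = last_pulse + P        # earliest moment silence is conclusive
    print("edge localized to (%.0f, %.0f]" % (last_pulse, last_pulse + P))
    print("decision available at t = %.0f" % decided_at)

    The edge can never be resolved more finely than one carrier period; a denser carrier, i.e. more energy flow, buys a sharper edge.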

