Sunday, July 28, 2019

The Forgotten Solution: Superdeterminism

Welcome to the renaissance of quantum mechanics. It took more than a hundred years, but physicists finally woke up, looked quantum mechanics in the face – and realized with bewilderment that they barely know the theory they’ve been married to for so long. Gone are the days of “shut up and calculate”; the foundations of quantum mechanics are en vogue again.

It is not a spontaneous acknowledgement of philosophy that sparked physicists’ rediscovered desire; their sudden search for meaning is driven by technological advances.

With quantum cryptography a reality and quantum computing on the horizon, questions once believed ephemeral are now the bread and butter of researchers. When I was a student, my professor thought it questionable that violations of Bell’s inequality would ever be demonstrated convincingly. Today you can take that as a given. We have also seen delayed-choice experiments, marveled over quantum teleportation, witnessed decoherence in action, tracked individual quantum jumps, and cheered when Zeilinger entangled photons over hundreds of kilometers. Well, some of us, anyway.

But while physicists know how to use the mathematics of quantum mechanics to make stunningly accurate predictions, just what this math is about has remained unclear. This is why physicists currently have several “interpretations” of quantum mechanics.

I find the term “interpretations” somewhat unfortunate. That’s because some ideas that pass as “interpretations” are really theories which differ from quantum mechanics, and these differences may one day become observable. Collapse models, for example, explicitly add a process for wave-function collapse to quantum measurement. Pilot wave theories, likewise, can result in deviations from quantum mechanics in certain circumstances, though those have not been observed. At least not yet.

A phenomenologist myself, I am agnostic about different interpretations of what is indeed the same math, such as QBism vs Copenhagen or the Many Worlds. But I agree with the philosopher Tim Maudlin that the measurement problem in quantum mechanics is a real problem – a problem of inconsistency – and requires a solution.

And how to solve it? Collapse models solve the measurement problem, but they are hard to combine with quantum field theory which for me is a deal-breaker. Pilot wave theories also solve it, but they are non-local, which makes my hair stand up for much the same reason. This is why I think all these approaches are on the wrong track and instead side with superdeterminism.

But before I tell you what’s super about superdeterminism, I have to briefly explain the all-important theorem from John Stewart Bell. It says, in a nutshell, that correlations between certain observables are bounded in every theory which fulfills certain assumptions. These assumptions are what you would expect of a deterministic, non-quantum theory – statistical locality and statistical independence (together often referred to as “Bell locality”) – and should, most importantly, be fulfilled by any classical theory that attempts to explain quantum behavior by adding “hidden variables” to particles.

Experiments show that the bound of Bell’s theorem can be violated. This means the correct theory must violate at least one of the theorem’s assumptions. Quantum mechanics is indeterministic and violates statistical locality. (Which, I should warn you, has little to do with what particle physicists usually mean by “locality.”) A deterministic theory that doesn’t fulfill the other assumption, that of statistical independence, is called superdeterministic. Note that this leaves open whether or not a superdeterministic theory is statistically local.
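To put a number on this: in the CHSH form of Bell’s theorem, the bound for theories fulfilling both assumptions is |S| ≤ 2, while the quantum singlet correlation E(a,b) = −cos(a−b) reaches 2√2, Tsirelson’s bound. A minimal numerical check (the angles are the standard CHSH choices):

```python
import math

# Quantum prediction for the spin-singlet correlation: E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Under Bell's assumptions |S| <= 2; quantum mechanics reaches 2*sqrt(2)
# (Tsirelson's bound) at the standard angle choices below.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)  # S ≈ -2.828, so |S| = 2*sqrt(2) > 2
```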

Unfortunately, superdeterminism has a bad reputation, so bad that most students never get to hear of it. If mentioned at all, it is commonly dismissed as a “conspiracy theory.” Several philosophers have declared superdeterminism means abandoning scientific methodology entirely. To see where this objection comes from – and why it’s wrong – we have to unwrap this idea of statistical independence.

Statistical independence enters Bell’s theorem in two ways. One is that the detectors’ settings are independent of each other, the other one that the settings are independent of the state you want to measure. If you don’t have statistical independence, you are sacrificing the experimentalist’s freedom to choose what to measure. And if you do that, you can come up with deterministic hidden variable explanations that result in the same measurement outcomes as quantum mechanics.
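As a deliberately contrived toy illustration (my own sketch, not a model from the literature): once the distribution of the hidden variables is allowed to depend on the detector settings, outcomes that are fully deterministic functions of those hidden variables reproduce the quantum correlation E(a,b) = −cos(a−b).

```python
import math
import random

# Toy "superdeterministic" model: outcomes A and B are deterministic
# functions of a hidden variable lambda = (l1, l2), but the distribution
# of l2 depends on both detector settings, i.e. statistical independence
# is violated. Illustrative construction only.
def correlation(a, b, trials=200_000, seed=1):
    rng = random.Random(seed)
    p_anti = math.cos((a - b) / 2) ** 2  # settings leak into lambda's distribution
    total = 0
    for _ in range(trials):
        l1 = 1 if rng.random() < 0.5 else -1     # fixes Alice's outcome
        l2 = -1 if rng.random() < p_anti else 1  # fixes Bob's outcome relative to l1
        A, B = l1, l1 * l2                       # fully deterministic in lambda
        total += A * B
    return total / trials

print(correlation(0.0, math.pi / 3))  # ≈ -0.5 = -cos(pi/3), the quantum prediction
```

One can check by hand that E[AB] = (−1)·cos²((a−b)/2) + (+1)·sin²((a−b)/2) = −cos(a−b). The point is not that this construction is physically plausible – it isn’t – but that once statistical independence is dropped, Bell’s bound no longer constrains deterministic models.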

I find superdeterminism interesting because the most obvious class of hidden variables are the degrees of freedom of the detector. And the detector isn’t statistically independent of itself, so any such theory necessarily violates statistical independence. It is also, in a trivial sense, non-linear, just because a detector that depends on a superposition of prepared states isn’t the same as a superposition of two measurements. Since any solution of the measurement problem requires a non-linear time evolution, that seems a good opportunity to make progress.

Now, a lot of people discard superdeterminism simply because they prefer to believe in free will, which is where I think the biggest resistance to superdeterminism comes from. Bad enough that such a belief isn’t a scientific argument, but worse, it misunderstands just what is going on. It’s not like superdeterminism somehow prevents an experimentalist from turning a knob. Rather, it’s that the detectors’ states aren’t independent of the system one tries to measure. There just isn’t any state the experimentalist could twiddle their knob to which would prevent a correlation.

Where do these correlations ultimately come from? Well, they come from where everything ultimately comes from, that is from the initial state of the universe. And that’s where most people walk off: They think that you need to precisely choose the initial conditions of the universe to arrange quanta in Anton Zeilinger’s brain just so that he’ll end up turning a knob left rather than right. Besides sounding entirely nuts, it’s also a useless idea, because how the hell would you ever calculate anything with it? And if it’s both unfalsifiable and useless, then indeed it isn’t science. So, frowning at superdeterminism is not entirely unjustified.

But that would be jumping to conclusions. How much detail you need to know about the initial state to make predictions depends on your model. And without writing down a model, there is really no way to tell whether it does or doesn’t live up to scientific methodology. It’s here where the trouble begins.

While philosophers on occasion discuss superdeterminism on a conceptual basis, there is little to no work on actual models. Besides me and my postdoc, I count Gerard ‘t Hooft and Tim Palmer. The former gentleman, however, seems to dislike quantum mechanics and would rather have a classical hidden variables theory, and the latter wants to discretize state space. I don’t see the point in either. I’ll be happy if the result solves the measurement problem and is still local the same way that quantum field theories are local, i.e., as non-local as quantum mechanics always is.*

The stakes are high, for if quantum mechanics is not a fundamental theory, but can be derived from an underlying deterministic theory, this opens the door to new applications. That’s why I remain perplexed that what I think is the obvious route to progress is one most physicists have never even heard of. Maybe it’s just a reality they don’t want to wake up to.


Recommended reading:
  • The significance of measurement independence for Bell inequalities and locality
    Michael J. W. Hall
    arXiv:1511.00729
  • Bell's Theorem: Two Neglected Solutions
    Louis Vervoort
    Found. Phys. 43, 769–791 (2013), arXiv:1203.6587

* Rewrote this paragraph to better summarize Palmer’s approach.

285 comments:

  1. Well, I do hold the view that science is about finding a (concise enough) description of our observations, and I am not sure why you think I view science otherwise. But in order to know whether the description we come up with is the right one, we need to test it against results of experiments performed in different situations/setups. It is in this part that the assumption of freedom of choice is implicitly made.

    ReplyDelete
  2. Please can you ignore my latest comment.

    ReplyDelete
  3. "But in order to know whether the description we come up with is the right one, we need to test it against results of experiments performed in different situations/setups. It is in this part that the assumption of freedom of choice is implicitly made."

    No, it is not. You do not need a freedom of choice to have different setups.

    ReplyDelete
  4. Sabine,
    If some sort of hidden variables produce what's perceived as randomness, but that perceived randomness actually has correlations that we haven't recognized, then those correlations, if they're strong enough, could potentially mess up quantum computing.

    Also, there's an ongoing experiment in which "randomness" data from organic quantum processes around the world is being monitored to see if any underlying correlations can be found. That one's not exactly geared towards looking for evidence of superdeterminism. It's being done by some "noetic" society in Princeton that's hoping to find that human consciousness causes wave functions to collapse in a correlated way. Still, if correlations are in fact found, they of course wouldn't have to be attributed to human consciousness, and the results could be useful.

    If hidden variables or some other type of superdeterminism perfectly reproduce existing quantum mechanics and "perceived randomness" is indistinguishable from "true randomness" then I don't see how superdeterminism could help, other than being a possible interpretation of many to choose from.

    ReplyDelete
  5. My question about superdeterminism is this. If superdeterminism is true then everything is more-or-less correlated to everything else (in some sense). So why do we see *only* quantum correlations?? Shouldn't we see more-than-quantum correlations sometimes, or, indeed, almost all of the time?

    ReplyDelete
    Replies
    1. Yes, Pmer, you are correct. If superdeterminism is true then everything is correlated to everything else by the quantum correlations.

      Delete
  6. Lawrence Crowell,

    Indeed, Bell's theorem uses classical probabilities, but this does not mean that classical theories cannot violate the inequality. A trivial example is as follows:

    A radio station broadcasts both the hidden variable and the detector settings. Once the source receives the signal it sets the hidden variable accordingly. The detectors do the same. I agree this is a highly contrived example, but it is proof that in a fully classical context it is possible to reproduce the predictions of QM with regard to Bell tests.

    Stochastic electrodynamics claims that what is known as "quantum fluctuations" from QED is in fact a real EM field with a Lorentz invariant spectrum. There is nothing stochastic about it; it is a proper solution of Maxwell's equations. Assuming that this field is the cause of the Casimir force, one can determine the value of h-bar. The interpretation of h-bar is that the expression h-bar*omega/2 represents the energy per normal mode. This "zero-point field" is not stochastic, it evolves deterministically, but it is assumed to be random to simplify the calculations.
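    For scale, the ideal-parallel-plate Casimir pressure is P = pi^2 * hbar * c / (240 * d^4), the textbook formula that ties hbar to a measurable force; a quick, illustrative check (the 1 micrometer separation is just an example choice):

```python
import math

# Ideal parallel-plate Casimir pressure, P = pi^2 * hbar * c / (240 * d^4).
# Illustrative order-of-magnitude check; the plate separation d is an
# arbitrary example value.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
d = 1e-6                # plate separation, m (1 micrometer)
P = math.pi ** 2 * hbar * c / (240 * d ** 4)
print(P)  # ≈ 1.3e-3 Pa: a tiny but measurable attraction
```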

    "Ontology seems to reflect mathematics of real valued quantities, such as expectations of Hermitean operators. Yet epistemic interpretations leave a gap between the quantum and classical worlds."

    It seems to me that stochastic electrodynamics does not leave any gap. Sure, it is not yet a proper interpretation as there is no proof that it does fully reproduce QM's formalism.

    On the other hand I think it is really difficult to accept that Psi is "real". Reality for us is represented by objects or events that happen in spacetime. Psi cannot exist in spacetime.

    ReplyDelete
  7. Sabine,

    "I’ll be happy if the result solves the measurement problem and is still local the same way that quantum field theories are local, i.e., as non-local as quantum mechanics always is."

    Could you please clarify what you mean by this? It seems a kind of Moebius strip construct where one side of the issue (locality) flows seamlessly into its opposite (non-locality). Am I missing something? Thanks.

    ReplyDelete
    Replies
    1. bud rap,

      That sentence is to say that there are two different notions of "locality" which matter here. The one is the usual notion of locality which particle physicists use. This is the one that I think we should try to maintain, simply because it works very well.

      The other notion of "locality" is the one used by Bell which (as many people have pointed out) really shouldn't be called "locality". But in any case, quantum mechanics is Bell "non-local" in that particular sense. I am saying that I do not see any reason to avoid that, as long as the theory remains local in the former sense.

      Delete
  8. @dtvmcdonald: The KAM (Kolmogorov-Arnold-Moser) theorem is a purely classical result. For an integrable oscillator with n degrees of freedom the dynamics lies on an n-dimensional invariant torus inside the (2n − 1)-dimensional energy surface. So a simple oscillator will execute motion on a circle in the space of momentum and position. For a coupled oscillator there are 2 DoFs and the phase space has 4 dimensions, with 2 position variables and 2 momentum variables, and the energy constraint puts this in 3 dimensions. Now the dynamics for linear motion, say small oscillations, has a rational winding of a path on that torus. As the dynamics becomes more complicated that winding approaches an irrational winding. This irrational winding of the dynamics on the torus is the limit of regular dynamics. The KAM theorem gives physical limits on this, where the torus can be “punctured” so that dynamics is no longer regular. Often this is seen with a Poincaré section, a 2-dimensional plane that records where a particle passes through it, as so-called Cantor dust. The regular dynamics of different particles with different initial conditions appears as nested sets of dots, which get complicated as they approach the so-called KAM surface. The chaos then appears as regions with randomly placed dots.

    BTW, algorithms for a chaotic dynamical system make good random number generators. Coding the standard map for instance can serve as a good random number generator.
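    As a minimal sketch of that suggestion (the kick strength K = 10 and the seed values are illustrative choices, and this is nowhere near a cryptographic-quality generator):

```python
import math

# Crude pseudo-random stream from the Chirikov standard map. K = 10 puts
# the map well into its chaotic regime; the initial (x, p) act as the seed.
def standard_map_stream(x=0.3, p=0.7, K=10.0):
    two_pi = 2 * math.pi
    while True:
        p = (p + K * math.sin(x)) % two_pi
        x = (x + p) % two_pi
        yield x / two_pi  # map the angle to a "uniform" draw in [0, 1)

gen = standard_map_stream()
draws = [next(gen) for _ in range(10_000)]
print(sum(draws) / len(draws))  # roughly 0.5 if the output looks uniform
```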

    Where does quantum mechanics fit into this? QM throws spanners into these works. The quantum version of motion on a torus has a Bohr-Sommerfeld quantization around the various dimensions. This means that irrational winding on the torus is impossible, for the quantum wave would have complete destructive interference and not be able to exist on this. We could suppose there is some sort of tunnelling where the winding approaches an irrational winding and then transitions into chaos. However, chaos is odd with QM. Quantum chaos is a subject of interest, and often it is manifested as random phases with so called scarring or the breakdown of degeneracies into what are called avoided crossings.

    Henri Poincaré worked on separatrices between dynamics with different initial conditions. Perturbations would deform these into wildly oscillating functions that converge with high-frequency dynamics to a cusp where two separatrices meet. Poincaré was in effect working with chaotic dynamics, but the subject was at best in gestation at the time. If you have a particle with some quantum properties, say the de Broglie wavelength λ = h/p, then even if that wavelength is very small for some sufficiently classical particle, there will be some limit where Poincaré's “mad separatrices” can't wind up further. This is because some nearly closed loop “filigree” in the dynamics will have ∫pdq ≈ ∮pdq = ħ, where the first integral is a line integral around some non-closed but nearly closed loop and the second integral approximates this with a closed loop. If the open loop approaches itself by a distance smaller than the wavelength this is not a bad approximation. The problem is you can't get the particle to orbit on a smaller loop! Or at least you can't get it to orbit on smaller loops with any meaning to the momentum of the particle. This is a cut-off that is similar to why a quantum system can't irrationally wind on the torus.

    So this means that chaos theory makes the emergence of classical physics from QM more focused in some ways, but very difficult. This may be connected to quantum measurement or einselection. The coupling of a quantum system to some large N system as the apparatus means there is some shift of the quantum phase which might have some of the fractal dynamics similar to chaos theory.

    I would recommend Arnold's book on the math of classical mech and Lichtenberg-Lieberman's book on chaos and bifurcation of vector fields.

    ReplyDelete
  10. If an underlying deterministic theory is based on, for instance, the premise that entropy can be zero, and that turns out to be a wrong premise? If quantum mechanics is wrong, why not also the theory it is derived from?

    ReplyDelete
  11. Ms. Sabine, has anyone spoken of recording going on at the quantum level, and the involvement of memory at the quantum level?

    ReplyDelete
  12. An interesting discussion, if you like this sort of thing, of metaphysical and philosophical ruminations arising from the incomplete nature of the mathematical formalisms of quantum mechanics. By incomplete, I mean that the formalisms accurately produce only statistical outcomes of aggregated quantum experiments, but they fail to predict outcomes for single experiments. It is fair then to say that quantum mechanics must not capture the underlying processes that produce its statistical outcomes. It is further reasonable to assume that this inability to accurately predict a single event outcome is directly related to the fact that we are unable to make direct observations of events transpiring on the quantum scale, and therefore there is no known qualitative model from which to construct a mathematical model.

    Most of the discussion about quantum interpretations amounts to a fruitless attempt to extrapolate a qualitative process-model from the formalisms that contain no information about the processes involved. Essentially, science is ignorant of the nature of quantum processes. This vacuum of scientific knowledge has, unsurprisingly, opened the floodgates to a deluge of mathematical, metaphysical, and philosophical speculations, none of which have any empirical basis, for the obvious reason that empirical observations of quantum scale events do not exist.

    Even more troubling than all the unscientific speculations that have come to be considered 'scientific' interpretations of quantum theory, is the widespread contention that reality is fundamentally quantum in nature. This view rests on the belief that the quantum scale itself is fundamental in some way to physical reality. This reductionist viewpoint, one that is certainly arguable, is also, almost certainly wrong. It is the type of over-simplification of physical reality that produces nothing of scientific significance beyond the spectacle of “scientists” baring their metaphysical souls for fun and profit.

    This oversimplification (quantum-is-fundamental) has resulted in efforts such as the search for a quantum model of gravity. There is nothing about the observed gravitational effect that necessitates such a model; it is motivated only by a desire to see the world as fundamentally quantum. The world is not fundamentally quantum and it is not fundamentally classical; it is both. Physical reality exhibits both quantum behavior (on the quantum scale) and classical behavior (on the classical scale). One isn’t more fundamental than the other.

    Certainly, there is an open question as to where the transition between the two scale-effects occurs, but this is a question for experimentalists to resolve, not theorists. The line will almost certainly be fuzzy and dependent on the exact nature of the experimental setup, but some reasonable sense of the distinction between the scale behaviors will almost certainly hinge on the size of the masses being experimented on and the energies involved in detection.

    Quantum behavior arises when an object is of low enough mass that it becomes subject to disturbance by the ambient electromagnetic radiation and transient high-energy particles pervading the cosmos, as well as the radiation inherent in an experimental setup. The gravitational effect arises when there is sufficient mass to alter the path of nearby electromagnetic radiation. Chasing after quantum effects on the classical scale or classical effects on the quantum scale is unlikely to be productive; the effects should be thought of as scale dependent – because that is what we observe them to be.
    ...

    ReplyDelete
  13. ...
    Declaring some entity, event, or scale to be a “fundamental” property of a cosmos which, inclusive of all observable scales, is enormous both in scale and complexity, is like staring at the workings of a mechanical clock and declaring one of the many gears to be “fundamental”. In this context, “fundamental” is a reductionist concept that has done almost as much harm to modern science as mathematicism.

    What makes the quantum-as-fundamental claim nonsensical is the fact, noted above, that we know nothing about the quantum processes that produce the experimental results. The mathematical formalisms of QM can only produce the probabilities for those results; they do not describe the physical processes that produce them. The wavefunction is only math; it has no physical correlate. In taking off on our flights of interpretational fancy, we have only fetishized our ignorance and declared it to be fundamental to understanding the nature of physical reality.

    ReplyDelete
  14. What inconsistency does the measurement problem introduce? As far as I know every experiment that we've done is consistent with quantum mechanics, and every thought experiment we can think of has a consistent result predicted by quantum mechanics (with the exception of the black hole entropy paradox, which we assume is a quantum gravity issue).

    My bet is that superdeterminism is wrong simply because it tries to solve a problem that does not exist.

    ReplyDelete
    Replies
    1. Udi,

      The measurement problem does not introduce an inconsistency; the measurement problem is the fact that quantum mechanics is inconsistent. The inconsistency is one with reductionism. The measurement, either way you turn it, introduces macroscopic concepts, such as "measurement" or (if you prefer Psi-epistemic approaches) "knowledge" held by "observers". If the theory were a consistent fundamental description of nature, these macroscopic concepts should follow from the theory, which they do not and indeed cannot - that being the inconsistency.

      You may want to argue that maybe rather than improving the measurement prescription we should be giving up reductionism. That too would be a solution, in principle, but I am not aware of anyone pursuing such an approach. (Well not seriously anyway.)

      Delete
    2. Sabine,

      As long as the measurement device is a simple microscopic system, there is no problem with measurement. If we try to scale it up to cats, we don't know how to describe the wave function of the cats, but conceptually, it is the same, just with a bit more decoherence. I don't see any issue with reductionism here.

      People have been trying to find inconsistencies in quantum mechanics for almost a hundred years now. So far, quantum mechanics prevailed.

      Delete
    3. Udi,

      "As long as the measurement device is a simple microscopic system, there is no problem with measurement."

      Tell me how the Schroedinger equation gives you a detector eigenstate.

      Delete
    4. Sabine,

      If your detector is a Hermitian operator, each eigenstate has its own eigenvalue. It is a perfect measurement device.

      I agree that if your interpretation of quantum mechanics includes a wave function collapse, then such a mechanism is missing. But this is a limitation of your interpretation, it is not a limitation of quantum mechanics.

      Delete
    5. Udi,

      I have no idea what you think an interpretation changes about the fact that the Schroedinger equation does not (generically) give you a detector eigenstate even though this is what we measure. The collapse postulate is arguably non-local and discontinuous which makes it incompatible with GR. If you want to go for a psi-epistemic interpretation, you'll have to tell me what the wave-function is knowledge about, what you mean by "knowledge", who or what has that "knowledge" and how they get it, and how you make such a non-reductionist explanation compatible with everything else we know about particle physics, which looks pretty damn much reducible. Either way you turn it, neither Copenhagen nor any other interpretation of quantum mechanics resolves these problems. Collapse models do, but these have other problems.

      Delete
    6. You just published a blog post about quantum measurement, so I will reply there.

      Delete
  15. "No experimental basis" means that none of the very many experiments leads to the conclusion that determinism is real. Here, "determinism" is defined as "the philosophical belief that all events are determined completely by previously existing causes".

    Experimentally, determinism invariably fails sooner rather than later... in weather forecasting, billiard-type games, chaos experiments, QM, astronomy (space exploration), etc. etc. Traditionally, all these failures were blamed on our [always] limited technology. But why? Occam's razor indicates it's better not to assume "determinism" in the first place.

    As far as "superdeterminism", if not testable, then not scientific. Period. Hence "personal religion".

    ReplyDelete
  16. Andrei,
    "How many examples do you have where the speed of light has been exceeded?"

    Every experimental test so far of Bell's inequality?

    I realize that can be seen as begging the question. But you can't say the evidence for nonlocality is zero.

    ReplyDelete
  17. Sabine,
    "It's not consistent with the standard model..."

    Do you mean that any model exploiting nonlocality must contradict predictions of the standard model?

    ReplyDelete
  18. Nonlin.org

    I have said this many times before, but here we go once again. You cannot test and therefore cannot falsify principles. Not ever. You can only test and falsify models.
    "Determinism" and "Superdeterminism" likewise are principles and as such unfalsifiable. The same is the case for principles like "supersymmetry" or "naturalness" and so on. You cannot falsify these. You can only falsify models that have been built using them.

    In other words, your complaint is a straw man.

    ReplyDelete
  19. Andrew,

    No, I am saying that the risk exists and if you can avoid that then it seems a good idea to avoid that.

    ReplyDelete
  20. @dtvmcdonald: I wrote this and it appeared on the recent comments list, but never showed up here. I will try again.

    The KAM (Kolmogoroff-Arnold-Moser) theorem is a purely classical result. For an oscillator with n degrees of freedom the dynamics is on a torus of 2n - 1 dimensions. So a simple oscillator will execute motion on a circle in space of momentum and position. For a coupled oscillator there are 2 DoFs and the phase space is 4 dimensions with 2 position variables and 2 momentum variables, and the energy constraint puts this in 3 dimensions. Now the dynamics for linear motion, say small oscillations, has a rational winding of a path on that torus. As the dynamics becomes more complicated that winding approaches an irrational winding. This irrational winding of the dynamics on the torus is the limit of regular dynamics. The KAM theorem gives physical limits on this where the torus can be “punctured” so that dynamics is no longer regular. Often this is seen with a Poincare section, a 2 dimensional plane that records where a particle passes through it, as so called Cantor dust. The regular dynamics of different particles with different initial conditions appears as a nested sets of dots, which get complicated as they approach the so called KAM surface. The chaos then appears as regions with randomly placed dots.

    BTW, algorithms for a chaotic dynamical system make good random number generators. Coding the standard map for instance can serve as a good random number generator.
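    A minimal sketch of such a generator in Python (illustrative only; the kick strength K = 8 is an arbitrary choice placing the map deep in the chaotic regime, and the seed point is arbitrary too):

```python
import math

def standard_map_stream(theta=0.3, p=0.2, K=8.0):
    """Iterate the Chirikov standard map. For large kick strength K
    the motion is chaotic, and the angle variable, rescaled to [0, 1),
    behaves like a stream of pseudo-random numbers."""
    while True:
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        yield theta / (2 * math.pi)

gen = standard_map_stream()
samples = [next(gen) for _ in range(10_000)]
mean = sum(samples) / len(samples)  # hovers near 0.5 for a uniform stream
```

    The stream passes the crudest uniformity check (mean near 0.5), though a generator of cryptographic quality would of course need far more care.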

    Where does quantum mechanics fit into this? QM throws spanners into these works. The quantum version of motion on a torus has a Bohr-Sommerfeld quantization around the various dimensions. This means that irrational winding on the torus is impossible, for the quantum wave would suffer complete destructive interference and could not exist on such a winding. We could suppose there is some sort of tunnelling as the winding approaches an irrational winding and then transitions into chaos. However, chaos sits oddly with QM. Quantum chaos is a subject of interest, and often it is manifested as random phases with so-called scarring, or the breakdown of degeneracies into what are called avoided crossings.

    Henri Poincaré worked on separatrices between dynamics with different initial conditions. Perturbations would deform these into wildly oscillating functions that converge with high-frequency dynamics to a cusp where two separatrices meet. Poincaré was in effect working with chaotic dynamics, but the subject was at best in gestation then. If you have a particle with some quantum properties, say the de Broglie wavelength λ = h/p, then even if that wavelength is very small for some sufficiently classical particle, there will be some limit where Poincaré's “mad separatrices” can't wind up further. This is because some nearly closed “filigree” loop in the dynamics will have ∫pdq ≈ ∮pdq = ħ, where the first integral is a line integral around some non-closed but nearly closed loop and the second integral approximates this with a closed loop. If the open loop approaches itself by a distance smaller than the wavelength this is not a bad approximation. The problem is you can't get the particle to orbit on a smaller loop! Or at least you can't get it to orbit smaller loops with any meaning to the momentum of the particle. This is a cut-off that is similar to why a quantum system can't irrationally wind on the torus.

    So this means that chaos theory makes the emergence of classical physics from QM more focused in some ways, but very difficult. This may be connected to quantum measurement or einselection. The coupling of a quantum system to some large N system as the apparatus means there is some shift of the quantum phase which might have some of the fractal dynamics similar to chaos theory.

    I would recommend Arnold's book on the mathematics of classical mechanics and Lichtenberg and Lieberman's book on chaos and bifurcations of vector fields.

    ReplyDelete
  21. It's obvious you need to guess the theory where the measurement states are included...

    ReplyDelete
  22. continuing on KAM theory: Having worked on solid state physics, though largely applied nuts-and-bolts stuff that is not research-level material, I have done quite a lot of reading on the subject. One of the things that amazes me is that a lot of intellectual progress lies there, and it has curious parallels to subjects such as string theory and even supersymmetry. In fact a route to high-temperature superconductivity is via a form of AdS_2 ~ CFT_1 correspondence. Fractional quantum Hall effects, abelian and nonabelian anyons, and Haldane chain models for quantum phase transitions are most interesting, and these invoke structures such as Virasoro algebras used in string theory and particularly the bosonic string. These have the advantage of being more tied to measured physics than what you see in string theory, which is terribly unconstrained.

    I was intending to write this in the entry on superdeterminism in response to comments on KAM theory. The winding around a KAM surface is similar in formalism to the fractional quantum Hall effect. The expectation on a 2-dim surface, either a boundary or a graphene sheet, goes as -log|z' - z|, which leads to the Laughlin wave ψ ~ e^{φ(z')√q} e^{φ(z)√q} and then Z ~ |z' - z|^q exp(-¼|z'|^2), which in general is computed for a product of these in a path integral. Fractional statistics emerges in anyons, with there being an S-dual charge on the lattice, call it p, and there is a fractional filling ν = p/q.

    A black hole event horizon stores up information concerning what composed the black hole. It is not hard to show that an accelerated observer near the horizon is in a Rindler wedge configuration. There is then an accelerated amplification of Hawking radiation appearing as Unruh radiation. An observer within a Planck length of the horizon observes this Hawking radiation as a huge pulse of radiation occurring almost instantly. Holography tells us that fields in 3-space plus time are defined on horizons of one spatial dimension less. We then I think have something entirely analogous to edge effects. The 2-dim surface will then have curious anyon statistics and other properties which include Yang-Mills fields. For black holes this would have connections to quantum hair and BPS black holes.

    It is possible the root of quantum gravity is then on low dimension spaces, which then might embed into higher dimensions. The stretched horizon as a membrane may contain the most germane physics for quantum gravitation, which BTW has fields with structures such as Virasoro algebras.

    Again for some reason my comments are not appearing here, but do show up on the comments list. I am not sure if this is a problem on my end or not.

    ReplyDelete
    Replies
    1. Lawrence Crowell: I am seeing this comment at least in email, and I see them on the blog comments, although I have to click on the "load more" button below the comments as initially loaded.

      Delete
  23. Thanks, Sabine, for bringing this paper of Louis Vervoort to our attention. He shows that the presence of a background field, such as correlated spins, can induce long-range order leading to a violation of the Bell inequalities. Another way to view this is that the background field constrains the choices available to the experimenter - the type of superdeterminism described in the blog. This approach seems like a promising starting point for a new PhD student.

    ReplyDelete
  24. Sabine, with "Superdeterminism", wouldn't you expect, on average, *greater* than quantum correlations??

    ReplyDelete
  25. Sabine,

    Thank you for a very interesting article about QM. I myself am not that familiar with superdeterminism. So I thought I would pose a question, which might help me to understand it better.

    Let's say that an experimentalist has a million pions (at rest) at his disposal, whose decays he observes. His experiment measures the decay time of each pion. After doing this, he simply verifies that the pion lifetime is what it says in the particle data book. But of course, that is true only on average, and there is a spread around the average value. Some pions decay much sooner than the average, others live longer. In the "Copenhagen interpretation" this is exactly what you expect. The field-theoretic computation of the pion decay width is a probability amplitude.

    Now, I can ask my question about the superdeterministic interpretation of QM. How does a superdeterministic theory predict exactly when each pion will decay? A simpler question might be this: to what mechanism does a superdeterministic theory attribute the fact that all pions don't decay at the average value? What makes one pion different from another in a superdeterministic interpretation?
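    One toy way to see what such an account could look like: give each pion a hidden parameter u, fixed at its creation, that deterministically sets its decay time. The sketch below (Python) is purely illustrative; the decay rule and the uniform distribution of u are assumptions for the sake of the example, not part of any worked-out superdeterministic model, but the ensemble reproduces the observed exponential statistics:

```python
import math
import random

TAU = 2.6e-8  # charged-pion mean lifetime in seconds (PDG value)

def decay_time(u):
    """Deterministic rule: the hidden parameter u, carried by the pion
    since its creation, fixes the moment of decay exactly."""
    return -TAU * math.log(u)

# An ensemble of pions whose hidden parameters are spread uniformly on (0, 1]
hidden = [1.0 - random.random() for _ in range(100_000)]
times = [decay_time(u) for u in hidden]
mean_lifetime = sum(times) / len(times)  # reproduces the observed average
```

    In such a picture what makes one pion different from another is simply its value of u; the open question, of course, is what physical quantity u would actually be.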

    ReplyDelete
  26. "The only claim a superdeterministic theory has to make is that a particular type of electromagnetic phenomena are not independent. There are plenty of examples of physical systems that are not independent, an obvious one being the motion of any of two massive objects due to gravity."

    No. Your example with gravity is what is required by usual science - a correlation requires a causal explanation. Reichenbach's principle of common cause. But the very point of superdeterminism is that this principle is rejected, that superdeterministic correlations do not require causal explanations. And this is easy to show - if you do not reject Reichenbach's principle, you can prove the Bell inequalities without any "superdeterminism" loophole.

    "Patently wrong. You cannot even make such a statement without having an actual model. "superdeterminism" is not a model."

    I can. Because superdeterminism (as a principle) is in conflict with, and therefore has to reject, Reichenbach's principle of common cause. Else, it would not define a loophole for the violation of the Bell inequalities.
    But once the violation of the Bell inequalities can be dismissed with "so what, superdeterminism is the loophole", why can this excuse not be applied to any other experiment whose statistical results I don't like? Why can the tobacco industry not refer to superdeterminism to explain away some correlations with lung cancer?

    What I have to use, in any statistical experiment, to justify any conclusions (of course, no need to explicitly mention it, but as a self-evident truism) is the principle of common cause. The correlation between smoking and lung cancer needs an explicit causal explanation. And it is the inability of the tobacco industry to give an innocent common cause (say, those with gene X like to smoke more, but also have increased lung cancer risk) which is the problem of the tobacco industry. With superdeterminism accepted, they have no such problem. A simple "so what, that's superdeterminism" would be sufficient. The initial conditions at the big bang were such that in all those studies smokers are overrepresented among those with lung cancer.

    Once you argue that without such a, say, genetic explanation of the correlations the tobacco industry has nothing, fine. But then the opponents of a preferred frame have nothing too. If they come up, in some unknown future, with some superdeterministic theory, fine, they are welcome back in the game. Until then, we have the straightforward simple solution, a hidden preferred frame where FTL causal influences are possible, the same type of "back to the time of Newton" theory as "smoking causes lung cancer".

    "You better think carefully about what experiment to make."

    This is what I do. That's why I prefer experiments where the superdeterminism excuse looks, hm, most questionable.

    ReplyDelete
  27. I don't see why superdeterminism would give you *exactly* the *quantum* statistics. For example (I think) 1/Sqrt(2) = about 71% is the max of the Bell pair correlations (where random = 50%). But why not 81%? 60%? 95%?
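    For reference, the 71% figure is the Tsirelson bound: quantum mechanics predicts the correlation E(a,b) = -cos(a-b) for the singlet state, and at the standard CHSH angles this gives S = 2*sqrt(2), against the local-hidden-variable limit of 2. A quick check in Python:

```python
import math

def E(a, b):
    # Quantum prediction for the spin correlation of a singlet pair
    # measured at analyzer angles a and b
    return -math.cos(a - b)

# Standard CHSH angle choices that maximize the quantum violation
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
# S comes out as 2*sqrt(2), so each term contributes 1/sqrt(2) ~ 71%;
# local hidden variables cap S at 2
```

    So a superdeterministic model that aims to replace quantum mechanics has to land on exactly these numbers because the experiments do; any other value would already be falsified.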

    ReplyDelete
  28. According to Rosenblum and Kuttner (Quantum Enigma: Physics Encounters Consciousness), "It is by considering the experiments we might have done, but in fact did not do, that the quantum enigma arises."

    A simple and obvious solution to the enigma is to deny that the physicist could have done otherwise. John Bell addressed "no free will" with Paul Davies - it "gets us out of the crisis."

    Incidentally, Einstein didn't believe in free will, and this didn't stop him from doing science. Not sure why so many quantum luminaries cling to the belief that they must have free will to operate.

    You might want to check out my book God Does Not Play Dice: the Fulfillment of Einstein's Quest for Law and Order in Nature.

    ReplyDelete
  29. Regarding the following objection to models of Superdeterminism,

    "If Alice and Bob are very far away, then this overlap goes back millions of years ago, extending over millions of light-years over space. Then, all – but absolutely ALL – particles within this huge area must carry – together! – the information required for a deterministic computation that can predict the two human's decision, which is not yet known even to themselves (they are not born yet)."

    it seems to me this is possibly an incorrect assumption. Consider chaos models. For Cellular Automata à la Wolfram, the information contained within this massive particle overlap is not needed. Only the current state and the rules for entering a subsequent state are needed. Nothing is statistically independent: not the experimenters, not the measuring devices, and not the particles; but large scale and complexity would make these appear independent. I can imagine initial conditions that create entanglement and a rule that doesn't allow that state to change unless and until another particle disrupts the correlated pair.

    Also, the rules described by a Cellular Automata wouldn't require hidden variables any more than the Schrödinger equation is a hidden variable resulting in a particle's position, for example.

    Perhaps I misunderstand your point Avshalom, but it seems you're saying that the calculation must take all previous states into consideration and that this calculation then becomes a super-computation of indefensible extent. I don't think this exists per Cellular Automata.

    KC

    ReplyDelete
  30. Lawrence, I'm on Cape Cod for the week visiting relatives, and we happened to watch the 2nd and 3rd episodes of one of my favorite movie trilogies, "Back to the Future", starring Christopher Lloyd and Michael J. Fox. In these episodes both Doc Brown and Marty McFly encounter copies of themselves at earlier times, which prompted me to think about your mention of duplicate quantum states arising from timelike loops through a wormhole. So, if I'm not mistaken, a duplicate quantum state would require an actual particle to carry that state. So, as with Doc and Marty, the duplicate particle coming from the past or future would presumably violate energy conservation, along with the quantum violation, when it shows up in the same place as its past or future self. I was also puzzled as to what the acronym "LOCC" meant.

    It's hard to write coherently in this tiny box with only 4 lines of text showing. Wish there was a way to expand the box.

    ReplyDelete
  31. Sabine,

    I’m finding it easy to get lost in the wealth of technical argument here and lose track of the essential question(s). In the interval between the universe’s hot, primordial broth and today’s issue of the New York Times, has anything changed? Have new things arisen? Is the universe more complex? Was today’s news typeset before the appearance of atomic carbon? Where would one make a change that would change the news of a mass shooting in a border town?

    Clarification requested, thanks.

    ReplyDelete
  32. Sabine,

    By "It's not consistent with the standard model" do you mean that all forms of nonlocality entail consequences that contradict the standard model?

    ReplyDelete
  33. Andrei,

    "How many examples do you have where the speed of light has been exceeded?"

    Every experimental test so far of the Bell Inequality? That counts as at least some evidence.

    ReplyDelete
  34. Andrew Dabrowski,

    From the point of view of Bell's theorem there are two choices, non-locality and superdeterminism. The theorem cannot say that one is more likely than the other, so we should give them equal probabilities, 50%. In order to refine this we can turn to evidence that is external to Bell's theorem. What we see is that locality is a core principle of all our theories (the standard model and general relativity), while statistical independence is a property that some systems have (like different coin flips) and other systems do not (like stars in a galaxy). We know for sure that there are systems that are not independent, therefore superdeterminism does not require any significant deviation from what we know about physics. We know of no example where locality has been violated, and that makes it very unlikely that entanglement is based on a non-local process. So, when we weigh all the evidence, superdeterminism is the most reasonable option.

    ReplyDelete
    Replies
    1. Andrei,
      "So, when we weigh all evidence superdeterminism is the most reasonable option."

      That's a reasonable summary, but I think you left out an important consideration: nonlocality is a rather simple fix that is available now; superdeterminism is a hypothetical fix that may never come to fruition. To my mind that pushes the reasonableness the other way.

      But I agree superdeterminism is worth looking at; I just don't think it's any more "scientific", at least based on what I've read here, than the multiverse. In fact superdeterminism sounds a bit like the god hypothesis.

      Delete
    2. Andrew Dabrowski,

      "nonlocality is a rather simple fix that is available now"

      This is questionable. I do not know of any successful extension of Bohm's interpretation to the relativistic domain.

      "superdeterminism is a hypothetical fix that may never come to fruition"

      Stochastic electrodynamics (which I consider to be superdeterministic) has relativity built in and is now capable of reproducing some quantum phenomena. True, it has not been shown that it can reproduce QM in general.

      "But I agree superdeterminism is worth looking at; I just don't think it's any more "scientific", at least based on what I've read here, than the multiverse."

      I have seen no good argument that superdeterminism requires more than a good-old classical field theory. I do not understand your point about the multiverse.

      "In fact superdeterminism sounds a bit like the god hypothesis."

      No, it does not. By definition, a theory is superdeterministic if, by applying its rules, the emission of an entangled pair and the detection/absorption of those particles are not independent physical phenomena. In my opinion, classical electromagnetism, without any supplementary assumption, could be such a theory. It has nothing to do with gods.

      Delete
  35. I do not understand what the problem is with quantum mechanics as it is formulated.
    I have the feeling that some people do not understand (or accept) the use of probability, probability density or expected value.
    The "collapse" of the wavefunction is nothing but the update of the experimenter's knowledge.
    QM is in many respects a more satisfactory theory than classical mechanics because it integrates measurement and experimenter directly into its formalism.

    ReplyDelete
    Replies
    1. isometric,

      I agree that the collapse of the wavefunction can be explained locally as an update of the experimenter's knowledge. But this fact alone does not provide an explanation for the observed correlations in a Bell test. You need some "extra" stuff for that (hidden variables), as EPR had argued.

      If you disagree can you provide a local explanation for the results obtained in a Bell test without hidden variables?

      Delete
    2. Andrei,

      The preparation of two perfectly anticorrelated spins (the singlet state) is determined from the start when the subsystems are in contact. The probability distribution corresponding to this initial state then evolves according to the Schrödinger equation as long as there is no measurement. The anticorrelation is there from the beginning and shows up at the end when the experimenter makes a measurement.

      You end up with a random process AND anticorrelated measurements.

      QM is coherent even in this case. There is no need of further ingredients, action at distance nor new interpretations.
      This is all about probability distributions evolving in space and time.

      I really don’t understand what worries some people about that. They try to find a cure for a problem that does not exist.

      Regards

      Delete
    3. isometric,

      In your first post you said that the state represents the knowledge of the observer, not a "real" thing in the world. Therefore the evolution of this state in agreement with Schrödinger's equation does not describe something that takes place in the world, but just the way the observer's expectation regarding a potential measurement evolves. But just because you know what the result of a measurement will be does not constitute an explanation of the actual experimental results. Knowing the probability that a certain star will go supernova is not the same thing as explaining why a certain star will undergo such a process in nature.

      So, if you want to use the evolution of the wavefunction as an explanation for the observed results you need to accept that the wavefunction represents a real entity, but then you need to accept that its non-local character corresponds to a non-local character of the world. If, on the other hand, you use the wavefunction as a representation of the observer's knowledge you end up with an explanatory gap.

      You say:

      "This is all about probability distributions evolving in space and time. "

      The problem is that those probability distributions, according to your take on the wavefunction (observer's knowledge), do not evolve in space and time, but in the observer's brain. If you want them to evolve in space and time you need to accept the wavefunction as a real entity "in space and time".

      I would say that your explanation fails because you change your interpretation to suit your interest. The wavefunction is just the observer's knowledge when you do not like its non-local character, but it becomes an entity in space and time when you want to use it to explain experimental results. So, make your choice and then stick with it!

      Regards

      Delete

    4. Andrei,

      I will try to present a very tangible thought experiment illustrating the singlet state:

      Let's take a long corridor with a person at each end (the detectors). In the middle of this corridor we prepare an urn with a white ball and a black ball (the singlet state preparation with the corresponding probability distribution). Without looking at the color of the balls (no experiment), we transfer them into two other boxes that we bring to the people at the ends of the corridor (the Schrödinger evolution). Opening the box, a person has a 1/2 chance of finding the white ball (the measurement).

      This is a random experiment AND measurements are anti-correlated.

      Nothing faster than light was sent between the experimenters (detectors). No hidden variables are needed. No new fancy interpretation.
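      The corridor setup is easy to simulate; a toy sketch in Python (illustrative only), showing that each local outcome is 50/50 while every pair is perfectly anticorrelated:

```python
import random

def run_trial():
    # Prepare the urn: one white and one black ball, shuffled blindly,
    # then carried to opposite ends of the corridor
    balls = ["white", "black"]
    random.shuffle(balls)
    return balls[0], balls[1]

results = [run_trial() for _ in range(1_000)]
# Locally, each end sees a 50/50 outcome ...
whites_left = sum(1 for left, _ in results if left == "white")
# ... yet the pair is always anticorrelated, with nothing sent between ends
perfectly_anticorrelated = all(left != right for left, right in results)
```

      Note that in the simulation the shuffled colors are variables fixed at the source and merely revealed later, a point taken up in the reply below.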

      The wave function is not a physical object like a wave on a lake. It is a template of complex probabilities. It doesn't need any substrate to propagate.

      The wave function can perfectly be translated (deterministically) in space and time depending on the shape of the experimental device which is well known.

      There is no measurement problem, QM is local, it incorporates the experimenter, the measurement and shape of the experiment in its description.

      QM perfectly describes the singlet state "problem" which is nothing special.

      It is a wonder to me how many books were written about this "no-problem".

      Regards

      Delete
    5. isometric,

      I fully agree that the white/black balls description is a perfectly acceptable explanation, but it is a hidden-variable explanation. The reason for getting anticorrelated results is that:

      1. You introduce in the box a white ball and a black ball.

      2. The balls do have a well-defined state (black or white) all the time.

      3. Measurement reveals the color the balls had from the beginning.

      I actually agree that this is also the explanation for Bell tests. But in order for such an explanation to work in this case you also need the type of measurement to be correlated with the color (the hidden variable), and this requires superdeterminism.

      On the other hand you seem to reject hidden variables, so I do not understand why you chose a hidden-variable explanation for the Bell correlations.

      "Nothing faster than light was send between the experimenters (detectors). No hidden variables are needed. No new fancy interpretation."

      You are wrong here. The colors (black/white) that persist during the experiment are those hidden variables.

      "The wave function is not a physical object like a wave on a lake. It is a template of complex probabilities. It doesn't need any substrate to propagate.

      The wave function can perfectly be translated (deterministically) in space and time depending on the shape of the experimental device which is well known."

      The wave function cannot exist in spacetime. It has the wrong number of dimensions for that. Only when a measurement occurs do you get the probabilities of getting this or that result in spacetime.

      "QM perfectly describes the singlet state "problem" which is nothing special.

      It is a wonder to me how many books were written about this "no-problem"."

      You have provided no explanation for the Bell test results. Your explanation in terms of black/white balls is a hidden-variable explanation, but then you say that you do not need hidden variables. Please clarify this issue before making the claim that there is no problem to be solved!

      Regards

      Delete
  36. Sabine,

    in a previous post (Limits of reductionism) you claimed "[...] let me emphasize that reductionism is not a philosophy, it’s an empirically well-established fact. It describes what we observe. There are no known exceptions to it.". On the other hand, just above you admit (and I agree) that quantum mechanics (which is our best experimentally established theory) is inconsistent with reductionism. Isn't your first claim a little bit too radical?

    ReplyDelete
  37. Sabine,

    "It's not consistent with the standard model..."

    Do you mean that every nonlocal model makes predictions that contradict the standard model?

    ReplyDelete
  38. Andrei,

    "How many examples do you have where the speed of light has been exceeded?"

    Every experimental test so far of the Bell inequalities? It's not fair to say there is no evidence for nonlocality. Moreover, nonlocality doesn't necessarily even require a mechanism like wormholes, it might just be axiomatic. Einstein was having an off day when he referred to nonlocality as "spooky".

    ReplyDelete
  39. If superdeterminism works, as a model, then the usefulness/validity of that model was determined by initial conditions eons ago, and that is rather neat. We might ask whether initial conditions could have been such that superdeterminism would NOT work as a model -- and yet that outcome would have been determined? All of this seems to put a set of constraints on initial conditions. A set of random initial conditions would generate... what? Obviously you can't have random initial conditions. So, among possible initial conditions, what are the options? I would think that one constraint would be self-consistency, in that you can't have initial conditions that would produce an illogical or randomly evolving universe, or a universe devoid of causality. What does that imply for QM, if anything? I wonder how much variation is allowable, and whether the number of kinds of possible universes that can be determined by allowable initial conditions is finite or infinite.

    ReplyDelete
    Replies
    1. Rick Lubbock,

      Superdeterminism does not necessarily require a specific set of initial conditions. Correlations can appear in physical systems described by field theories regardless of the exact initial state. Planetary orbits are all ellipses, but this has nothing to do with the initial state of the planetary system; it's determined by the way general relativity works. The fact that in a planetary system all planets orbit in the same direction does not have anything to do with some delicate choice of the initial state. In fact any such state (a cloud of gas and dust) spontaneously evolves into a disk, and all planets will go in the same direction.

      It might be the case that Bell correlations are of the same type as the above described correlations. Any initial state that could evolve into a Bell test will produce results in agreement with QM just like any cloud of gas will produce planets orbiting in the same direction.

      Delete
  40. supersymmetries = superdimensions = supergravity = superdeterminism
    #
    The hardest thing of all is to find a black cat
    in a dark room, especially if there is no cat.
    / Confucius /
    ===

    ReplyDelete
  41. This comment has been removed by the author.

    ReplyDelete
  42. In my understanding:
    - The QM violation of Bell's inequalities implies that QM cannot be a deterministic hidden-variable theory (i.e., one which acknowledges the separability principle).
    - The previous statement rules out the superdeterministic solution to the measurement problem. Then, how can we solve it?
    - By using dynamical models. Instead of the typical wave-packet reduction, we can use the dynamical model of an apparatus coupled with environment that interacts with our quantum system.
    - The use of these models implies that we need to rethink concepts like wave-function collapse and unitary system evolution. These two opposing concepts created the measurement problem; but there are no sharp collapses in quantum systems; instead, there are gradual interactions with the environment that leave these quantum systems without their superposition property.
    - Although unitary evolution of an isolated system is indeed possible, in the real world systems tend to get coupled to the environment. In fact, manufacturers of quantum computers spend a lot of money on keeping their qubits in superposition (is the record now 40 min.?); but eventually all qubits lose this property. The most natural way to lose it is through thermal noise. I was thinking also that a different way for any quantum system to lose superposition could be electromagnetic noise, or even gravitational noise (e.g., near supermassive black holes or near less and less massive bodies down to a certain limit).
    - I had the chance to mention this last issue in person to Mr. 't Hooft. He told me that in all his work he never related gravitational noise to quantum collapse (in general terms, he was not very happy with most of the above).
    - Regarding quantum mechanics, I hope that the maths will one day tell us the only correct way to interpret it. Until that day comes, there is plenty of room for us to speculate.

    ReplyDelete
    Replies
    1. Antonio,

      Bell's theorem has nothing to do with separability. According to Einstein, separability means the "mutually independent existence of spatially distant things".
      A simple example: Two synchronized clocks have independent existence (they are separable) but their states are not independent.

      All field theories allow for separability (which is a consequence of locality) but not for independence (which is the relevant assumption of Bell).

      Delete
    2. Today I have learnt something... I couldn't believe that Liam Hemsworth and Miley Cyrus had just got divorced, because I thought that Liam Hemsworth was married to Elsa Pataky; but, googling a bit, I have understood that Thor is in fact played by Chris Hemsworth, who is married to Elsa and has a brother called Liam: 'mystery' solved.
      Andrei, your 'mystery' is much easier to solve: just read the 2009 book "Quantum Mechanics" by Auletta et al. (pg. 589).

      Delete
    3. Antonio,

      I have checked the book and found this definition:

      "Two dynamically independent systems possess their own
      separate state."

      In this case field theories do not obey the separability principle (a system of charged particles described by classical electromagnetism cannot be split into dynamically independent subsystems; a system of massive objects described by general relativity cannot be split into dynamically independent subsystems).

      So, by looking at classical field theories one can avoid Bell's conclusion.

      Superdeterminism is alive and well!

      Delete
  43. I really enjoy the back and forth banter in the discussions here, especially Lawrence Crowell's deeply knowledgeable responses. A wonderful nugget is his observation that every time you look through polarized sunglasses you are experiencing a violation of Bell's inequalities. The probability difference for light transmission through the sunglasses between the classical and quantum mechanical cases is really quite startling.

    ReplyDelete
  44. Sabine,
    If you want compatibility with GR, notice that it is a deeply time-symmetric theory: Einstein's equations can be seen as an equilibrium condition between past and future - for spacetime as a "4D jello".
    In contrast, "local realism" uses our intuitive time-asymmetry instead. Just replace it with time-symmetric locality, as in Einstein's equations or in (e.g. Feynman's) path ensembles, and the problems disappear.
    For example the simplest uniform path ensemble ( https://en.wikipedia.org/wiki/Maximal_entropy_random_walk ), besides repairing disagreements of standard diffusion such as the lack of Anderson localization, already yields the Born rule rho ~ psi^2 directly from time symmetry: one psi from the past ensemble/propagator, the second from the future. It also allows for a Bell-violation construction.
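    As a concrete illustration of the rho ~ psi^2 claim for MERW (my own sketch, not the commenter's): on a finite graph, the MERW transition probabilities are built from the dominant eigenvector psi of the adjacency matrix, and the stationary distribution comes out proportional to psi^2 - one factor of psi from the past propagator, one from the future.

```python
import numpy as np

# Path graph on 5 nodes; A is its adjacency matrix.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Dominant (Frobenius-Perron) eigenvalue and eigenvector of A.
eigvals, eigvecs = np.linalg.eigh(A)
lam = eigvals[-1]
psi = np.abs(eigvecs[:, -1])  # FP eigenvector can be chosen positive

# MERW transition matrix: S[i, j] = A[i, j] * psi[j] / (lam * psi[i]).
# Row sums are 1 because A @ psi = lam * psi.
S = A * psi[None, :] / (lam * psi[:, None])

# The stationary distribution equals normalized psi^2 (the "Born rule").
rho = psi**2 / np.sum(psi**2)
assert np.allclose(rho @ S, rho)
```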

    ReplyDelete
  45. @A Andros
    I strongly recommend reading D. Dennett's book "Freedom Evolves", in which he explains in great detail how "agency" and high-level causal explanations are completely compatible with determinism. (The book is about free will, but the arguments in it pertain to what you write here as well.)

    ReplyDelete
  46. Ilja,

    I am sorry, but for some reason I could not see your answer till now.

    "Your example with gravity is what is required by usual science - a correlation requires a causal explanation. Reichenbach's principle of common cause. But the very point of superdeterminism is that this principle is rejected, that superdeterministic correlations do not require causal explanations. And this is easy to show - if you do not reject Reichenbach's principle, you can prove the Bell inequalities without any "superdeterminism" loophole."

    On this page:

    https://plato.stanford.edu/entries/physics-Rpcc/#2.2

    we read:

    "Maxwell's equations not only govern the development of electromagnetic fields, they also imply simultaneous (in all frames of reference) relations between charge distributions and electromagnetic fields. In particular they imply that the electric flux through a surface which encloses some region of space must equal the total charge in that region. Thus electromagnetism implies that there is a strict and simultaneous correlation between the state of the field on such a surface and the charge distribution in the region contained by that surface. And this correlation must hold even on the space-like boundary at the beginning of the universe (if there be such). This violates all three common cause principles. "

    I am not sure how you can reject superdeterminism if Reichenbach's principle is accepted, but either way, a theory like classical electromagnetism could in principle violate Bell's inequality. And this theory is local.

    "Because superdeterminism (as a principle) is in conflict, and therefore has to reject Reichenbach's principle of common cause. Else, it would not define a loophole for the violation of the Bell inequalities."

    Can you present some evidence for this claim?

    "But once the violation of the Bell inequalities can be rejected as "so what, superdeterminism is the loophole", why can this excuse not applied to any other experiment which has statistical results I don't like? Why can the tobacco industry not refer to superdeterminism to explain away some correlations with lung cancer?"

    The answer is simple. The superdeterministic correlations, just like the nonlocalities in Bohm's theory are supposed to reproduce QM. So, as long as cancer research does not contradict QM it will not contradict a superdeterministic interpretation of QM. What you are doing here is the logical fallacy of faulty generalization, "a conclusion about all or many instances of a phenomenon that has been reached on the basis of just one or just a few instances of that phenomenon". Superdeterminism posits that the emission and detection of entangled particles are not independent physical phenomena. Does cancer research use entangled states? If not, superdeterminism has nothing to say about it.

    "The initial conditions at the big bang were so that in all those studies smokers are overrepresented among those with lung cancer."

    Correlations can be explained without fine-tuning the initial state. The quote about Maxwell's theory is an example. Orbiting planets are another one.

    ReplyDelete
  47. @Sabine.

    Thanks Sabine for this interesting discussion of superdeterminism. Since two of my articles on the topic have been mentioned above, allow me to post my latest article on it: https://arxiv.org/abs/1811.10992 (for some reason the last version is not yet uploaded by arXiv, I am checking this, but the older version is very close in content). My main argument is that (super)determinism solves more problems than its competitor, indeterminism. See Conclusion on the last page: “It was argued that there is one hypothesis which offers an explanation for 1) Kolmogorov’s problem of probabilistic dependence; 2) the interpretation of the Central Limit Theorem; and 3) Bell’s theorem – namely HYP-1, in short, the hypothesis of (super)determinism. On the other hand, indeterminism (‘no hidden variables’) remains entirely silent regarding 1) and 2), and leaves 3) as an obstacle rather than a solution for the unification of QM and general relativity.” As far as I know, there are no problems or questions that could be solved by indeterminism and not by (super)determinism. As I see it, this suggests that, if one wishes to stick to the standard position in physics and adopt the principles with the highest explanatory power, one should adopt superdeterminism and reject indeterminism.

    I would be happy to discuss.

    ReplyDelete
  48. What is the formula for "free will"?

    ReplyDelete
    There may be some properties of, and possible tests for, superdeterministic theories:
    1. We may divide SD theories into two categories: time-reversible and time-irreversible ones (like Conway's Game of Life, where a given state may have different predecessor states). If you can construct an experiment on T-invariance, then you will rule out such theories (as well as Everett's solution, which is also reversible).
    2. SD may include actual ("true", not just deterministically chaotic) probabilities in the case of an infinite state space, especially if you rule out the initial condition altogether (eternal evolution).
    3. There was a paper in Phys. Rev. X in which the authors reconstructed the distribution of the double-slit experiment by letting infinitely many classical "universes" interact using simple Lagrangian mechanics. For a finite number of such "universes" they get a discrete probability distribution, and in the infinite limit they recover the well-known prediction of quantum mechanics.
    4. Some authors here said that SD implies that, given the initial conditions, one can fully calculate the outcomes; however, they do not account for the possibility that the prediction process may be non-computable.

    A lot has been said about the philosophy and falsifiability of SD. But this depends on the definition of what science is. If one has two theories that describe the same experimental outcomes, but one of them requires less information to be fully defined, the probability of the simpler theory will be higher.

    My personal opinion: "free will" may be just a legal term, not an actual biological or, worse, physical bunch of phenomena.

    ReplyDelete
  50. I'm sorry. I still don't see how you get the *particular quantum* statistics from superdeterminism.

    ReplyDelete
  51. It seems like if we have superdeterminism we could often get greater-than-quantum correlations with Bell pairs.
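    For what it's worth, the textbook example of such stronger-than-quantum correlations is the Popescu-Rohrlich (PR) box: a hypothetical no-signaling correlation that reaches the algebraic maximum CHSH value of 4, above the quantum (Tsirelson) bound of 2*sqrt(2). A minimal sketch (my own illustration, not the commenter's):

```python
import math

def E(x, y):
    """PR-box correlator for binary settings x, y: the outputs obey
    a XOR b = x AND y with uniform marginals, so the correlation
    E = P(a == b) - P(a != b) is +1 unless x = y = 1, where it is -1."""
    return -1 if x == 1 and y == 1 else 1

# CHSH combination; local realism gives |S| <= 2, quantum mechanics
# gives |S| <= 2*sqrt(2), but the PR box reaches the algebraic maximum.
S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(S)  # 4
```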

    ReplyDelete
  52. “It seems like if we have superdeterminism we could often get greater-than-quantum correlations with Bell pairs.”

    Turn over a rock and little critters dart for cover. They are the winners of countless past contests between predator and prey that have occurred over some three billion years. Their slower kindred have given up the game. They are evidence that superdeterminism cannot serve as some overarching organizing principle. That said, one must advise that the following discussion may reflect a measure of misapprehension.

    Superdeterminism has a biblical potency -- no matter what the question, "the answer is in the book." Being both the all-encompassing container and its content, it effectuates both sides of any boundary. Thus, ultimately there is no real distinction between experimenter and experiment, predator and prey or, for that matter, position and momentum. SD has no need for the adaptation of one part to fit another, no need for Toynbee's challenge and response or any contest between possible biological futures. Being the only logical system, it is by definition complete; Gödel can stay home.

    As an overarching principle, superdeterminism would be incompatible with the mechanisms of the biological complexity of which we are intimately a part. It would deny biological systems some small but necessary measure of independent efficacy in the ‘present moment,’ and thus exclude the decisive contests between possible futures that are the stepping stones of evolution. Whatever theoretical inconsistency it may solve on the one hand, its other hand would wield a Midas touch that effectively removes the life from living things.

    Better to see superdeterminism as an organizing principle that may be half right! That is, the observable universe is more likely to be created by a hybrid organizing principle wherein one half is ‘super-deterministic’ and the other half is ‘super-indeterministic.’ Your patience, please. This statement is whimsical for the sake of provocation. It may not make much sense or have any argumentative traction.
    But consider that it does roughly convey a fundamental subtext of our universe when viewed as a continual dynamical interplay between principles of mutable constraint and constrainable mutation (mutation here from the PIE root *mei- "to change, go, move"). This underlying dynamic would be more likely than SD to yield a universe with characteristics of a probabilistic description, energy and information as conserved quantities, a kinetic/potential description, a least-action principle, an uncertainty between complementary variables, a biology that habituates to every possible extreme of terrestrial environment, and our opposable thumbs.

    Proto-physical speculations aside, it would be useful to consider the production of frogs in our universe as contrasted with a hypothetical SD universe…

    ReplyDelete
  53. "the observable universe is more likely to be created by a hybrid organizing principle wherein one half is ‘super-deterministic’ and the other half is ‘super-indeterministic"

    I think this is a very good idea! Sabine?

    ReplyDelete
  54. Sabine,

    In defense of reductionism you have mentioned that no one has ever cut open a frog and not found atoms. But then again, no one has ever put together atoms and created a frog. Is there any way to do that? The typical frog has a mass of about 1.4 x 10^28 Da. There are so many arrangements that get it wrong and only one way to get it right. There is no frog-making equation derived from first principles on the chalkboard. The equation we have is proprietary to planet earth. It is moist, biological and writ very small. Its derivation took an enormously long time. One may simply insert this equation into its proper slot within the accompanying biosphere and presto: there is frog.

    If we accept the observable universe as factual reference for the most direct, least action path from some distant past to the moment of breathing frog, then, according to evolutionary biologists, the minimum-time process for creating a frog is on the order of three billion years (the production time for current models has been reduced to about fifty days). And we know that this process, far from being direct, was deeply episodic with myriad cycles of reproductive change and a saga of trial and error adaptation to complex constraints of changing physical and biological environments.

    In contrast, how are we to understand frog production in an SD universe? Really, please explain! Does it essentially create the frog by decree, as an imprint upon the nascent universe that through time will invariably produce a particular frog? And not only the frog – it must create the physical universe in which the frog exactingly fits. It must create its lily pad, the flitting insects, the approaching heron, the exact shape of clouds on the horizon and all the stars beyond. But from whence does this extensive and intricately integrated pattern arise? I cannot grasp the rationale here. This sounds like Genesis rewritten – ‘In the beginning was the Frog.’

    Help me understand why, in a least-action universe, there would be all the clearly evident but messy details of trial and error in evolutionary biology if they don’t have a necessary function or play some determinative role in the end result. Why the bother? Why, for that matter, is there a need for time itself in an SD universe?

    Mariners navigated by celestial clock and then increasingly by synchronous chronometers. The problem was maintaining synchronicity between two clocks, one having floated to the central Pacific. There is comforting regularity in a clockwork with its every little gear tightly fitted and enmeshed, one to the other. But what if the gears lose touch with one another; what if gaps develop? In a universe that admits to description via subsystems, which of the fundamental forces serve to keep parts synchronous when they are far apart? It seems that an SD universe would require such a solution.

    ReplyDelete
  55. Sabine,

    You said in the blog post “... the detector isn’t statistically independent of itself ...”.
    I just realized that I might not have understood what you mean with “of itself”.
    The detector, let's say the screen in a double-slit experiment, consists of a bunch of QM particles, the photosensitive emulsion or whatever. The screen itself is more or less stable and has a certain position relative to the slits.
    Is it this stability of the screen, its relative position, this “classical correlation” of the particles – simply put, its form and shape – that you mean by “... the detector isn’t statistically independent of itself ...”?

    Another example would be a simple polarizer. Take a piece of Scotch tape, stretch it, i.e. rearrange the molecules, and you get a preferred direction in space. Again, a more or less stable configuration of its molecules, not being statistically independent. Is this what you mean?

    ReplyDelete
    Replies
    1. I take your silence as a yes, i.e. “... the detector isn’t statistically independent of itself ...” means a more or less stable shape localized in space – the hallmark of a classical object.
      But of course a classical object that consists of a lot of QM particles.

      Delete
    2. Reimond,

      I mean all the degrees of freedom of the detector - that includes the measurement settings - are not statistically independent of the measurement settings.

      Delete
    3. Ok, thanks Sabine - I also meant the settings to be included, e.g. the selected angle for the polarizer in EPR.

      Delete
  56. Can I put this in less technical terms to see if I understand superdeterminism: SD is a theory in which everything that happens is causally determined by the initial state of the universe, including the actions of two physicists who do not exist yet and a particle they will measure in the future that does not exist yet.

    ReplyDelete
    Replies
    1. Ira,

      This is correct but entirely misses the point, because that's the case in any theory that is deterministic, e.g. Newtonian mechanics. It is not the relevant property of superdeterminism.

      Delete
  57. Determinism in Newtonian mechanics concerns the macro-behaviour of objects.
    Superdeterminism in quantum mechanics concerns the dualistic (!) behaviour of micro-particles.
    Two different situations.

    ReplyDelete
  58. Superdeterminism is dual to super free will.
    Predictable (ordered) is dual to disordered (randomness, chaos).
    Order is dual to randomness (entropy, change).
    Optimized prediction is dual to absolute free will.
    Certainty is dual to uncertainty -- Heisenberg.
    Syntropy (mutual information) is dual to increasing entropy -- the 4th law of thermodynamics.
    Superdeterminism is one part of a duality!
    The universe is completely predictable and unpredictable at the same time -- duality! Genetic mutation is completely random according to Darwinism and evolution.

    ReplyDelete
  59. Dr. H. won't change her mind about free will because changing her mind would show that she has free will, thus negating her belief. Ha. And since her beliefs were already set 13.8 billion years ago in the Big Bang, she can't change her mind now.

    If super-determinism is true, we shouldn't trust our ability to judge its validity since we lack the free will to evaluate it. If our ways of thinking were arbitrarily pre-determined in the Big Bang, we can't trust our own thought processes since they are just a fluke of the initial conditions.

    Super-determinists make assumptions, but they don't prove anything. They assume that nature cannot evolve autonomous beings who have the power to make independent uncompelled choices.

    If we reduce everything to the physics of elementary particles, we could just as well argue that consciousness is impossible since there's no consciousness in matter. How can consciousness evolve from atoms of hydrogen, oxygen, carbon, etc.? Impossible! If everything should be subsumed to quantum mechanics, how does QM provide for the existence of consciousness? It doesn't. Nor does QM tell us anything about free will.

    "There are more things in heaven and Earth than are dreamt of in your philosophy."

    ReplyDelete
    Replies
    1. "Dr. H. won't change her mind about free will because changing her mind would show that she has free will, thus negating her belief. Ha."

      This is wrong. It is totally possible to change your mind without having free will. Computers also "change their mind" when they get new input. I do not change my mind because I am a scientist and scientific evidence says free will does not exist. I have no idea what makes you think I give a shit about your opinion. Also, I am tired of this nonsense and will close this comment section, good bye.

      Delete