It is not a spontaneous acknowledgement of philosophy that sparked physicists’ rediscovered desire; their sudden search for meaning is driven by technological advances.
With quantum cryptography a reality and quantum computing on the horizon, questions once believed ephemeral are now the bread and butter of working researchers. When I was a student, my prof thought it questionable that violations of Bell’s inequality would ever be demonstrated convincingly. Today you can take that as given. We have also seen delayed-choice experiments, marveled over quantum teleportation, witnessed decoherence in action, tracked individual quantum jumps, and cheered when Zeilinger entangled photons over hundreds of kilometers. Well, some of us, anyway.
But while physicists know how to use the mathematics of quantum mechanics to make stunningly accurate predictions, just what this math is about has remained unclear. This is why physicists currently have several “interpretations” of quantum mechanics.
I find the term “interpretations” somewhat unfortunate. That’s because some ideas that pass as “interpretations” are really theories which differ from quantum mechanics, and these differences may one day become observable. Collapse models, for example, explicitly add a process for wave-function collapse to quantum measurement. Pilot wave theories, likewise, can result in deviations from quantum mechanics in certain circumstances, though those have not been observed. At least not yet.
A phenomenologist myself, I am agnostic about different interpretations of what is indeed the same math, such as QBism vs Copenhagen or the Many Worlds. But I agree with the philosopher Tim Maudlin that the measurement problem in quantum mechanics is a real problem – a problem of inconsistency – and requires a solution.
And how to solve it? Collapse models solve the measurement problem, but they are hard to combine with quantum field theory which for me is a deal-breaker. Pilot wave theories also solve it, but they are non-local, which makes my hair stand up for much the same reason. This is why I think all these approaches are on the wrong track and instead side with superdeterminism.
But before I tell you what’s super about superdeterminism, I have to briefly explain the all-important theorem of John Stewart Bell. It says, in a nutshell, that correlations between certain observables are bounded in every theory which fulfills certain assumptions. These assumptions are what you would expect of a deterministic, non-quantum theory – statistical locality and statistical independence (the latter also known as measurement independence) – and should, most importantly, be fulfilled by any classical theory that attempts to explain quantum behavior by adding “hidden variables” to particles.
Experiments show that the bound of Bell’s theorem can be violated. This means the correct theory must violate at least one of the theorem’s assumptions. Quantum mechanics is indeterministic and violates statistical locality. (Which, I should warn you, has little to do with what particle physicists usually mean by “locality.”) A deterministic theory that doesn’t fulfill the other assumption, that of statistical independence, is called superdeterministic. Note that this leaves open whether or not a superdeterministic theory is statistically local.
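For concreteness, here is a toy numerical sketch (Python; not part of the original argument) of the bound in question: in the CHSH form of Bell’s theorem, any theory satisfying the above assumptions keeps the correlator at or below 2, while quantum mechanics predicts up to 2√2 for suitably chosen detector settings.

```python
import numpy as np

def singlet_correlation(a, b):
    """Quantum-mechanical prediction E(a, b) = -cos(a - b) for spin
    measurements on a singlet pair at detector angles a and b."""
    return -np.cos(a - b)

# Standard CHSH settings (radians)
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

S = (singlet_correlation(a, b) - singlet_correlation(a, b_prime)
     + singlet_correlation(a_prime, b) + singlet_correlation(a_prime, b_prime))

print(abs(S))       # ~2.828, i.e. 2*sqrt(2)
print(abs(S) > 2)   # True: the bound that holds under Bell's assumptions is exceeded
```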
Unfortunately, superdeterminism has a bad reputation, so bad that most students never get to hear of it. If mentioned at all, it is commonly dismissed as a “conspiracy theory.” Several philosophers have declared superdeterminism means abandoning scientific methodology entirely. To see where this objection comes from – and why it’s wrong – we have to unwrap this idea of statistical independence.
Statistical independence enters Bell’s theorem in two ways. One is that the detectors’ settings are independent of each other, the other that the settings are independent of the state you want to measure. If you don’t have statistical independence, you are sacrificing the experimentalist’s freedom to choose what to measure. And if you do that, you can come up with deterministic hidden-variable explanations that result in the same measurement outcomes as quantum mechanics.
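In the notation commonly used in the literature (for instance in the Hall paper listed under recommended reading), with λ the hidden variables, a and b the detector settings, and A, B the outcomes, the structure is roughly this:

```latex
% Observed correlations as an average over hidden variables:
P(A,B \mid a,b) = \int \mathrm{d}\lambda \,\rho(\lambda \mid a,b)\, P(A \mid a,\lambda)\, P(B \mid b,\lambda)
% Statistical independence: the hidden variables do not depend on the settings,
\rho(\lambda \mid a,b) = \rho(\lambda)
% A superdeterministic model drops this last equality.
```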
I find superdeterminism interesting because the most obvious class of hidden variables are the degrees of freedom of the detector. And the detector isn’t statistically independent of itself, so any such theory necessarily violates statistical independence. It is also, in a trivial sense, non-linear, simply because a detector whose behavior depends on a superposition of prepared states is not the same as a superposition of two separate measurements. Since any solution of the measurement problem requires a non-linear time evolution, that seems a good opportunity to make progress.
Now, a lot of people discard superdeterminism simply because they prefer to believe in free will, which is where I think the biggest resistance to superdeterminism comes from. Bad enough that such a belief isn’t a scientific argument; worse, it rests on a misunderstanding of what is going on. It’s not like superdeterminism somehow prevents an experimentalist from turning a knob. Rather, it’s that the detectors’ states aren’t independent of the system one tries to measure. There just isn’t any setting the experimentalist could turn their knob to that would remove the correlation.
Where do these correlations ultimately come from? Well, they come from where everything ultimately comes from, that is, from the initial state of the universe. And that’s where most people walk off: They think that you need to precisely choose the initial conditions of the universe to arrange quanta in Anton Zeilinger’s brain just so that he’ll end up turning a knob left rather than right. Besides sounding entirely nuts, it’s also a useless idea, because how the hell would you ever calculate anything with it? And if it’s both useless and unfalsifiable, then indeed it isn’t science. So, frowning at superdeterminism is not entirely unjustified.
But that would be jumping to conclusions. How much detail you need to know about the initial state to make predictions depends on your model. And without writing down a model, there is really no way to tell whether it does or doesn’t live up to scientific methodology. It’s here where the trouble begins.
While philosophers on occasion discuss superdeterminism on a conceptual basis, there is little to no work on actual models. Besides me and my postdoc, I count Gerard ‘t Hooft and Tim Palmer. The former gentleman, however, seems to dislike quantum mechanics and would rather have a classical hidden variables theory, and the latter wants to discretize state space. I don’t see the point in either. I’ll be happy if the result solves the measurement problem and is still local the same way that quantum field theories are local, i.e., as non-local as quantum mechanics always is.*
The stakes are high, for if quantum mechanics is not a fundamental theory, but can be derived from an underlying deterministic theory, this opens the door to new applications. That’s why I remain perplexed that what I think is the obvious route to progress is one most physicists have never even heard of. Maybe it’s just a reality they don’t want to wake up to.
Recommended reading:
- The significance of measurement independence for Bell inequalities and locality
Michael J. W. Hall
arXiv:1511.00729
- Bell's Theorem: Two Neglected Solutions
Louis Vervoort
Found. Phys. 43, 769–791 (2013), arXiv:1203.6587
* Rewrote this paragraph to better summarize Palmer’s approach.
It's misleading to say that Maudlin in particular has declared that superdeterminism means abandoning scientific methodology. That point was made already in 1976 by Clauser, Shimony and Holt.
ReplyDelete"In any scientific experiment in which two or more variables are supposed to be randomly selected, one can always conjecture that some factor in the overlap of the backwards light cones has controlled the presumably random choices. But, we maintain, skepticism of this sort will essentially dismiss all results of scientific experimentation. Unless we proceed under the assumption that hidden conspiracies of this sort do not occur, we have abandoned in advance the whole enterprise of discovering the laws of nature by experimentation."
I recommend that anyone interested in the status of the statistical independence assumption start by reading the 1976 exchange between Bell and CSH, discussed in section 3.2.2 of this:
https://plato.stanford.edu/entries/bell-theorem/#SuppAssu
With "in particular" I didn't mean to imply he's the first or only one. In any case, I'll fix that sentence, sorry for the misunderstanding.
WayneMyrvold,
If statistical independence is such a generally accepted principle it follows that no violations should be found, right? Let me give some examples of distant physical systems that are known not to be independent:
1. Stars in a galaxy (they all orbit around the galaxy's center)
2. Planets and their star.
3. Electrons in an atom.
4. Synchronized clocks.
So, it seems that it is possible after all to do science and accept that some systems are not independent, right?
Now, if you reject superdeterminism you need to choose a different type of theory. Can you tell me what type of theory you prefer and just give a few examples of known experiments that provide evidence for that type of theory? For example, if you think non-locality is the way to go can you provide at least one experiment where the speed of light was certainly exceeded?
Thanks!
This would be easier for me to understand if I could figure out what the definition is of a successful superdeterminism model. Suppose you and your postdoc succeed! What would follow in the abstract of your article following "My postdoc and I have succeeded in creating a successful superdeterministic model. It clearly works because we show here that it..." (does exactly what?). Thanks.
Leibniz,
On a theoretical level I would say it's successful if it solves the measurement problem. But the more interesting question is of course what a success would mean experimentally. It would mean that you can predict the outcome of a quantum measurement better than what quantum mechanics allows you to (because the underlying theory is deterministic after all).
Does that answer your question?
The reference cited (by WayneMyrvold) above
Bell’s Theorem
(substantive revision Wed Mar 13, 2019)
https://plato.stanford.edu/entries/bell-theorem/
is worth reading in its entirety.
In
Bell's Theorem: Two Neglected Solutions
Louis Vervoort
'supercorrelation' appears more interesting than 'superdeterminism' (and perhaps more [Huw] Pricean).
Violation of the Bell-Inequality in Supercorrelated Systems
Louis Vervoort
(latest version 20 Jan 2017)
https://arxiv.org/abs/1211.1411v1
You have a HUGE problem with thermodynamics!
Take Alice and Bob, intending to participate in an EPR-Bell experiment. Neither of them knows yet what spin direction she/he is going to choose to measure. This is a decision they’ll make only at the very last second.
Superdeterminism must invoke two huge past light-cones stretching from each of them backwards to the past, to the spacetime region where the two cones overlap. If Alice and Bob are very far away, then this overlap goes back millions of years ago, extending over millions of light-years of space. Then, all – but absolutely ALL – particles within this huge area must carry – together! – the information required for a deterministic computation that can predict the two humans’ decisions, which are not yet known even to themselves (they are not even born yet). Miss one of these particles and the computation will fail.
Simple, right? Wrong. The second law of thermodynamics exacts a fundamental price for such a super-computation. It requires energy proportionate to the calculation needed: Zillions of particles, over a huge spacetime region, for predicting a minute event to take place far in the far future.
Whence the energy? How many megawatts for how many megabits? And where is the computation mechanism?
I once asked Prof. 't Hooft this question. He, being as sincere and kind as everybody who has met him knows, exclaimed: “Avshalom, I know what you are saying and it worries me tremendously!”
You want superdeterminism? Very simple. Make quantum causality time-symmetric, namely allow each of the two particles to communicate with their common origin BACKWARDS IN TIME, and there you go. No zillions of particles and light-years, only the two particles involved.
This is the beauty of TSVF. Here is one example:
https://link.springer.com/article/10.1007/s10701-017-0127-y
there are many more, with surprising new predictions.
Yours, Avshalom
Avshalom,
That's right, you can solve the problem of fine-tuning initial conditions by putting a constraint on the future, which is basically what we are doing. The devil is in the details. I don't see what the paper you refer to achieves that normal quantum mechanics doesn't.
Our paper makes retrocausality the most parsimonious explanation. No conspiracy, no need to return to the Big Bang, just a simple spacetime zigzag.
Parsimonious explanation... for what? Do you or do you not solve the measurement problem?
Parsimonious for the nonlocality paradox. Our advance concerning the measurement problem is here:
https://www.mdpi.com/1099-4300/20/11/854
We show that wavefunction collapse occurs through a multiplicity of momentary "mirage particles" of which n-1 have negative mass, such that the particle's final position occurs after the mutual cancellation of all the other positive and negative particles. The mathematical derivation is very rigorous, right from quantum theory. A laboratory confirmation by Okamoto and Takeuchi is due within weeks.
What do you mean by wavefunction collapse? Is or isn't your time-evolution given by the Schrödinger equation?
It is, done time-symmetrically. Two wave functions along the two time directions between source and absorber (pre- and post-selections). This gives you much more information about the particle during the relevant time interval than the uncertainty principle seems to allow. Under special choices of such boundary conditions the formalism yields unusual physical values: too large/small or even negative. Again the derivation is rigorous. The paper is very lucid.
Well, if your time evolution is given by the Schrödinger equation, then you are doing standard quantum mechanics. Again, what problem do you think you are solving?
Again, we solve the nonlocality problem by retrocausality, and elucidate the measurement problem by summing together the two wave-functions. No point going into the simple math here. It's all in the paper.
Avshalom Elitzur,
Correlations are normal in field theories. All stars in a galaxy orbit the galactic center. They do not perform any calculations, they just respond to the gravitational field at their location. In a Bell test you have an electromagnetic system (Alice, Bob and the source of entangled particles are just large groups of electrons and nuclei). Just like in the case of gravity, in electromagnetism the motion of charged particles is correlated with the position/momenta of other charged particles. Nature does the required computations.
No need for retrocausality.
Wrong, very wrong. All stars affect one another by gravity, a classical force of enormous scale, obeying Newton's inverse square law and propagating locally. No computation needed. With Alice's and Bob's last-minute decisions, the particles must predict their decision by computing the tiniest subatomic forces of zillions of particles over a huge spacetime region. Your choice of analogy only stresses my argument.
Avshalom Elitzur,
There is no "last-minute decision", this is the point of determinism. What we consider "last-minute decisions" are just "snapshots" from the continuous deterministic evolution of the collections of electrons and quarks Alice and Bob are made of. The motion of these charged particles inside our bodies (including our brains that are responsible for our "decisions") is not part of the information available to our consciousness, which is why our decisions appear sudden to us.
The exact way these charged particles move is not independent of the way other, distant charged particles move (as per classical electromagnetism), so I do not see why our decisions would necessarily be independent of the hidden variable.
Precisely! But this is the difference between stars and galaxies on the one hand, and particles and neurons on the other. In the former case no computation is needed because the gravitational forces are huge. In the latter, you expect an EPR particle to compute in advance the outcome of neuronal dynamics within a small brain very far away! Can you see the difference?
The magnitude of the force is irrelevant. It is just a constant in the equations. Changing that constant does not make the equations fail (as required by independence). The systems are not independent because the state of any one also depends on the state of the distant systems. This is a mathematical fact.
Moving a planet far from its star does not result in a non-elliptical, random orbit, but in a larger ellipse. You do not get independence by either playing with the strength of the force or by increasing the distance so that the force gets weaker.
So each EPR particle can infer from the present state of Alice's and Bob's past light-cone's particles what decision they will take, even arbitrarily much later?
Sabine, is this what the model says?
Avshalom Elitzur,
It's not clear if the question was directed only to Sabine, but I will answer it anyway.
In classical electromagnetism any infinitesimal region around a particle contains complete information about the state (position and momentum) of all particles in the universe in the form of electric and magnetic fields at each point. In a continuous space the number of points in any such region is infinite so you will always have enough of them. Each particle responds to those fields according to Lorentz force. That's it. The particle does not "infer" anything.
The states of particles in the universe will be correlated in a way specified by the solution of the N-body problem where N is the number of particles in the universe.
My hypothesis is that if one were to solve that N-body problem, one would find that, for any initial state, only solutions describing particle trajectories compatible with QM's predictions exist.
I think that such a hypothesis is testable by a computer simulation using a small enough N. N should be large enough to accommodate a simplified Bell test though (obviously without humans or other macroscopic objects).
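For what it's worth, a minimal sketch of the kind of toy simulation meant here: N classical charges evolved under their mutual Coulomb forces. All numbers, units and the softening below are made-up illustration values, and this is nowhere near an actual simulated Bell test.

```python
import numpy as np

# Purely illustrative toy: N classical point charges evolved under their mutual
# Coulomb forces (Gaussian-type units, k = 1).  The point is only that the end
# state is fully fixed by the initial data, so correlations between subsystems
# are inherited from the shared deterministic evolution.
rng = np.random.default_rng(0)
N, dt, steps, soft = 8, 1e-3, 5000, 0.05
charge = rng.choice([-1.0, 1.0], size=N)
mass = np.ones(N)
pos = rng.normal(scale=1.0, size=(N, 3))
vel = rng.normal(scale=0.1, size=(N, 3))

def coulomb_acceleration(pos):
    """Acceleration of each particle from the (softened) Coulomb forces of all others."""
    acc = np.zeros_like(pos)
    for i in range(N):
        diff = pos[i] - pos                                  # vectors from every particle to i
        dist = np.sqrt(np.sum(diff**2, axis=1) + soft**2)    # softened distances
        dist[i] = np.inf                                     # no self-force
        acc[i] = np.sum(charge[i] * charge[:, None] * diff / dist[:, None]**3,
                        axis=0) / mass[i]
    return acc

# Leapfrog (kick-drift-kick) integration
acc = coulomb_acceleration(pos)
for _ in range(steps):
    vel += 0.5 * dt * acc
    pos += dt * vel
    acc = coulomb_acceleration(pos)
    vel += 0.5 * dt * acc

print(np.round(pos, 3))
```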
Sorry, I am still not getting a clear answer to my simple question about the physics: Two distant experimenters are going to make a last-minute decision about a measurement, a decision which they themselves do not know yet. Then the two particles arrive, each at another experimenter, and the measurements are made. How can the measurement outcome of the one particle be correlated with the measurement taken on the other distant particle?
I (and probably other readers) will appreciate a straightforward answer from you as well as from Sabine.
Avshalom,
You are asking "Why is the initial state what it is?" but there is never an answer to this question. You make assumptions about the initial state to the end of getting a prediction. Whatever works, works. Let me ask you in return how can the measurement outcome *not* be correlated? There isn't an answer to this either, other than postulating it.
Sorry this is certainly not my question. Please let me reiterate the issue. Two distant particles, each being affected by the choice of measurement carried out AT THAT MOMENT on the distant one. Either
i) Something goes between them at infinite velocity;
ii) The universe's present state deterministically dictates the later choices in a way that also affects the particles so as to yield the nonlocal correlations;
iii) Quantum causality is time-symmetric and the distant effects go through spacetime zigzag.
Each option has its pluses and minuses. It seems that (i) has no advocates here; I am pointing out serious difficulties emerging from (ii); and advocating (iii) as the most elegant, fully according with quantum theory yet most fruitful in terms of novel predictions.
I'd very much appreciate comments along these lines. Sorry if I am nagging - I have no problem continuing the discussion elsewhere.
Avshalom,
Yours is a false dichotomy (trichotomy?). If you think otherwise, please prove that those are the only three options.
There is also, of course, no way to follow your "zig zag" from option iii. I already said several times that retrocausality is an option I quite like, but I do not think that it makes sense if the Schroedinger equation remains unmodified because that doesn't solve any problem.
I was sure that you are advocating a certain variant of (ii). Not so? What then is this (iv)?
Again, the Schrodinger equation is unmodified, only used TWICE. And the result is stunningly powerful. See this example https://www.mdpi.com/1099-4300/20/11/854
Avshalom,
If you have a time-reversible operator, postulating an early-time state, present state, or late state is conceptually identical, so I don't see the distinction you draw between ii and iii. For this reason any superdeterministic theory could be said to be retrocausal, though one may quibble about just what's "causal" here. (I think the word is generally best avoided if you have a deterministic evolution.)
The other problem with your 3-option solution is that option 3 is more specific than what is warranted. There isn't any "zig zag" that follows just from having a boundary condition in the future.
You can use as many Schroedinger equations as you want, the result will still be a linear time evolution.
Not when the 2nd Law of Thermodynamics is taken into account. Whereas (ii) invokes odd conspiracies and impossible computations with no mechanism, (iii) is free of them.
And the main fact remains: TSVF has derived a disappearance of a particle from one box, reappearance into an arbitrarily distant one and vice versa - all with certainty 1! See "The case of the disappearing (and re-appearing) particle" Nature Sci. Rep. 531 (2017). The actual experiment is underway. This derivation is obliged by quantum theory, but the fact that it has been derived only by (iii) indicates that it is not only methodologically more efficient but also more ontologically sound.
The 2nd law of thermodynamics is a statement about the occupation in state space. If you don't know what the space is and its states, you cannot even make the statement. Again, you are making a lot of assumptions here without even noticing.
I could locate another TSVF paper: https://arxiv.org/pdf/1304.7469.pdf Quote: "The photons do not always follow continuous trajectories" ... but Bohmian Mechanics requires continuous trajectories. Can you repair it?
DeleteAvshalom Elitzur,
Delete"Sorry I am still not getting a clear answer to my simple question about the the physics: Two distant experimenters are going to make a last-minute decision about a measurement, a decision which they themselves do not know yet."
As I have pointed out before, all this talk about "last-minute decisions" that "they themselves do not know yet" is a red herring. There are a lot of facts about our brains that are true, yet unknown to us. The number of neurons is such an example. If you are not even aware of the number of neurons in your head (and it is possible in principle to count them using a high-resolution MRI) why would you expect to be aware of the motion of each electron and quark? Yes, we do not know what decisions we will make and it is expected to be that way regardless of your preferred view of QM.
"Then the two particles arrive, each at another experimenter, and the measurements are made. How can the measurement outcome of the one particle be correlated with the measurement taken on the other distant particle?"
In order to understand the reason for the observed results you need to understand:
1. What initial states are possible?
2. How these initial states evolve and produce the observed experimental outcomes?
As long as those questions are left unanswered you cannot expect to have a proper explanation. Let me give you an example of a different type of non-trivial correlations that can only be explained by understanding the initial state.
You probably know that most planets in a planetary system orbit in the same direction around their star and also in the same plane (more or less). How do you explain it? There is no obvious reason those planets could not orbit in any direction and in any plane.
Once you examine the past you find out that any planetary system originates in a cloud of gas and dust, and such a cloud, regardless of its original state will tend to form a disk where all the material orbits the center of the disk in the same direction. As the planets are formed from this material it becomes clear that you expect them to move in the same direction and in the same plane.
In the case of EPR understanding the initial state is more difficult because the hidden variable and the detector settings are not statistical parameters (like the direction of rotation of a cloud of gas) but depend on the position and momenta of all electrons and quarks involved in the experiment. In other words I do not expect such a calculation to be possible anytime soon. This unpleasant situation forces us to accept that Bell tests are not of any use in deciding for or against local hidden variable theories. That decision must be based on simpler systems, like a hydrogen atom, or a molecule where detailed calculations can be performed.
How do you arrange any experimental test at all if you assume superdeterminism?
If you have a regular probabilistic theory, you can test it. If you perform an experiment that is supposed to have probability ½ of result A and ½ of result B, and you get result A 900 times out of 1000, you can conclude that there is something wrong with your theory (or maybe with your calculations).
But in superdeterminism, if you perform such an experiment, can you conclude that your theory is wrong? Maybe the initial conditions of the universe were designed to yield exactly this result.
It certainly seems to me that, if you assume superdeterminism, you cannot rely on probabilistic outcomes being correct; the whole point of superdeterminism is to get around the fact that Bell's theorem shows that quantum mechanics gives probability distributions that can't be explained by a standard classical local probability theory. So if you can't rely on probability theory in the case of the CHSH inequality, how do you justify using probability for anything at all?
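The check described above is an ordinary hypothesis test; a rough sketch with the hypothetical 900-out-of-1000 numbers from this comment:

```python
from math import comb

def upper_tail(n, k, p=0.5):
    """Probability of at least k successes in n trials at success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 900 "A" results out of 1000 trials under the p = 1/2 theory:
print(upper_tail(1000, 900))   # ~7e-162, so the p = 1/2 theory is in serious trouble
```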
Peter,
You arrange your experiment exactly the same way you always arrange your experiment. I don't know why you think there is something different in dealing with a superdeterministic theory than dealing with a deterministic theory. You can rely on probabilistic outcomes if you have a reason to think that you have them properly sampled. This is the case, arguably, for all experiments we have done so far. So, nothing new to see there.
Really, think about this for a moment. A superdeterministic theory reproduces quantum mechanics. It therefore makes the same predictions as quantum mechanics. (Or, well, if it doesn't, it's wrong, so forget about it.) Difference is that it makes *more* predictions besides that. (Because it's not probabilistic.)
"the whole point of superdeterminism is to get around the fact that Bell's theorem shows that quantum mechanics gives probability distributions that can't be explained by a standard classical local probability theory."
No, it's not because a superdeterministic theory doesn't have to be "classical" in any sense.
Peter,
A superdeterministic theory only needs to posit that the emission of entangled particles and their detection are not independent events (the way the particles are emitted depends on the way the particles are detected). Such a theory does not need to claim that all events are correlated. So my answer to your question:
"if you perform such an experiment, can you conclude that your theory is wrong? Maybe the initial conditions of the universe were designed to yield exactly this result."
is this:
If your experiment was based on emission/detection of entangled particles you may be skeptical about the conclusion that your theory is wrong. If not, you may use statistics in the usual way.
Regards,
Andrei
Sabine,
I think you have underestimated the consequences of superdeterminism. If it is true, there is no way to really test a theory against measurements, thus we do not know if our theory really describes the world.
In my opinion, to defend locality by accepting superdeterminism is not worth it. Superdeterminism is an assumption at the epistemic level, underlying the possibility to check our theory about the world, while (non-)locality is at the theory content level.
tytung,
Delete"Superdeterminism" is not a theory. It's a property of a class of models. You can only test the models. What I am saying is that if you do not actually go and develop such a model you will, of course, never be able to test it.
Sabine,
Let's say there is a God, and He/She behaves kindly at certain times and has an ugly side at other times. You want to check if this is true. But if the universe's fate is such that you always observe this God at those times He/She is kind, then you will build a theory of a kind God.
Yes, as you said, you can perform any test as usual, but due to fate, the truth is very different, and forever beyond the reach of your knowledge. So the statement that God is sometimes unkind becomes unfalsifiable if fate is accepted.
(I am an atheist.)
tytung,
I don't know what any of that has to do with me, but you seem to have discovered that science cannot tell you what is true and what isn't; it can only tell you what is a good explanation for your observations.
Sabine,
No, of course I do think Science can tell us what is true and what isn't, but this is precisely because Science implicitly assumes the freedom of choosing our measurements.
In a super-deterministic model, all measurement choices are predetermined. You are not able to falsify a theory through those measurements that it does not allow you to make. You can only build a theory through allowed measurements, and this theory may be very different from the underlying model. I think this is what concerns Shor and many others about superdeterminism.
tytung,
I know perfectly well what they are saying and I have explained multiple times why it is wrong. Science does not assume any such thing as the freedom of choosing measurements. Science is about finding useful descriptions for our observations. Also, as I have said repeatedly, the objection you raise would hold equally well for any deterministic theory, be that Newtonian mechanics or many worlds. It is just wrong. You are confused about what science can and cannot do.
I was playing backgammon with a friend who is a nuclear engineer. The subject of stochasticity came up, and how, if one were able to compute the motion of dice deterministically from the initial conditions of a throw, stochastic or probabilistic behavior could be vanquished. I mentioned that if we had quantum dice this would not be the case. Superdeterminism would counter that such predictions are possible, at least in principle.
It is tough to completely eliminate any possible correlation that might sneak the obedience to Bell inequalities into quantum outcomes. If we are to talk about the brain states of the experimenter then in some ways for now that is a barrier. On the past light cone of the device and the experimenter at the moment of measurement are many moles of quantum states, or putative superdeterministic states. This gets really huge if the past light cone is extended back to the emergence of the observable universe. Trying to eliminate causal influences of such is a game of ever more gilding the lily. Physical measurements are almost by necessity local, and with QM the nonlocality of the wave function plays havoc with the measurement locality. I would then tend to say that at some point one has to say there are no effective causal influences that determine a quantum outcome on a FAPP basis.
My sense then is that superdeterminism is most likely not an effective theory. Of course I could be wrong, but honestly I see it as a big non-starter.
Sabine,
ReplyDelete"...they are non-local, which makes my hair stand up..."
Well, reality may not be concerned with your coiffure. Isn't this the kind of emotional reaction to an idea your book argues against?
More specifically, why is superdeterminism more palatable to you than nonlocality? Superficially they seem similar: the usual notion of nonlocality refers to space, while superdeterminism is nonlocality in time.
Andrew,
Please excuse the flowery expression; I was trying to avoid repeating myself. It's not consistent with the standard model, that's what I am saying. Not an emotional reaction but what Richard Dawid calls the meta-inductive argument. (Which says, in essence, stick close to what works.)
Andrew,
I can give you many examples of physical systems that are not independent: stars in a galaxy, planets and their star in a planetary system, electrons and the nucleus in an atom, synchronized clocks, etc.
How many examples do you have where the speed of light has been exceeded?
Sabine,
Not to be dragged onto the slippery slope of a never-ending “free will” discussion, I would suggest excluding a human experimenter right from the beginning and using two random number generators (RNGs) instead (either pseudo-RNGs or quantum RNGs or CMB photon detection from opposite directions, ...).
[These RNGs select (statistically independently) e.g. the orientations of the polarization filters in the Bell-type experiment.]
Otherwise it will distract from what I understand your intention is, namely to show in a yet unknown model
“... that the detectors’ states aren’t independent of the system ...” and that this yet unknown model will predict more than QM can.
Reimond,
Yes, that's right, alluding to free will here is unnecessary and usually not helpful. On the other hand, I think it is relevant to note that this seems to be one of the reasons people reject the idea, that they are unwilling or unable to give up on free will.
Conway and Kochen showed how QM by itself had conflicts with free will. This is without any superdeterminism idea.
The switch thrown by an experimenter could be replaced with a radioactive nucleus or some other quantum system that transitions to give one choice, or, if there is no decay within some interval of time, the other choice is made.
I think in a way we have to use the idea of a verdict "beyond a reasonable doubt" used in jurisprudence. We could in principle spend eternity trying to broom out some superdeterministic event. We may not be able to ever rule out how a brontosaurus, I think now called Apatosaurus, farted 100 million years ago and the sound generated phonons that have reverberated around and the hidden variable there interacted with an experiment. Maybe a supernova in the Andromeda galaxy 2.5 million years ago launched a neutrino that interacts with the experiment, and on it can go. So going around looking for these superdeterministic effects strikes me as George W Bush doing his "comedy routine" of looking for WMD.
Lawrence Crowell,
The problem is that the assumption of statistical independence can be shown to be wrong for all modern theories (field theories). A charged particle does not move independently of other charged particles, even if those particles are far away. Likewise, in general relativity a massive body does not move independently of other massive bodies. What exactly makes you think that in a Bell test the group of charged particles (electrons and nuclei) that make up the source of the entangled particles evolves independently of the other two groups of charged particles (Alice and Bob)?
The proposal here is that everything in a sense is slightly entangled with everything else. That I have no particular problem with, though this causes a lot of people to get into quantum-woo-woo. An entanglement sum_i p_i |ψ_i>|φ_i> for some small probability p_i for a mixture is an aspect of decoherence. So in the states of matter around us are highly scrambled or mixed entanglements with states that bear quantum information from the distant past. I probably share tiny entanglements or quantum overlaps with states bearing quantum information held by states composing Abraham Lincoln or Charlemagne or Genghis Khan, but can never tractably find those.
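To put a number on "slightly entangled", here is a toy two-qubit illustration (the small admixture eps is an arbitrary made-up value):

```python
import numpy as np

# Two qubits, mostly |00> with a small |11> admixture -- "slightly entangled".
eps = 0.01
psi = np.zeros(4)
psi[0] = np.sqrt(1 - eps**2)   # amplitude of |00>
psi[3] = eps                   # amplitude of |11>

rho = np.outer(psi, psi).reshape(2, 2, 2, 2)   # density matrix, indices (i,k,j,l)
rho_A = np.einsum('ikjk->ij', rho)             # trace out the second qubit
evals = np.linalg.eigvalsh(rho_A)
entropy = -sum(p * np.log2(p) for p in evals if p > 1e-15)
print(entropy)   # small but nonzero entanglement entropy (~0.0015 bits for eps = 0.01)
```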
The superdeterminist thesis would be there are subquantum causal effects in these massive mixed entanglements which can trip up our conclusions about violations of Bell inequalities. This would mean such violations really occur because we can't make a real accounting of things. This is where I think an empirical standard similar to the legal argument of a verdict “beyond all reasonable doubt” must come into play. People doing these Bell experiments, such as forms of the Aspect experiment, try to eliminate classical influences sneaking in. I think Zeilinger made a statement a few years ago that mostly what is left is with the mind of the observer. If we are now to look at all weak entanglements of matter in an experimental apparatus to ferret out possible classical-like superdeterministic causes the work is then almost infinite.
Lawrence Crowell,
The way I see EPR correlations explained in a superdeterministic context is as follows:
Under some circumstances, a large number of gravitating bodies correlate their motion in such a way as to create a spiral galaxy.
Under some circumstances, a large number of electromagnetically interacting objects (say water molecules) correlate their motion in such a way as to create a vortex in the fluid.
Under some other circumstances, a large number of electromagnetically interacting objects correlate their motion in such a way as to produce EPR-type correlations.
As long as the requirements for the above physical phenomena are met, spiral galaxies, vortices or EPR correlations will be present. I think this hypothesis has the following implications in regard to the arguments you presented:
1. There is no specific cause, no singular event in the past that explains the correlations (like the fart of the apatosaurus). The correlations are "spontaneously" generated as a result of how many particles interact.
2. The efforts of Aspect or Zeilinger are not expected to make any difference because there is no way to change how those many particles interact. Electrons behave like electrons, quarks behave like quarks regardless of how you set the detector, whether a human presses a button, or a monkey or a dog, or a computer running some random-number algorithm. The correlations arise from a more fundamental level that has nothing to do with the macroscopic appearance of the experimental setup.
Physics such as vortex flow or turbulence is primarily classical. I am not an expert on the structure of galaxies, but from what I know the spiral shape occurs from regions of gas and dust that slow the passage of stars through them. Galactic arms then do not rotate around as does a vortex, but the spiral arms are somewhat fixed in their shape. In fact this is what the whole issue with dark matter is over.
I also think the quantum phenomena are in essence classical and could be explained by a classical theory, more exactly a classical field theory. The best candidate is, I think, stochastic electrodynamics.
DeleteThis is unlikely. Bell's inequalites tell us how a classical system is likely to manifest a probability distribution. Suppose I were sending nails oriented in a certain direction perpendicular to their path. These then must pass through a gap. The orientation of the nails with gap would indicate the probability. We think of the orientation with respect to the slats as similar to a spinning game, such as the erstwhile popular TV game show Wheel of Fortune. If the nails were oriented 60 degrees relative to the slats we would expect the nails to have a ⅔ chance of passing through. Yet the quantum amplitude for the probability is cos^2(Ï€/3) = .25. The classical estimate is larger than the actual quantum probability. This is a quick way of seeing Bell inequality, and the next time you put those RayBan classes with polarizing lenses on you are seeing a violation of Bell inequaltieis.
Physics has quantum mechanics that is an L^2 system, with norm determined by the square of amplitudes that determine probabilities. Non-quantum mechanical stochastic systems are L^1 systems. Here I am thinking of macroscopic systems that have pure stochastic measures. For convex systems or hulls with an L^p measure there is a dual L^q system such that 1/p + 1/q = 1. For quantum physics there is a dual system, it is general relativity with its pseudo-Euclidean distance. For my purely stochastic system the dual system is an L^∞ system, which is a deterministic physics such as classical mechanics of Newton, Lagrange and Hamilton. There is a fair amount of mathematics behind this, which I will avoid now, but think of the L^∞ system as one where there are no distributions fundamental to the theory. Classical physics and any deterministic system, say a Turing machine, is not about some distribution over a system state. The classical stochastic system is just a sum of probabilities, so there is no trouble with seeing that as L^1. The duality between quantum physics and spacetime physics is very suggestive of some deep physics.
In this perspective a quantum measurement is then where there is a shifting of the system from p = ½ to p = 1, thinking of a classical-like probability system after decoherence, or with the einselection this flips to an L^∞ system as a state selection closest to the classical or greatest expectation value. I read last May about how experiments were performed to detect how a system was starting to quantum tunnel. This might be some signature of how this flipping starts. It is still not clear how this flipping can be made into a dynamical principle.
Avshalom pointed out an issue with thermodynamics, which I think is germane to this. With superdeterminism there would be more information in quantum systems, and this would lead to problems with the second law of thermodynamics and probably violations of entropy bounds.
I looked at Motl's blog last evening, and he declares with confidence there is no measurement problem. Of course he is steeped in his own certitude on most things, such as everything from climate change to string theory. In the spirit of what he calls an “anti-quantum zealot” I think it is not likely he has this all sewed up and hundreds of physicists who think otherwise are all wrong. Motl likes Bohr, but as Heisenberg pointed out there is an uncertain cut-off between what is quantum and what is classical. So Bohr and CI are an interpretation with holes. There is lots of confusion over QM, and even some of the best of us get off the rails on this. I have to conclude that 't Hooft went gang aft agley on this as well.
Lawrence Crowell,
Delete"Bell's inequalites tell us how a classical system is likely to manifest a probability distribution."
This is not true. Bell's inequalities tell us how a system that allows a decomposition into independent subsystems should behave. As I have argued many times on this thread, classical field theories (like classical electrodynamics) are not of this type. No matter how far you place two or more groups of charged particles they will never become independent; this is a mathematical fact. So, the statistical independence assumption does not apply here (or at least it cannot be applied for the general case). In other words classical field theories are superdeterministic, using Bell's terminology. This means that, in principle, one could violate Bell's inequalities with classical systems even in theories like classical electromagnetism.
Stochastic electrodynamics is a classical theory (please ignore the Wikipedia info, it has nothing to do with Bohm's theory), in fact it is just classical electrodynamics plus the assumption of the zero-point field (a classical EM field originating at the Big-Bang). In such a theory one can actually derive Planck's constant and the quantum phenomena (including electron's spin) are explained by the interaction between particles and the zero-point field. Please find an introductory text here:
Boyer, T.H. Stochastic Electrodynamics: The Closest Classical Approximation to Quantum Theory. Atoms 2019, 7, 29.
The theory is far from completely reproducing QM but it is a good example of a theory I think is on the right track.
"Avshalom pointed out an issue with thermodynamics, which I think is germane to this. With superdeterminism there would be more information in quantum systems, and this would lead to problems with the second law of thermodynamics and probably violations of entropy bounds."
Please take a look at this paper:
Classical interpretation of the Debye law for the specific heat of solids
R. Blanco, H. M. França, and E. Santos
Phys. Rev. A 43, 693 – Published 1 January 1991
It seems that entropy could be correctly described in a classical theory as well.
"I looked at Motl's blog last evening, and he declares with confidence there is no measurement problem."
I have discussed with him a few ideas regarding the subjective view of QM he is proposing. I had to conclude that the guy has no clue about what he is speaking about. He is completely confused about the role of the observer (he claims that the fact that different observers observe different things proves the absence of objective reality), he contradicts himself when trying to explain EPR (one time he says the measured quantity does not exist prior to measurement, then he claims the contrary). He claims there are observer independent events, but when asked he denies, etc. I cannot evaluate his knowledge about strings, I suppose he is good but his understanding of physics in general is rudimentary.
All this being said, I actually agree with him that there is no measurement problem, at least for classical superdeterministic theories. The quantum state reflects just our incomplete knowledge about the system but the system is always in a well-defined state.
Lawrence Crowell: "but from what I know the spiral shape occurs [...]"
That's not how spiral arms form, nor how they evolve. You'll find textbooks (and papers) which confidently tell you the answers to spiral arm formation and evolution; when you dig into the actual observations, you quickly realize that there's almost certainly more than one set of answers, and plenty of galaxies which simply refuse to be neatly pigeon-holed.
@ JeanTate: I am not that versed on galactic astrophysics. However, this wikipedia entry
https://en.wikipedia.org/wiki/Spiral_galaxy#Origin_of_the_spiral_structure
bears out what I said for the most part. These are density waves, which are regions of gas and dust that slow the orbital motion of stars and compress gas entering them. So I think I will stick to what I said above. I just forgot to call these regions of gas density waves.
As for the evolution of galactic structure, I think that is a work in progress and not something we can draw a lot of inferences from.
The Bell inequalities refer to classical probabilities.
I read a paper on stochastic electrodynamics last decade. As I recall the idea is that the electric field has a classical part plus a stochastic variation E = E_c + δE, where (δE(t)) = 0 and (δE(t')δE(t)) = E^2 δ(t' - t) if the stochastic process is Markovian. BTW, I am using parentheses for bra and ket notation because a lot of these blogs do not like caret signs. I think Milgrom did some analysis of this sort. If the fluctuations are quantum then this really does not change the quantum nature of things.
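A toy discretization of the noise ansatz recalled here (E0, dt and the grid size are made-up illustration values; on a time grid the delta-correlator turns into a per-step variance of E0^2/dt):

```python
import numpy as np

rng = np.random.default_rng(1)
E0, dt, n = 1.0, 1e-3, 100_000
E_c = 0.5                                    # constant "classical" part E_c
dE = rng.normal(0.0, E0 / np.sqrt(dt), n)    # white-noise part, <dE> = 0
E = E_c + dE                                 # E = E_c + dE

print(dE.mean())                 # ~0
print((dE * dE).mean() * dt)     # ~E0**2, consistent with <dE(t') dE(t)> = E0**2 * delta(t'-t)
```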
Quantum mechanics really tells us nothing about the existential nature of ψ. Bohr said ψ had no ontology, but rather was a prescription for determining measurements. Bohm and Everett said ψ does exist. The problem is with trying to make something ontological that is complex valued. Ontology seems to reflect mathematics of real valued quantities, such as expectations of Hermitean operators. Yet epistemic interpretations leave a gap between the quantum and classical worlds. I think this is completely undetermined; there is no way I think we can say with any confidence that quantum waves are ψ-epistemic or ψ-ontic. As a Zen Buddhist would say, MU!
Motl is a curious character, and to be honest I think that since he places a lot of ideology and political baggage ahead of actual science that his scientific integrity is dubious. I agree his stance on QM is highly confused, and based on his definition I am proud to be what he calls an anti-quantum zealot. He also engages in a lot of highly emotional negativity towards people he disagrees with. He will excoriate a physicist for something, but then for some reason found great interest in a 15 year old girl who does political videos that are obscene and in your face. His blog is useful for looking at some of the papers he references. I would say you can almost judge string theory from his blog; string may be a factor in physical foundations, but the vast number of stringy ideas illustrate there is no "constraint" or something that defines what might be called a contact manifold on the theory.
@ Lawrence Crowell: the Wikipedia article is, I think, a reasonable summary of some aspects of the topic of spiral arms in galaxies and their formation and evolution ... but it's rather out of date, and quite wrong in places. Note that density waves are just one hypothesis; the WP article mentions a second one (the SSPSF model), and there are more in the literature. Curiously, a notable morphological feature of a great many spiral galaxies is omitted entirely (rings).
This is all rather OT for this blogpost, so just one more comment on this topic from me: galaxies are not like charm quarks, protons, or atoms.
Typo: "While philosophers on occasional".
Thanks, fixed that!
Two folks whom I admire greatly, John Bell and Gerard 't Hooft, have shown an interest in superdeterminism for resolving entanglement correlations — Bell obliquely, perhaps, and 't Hooft both recently and very definitely.
I understand and even appreciate the reasoning behind superdeterminism, but my poor brain just cannot accept it for a very computer-think kind of reason: efficiency.
Like Einstein's block universe, superdeterminism requires pre-construction of a 4-dimensional causality "crystal" or "block" to ensure that all physics rules are followed locally. Superdeterminism simply adds a breathtakingly high new level of constraints onto pre-construction of this block: human-style dynamic state model extrapolation ("thinking") and qualia (someday physics will get a clue what those are) must be added to the mix of constraints.
But why should accepting superdeterminism be any worse than believing in a block universe, which most physicists already do — relativists via Einstein's way of reconciling diversely angled foliations, and quantum physicists via schools of thought such as Wheeler-Feynman advanced and retarded waves?
It is not.
That is, superdeterminism is not one whit less plausible than the block universe that most physicists already accept as a given. It just adds more constraints, such as tweaking sentient behavior. Pre-construction of the block universe is by itself so daunting that adding a few more orders of magnitude of orders of magnitude of complexity to create a "superblock" universe cannot by itself eliminate superdeterminism. In for a penny, in for a pound! (And isn't it odd that we Americans still use that expression when we haven't used pounds for centuries, except in the even older meaning of the word for scolding ourselves about eating too many chips?)
Here is why efficiency is important in this discussion: If you are a scientist, you must explain how your block universe came into existence. Otherwise it's just faith in a mysterious Block Creator, the magnitude of whose efforts makes a quick build of just seven days pretty puny by comparison.
The mechanism needed is easy to identify: It's the Radon transform, the algorithm behind tomography. To create a block universe, you apply the Radon transform iteratively over the entire universe for the entirety of time, shuffling and defuzzing and slowly clarifying the world lines until a sharp, crystallized whole that meets all of the constraints is obtained.
This is why I can respect advocates of superdeterminism, and can agree fully that it is unfair for superdeterminism not to be taught. If you accept the block universe, you have already accepted the foundations of superdeterminism. All you need to do is apply the Radon sauce a bit more liberally!
My problem? I don't accept the necessity of the block universe.
I do however accept the concept (my own as best I can tell) of causal symmetry, by which I mean that special relativity is so superbly symmetric that if you take the space-like details of any given foliation, you have everything you need to determine the causal future of the universe for any and all other foliations.
However, causal symmetry is a two-edged sword.
If _any_ foliation can determine the future, then only _one_ foliation is logically required from a computational perspective. It doesn't make other foliations any less real, but it does dramatically simplify how to calculate the future: You look at space around _now_, apply the laws of physics, and let universe itself do the calculation for you.
But only once, and only if you accept entanglement as proof that classical space and classical time are not the deepest level of how mass-energy interacts with itself. Space and time just become emergent features of a universe where that most interesting and Boltzmannian of all concepts, information and history, arose with a Big Bang, and everything has been moving on at a sprightly pace ever since.
A more promising way of attempting a deterministic extension of quantum mechanics seems to me a theory with some mild form of non-locality, possibly wormholes?
Leonard Susskind has advanced the idea of ER = EPR, or that the Einstein Rosen bridge of the Schwarzschild solution is equivalent to the nonlocality of EPR. This is a sort of wormhole, but not traversable. So there are event horizons that make whatever causal matter or fields there are unobservable and not localizable to general observers.
Very hard to make compatible with Lorentz-invariance. (Yes, I've tried.)
Sabine, are you saying that a wormhole would violate Lorentz-invariance? Why would that be?
Well, one wormhole would obviously not be Lorentz-invariant, but this isn't what I mean. What I mean is if you introduce any kind of non-locality via wormholes and make this Lorentz-invariant, you basically have non-locality all over the place which is more than what you may have asked for. Lorentz-invariance is a funny symmetry. Very peculiar.
This problem is related to traversable wormholes. A traversable wormhole with two openings that are within a local region is such that, for the observer passing through, there is a timelike path connecting their initial and final positions. To an observer who remains outside, these two points are connected by a spacelike interval. An elementary result of special relativity is that a timelike interval cannot be transformed into a spacelike interval. But if we have a multiply connected topology of this sort there is an ambiguity as to how any two points are connected by spacelike and timelike intervals.
Ok, one might object, special relativity is a global flat theory, wormholes involve curvature with locally Lorentzian regions. However, the ambiguity is not so much with special vs general relativity but with the multiply connected topology. This matter becomes really odd if one of the wormhole openings is accelerated outwards and then accelerated back. For a clock near the opening there is the twin paradox issue. It is then possible to pass through the wormhole, travel back by ordinary flight and arrive at a time before you left. Now there are closed timelike loops. The mixing of spacelike and timelike intervals becomes a horrendous mix, as now timelike and spacelike regions overlap.
Traversable wormholes also run afoul of quantum mechanics as I see it. A wormhole converted into a time machine as above would permit an observer to duplicate a quantum state. The duplicated quantum state would emerge from an opening, and then later that observer had better throw one of these quantum states into the wormhole to travel back in time to herself. The cloning of a quantum state is not a unitary process, and yet in this toy model we assume the quantum state evolves in a perfectly unitary manner. Even with a nontraversable wormhole one might think there is a problem, for Alice and Bob could hold entangled pairs and Alice enters the black hole. If Bob times things right he could teleport a state to Alice, enter the black hole, and meet Alice so they have duplicated states without performing a LOCC operation. However, nature is more consistent here, for Bob would need a clock so precise that it would have a mass comparable to the black hole. That would then perturb things and prevent Bob from making this rendezvous.
Thank you, Sabine, for explaining why wormholes violate Lorentz-invariance, and thank you, Lawrence, for the detailed clarification of what Sabine meant. I want to give your responses some more thought, but currently the heat and humidity are not conducive to deep thinking. I'll await the passage of a cold front in a few days before exercising the brain muscle.
I think you should re-read Tim Maudlin's book more carefully. He spent a lot of paragraphs explaining that superdeterminism has nothing to do with "free will".
I didn't say it does. I said that many people seem to think it does but that this is (a) not a good argument even if it was correct and (b) it is not correct.
Superdeterminism is a difficult concept for a lay person to unpack. To help understand it, I ask whether it operates on the classical, "visible" world of things or, as the quantum was sometimes evaluated, exists only at the atomic and subatomic level?
In short, do "initial conditions" govern ALL subsequent events or only events at the particle level? If the first is the case then does superdeterminism complicate our understanding of other scientific endeavors?
Here is an example of what I mean. The evolution of modern whales from a "wolf-like" terrestrial creature is sufficiently documented that I kept a visual of it in my classroom. Biological evolution is understood to operate through random genetic changes (e.g. mutations or genetic drift) that are unforeseeable and that are selected for by a creature's environment. If true randomness is removed from the theory then evolution becomes teleological -- and even smacks of "intelligent design." I mean that in this instance (the whale) its ultimate (for us) phenotype was an inevitable goal of existence from when the foundation of creation was laid.
The same may be said of any living thing and, if so, evolutionary biology becomes a needless complication. Leopards were destined to have spots from the first millisecond of the Big Bang and camouflage had nothing to do with it.
(In a similar vein I once wrote that railroad stations are always adjacent to railroad tracks because this, too, was destined by the initial conditions of 13.8 billion years ago and not because human engineers found such an arrangement optimal for the swift and efficient movement of passengers.)
So . . . are interested lay people better advised to understand superdeterminism as a "quirk" of very small things, or as an operating principle for the universe without regard to scale?
Thank you.
A. Andros,
The theories that we currently have all work the same way and in them the initial conditions govern all subsequent events, up to the indeterminism that comes from wave-function collapse.
Yes, if you remove the indeterminism of quantum mechanics then you can predict the state at any time given any one initial state. An initial state doesn't necessarily have to be an early time, it could be a late time. (I know this is confusing terminology.)
No, this is not intelligent design as long as the theory has explanatory power. I previously wrote about this here. Please let me know in case this doesn't answer your question.
Intelligent design requires a purpose, the assumption that the state at the Big-Bang was "chosen" so that you get whales.
Determinism just states that the state at the Big-Bang is the ultimate cause for the existence of the whales. If that state were different, different animals would have evolved. What is the problem? If the mutations were truly random, a different mutation would play exactly the same role as the different initial state.
The railroad stations are always adjacent to railroad tracks because they both originate from the same cause (they were planned together by some engineer). That plan is also in principle traceable to the Big-Bang. Again, I do not see what the problem is supposed to be.
It must be...
"if so, evolutionary biology becomes a needless complication. Leopards were destined to have spots from the first millisecond of the Big Bang and camouflage had nothing to do with it"
Is that right?
For a railroad station to be located adjacent to railroad tracks (a silly example, I know, but illustrative) the number of "coincidences" to flow from initial conditions (under superdeterminism) is so staggering as to strain belief. We must believe that 13.8 billion years ago the universe "determined" that all the following would coincide in time and place on just one of trillions (?) of planets: the invention of the steam locomotive; the manufacture of steel rails; the surveying of a rail route; appropriate grading for the right-of-way; the delivery of ballast to that right-of-way; the laying of that ballast; the cutting and processing of cross-ties; the shipment of cross-ties to the right place; the laying of cross-ties; the laying of rails; the installation of signals; the decision to build a railroad station; the creation of architectural plans for such a station; the simultaneous arrival of carpenters and masons at the site of the proposed station . . . and so on. All of these factors, and numerous others, were on the "mind" of the infant universe at the Big Bang?
As an alternative, one can believe that the various contractors and engineers have AGENCY and thus could create the station/tracks.
And, if the Rolling Stones were superdetermined to hold a performance at such-and-such a place did the infant universe also determine that 50,000 individuals would show up at the right place and time, each clutching an over-priced ticket?
I wonder whether superdeterminism is just an attempt to get around Bell's Theorem and Quantum Mechanics and so restore a form of classical physics. And, since the concept cannot be tested -- is it not similar to string theory or the multi-verse?
Dr. Richard Dawkins showed how random events can give the impression of purpose. Superdeterminism, though, hints at purpose disguised as random events.
As for the leopard and his spots . . . since everything that exists or acts results from initial conditions then biological evolution violates the teachings of the late Mr. Occam: it is simply an unnecessary embellishment of what was inevitably "meant to be."
Now, I don't doubt evolution for a moment -- but it is not consistent (because of its random nature) with a superdetermined universe.
Also . . . I thank Dr. H. for her helpful and kind reply to my note.
A. Andros,
Big-bang created the engineer, the engineer created both railroad stations and tracks. Railroad stations and tracks were not independently created at the Big-Bang. The correlations between them are perfectly explainable by their common cause.
Evolution does not require randomness, only multiple trials. You could do those trials in perfect order (mutate the first base, then the second and so on) and still have evolution.
Superdeterminism can be tested using computer simulations.
To belabor my point, the Big Bang did not create the engineer who "created both railroad stations and tracks." These are separate engineering feats requiring different skills. Literally hundreds of separate skills/decisions must perfectly coincide in time and place for such projects to exist. This is simply asking too much of coincidence.
There is no need for "multiple trials" in evolution if the phenotype of an organism was dictated at the Big Bang. What is there to submit to trial? The outcome 13.8 billion years after creation was inevitable from the start --- what is there to evolve?
As for testing superdeterminism on computers, the outcomes of such simulations must also be dictated by initial conditions at the Big Bang and so are unreliable -- which, incidentally, seems to be Dr. H.'s opinion of numerous experiments that validate Prof. Bell's conclusions -- they cannot be trusted because the "fix" is in.
If one believes in superdeterminism then one is stuck with something akin to intelligent design. Or, did the 3.5 million parts of the Saturn V, designed by thousands of engineers and all working perfectly together, originate in a "whim" of the Big Bang?
It seems that superdeterminism provides a possible way out of Bell's theorem and quantum indeterminacy. But, it does so by endowing the universe with purpose and something like "foresight." IBM was down a fraction last week -- is that due to "initial conditions" almost 14 billion years ago?
A. Andros
“…does superdeterminism complicate our understanding of other scientific endeavors?”
Yes, and thanks for broadening the discussion to include complications that superdeterminism would present for understanding the how and why of complex biological systems. Your examples are part of a nearly endless list of biological phenomena for which there is presently a strong explanatory rationale within evolutionary biology. These interrelated rationales would be completely undercut by a theory of superdeterminism and left hanging for some ad hoc explanation. As you note, evolutionary biology then becomes a needless complication. Going further, the existence of these unnecessary phenomena (spots on leopards) would in fact violate the least action principle of physics that governs state transitions. It doesn't hang together.
Granted, we cannot free ourselves from the constraints created by the laws of physics, but neither can we free ourselves from the addendum of further constraints created by biological systems.
“In general, complex systems obviously can bring new causal powers into the world, powers that cannot be identified with causal powers of more basic, simpler systems. Among them are the causal powers of micro-structural, or micro-based, properties of a complex system” — Making Sense Of Emergence, Jaegwon Kim, (f.n. #37)
http://www.zeww.uni-hannover.de/Kim_Making Sense Emergence.1999.pdf
Evolutionary biology is a stepwise saga of contesting futures. Similarly, science is a stepwise saga of contesting theories. The fitness criteria are eventually satisfied.
A. Andros,
The purpose of a superdeterministic theory is to reproduce QM. You have provided no evidence that, mathematically, this is not possible. If you have such evidence please let me know. So, as long as all your examples with railroads and the evolution of animals are compatible with QM, they will also be compatible with a superdeterministic interpretation of QM.
If one accepts superdeterminism then one must accept Intelligent Design. It may be that your mathematics do not align with 150 years of study of biological evolution but if that is the case then all the worse for superdeterminism. And, as I have said, the "coincidences" necessary for a superdeterministic universe to produce complex human interactions on a whim of initial conditions in a universe some 13.8 billion years ago simply strains credulity. There is ample evidence that biological evolution is, in large part, powered by random genetic events that provide the raw material on which natural selection works. Since superdeterminism disallows anything that is random then it, perforce, is a form of Creationism. All the math in the world, however cleverly employed, seems unlikely to verify a Genesis-like account of organic diversity over time. One might even become lost in math.
"If one accepts superdeterminism then one must accept Intelligent Design."
This is complete rubbish.
Forty years ago Fred Hoyle delighted fundamentalists by stating that the complexity of DNA is such that it was as likely to have evolved spontaneously in nature as the chances of a tornado assembling a 747 by sweeping through a junk yard. Superdeterminism does the tornado one better -- it assembles entire technological civilizations from initial conditions of the universe. Since you seem to allege that this is due neither to random events (which SD prohibits) nor agency (also excluded by SD), then what device is left to explain complex social and technological systems other than intelligent design?
SD (superdeterminism) is simply a restatement of Augustinian theology -- as amplified by John Calvin. The term SD could just as easily be relabeled as "predetermination" and it would be fully acceptable in 17th century Geneva.
And, as part of the intelligent design, biological evolution simply goes out the window. I was taught that organisms evolve due to random genetic and environmental changes -- but SD insists that "randomness" does not exist. Thus, organisms cannot evolve and each living thing is created de novo -- as declared in Genesis. Since the universe decreed the emergence of, say, giraffes from the moment of it (the universe's) inception then evolution becomes so much hand-waving.
Your comment that my remarks are "complete rubbish" contains no arguments. I am merely trying to learn here. I am mystified how Dr. H. can believe that fantastically complex events in history (our 747, for example) could be implicit in initial conditions without intelligent design or random events or agency.
I also hope to learn from you how the gradual evolution of one organism into another, somewhat different organism over geological time is possible if the phenotype is implicit in the infant universe. This is simply teleology.
For all the math in the world, SD seems to simply unpack Augustinian/Calvinistic thought and "science" and marshal it for battle against the peculiar findings of Mr. John Bell.
I enjoy your column (and book) greatly. But, the implications of SD are not easily dismissed -- or believed (unless one is an Augustinian and then it all makes perfect sense.)
Regards.
A. Andros:
"I am mystified how Dr. H. can believe that fantastically complex events in history (our 747, for example) could be implicit in initial conditions without intelligent design or random events or agency."
If you have a time-reversible evolution, they are always present in the initial conditions, as in the present state, and in any future state. If you want to call that intelligent design, then Newtonian mechanics and many worlds are also intelligent design.
Sabine,
I believe the point is that the design of today's complexity of biological structure and human artifact had to evolve as part of a natural process or be created by fiat of some agency. Unless the past is some kind of reverse engineering of the present.
Ilya Prigogine believed that the very foundations of dissipative structures [biological systems] impose an irreversible and constructive role for time. In his 1977 Nobel Lecture, he noted in his concluding remarks, "The inclusion of thermodynamic irreversibility through a non-unitary transformation theory leads to a deep alteration of the structure of dynamics. We are led from groups to semigroups, from trajectories to processes. This evolution is in line with some of the main changes in our description of the physical world during this century."
A good article with experiments and references here:
https://aip.scitation.org/doi/full/10.1063/1.5008858
And consider that one photon in a thousand striking the earth's surface will land upon a chloroplast, which is about one fifth of the diameter of a human hair and comprised of some three thousand proteins. Significantly, as the excitation moves through the photosynthetic process, one of these proteins is triggered to block it from returning to its previous state. Not to say that that process itself could not, in theory, be rewound, but ingenious biology.
Thanks
I seem to recall Ian Stewart remarking in one of his books that one way Bell's proof can be challenged is via its implicit assumption of non-chaotic dynamics in evolution of hidden variables. Any mileage to that?
Maybe he talked to Tim Palmer :p
No assumption is made of non-chaotic evolution of the hidden variables. Just go through the proof, and you'll see that.
Last paragraph, "this open the door ..."
Should be "this opens the door ..."
Thanks for spotting; I have fixed that!
Sabine, do you think it possible for a superdeterminism model to get some place useful without incorporating gravity into the equation? Offhand, it seems unlikely to me, since any kind of progress on the measurement issue would have to answer the gravitational Schrodinger's cat question, discussed on your blog at some point. Namely, where does the gravitational attraction point while the cat is in the box?
Sergei,
I don't see how gravity plays a role here, just by scale. Of course it could turn out that the reason we haven't figured out how to quantize gravity is that we use the wrong quantum theory, but to me that's a separate question. So the answer to your question is "yes".
Not John "Steward" Bell, but John Stewart Bell.
Ah, very attentive! I have corrected this.
If the theory that space-time is a product of a multitude of interconnected entangled space-time networks is correct, then it might be reasonable to suspect that any test of the state of a component of one of those networked entangled components of reality would be unpredictable as probed by an experimenter. The entangled nature of the network is far too complex and beyond the ability of the experimenter to determine. This complexity leads to the perception that the experiment is inherently unpredictable.
To get a reliable test result, the experimenter may need to join the entangled network whose component the experimenter is interested in testing. All the equipment that the experimenter intends to use in the experiment would be required to be entangled with the network component under test. The test instrumentation would now be compatible and in sync with the entangled component and be in a state of superposition with the entangled network that the component belongs to. During experiment initialization, the process of entangling the test infrastructure would naturally modify the nature of the network to which the component to be tested belongs as that infrastructure joins the common state of superposition.
As in a quantum computer, the end of the experiment is defined by the decoherence of the coherent state of the experiment. Then upon decoherence of the experiment, the result of the experiment can be read out.
"...and the latter wants to discretize state space. I don’t see the point in either." There is no mathematically rigorous definition of the continuum: https://www.youtube.com/watch?v=Q3V9UNN4XLE
Has anyone considered the possibility of spacetime itself being wavelike instead of trying to attach wavelike properties to particles? That is to say, the energy in space(time) varies sinusoidally over duration? This wave/particle duality seems to be a major problem with all "interpretations". This notion might define the "pilot wave".
Yes, wave structure of matter, started by Milo Wolff, proposes that what we observe as particles is the combination of an in-coming wave and an out-going wave. The combination of these two waves (which are more fundamental and are the constituents of both photons and matter waves) removes wave-particle duality issues, singularities of near-field behavior (including the need for renormalization), and many other issues. The two best features that I see are the ability to calculate the mass of standard particles and the wave nature of gravity (we did measure gravitational waves in 2015) joining nicely with the wave nature of particles.
What I don't understand about this superdeterminism is why, with all the fine tuning necessary to simulate non-locality, the mischief-maker responsible didn't go that little bit further and ensure my eggs were done properly this morning. It seems an extravagant degree of planning to achieve what?
Dan,
What makes you think fine-tuning is necessary? Please be concrete and quantify it.
Dan,
Correlations between distant systems are a consequence of all field theories (general relativity, classical electromagnetism, fluid mechanics, etc.). The correlations need not be explained by any fine-tuning of the initial conditions. For example, the motions of a planet and its star are correlated (one way of detecting distant planets is by looking at the motion of the star) but this is not a result of a fine-tuning. All planetary orbits are ellipses. The reason they are not triangles or spheres has nothing to do with the initial state.
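[Editor's aside: a minimal numerical sketch of this common-cause point, a toy model of my own and not anything from the comment. The planet's velocity is drawn at random, the star's velocity is then fixed by momentum conservation, and the two come out perfectly correlated without any tuning of the initial draw.]

```python
import random

random.seed(0)
m_star, m_planet = 1.0, 0.001          # arbitrary masses in arbitrary units
v_planet = [random.uniform(-30, 30) for _ in range(10_000)]   # random "initial conditions"
v_star = [-m_planet * v / m_star for v in v_planet]           # total momentum = 0 (common cause)

# Pearson correlation of the two velocity samples
n = len(v_planet)
mx, my = sum(v_planet) / n, sum(v_star) / n
cov = sum((x - mx) * (y - my) for x, y in zip(v_planet, v_star)) / n
sx = (sum((x - mx) ** 2 for x in v_planet) / n) ** 0.5
sy = (sum((y - my) ** 2 for y in v_star) / n) ** 0.5
print("correlation:", cov / (sx * sy))   # -1.0: perfectly anti-correlated, no tuning needed
```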
Sabine, my assumption is that the distinction between a superdeterministic universe and a merely deterministic universe must be that in the superdeterministic universe, the initial conditions (at the Big Bang) must be precisely those that will simulate nonlocality in all Bell tests forever more.
Dan,
I suggest you don't assume what you want to prove.
I should have been clearer. I do not seek to prove any aspect of superdeterminism. I believe it to be as extravagant and inaccessible to empirical demonstration as many worlds. All I was trying to say was that superdeterminism appears to presume that a highly contrived and thus extravagant set of initial conditions existed at the onset of the big bang.
But congratulations on an absorbing discussion of a very interesting topic.
The universe came into existence as a unitary entity. As the universe expanded and eventually factionalized, its various substructures remained entangled as constrained by its unitary origin. The initial condition of the universe is global entanglement, and as such superdeterminism must be a fallout of primordial global entanglement.
Dan,
That you "believe" a theory "appears to presume" is not an argument.
Axil,
With superdeterminism there is no entanglement (at least in the version that seems to be mostly discussed here). It's being relied on to explain Bell inequality violations locally.
Just a generic set of initial states for the early universe would do. If you take the physical state in which you do an experiment, then it's obviously not true that you could have made another decision w.r.t. the experimental set-up while leaving everything else unchanged. Such a different future state in which the experiment is performed would under inverse time evolution not evolve back to an acceptable early-universe state.
The physical states we can find ourselves in are, as far as inverse time evolution is concerned, specially prepared states whose entropies will have to decrease as we go back in time. But because of time-reversal invariance, any generic state will always show an increase in entropy whether we consider forward or backward time evolution.
Cited above (by Avshalom Elitzur).
Very interesting article. Thanks.
"The Weak Reality That Makes Quantum Phenomena More Natural: Novel Insights and Experiments"
Yakir Aharonov, Eliahu Cohen, Mordecai Waegell, and Avshalom C. Elitzur
November 7, 2018
https://www.mdpi.com/1099-4300/20/11/854/htm
Hi Sabine,
I am not sure where Cramer stands in those pictures.
Please could you tell me?
Best,
J.
I don't know. I haven't been able to make sense of his papers.
If this is still about physics and not personal religion, there's no experimental basis for either super-determinism, or "classical determinism" for that matter. Instead, determinism was always a baseless extension in Newtonian mechanics and easily disproved by chaos experiments. If there was any lingering doubt, Quantum Mechanics should have killed determinism for good, but, zombies never die, hence the super-determinism hypothesis. Now, how about some experimental evidence?
What do you mean by "no experimental basis"? If no one is doing an experiment, of course there isn't an experimental basis. You cannot test superdeterminism by Bell-type tests, as I have tried to get people to understand for a decade. Yet the only thing they do are Bell-type tests.
Chaos theory is classical determinism. It is called deterministic chaos, where the path of a particle is completely determined, but due to exponential separation in phase space one is not able to perfectly track the future of the particle.
Nonlin.org: Chaos doesn't disprove determinism; it can be the result of an entirely deterministic evolution. Chaos addresses immeasurability. We have difficulty predicting the weather for precisely this reason, and the "butterfly effect" illustrates the issue: We cannot measure the atmosphere to the precision necessary to pick up every tiny event that may matter. In fact it was discovered because of a deterministic computer program: When the experimenter restarted an experiment using stored numbers for an initial condition, the results came out completely differently, because he had not stored enough precision. He rounded them off to six digits or whatever, and rounding off to one part in a million was enough to change the outcome drastically, due to exponential feedback effects in the calculations.
That doesn't make the equations or evolution non-deterministic, it just means it is impossible to measure the starting conditions with enough precision to actually predict the evolution very far into the future, before feedback amplification of errors will overwhelm the predictions.
I believe there are even theoretical physics reasons for why this is true, without having to invoke any quantum randomness at all. There's just no way to take measurements on every cubic centimeter of air on the earth, and it's physically impossible to predict the future events (and butterfly wing flaps) that will also have influence.
All of it might be deterministic, but the only way WE can model it is statistically.
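[Editor's aside: a minimal sketch of the rounding-off anecdote above, using the logistic map as a stand-in for the weather model. The map and its parameters are a toy choice of mine, not anything from the comment; the point is only that a fully deterministic rule plus a six-digit rounding of the initial condition gives a completely different trajectory after a few dozen steps.]

```python
# Fully deterministic rule: x -> r*x*(1-x) with r = 4 (a chaotic regime).
def logistic(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

x_full = 0.123456789012345      # "true" initial condition
x_rounded = round(x_full, 6)    # stored with only six digits, as in the anecdote

for steps in (10, 30, 60):
    print(steps, logistic(x_full, steps), logistic(x_rounded, steps))
# At 10 steps the two trajectories still agree; by 30-60 steps they bear no
# resemblance to each other, even though every step was computed deterministically.
```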
And I'll disagree with Dr. Hossenfelder on one point, namely that "shut up and compute" is not dead; often that is the best we humans can do, in a practical sense. Speculating about "why" is well and good, it might lead to testable consequences; but if theories are devised to produce exactly the same consequences with zero differences, that is a waste of time and brains; it is inventing an alternative religion.
IMO The valid scientific response to "why" is "I don't know why, not yet."
And if we already have a predictive model that always works -- we might better serve ourselves and our species working on something else more pressing.
Sabine, do you have any specific suggestions for how to build a superdeterminism test? I think this is an intriguing idea.
One reason why I love the way John Bell's mind worked was that he pushed the pilot wave model (which I should note I do not accept) to the extreme, and found that it produced clear, specific models that led him to his famous inequality. Bell felt he might never have landed upon his inequality if he had instead relied on the non-intuitive, mooshy-smooshy (technical term) fuzzy-think of Copenhagen.
Thus similarly, I am intrigued by your assertion that pushing superdeterminism to the limit may also lead to specificity in how to test for it. (Also, my apologies in advance if you have already done so and I simply missed it.)
So you discard Bohmian Mechanics? What are your objections?
Is there a way to distinguish whether quantum mechanics is truly stochastic, or merely chaotic (in the mathematical chaos theory sense that the outcome is so sensitive to initial conditions that it is impossible to predict outcomes more than statistically because we can't measure the initial conditions with sufficient sensitivity)?
I don't think there is.
Can you help me understand that comment, please? I mean Schroedinger's equation yields a wave function - nothing chaotic or stochastic there. However when the wave function is used to determine (say) the position of an electron, using the Born rule, the result is a probability density. How would that be reformulated in terms of chaos?
Sabine,
Bell's inequality has been tested to a level of precision such that most people in the field accept the results. To my knowledge this has been done with entangled particles; are you aware of anyone trying to do an inverted experiment using maximally decohered particles (quantum discord?). Is this such an obvious move that all the Bell experimenters are already doing it as a control group?
If Susskind is right and ER = EPR, then can we use that knowledge to build thrusters that defy conservation of momentum locally (transmit momentum to the global inertial frame a-la Mach effect)?
Just an engineer looking for better engines.
J*
Is superdeterminism compatible with the assumption of measurement settings independence?
I think that the implications of this (possibly the most overlooked in 20th C QM, IMO) paper, "Logic, States, and Quantum Probabilities" by Rachel Wallace Garden, expose a flawed assumption in the argument that violation of Bell inequalities necessitates some form of non-locality.
A key point being that the outcomes of quantum interactions do not produce the same kind of information as a classical measurement, and this has clear-cut mathematical implications as to the applicability of the Bell inequality to quantum interactions.
Rachel points out that the outcome of a quantum measurement on Hilbert space (such as a polarization test) is a denial (e.g. "Q: What is the weather? A: It is not snowing"). In effect, all you can say about a photon emerging from channel A of a polarizing beamsplitter is that it was not exactly linearly aligned with B - all other initial linear and circular states are possible.
As Rachel shows, the proof of the Bell inequality depends on outcome A implying not B. That is, it relies on an assumption that there is a direct equivalence between the information one gets from a classical measurement of a property (e.g. orientation of a spinning object) and the interaction of a photon with a polarization analyzer.
It is clear that, while Bell inequalities do show that a measurement which determines a physical orientation in a classical system is subject to a limit, one cannot reasonably conclude that quantum systems are somehow spookily non-local when the experiments produce a different kind of information (a denial) than the classical determination. As Rachel's mathematical analysis shows, when a measurement produces a denial, then violation of the Bell inequality is an expected outcome.
International Journal of Theoretical Physics, Vol. 35, No. 5, 1996
What are your thoughts on validation of different models using experiments in fluid dynamics?
http://news.mit.edu/2013/when-fluid-dynamics-mimic-quantum-mechanics-0729
mh,
I wrote about this here.
A layman like me sees it this way: The detectors are observers, and there is reality. This reality is in turn an interpretation or description of the actuality. In quantum mechanics the detectors are observing a reality that came about by the subject of the experiment, i.e., the experiment itself is a program, an interpretation, a description of, say, actuality, because given the conditions of the double slit experiment the results are invariably the same up until this day. It is the output of this program that the detectors observe, and they in turn interpret this output or reality according to their settings -- like a mind set. This interpretation is the influence of the detectors on the output, which we see as a collapse of a number of possibilities to a single possibility, i.e., the absence of interference. The detector is itself a program, a hardware program at that. We can go on wrapping in this way, interpretation after interpretation, until we arrive at something well defined and deterministic. That is, it is movement from infinite possibilities or indeterminism tending towards a finite possibility or determinism or clear definition with each interpretation along the way. And all along the way there is an increasing degree of complexity, and with complexity come form and definition. Let's say there is the uninterpreted primordial to begin with. Then there are programs which interpret. Among the programs there is a common factor: the least interpreted primordial, let's say light or photons. Starting with the least interpreted primordial, if we unravel or reverse engineer the program, we get to the program. Once we have got the program then can't we predict the output, which is the interpretation, the description, or "the observed"?
If the big bang produced a singular primordial seed, then the ways in which the subsequent universe evolved from that seed can only be partitioned in a finite number of ways. The universe must then possess a finite dimensional state space. There is a finite number of ways that the universe can be partitioned. This constraint imposed on the universe by this finite dimensional state space means that all these various states can be knowable. This ability to know all the states that the universe can assume is where Superdeterminism is derived from.
There was a period at the very earliest stages of the universe's differentiation when it was possible to know the states of all its component parts. At that time the universe was in a state of Superdeterminism. This attribute of the universe is immutable and therefore cannot change with the passage of time. It follows that the state of Superdeterminism must still be in place today.
It is true that determinism kills free will. But as David Hume pointed out, the inverse case is also relevant: free will requires some kind of determinism to work. You need a causal connection between decision and action; ideally we expect the same decision to cause the same action... Not an easy problem indeed.
ReplyDeleteThe easily overlooked energy cost of ab initio superdeterminism: Set up your pool table with a set of end goals. Set your launcher angles for each ball, eg to 5 digits of precision, and see how many generations of bounce you can control.
Let's naively say it's roughly linear, 5 generations of bounce control for 5 digits of launch precision. Thus controlling a million generations of bounce requires (naively) a million digits of launch precision, _for each particle_. The actual ratio will be more complex, but inevitably will have this same more-means-more relationship.
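[Editor's aside: a toy sketch of this more-means-more relation, under assumptions of my own: a doubling map stands in for the pool-table bounces, and the tolerance for "still under control" is one part in a hundred.]

```python
# Each "bounce" roughly doubles the angular error, so the number of bounces you
# can control grows only linearly with the number of digits in the launch angle.
def controllable_bounces(digits, tol=1e-2):
    err = 10.0 ** (-digits)      # launch angle known to this many decimal digits
    bounces = 0
    while err < tol:
        err *= 2.0               # assumed chaotic stretching per bounce
        bounces += 1
    return bounces

for digits in (5, 10, 20, 40):
    print(digits, "digits ->", controllable_bounces(digits), "bounces")
# Roughly 3.3 extra bounces per extra digit: under these toy assumptions,
# predetermining a million collision generations needs on the order of
# 300,000 digits of launch precision per ball.
```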
What is easily overlooked is that since information is a form of entropy, it always has a mass-energy cost. Efficient coding can make that cost almost vanishingly small for classical systems such as computers, but for subatomic particles with quantum-small state spaces, figuring out even how to represent such a very large launch precision number per particle becomes deeply problematic, particularly in terms of the mass-energy cost of the launch number in comparison to the total mass-energy of the particle itself.
This in a nutshell is the hidden energy cost of any conceivable form of ab initio superdeterminism: The energy cost of a launch with sufficient precision to predetermine the entire future history of the universe quickly trends towards infinity.
In another variant, this same argument is why I don't believe in points of any kind, except as succinct summaries of the limit behavior of certain classes of functions.
@Sabine
Perhaps one reason why people do not like superdeterminism (SD) is that, as to giving us a comprehensible picture of reality, it scores even worse than standard QM. It amounts to saying that the strange correlations we observe (violations of Bell's inequalities) are due to some mysterious past correlations that arose somewhere at the beginning of the universe. By some magical tour de force, such correlations turn out to be exactly those predicted by QM. Except that, without the prior development of QM, we would be totally incapable of making any predictions based on SD alone.
Yes, SD is a logical possibility, but so far-fetched and so little fruitful that it mainly reveals how detractors of Copenhagen are running out of arguments.
opamanfred,
Yes, that's right, that is the reason they don't like it. As I have explained, however, that reason is just wrong. Look how you pull up words like "mysterious" and "magical" without justifying them. You are assuming what you want to argue, namely that there is no simple way to encode those correlations. This is obviously wrong, of course, because you can encode them on a future boundary condition in a simple way, that simple way being what's the qm outcome.
Of course you may say now, all right, then you get back QM, but what's the point of doing that? And indeed, there is no point in doing that - if that's the only thing you do. But of course what you want is to make *more* predictions than what QM allows you to.
@sabine
"because you can encode them on a future boundary condition in a simple way, that simple way being what's the qm outcome."
Of course, you can. But you must admit it is a strange way to proceed: You observe the present, and then carefully tailor the past so that the present is a consequence of that past. If that's not fine tuning...
Besides, I see another problem. QM clearly shows a lot of regularity in Nature. How would you accommodate such a regularity using past correlations, if not by even more fine tuning? Like, imagine a roulette that yields with clockwise regularity red-black-red-black-r-b-r-b-r-b-r-b and so on. Would you be comfortable with ascribing that to some past correlations?
opamanfred,
(a) If you think something is fine-tuned, please quantify it.
(b) I already said that superdeterminism trivially reproduces QM, so you get the same regularities for the same reason.
opamanfred: You observe the present, and then carefully tailor the past so that the present is a consequence of that past. If that's not fine tuning...
Sounds like straightforward deduction to me; if Sherlock finds a dead man skewered by an antique harpoon, he deduces a past that could lead to the present, e.g. somebody likely harpooned him.
There's no fine-tuning, if anything is "fine-tuned" it is the present highly unusual event of a man being harpooned on Ironmonger Lane.
(a) Perhaps I misunderstand SD. But how do you fix the past correlations in such a way that QM is correctly reproduced now? I had the impression this is done completely ad hoc. Slightly different correlations would yield predictions vastly different from QM, hence "fine tuning".
(b) For the same reason?? In one case the regularities are the result of a consistent theory (QM) that is rather economical in its number of basic postulates. In the other, you choose an incredibly complex initial condition for the sole purpose of reproducing the results of the aforementioned theory.
opamanfred
"Slightly different correlations would yield predictions vastly different from QM, hence "fine tuning"."
You cannot make such a statement without having a model that has a state space and a dynamical law. It is an entirely baseless criticism. If you do not understand why I say that (and evidently you do not), then please sit down and try to quantify what is fine-tuned and by how much.
Sabine,
“... superdeterminism trivially reproduces QM, so you get the same regularities for the same reason.”
In QM the reason for the non-local correlation of measurement results of EPR pairs is their (e.g. singlet) state.
Is this also the reason in SD?
So far, I thought the reason for the correlation in SD is sacrificing statistical independence ... I am confused ...
Maybe the prediction might be the same (or more), but the explanation or reason is a completely different one.
(Remark: I find how TSVF trades non-locality for retrocausality quite interesting. And since the unitary evolution is deterministic, retrocausality works. But as you said this does not solve the measurement problem.)
"If you do not understand why I say that (and evidently you do not), then please sit down and try to quantify what is fine-tuned and by how much."
Maybe I am misunderstanding something, but naively it seems easy to come up with a model that does this.
Let's suppose Alice and Bob prepare N spin-1/2 particles in a singlet state. They each have a measuring device that can measure the projection of the spin of a particle along the axis of the device. For simplicity, the relative alignments are fixed to be either perfectly anti-aligned, or else misaligned by a small angle theta. Quantum mechanics predicts the expected cross correlation of the spins will depend on theta like -1+theta^2/2. So for large enough N, by measuring the cross correlation for a few different values of theta, Alice and Bob can measure the expected quantum mechanical correlation function.
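[Editor's aside: for what it's worth, a quick numerical check of the correlation function quoted here, using only the standard singlet-state outcome probabilities; nothing superdeterministic is assumed.]

```python
# For measurement axes misaligned by theta, QM gives E(theta) = -cos(theta),
# which for small theta is about -1 + theta**2 / 2, as stated above.
import math, random

random.seed(1)

def sample_correlation(theta, n_pairs=200_000):
    same = 0
    for _ in range(n_pairs):
        # P(outcomes equal) = sin^2(theta/2) for a singlet pair
        if random.random() < math.sin(theta / 2) ** 2:
            same += 1
    opposite = n_pairs - same
    return (same - opposite) / n_pairs   # average product of the +/-1 outcomes

for theta in (0.1, 0.2, 0.4):
    print(theta, sample_correlation(theta), -1 + theta**2 / 2, -math.cos(theta))
```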
A super-determinist could also explain the correlation function by imposing a future boundary condition on each particle. Given the mis-alignment of the detectors which the pair will experience -- the relative alignment of the detectors is assumed known since we are imposing this condition on the particles in the future, after the measurement is made -- we pick the final spin values from a distribution which reproduces the quantum mechanical behavior. I will also assume a simple evolution rule, that the final spin we impose as a final boundary condition does not change when we evolve backwards in time -- for example, imagine an electron moving through a region with no electric or magnetic fields.
Here's why I think a super-determinist would have a fine-tuning problem. From the super-determinist's point of view, there are 2N future boundary conditions to choose (2N numbers in the range [-1,1]) -- the value of the spin projected on the measuring device for each particle in each of the N entangled pairs. The number of possible future boundary conditions is exponential in N. Given the statistical error associated with N measurements, there will be an exponential in sqrt(N) possible configurations which will be consistent with the quantum mechanical correlation function. The super-determinist knows nothing about quantum mechanics (otherwise what is the point), so I will assume it's natural to use a distribution that does not privilege any particular choice of initial conditions, like a uniform distribution from -1 to 1. In that case, it's exponentially unlikely (in sqrt(N)) that the drawn distribution of spin components will be consistent with the quantum mechanical prediction.
If the point is that the super-determinist should choose future boundary conditions according to a distribution which is highly peaked around values that will reproduce the quantum mechanical correlations, I agree this is mathematically self consistent, but I don't see what is gained over using ordinary quantum mechanics. It seems that one has just moved the quantum mechanical correlations, into a very special distribution that (a) is naturally expressed as a future boundary condition (which raises some philosophical issues about causality, but I'm willing to put those aside) and (b) one has to use a probability distribution which (as far as I can tell) can't be derived from any principles except by reverse engineering the quantum mechanical result.
Maybe I am just missing the point...
Reimond,
Statistical dependence is not the reason for the correlation in the measurements; statistical dependence is just a property of the theory that prevents you from using Bell's theorem to falsify it.
Just where the correlations come from depends on the model. Personally I think the best way is to stick with quantum mechanics to the extent possible and just keep the same state space. So the correlations come from where they always come from. I don't think that's what 't Hooft and Palmer are doing though.
Andrew,
Thanks for making an effort. Superdeterminism reproduces QM and is therefore equally predictive for the case you mention. It is equally fine-tuned or not fine-tuned for the probabilistic predictions as is quantum mechanics.
The probabilistic distribution of outcomes in QM, however, is not the point of looking at superdeterministic models. You want to make *more* predictions than that. Whether those are fine-tuned or not (and hence have explanatory power or not) depends on the model, i.e. on the required initial state, state space, and dynamical law.
... https://arxiv.org/abs/1901.02828 ...
Let the paper speak for itself.
Sabine Hossenfelder (9:05 AM, July 30, 2019) wrote:
"I already said several times that retrocausality is an option I quite like, but I do not think that it makes sense if the Schroedinger equation remains unmodified because that doesn't solve any problem."
This is a very interesting comment that I think is right -- that retrocausality would require a modified [current quantum formulation]* to make sense. Perhaps there will be a future post on this?
* Schroedinger equation, or path integral
Philip,
What I meant to say, but didn't formulate it very well, is that to solve the measurement problem you need a non-linear time evolution. Just saying "retrocausal" isn't sufficient.
Yes... will probably have a post on this in the future, but it will take more time for me to sort out my thoughts on this.
(As you can tell, it took me a long time to write about superdeterminism for the same reason. It's not all that easy to make sense of it.)
I have never thought this time reversal idea was worth much. QM is time reversal invariant, so as I see things it will not make any difference if you have time reversed actions.
The idea of nonlinear time is interesting, and this is one reason that, of the quantum interpretations, I have a certain interest in the Montevideo interpretation. This is similar to Roger Penrose's idea that gravitation reduces waves. I have thought GRW might potentially fit into this. Quantum fluctuations of the metric which result in a g_{tt} fluctuation and nonlinear time evolution might induce the spontaneous collapse. If one is worried about conservation of quantum information, the resulting change in the metric is a very weak gravitational wave with BMS memory.
Lawrence Crowell: I read this summary of Montevideo interpretation (https://arxiv.org/abs/1502.03410). That seems like a promising route. But what I got was that the nonlinear evolution of (env + detector + system) exponentially diminishes all but one eigenstate, to the point that it is impossible to measure anything else but that one state. So 'collapse' isn't an instantaneous event but more of reaching an asymptotic state, so it would require a detector larger than the universe to distinguish any but the dominant eigenstate.
It also makes the "detection" entirely environmental, no observer required; which fits with my notion that a brain and consciousness are nothing special, just a working matter machine and part of the environment, so 'observation' is just one type of environmentally caused wavefunction collapse.
I like it!
The Montevideo Interpretation proposes limitations on measurement due to gravitation. Roger Penrose has proposed that metric fluctuations induce wave function collapse. Gambini and Pullin are thinking in part in terms of a quantum clock. It has been a long time since I read their paper, but their proposal is that gravitation and quantum gravitation put limits on what can be observed, which in a standard decoherence setting is a wave function reduction.
The Bondi metric, written cryptically as
ds^2 = (1 - 2m/r)du^2 - dudr - γ_{zz'}dzdz' + r^2 C_{zz}dz^2 + CC + D_zC_{zz} + CC
gives a Schwarzschild metric term, plus the metric of a 2-sphere, plus Weyl curvature and finally boundary terms on Weyl curvature. We might then think of concentrating on the first term. Think of the mass as a fluctuation, so we have m → m + δm, where we are thinking of a measurement apparatus as having some quantum fluctuation in the mass of one of its particles or molecules. With 2m = r this is equivalent to a metric fluctuation, and we have some variance δL/L for L the scale of the system. For the system scale ~ 1 meter and δL = ℓ_p = √(Għ/c^3) = 1.6×10^{-35} m, δL/L ≈ p, the probability for this fluctuation, and p ≈ 10^{-35}. Yet this is for one particle. If we had 10^{12} moles then we might expect somewhere a molecule has a mass fluctuation approaching the Planck scale. If so we might then expect there to be this sort of metric shift with these Weyl curvatures.
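[Editor's aside: for readers who want to check the numbers in the previous paragraph, a short computation with standard values of G, ħ and c; nothing specific to the Bondi-metric argument is assumed.]

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.055e-34   # J s
c = 2.998e8        # m/s

l_p = math.sqrt(G * hbar / c**3)   # Planck length
L = 1.0                            # system scale of ~1 meter
print("Planck length:", l_p)       # ~1.6e-35 m
print("delta L / L  :", l_p / L)   # ~1.6e-35, the ratio p quoted above
```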
Does this make sense? It is difficult to say, for clearly no lab instrument is on the order of billions of tons. On the other hand the laboratory is made of states that are entangled with the building, which are entangled with the city that exists in, which is entangled with the Earth and so forth. Also there is no reason for these fluctuations to be up to the Planck scale.
The Weyl curvatures would then correspond to very weak gravitational waves produced. They can be very IR and still carry qubits of information. If we do not take these into account the wave function would indeed appear to collapse, and since these gravitational waves are so weak and escape to I^+ there is no reasonable prospect for a recurrence. In this way Penrose's R-process appears FAPP fundamental. This is unless an experiment is done to carefully amplify the metric superposition, such as what Sabine refers to with quantization of large systems that might exhibit metric superpositions.
We have the standard idea of the Planck constant intertwining a spread in momentum with a spread in position, ħ ≤ ΔxΔp, and the same for time and energy. Gravitation though intertwines radial position directly with mass, r = 2GM/c^2, and it is not hard to see, with the Gambini-Pullin idea of motion and r = r0 + pt/√(p^2 + m^2), that we can include time as well. The variation in time, such as in their equation 2 due to a clock uncertainty spread, can just as well be due to the role of metric fluctuations with a mass.
Sabine:
I know what you mean because you have reiterated it sufficiently often in previous posts :) You are technically correct, but your position precludes any meaningful way to do science. You can "explain" everything by positing initial conditions. Fine. But it leads nowhere.
Castaldo:
Come on, don't be disingenuous. Sherlock's deduction would be entirely reasonable of course. Much less so if Sherlock had maintained: "We have found this dead man skewered by an antique harpoon because of some correlations that arose 13.8 billion years ago at the birth of the universe". Sherlock was notorious for substance abuse, but not to that point...
Opamanfred,
"You are technically correct, but your position precludes any meaningful way to do science. You can "explain" everything by positing initial conditions. Fine. But it leads nowhere."
You are missing the point. (a) You always need to pose initial conditions to make any prediction. There is nothing new about this here. Initial conditions are always part of the game. You do not, of course, explain everything with them. You explain something with them if they, together with the dynamical law, allow you to describe observations (simpler than just collecting data). Same thing for superdeterminism as for any other theory.
(b) Again, the point of superdeterminism is *not* to reproduce quantum mechanics. The point is to make more predictions beyond that. What do you mean by "it leads nowhere". If I have a theory that tells you what the outcome of a measurement is - and I mean the actual outcome, not its probability - how does this "lead nowhere"?
I think the discussion above overlooks the possibility that hidden variables based on IMPERFECT FORECASTS by nature are behind "quantum" phenomena. I provide convincing arguments for this hypothesis in two papers. I copy the titles and abstracts below. You can search for them online. Both papers cite experimental publications. The second paper departs from an experimental observation made by Adenier and Khrennikov, and also makes note of an experiment performed on an IBM quantum computer which I believe provides further evidence pointing towards forecasts.
Title: A Prediction Loophole in Bell's Theorem
Abstract: We consider the Bell's Theorem setup of Gill et al. (2002). We present a "proof of concept" that if the source emitting the particles can predict the settings of the detectors with sufficiently large probability, then there is a scenario consistent with local realism that violates the Bell inequality for the setup.
Title: Illusory Signaling Under Local Realism with Forecasts
Abstract: G. Adenier and A.Y. Khrennikov (2016) show that a recent ``loophole free'' CHSH Bell experiment violates no-signaling equalities, contrary to the expected impossibility of signaling in that experiment. We show that a local realism setup, in which nature sets hidden variables based on forecasts, and which can violate a Bell Inequality, can also give the illusion of signaling where there is none. This suggests that the violation of the CHSH Bell inequality, and the puzzling no-signaling violation in the CHSH Bell experiment may be explained by hidden variables based on forecasts as well.
"How much detail you need to know about the initial state to make predictions depends on your model."
Better, on your particular experiment. If the knob on the left device is controlled by starlight from the left side, and that of the right side controlled by the right side, you need quite a lot of the universe to conspire.
"And without writing down a model, there is really no way to tell whether it does or doesn’t live up to scientific methodology."
No, this decision is quite trivial. If there is superdeterminism, no statistical experiment is able to falsify anything, thus, one has to give up statistical experiments as useless.
"Rather, it’s that the detectors’ states aren’t independent of the system one tries to measure. There just isn’t any state the experimentalist could twiddle their knob to which would prevent a correlation."
No. The original state of the device, say a0, may be as dependent on the measured state as you like. If you use the polarization of incoming starlight s together with your free-will decision f simply as additional independent inputs (both evenly distributed between 0 and 360 degrees), so that the resulting angle is simply a = a0 + s + f, then the statistical independence of f or s is sufficient to lead also to the statistical independence of a. That a0 is correlated does not disturb this at all. So, at least one source of independent random numbers would be sufficient - all you have to do is Bell's experiment with devices controlled (just influenced in a sufficiently heavy way is sufficient) by these independent random numbers. Thus, the whole world has to conspire completely.
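[Editor's aside: a small sketch of this point, a toy model of my own just to illustrate the algebra. The device angle a0 is made completely dependent on the hidden variable; adding one independent, uniformly distributed offset already makes the resulting setting statistically independent of the hidden variable.]

```python
import math, random

random.seed(2)
lam, a0, a = [], [], []
for _ in range(100_000):
    h = random.uniform(0, 360)        # hidden variable
    device = (h + 17) % 360           # device angle a0 fully determined by h ("conspiracy")
    f = random.uniform(0, 360)        # independent input (free-will choice, starlight, ...)
    lam.append(h)
    a0.append(device)
    a.append((device + f) % 360)      # actual setting a = a0 + f (mod 360)

def dep(x, y):
    # mean of cos(x - y): ~0 for independent uniform angles, near +/-1 when locked together
    return sum(math.cos(math.radians(v - w)) for v, w in zip(x, y)) / len(x)

print("dependence(lambda, a0):", dep(lam, a0))  # ~0.96: a0 is tied to the hidden variable
print("dependence(lambda, a): ", dep(lam, a))   # ~0: the independent offset washes it out
```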
"...they are non-local, which makes my hair stand up..."
Non-Einstein-local. A return to the Lorentz ether, that's all. As it is, quantum theory is indeed nonlocal, but it could be easily approximated by a local (even if not by an Einstein-local) theory. Nothing more seriously non-local than the situation with Newtonian gravity.
The configuration space trajectory q(t) is, like in Newtonian theory, continuous in all the realist interpretations of QT; all the real change is a local, continuous one. Only how it is changed possibly depends on the configuration far away too. So, there is nothing worse about the non-locality here than in NT. What the problem is with such a harmless revival of a problem scientists have already lived with in classical physics is beyond me.
Ilja,
"If the knob on the left device is controlled by starlight from the left side, and that of the right side controlled by the right side, you need quite a lot of the universe to conspire."
Which is why it's hard to see evidence for it. You better think carefully about what experiment to make.
"No, this decision is quite trivial. If there is superdeterminism, no statistical experiment is able to falsify anything, thus, one has to give up statistical experiments as useless."
Patently wrong. You cannot even make such a statement without having an actual model. "superdeterminism" is not a model. It's a principle. (And, as such, unfalsifiable, just like determinism.)
Ilja,
Delete"If there is superdeterminism, no statistical experiment is able to falsify anything, thus, one has to give up statistical experiments as useless."
This is completely false. The only claim a superdeterministic theory has to make is that certain electromagnetic phenomena (the emission of an entangled particle pair and the later behavior of those particles in a magnetic field, such as the one in a Stern-Gerlach device) are not independent of each other. There are plenty of examples of physical systems that are not independent, an obvious one being the motion of any two massive objects due to gravity. Such objects will orbit around their common center of mass, regardless of how far apart they are. Some other examples are stars in a galaxy, electrons in an atom, synchronized clocks, etc. Superdeterminism maintains that the particles we call "entangled" are examples of such systems. The only difference is that we can directly intervene and mess up the correlations due to gravity, or de-synchronize clocks, but we cannot control the behaviour of quantum particles, because we ourselves are built out of them. We are the result of their behavior, so whatever we do is just a manifestation of how they "normally" behave. So superdeterminism brings nothing new to physics; it is in complete agreement with all accepted physical principles, including the statistical ones.
A slightly different way to put this is to observe that a superdeterministic interpretation of QM will have the same predictions as QM. So, as long as you think that statistical experiments are not useless if QM is correct, those experiments will also not be useless if a superdeterministic interpretation of QM is correct. On the other hand, if you do believe that statistical experiments are useless if QM is true, they will also be useless if a non-local interpretation of QM (such as de Broglie-Bohm theory) is true.
As you can see, the most reasonable choice between non-locality and superdeterminism is superdeterminism, because this option does not conflict with any currently accepted physical principle and it does not require us to go back to the time of Newton.
The quantum world is fully deterministic, based on vacuum quantum jumps: God indeed plays dice with the universe!
However, living creatures seem to be able to influence initiatives suggested by RP I to veto actions by RP II at all levels of consciousness, according to Benjamin Libet (Trevena and Miller).
Sabine,
Do you know about the Calogero conjecture (https://en.m.wikipedia.org/wiki/Calogero_conjecture) and do you think it is related to SD?
Opamanfred,
Thanks for bringing this to my attention. No, I haven't heard of it. I suppose you could try to understand the "background field" as a type of hidden variable, in which case that would presumably be superdeterministic. Though, if the only thing you know about it is that it's "stochastic", it seems to me that this would effectively give you a collapse model rather than a superdeterministic one.
Sabine,
ReplyDelete"It’s not like superdeterminism somehow prevents an experimentalist from turning a knob."
Why not? How can an experimentalist possibly turn the knob other than how she/he has been determined to turn it?
If hidden variables exist their evolution would have to be highly chaotic. What's the likelihood that any predictions could be made, even if we could somehow find those variables and ascertain their (approximate) initial values?
i aM wh,
She cannot, of course, turn the knob other than what she's been determined to do, but that's the case in any deterministic theory and has nothing to do with superdeterminism in particular.
"If hidden variables exist their evolution would have to be highly chaotic. What's the likelihood that any predictions could be made, even if we could somehow find those variables and ascertain their (approximate) initial values?"
That's a most excellent question. You want to make an experiment in a range where you have a reasonable chance to see additional deterministic behavior. This basically means you have to freeze in the additional variables as well as possible. The experiments that are currently being done just don't probe this situation, hence the only thing they'll do is confirm QM.
(I am not sure about chaotic. Maybe, or maybe not. You clearly need some attractor dynamics, but I don't see why this necessarily needs to be a chaotic one.)
Is this here (chapter 4) still the actual experiment you want to be performed?
Reimond,
No, it occurred to me since writing the paper that I have made it too complicated. You don't need to repeat the measurement on the same state, you only need identically prepared states. Other than that, yes, that's what you need. A small, cold system where you look for time-correlations.
Sabine,
“...don't need to repeat the measurement on the same state...”
This means you remove the mirrors? But then there is no correlation time left, only the probability that e.g. a photon passes the two polarizers. SD, if true, would then give a tiny deviation from what QM tells us. Is it like this?
Reimond,
You can take any experiment that allows you to measure single-particle coherence. Think, e.g., of the double slit. The challenge is that you need to resolve individual particles and you need to make the measurements in rapid succession, in a system that has as few degrees of freedom as possible.
(The example in the paper is not a good one for another reason, which is that you need at least 3 pointer states. That's simply because a detector whose states don't change can't detect anything.)
Why not just accept Lagrangian mechanics, successfully used from QFT to GR, which is not only deterministic (Euler-Lagrange) but additionally time/CPT-symmetric, as is well seen in the equivalent action-optimization formulation?
In contrast, "local realism" is built for the time-asymmetric "evolving 3D" natural intuition.
Replace it with a time-symmetric spacetime "4D local realism" as in GR, where a particle is its trajectory; considering ensembles of such objects, the Feynman path ensemble is equivalent to QM.
Considering the statistical physics of such basic objects - the Boltzmann distribution among paths in Euclidean QM, or the simpler MERW (https://en.wikipedia.org/wiki/Maximal_entropy_random_walk) - we can see where the Born rule comes from: in rho ~ psi^2, one psi comes from the past ensemble (propagator from -infinity), the second psi from the future ensemble (propagator from +infinity), analogously to the Two-State Vector Formalism of QM.
Hence we get the Born rule directly from time symmetry; it allows violating inequalities that were derived in standard probability theory without this square. I have an example construction of a violation of a Bell-like inequality for MERW (uniform path ensemble) on page 9 of https://arxiv.org/pdf/0910.2724
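For readers who want the rho ~ psi^2 claim in formulas, here is a short sketch of the standard MERW relations (as given on the linked Wikipedia page; the notation is mine): for a graph with adjacency matrix $A$, largest eigenvalue $\lambda$ and corresponding normalized dominant eigenvector $\psi$, the transition probabilities and the stationary density are

$$ S_{ij} = \frac{A_{ij}}{\lambda}\,\frac{\psi_j}{\psi_i}, \qquad \rho_i = \psi_i^{\,2}. $$

The stationary probability is the square of a single "wave-function-like" vector, which is the Born-rule analogy invoked above.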
Pure determinism doesn’t work as an explanatory system because it doesn’t have the basis for explaining how new (algorithmic) relationships could come into existence in living things.
Equations can’t morph into algorithms, except in the minds of those people who are religious believers in miraculous “emergence”.
@Lorraine Ford, these days when I encounter the word "emergence" in an article, I lose interest. Because I know the hand waving has begun.
jim_h: "emergence" is not hand waving. Cooper pairs and the low-energy effective field theory of superconductivity are emergent. Classical chaotic behavior (KAM) is emergent from quantum mechanics, and easily demonstrated by computation.
Delete"Equations can’t morph into algorithms"
Perhaps then equations (from a Platonic realm of Forms) should be left behind by physicists, and replaced by algorithms.
dtvmcdonald: "emergence" is a vapid truism. All phenomenological entities are emergent.
Well, if the sacred cow of randomness is sacrificed, and we recognize that nature does not play fair and is not tossing a double-sided coin but a double-headed coin where the observation point is always unpredictable, and we dump the cosmos-as-a-machine view for a more organic perspective, then yeah, it works. It's consistent with weather, forest fires, earthquakes, etc. As regards free will, we have an infinite choice of wrongs and one right. That's neither deterministic nor free will. It's also in spite of ourselves when we get it right.
I believe that there may be a way to circumvent the “initial conditions of the universe” quandary, but more to the point, the more complicated case dealing with the “initial conditions that affect the process that the observer wants to examine” can be dealt with.
In a takeoff of the classic Einstein thought experiment, the rider/observer can observe what is happening on the relativistic train that he is traveling on. The observer is synchronized in terms of the initial conditions that apply to both him and what he wants to observe. He is also completely synchronized in space-time with the condition that he wants to observe. Now, if the observer becomes entangled with the condition that he wants to observe, he becomes totally a part of that condition, since he has achieved complete simpatico with it.
As an illustrative example of how this could be done, consider a person who wants to observe what a photon does when it passes through a double slit: the double-slit mechanism, the photon production mechanism, and the sensors sampling the environment in and around each slit would all need to be entangled as a system. This entanglement would also include the data store that is set up to record the data. After the data has been recorded, the observer decoheres the entangled state of the system. After this step, the observer can examine the data store and is free from quantum mechanical asynchrony to finalize his observations and make sense of them.
Even if one could construct a superdeterministic quantum theory in Minkowski space, what would be the point? We do not know the geometry of the spacetime we live in, or whether it is deterministic in the first place, and I don't think that we will ever know.
Reading about the umpteen "interpretations" of QM is interesting, e.g.
ReplyDeleteReview of stochastic mechanics
Edward Nelson
Department of Mathematics, Princeton University
Abstract. "Stochastic mechanics is an interpretation of nonrelativistic quantum mechanics in which the trajectories of the configuration, described as a Markov stochastic process, are regarded as physically real. ..."
- https://iopscience.iop.org/article/10.1088/1742-6596/361/1/012011/pdf
but altogether it reminds me of the witches' song in Macbeth:
Double, double toil and trouble;
Fire burn and caldron bubble.
Fillet of a fenny snake,
In the caldron boil and bake;
Eye of newt and toe of frog,
Wool of bat and tongue of dog,
...
If one accepts the block-universe picture (BU) entailed by relativity theory (and one should on consistency grounds) then Bell's theorem becomes ill formulated. Ensembles in the BU are those of 4D `spacetime structures', not of initial conditions. QM then becomes just a natural statistical description of a classical BU, with wave functions encoding various ensembles; no measurement problem either.
I have recently published a paper on the topic which made little to no impact. The reason for this is (I think, and I would be happy to be corrected) that physicists are jailed in IVP reasoning (Initial Value Problem). They say: we accept the BU, but ours is a very specific BU; complete knowledge of the data (hidden variables included) on any given space-like surface uniquely determines the rest of the BU (a ridiculously degenerate BU). In this case, QM, being a statistical description of the BU, though still consistent, becomes truly strange.
IVP reasoning started with Newton's attempt to express local momentum conservation in precise mathematical terms. However, there are other ways to do so without IVP. In fact, moving to classical electrodynamics of charged particles, one apparently MUST abandon IVP to avoid the self-force problem, and similarly with GR.
Without IVP, QM becomes just as `intuitive' as Relativity. Moreover, QM in this case is just the tip of the iceberg, as certain macroscopic systems should also exhibit `spooky correlations' (with the relevant statistical theory being very different from QM, if at all expressible in precise mathematical terms).
arXiv:1804.00509v1 [quant-ph]
Sabine,
I've heard an argument that if superdeterminism is true quantum computers won't work the way we expect them to work. I guess the idea is that if many calculations are done, correlations would show up in places we wouldn't expect to find them. Is there anything to that? Would it be evidence in favor of SD if quantum computers don't turn out to be as "random" as we expect them to be?
i aM wh,
That's a curious statement. I haven't heard this before. Do you have a reference? By my own estimates, quantum computing is safely in the range where you expect quantum mechanics to work as usual. Superdeterminism doesn't make a difference for that.
't Hooft made that suggestion here:
https://arxiv.org/abs/gr-qc/9903084
see pages 12 and 13. I'm not sure if 't Hooft has changed his opinion on this matter.
Dear Sabine, I have a high opinion of your book, and consequently have a low opinion of myself, since I don't understand superdeterminism at all. I hope you or some other kind person can give me a small start here. I am not a physicist but do have a background of research in mathematical probability. I skimmed through the two reference articles you cited but didn't see the kind of concrete elementary example I needed. Here is the sort of situation I am thinking about.
A particle called Z decays randomly into two X-particles with opposite spins. Occasionally a robot called Robot A makes a spin measurement for one X-particle, after its program selects a direction of spin measurement using a giant super-duper random number generator. Perhaps Robot A chooses one of a million containers each containing a million fair coins, or takes some other pseudo-random steps, and finally tosses a coin to select one of two directions. On these occasions Robot B does the same for the other X-particle, also using a super-duper random number generator to find its direction of measurement. A clever physicist assumes that three things are statistically independent: namely the result of the complex random-number generator for A, the result of the complex random-number generator for B, and the decay of particle Z. With some other assumptions, the physicist cleverly shows that the results should satisfy a certain inequality, and finds that Nature does not agree.
If I understand the idea of superdeterminism, somewhere back at the beginning of time a relationship was established that links the two giant super-duper random number generators together with particle Z in a special way, resulting in a violation of the inequality, while preserving all the statistical independence that we constantly observe in so many places, and at so many scales. Is this the idea? Probably I have missed something. Of course there is no free will in what I described.
I will be very happy to understand a different approach to this!
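As a concrete companion to the setup described above, here is a minimal simulation (the deterministic outcome rule and the measurement angles are my own illustrative choices, not anything implied by the comment): when the hidden variable set at the decay is statistically independent of both robots' setting choices, a local deterministic model stays within the CHSH bound of 2, while quantum mechanics predicts about 2.83 at the same angles - which is the kind of inequality, and the disagreement with Nature, referred to above.

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
lam = rng.uniform(0, 2 * np.pi, n)          # hidden variable fixed when particle Z decays

def A(setting, lam):                        # Robot A's deterministic outcome, +1 or -1
    return np.sign(np.cos(setting - lam))

def B(setting, lam):                        # Robot B's outcome, anti-correlated at equal settings
    return -np.sign(np.cos(setting - lam))

def E(a, b):                                # correlation <A*B>; settings chosen independently of lam
    return np.mean(A(a, lam) * B(b, lam))

a1, a2 = 0.0, np.pi / 2                     # Robot A's two possible directions
b1, b2 = np.pi / 4, 3 * np.pi / 4           # Robot B's two possible directions

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print("local model |S| =", abs(S))          # about 2, the CHSH bound
print("QM prediction   =", 2 * np.sqrt(2))  # about 2.83, what experiments find

Superdeterminism enters exactly at the step where lam is drawn independently of the settings: drop that independence assumption and the bound of 2 no longer follows.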
jbaxter,
Delete"If I understand the idea of superdeterminism, somewhere back at the beginning of time a relationship was established that links two giant super-duper random number generators together with particle Z in a special way, resulting in a violation of the inequality, while preserving the all the statistical independence that we constantly observe in so many places, and at so many scales. Is this the idea? Probably I have missed something. Of course there is no free will in what I described."
Several misunderstandings here.
First, the phrase "a relationship was established" makes it sound as if there was agency behind it. The fact is that such a relationship, if it exists at one time, has always existed at any time, and will continue to exist at any time, in any deterministic theory. This has absolutely nothing to do with superdeterminism. All this talk about the initial conditions of the universe is therefore a red herring. No one in practice, of course, ever writes down the initial conditions of the universe to make a prediction for a lab experiment.
Second, regarding statistical independence. What we do observe is that statistical independence works well to explain certain classical observations. One could argue that violations of Bell's inequality in fact demonstrate that it is not fulfilled in general.
This is a really important point. You are taking empirical knowledge from one type of situation and applying it to another situation. This isn't justified.
"Second, regarding statistical independence. What we do observe is that statistical independence works well to explain certain classical observations. One could argue that violations of Bell's inequality in fact demonstrate that it is not fulfilled in general. "
Am I correct to understand this as saying that every two points in space-time are "correlated"?
IIRC, Susskind's ER = EPR combined with the holographic universe (the bulk can be described by a function over the surface) implies that there should be a lot of entanglement between any two points in the universe.
Such entanglements would mean that there is, indeed, no statistical independence between points in space (-time).
Thanks, Sabine, for taking the time to respond. Somewhat related to your second point, I would say that no one really understands how the usual independence which we observe ultimately arises. We do understand how independence _spreads_ though, e.g. small shaking in my arm muscles leads to a head rather than a tail when I try to toss a coin. All this for our classical lives, of course. The infectious nature of noise is part of the reason that avoiding statistical independence is a tough problem, I suppose. Obviously it would be great to get progress in this direction.
Delete"Can you help me understand that comment, please? I mean Schroedinger's equation yields a wave function - nothing chaotic or stochastic there. However when the wave function is used to determine (say) the position of an electron, using the Born rule, the result is a probability density. How would that be reformulated in terms of chaos?"
Classical mechanics is as "emergent" from quantum mechanics as Cooper pairs and a low-energy limit field theory of superconductivity are emergent from quantum mechanics.
And thus chaos, in the KAM sense, is emergent from QM as a low-energy / long-distance-scale phenomenon. This chaos can be fed back into QM through, e.g., deciding from Brownian motion whether to set your polarizer at 0 or 45 or 90 degrees.
Superdeterminism can only be an explanation for entanglement if the universe was created by a mind who followed a goal. And that is religion in my understanding.
antooneo,
Any model with an evolution law requires an initial condition. That's the same for superdeterminism as for any theory that we have. By your logic all theories in current science are therefore also "religion".
Sabine, what do you mean by "REQUIRES an initial condition"? In a Lagrangian formulation, for example, initial and final conditions are required, and only on position variables (this, BTW, is the case for retro-causality arguments). The universe, according to Relativity (block-universe) couldn't have been created/initiated at some privileged space-like three-surface, and then allowed to evolve deterministically. This is the IVP jail I was talking about.
Sabine, think about Lagrangian mechanics - it has three mathematically equivalent formulations: the Euler-Lagrange equation evolving forward in time, the same equation evolving backward by just switching the sign of time, or action optimization between two moments of time.
The theories we use are fundamentally time/CPT-symmetric; we should be extremely careful about enforcing some time asymmetry in our interpretation, which is what leads to paradoxes like the Bell violation.
The spacetime view (block universe), as in general relativity, is a safe way out, repairing such paradoxes.
Sabine:
Any model has initial conditions, particularly of course a superdeterministic one. But that is not sufficient to produce the results which are supposed to be explained by it, e.g. the entanglement of particles in experiments where the setup also has to be chosen by experimenters who are influenced in the necessary way. So, even if superdeterminism is true, the initial conditions have to be set with this very special goal in mind.
The assumption of such a mind is religion; what else?
antooneo,
That is correct, the initial conditions are not sufficient. You also need a state space and a dynamical law. Your statement that "the initial conditions have to be set with this very special goal in mind" is pure conjecture. Please try to quantify how special they are and you should understand what I mean: You cannot make any such statement without actually writing down a model.
antooneo, Sabine,
If you want the reason for initial conditions being prepared for future measurements: instead of enforcing a time-asymmetric way of thinking on time/CPT-symmetric theories, try the mathematically equivalent time-symmetric formulations, e.g. action optimization for Lagrangian mechanics - the history of the Universe as the result of the Big Bang in the past and e.g. a Big Crunch in the future - or use path ensembles, like Feynman's, equivalent to QM, leading to the Born rule from time symmetry (https://en.wikipedia.org/wiki/Two-state_vector_formalism): one amplitude from the past (path ensemble/propagator), the second from the future.
Sabine,
If superdeterminism assumes that e.g. the cases of entanglement are not caused by physical laws as we traditionally know and accept them, but are already defined at the time of the Big Bang for all future times, then there must be a map of all single interactions in our universe for all of the future. That would also mean that there are not in truth any physical laws; what we understand as laws is in fact this setting which was made at the Big Bang.
This would also mean that the setting of initial conditions is much more complex than the whole universe as it is now in all details.
The question is: which “instance” or “authority” has defined this setting? Followers of a religion tend to believe this anyway. Natural scientists, on the other hand, believe in general rules. This is the difference from religion that I mean.
antooneo,
What you say is trivially wrong, and I have already told you this several times. Any deterministic theory has initial conditions and the present state is therefore equivalent to the initial state by a map given through the dynamical law. Of course this does not mean that there are not "in truth any physical laws" as you put it, but that you are mistaken about what it means to have a scientific explanation.
Look, I don't want to have to repeat this infinitely many times. Why don't you try to figure out what you think is different between superdeterminism and determinism and why your objection applies to the former but (presumably) not to the latter, because the latter is arguably scientific (unless you want to declare much of science "unscientific" which is nonsense).
A schema (e.g. one consisting of nothing but a set of deterministic equations and associated numbers) that doesn’t have the potential to describe the logical information behind the evolution of life is not good enough these days.
The presence of algorithmic/logical information in the world needs to be acknowledged, but you can’t get algorithmic/logical information from equations and their associated numbers.
When we say that there is a Renaissance going on in QM, do we mean the community is actually going to take a serious look at new ideas? Is there any experimental evidence that points to one new flavor over another one? The last time we revisited the subject a few years ago, I remember hearing that very few professionals were willing to consider alternatives. I know there is a fledgling industry of quantum computing and communications, but getting industry to pursue and finance research into new flavors of QM could be harder than getting academia to do it, even if one flavor does offer tantalizing features.
I certainly hope they will!
Do you have any reference on how "Pilot wave theories can result in deviations from quantum mechanics"?
Moreover, what's your opinion on a stochastic interpretation of quantum mechanics?
Taking up Lorraine Ford's thread: is life deterministic? It is deterministic as long as the pattern continues; the moment you break out of the pattern as a matter of insight, that insight may be inventive, and therefore we come upon something totally new. Evolution talks about the animal kingdom and the plant kingdom, let us say the evolution of the animate world. Then I think there is also the evolution of the inanimate world. Let us start with a big bang and arrive at iron; thereafter, matter evolved into gold and so on up to uranium. Now the radioactive elements are unstable because they are recent species; they have not stabilized, or rather fully fallen into a pattern. What I am trying to say is that life is only a branch of inanimate evolution, i.e., the evolution of the universe branched off into inanimate evolution and animate evolution. And animate evolution is not deterministic; it was a quirk.
@Sabine
There is a weird version of Creationism that appears to be worryingly similar to SD. To put it briefly, some creationists accept that all the evidence from the archeological record is real and that the conclusion that life forms have evolved is a logical deduction.
However, these creationists believe that all this evidence was put there on purpose by God to make us believe that the world is several billion years old, while in fact it was created only a few thousand years ago. It's just that God put every stone and every bone in the right place to make us jump to the Darwinian conclusion.
So I wonder. How's that different from SD when you replace God by Big Bang, and Darwinian evolution with QM?
The Lord is malicious, after all...
Opamanfred,
I explained this very point in an earlier blogpost. The question is whether your model (initial condition, state space, dynamical law) provides a simplification over just collecting data. If it does, it's a perfectly fine theory.
Superdeterminism, of course, does that by construction because it reproduces quantum mechanics. The question is whether it provides any *further* simplification. To find out whether that's the case you need to write down a model first.
Think, just think please... is evolution possible without memory? Is life possible without memory? Certainly not. Is memory possible without "recording"? Certainly not. Could life on earth have been possible if "recording" was not already going on in the inanimate world, if "memory" was not there in the inanimate world? Taking this questioning further, I suspect that "recording" takes place even at the quantum level, and "memory" is part of the quantum world. And it seems that this capacity to "record" and hold in "memory" gradually led to the appearance of gold and life on earth.
...let me take the thread...uninterpreted primordial up....
Entropy. Firstly, "energy in a pattern is matter" (J. Krishnamurti). When the pattern repeats - not necessarily reproduction - we have matter. Because it is energy functioning in a pattern, matter is a program. And because that pattern of functioning repeats, it is a record. When energy falls into a pattern, is trapped in a pattern, it becomes a record of that energy, and that record is matter. (You cannot notice a pattern without time, and also, because matter is a record, matter is time - about this some time later.) There is undefined primordial energy to begin with. Then some energy, as a definition, is matter. Energy as a definition is matter. Entropy tells us what the primordial energy is doing. Is the energy absolutely quiet, as the uninterpreted primordial or the undefined, or is it trapped in a complex pattern and well defined, highly interpreted, as matter? In this context, the uninterpreted primordial has maximum entropy, and the most interpreted primordial, like man and fossil fuels, has the least entropy (so far). Furthermore, the quantum world has greater entropy than the macrocosm.