“There are only three people in the world who understand Superdeterminism,” I used to joke, “Me, Gerard ‘t Hooft, and a guy whose name I can’t remember.” In all honesty, I added the third person just in case someone would be offended I hadn’t heard of them.

What the heck is Superdeterminism? you ask. Superdeterminism is what it takes to solve the measurement problem of quantum mechanics. And not only this. I have become increasingly convinced that our failure to solve the measurement problem is what prevents us from making progress in the foundations of physics overall. Without understanding quantum mechanics, we will not understand quantum field theory, and we will not manage to quantize gravity. And without progress in the foundations of physics, we are left to squeeze incrementally better applications out of the already known theories.

The more I’ve been thinking about this, the more it seems to me that quantum measurement is the mother of all problems. And the more I am talking about what I have been thinking about, the crazier I sound. I’m not even surprised no one wants to hear what I think is the obvious solution: Superdeterminism! No one besides ‘t Hooft, that is. And that no one listens to ‘t Hooft, despite him being a Nobel laureate, doesn’t exactly make me feel optimistic about my prospects of getting someone to listen to me.

The big problem with Superdeterminism is that the few people who know what it is, seem to have never thought about it much, and now they are stuck on the myth that it’s an unscientific “conspiracy theory”. Superdeterminism, so their story goes, is the last resort of the dinosaurs who still believe in hidden variables. According to these arguments, Superdeterminism requires encoding the outcome of every quantum measurement in the initial data of the universe, which is clearly outrageous. Not only that, it deprives humans of free will, which is entirely unacceptable.

If you have followed this blog for some while, you have seen me fending off this crowd that someone once aptly described to me as “Bell’s Apostles”. Bell himself, you see, already disliked Superdeterminism. And the Master cannot err, so it must be me who is erring. Me and ‘t Hooft. And that third person whose name I keep forgetting.

Last time I made my 3-people-joke was in February during a Facebook discussion about the foundations of quantum mechanics. On this occasion, someone offered in response the name “Tim Palmer?” Alas, the only Tim Palmer I’d heard of is a British music producer from whose videos I learned a few things about audio mixing. Seemed like an unlikely match.

But the initial conditions of the universe had a surprise in store for me.

The day of that Facebook comment I was in London for a dinner discussion on Artificial Intelligence. How I came to be invited to this event is a mystery to me. When the email came, I googled the sender, who turned out to be not only the President of the Royal Society of London but also a Nobel Prize winner. Thinking this must be a mistake, I didn’t reply. A few weeks later, I tried to politely decline, pointing out, I paraphrase, that my knowledge about Artificial Intelligence is pretty much exhausted by it being commonly abbreviated AI. In return, however, I was assured no special expertise was expected of me. And so I thought, well, free trip to London, dinner included. Would you have said no?

When I closed my laptop that evening and got on the way to the AI meeting, I was still wondering about the superdeterministic Palmer. Maybe there was a third person after all? The question was still circling in my head when the guy seated next to me introduced himself as... Tim Palmer.

Imagine my befuddlement.

This Tim Palmer, however, talked a lot about clouds, so I filed him under “weather and climate.” Then I updated my priors for British men to be called Tim Palmer. Clearly a more common name than I had anticipated.

But the dinner finished and our group broke up and, as we walked out, the weather-Palmer began talking about free will! You’d think it would have dawned on me then I’d stumbled over the third Superdeterminist. However, I was merely thinking I’d had too much wine. Also, I was now somewhere in London in the middle of the night, alone with a man who wanted to talk to me about free will. I excused myself and left him standing in the street.

But Tim Palmer turned out to not only be a climate physicist with an interest in the foundations of quantum mechanics, he also turned out to be remarkably persistent. He wasn’t remotely deterred by my evident lack of interest. Indeed, I later noticed he had sent me an email already two years earlier. Just that I dumped it unceremoniously in my crackpot folder. Worse, I seem to vaguely recall telling my husband that even the climate people now have ideas for how to revolutionize quantum mechanics, hahaha.

Cough.

Tim, in return, couldn’t possibly have known I was working on Superdeterminism. In February, I had just been awarded a small grant from the Fetzer Franklin Fund to hire a postdoc to work on the topic, but the details weren’t public information.

Indeed, Tim and I didn’t figure out we have a common interest until I interviewed him on a paper he had written about something entirely different, namely how to quantify the uncertainty of climate models.

I’d rather not quote cranks, so I usually spend some time digging up information about people before interviewing them. That’s when I finally realized Tim had already been writing about Superdeterminism when I was still in high school, long before even ‘t Hooft got into the game. Even more interestingly, he wrote his PhD thesis in the 1970s about general relativity before gracefully deciding that working with Stephen Hawking would not be a good investment of his time (a story you can hear here at 1:12:15). Even I was awed by that amount of foresight.

Tim and I then spent some months accusing each other of not really understanding how Superdeterminism works. In the end, we found we agree on more points than not and wrote a paper to explain what Superdeterminism is and why the objections often raised against it are ill-founded. Today, this paper is on the arXiv.

**Rethinking Superdeterminism**

S. Hossenfelder, T.N. Palmer

arXiv:1912.06462 [quant-ph]

Thanks to support from the Fetzer Franklin Fund, we are also in the midst of organizing a workshop on Superdeterminism and Retrocausality. So this isn’t the end of the story, it’s the beginning.

I'm glad to have argued in favor of super-determinism as a good answer to the Measurement Problem a few months ago. :-)

There's this observation in your paper:

"Should we discover that quantum theory is not fundamentally random, should we succeed in developing a theory that makes predictions beyond the probabilistic predictions of quantum mechanics, this would likely also result in technological breakthroughs."

I'd like to also mention another possible next step in this line of thought, which I think you've argued against before, that should there be evidence for a superdeterministic universe, this would also increase the probability that our universe is a simulation running in an external universe (with possibly or even probably different rules of physics).

Should we be able to observe the statistical anomalies of how quantum mechanics deviates from true randomness, we might also be able to infer something about that external universe. For example statistical "shortcuts" that would reduce the computer "runtime" of any external simulation would support such a view.

One could argue that quantum mechanics could be such a computing time shortcut to begin with: by adding the randomness of quantum mechanics the required precision from the universe is a lot lower than from a universe following precise, classical physics.

Intentionally imprecise calculations and noise insertion are in fact valid techniques in numerical analysis and computer simulations today.
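As one everyday instance of that (my own illustration, not the commenter's): Monte Carlo integration deliberately replaces an exact computation with random sampling, accepting statistical noise in exchange for a far cheaper calculation. A minimal sketch, estimating pi by sampling random points in the unit square:

```python
import random

def pi_estimate(n, seed=0):
    """Monte Carlo estimate of pi: random sampling traded for
    deterministic precision, as with the noise-based methods above."""
    rng = random.Random(seed)
    # Count samples falling inside the quarter unit circle.
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(n))
    return 4 * inside / n

print(pi_estimate(100_000))  # close to 3.14; accuracy improves like 1/sqrt(n)
```

The point is only that injected randomness can stand in for precision, not that this is how a simulated universe would actually work.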

An observer measures the length of a ruler, at any moment of time that she thinks she freely chooses, and always finds that it is 5 cm. Her conclusion: there is a 5 cm ruler on the table. Its length is always 5 cm. Why? Because the ruler (or the world) never knows when she decides to measure it. Her choice of measurement time is free.

But this conclusion would be wrong if the world is superdeterministic. The ruler could be 8 cm when the observer was not measuring it, and change to 5 cm whenever she measures it. How does the ruler know when to change to 5 cm? It was all written in the initial condition. The initial condition determines that whenever she measures, the length becomes 5 cm.

In general, if the world is deterministic, all of our physical theories could be wrong and we will never know it. We can never test it because our very act of testing it may be predetermined to never find any errors in our theories, even though they are wrong.

tytung,

You use a straw-man version of superdeterminism that has nothing to do with its intended meaning.

Superdeterminism, according to Bell, means a theory that implies that the hidden variables depend on the detectors' settings. There is a correlation between the hidden variables and the states of those detectors. Superdeterminism does not say that the state of the detectors changes when they are detecting the particles.

Your analogy is also silly from another point of view. A superdeterministic interpretation of QM will have the same predictions as QM, that's the point of passing Bell's theorem. So, if you believe that a superdeterministic interpretation of QM predicts that the ruler changes when you use it, you also need to accept that QM itself must predict such a thing. I would like to see what evidence you have to back that up.

"A superdeterministic interpretation of QM will have the same predictions as QM, that's the point of passing Bell's theorem."

That's one of the things I don't really get, though. QM gives me a well-defined value for the amount of violation of a given Bell inequality, say Tsirelson's bound for the CHSH inequality (2*sqrt(2)). A superdeterministic theory, it would seem, could yield any value up to the algebraic maximum of four. So why the Tsirelson bound? QM provides an answer; and it seems like that explanatory power is lost with superdeterminism.
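For concreteness, the three numbers in play here can be checked directly. The sketch below (my addition, using the standard optimal measurement angles for the singlet state) evaluates the CHSH combination S = |E(0,0) + E(0,1) + E(1,0) - E(1,1)| for the best local deterministic model, for quantum mechanics, and for a Popescu-Rohrlich box:

```python
import itertools
import math

def S(E):
    """CHSH combination |E(0,0) + E(0,1) + E(1,0) - E(1,1)|."""
    return abs(E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1))

# Local deterministic models: each side pre-assigns an outcome (+1 or -1)
# to each of its two settings; enumerate all 16 assignments.
lhv = max(S(lambda x, y, o=o: o[x] * o[2 + y])
          for o in itertools.product([-1, 1], repeat=4))

# Singlet-state quantum mechanics: E(x, y) = -cos(a_x - b_y), optimal angles.
a = [0.0, math.pi / 2]
b = [math.pi / 4, -math.pi / 4]
qm = S(lambda x, y: -math.cos(a[x] - b[y]))

# Popescu-Rohrlich box: maximal correlations allowed by no-signalling.
pr = S(lambda x, y: -1 if x == 1 and y == 1 else 1)

print(lhv, qm, pr)  # 2, 2.828... (= 2*sqrt(2), Tsirelson's bound), 4
```

So the local bound is 2, the quantum value is 2*sqrt(2), and the algebraic/no-signalling maximum is 4, which is the gap the comment is asking about.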

I don't think it is a straw-man version. If we accept superdeterminism, we must accept its most general consequences, such as its application in macroscopic observations, not just limited to the situation formulated in the HVT formalism.

In my example, the decision of carrying out a measurement can be taken as a state of the detector. So there is a correlation between the hidden variable of the ruler (that determines its length) and the detector state.

Regarding the idea of accepting superdeterminism to bypass Bell's theorem, I think it is a futile attempt. There is a reason why everyone avoids this path. A severe consequence of superdeterminism (as mentioned in the last paragraph of my previous comment) is that it implies the impossibility of the scientific method to discover the state of the world and that we can never know if our theories describe the world. The scientific method hinges on the idea of freedom of measurement choice.

Jochen,

Depending on the formalism of the specific superdeterministic theory you can indeed get different predictions. This is also true for QM, GR or Newtonian mechanics. Sure, if instead of F=m*a we put F=sqrt(m*a) the predictions would be different. Yet, for some reason, this does not seem to be a widespread complaint against the explanatory power of classical mechanics.

The error in your argument consists in asking for a unique prediction from a class of theories (superdeterministic theories) instead of a specific, well defined theory. If your argument is sound you may also rule out all deterministic theories, all field theories, etc.

tytung,

"If we accept superdeterminism, we must accept its most general consequences, such as its application in macroscopic observations, not just limited to the situation formulated in the HVT formalism."

Sure. Please deduce the macroscopic implications of the assumption that the spin of the entangled particles is not independent of the configuration of the charged particles (electrons and quarks) in the detectors.

"In my example, the decision of carrying out measurement can be taken as a state of detector. So there is a correlation between the hidden variable of the ruler (that determines its length) and the detector state."

OK, I'll propose you a clear example of superdeterministic theory. It's called stochastic electrodynamics (SED). You can find an introductory material here:

Stochastic Electrodynamics: The Closest Classical Approximation to Quantum Theory

Timothy H. Boyer, Atoms 2019, 7(1), 29; https://doi.org/10.3390/atoms7010029

It's on the arXiv:

https://arxiv.org/pdf/1903.00996.pdf

Please perform the required calculations using this theory and let me know what the predicted length of the ruler in your experiment will be.

"A severe consequence of superdeterminism (as mentioned in the last paragraph of my previous comment) is that it implies the impossibility of the scientific method to discover the state of the world and that we can never know if our theories describe the world."

Can you provide an argument that takes as a premise the definition of superdeterminism given by me (and by Bell) and outputs "the impossibility of the scientific method to discover the state of the world and that we can never know if our theories describe the world"?

"The scientific method hinges on the idea of freedom of measurement choice."

Superdeterminism does not require that the choice is not free, whatever you might mean by that; it only requires that this "choice" is not independent of the physical state of other systems. This is a direct consequence of us being made out of charged particles (electrons and quarks) and of the fact that charged particles interact electromagnetically at arbitrarily large distances. Unless you want to argue that classical electromagnetism is not compatible with the scientific method, you should revise your stance.

Jochen:

"A superdeterministic theory, it would seem, could yield any value up to the algebraic maximum of four."

I have no idea why you think that is and it would be really great if people could please stop inventing things that they dislike about superdeterminism and then claim they must be general properties.

As we state in the paper very clearly of course any viable superdeterministic theory must reproduce quantum mechanics to within present measurement precision. It follows as a corollary that it obeys any bounds that quantum mechanics also fulfills.

*Of course* the challenge is to come up with a theory that actually does that. No one says it's easy. But please stop preemptively declaring that it must be impossible just because you cannot think of a way to do it.

"I have no idea why you think that is and it would be really great if people could please stop inventing things that they dislike about superdeterminism and then claim they must be general properties."

The point is simply that on superdeterminism, you have the actual distribution of values, and the distribution of values as sampled in the actual experiment---i. e. the experiments won't yield a fair sample of the actual distribution. So, you inherently have more explananda than a non-superdeterministic theory---there, all you have to explain is why the distribution of values is the way it is, as you fairly sample from it, while on a superdeterministic theory, you also have to explain how the sampling occurs. Thus, we inherently need an additional level of explanation in order to fit our observations.

This is independent of what specific superdeterministic theory we consider. So while Andrei is right that I'm kinda comparing apples and oranges in comparing a specific theory (quantum mechanics) with a class of theories (superdeterministic ones), the problem remains---the non-superdeterministic theories produce the outcomes we observe, because they're just what's there (whether in the sense of pre-existing hidden variables or values produced probabilistically upon collapse), while superdeterministic theories have the values that are there *and* some particular mechanism for us to observe only a suitably biased subset of those values to match observation.

To give an example, consider a coin being thrown repeatedly, its value being observed at 'freely chosen' moments, which we observe to be heads up 2/3 of the time. A non-superdeterministic theory would simply explain that by the coin being biased in that way, while a superdeterministic theory could hold that the coin is fair (or biased in nearly every way), we just have an increased likelihood of making an observation if the coin in fact came up heads. That theory could assume almost any underlying distribution, and postulate a suitable 'rule' for us checking its value so as to match observations---but could equally well match any other possible observed distribution, just by some other checking rule. In the same way, you can obtain any value for Bell inequality violation.
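The coin example is easy to simulate. The sketch below (my own illustration, with made-up probabilities) shows that a fair coin plus a checking rule correlated with the coin's face reproduces exactly the same observed frequency as a biased coin sampled fairly:

```python
import random

def observed_heads_fraction(p_heads, p_look_if_heads, p_look_if_tails,
                            n=200_000, seed=1):
    """Fraction of *observed* flips showing heads, when the decision to
    look is correlated with the coin's actual face (the 'checking rule'
    of the toy superdeterministic story)."""
    rng = random.Random(seed)
    seen_heads = seen_total = 0
    for _ in range(n):
        heads = rng.random() < p_heads
        look_prob = p_look_if_heads if heads else p_look_if_tails
        if rng.random() < look_prob:
            seen_total += 1
            seen_heads += heads
    return seen_heads / seen_total

# Fair coin, but we are twice as likely to look when it shows heads:
biased_sampling = observed_heads_fraction(0.5, 0.8, 0.4)
# Biased coin (heads 2/3 of the time), with fair sampling:
fair_sampling = observed_heads_fraction(2 / 3, 0.5, 0.5)
print(biased_sampling, fair_sampling)  # both come out near 2/3
```

Both runs land on the same phenomenological 2/3, which is the commenter's point: from the observed frequency alone one cannot distinguish the underlying distribution from the checking rule.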

Jochen,

For all I can tell what you are saying is that there are many ways to write down wrong theories. I don't know why you think this is an insight.

And of course as I said the whole point of superdeterminism is that quantum mechanics derives from it. So, yes, you have to "explain" how the distribution comes about. Alas, as I already pointed out above, if all you want is to reproduce the predictions of quantum mechanics then these will be fulfilled because you reproduce quantum mechanics qua assumption. In other words, if you want to merely make the predictions that quantum mechanics makes you might as well use quantum mechanics.

Jochen,

"on superdeterminism, you have the actual distribution of values, and the distribution of values as sampled"

This is false. Where did you get this? Superdeterminism implies a correlation between the hidden variable (say the particles' spin) and the experimental setup (detectors' orientation). It is a similar case with the Solar system where you have a correlation between the planetary orbits and the mass of the Sun. A different experimental context implies different spins in a Bell test just like a different Solar mass implies different planetary orbits. It's not that unobserved entangled particles do not display the same correlations as the observed ones, they always do. They will be correlated with the state of whatever is going to absorb them (say the laboratory wall).

That's not really the point I'm trying to make. Superdeterministic theories essentially allow you to pick your favorite ontological commitments and stick with them, no matter the evidence---because no matter what you observe, you never really have to adapt your picture of what actually happens, you can always just get away with adapting the rule according to which observations are sampled from the actual underlying distribution, while keeping that distribution the same.

Consider the difference between Quantum Mechanics, a local hidden variable theory, and the world as described by Popescu-Rohrlich boxes. Most would hold that these are three different, competing physical theories, with experiments (namely, measuring the degree to which certain Bell inequalities are violated) necessary to adjudicate between them.

By adopting superdeterminism, however, you can just wholesale reject that: you can say, well, the world is described by the local realist theory, but we only get to see those outcomes that make it look like it's described by QM. Or, if that had been the case, as described by PR boxes. Or even, the world really is described by PR-boxes, which would in a certain sense be simpler, since these are the maximal correlations allowed by the no-signaling constraint, yet we only observe those such that things look like they were described by quantum mechanics.

Each of these combinations is, in principle, possible; thus, this adds a degree of arbitrariness to theory building you don't have in non-superdeterministic theories, where we just sample fairly from the distribution of outcomes the theory produces.

Andrei wrote to tytung:

>OK, I'll propose you a clear example of superdeterministic theory. It's called stochastic electrodynamics (SED).

Andrei, looking over the paper you link to, it seems to me to involve simple old-fashioned classical determinism, not superdeterminism in any significant sense.

By the way, it seems inevitable that SED will fail to give correct results for the Aspect-type experiments that test Bell's theorem. SED is classical and therefore local in Bell's sense, but of course the experimental results violate Bell locality.

Part of the point of superdeterminism is to come up with a theory that agrees with the Aspect-type experiments (and thereby violates Bell's inequality) but in some reasonable sense is still local.

SED won't do that.

As far as I can tell, tytung's objection rehashes the old "free-will vs. determinism" debate. In practice, that issue is not going to be seriously addressed until someone presents a concrete superdeterministic theory that can explain, for example, the results of Aspect-type experiments.

Dave

Jochen,

I cannot make sense of what you write. You have a theory. That theory makes predictions. If these predictions are in conflict with experiment, the theory is wrong. You can then go and modify the theory. Alas, if you just modify the theory every time it makes a wrong prediction, your theory doesn't make any prediction, so it's not a scientific theory.

That is correct. It is correct regardless of what field of science or what subdiscipline of physics you are talking about. It is also correct when it comes to superdeterminism of course, but I don't know what you think anyone learns from that.

PhysicistDave,

"Andrei, looking over the paper you link to, it seems to me to involve simple old-fashioned classical determinism, not superdeterminism in any significant sense."

The "old-fashioned classical" field theories like classical EM are superdeterministic in Bell's sense. Due to the long-range interactions between charged particles, this theory does not allow one to write down the equations describing the hidden variables without a direct mention of the detectors' settings (in the form of the electron/quark distribution that defines those settings).

SED is just classical EM with a certain assumption about the initial state, so it's superdeterministic as well.

Bell-type experiments cannot refute classical electromagnetism either.

If you think that classical EM allows you to have two distant independent systems please provide me with such an example.

Andrei:

"Superdeterminism implies a correlation between the hidden variable (say the particles' spin) and the experimental setup (detectors' orientation)."

To me, this just reads as a rephrasing of what I said---there's some 'background' distribution of values, from which experiments however (due to the correlations you mention) do not sample fairly, hence yielding a 'phenomenological' distribution that's different from the background distribution. In particular, while the background distribution may be in line with the predictions of local realism, the phenomenological distribution may violate a Bell inequality, thus 'saving' local realism.

The problem with this is that you can do this to 'explain' any inconsistency of your theory with the data: just add the right sorts of correlations to get the observed distribution, and you'll never have to question your original commitments---i. e. you get to hold fast to local realism no matter what the evidence.

Sabine:

"You have a theory. That theory makes predictions. If these predictions are in conflict with experiment, the theory is wrong. You can then go and modify the theory."

Well, there's different ways to modify a theory, though. In your post on the questionable utility of the multiverse, you use an analogy to a set of axioms as giving a physical theory. So suppose we have an axiom, LR, which gives certain constraints on the expected outcomes. These constraints are experimentally violated. One way to modify your theory then would be to throw out LR, and try to find something better.

Another way, however, would be to add another axiom to try and keep LR, while removing the inconsistency with observation. That's what superdeterminism seems to be: you add an axiom SD (or perhaps, an 'invariant set postulate'?), which has the effect of 'masking' the background distribution in such a way as to lead to a phenomenological distribution that's in line with observations. That is, you weaken the predictions of your theory---different versions of SD would lead to different phenomenological distributions, you just choose the one that fits the observations---effectively introducing new parameters to match the data.

In effect, this isn't much different from the strategy of the multiverse proponents: you don't remove anything from your original theory, but extend it, with each possible addition yielding a possible way the actual world could be, but isn't. You start out with a theory including LR, and once that doesn't fit the data, extend it to LR + SD, where SD can always be chosen in such a way as to fit the data---basically, any data you can think of.

I mean, once that original SD doesn't yield a good fit anymore, why not just add another? And so on, accommodating each new discrepancy with another ad-hoc modification to tweak the background distribution of values to the currently observed one.

Some propose that a good theory should be one that's hard to vary---that is, that has few, if any, loose screws you can tune to fit the observations. Superdeterminism seems invariably to introduce new screws with the express purpose of being tunable to accommodate almost any sort of data---a way to soften up a rigid theory, make it more malleable and adaptable---something you've been vocally critical of in other contexts.

Jochen,

"that's what superdeterminism seems to be: you add an axiom SD (or perhaps, an 'invariant set postulate'?), which has the effect of 'masking' the background distribution"

This is patently false. To begin with, you need SD to *remove* an inconsistency, as we explain in the paper. Second, SD starts with removing an assumption (that's the assumption of statistical independence). Third, and most importantly, the point of SD is *not* to reproduce quantum mechanics, but to make predictions beyond that. For those predictions, you need SD and you need further details to specify the model. Of course, as I have said, if all you want is to make the predictions that quantum mechanics already makes, you don't need superdeterminism. No one debates that.

Sabine, you've replied to my comment, but I don't see it posted yet.

Still, I think we can narrow down our disagreement somewhat. First of all, I don't think SD really removes some extra assumption of statistical independence; that 'assumption' really just says that we can observe the actual distribution of hidden variables, i. e. that equality holds in your equation (3). Anything else, to me, seems to be in need of some further justification---it seems that any superdeterministic theory would have the 'phenomenological' probability distribution on the left hand side of (3) be some function of the 'background' distribution on the right hand side, and the detector orientations.

The choice of that function is where the additional freedom comes in, and also what makes it such that superdeterminism 'outputs "the impossibility of the scientific method to discover the state of the world and that we can never know if our theories describe the world"', as Andrei put it above.

Because ultimately, the same phenomenological distribution (lhs of 3) can be obtained as different functions of different background distributions (rhs of 3). So, going back to my coin example, you might hold that the coin is fair, and we check such as to preferentially see the 'heads' outcome; you might hold that the coin is biased towards showing 'tails' more often, with a different bias, such that we see it coming up 'heads' 2/3 of the time; and so on, whereas on a non-SD theory, you'd just consider the coin itself to be biased. (If that's still too unclear, my apologies, I will write out the example more explicitly when I have more time.)

Jochen,

I am not even disagreeing with you, I am merely telling you that you are not making sense. First, an assumption you don't make is an assumption you don't make. I don't know what there's to discuss. Second and more importantly, we know if our theories describe the real world when we put them to experimental test. Someone should put SD to experimental test. Third, I have now told you sufficiently often that if you merely want to make the predictions that quantum mechanics makes you do of course not need a new theory. So why are you still talking about coins and stuff.

Andrei wrote to me:

>Bell-type experiments cannot refute classical electromagnetism either.

Quantum mechanics violates Bell's inequality.

Classical electromagnetism does not. So classical electromagnetism is really not relevant to the whole issue.

I'm willing to be corrected here if anyone can propose an actual, concrete experiment in which classical electromagnetism predicts a result that violates Bell's inequality, but I am pretty sure this does not happen.

Andrei also wrote:

>If you think that classical EM allows you to have two distant independent systems please provide me with such an example.

Well, electromagnetic interactions die off at least as 1/r, usually faster. If they are a few light years apart they are effectively independent.

In any case, that is not really the point. Bell's theorem is based on no faster-than-light signalling and on counterfactual definiteness. Both hold for classical electromagnetism.

Andrei also wrote:

>SED is just classical EM with a certain assumption about the initial state, so it's superdeterministic as well.

Well... the assumption about the initial state in SED is, as far as I can see, simply that there is a Lorentz-invariant random distribution of radiation present (by the way, that will give an infinite energy density).

Again, as far as I can see, there is no special correlation between the random radiation in SED and detector settings and such. And there is no need for it: SED will trivially obey Bell's inequality. So, I do not see in what sense it is super-deterministic.

By the way, Feynman-Wheeler radiation theory is an interesting example of a theory that appears to have temporally bidirectional causation, but that is, in fact, observationally equivalent to classical electromagnetism, so that you do not see the bidirectional causation. Which, I suppose, simply shows that dealing with all this involves difficult subtleties.

Again, though, I want to emphasize that the way to make all these issues very concrete is to focus on violations of the Bell inequality. QM, both in theory and experimentally, violates the Bell inequality. Bell argued that this is impossible if you have Bell locality and counterfactual definiteness.

So, you only have a problem that needs solving if you have a violation of the Bell inequality. Superdeterminism is a possible loophole in Bell's argument (basically, it gets around counterfactual definiteness).

It all goes back to Bell: unless you are facing a violation of Bell's inequality (or a similar Bell-like inequality), there is just no issue.

And, I see no reason to think SED violates the Bell inequality, so it seems to me that SED is not relevant. By the way, that is also why I am pretty sure that SED cannot explain QM: QM violates the Bell inequality and SED does not.

Dave

(I'm sorry this got a bit long; this is part 1/2)

Sabine:

"I am not even disagreeing with you, I am merely telling you that you are not making sense."

Yes, I am aware, you've gotten that across quite clearly. But as you acknowledge, this is a difficult subject that, after all, only you and like two other people fully understand; so perhaps you'll permit me to try to figure out where, exactly, what I'm saying stops making sense.

With that, the promised/threatened elaboration. I'm going to be talking about coins and stuff, but that doesn't have anything to do with whether the theory is just supposed to reproduce quantum mechanics or not, but rather, with whether it's a sensible theory in the first place, no matter what, if anything, it predicts.

So start with a system consisting of a coin in a box that's repeatedly flipped (by some internal mechanism, we may imagine). We perform experiments by periodically opening the box and noting which way the coin faces. Suppose we have two theorists, Alice and Bob, trying to predict the distributions of heads vs. tails we will observe.

Bob's theory is that, since the situation is symmetric, and symmetry is always nice, the probabilities of observing heads or tails should be equal, and hence, 1/2. Alice's theory, on the other hand, is that we'll see heads 2/3 of the time (whatever her reasoning).

Now, and that's the first important point, there's only one thing either of them can sensibly predict: namely, that if we check, we're actually going to see heads about half, or two thirds, of the time. They're not making an 'assumption of statistical independence' in making this prediction, they're simply predicting all that their theories enable them to predict. They're saying that we'll see what we'll see, because it's what's there.

Neither of them would be justified in predicting a probability at variance with their respective frequencies. If Bob were to say, the coin lands heads 1/2 the time, but we'll observe heads 1/3 of the time because the times we check are correlated with the way the coin is facing, we wouldn't praise him for cleverly avoiding the dodgy 'assumption of statistical independence', we'd consider him to simply be irrational.

OK, moving on. Alice and Bob then resolve to settle their differences by means of experimentation. They (or an experimentalist they trust) open the box repeatedly, check the way the coin faces, note that down, and compute relative frequencies. And it just so happens that the coin faces heads roughly two thirds of the time.

But Bob's still pretty enamored with his symmetry principle. No sensible theory, he reasons, could violate it. Something else must be going on. So now he claims that, in fact, the times we check are correlated with the way the coin is facing---such that we have a higher probability of checking when the coin is facing heads up, hence observing it to show heads more often than it in fact does---IOW, the coin still 'actually' faces heads up half the time, we're just constrained to observing it facing heads two thirds of the time.
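For concreteness, Bob's gloss can be simulated. In this toy sketch (the look-probabilities 0.8 and 0.4 are illustrative choices, not anything from the text), the coin is fair, but opening the box is correlated with the face it shows, so the recorded frequency comes out at 2/3 even though the underlying frequency is 1/2:

```python
import random

random.seed(1)

N = 100_000
looks = []           # faces recorded on the occasions we actually open the box
heads_total = 0
for _ in range(N):
    heads = random.random() < 0.5        # the coin 'actually' lands heads half the time
    heads_total += heads
    # Bob's superdeterministic gloss: we are more likely to look when the
    # coin shows heads. (0.8 vs. 0.4 gives P(heads | looked) = 2/3.)
    if random.random() < (0.8 if heads else 0.4):
        looks.append(heads)

underlying = heads_total / N
observed = sum(looks) / len(looks)
print(f"underlying heads frequency: {underlying:.3f}")
print(f"observed heads frequency:   {observed:.3f}")
```

The arithmetic behind the 2/3: P(heads | looked) = 0.5*0.8 / (0.5*0.8 + 0.5*0.4) = 2/3.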

(Part 2/2)

There are several problems I have with that. One is that what started out as a theory of the box has now become a theory of the box and its observer---that is, on a superdeterministic theory, object systems can never be modeled independently of their measurement context. We thus lose an important criterion of objectivity, the separation between the experimenting subject and the object being experimented upon. This isn't fatal, and there are good arguments that this distinction can't be upheld in a quantum context anyway, and certainly not in theories of the whole universe, there being precious few outside observers available in that case. Still, it marks a departure from the way theories are typically built, and one should at least question whether that's a price worth paying.

Two, somewhat more philosophically, such theories are somewhat self-defeating: collecting evidence, we formulate a theory based on that evidence; but if that theory entails that the evidence we've built it on is spurious, we never actually had any justification to propose it. Hence, accepting the theory entails questioning the evidence that forms the only basis on which to accept it. Bob can't have formed his belief that symmetry is good on the empirical basis of observing the coin; hence, his superdeterminist 'correction' isn't supported by the evidence given by the experiments. Observing the coin to come up heads two thirds of the time doesn't support the theory that it 'actually' comes up heads half the time and we're merely constrained to observe two thirds---we don't have, and can't have, any evidence of it 'actually' coming up heads half the time; we just see it coming up heads two thirds of the time.

In the same vein, Bob's theory is completely immune to any sort of outcome we could have observed. All he has to do is to adjust the correlations---to use a different axiom SD, as I phrased it above---to accommodate every possible observed frequency. He never has to make any change on his original, symmetry-based theory---no matter the evidence, he gets to hold fast to that.

Finally, as a consequence, there's nothing unique about Bob's combination of background-theory and superdeterministic gloss: one could imagine Charlie, Dave, Eve and so on, each initially predicting a different distribution of heads and tails---heads 4/5 of the time, 1/10 of the time, 99% of the time---each just coming up with their own superdeterministic hampering of observing the actual frequency, and matching the actually observed one.

Each of these theories, and infinitely many others, is at least as good as Bob's, and empirically indistinguishable from it. There's only one theory that stands out from these, and that's the one that doesn't mention any correlations between observations and outcomes, where you observe what you observe just 'cause it's what's there, and that's Alice's. Moving to a superdeterministic framework forfeits this kind of uniqueness, leaving us with a theory that's highly underspecified by the available evidence, and can be easily made to accommodate whatever further observations one cares to make.

Now, I'm not claiming what you're doing is like what Bob's doing. But the basic arguments against superdeterminism are, as far as I can see (which may just be due to my present lack of enlightenment, I'll acknowledge), not avoided by your proposal.

Jochen,

Andrei:

"Superdeterminism implies a correlation between the hidden variable (say the particles' spin) and the experimental setup (detectors' orientation)."

"To me, this just reads as a rephrasing of what I said---there's some 'background' distribution of values, from which experiments however (due to the correlations you mention) do not sample fairly, hence yielding a 'phenomenological' distribution that's different from the background distribution. In particular, while the background distribution may be in line with the predictions of local realism, the phenomenological distribution may violate a Bell inequality, thus 'saving' local realism."

There is no such thing as a "background distribution". Any physical context implies a different distribution, specific to it. Going back to my Earth-Sun example, are you claiming that the current orbit of Earth is not representative, that it's just "phenomenological", while the "background" one, which I suppose should be a straight line (when the Sun has no effect), is the "true" orbit?

What you really ask is that charged particles should not be perturbed by the presence of other charged particles, so that a change of the experimental setup would have no effect on the behavior of the test particle. Sorry, this is false in both classical and quantum mechanics.

"The problem with this is that you can do this to 'explain' any inconsistency of your theory with the data: just add the right sorts of correlations to get the observed distribution, and you'll never have to question your original commitments---i. e. you get to hold fast to local realism no matter what the evidence."

This is false. The way in which a certain experimental context determines the hidden variable is not a free parameter you may adjust at will. For example, in classical EM the polarization of the entangled photons depends on the way the electron that emits them accelerates, which in turn is determined by the Lorentz force acting on that electron, which in turn is determined by the charge distribution/momenta of the detectors. For each state of the detectors the polarization is uniquely determined.

PhysicistDave,

"Quantum mechanics violates Bell's theorem.

Classical electromagnetism does not. So classical electromagnetism is really not relevant to the whole issue."

Can you please present the calculation of the predictions of classical electromagnetism for a Bell test? How do you know the inequality is not violated?

"I'm willing to be corrected here if anyone can propose an actual, concrete experiment in which classical electromagnetism predicts a result that violates Bell's inequality, but I am pretty sure this does not happen."

There is no reason for me to do that. It is computationally impossible. What I can do, however, is to attack Bell's independence assumption and show that in classical electromagnetism (CE) it is false. This is pretty easy:

1. The polarisation of an EM wave depends on the specific way the electron accelerates.

2. The only way an electron can accelerate is via the Lorentz force.

3. The Lorentz force is given by the electric and magnetic field configuration at the locus of the emission.

4. The electric and magnetic field configuration at the locus of the emission does depend on the positions/momenta of distant charges.

5. The detectors are composed of charged particles (electrons and quarks).

Conclusion: From 1-5 it follows that the hidden variable, λ, depends on the detectors’ states. I cannot say in what way it depends, only that it does. But this is enough to put Bell's theorem to rest. Logic is beautiful, right?

"Well, electromagnetic interactions die off at least as 1/r, usually faster. If they are a few light years apart they are effectively independent."

Let's take a simple case, two orbiting charges. At what distance, r, would they stop orbiting each other and start moving randomly? 1 m? 1 km? 1 light year? I see no such limitation in the CE formalism, but I guess you should know better.

"Bell's theorem is based on no faster-than-light signalling and on counterfactual definiteness. Both hold for classical electromagnetism."

This is true, but you should be careful how you implement counterfactuality. Say you want to change the setting of one detector, far away from the source. In order to do that you need to change the positions of some charged particles. That change should be accompanied by a corresponding change of the electric and magnetic fields at the location of the source so that the CE equations are satisfied. That means a change of the hidden variable. So, independence fails.

Andrei:

"There is no such thing as a "background distribution". Any physical context implies a different distribution, specific to it."

I'm referring specifically to equation (3) in the paper. The right hand side is what I would call the 'background distribution', the left hand side is my 'phenomenological distribution'. These not agreeing (while maintaining locality, such that a different choice of measurement directions does not actively influence the hidden variables) is what I would say defines a superdeterministic theory. I think if you disagree with that, you have a different notion in mind than Sabine and Tim Palmer in their paper.

"What you really ask is that charged particles should not be perturbed by the presence of other charged particles so that a change of the experimental setup should have no implication on the behavior of the test particle. Sorry, this is false in both classical and quantum mechanics."

I think you're mixing up two things here (perhaps only terminologically). That a measurement setting should have an (instantaneous) influence on the hidden variable distribution would generically violate locality. I agree that by abandoning locality, Bell violations can be replicated.

But the issue with a superdeterministic theory rather is that only certain combinations of measurement settings and hidden parameter values can occur together---without the measurement settings thereby having any influence on the hidden variable values.

As a stark case, consider my coin example. Suppose the coin shows 'heads' half the time. A superdeterministic theory could be cooked up that essentially entails (by, for instance, a suitable fixing of the initial conditions) that in every case where one actually looks in the box, one sees heads, leading to the conclusion that it's always heads. This isn't an influence of the choice of measurement on the value of the variable (the orientation of the coin); it's just the world aligning such that we're systematically mistaken about the actual distribution of outcomes of the coin flip.

As for the independence of systems, you're right that, in principle, every charge in the universe influences every other. But that doesn't invalidate the notion that in practice, charges can, to an extremely good approximation, be taken to be independent of one another in certain circumstances. Indeed, the success of our theories establishes as much: it makes me confident that my computer will work tomorrow as it does today, even if I haven't taken into account the precise arrangement of charges somewhere in the Alpha Centauri system.

Jochen,

"I'm referring specifically to equation (3) in the paper. The right hand side is what I would call the 'background distribution', the left hand side is my 'phenomenological distribution'."

You need to tell me what the physical meaning of that notation is. The so-called background distribution corresponds to what experiment?

"...while maintaining locality, such that a different choice of measurement directions does not actively influence the hidden variables"

The hidden variables are directly influenced by the measurement directions in the following way: the E-M fields originating from the charged particles in the detectors act at the source, thereby influencing the hidden variable. The state of the detectors at the time of detection will be different, but not independent of the past state (the theory is deterministic). The result is that the hidden variable is not independent of the instantaneous detector orientation even if the theory is local.

"That a measurement setting should have an (instantaneous) influence on the hidden variable distribution would be something that violates locality, generically."

See my above explanation.

"But the issue with a superdeterministic theory rather is that only certain combinations of measurement settings and hidden parameter values can occur together---without the measurement settings thereby having any influence on the hidden variable values."

Same explanation here. The past state of the system determines both the hidden variable and the detectors' settings so that they are not independent, yet the physics is local.

"As a stark case, consider my coin example."

Your coin example is irrelevant because looking at the coin does not make it flip. In electromagnetism, moving some charges around has a physical influence on the test system. This is the crucial point. The so-called quantum behavior is a particularity of field-mediated long-range interactions. Rigid-body mechanics with contact forces only (like coins) does not display such behavior, which makes coins bad examples.

"As for the independence of systems, you're right that, in principle, every charge in the universe influences every other. But that doesn't invalidate the notion that in practice, charges can be thought to an extremely good approximation to be independent of one another in certain circumstances."

Yes, those circumstances are realized when the systems can be approximated by rigid-body mechanics with contact forces only (coins, billiard balls, etc.). This happens when you have large aggregates of particles that are neutral overall (what we would call typical macroscopic objects). For such objects the fields' influences on the center of mass cancel each other, so the objects appear not to interact at all unless they are very close. Once you are interested in small objects, like electrons, atoms or molecules, such interactions cannot be neglected.

"it makes me confident that my computer will work tomorrow as it does today, even if I haven't taken into account the precise arrangement of charges somewhere in the Alpha Centauri system."

This is so because your computer does not base its output on instantaneous properties of single particles. The calculations involve a huge number of particles, so the effect of distant charges cancels out.

Andrei:

"The result is that the hidden variable is not independent of the instantaneous detector orientation even if the theory is local."

Sorry, but that's just not right. I mean, yes, in principle, 'everything is connected', but that doesn't entail that these connections introduce any correlations between systems.

As an example, take the cellular automaton rule 30 (https://en.wikipedia.org/wiki/Rule_30). It's a completely deterministic system, yet, the center column can be taken to produce an effectively random bitstring. So take, say, the first ten thousand bits of that center column, and take the ten thousand bits starting at the one trillionth place of that column.

You'll get two random bitstrings that have no correlation at all (or at least, generically, two bitstrings chosen in such a way will have no correlation---one can of course always be spectacularly unlucky). Yet, they originate from the same, entirely deterministic, process.

One could use one bitstring to yield the 'measurement orientations' to perform measurements on the other. Say, if the nth bit of that bitstring is 1, you measure (read off the value) the nth bit of the other. That way, you'll get a substring of the second bitstring which will have exactly the same statistical properties (again, in sufficient limits etc) as the first one.

In that example, the full bitstring corresponds to \rho(\lambda), while the substring sampled from that by means of the choosing procedure using the other bitstring corresponds to \rho(\lambda|a,b). Clearly, you will have \rho(\lambda) = \rho(\lambda|a,b); thus, the theory describing the 'world' produced by Rule 30 is deterministic, but not superdeterministic.

A superdeterministic theory would be one in which the underlying automaton is such that each such choosing procedure would yield a biased substring; that is, not just bits from the original string (the background distribution) chosen at random, but rather, one with a different distribution of ones and zeros. Moreover, every such choosing procedure would have to yield an equivalently biased sample---say, you always end up sampling 75% ones, instead of ones and zeros in roughly equal proportion, for example.

So no, that the measurement settings and the measured values are produced by the same deterministic process---and hence, are not 'independent' of one another in that sense---does not suffice for a superdeterministic theory; such a theory must have every possible way of making measurements biased in just the right way to yield the observed, rather than actually present, distribution of values.
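The Rule 30 experiment described above runs in a few lines. In this sketch (with the offsets shrunk from the trillion mentioned above, to keep the run fast), the substring sampled via a far-downstream stretch of the center column has, within noise, the same statistics as the string itself, i.e. the \rho(\lambda) = \rho(\lambda|a,b) situation of ordinary, non-super, determinism:

```python
def rule30_center(n_steps):
    # Rule 30 as integer bit-twiddling: new cell = left XOR (center OR right).
    # The seed sits high enough that its light cone never reaches bit 0.
    state = 1 << (n_steps + 1)
    center = n_steps + 1
    out = []
    for _ in range(n_steps):
        out.append((state >> center) & 1)
        state = (state << 1) ^ (state | (state >> 1))
    return out

col = rule30_center(30_000)
data = col[:10_000]            # bits playing the role of hidden-variable values
settings = col[20_000:30_000]  # far-downstream bits playing the role of settings

# Read off bit n of `data` only when bit n of `settings` is 1.
sampled = [d for d, s in zip(data, settings) if s]

full_freq = sum(data) / len(data)
sampled_freq = sum(sampled) / len(sampled)
print(f"frequency of 1s, full string: {full_freq:.3f}, sampled: {sampled_freq:.3f}")
```

Both frequencies come out near 1/2: the sampling procedure, though generated by the very same deterministic process, introduces no bias.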

Andrei wrote to me:

>1. The polarisation of an EM wave depends on the specific way the electron accelerates.

>2. The only way an electron can accelerate is Lorentz force.

>3. The Lorentz force is given by the electric and magnetic field configuration at the locus of the emission.

There are no point-like electrons in classical EM: the reason is quite simple -- the energy stored in the field of a point particle is positive infinite and therefore must be compensated by an infinite negative bare mass for the electron. And a particle with a negative bare mass behaves very strangely: pre-acceleration and all that.

By the way, similar problems do seem to arise in QED, though the infinities are softened by quantum effects (they remain infinite nonetheless): Sakurai's book discusses this. QED is probably not mathematically consistent, at least not without the other parts of the Standard Model.

Andrei also wrote:

>Conclusion: From 1-5 it follows that the hidden variable, λ, depends on the detectors’ states. I cannot say in what way it depends, only that it does. But this is enough to put Bell's theorem to rest. Logic is beautiful, right?

No, you have no logic at all.

You see, there are no "hidden variables" in classical EM, and no hidden variables in your 5 premises! Logic cannot draw a non-trivial conclusion about some entity not mentioned in the premises. Therefore, by a basic theorem in logic, there is no possibility of logically getting from your premises to your conclusion. *Quod erat demonstrandum.*

And, by the way, your second premise is also false: classical EM allows non-electromagnetic forces all the time, like, say, forces holding static charges in place.

Andrei also wrote:

>Can you please present the calculation of the predictions of classical electromagnetism for a Bell test? How do you know the inequality is not violated?

Of course, I can: Bell already did that in the proof of his theorem. The whole point of his theorem is to show that any ordinary theory, such as *all* traditional classical theories, including EM, obeys the Bell inequality. If you do not get this, you do not understand the proof (nor the statement) of Bell's theorem.

The real debate we are having is what the term "superdeterminism" means. It is normally applied, as Sabine applies it, to some special sort of determinism going beyond old classical determinism. You are not using it that way.

This leads to confusion.

Jochen,

A superdeterministic theory is a theory where the hidden variable is not independent of the measurement settings. This is the definition used in the paper and the one I use as well.

"So no, that the measurement settings and the measured values are produced by the same deterministic process---and hence, are not 'independent' of one another in that sense---does not suffice for a superdeterministic theory"

Yes it does, according to the above correct definition.

"such a theory must have every possible way of making measurements biased in just the right way to yield the observed, rather than actually present, distribution of values"

No. That is the definition of a superdeterministic theory that is, in addition, true. It's a similar situation with non-locality: the fact that a theory is non-local is enough to make it immune to Bell's theorem, yet not enough for it to be true (to reproduce QM's predictions). I defend the claim that field theories are superdeterministic in the above, correctly defined sense. I am not prepared to defend the claim that superdeterminism is also true. I can provide evidence for this latter claim in the form of the theory of stochastic electrodynamics. Not rock-solid evidence, but evidence nevertheless.

PhysicistDave,

"There are no point-like electrons in classical EM: the reason is quite simple -- the energy stored in the field of a point particle is positive infinite and therefore must be compensated by an infinite negative bare mass for the electron. And, a particle with a negative bare mass behaves very strangely: pre-acceleration and all that.

By the way, similar problems do seem to arise in QED, though the infinities are softened by quantum effects (though still infinite): Sakurai's book discusses this. QED is probably not mathematically consistent, at least not without the other parts of the Standard Model."

I did not claim that the electrons are point-like; I don't know what electrons are like, and I am aware of the problems you mentioned. Still, this does not mean that the theory is useless, and, as you say, a similar problem exists in QED as well. I do not think this observation disproves my argument. What I intend to prove here is that there are some characteristics of field theories that make them superdeterministic in the sense defined in the paper.

"You see, there are no "hidden variables" in classical EM and no hidden variables in your 5 premises!"

The hidden variables are the polarizations of the emitted photons/EM waves. They are mentioned in premise 1. They are not hidden from the point of view of classical EM, sure, but they play that role in QM, where the state describes them as superpositions. What the argument shows is that the polarizations of these photons do depend on the detectors' settings when they are emitted. The polarizations will be measured later by the detectors.

"And, by the way, your second premise is also false: classical EM allows non-electromagnetic forces all the time, like, say, forces holding static charges in place."

I probably should have mentioned that I assume the system is described completely by classical EM. It's not intended to be a perfect description of what happens in the real world. Anyway, the only force that is not EM in origin would be gravity, and I think we can ignore it for the time being.

"Of course, I can: Bell already did that in the proof of his theorem. The whole point of his theorem is to show that any ordinary theory, such as all traditional classical theories such as EM, obey the Bell inequality."

This is not yet established. I'll await your answer first.

Andrei:

"A superdeterministic theory is a theory where the hidden variable is not independent of the measurement settings. This is the definition used in the paper and the one I use as well."

In general, then, these superdeterministic theories will make the same predictions as deterministic theories, and thus, be completely indistinguishable. The reason is that if there isn't any correlation between the hidden variables and the measurement settings---as in my example with Rule 30---you'll be able to sample the distribution of the hidden variables without bias (meaning they're not really 'hidden'), and obtain the same statistics as in a theory where measurement settings and hidden variables are 'independent' in your sense. Hence, most people use 'superdeterministic' to describe a theory which does introduce a systematic bias into the observed values---which is also the sort of theory needed to generate Bell inequality violation in a local realistic setting, since you need to have the observed statistics differ from the underlying hidden variable distribution.

Besides, even in your setting, I can easily produce measurement settings uncorrelated with the hidden variables, by probing causally disconnected patches; a difference in these patches can't have any mutual effect in a local theory.

So take again a 1D-cellular automaton as an example. We will have an initial condition given by an infinite bit string, which will then evolve according to the CA rule. In particular, any change in a cell in the initial conditions will only influence the neighboring cells in the next time-step, those next to those in the one afterwards, and so on.

So consider two different such initial conditions, where in one, the value of one bit at position 0 is flipped. Generally, after n time steps, a patch of 2n+1 cells centered around 0 may differ between the two evolutions of the CA. At time-step k, I sample the cell k steps away from 0 to decide whether to read the value of the k+1st cell. That cell will have the same value no matter what evolution we're in; but my measurement setting may differ. Hence, the two must be independent---there are valid configurations where I measure, or don't measure the value of the cell, without that having any effect on said value.

This isn't true on superdeterminism proper. In a sense, you can consider that superdeterminism to be a constraint on the initial state of the CA---say, only the first initial state is permitted, not the second, so there is a nontrivial correlation between measurement setting (or my choice of measurement) and its outcome, for instance such that in every case I choose to measure, I will see the outcome to be one, even though the hidden variable (value/color of the cell) may also be zero---I just never observe it to be. Only in that case do we actually have superdeterminism; everything else is just ordinary determinism.
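The light-cone claim in the last paragraphs can be checked directly (again with Rule 30, as a sketch): flip one bit of a random initial condition, evolve both copies for T steps, and every differing cell necessarily lies within T cells of the flip, so any cell probed outside that cone is independent of the change.

```python
import random

def step(cells):
    # One Rule 30 update, periodic boundary: new = left XOR (center OR right).
    n = len(cells)
    return [cells[i - 1] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

random.seed(2)
N, T = 401, 50          # ring size and number of time steps (T < N//2: no wraparound)
init_a = [random.randint(0, 1) for _ in range(N)]
init_b = list(init_a)
init_b[N // 2] ^= 1     # flip a single cell in the middle

a, b = init_a, init_b
for _ in range(T):
    a, b = step(a), step(b)

diffs = [i for i in range(N) if a[i] != b[i]]
spread = max((abs(i - N // 2) for i in diffs), default=0)
print(f"{len(diffs)} cells differ, all within {spread} <= {T} cells of the flip")
```

That `spread <= T` always holds is not an accident of the seed; it is the locality of the update rule, the CA analogue of "a difference in these patches can't have any mutual effect in a local theory."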

Folks, what Jochen says is similar to my issue with SD. It implies a divergence between information theory and QM. There would be different correlations, and thus more information. I see nothing but big problems with SD.

Andrei wrote to me:

> Anyway, the only force that is not EM in origin would be gravity, and I think we can ignore it for the time being.

I assume you mean the only other force *known to classical physics* was gravity? You know that there are two other forces, the strong and weak forces, right?

Anyway, that is not even true for classical physics: classical physicists knew that there were various sorts of contact forces, cohesive forces, etc.: they simply had no fundamental theory of those forces. (Of course, *we* know that many of those forces are of quantum electromagnetic origin, but they did not.)

Andrei also wrote:

>[Dave] "Of course, I can: Bell already did that in the proof of his theorem. The whole point of his theorem is to show that any ordinary theory, such as all traditional classical theories such as EM, obey the Bell inequality."

>[Andrei] This is not yet established. I'll await your answer first.

Already done: that *is* Bell's proof of his theorem. If you actually understand classical EM and Bell's proof, you really, truly are supposed to see that this is what he proved. If you do not see this, you either do not understand Bell's proof or classical EM (or both).

I am beginning to see that I assumed that you had an understanding of classical EM that you do not have, and maybe this is the source of our miscommunication. In particular, I should have addressed your following point:

>This is true, but you should be careful how you implement counterfactuality. Say you want to change the setting of one detector, far away from the source. In order to do that you need to change the positions of some charged particles. That change should be accompanied by a corresponding change of electric and magnetic fields at the location of the source so that CE equations are satisfied. That means a change of the hidden variable. So, independence fails.

No.

In classical EM, if you change the position or motion of charges over here ("at the detector"), then *nothing* need change at the same time or any previous time over there (in your words, "at the location of the source").

Of course, after a time delay corresponding to the speed of light, alteration in charges over here will, indeed, *eventually* produce changes in E and B fields over there.

This is a mathematical theorem, derivable from Maxwell's equations, a theorem in fact that you are supposed to know like the back of your hand if you ever took a serious course on classical EM.
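The retardation point can be illustrated with a toy finite-difference model (a generic 1D scalar wave equation, not a Maxwell solve; all numbers are illustrative): a perturbation switched on at x = 0 leaves the field at a distant probe point exactly zero until roughly the light travel time has elapsed.

```python
# 1D wave equation u_tt = c^2 u_xx, leapfrog scheme at Courant number 1,
# so the numerical signal front moves exactly one cell per time step.
N = 400
c, dx = 1.0, 1.0
dt = dx / c
u_prev = [0.0] * N
u = [0.0] * N
probe = 200                        # probe sits 200 cells from the perturbed end
first_move = None

for n in range(1, 300):
    u_next = [0.0] * N
    for i in range(1, N - 1):
        u_next[i] = 2 * u[i] - u_prev[i] \
                    + (c * dt / dx) ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u_next[0] = 1.0                # "moving the charges" at x = 0 from step 1 on
    u_prev, u = u, u_next
    if first_move is None and abs(u[probe]) > 1e-12:
        first_move = n

print(f"field at r = {probe * dx} first changes at step {first_move};"
      f" light travel time is {probe} steps")
```

Before the front arrives, the field at the probe is identically zero, i.e. nothing at the distant point needs to change at the same or any earlier time.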

Have you taken such a course, at the level of, say, Dave Jackson's *Classical Electrodynamics*, that shows *in detail* how to calculate vector and scalar potentials (and thereby E and B fields) from the source charges and source currents? This is most easily done in the "Lorentz gauge" and is covered in detail in Sections 6.4-6.6 of the second edition of Jackson.

So, no, you are simply mistaken (i.e., there is a mathematical proof of your error) in your belief that, in classical EM, when you change something over here, then E and B fields must change over there at the same or an earlier time.

And, by the way, E and B fields are not hidden variables in classical EM: they are the not-at-all-hidden variables of the theory: you are using extremely weird terminology in calling them "hidden variables."

What happens in classical EM is just plain old classical determinism: there is no need or reason at all to call it "superdeterminism."

It's hard to respond to your posts because it is clear that you are very confused, but the rest of us do not know what your background is. E.g., are you just using the word "superdeterminism" in the way that almost everyone else uses the word "determinism"? Or, as I suspect, do you have some very strange misconceptions about classical EM? Let us know your background and level of knowledge and it may be easier to help you clear up your confusion.

"In general, then, these superdeterministic theories will make the same predictions as deterministic theories, and thus, be completely indistinguishable."

Superdeterministic theories are a subclass of deterministic ones. They will be different from non-superdeterministic deterministic theories by the presence of long-range forces/fields. Classical EM, GR, fluid mechanics, Newtonian gravity, electrostatics are superdeterministic. Rigid-body mechanics with contact forces only is an example of a non-superdeterministic deterministic theory. It is possible to change the state of a system while leaving other systems unchanged in rigid-body mechanics. You can move some billiard balls here without any effect on some billiard balls there. But you cannot move Jupiter without a corresponding change in Earth's orbit. You cannot move a magnet here without any effect on an iron bar there. This is the difference.

"Besides, even in your setting, I can easily produce measurement settings uncorrelated with the hidden variables, by probing causally disconnected patches; a difference in these patches can't have any mutual effect in a local theory."

In my example I assume a closed universe (no input from outside) completely described by classical EM. In such a universe no causally disconnected patches exist. I doubt that they exist in our universe either.

"So take again a 1D-cellular automaton as an example...."

The model you present is not superdeterministic. There are no long-range forces/fields there, so it is similar to rigid-body mechanics with contact forces only.

In a superdeterministic theory, not every initial state you can imagine is valid. For example, in classical EM the electric field at a certain distance from a charge must have a specific value. A universe containing a single charge where the field at location r is different from what Coulomb's law requires is an invalid initial state. You need to make sure that the positions/momenta of charges and E-M fields are correlated. You can freely choose the initial state but it must be a valid one.

"In a sense, you can consider that superdeterminism to be a constraint on the initial state of the CA"

That's fine-tuning. You may "transform" a non-superdeterministic theory into a superdeterministic one in this way, but it's a bad, useless example as it discredits this type of theory.

Lawrence Crowell,

Please read my answer to Jochen! I think I addressed his (and your) objections.

Andrei,

Can we generally say that all Lagrangian mechanics are superdeterministic - especially assuming that the solution was chosen according to the least action principle?

To start, the classical electron has a finite radius. Using the Coulomb potential it is not hard to show this. However, the quantum electron appears pointlike with radial symmetry to within 5 orders of magnitude of the Planck length.

If SD gives the same results as standard QM, it would mean these inner dynamics or hidden variables are nonlocal. This puts us right back in line with Bell's theorem.

PhysicistDave,

"I assume you mean the only other force known to classical physics was gravity? You know that there are two other forces, the strong and weak forces, right?"

I was responding to your comment about "holding static charges in place". I didn't know it's possible to use the color force for that.

"If you actually understand classical EM and Bell's proof, you really, truly are supposed to see that this is what he proved. If you do not see this, you either do not understand Bell's proof or classical EM (or both)."

We shall see.

"No.

In classical EM, if you change the position or motion of charges over here ("at the detector"), then nothing need change at the same time or any previous time over there (in your words, "at the location of the source").

Of course, after a time delay corresponding to the speed of light, alteration in charges over here will, indeed, eventually produce changes in E and B fields over there."

Your above statement shows that, contrary to my advice, you were not "careful how you implement counterfactuality". Let me explain.

A counterfactual universe is a very close "copy" of ours where everything except the state of the detector is the same. The problem is that this universe must also be consistent with the laws of EM. You cannot have that universe evolve like ours up to the moment just before detection and then suddenly "jump" to the desired new state. As required by any deterministic theory, in order to change the present you also need to change the past. So the past of the detector (and much more) needs to be different. While it is true that the source does not experience the fields associated with the instantaneous state of the detector, but those corresponding to the detector's past state, those past fields must still have been different. So, in our counterfactual universe, the state of the source prior to emission has to be different.

"So, no, you are simply mistaken (i.e., there is a mathematical proof of your error) in your belief that, in classical EM, when you change something over here, then E and B fields must change over there at the same or an earlier time."

See above.

"And, by the way, E and B fields are not hidden variable in classical EM: they are the not-at-all-hidden variables of the theory: you are using extremely weird terminology in calling them "hidden variables.""

In my earlier post:

"The hidden variables are the polarizations of the emitted photons/EM waves. They are mentioned in premise 1. They are not hidden from the point of view of classical EM, sure, but they play that role in QM"

So I don't see your point. In my argument classical EM plays the role of a hidden variable theory that attempts to reproduce QM. In QM the entangled photons do not have a well-defined polarization until detected. In classical EM, the EM-waves have one.

"It's hard to respond to your posts because it is clear that you are very confused"

Paying a little more attention to what I write might add some clarity, would you agree?

"E.g., are you just using the word "superdeterminism" in the way that almost everyone else uses the word "determinism"?"

A superdeterministic theory is a theory where the hidden variable is not independent of the measurement settings. This is the definition used in the paper and the one I use as well.

"Or, as I suspect, do you have some very strange misconceptions about classical EM?"

I don't think I have, but we shall see.

"Let us know your background and level of knowledge and it may be easier to help you clear up your confusion."

I am a chemist. I'm doing just fine, but thanks!

Andrei wrote to me:

>[Dave] "If you actually understand classical EM and Bell's proof, you really, truly are supposed to see that this is what he proved. If you do not see this, you either do not understand Bell's proof or classical EM (or both)."

>[Andrei] We shall see.

Well, I think I shall leave it at that: if you really think that Bell's theorem does not apply to classical EM, all you need to do is come up with a classical EM experiment that violates the Bell inequality.

No one ever has, and I know that no one ever will, because I do understand Bell's theorem.

You think I am wrong.

Fine: produce the counter-example that shows I am wrong.

If you do so, I can promise you that you will indeed be quite famous: you'll be the guy who proved that a classical, non-quantum physics theory violates Bell's inequality.

Dave
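Dave's challenge can be made concrete with a short simulation (my own toy, not part of the thread): a local hidden-variable model in which each photon pair carries a shared polarization angle, drawn independently of the detector settings. Whatever settings are chosen, the estimated CHSH correlator stays at the classical bound of 2 or below, while quantum mechanics reaches 2*sqrt(2).

```python
import math
import random

def chsh_local_model(n_trials=200_000, seed=1):
    """Estimate the CHSH quantity S for a toy local hidden-variable model:
    each pair carries a shared angle lam (drawn independently of the
    settings), and each side deterministically outputs
    sign(cos(2*(setting - lam))). Bell: any such model has |S| <= 2."""
    rng = random.Random(seed)
    a, a2 = 0.0, math.pi / 4               # Alice's two settings
    b, b2 = math.pi / 8, 3 * math.pi / 8   # Bob's two settings

    def corr(x, y):
        total = 0
        for _ in range(n_trials):
            lam = rng.uniform(0.0, math.pi)  # hidden variable, setting-independent
            A = 1 if math.cos(2 * (x - lam)) >= 0 else -1
            B = 1 if math.cos(2 * (y - lam)) >= 0 else -1
            total += A * B
        return total / n_trials

    # CHSH combination; for these settings this model sits right at S = 2.
    return corr(a, b) - corr(a, b2) + corr(a2, b) + corr(a2, b2)

S = chsh_local_model()
assert 1.9 < S < 2.05   # classical bound, up to Monte Carlo noise
print(S, "vs quantum maximum", 2 * math.sqrt(2))
```

Producing a classical EM configuration whose statistics push S above 2 is exactly the counter-example Dave is asking for.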

Jarek Duda,

"Can we generally say that all Lagrangian mechanics are superdeterministic - especially assuming that the solution was chosen according to the least action principle?"

Superdeterministic theories are a subclass of deterministic ones. They will be different from non-superdeterministic deterministic theories by the presence of long-range forces/fields. Classical EM, GR, fluid mechanics, Newtonian gravity, electrostatics are superdeterministic. Rigid-body mechanics with contact forces only is an example of non-superdeterministic deterministic theory. It is possible to change the state of a system while leaving other systems unchanged in rigid-body mechanics. You can move some billiard balls here without any effect on some billiard balls there. But you cannot move Jupiter without a corresponding change in Earth's orbit. You cannot move a magnet here without any effect on an iron bar there. This is the difference.

As far as I know you can apply the least action principle to all these theories, superdeterministic and non-superdeterministic alike, so, my answer to your question would be no.

Lawrence Crowell,

"To start, the classical electron has a finite radius. Using the Coulomb potential it is not hard to show this. However, the quantum electron appears pointlike with radial symmetry to within 5 orders of magnitude of the Planck length."

I think that you make the mistake of generalizing a certain classical model to classical physics in general. Do you have any proof that any classical theory should describe the electron as a sphere of charge?

"If SD gives the same results as standard QM, it would mean these inner dynamics or hidden variables are nonlocal. This puts us right back in line with Bell's theorem."

I don't see the relevance of Bell's theorem here. The nature of the electron is a completely different discussion.

Andrei:

"They will be different from non-superdeterministic deterministic theories by the presence of long-range forces/fields."

There's no reason at all for a superdeterministic theory to contain long range interactions. The chief characteristic of a superdeterministic theory is that measurement directions can't be chosen independently of hidden parameter values; that can be true (or false) equally well in theories with long-range interactions and those without.

"In such a universe no causally disconnected patches exist. I doubt that they exist in our universe either."

This is another odd requirement---whether a theory is superdeterministic or not would then depend on the precise details of the cosmological model that describes our universe. No such requirement is in place for what people usually take 'superdeterminism' to mean.

"The model you present is not superdeterministic. There are no long range forces/fields there, so it is similar with rigid-body mechanics with contact forces only."

This is a complete red herring. Any change I make in a cellular automaton state will eventually propagate to any region of the automaton, by virtue of nearest-neighbor interactions. That's exactly how it is in electromagnetism, as well: moving a charge here has no instantaneous effect on charges over there, but rather, the effects propagate at light speed, along the light cone. That's how it is with every local theory.
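The propagation point is easy to check explicitly. Here is a minimal sketch (mine, using the elementary Rule 90 automaton as a stand-in for Jochen's example) showing that flipping a single cell can only influence cells inside a discrete light cone:

```python
def step(cells):
    """Rule 90: each cell becomes the XOR of its two neighbors
    (periodic boundary), a purely nearest-neighbor update rule."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def evolve(cells, steps):
    for _ in range(steps):
        cells = step(cells)
    return cells

n = 101
base = [0] * n            # one initial state
perturbed = [0] * n
perturbed[50] = 1         # the same state with one cell flipped "over here"

for t in range(1, 30):
    a, b = evolve(base, t), evolve(perturbed, t)
    diff = [i for i in range(n) if a[i] != b[i]]
    # The flip does have consequences, but after t steps they can lie at
    # most t cells away: a discrete light cone, as in any local theory.
    assert diff and all(abs(i - 50) <= t for i in diff)
```

The same bookkeeping applies to a charge wiggled in classical EM: effects exist, but only inside the light cone.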

"For example in classical EM the electric field at a certain distance from a charge must have a specific value."

In an equilibrium configuration, maybe. But you could take one initial state, wiggle a charge around, and have that changed state together with an expanding electromagnetic wave-front as an initial state which will lead to different detector settings, yet identical hidden variables at a given point (because the wave-front has not yet propagated far enough to influence them). This was the point of my CA-example.

And anyway, the point remains that you're simply using a different definition of 'superdeterministic' than everybody else. As Sabine and her co-author put it in their paper:

"The most distinctive feature of superdeterministic theories is that they violate Statistical Independence. As it is typically expressed, this means that the probability distribution of the hidden variables, ρ(λ), is not independent of the detector settings."

This will not in general be the case in your superdeterministic theories, where the hidden variable distribution and the detector settings may easily be statistically independent; that is, no matter what you set your detector to, you'll sample from the same probability distribution for the hidden variables. And without this being the case, you can't, for instance, construct classical theories with definite values for all observable parameters and local interactions that violate a Bell inequality. With a true superdeterministic theory---that is, a theory on which the probability distribution of the measurement outcomes is systematically biased, given the measurement directions---you can; and for that, it doesn't matter at all if that theory has long-range or nearest-neighbor interactions.
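The distinction can be made concrete with a toy sketch (my construction, not from the paper): if the distribution of the predetermined outcomes is allowed to depend on the settings (the violation of Statistical Independence), a local, deterministic bookkeeping reproduces the quantum singlet correlation E(a, b) = -cos(a - b), which no local model with a setting-independent ρ(λ) can match at all angles.

```python
import math
import random

def sample_pair(a, b, rng):
    """Predetermined outcomes (A, B) playing the role of the hidden
    variable; their distribution depends on the settings (a, b), which
    is exactly a violation of Statistical Independence."""
    p_same = math.sin((a - b) / 2) ** 2   # QM probability that A == B
    A = rng.choice([-1, 1])
    B = A if rng.random() < p_same else -A
    return A, B

def correlation(a, b, n=200_000, seed=3):
    rng = random.Random(seed)
    return sum(A * B for A, B in (sample_pair(a, b, rng) for _ in range(n))) / n

# Reproduces E(a, b) = -cos(a - b) at every angle pair tried.
for a, b in [(0.0, 0.0), (0.0, math.pi / 3), (0.0, math.pi / 2)]:
    assert abs(correlation(a, b) + math.cos(a - b)) < 0.01
```

Of course this toy simply bakes the quantum statistics into the setting-dependent distribution; the point is only that dropping Statistical Independence is what makes this possible, regardless of whether the underlying dynamics has long-range or nearest-neighbor interactions.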

Lawrence Crowell wrote:

>To start, the classical electron has a finite radius. Using the Coulomb potential it is not hard to show this.

Well... not really.

Have you read Fritz Rohrlich's famous Classical Charged Particles? There never really has been a successful classical theory of the electron. Yes, I know about the so-called "classical radius of the electron," of course, but that is just a formal calculation: it is not part of a coherent theory.

The problem is that if you try to make the electron an extended body, so that the energy in the Coulomb field does not exceed the rest mass of the electron, then you get into the familiar problem of extended bodies in special relativity. And, quite famously, if you let the electron be a point particle, then the energy in the Coulomb field exceeds the rest mass of the electron so that the electron itself has a negative bare mass. This leads to a third-order differential equation which has the paradoxical feature that electrons seem to start accelerating before a force is applied. The Wikipedia summary is not bad if you can't get hold of a copy of Rohrlich's book. Pre-acceleration of course violates causality, which is not good.

The real problem is that the negative bare mass of the electron leads to a mechanical instability: the harder you push on it the more it moves in the wrong direction. The "pre-acceleration" is really a kludge to prevent exponentially runaway solutions due to this mechanical instability.

The interesting question is how this plays out in QED: the divergence of the electron bare mass is softened in QED by quantum effects (see Sakurai's book), but it still becomes infinitely negative.

This is one of the problems of QFT that is swept under the rug but that probably deserves further investigation.

Dave

Nice read. Keep on the journey.

Sabine,

In Bell's paper:

On the Einstein Podolsky Rosen Paradox

J. S. Bell, Physics Vol. 1, No. 3, pp. 195-200

DOI:https://doi.org/10.1103/PhysicsPhysiqueFizika.1.195

we read at the beginning of page 196:

"The vital assumption [2] is that the result B for particle 2 does not depend on the setting a of the magnet for particle 1, nor A on b"

This is the so-called independence assumption, or "free-will" assumption. Superdeterminism is the denial of the independence assumption, in other words:

A deterministic theory that, when applied to a Bell-type experiment, implies that the hidden variable corresponding to particle A is not independent of detector B's setting, and that the hidden variable corresponding to particle B is not independent of detector A's setting, is a superdeterministic theory.

I think that this definition is the correct one, as the concept of superdeterminism was invented by Bell exactly for the reason I mentioned. It is also an exact definition: one can evaluate quite easily, by inspecting the theory's formalism, whether it qualifies or not as a superdeterministic one. It's also a good definition because it allows an easy rebuttal of the usual arguments against superdeterminism, such as "superdeterminism requires fine-tuning at the Big Bang" or "superdeterminism implies an abandonment of science", etc.

The best way, in my opinion, to overcome the negative reaction towards superdeterminism is to present an example of such a theory that nobody can accuse of being non-scientific, or fine-tuned. The best example is the classical theory of electromagnetism (Maxwell + Newton's laws + Lorentz force law). Due to the long-range interactions between charged particles, this theory does not allow one to write down the equations describing the hidden variables without a direct mention of the detectors' settings (in the form of the electron/quark distribution that defines those settings). I can also prove that the typical argument, that the fields can be made arbitrarily weak by increasing the distance between the source and detectors, does not work. I would very much like to know your opinion regarding my claim (that classical electromagnetism is a SD theory).

Sabine,

I have another important thing to add. It should be stressed that not only is superdeterminism a respectable scientific option, but it is inevitable if locality is to be maintained. I think that one should directly attack the non-hidden-variable views, which can be proven, by a trivial use of the EPR reality criterion, to be either non-local or non-fundamental (statistical approximations). I'll repost my entire argument here:

Let's take a look at EPR's reality criterion:

"If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity."

Let's formulate the argument in the context of an EPR-Bohm experiment with spin-1/2 particles, where the measurements are performed in such a way that a light signal cannot get from A to B:

1. It is possible to predict with certainty the spin of particle B by measuring particle A (QM prediction).

2. The measurement of particle A does not disturb particle B (locality).

3. From 1 and 2 it follows that the state of particle B after A is measured is the same as the one before A is measured (definition of the word "disturb").

4. After A is measured, B is in a state of defined spin (QM prediction).

5. From 3 and 4 it follows that B was in a state of defined spin all along.

6. The spin of A is always found to be opposite from the spin of B (QM prediction).

7. From 5 and 6 it follows that A was in a state of defined spin all along.

Conclusion: QM + locality implies that the true state of A and B was a state of defined spin. The superposed, entangled state is a consequence of our lack of knowledge in regard to the true state. So, QM is either an incomplete (statistical) description of a local deterministic hidden variable theory or it is non-local.

Let's take a look at Bell's theorem now. What is less known is that Bell's theorem was not a replacement of EPR, but a refinement of it. The purpose of Bell's theorem is to choose between the two remaining options after EPR (local realism and non-locality). If one agrees that Bell's theorem successfully ruled out local realism, the conclusion would be that physics is non-local. But most physicists forgot EPR and put non-realism back on the table, so they arrive at the wrong conclusion that realism has to go.
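The equal-settings part of the argument above (steps 5-7) is trivially reproduced by a local model with predetermined spins; here is a minimal sketch (my own) of that bookkeeping:

```python
import random

def make_pair(rng):
    """Hidden variable fixed at emission: each pair leaves the source
    with predetermined, opposite spin values (step 5 above)."""
    spin_A = rng.choice([-1, 1])
    return spin_A, -spin_A

rng = random.Random(0)
for _ in range(10_000):
    A, B = make_pair(rng)
    assert A == -B   # perfect anti-correlation (step 6), purely locally
```

This toy only covers measurements along the same axis; Bell's theorem, as discussed above, is precisely the statement that no such predetermined-value model with setting-independent hidden variables can also match the quantum statistics at all pairs of unequal settings.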

Will is never free; it is circumscribed by the program. Even if I will, and do what I will, the human-program will not let me take wings or fly. The only thing that I can do is to create another hardware-program called an airplane which will carry me through the air across the seas. Choice is never free; it is also circumscribed by the human-program.

A program is always constraining because it is limited in itself; it can function only within a set pattern. This implies that measurement is limited. Measurement is not only measuring with a ruler or a weighing machine but also means interpreting and sieving input. I think Professor, we need to thoroughly define measurement first.

...can we say anything which is the product of a program is a measure?

ReplyDeleteHave just read your paper. Fascinating. I can't pretend to any serious knowledge of physical theories involved, but as an (ex-)statistician I have long argued with my colleagues (and others) that in a truly deterministic world, the assumption of independence of statistical outcomes cannot be justified and hence e.g. applications of the Large Numbers Theorem becomes philosophically problematic. Call it an unreasonable efectiveness of statistics, if you like. :-)

Well, at least now I know where my email and attached file about philosophy, mathematics and QM to you was placed! I did, however, read your book, which I enjoyed.

Reading through the paper, this sentence, when discussing objections to Superdeterminism, struck me: "Superdeterminism would fatally undermine the notion of science as an objective pursuit." If you see this as an unworthy position, it is surprising, since over the years (including in your book "Lost in Math") you seemed to be strongly against the notion that science is a pursuit of objectivity, i.e., the Truth.

I apologize beforehand, because I have not completed reading your paper. I only recently started reading 't Hooft's book, so my knowledge of 'superdeterminism' is presently on shaky terrain. Now, in an interview in Physics Today (July 2017), 't Hooft says: "What is the reality described by quantum theories? I claim that we can attribute the fact that our predictions come with probability distributions to the fact that not all relevant data for the predictions are known to us, in particular important features of the initial state." My question is this: How can an experimenter ever know "all relevant data," whether it be at the microscopic or macroscopic level? What determines the specification of "all" and "relevant" data? He also says: "Since the real world appears to be described by a local quantum field theory, it remains to be explained how locality can be restored." Even though his theory appears to violate locality, he soldiers on with it. He says: "but, I am not worried by this apparent contradiction." Is that not a form of cognitive bias? Is not the discomfort with the probabilistic aspects of quantum mechanics a cognitive bias? Quantum mechanics (over 100 years old) has withstood every experimental challenge put to it; why is there a lingering discomfort with its foundations?

Very interesting.

Paper Copyedit:

In your arxiv PDF, 3.1 par 2, line 4: talking about "locality loophole":

"An extreme version of his has been presented" ->

"An extreme version of this has been presented"

3.1 par 3: final line is missing a period after "postulates".

3.1 par 4 (page 9):

"generalizes the tautologially true fact" ->

"generalizes the tautologically true fact"

4.1 par 3 (page 10): Missing a period after "as to this extent".

4.2 par 4 (page 12):

"to find out whether theory is or is not predictive" ->

"to find out whether a theory is or is not predictive"

4.2 (page 12):

"i.e., chose very precisely" ->

"i.e., choose very precisely"

4.2 (page 12):

"for an example how this" ->

"for an example of how this"

4.3 (page 13):

"a video game to chose detector" ->

"a video game to choose detector"

4.3 (page 13):

"(the possibility that [...] the particle properties."" ->

The final period and double quote should be closing paren and period. i.e. "properties)."

4.4 page 14 quote from Mateus Araujo:

"we couldnt hope" ->

"we couldn't hope"

5.1 page 15 footnote [3] is missing a period.

6, page 19:

"non-essential changes, ie, reduce noise"

Elsewhere (page 4, twice on page 12) you use "i.e.".

Sections etc:

1 Introduction

2 Why?

3 What?

. 1 Psi-epistemic

. 2 Deterministic

. 3 Violation of Statistical Independence

. 4 Locality

. . 1 Output Independence

. . 2 Parameter Independence

3.1 Retrocausality and Future Input Dependence

3.2 Disambiguation

4 Common Objections to Superdeterminism

4.1 Free Will and Free Choice

4.2 The Conspiracy Argument

4.3 The Cosmic Bell Test and the BIG Bell Test

4.4 The Tobacco Company Syndrome

5 How

5.1 Invariant Set Theory

5.2 Cellular Automata

5.3 Future-bounded Path Integrals

6 Experimental Test

7 Discussion

8 Conclusion

The software used to create the PDF takes care of sections, sub-sections, numbering thereof, etc.

But for a human reader, numbered points within a (sub-)section can create confusion. Maybe use A, B, C, etc instead?

I believe the hidden variables of QM are spin, mass, and charge.

(These physical properties have been slowly stripped from particles over the years.)

In this model, the wave function represents the interaction of particles as coupled simple harmonic oscillators.

This picture is currently blurred because one particle is usually represented by a potential in the wave equation.

This model is classical and deterministic. There are no collapsing wave functions and no quantum measurement problem.

Sincerely, Greg

The breadcrumbs from E8 are indeed beautiful. Are they independent of interpretation? I think it requires another level of abstraction.

And Dr H. swings for the fences. Is this theory empirically falsifiable? Does it tell us anything about nature that can be verified empirically?

How about you read at least the abstract of the paper.

"Most importantly, we will discuss how it may be possible to test this hypothesis in an (almost) model independent way."

So, maybe.

Steven Evans,

A superdeterministic theory is as falsifiable as any deterministic theory. In fact, I can show you that all field theories (like GR or classical electromagnetism) are superdeterministic. You can test them against known atomic properties like atomic spectra, etc.

Sabine, at the end of your arxiv paper you mention quantum technologies. One topic that has recently made headlines is the quest to establish "supremacy" of quantum computers over classical computers. Can you tell us whether superdeterminism has something to say on the feasibility of large scale quantum computers?

Pascal,

A most excellent question. Alas, that's one of the points Tim and I did not agree on. I think the answer is no.

This is the funniest post I've ever read by Sabine on Backreaction. Very entertaining to read! Now a serious thought - if everything is predetermined could this fact underlie the apparent ability of some people to glimpse the future to some extent? Perhaps human/animal brains can probe a little bit into the already determined future. But then, I suppose, the 'probe' would itself be predetermined. Many anecdotal stories come to mind, like President Abraham Lincoln dreaming of the aftermath of his assassination by John Wilkes Booth. And even more mundane experiences like someone having a hunch on a scratch ticket, or a feeling that a slot machine will have a win when played. Personally, I've had such experiences, where scanning the array of scratch tickets at a store one just calls out to you, and sure enough it's a winner. Likewise with slot machines. But these experiences are so infrequent that it wouldn't be wise to quit your day job in the expectation of living off gambling wins.

David,

Recently I was thinking about experiences similar to yours, in relation to the Jungian notion of synchronicity. I wondered to myself whether such events might be examples of quantum entanglement.

FrankX,

DeleteI’ve been sort of laying low with my brain frozen, as our region is having a stretch of single digit lows: -1 to +2 Fahrenheit (high negative teens C.), and well below freezing highs, as low as 17 F. (-8.3 C.). But mercifully the howling winds subsided and the exterior 1 degree F. temp didn’t penetrate the walls as much this morning. Indeed, I’ve had experiences that could be categorized as Jungian synchronicities. For example, my twin brother and I stopped at a mall enroute to a casino to buy coffee and I gazed on the license plate number of the car parked in front of us. The first three digits read 757. To a slot player such a number sequence is like manna from heaven, as multiple 7’s on the payline is always a win, and a multiplier of 2, 3, or 5 is even better. And, would you know it, we got to the casino and on a machine that featured 5X multiplier symbols I hit the 757 sequence, netting 600 dollars. A win like that is rare, by the way, otherwise casinos wouldn’t stay in business.

Truly improbable synchronicities, as recorded by Jung (my numerical coincidence was likely the result of chance), have always intrigued me, and like you, I thought there could be a quantum connection to these experiences. Such events, taken at face value, entail correlations across space and time and are thus intrinsically non-local. Superdeterminism would keep things local. But, I confess to an emotional bias towards the probabilistic underpinning of the Copenhagen school of thought. I simply can't discard my cherished belief in free will, as unscientific as that posture is. I feel repelled by the notion that reality is a rigid unfolding of events already encoded somehow in a Block Universe, which I assume is the same thing as Superdeterminism (I'll probably catch flak here from experts, as perhaps there's some subtle, or major, difference between them).

Years ago I dreamed up a hidden variable model to explain certain aspects of the quantum domain, which I didn’t think were adequately addressed in the standard treatment. To be sure the model is totally amateur grade, and would likely be shot down with a single burst from a quantum expert’s flak gun. I submitted multiple rewritten versions of it to the Gravity Research Foundation in the last decade, or so. The most recent submission (2018) excluded mention of hidden variables in the abstract to avoid instant rejection. Among other features it has a built-in mechanism for non-local correlations, though that mechanism poses some difficulties at the observational level.

On the plus side of the ledger the model may be testable with relatively modest laboratory experiments involving mechanical, or electromotive, acceleration of Bose condensates. Such experiments were done by the Austrian Research Center in the 2003-2006 time period, and in hundreds of runs they detected tiny acceleration signals (0.03 g) with modest angular acceleration of their spun-up niobium ring of around 8 g's at the ring's periphery. They were attempting to confirm the existence of the gravitomagnetic London moment. My model would have a different origin for these signals, and some time ago I'd planned to solicit the assistance of a professional lab to run experiments on a niobium rod immersed in liquid helium. Never got around to it, though.

I have never given superdeterminism much serious thought. The problem is this adjusts the fundamental meaning of quantum entropy. If hidden variables are assumed to be not independent of the measurement settings, this means they can be local, and further, quantum states are no longer an effective counting of accessible quantum information, as in Holevo's theorem. I will however try to read this paper, which has now taken up position as another open tab with a physics preprint or reprint. It may though be a while before I get to it in a serious way.

There is another quantum path not taken; that is the quantum mechanics of Jordan and Wigner. This is one that employs the Jordan product and lends itself to the Jordan matrix J^3(O) and the Freudenthal determinant. This was published in 1935, at a time when the politics of Europe was getting crazy. Wigner shortly after left Hungary, while Jordan professed loyalty to the Nazi party and worked on von Braun's rocket program during World War II. The umbrage toward Jordan's passion for Nazism, and the fact that Wigner turned his attention to other matters, meant this approach to QM languished. My interest here is somewhat multifold: one part is the role of exceptional groups in QM, and the other is the relationship between determinants and permanents in how quantum computation fits within NP problems.

I have the conjecture that the measurement problem is not solvable. There simply can't exist a solution to this problem in a way that is consistent with QM. I have outlined this in a way with the idea that a measurement of a quantum system is ultimately a process whereby a quantum system encodes quantum states with its quantum states. This then leads to a Gödel type of incompleteness. More formally, in the little bit that I have done, the occurrence or outcome of eigenvalues is a sort of Diophantine computation by a measurement apparatus, as a large quantum computer of quantum states. This then leads to the incompleteness of any method of such solutions, as found by Matiyasevich based on groundwork laid by Post, Robinson et al. This type of mathematics also connects with Shimura varieties and some interesting structure that enters into the Langlands conjectures. This would then lead to the physical conclusion that a quantum outcome happens for no causal reason at all.

I will try to give this a reading. It might illuminate either how to complete my conjecture above or tell me why it is wrong. The measurement problem is in some ways a big outstanding question, but I tend to think the answer may be the opposite of there being any sort of causal or deterministic structure underlying it.

I would be very interested in reading something more about your ideas regarding quantum mechanics and Diophantine incompleteness. Last year I proposed something that seems related, in an article in Foundations of Physics: https://link.springer.com/article/10.1007/s10701-018-0221-9

Essentially, I map measurement to Lawvere's fixed-point theorem, which can be seen as elucidating the structure behind Gödelian incompleteness, the undecidability of the halting problem, Russell's paradox and the like, to show that there will always be measurement results that are not logically implied by full knowledge of the state of the system.

Incidentally, I've also done some thinking on the exceptional group approach to quantum mechanics, in particular as related to the old Günaydin-Gürsey approach to the strong interactions, and related work, such as that by Geoffrey Dixon and Cohl Furey. I think it's intriguing that these structures also appear in the description of few-particle entanglement (https://iopscience.iop.org/article/10.1088/0305-4470/36/30/309).

But this sort of thing is really a bit beyond me.

You have the jump on me with respect to incompleteness of QM in measurement. I am a bit like Truman Capote, who confided to an inmate he had been taking notes on, "I haven't written a thing."

The work of Dixon, Furey and others suggests a duality between gauge symmetries and entanglement. The quotient-space geometry with entanglement is, I think, dual to "modulo gauge redundancies."

I will look at your Foundations of Physics paper.

I read your preprint of the Found. Phys. paper at https://arxiv.org/abs/1805.10668v2 to section III. The use of Chaitin's halting probability is something I have pondered. We might think of there being N = 2^n binary strings of length n. These are then Turing machine binary codes, and as N becomes very large the measure of nonhalting codes grows. However, with quantum mechanics we might have to consider entanglement with a binary code and its complement.
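The counting of halting versus nonhalting codes among the N = 2^n binary strings can be made concrete with a toy model. The sketch below is emphatically not a genuine Turing-machine encoding: the four-instruction language, the step cap, and the function names are all invented here for illustration; it merely enumerates the length-n codes and classifies each one.

```python
import itertools

def runs_to_halt(bits, max_steps=200):
    """Toy interpreter: read bit pairs as 00=inc, 01=dec,
    10=jump to start if counter > 0, 11=halt. Running off the
    end also counts as halting; hitting the step cap does not."""
    prog = [tuple(bits[i:i + 2]) for i in range(0, len(bits) - 1, 2)]
    pc, counter = 0, 0
    for _ in range(max_steps):
        if pc >= len(prog):
            return True               # ran off the end: halt
        op = prog[pc]
        if op == (1, 1):
            return True               # explicit halt instruction
        if op == (0, 0):
            counter += 1
        elif op == (0, 1):
            counter = max(0, counter - 1)
        elif op == (1, 0) and counter > 0:
            pc = -1                   # loop back to the start
        pc += 1
    return False                      # step cap reached: call it nonhalting

for n in (4, 8, 12):
    progs = list(itertools.product((0, 1), repeat=n))
    frac = sum(runs_to_halt(p) for p in progs) / len(progs)
    print(f"length {n}: halting fraction ~ {frac:.3f}")
```

Whether the nonhalting measure grows with n in this toy is a property of the toy, not evidence about real machine encodings.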

The concept of the epistemic horizon is interesting. The set of all possible Turing machines or binary von Neumann machines is not accessible to any possible universal Turing machine (UTM) for evaluating halting status. However, there are some interesting work-arounds, but with limitations. The UTM has to emulate or encode the states of any possible TM, which, if that TM does not halt, could take infinitely long. One idea then is to have the TM change state in 1 second, then change in the next ½ second, then in the next ¼ second, and so forth. That way a TM, or an emulation of a TM, could be accomplished in 2 seconds. The problem is that the frequency of change increases, and “in principle” that system would become a black hole before ever finishing. Of course, realistically it would fly apart. But the black hole is interesting, for if one has some observer/UTM enter a black hole, the mathematical solution, or “eternal black hole,” has all possible null rays from I^{-∞}, which form a Cauchy sequence or set asymptotic to I^{+∞}, piling up at the interior horizon r_-. This is because I^{+∞} is continuous with r_-. This then forms a sort of Malament-Hogarth spacetime, though inside a black hole.
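The accelerated-schedule idea rests on the geometric series 1 + 1/2 + 1/4 + … = 2; here is a trivial numerical check of that sum (nothing more):

```python
# Zeno-style schedule: step k takes 2**-k seconds, so the total time
# converges to 2 seconds even though infinitely many steps are taken
# (while the switching frequency grows without bound).
total, interval = 0.0, 1.0
for k in range(60):
    total += interval
    interval /= 2
print(total)   # approaches 2.0
```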

Of course we know that black holes emit quantum or Hawking radiation, so they are not eternal. This means the MH-spacetime hypercomputation is maybe not accessible even to an observer in a black hole. In fact the observer may reach the inner horizon right as the black hole enters its final Hawking evaporation or explosion. This then seems to imply that the event horizon and what might be called a quantum entropy surface, or quantum epistemic horizon, are related to each other. They prevent any possible computation, maybe even quantum computation, from ever working around the Gödel incompleteness result.

I have your paper in a tab next to Sabine's which I will also read in greater detail. I have so far not gotten beyond the first couple of pages there.

The appeal to Chaitin incompleteness comes in due to the fact that it allows one to derive a concrete bound on the approximation of a value possible within any given model. It doesn't suffice to merely point out that there are undecidable measurement outcomes, or even that they form a measure-one subset---after all, we can approximate every real with a series of rational numbers to arbitrary precision, so, with finitely precise experiments, it might just be that we'd never notice anything amiss. But if we have a series of propositions further localizing a system within its state space, and we can show that only finitely many of these propositions can be decidable, then we've shown that there is an upper bound to the localizability of a system within its state space, which yields a kind of uncertainty principle.
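The remark that every real can be approximated by a series of rationals to arbitrary precision is concretely illustrated by continued-fraction convergents. A minimal sketch (the `convergents` helper is my own illustrative function, not from the paper):

```python
import math
from fractions import Fraction

def convergents(x, n):
    """First n continued-fraction convergents of x."""
    quotients, out = [], []
    for _ in range(n):
        a = math.floor(x)
        quotients.append(a)
        # fold the partial quotients back up into a single fraction p/q
        frac = Fraction(quotients[-1])
        for q in reversed(quotients[:-1]):
            frac = q + 1 / frac
        out.append(frac)
        x = 1 / (x - a)
    return out

for c in convergents(math.pi, 4):
    print(c, float(c))   # 3, 22/7, 333/106, 355/113
```

Each convergent is a strictly better rational approximation, which is exactly why finitely precise experiments alone would never reveal that the underlying value is undecidable.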

This is what yields my 'epistemic horizon' (with 'horizon' here rather being taken in the sense of the actual horizon on the spherical Earth): it's an intrinsic limit on the amount of information that can be obtained about any system; like a horizon, we can obtain more information (by walking in a particular direction) only at the loss of prior information (which now vanishes behind the horizon). This is really an intrinsic limit to modeling, which ensures that no single framework suffices to cover all phenomena---which is the root of complementarity. The necessary loss of information upon procuring additional information is where the 'collapse' comes from---think vaguely like Zeilinger's principle: if you only have one bit of information to describe an electron's spin, if that's exhausted by your knowledge of its x-component, you can't know anything about the y-component; and if you consequently learn the y-spin, the knowledge about the x-component must get invalidated, leading to the 'updating' of the state.
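The Zeilinger-style one-bit picture can be checked with a few lines of standard linear algebra (this is ordinary quantum formalism, not anything specific to the epistemic-horizon construction): when the single bit is spent on the x-component, the y- and z-expectation values vanish.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# |+x>: the spin's one bit of information is exhausted by the x-component
plus_x = np.array([1, 1], dtype=complex) / np.sqrt(2)

def expval(op, psi):
    """Expectation value <psi|op|psi>."""
    return float(np.real(psi.conj() @ op @ psi))

print(expval(sx, plus_x))  # 1.0: x-spin fully determined
print(expval(sy, plus_x))  # 0.0: y-spin maximally uncertain
print(expval(sz, plus_x))  # 0.0: z-spin maximally uncertain
```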

But that's all better explained (I hope) in the paper.

I understand this from your paper. It is similar to Holevo's theorem as a bound or limit on the information accessible about a quantum system.

What I conjecture is that the fundamental principles of physics prevent any effective computation that circumvents Gödel's theorem. General relativity has exact solutions with Cauchy horizons, such as the Kerr and Reissner-Nordström solutions, with inner horizons that are Cauchy horizons. Here null geodesics pile up in a Cauchy sequence. The horizon bounding chronal and non-chronal regions of a wormhole is Cauchy. These are then MH spacetimes that permit a hyper-Turing machine. Hyper-computations are a workaround of Gödel's theorem.

Quantum physics appears to rescue Turing and Gödel. Since a black hole quantum-evaporates, the inner horizon is no longer continuous with positive infinity. There can no longer be a universal Turing machine that can determine the halting status of any machine in the exterior region. Any such observer would reach r_- as the black hole bursts in its final Hawking radiation.

I would propose that quantum gravity follows this as well. A wormhole connecting two times could duplicate states. You hold a quantum state, and a copy emerges from the opening. You later throw your original into the time machine, back to yourself. So no-cloning in QM = no wormholes in GR.

Lawrence:

"Quantum physics appears to rescue Turing and Gödel. [...] A wormhole connecting two times could duplicate states."

That's a good insight, I think. Copying of states is what you would need to implement the diagonal map leading to an inconsistency in Lawvere's construction; since such an operation is forbidden due to the categorical nature of Hilbert spaces, standard QM is 'safe' in that respect.

If you want to do QM with sets instead of Hilbert spaces, you can do so, as in the phase-space formulation, but at the cost of having to introduce extra structure to avert looming inconsistencies (the star-product structure).

Interestingly, that extra structure is nonlocal in a way, since the star product of two phase-space functions depends on all their differentials.
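The sense in which the star product is nonlocal can be made concrete with a truncated Moyal product, whose terms involve ever-higher derivatives of both phase-space functions. A sketch (the truncation order and helper names are my own choices) verifying the lowest-order consequence x ⋆ p − p ⋆ x = iħ:

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True)

def d(f, nx, np_):
    """Mixed partial derivative: nx times in x, then np_ times in p."""
    g = f
    if nx:
        g = sp.diff(g, x, nx)
    if np_:
        g = sp.diff(g, p, np_)
    return g

def star(f, g, order=2):
    """Moyal star product truncated at the given order in hbar:
    f*g = sum_n (i*hbar/2)^n/n! sum_k C(n,k)(-1)^k (d_x^{n-k} d_p^k f)(d_x^k d_p^{n-k} g)."""
    total = sp.Integer(0)
    for n in range(order + 1):
        term = sp.Integer(0)
        for k in range(n + 1):
            term += (sp.binomial(n, k) * (-1)**k
                     * d(f, n - k, k) * d(g, k, n - k))
        total += (sp.I * hbar / 2)**n / sp.factorial(n) * term
    return sp.expand(total)

print(star(x, p) - star(p, x))   # the canonical commutator, i*hbar
```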

Lawrence Crowell wrote:

>I have the conjecture that the measurement problem is not solvable. There simply can't exist a solution to this problem in a way that is consistent with QM.

Bohmian mechanics solves the "measurement problem." Therefore it is, in principle, solvable. QED

Now of course most physicists (as I have said, from Lubos to Sabine to myself) do not much like Bohmian mechanics for various reasons (most notably the bizarre way in which Bohmian mechanics satisfies the requirements of special relativity).

But still... Bohmian mechanics is an existence proof that the measurement problem can be solved. So, the remaining problem is how to solve the measurement problem in a way that most of us physicists find more palatable.

Dave

Have you considered the link between superdeterminism and the holographic principle? According to the holographic principle the information contained inside a surface is encoded on the surface. I suspect such encoding must also have a non-Euclidean distance measure--small changes on the inside might lead to very different encoding on the surface.

It might also be the case that the surface cannot encode all possible configurations of the space inside and the set of all possible holographic encodings could be a sparse subset of the 3-space.

Perhaps the similarities are superficial. Nevertheless, I'm curious to hear what you think. And thank you for the wonderful paper.

I find it disorienting that those topics have a hint of good fit, since in a weak sense they are at odds. The holographic principle allows model simplification in exchange for (potentially arbitrary) space of experimental development. If that gap can be closed, I'd suspect an inductive theory generalizing on Bosonic string theory. No time for that fortunately.

So, how does superdeterminism help in the understanding of quantum field theory and quantum gravity?

Sabine,

In the paper you say:

“Further, not everyone also assumes that a superdeterministic theory is deterministic in the first place.”

Could you point me to a reference? (At first sight it is not hidden in “[5, 11]”.) Somehow it reminds me of this comment.

Ken Wharton and Nathan Argaman (see references in paper) don't assume that a superdeterministic theory is deterministic.

This is a wonderful paper. I was intrigued by: "More specifically, IST is a deterministic theory based on the assumption that the laws of physics at their most primitive derive from the geometry of a finite, fractal-like set, IU, in state space. States of physical reality – the space-time that comprises our universe and the processes which occur in space-time – are those and only those belonging to IU; other states in the Euclidean space in which IU is embedded, do not correspond to states of physical reality. Dynamical evolution maps points on IU to other points on IU, whence IU is invariant under dynamical laws of evolution."

It is tempting to imagine a Super State Space in which we have infinitely many trajectories, IUa, IUb, etc and each trajectory defines SOME physical reality. I'm not trying to drag many worlds into the picture by the back door. I'm only suggesting this because it allows for greater generality. These various trajectories in this super state space or hyper state space would not influence each other and would be independent, but we might suppose that they fill the state space. And the reference to the finite state model brought back memories. I think you and Palmer did some really important work here.

I too find IST fascinating. And really weird.

Practical question: how would one even start to formulate hypotheses to test IST, experimentally or observationally? Even in principle.

An excellent question, and the short answer is: I have no idea.

DeleteIn layman's terms, it seems that counterfactual definiteness is the real hurdle that people have to get beyond. The idea that we could have done otherwise. This is an implicit assumption of most work in science, but it is NOT necessarily an assumption of other 'interpretations' of QM. Namely, Roselli's Relative QM interpretation does away with this.

Frankly, Sabine, I am struck by the resemblance between the avenue you are pursuing and Rovelli's interpretation. The spirit of Rovelli's interpretation lives inside the avenue you are pursuing, but of course with one HUGE DIFFERENCE: you are attempting to find a deterministic theory that underlies it that can actually make predictions ... ie, it ain't just an interpretation.

Kudos! I can't wait to see more!!

“There are only three people in the world who understand Superdeterminism,” I used to joke, “Me, Gerard ‘t Hooft, and a guy whose name I can’t remember.”

Reminds me of Palmerston's summary of the Schleswig-Holstein Question: “Only three people have ever really understood the Schleswig-Holstein business — the Prince Consort, who is dead — a German professor, who has gone mad — and I, who have forgotten all about it."

Stuff miracles are made of... the universe conspired favorably... delicious train of events... and the topic is superdeterminism!... can’t wait for the sequel....

ReplyDeleteRe: Future-bounded path integrals (section 5.3)

Interesting.

Here's how an uneducated layman sees it: for about 100 years, science has been up against the question of how we get the Actual from the Possible. And many scientists seem to want to escape the jaws of this dilemma without involving conscious agency.

So...Many Worlds says there's no "actual". And Superdeterminism says there's no "possible".

My intuition is that neither of these theories really explains anything, in the end they're just shell games played with words like "exists" and "happens". I think we need to get serious about that conscious agency thing and find out if there is in fact something we can say about it other than "it is".

Sabine,

Gerard ’t Hooft suggested in https://arxiv.org/pdf/1405.1548.pdf a test of superdeterminism: try using a quantum computer to solve a problem which would be impossible with a deterministic computer working at the Planck scale. Is this equivalent to the tests that you suggest in your paper? And is the definition of such a problem given by ’t Hooft as “factoring a number with millions of digits into its prime factors” sufficiently challenging?
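As a rough sense of scale for the factoring task mentioned here, one can evaluate the heuristic cost formula for the general number field sieve, the best known classical factoring algorithm. This is only a sketch: the o(1) term in the exponent is dropped, and the function name and digit count are my own choices, not from 't Hooft's paper.

```python
import math

def gnfs_log2_ops(digits):
    """Heuristic GNFS cost L(n) = exp((64/9)**(1/3) * (ln n)**(1/3)
    * (ln ln n)**(2/3)), with the o(1) term dropped; returned as
    log2 of the operation count for an n with `digits` decimal digits."""
    ln_n = digits * math.log(10)
    ln_cost = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return ln_cost / math.log(2)

# For a million-digit number the classical cost comes out around 2**2200
# operations, hopelessly beyond any conceivable classical computer.
print(f"log2(ops) for a 1,000,000-digit number: {gnfs_log2_ops(10**6):.0f}")
```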

I read your recent paper »Rethinking Superdeterminism« and I don’t want to spoil the moment just because I have a different opinion.

BUT...There are far more fundamental problems concerning QM to “start with”...

examples in easy words ...

A »time-energy uncertainty (principle)« is often mentioned lightly in connection with the »position-momentum uncertainty principle«. In quantum mechanics, time is not an observable but a number that parameterizes the chronological sequence of quantum processes. So there is no* time operator with a unique universal commutation relation that could be investigated... (see for instance *How to Introduce Time Operator, Zhi-Yong Wang and Cai-Dong Xiong, https://arxiv.org/ftp/quant-ph/papers/0609/0609211.pdf)

Feynman-Stückelberg interpretation for "inexplicable" negative energy values of the Dirac equation.

In the picture of quantum mechanics, this problem was supposedly "solved" with the help of Heisenberg's uncertainty principle, by arbitrarily interpreting the corresponding solutions as entities with positive energy that move backwards in time. The negative sign of the energy is transferred to time (Feynman-Stückelberg interpretation). But from an epistemological point of view this is, frankly speaking, “nonsense”, and it leads to a far more controversial aspect, the time-reversible Lagrangian (field theory), which is not observed in real physics. Break your teacup and try to restore it... On closer inspection one has to “bridge” microscopic interacting atoms and macroscopic teacups with a formalizable model, and this model would certainly not be based on time reversal.

The calculation of ground state energies is based neither on quantum mechanics nor on quantum electrodynamics, because a decisive part is determined by the ratio of the interacting masses. This ratio is neither QM- and certainly not QED-based (see the Bethe-Salpeter equation from Green’s functions, the Dyson equation). "Look" at the reduced mass term in muonic hydrogen to get an impression of how "bad" the situation is. The reduced mass [mred = mA / (1 + mA / mB)] is, whether you want it to be true or not, (historically) derived from "Newtonian celestial mechanics" within the framework of standard physics. In plain language, this means that in terms of atomic interactions, ground state energies are neither QM- nor QED-based.
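The reduced-mass formula quoted above is easy to evaluate numerically. A minimal sketch (mass values are rounded standard figures in MeV/c², variable names are mine) showing that the correction is tiny for ordinary hydrogen but roughly ten percent for muonic hydrogen:

```python
def reduced_mass(m_a, m_b):
    """m_red = m_a / (1 + m_a/m_b), equivalently m_a*m_b/(m_a + m_b)."""
    return m_a / (1 + m_a / m_b)

# Rounded particle masses in MeV/c^2
M_E, M_MU, M_P = 0.5110, 105.66, 938.27

# Ordinary hydrogen: the correction to the electron mass is ~0.05%
print(reduced_mass(M_E, M_P) / M_E)
# Muonic hydrogen: the correction to the muon mass is ~10%
print(reduced_mass(M_MU, M_P) / M_MU)
```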

This is one of those touchy issues which are completely out of focus because there is not even a hint how to solve this in the QM framework.

Do you think Superdeterminism “helps” QM to understand phenomenologically and finally to calculate ground state energies?

Good day Ms. Hossenfelder,

please correct the careless translation error in my comment, »how instead of who«, see the passage

..."Look" at the reduced mass term in muonic hydrogen to get an impression (who) how "bad" the situation is...

Thank you.

Dirk,

I cannot edit comments. I can merely delete them entirely, but you can do that yourself.

Not only do antiparticles travel backwards in time, they come from the other side of the vacuum, or something.

Madness.

Well, for me, the solution is to consider space as the other part of the problem; what we call a particle is a whole action-reaction that includes the particle as an asymmetry of space and the reaction of space. Where do I get this reaction? Because after the passage of the particle through space, the latter is not distorted, curved, magnetized, or asymmetric; on the contrary, it is as if a reorganization accompanies the particle from all directions; in the case of the photon, that reaction is transversal because it travels at c. Let's say that the motion of a particle is subject to some conservation of symmetry by space, perhaps linked in some way to the conservation of the energy of the total space-particle system. I offer this even if I make a fool of myself; luckily I don't make my living from theoretical physics.

Sabine Hossenfelder wrote:

“There are only three people in the world who understand Superdeterminism,” I used to joke, “Me, Gerard ‘t Hooft, and a guy whose name I can’t remember.” In all honesty, I added the third person just in case someone would be offended I hadn’t heard of them.

----------------

Sabine, didn’t Einstein vaguely imply a personal leaning toward superdeterminism in his famous quip “God does not play dice”?

Furthermore, doesn’t superdeterminism suggest an extreme (machine-like) level of order to the universe?

And if so, then before you can offer up superdeterminism as a solution to the measurement problem, you must first provide an irrefutable explanation of HOW and WHY a universe that is capable of superdeterministic processes came into existence.

Otherwise, you will simply be wrongfully ignoring the problem that Rupert Sheldrake expressed in the following quote:

“It’s almost as if science said, ‘Give me one free miracle, and from there the entire thing will proceed with a seamless, causal explanation.’”

Keith,

Einstein certainly thought that quantum mechanics would have to be underpinned by a deterministic theory but I do not know what his opinion was about superdeterminism.

"before you can offer up superdeterminism as a solution to the measurement problem, you must first provide an irrefutable explanation of HOW and WHY a universe that is capable of superdeterministic processes came into existence."

If I want to calculate a prediction for the outcome of an experiment this does not require me to "provide an irrefutable explanation" of how and why the universe came into existence.

Sabine Hossenfelder wrote:

If I want to calculate a prediction for the outcome of an experiment this does not require me to "provide an irrefutable explanation" of how and why the universe came into existence.

----------------

Yes, that’s true.

Nevertheless, the point I was trying to make is that materialism has always viewed the universe itself as being a mindless phenomenon in which random and disparate particles of matter somehow managed...

(by means of sheer luck)

...to coalesce into mind-blowing levels of order (such as our Earth/Sun system).

However, your insistence on superdeterminism not only puts the kibosh on the idea of the particles of matter being random and disparate, but it basically sees them as working together in an “omniscient” state of cooperation.

And the ultimate point is that the impenetrable “mystery” surrounding the origin of that seed-like “kernel” of compressed matter inherent in the Big Bang theory, is now exponentially elevated with the addition of superdeterminism.

And to me, if nothing more than being a matter of principle, that mystery needs to be resolved. Otherwise (as mentioned in my prior post), all of our theories are simply “after-the-fact” observations of the workings of a pre-existent “miracle,” as was noted in the Sheldrake quote.

Now with all of that being said, I actually think you’re on the right track; not so much with “super”-determinism with its implications of the nonexistence of free will,...

...but with the idea that because of the implied quantum entanglements present in the initial stages of the universe, that:

“...Everything is correlated with everything else–not a little bit, but very, very strongly...”

That is a quote from Gerard ‘t Hooft that appeared in an article in the online Scientific American. It is from an interview that took place in 2013 – (an article, btw, in which your name is mentioned).

However, although I agree that everything in the universe is strongly correlated, my main contention is with your assumptive belief that the mathematical formulas that apply to particles of matter, also apply to the essence of consciousness, or to the inner-workings of our minds.

I humbly suggest that quantum physics oversteps its bounds in that regard.

First, full disclosure: In the spectrum of possible quantum interpretations, my own "bits-first" (bits, not qubits) classical information interpretation of both quantum and classical phenomena is about as directly opposite to superdeterminism as is conceivable, a point I may explain later in a separate comment on why I think superdeterminism is redundant. Sabine is fully aware of this, yet she still posts my work. This is a powerful testimonial to her sense of fairness and the need for open discussion in science, and she should be thanked and applauded for the diversity and disagreements she makes possible within this moderated blog. (On research funding issues Sabine and I are much more compatible.)

Sabine's post made me genuinely angry, but not for the reasons you might think.

I am furious that she, the remarkable Gerard 't Hooft, and folks like Tim Palmer get dismissed as cranks by fellow physicists for believing in exactly the same view that they advocate, but in different words: they call it a block universe. If you believe in a block universe, do you not realize you are also predetermining exactly which thoughts and decisions will go through the heads of the folks who are controlling Bell's Inequality experimental equipment? Or that the nominally random results of quantum observations must also be fully predetermined, since they will then affect the classical worldlines around them in real, historical ways?

Get serious. Such presetting is the very definition of superdeterminism: You get odd Bell Inequality results because they were predetermined to begin with.

If anything, superdeterminism is the most common interpretation among physicists, since both relativistic and quantum physicists are powerfully inclined to assume the need for a block universe, although for somewhat different reasons. Drs Hossenfelder, 't Hooft, and Palmer are just some of the few folks honest enough and bold enough to say such unsettling things out loud.

Again: Once you head down any path that includes a fully fixed future, no outcome other than superdeterminism is possible. You have unavoidably and pretty much by definition precoordinated the actions of experimenters and "random" quantum events so that they will conspire perfectly to give the expected Bell inequality results.

If that is still not enough for you, please go back to my reply to Sabine's earlier July 28, 2019 posting on superdeterminism and look for my comments there. I even give the computational method needed, the Radon algorithm, to ensure that the subtleties of quantum randomness can be reconciled with the more classical world lines of relativity. Granted, it's phenomenally expensive computationally, but then so is every variant of the block universe.

You cannot invoke a preexistent block universe without first creating it. To think that you "don't need to" is nothing more than sloppy thinking, and does not meet the criteria of scientific analysis. The real problem becomes how to create such a block universe while using mechanisms such as the Radon algorithm that at least plausibly could occur naturally. Shoot, even if you invoke a deity to do the job for you, guess what? She still has to run all of the necessary physics-constraints algorithms too, just to get an internally self-consistent block universe built!

So: Sabine, cheers to you, Dr Palmer, and Dr 't Hooft for saying what so many others either don't realize they are also saying, or are not bold enough to say.

Terry,

DeleteYou wrote:

>One reason I mentioned the Radon algorithm back in your July 2019 blog on this same topic is that if you first assume all-encompassing, crystallized space time — a block universe — then the approach that requires the least up-front intelligence is to apply an iterative annealing algorithm that tweaks the block structure until it meets some set of goal constraints, in this case Bell violations. This requires that time be treated like a fully bidirectional spatial dimension, with temporal causality transforming into a fine-grained network that meets the physics causality constraints of the annealing algorithm.

Terry, some physicists have been thinking along these lines for over three decades: it goes by the name of "stochastic quantization" (or Parisi-Wu quantization), not to be confused with Nelson's "stochastic quantum mechanics" (the Wikipedia page seems to be confused on this).

Quoting from the review paper by Damgaard and Huffel:

>"The main idea of stochastic quantization is to view Euclidean quantum field theory as the equilibrium limit of a statistical system coupled to a thermal reservoir. This system evolves in a new fictitious time direction t until it reaches the equilibrium limit as t--> ∞."

The rub is the reference to "Euclidean" QFT: what this means, in essence, is that the ordinary time variable becomes imaginary, so that you are in ordinary four-dimensional space, not spacetime.

In terms of most of the quantities we are interested in -- e.g., expectation values of operators -- this works rather nicely.

But, unfortunately, it does not actually produce time evolution of a physical system in spacetime.

Now, of course, a lot of people (including me!) have thought, "Hey! -- it ought to be an easy tweak to kick things over from imaginary time to real time!" But, as as far as I know, no one has fully succeeded in doing this.

Mikio Namiki's 1992 book Stochastic Quantization has references to early work on all this.

I agree with you that taking the "block-universe" approach and "annealing" the whole universe to satisfy the necessary constraints could logically give you the non-local correlations needed for quantum mechanics. I suspect that when all is said and done, the result might be equivalent to what Sabine wants to do with "superdeterminism" or what others (e.g., Huw Price) want to do with temporally bidirectional causation.

My "gut feeling" is that something along these lines should work.

But I have not been able to get it to work, nor has anyone else that I know of.

Of course, perhaps all we need is some bright young guy or gal who sees things a little bit differently than we have viewed it and who sees the magic trick that makes it all work.

Dave

P.S. I also have thought that the Radon transform is relevant here (in essence spacelike hyperplanes in different frames of reference seem like the different sections in an MRI or CAT scan), but I could never get that to work, either.

I'm pleased to read this post. I'm a third person, if not The third person. When I read that Bell's inequality can be evaded by superdeterminism, I thought: Great! How easy was that? Part of my take on this is like Einstein's old God-doesn't-play-dice intuition/proclamation/whatever. It's certainly not science, it's not even an argument really, more an intuition of how things could possibly happen. If GR "reveals" anything to me, it's not that the universe is a Lorentzian manifold, it's that it's a causal network. The causality seems to me to be "prior" to spacetime itself; the Lorentzianity preserves causality, not the other way around. Otherwise, why would such a manifestly crazy system exist! (Permission to contemplate emergent spacetime falling out of that somehow is a beautiful bonus.)

Anyway, if a revived medieval Catholicism added double slits to the Index of Forbidden Physics I'd be tempted to pluck out my physicalist heart and sign up because it would bring deep order to the world, but on second thoughts maybe no, but let's give superdeterminism a serious look.

On p. 5 (Section 3) of your paper, you write, "Neither can we interpret ρ as a probability in the Bayesian sense, for then it would encode the knowledge of agents and thereby require us to first define what 'knowledge' and 'agents' are."

This is simply not true. Probability theory is logically antecedent to decision theory. Bayesian probability theory as the logic of epistemic "degrees of plausibility" may be derived without invoking any notion of agents. This is what Cox's Theorem [1] does, as well as my own theorem deriving probability theory as the uniquely determined extension of classical propositional logic to handle degrees of plausibility [2].

As to defining "knowledge," that can be just a collection of logical statements that includes the definition of your theory, and past observations. As Jaynes and others showed over 60 years ago, this suffices to get you maximum-entropy distributions for the microstate of a physical system for which only measurements of macroscopic quantities (such as temperature and pressure) are available.

[1] Cox, R. T., 1946. “Probability, frequency and reasonable expectation.” American Journal of Physics 14 (1), 1–13.

[2] Van Horn, K. S., 2017. "From propositional logic to plausible reasoning: a uniqueness theorem." International Journal of Approximate Reasoning 88, 309–332.
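The maximum-entropy construction mentioned above can be made concrete with a small sketch (a toy illustration of Jaynes' recipe, not code from any of the cited papers; the energy levels and the mean-energy constraint are invented). Maximizing entropy subject only to normalization and a fixed mean energy yields the Boltzmann form p_i proportional to exp(-beta * E_i), with beta fixed by the constraint:

```python
import math

def maxent_boltzmann(energies, mean_energy, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution over discrete energy levels, subject to
    normalization and a fixed mean energy only. The maximizer is the
    Boltzmann form p_i ~ exp(-beta * E_i); beta is found by bisection."""
    def mean_for(beta):
        weights = [math.exp(-beta * e) for e in energies]
        z = sum(weights)
        return sum(w * e for w, e in zip(weights, energies)) / z
    # mean_for is monotonically decreasing in beta, so bisect on beta
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > mean_energy:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return beta, [w / z for w in weights]

# constrain only the mean energy; maximum entropy then fixes the distribution
beta, p = maxent_boltzmann([0.0, 1.0, 2.0, 3.0], mean_energy=1.2)
```

Since the constrained mean (1.2) is below the uniform mean (1.5), the resulting beta is positive and the probabilities decrease with energy, as in the thermodynamic case the comment alludes to.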

Kevin,

And who or what do you think makes decisions and what does it mean to decide something? If you could please give me a reductionist definition from first principles, thank you.

Sabine,

Decide what? There is no notion of decision inherent to Bayesian probability, no need to consider any action to be taken. In the Jaynesian view it's just a set of rules for assigning plausibilities to propositions. It's a logic of plausible reasoning, an extension of the classical propositional logic.

I think you've been misled by the fact that some people working in the philosophical foundations of Bayesian probability proceed via decision-theoretic arguments, such as de Finetti's "Dutch book" arguments. Some advocates of MWI (e.g. David Wallace) do the same, as do the QBists. But as the references I gave show, you don't have to take that route.

Hello Sabine,

I have read with interest your paper and I have some observations regarding Section 3 "Violation of Statistical Independence". I will stick to finite sets of events for the sake of the discussion and as our instruments for measuring phenomena and storing data can only have finite precision.

Observation #1: Your wording implies that Bayesian probabilities require one to define the concepts of "knowledge" or "agents". This is not reasonable, as Bayesian probability theory is not built on any notions of "knowledge" or "agents", but on Bayes' rule:

P(\lambda|a,b) = P(a,b|\lambda) P(\lambda) / P(a,b)

which is a method to update the left hand side of the equation above. Referring to that conditional probability as "knowledge" is just a metaphor, like calling elementary particles "quarks".

I think that you will generally agree with the statement that attaching physical interpretations to conditional probabilities, or other mathematical objects, such as polytopes, is inspired by fancy, not fact. They do not need it to be useful or valid tools to make sense out of reality (here we may be in disagreement?).

This method depends on the ability to decompose P(\lambda,a,b) into "factors" (conditional probability distributions and the prior probabilities). The "agent" is whatever computation system applies the Bayes' rule to update P(\lambda|a,b).

I think it is unfortunate to put so much emphasis on the "metaphor" or "interpretation". Bayesian probability calculus and inference is just an algorithm for updating assessments of plausibility represented as conditional probabilities. It is (in my opinion) a useful one, because it allows one to exploit the logical structure of a theory (which would correspond to the particular decomposition of P(\lambda, a, b)) to obtain a computational advantage (i.e. less time-consuming calculations).

Observation #2: The Bayesian approach to doing science as in tracking and updating conditional probabilities as new data is available is compatible with the "statistical independence" postulate discussed in your paper if one is assuming that

P(a,b|\lambda) = P(a,b) = 1

or in words, that (a,b) is the only possible combination of configurations for the two detectors.

Observation #3: Equation 3 is pretty much a corollary of the law of total probability, a known mathematical fact that states

p(\lambda) = \sum_{a \in A} P(a) \sum_{b \in B} p(\lambda|a,b) P(b)

unless, of course, P(a) = P(b) = 1 (and therefore A={a} and B={b}).
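Observation #3 can be checked numerically. A sketch of my own (random finite distributions, matching the finite-sets setting of the comment): the unconditional law of total probability sums p(\lambda|a,b) against the joint P(a,b); the factorized form written above, with separate P(a) and P(b), additionally assumes the two detector settings are statistically independent.

```python
import itertools, random

random.seed(0)
A, B, LAM = ["a1", "a2"], ["b1", "b2"], ["l1", "l2", "l3"]

# build an arbitrary joint distribution P(lambda, a, b)
raw = {k: random.random() for k in itertools.product(LAM, A, B)}
z = sum(raw.values())
joint = {k: v / z for k, v in raw.items()}

def marginal_lambda(lam):
    return sum(joint[(lam, a, b)] for a in A for b in B)

def p_ab(a, b):
    return sum(joint[(lam, a, b)] for lam in LAM)

def p_lam_given_ab(lam, a, b):
    return joint[(lam, a, b)] / p_ab(a, b)

# law of total probability: p(lambda) = sum_{a,b} p(lambda|a,b) P(a,b),
# which holds for ANY joint; the factorized version with P(a)P(b) in
# place of P(a,b) needs the settings to be independent in addition
for lam in LAM:
    total = sum(p_lam_given_ab(lam, a, b) * p_ab(a, b) for a in A for b in B)
    assert abs(total - marginal_lambda(lam)) < 1e-12
```

The identity holds for every randomly generated joint, which is the sense in which Equation 3 is "pretty much a corollary" once the conditional probabilities are taken against the true joint distribution of the settings.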

Question #1: Why do algorithms/methods to do calculations need to have plausible "physical interpretation" in order to be "valid"? Do Feynman diagrams have such an undisputed interpretation? I kind of feel I am missing something because the reality of *any* algorithm/calculation is something that follows from us, humans, using them or remembering them.

The quote from Bell was very useful to clarify what countless popular science write-ups of Bell's theorem have muddled for me over decades.

Thanks again for sharing the preprint, it was certainly an interesting and stimulating read (more than the papers I am reviewing).

Kevin,

You were the one talking about decisions. I am asking you to define what you are talking about. Of course you can just "write down a set of rules" but that means the same as just postulating it.

Sabine,

No, I was responding to this: "Neither can we interpret ρ as a probability in the Bayesian sense, for then it would encode the knowledge of agents and thereby require us to first define what 'knowledge' and 'agents' are."

I assumed you were referring to decision-theoretic justifications for Bayesian probability, and implying that this was a necessary part of Bayesian foundations. If that's not what you meant, then I have no idea what the sentence you wrote means; why does a Bayesian interpretation of probability require invoking some notion of agents, any more than does a frequentist interpretation?

Kevin, Geek,

You two guys are very confused. Of course you can just postulate Bayes' rule instead of saying it comes down to decision theory. Alright. But that does not solve the measurement problem, it merely replaces one postulate with another one. Excuse me for thinking this was obvious.

Probability and statistics have either the frequentist objective foundation or the subjective or epistemic Bayesian foundation. The first tends to require that the analyst have some full description of a sample space. That is not always possible. In the Bayesian approach one imposes a prior probability and updates this as information comes forth. This does have a reference to a mental observer and a subjective choice. What this really means is that we do not have a really complete understanding of probability. In the end you use one or the other of these depending upon whether it works for the problem at hand.

Lawrence Crowell wrote: "Probability and statistics has either the frequentist objective foundation or the subjective or epistemic Bayesian foundation."

In my own work it's certainly one or the other. But for those who study this topic from the POV of pure mathematics, are these truly the only two (independent) foundations? If there's a third (fourth, etc.), how does it relate to the measurement problem?

I am not that versed in the foundations of probability theory. From my more basic or elementary knowledge there are the objective and subjective interpretations, i.e. the orthodox and Bayesian ones. It may be these are what have a lock on probability theory. It might be connected to the dualism between ontic and epistemic QM interpretations.

Lawrence Crowell wrote: "we do not have a really complete understanding of probability"

The two "foundations" are not complete opposites. A Bayesian will happily use frequencies, if data are available. It is misleading to call Bayesian probabilities subjective -- given the same data, everybody should come to the same conclusion. I prefer Jaynes' view that probability theory is (quantitative) logic that allows you to draw valid conclusions from the information that is available.

In physics, the meaning of probability is indeed sometimes unclear. For some people probability is a "property" inherent to quantum particles; for others e.g. the Boltzmann factor drops out of the sky. For physics probability theory is just as indispensable as is geometry, and the use of the Maxwellian velocity distribution is the best way we've come up with to deal with incomplete information (if all we know is a temperature).

For Sabine, statistical theories are apparently only second-rate theories. Fundamental theories should not require probabilities. Therefore I suspect her to have little sympathy for Arnold Neumaier's "thermal interpretation", which is the clearest exposition of the quantum formalism I have come across so far. (But it still doesn't explain what "measurement" is. It is treated as a primitive concept.)

Sabine, please correct me if I've misunderstood your postings.

Werner,

Thanks. Unfortunately you have misunderstood my explanations. I have no particular opinion about what postulates are or aren't allowed in a theory. I am saying the postulates have to be consistent. Keep in mind that physics isn't math. To have a theory you not only need mathematical definitions, you also need a map from the mathematical structures to observables in the real world.

The dichotomy between the frequentist and Bayesian interpretations is blurred. I was not going to get into that much. I will be honest, when discussions turn into kvetching over this it is often a big snore.

I really think QM is a foundation for probability. I think parallel to this dichotomy in probability is the one between ontic and epistemic interpretations of QM. I don't think there is any decision procedure to determine that QM is either.

Lawrence Crowell wrote: "I really think QM is a foundation for probability"

For me, it's just the other way round: probability theory is part of the mathematical infrastructure of physical theories, just as geometry.

"ontic and epistemic interpretations of (the wavefunction)"

I think it's misleading to focus on the wavefunction. It is but a single piece of a statistical theory -- every |ket> needs to be combined with a <bra| and a trace taken before you arrive at something that can be confronted with experience. The wavefunction (plus unitary evolution according to Schrödinger's equation) cannot be the whole story.

Sabine,

Having been one of the few people who both comment here and have actually read the paper, I will offer a few comments:

* In the paper you seem to alternate between Output Independence and Outcome Independence, unless I'm missing something.

* You assert that General Relativity respects CoA. However, there is a caveat there: the stress-energy tensor must obey the dominant energy condition for a local Cauchy development to exist, specifically the part of the condition requiring that the energy flux T^\mu_\nu u^\nu be causal (non-spacelike) for any timelike u, i.e. no superluminal signaling (no "action at a distance"). While an eminently reasonable condition, it's not a constraint of the theory itself, but merely of the form of the stress-energy tensor. If you are pushing the boundaries of the accepted assumptions, it is worth spelling out every assumption you keep or change explicitly. For example the ER=EPR conjecture uses Planck-size wormholes, which does not play well with the dominant energy condition, as I understand it.

* As a fellow anti-realist, I was happy to see that the abstract promised testable predictions. However, the bait-and-switch was not appreciated: the stated predictions were just references to your earlier paper of 2011.

* I was also expecting you to directly address Scott Aaronson's critique of superdeterminism, that "With “superdeterminism,” you simply assert that your cosmic conspiracy is only to be invoked to explain the one phenomenon that bothers you for whatever reason (e.g., quantum entanglement), and not to explain any of the other things that it could equally well have explained (e.g., bigfoot sightings). A scientific theory, by contrast, should intrinsically tell you (or let you calculate) which situations it’s going to apply to and which not." (this quote is from https://www.scottaaronson.com/blog/?p=4400, but it's a recurrent theme of his.) I assume that a more careful reading through your paper would reveal the relevant argument, but I would rather see it spelled out explicitly.

Finally, thank you for writing the paper, it is accessible enough for me, a lowly PhD in GR with no subsequent research credentials.

Sergei,

1) Will have to check this; probably a typo.

2) Yes, that's right. We should have formulated that more carefully.

3) No, it's not the same as in the 2011 paper. The 2011 paper was about models in which the hidden variables are *additional* properties of the particles. It occurred to me later that this is an unnecessary assumption which means that the experimental test is easier. It also occurred to me that this would have to be a generic property of any type of model with these properties. This is what we explain in this current paper. Of course we have a reference to the earlier paper.

4) I wasn't aware of this particular "critique" but this generic "it's unscientific" claim is addressed in our section on the conspiracy argument.

Sergei,

I have addressed Scott's arguments on his blog (Comment #122). I quote from there:

"Scott,

“With “superdeterminism,” you simply assert that your cosmic conspiracy is only to be invoked to explain the one phenomenon that bothers you for whatever reason (e.g., quantum entanglement), and not to explain any of the other things that it could equally well have explained (e.g., bigfoot sightings).”

Not at all. Bell’s independence assumption is false in any theory with long range forces, like classical EM, General Relativity, Newtonian gravity, etc. Any such theory that is also deterministic will be “superdeterministic” in Bell’s sense. All these theories imply correlations between distant systems as a result of those long range fields/forces. Such examples abound: planets in planetary systems, stars in a galaxy, electrons in a wire, atoms in a crystal, etc. Those theories apply equally well to bigfoot, if such a being exists, but the observable effects will be different because, well, bigfoot is different from an electron and a camera is different from a Stern-Gerlach detector. The trajectory of an individual electron will be significantly affected by the EM fields produced by other electrons and quarks. The center of mass of a large aggregate of electrons and quarks, like bigfoot, will not be significantly affected because those effects will cancel out (objects are neutral on average). The larger the object is, the less likely it is for all those fields to arrange in such a way so that we get observable consequences. Still, correlations between the motion of elementary particles inside bigfoot and inside the camera are to be expected, it is just difficult to measure them.

“A scientific theory, by contrast, should intrinsically tell you (or let you calculate) which situations it’s going to apply to and which not.”

Sure. Bell’s independence assumption is false in the general case as a direct consequence of long range fields/forces but it is approximately true for large, macroscopic objects where those forces cancel to a large degree. You can ignore those forces when doing coin flips or searching for bigfoot but you cannot ignore them when properties of elementary particles or small groups of such particles are involved."
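The quantitative content of Bell's theorem behind this exchange is easy to check numerically. A sketch (my own illustration, not from the paper or the comment): the toy local deterministic hidden-variable model below gives perfect anti-correlation at equal settings and saturates, but never exceeds, the CHSH bound of 2, while the quantum singlet correlation E(x, y) = -cos(x - y) reaches 2*sqrt(2).

```python
import math

def E_quantum(x, y):
    """Singlet-state correlation for spin measurements at angles x and y."""
    return -math.cos(x - y)

def E_local(x, y, n=100000):
    """Correlation in a toy local deterministic hidden-variable model:
    a shared angle lam determines both outcomes; each wing's result
    depends only on its own setting and lam."""
    total = 0
    for i in range(n):
        lam = 2 * math.pi * i / n
        sa = 1 if math.cos(x - lam) >= 0 else -1
        sb = -1 if math.cos(y - lam) >= 0 else 1
        total += sa * sb
    return total / n

def chsh(E, a, a2, b, b2):
    """CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')."""
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

angles = (0.0, math.pi / 2, math.pi / 4, -math.pi / 4)
S_quantum = chsh(E_quantum, *angles)  # magnitude 2*sqrt(2): violates the bound
S_local = chsh(E_local, *angles)      # magnitude at most 2 (up to grid error)
```

A superdeterministic model evades this bound not by changing the local model but by dropping the statistical-independence assumption under which the bound of 2 is derived, which is exactly the point being argued in the thread.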

Andrei, I went back and read your conversation with Scott and have emailed and asked him to read over Sabine's paper and give a response. If he is not too busy, I imagine he will, given his obvious passionate distaste for superdeterminism. In this rare case, I think Scott has it wrong and is letting his emotional objection to superdeterminism override his reason. Let's see what he says...

Anyway, I wanted to note that in the paper itself you can find Sabine and Palmer's answer to the thrust of Scott's critique in section 4.4, the so-called "Tobacco Company Syndrome." This section also gives Scott an explicit answer as to *where/how* a superdeterministic QM theory will draw the line as to where it should be applied (in Bell-type experiments) vs where it should not (failure to observe Bigfoot)! This quote from the paper is highly significant:

"It is only when our theoretical interpretation of an experiment requires us to consider counterfactual worlds, that differences between classical and quantum theories can emerge."

This is the nut of it. To really grapple with any anti-realist interpretation of QM one must grapple with how/why counterfactual definiteness is dispensed with.

What I find striking about Scott's objections is they are equally damning of his favored interpretation: MWI. Why don't we see Bigfoot? Because we are not in that branch of the universal wave function...

manyoso,

I think there are even more ways to refute the Bigfoot argument.

1. Assuming that a superdeterministic theory cannot reproduce QM is circular reasoning. But, unless Scott believes that QM itself implies Bigfoot sightings conspiracies, he is doing exactly that.

2. Deriving the macroscopic consequences from a type of subatomic behavior is not trivial. It should involve computer simulations on a scale never attempted. Yet Scott asserts, without even looking at a possible mathematical implementation of superdeterminism, that he knows how a Bigfoot - camera system is going to behave. Completely unjustified.

A living cell is an extremely noisy entity! The human brain consists of about 100 billion such noisy units. So why would anybody in his right mind argue about fundamental physics based on free will which is dependent on living cells?

The world is not deterministic. There is always something smaller and always more. So there is an infinity of outcomes. We do not control our destiny. There are too many variables that lead to the place we are and will go. There is no absolute determinism or absolute free will. Only common paths that lead to similar outcomes.

ReplyDeleteSabine,

A minor quibble, but you repeatedly refer to "the p-adic metric." However, there are actually an infinite number of (inequivalent) p-adic metrics, one for each prime, on the rationals. And each has its own completion, all, I think, also inequivalent to each other as metric spaces (the issues here are complicated -- algebraic completion and then metric completion apparently lead to a field algebraically equivalent to the complex numbers but not metrically equivalent; anyway, these details are beyond my level of competence!).

Does not affect any of your conclusions of course, but just in case a number theorist reads your paper...

All the best,

Dave

Dave,

That's right of course. Not sure what the correct grammar is here. Should we just make that a plural? While they are inequivalent for different p, they have the same definition & properties, so it made sense to me to make it a singular.

Sabine,

I've seen it as a singular, but I think in the context where the writer is basically saying "For a given prime p, the p-adic metric..."

So logically in your context, I think it should be plural, but yeah, it is a close call (like: what is the plural of "computer mouse"?).

By the way, I just picked up on this thing about the metric closure of the algebraic closure of a p-adic field being algebraically (but not metrically) equivalent to the complex numbers: apparently, you do need two closures, first algebraic, then metric. This is very weird: obviously, p-adic fields do not fit with human intuition!

(The big difference, for anyone interested, is you can metrically close the rationals to get the reals and then the complex numbers are just a quadratic extension of the reals. Apparently nothing this simple works in the p-adic worlds.)
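For readers following this subthread, the family of p-adic metrics is easy to play with computationally. A sketch using only the standard definitions (nothing specific to the paper): |x|_p = p^(-v_p(x)), where v_p counts the net power of the prime p in x, and each prime gives its own inequivalent norm.

```python
from fractions import Fraction

def p_adic_valuation(x, p):
    """v_p(x): net exponent of the prime p in the nonzero rational x."""
    x = Fraction(x)
    v, num, den = 0, abs(x.numerator), x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def p_adic_norm(x, p):
    """|x|_p = p^(-v_p(x)), with |0|_p = 0; one norm for each prime p."""
    x = Fraction(x)
    return 0.0 if x == 0 else float(p) ** (-p_adic_valuation(x, p))

# the same rational is "small" or "large" depending on the prime chosen,
# which is the sense in which the metrics are inequivalent
norms_of_12 = {p: p_adic_norm(12, p) for p in (2, 3, 5)}
```

For instance, 12 = 2^2 * 3 is 2-adically small (|12|_2 = 1/4), 3-adically smaller than 1 (|12|_3 = 1/3), and a 5-adic unit (|12|_5 = 1); the norms also satisfy the ultrametric inequality |x + y|_p <= max(|x|_p, |y|_p).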

Dave

Dave,

Yes, funny you would say that, because just this weekend I was giving my husband a speech about how odd it is that you need to close this field twice. Only to then find that the resulting complex field of p-adic numbers is isomorphic to the usual complex numbers. Math can be wild.

Sabine,

I've seen your paper and I think it's great. It's the best case made for superdeterminism in a physics paper.

I will reiterate some of my points made before I had the chance to read your work and make some suggestions of how the strength of the superdeterministic case can be further increased.

1. Not only is superdeterminism a respectable scientific option, but it is inevitable if locality is to be maintained. I think that one should directly attack the non-hidden-variable views, which, by a trivial use of the EPR reality criterion, can be proven to be either non-local or non-fundamental (statistical approximations).

2. Show that the statistical independence assumption fails in all field theories (GR, classical EM, fluid mechanics) as a consequence of the fact that each object produces a field everywhere and any object is acted upon by this field. This would make all arguments against superdeterminism look silly.

In order to justify 2. it is necessary to address the following objections:

O1: Why does the independence assumption seem to work in mundane circumstances, like coin flips?

O2: How is it that you cannot get independence by simply increasing the distance between the source and detectors?

Let me present the answers below:

O1: As mentioned in 2., independence fails in field theories, but it holds in theories with contact forces only (Newtonian billiard balls with no gravity). Because distant systems cannot interact, there is no other way to get correlations except by fine-tuning the initial state. So, if we disregard the fine-tuning option we can expect that the independence assumption will hold in all circumstances where Newtonian mechanics with contact forces only is a good approximation. This includes experiments like double-blind medical tests with regard to the selection procedure.

O2: Let's consider the case of a particle source and a detector. When these devices are very close it is expected that the fields produced by their internal particles (electrons and quarks) will have a significant effect on the hidden variable (say the polarisation of the photons). If the detector is placed very far, say 1 light year away, the field produced at the location of the source is very weak, so one would naively expect that different detector states should not have any significant influence on the source. However, there is a catch here. The increased distance implies an increased time for the experiment; in this case it cannot be less than 1 year. That means that, even if we modify the initial state by an infinitesimal amount, we will have at least 1 year of chaotic evolution. So, an infinitesimal change of the initial state will still imply a significant change at the end of the experiment.
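The "chaotic amplification" step in O2 is the standard sensitive-dependence argument, and it can be illustrated with any chaotic map. The logistic map below is my stand-in for the chaotic dynamics, not a model of the actual detector physics: a perturbation of 10^-12 becomes macroscopic within a few dozen iterations.

```python
def steps_to_diverge(x0, delta=1e-12, r=4.0, threshold=0.1, max_steps=1000):
    """Iterate the chaotic logistic map x -> r*x*(1-x) from two nearby
    initial conditions and count the steps until they differ by `threshold`."""
    x, y = x0, x0 + delta
    for n in range(1, max_steps + 1):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if abs(x - y) > threshold:
            return n
    return max_steps

# a 1e-12 perturbation grows to an order-0.1 difference in a few dozen steps
n_steps = steps_to_diverge(0.123456789)
```

With an average stretching factor of about 2 per step, the crossing time grows only logarithmically in 1/delta, which is why, in Andrei's argument, even an "infinitesimal" change to the initial state has ample time to become significant over a year-long experiment.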

The rationale of Superdeterminism appears to be to uphold locality and determinism. But these are features of classical theories. What makes you so sure that they must carry over to the "final" theory? I'd call this an example of classical cognitive bias.

In the past, theoretical advances were triggered by observations (e.g. spectral lines, Lamb shift). Attempting to replace QT with something more fundamental without having any observational clues as to the nature of the hidden variables strikes me as perversely ambitious. Do you envision a theory that could (in principle) predict the decay time of an individual neutron? What makes viewing QT as a non-deterministic, "merely" statistical theory so distasteful for you?

I share the view that there is a quantum measurement problem, but I don't think QT must be replaced. The puzzling interpretations of the formalism have been with us for decades, but this is not without a historical precedent. Maxwell's electrodynamics was highly successful and passed all experimental tests, but it was not fully understood until 1905. For Maxwell the theory was clearly about the aether, for light had been firmly established as a wave, and how can there be waves without something doing the "waving"? Like today's quantum objects, which are both particles and waves, the aether had contradictory properties: it was at the same time solid (to support transverse waves) and fluid (for vortex lines to form). Today the aether is considered obsolete (and has been replaced with something even more bizarre that we call vacuum). If the history of physics can be a guide, I'd conclude that QT has to get rid of its awkward metaphysical baggage and find a true synthesis of the wave and particle pictures.

Werner,

"The rationale of Superdeterminism appears to be to uphold locality and determinism. But these are features of classical theories"

This is wrong. Superdeterminism is the only way to save locality. The EPR reality criterion implies that QM is either non-local or incomplete. Bell proved that the only way to keep locality is superdeterminism. No other option exists. You may hear the claim that you can keep locality by abandoning realism. That's wrong. Local non-realism has been refuted by EPR.

Werner,

As Andrei says, this is the relevant point. There isn't any other option. Though, as we explain in the paper, one has to be careful with the word "locality" as it's sometimes used in confusing ways. If you want a theory that is "local" in the sense that GR is (ie in the sense of local propagation of information) and you want a theory that is deterministic, and you want it to be a reductionist theory (this is to do away with people who are content with saying the wave-function is "knowledge" without explaining what that knowledge is about) then superdeterminism is the *only* way to do it.

Sabine,

"If you want a theory that is "local" in the sense that GR is (ie in the sense of local propagation of information) and you want a theory that is deterministic, and you want it to be a reductionist theory (this is to do away with people who are content with saying the wave-function is "knowledge" without explaining what that knowledge is about) then superdeterminism is the *only* way to do it."

The case for superdeterminism is stronger than that. EPR proved that any local theory must be deterministic. A stochastic local theory cannot explain the fact that spin measurements using the same detector orientations always give anti-correlated results. So determinism is not an independent assumption, but a requirement for locality. So, if you want a theory that is both local and fundamental (complete) you are forced to accept determinism. Bell's theorem further constrains the class of available local theories (which already consists only of deterministic ones) to the superdeterministic type.

So, unless you want to scrap relativity and go back to an absolute reference frame you need to accept superdeterminism.

We should be very careful here, because if you admit that determinism is an assumption independent of locality, most physicists will just abandon determinism (a useless old philosophical assumption) and declare victory.

Any realism is also impossible if the facts are interpreted in this way: "When the observer wants to know which slit the photon passes through then the interference does not occur; it only occurs when the observer does not know which slit the photon passes through." Associating the knowledge of the observer with the choice of the path of the photon is extreme subjectivism. The trick that information cannot travel faster than c in the phenomenon of quantum entanglement is a subjectivist invention. Is it not easier to think that, because the particle does not exist independently of space and is linked to it, when there are two paths equivalent energetically and geometrically, space responds in both alternatives, which can interfere? And in the case of quantum entanglement, is it not better to think that, in its relation to space, the parameters of the particle are related to the symmetric structure of space, which somehow preserves the result? Due to this, since the space is symmetrical, no matter how much separation there is between two previously linked particles, each particle has a relationship of conservation of the symmetry with its surrounding space; when a measurement is made on a particle, what is discovered is that relationship with space; therefore the relationship of the other particle with space is inverse. It is my humble opinion.

Sabine wrote: "There isn't any other option."

I don't insist on locality, so I have more options. :-)

Frankly, I don't see how Superdeterminism can get you any closer to solving the measurement problem. Rather than trying to "explain" things following a pre-defined philosophy of physics, I'd suggest a much more modest goal: finding a consistent *description* of the world around us without superfluous metaphysical baggage.

Sabine,

"Superdeterminism" is the cure to a non-existent problem since there is no "measurement problem" in QM.

Some physicists have got into major trouble by inappropriately conferring physical reality on the wave function and other mathematical abstractions used to organize what we know about the world.

isometric,

We have explained in the paper what the problem is. I don't care what your personal opinion is.

isometric,

""Superdeterminism" is the cure to a non-existent problem since there is no "measurement problem" in QM."

It is a cure for a serious problem: finding a local and complete theory of physics. EPR + Bell proved that the only way to have a local theory is superdeterminism. You may go for non-local theories like Bohm's, or stick your head in the sand and pretend not to care about not having a theory that can describe the world (QBism), but these options are not problem-free either.

Thank you :-)

This "opinion" is shared in many physics departments. Raising a "measurement problem" in QM is like raising a "roundness problem" about the earth. I am afraid that this question was solved nearly a century ago.

isometric,

"This "opinion" is shared in many physics departments."

Yes, thank you, I am well aware of this. That doesn't mean it is correct, however. We have explained in the paper why it is a problem. This is science. If you want to refute the argument, you will have to think about it and not appeal to popularity.

Andrei

Delete"finding a local and complete theory of physics"

QM is local. Locality doesn't mean that there are no correlations. The singlet state is strongly correlated and perfectly local. There is nothing non-local about the perfect anticorrelations in quantum mechanics.

You need non-locality in a classical theory to fake the quantum results.

isometric,

"QM is local"

According to EPR, if it is local, it is not complete. You may start by refuting the below argument:

Let's take a look at EPR's reality criterion:

"If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity."

Let's formulate the argument in a context of an EPR-Bohm experiment with spin 1/2 particles where the measurements are performed in such a way that a light signal cannot get from A to B:

1. It is possible to predict with certainty the spin of particle B by measuring particle A (QM prediction).

2. The measurement of particle A does not disturb particle B (locality).

3. From 1 and 2 it follows that the state of particle B after A is measured is the same as the one before A is measured (definition of the word "disturb").

4. After A is measured, B is in a state of defined spin (QM prediction).

5. From 3 and 4 it follows that B was in state of defined spin all along.

6. The spin of A is always found to be opposite from the spin of B (QM prediction).

7. From 5 and 6 it follows that A was in a state of defined spin all along.

Conclusion: QM + locality implies that the true state of A and B was a state of defined spin. The superposed, entangled state is a consequence of our lack of knowledge in regard to the true state. So, QM is either an incomplete (statistical) description of a local deterministic hidden variable theory or it is non-local.
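To make premises 1 and 6 concrete, here is a minimal sketch (an illustration, not from the paper or the comment) of what "perfect anticorrelation along the same axis" means operationally:

```python
import random

# Sketch for illustration: singlet outcomes measured along the SAME axis
# are individually random yet perfectly anticorrelated.
def measure_singlet_same_axis(trials: int = 10_000):
    pairs = []
    for _ in range(trials):
        a = random.choice([+1, -1])  # outcome at A: p = 1/2 for each sign
        b = -a                       # outcome at B: always the opposite
        pairs.append((a, b))
    return pairs

pairs = measure_singlet_same_axis()
print(all(a == -b for a, b in pairs))  # True: perfect anticorrelation
```

Measuring A thus predicts B's outcome with certainty, which is exactly what premise 1 uses.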

Andrei,

"If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity."

This is a statement about philosophy not a statement about physics. Physics is about predicting values of measurements using mathematics.

General comment: You are making these statements as if you were able to follow the particles through the air like little balls. This is not possible without interacting with them and measuring something.

I encourage you to think in terms of probabilities, probability densities, densities spreading in the vacuum, correlations, anticorrelations.

If you do not give an ontological meaning to the wave function, then all "problems" with QM disappear and you'll see that locality and QM are perfectly compatible.

isometric,

"This is a statement about philosophy not a statement about physics."

This is irrelevant. Please indicate which of the propositions (1-7) is false, or show which implication does not follow. If you cannot do that, it means the argument is sound and you have to accept its conclusion regardless of your philosophical preferences.

Andrei,

This is absolutely not "irrelevant". This is the difference between philosophy and physics.

Following your propositions leads to wrong conclusions because you are using wrong propositions and a wrong classical toy model since the very beginning.

It's similar to:

1. It is possible to walk on the earth along a straight line (yes)

2. We never encounter the end of the earth (yes)

3. The earth is infinite and flat (yes)

If you look at QM with the eyes of classical mechanics, you are right! But unless you propose a better theory (making better predictions), you are wrong.

isometric,

I use no "toy model" at all. I only describe what we observe and what QM predicts. Why aren't you more specific about which of the 1-7 propositions contains the reference to that toy model? I may try to change the wording so that it suits you.

"It's similar to:

1. It is possible to walk on the earth along a straight line (yes)

2. We never encounter the end of the earth (yes)

3. The earth is infinite and flat (yes)"

Premise 1 is false: there are no straight lines on Earth, because it's a sphere. See? It's not that difficult to spot a false assumption. Why don't you proceed in the same way with my argument?

"If you look at QM with the eyes of classical mechanics, you are right!"

What proposition assumes classical mechanics? I don't know what you are talking about.

In your paper we read, page 17: "In this approach there is strictly speaking only one ontic state in the theory, which is the state that the universe is in." Gerard 't Hooft writes: "Our basic assumption is that a classical evolution law exists, dictating exactly how space–time and all its contents evolve. The evolution is deterministic in the sense that nothing is left to chance. Probabilities enter only if, due to our ignorance, we seek our refuge in some non-ontological basis." (page 66, Cellular Automaton Interpretation of Quantum Mechanics, 2016). My question is this: How is 'superdeterminism' an improvement over Hugh Everett's original thesis? Everett writes: "the probabilistic assertions of the usual interpretation of quantum mechanics can be deduced from this theory, in a manner analogous to the methods of classical statistical mechanics..." (page 109, Theory of the Universal Wave Function, 1957). He wrote: "The hidden variable theories are, to me, more cumbersome and artificial..." (letter to Bryce DeWitt; the entire letter can be read on pbs.org/nova/manyworlds/).

Sabine,

Your paper summarizes a few different approaches to developing an actual theory of superdeterminism, including 't Hooft's Cellular Automata, Palmer's Invariant Set Theory, and your upcoming Future-bounded Path Integrals. I enjoyed reading about these and noted that all of them save one include a decent discussion of the flaws or downsides of the approach.

Notably, the summary of Palmer's Invariant Set Theory does not include any discussion of its weak points. Just reading this, I'm left with the impression that the theory is pretty well developed and has no obvious downsides.

Given your emphasis on battling cognitive bias I am surprised that you don't insist on more language detailing the problems with this approach. I understand that this section likely came from Palmer, but I would hope you can insist on updating it to include the known inadequacies or downsides of the approach.

The downside of Palmer's theory is that he postulates that a dynamical law exists but doesn't actually have one. So it is unclear how one reproduces quantum mechanics.

Thanks! It could be that most educated readers will see/understand this without it being explicitly stated, but it does help me at least to have it explicitly stated. It sounds more like an approach than an actual theory at this point.

How about your upcoming paper on future path integral superdeterminism? Do you expect to have an example of a 'suitable function' or something that may exhibit the property of optimizing for only those initial states that do not evolve into "final states containing superpositions of detector eigenstates"?

Sabine,

Here are my reflections on your paper:

Hidden variable theories exist because we do not yet have a model of reality that explains what we see, and therefore have postulated the existence of hidden or extra variables. From within a hidden variable theory:

Determinism reflects that these variables help seed or are the initial conditions of our universe, and the universe invariably evolves from them. These hidden variables could be prior to the big bang, or they could be constantly injected, and could potentially vary themselves on an ongoing basis.

Superdeterminism refers to an apparent inability to measure these variables in an observer-dependent way. There is no subset of matter (an observer) in our universe that reveals the variables. It is unclear whether, if one takes the entire universe to be an observer, that said observer has access to the hidden variables. Such an observer could theoretically see the variables from all vantages and infer their values, yet at the same time, this seems to be because you have popped out a level and now exist in the same plane as the hidden variables themselves.

The most natural resolution to these issues is to posit that because we can only see so much of the past that we cannot predict the future, and it is simply our inability to measure everything within our universe that leads to our inability to induce its laws. Since we will never have access to the dataset of the past, we are scouring hidden variable theories in the hope that somehow a structure in advanced mathematics such as the amplituhedron contains an embedding of our observable universe.

Even in the case that we found such an embedding or mathematical model, our inability to measure the past would leave us confused as to whether we had discovered a general model of our universe, or a model that overfit our observations of our universe. That is, unless that model deterministically and correctly predicted the future evolution of our universe based on a very small number of observations, such as what we can currently observe (i.e. video cameras on Earth plus JWST, the LHC, etc.).

Even then, we would be left wondering whether we, by deploying the model, had narrowed the scope of paths that our universe might take, essentially boxing ourselves in.

In summary, this field is about as dangerous as it reads.

*observer-independent way

@Sabine, could you please correct/comment on my mistakes, and anything else that comes to mind? I am a psych major ;-)

Sabine,

Interesting article, though I'm not a physicist so I read only the intro and the section on experimental testing. In the latter you say

"...if we manage to create a time-sequence of initial states similar to each other, then the measurement outcomes should also be similar."

Isn't this normally the case in physics experiments? I.e. many trials are repeated in a short window of space and time with exactly the same devices. So might it be possible to test your theory with a meta-analysis of existing experimental results, looking for sample deviations slightly less than predicted?

Dear Dr. Hossenfelder,

I posted a criticism of your paper on my blog.

Mateus,

Yes, well, thank you. You simply reiterated your previous mistakes. We have already explained in the paper why you are wrong.

Reading your paper, I'm still confused as to how your definition of CoA manages to avoid counterfactuals. You give the following example of a counterfactual-free account of causation:

"the clapping led to the excitation of acoustic waves in the air, which reflected off the wall and propagated back to vibrate Newton’s ear drums, sending electrical signals to his brain."

But this account uses the phrase "led to", which is synonymous with "caused" and thus begs the question. Can you rephrase this example without using any synonym for "cause"?

4gravitons,

I am afraid I don't understand the question. If you wish, you can replace the words "led to" by an equation. Say, one that describes how a shock wave is formed in gases when two semi-elastic objects collide with each other. Now you have some pressure perturbation in the air which propagates and hits the walls and so on. The point is that these are all processes which actually take place in space-time and not virtual variations.

Also, let me clarify that it wasn't our intent to say one should avoid the word "cause". We were merely pointing out that there are two ways to think about causality. Also, allow me to add that I think it is meaningless to distinguish between cause and effect unless one has introduced an arrow of time, so really the relevant statement is about the correlation between two events, not that you call one "cause" and the other "effect".

Thanks for your reply. If indeed I can interpret your use of causation as meaning correlation (perhaps also demanding a "physical" or "observable" story of why the correlation occurs? Your first paragraph makes it sound like you're demanding that as well) then I think I understand better what you're saying.

4gravitons,

Well, no, it's not just correlation. It's correlation of the information at (1) with every 3-surface between (1) and (2), see the figure in the paper. You can take information to mean something that is locally measurable.

My naive understanding about superdeterminism is that it is making a stronger claim than 'everything affects everything'. It's about long-distance correlations that would not appear to share any form of direct cause.

Example. I am on Earth, my friend is on a planet orbiting Alpha Centauri. We run the Bell experiment. We each independently (we think) select a different basis for measurement. But we can't (according to superdeterminism). If I pick A, my friend is more likely than not to pick A, and vice versa for B. This will hold no matter when we conduct the experiment, and no matter the manner of our selection (choosing for ourselves, rolling dice, using radioactive decay).

That's asking a lot of the initial conditions. It's perhaps plausible to imagine what happens during the Big Bang causing the universe to unfold in such a way that there are some correlations between my state and the state of my friend. But to the extent that _every_ way of determining the measurement basis is correlated in precisely the _same_ way? And in no other way (the color of hat we choose to wear, the names we give our children, the number of times we sneeze)?
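For readers who want the numbers behind "that's asking a lot": a short sketch (an illustration using the standard textbook values, not taken from the comment) of how far the quantum prediction exceeds the bound that any local model obeying Statistical Independence must satisfy:

```python
import math

# Singlet-state correlation for analyzer angles a and b (radians):
# the standard quantum prediction is E(a, b) = -cos(a - b).
def E(a: float, b: float) -> float:
    return -math.cos(a - b)

# CHSH combination at the standard optimal angle settings.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))

print(S)  # 2*sqrt(2): above the local bound of 2
```

A local hidden-variable model with Statistical Independence is limited to S ≤ 2; reproducing S = 2√2 is exactly what forces either non-locality or the correlated settings the comment finds implausible.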

Scott - it is indeed a high price to pay, to "save locality" and deny conscious agency. Or so it seems to someone like myself who can only understand words and not math.

ONE WAY TO COORDINATE "MINUTE VIOLATIONS"

Dr Hossenfelder, in your intriguing paper Rethinking Superdeterminism, Section 5, paragraph 1, you say: "It has since been repeatedly demonstrated that it requires only minute violations of Statistical Independence to reproduce the predictions of quantum mechanics locally and deterministically."

True, but one might also call that the Optimistic Lottery Player argument: "If I make the right minute changes in how I pick numbers, I will win the lottery every time." I think it's probably fair to say that the magnitude of the violations is less of an issue than the problem of how to coordinate them so as to ensure an otherwise unlikely outcome.

One reason I mentioned the Radon algorithm back in your July 2019 blog on this same topic is that if you first assume all-encompassing, crystallized space time — a block universe — then the approach that requires the least up-front intelligence is to apply an iterative annealing algorithm that tweaks the block structure until it meets some set of goal constraints, in this case Bell violations. This requires that time be treated like a fully bidirectional spatial dimension, with temporal causality transforming into a fine-grained network that meets the physics causality constraints of the annealing algorithm. (There are fancier ways to say that, but I think it conveys the gist.) Early iterations of the algorithm will of course do poorly, but after enough tweaks a compliant structure should emerge. To provide a smoother and less overtly iterative journey, the Radon algorithm could be translated into quantum computing form.

The net result of this block-Radon approach to superdeterminism is a rather remarkable multi-level, billiard-ball-game-like version of locality and causality for everything, including even quantum measurements. In the case of quantum measurements, if a block universe can exist at all, quantum outcomes must depend on exactly the kind of hidden variables that John Bell so hoped to uncover, since without such variables to tweak, the annealing algorithm will have no way to coordinate quantum outcomes with larger-scale events. To a local observer living within the resulting block universe, any and all local events will look like conspiracies, ones with indefinitely fine levels of causal detail, some of which will be so inaccessible as to create the illusion of quantum randomness.

The extreme locality of this conspiracy is of course a bit of a fraud, since all the hard work was done up front, before time as we know it even began to operate. But at least its origins are more of the iterative type that also creates crystals and snowflakes. That I think is not so bad.

(And a reminder: While I love to explore ideas and what-ifs to see if they can produce workable outcomes, this does not necessarily mean I think they are the best approaches. It's surprisingly productive to explore an idea with which you do not agree by first identifying exactly why you do not agree with it, and then trying hard to overcome your own objections.)

Sabine, in your paper you wrote: "This example also illustrates that just because a theory is deterministic, its time evolution is not necessarily predictable."

For purposes of this statement, what is the definition of "predictable"? To be clear, I'm not trying to argue; it's something I've thought about but didn't want to state because I have no background. References about the concept of predictability would be appreciated.

Jeff,

It means you can calculate what happens before it actually happens.

Superdeterminism is no longer magical if looked at from the perspective of time/CPT-symmetric models, like the least-action principle of Lagrangian mechanics.

Assuming the history of the Universe was chosen by such action optimization, fixing e.g. the Big Bang in the past and a Big Crunch in the future, then switching to the mathematically equivalent forward evolution via the Euler-Lagrange equations, its hidden state was originally chosen in accordance with all future measurements as well.
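For reference, the equivalence invoked here is the standard one from Lagrangian mechanics: extremizing the action with both endpoints held fixed picks out the same paths as forward evolution under the Euler-Lagrange equations,

```latex
S[q] = \int_{t_0}^{t_1} L(q, \dot q, t)\, dt, \qquad
\delta S = 0 \;\Longrightarrow\;
\frac{d}{dt}\,\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0 ,
```

so the boundary-value formulation (fix q(t_0) and q(t_1)) and the initial-value formulation (fix q(t_0) and its derivative) describe the same solutions of the same equation, which is the point of the comment.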

A very nice toy model for time symmetry is Maximal Entropy Random Walk: just the uniform ensemble among paths. Its simple combinatorics gives the Born rule (one psi from past paths, the second from future ones), which allows for Bell-violation constructions.

"Very nice toymodel for time symmetry is Maximal Entropy Random Walk"

This sounds very interesting. Can you give me a reference where I can find more details about this toy model? Thanks!

I recommend starting with https://en.wikipedia.org/wiki/Maximal_entropy_random_walk

See its derivation section: simple combinatorics leads to the rho ~ psi^2 Born rule as a direct consequence of time symmetry. An ensemble of paths toward the past or the future would have rho ~ psi; for an ensemble of complete paths we need to multiply both. It is analogous e.g. to Aharonov's two-state vector formalism.

With the Born rule we can make Bell-violation constructions; see page 9 of arXiv:0910.2724v3.
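As a concrete illustration of the rho ~ psi^2 claim (a sketch following the derivation on the Wikipedia page, not Jarek's own code): for a graph, the MERW stationary density is the squared dominant eigenvector of the adjacency matrix.

```python
import numpy as np

# Adjacency matrix of a 5-node path graph, a tiny 1D "lattice".
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# MERW stationary density: rho_i = psi_i^2, where psi is the dominant
# (Perron) eigenvector of A -- the Born-rule-like structure noted above.
eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues in ascending order
psi = np.abs(eigvecs[:, -1])          # dominant eigenvector, all positive
rho = psi**2 / np.sum(psi**2)

print(rho)  # peaked at the center node, unlike an ordinary random walk
```

For this path graph the density comes out proportional to sin^2 and localized at the center, the standard MERW localization result.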

Jarek, I very much like your invocation of least action Lagrangian mechanics.

To folks reading this: Do you realize who Jarek is, and how good he is in this area? The impact of his freely distributed, incredibly non-obvious, and deeply insightful compression algorithms on the Internet and IT in general has been world-changing.

For this reason I will now speak with some caution. Earlier in this set of comments I noted that there should exist a quantum version of Radon reconciliation, one that would work across all of time to create a superdeterministic, resolutely local block universe. My casual mental image of what this quantum version of Radon would look like is a least-action quantum Lagrangian algorithm based structurally on Feynman QED. For non-physicists interested in why I would say that, check out Feynman's very readable, non-math book QED. In that work he shows how the astonishingly intelligent outcomes of least action derive from phase-interfering possible-future-histories world lines that collectively "figure out" optimal paths. This is a very real form of quantum computing that takes place every time you use the lenses of your eyes to see the world around you, such as while you are reading this. If that same time-spanning, phase-interference power of superposition of possible world lines can be adapted to issues such as ensuring Bell-violating correlations, the result would be a profoundly QED-compatible approach to creating a superdeterministic block universe.

(And again, folks: If in response to what I just said Jarek says "Terry, you are full of it!"… well, listen to Jarek, not me. You will not easily find a better algorithmist on this planet.)

(But also, amusingly: Jarek and I profoundly disagree on a number of physics issues. On those issues I cede no such ground, but I still respect his views… :)

Superdeterminism is monism.

Mr. Palmer, this seems to be astonishing:

https://m.youtube.com/watch?v=GqeTTAFjDYY

But I can't hear a single word of it as I am on a low-quality tab. Do you have the slides available for public devouring, uh, delight? (I am the guy who puts gravity above all else, actually!)

Thank you.

John Bell writes: "I am convinced that the word 'measurement' has now been so abused that the field would be significantly advanced by banning its use altogether, in favor for example of the word 'experiment.' " (On The Impossible Pilot-Wave, Foundations of Physics, 1982).

Any MWI fans here? One thing I find striking about the criticisms of superdeterministic theories is that they seem equally applicable (if not more so) to MWI. Consider the tobacco company telling the judge... no, the reason these people have cancer has nothing to do with our cigarettes... they just happen to find themselves in that branch of the multiverse where this happens... moreover, we actually had no real choice but to make them... we just happen to find ourselves in that branch of the multiverse... it was inevitable.

Isn't MWI just as "unscientific" as superdeterminism here?

While I do not label myself an 'MWI-fan,' I am a supporter of Hugh Everett's original 1957 Ph.D. thesis. Careful reading of his thesis says more about his 'theory' (meta-theory as he terms it) than I ever could. Hugh Everett writes: "we shall exploit the correlation between subsystems of a composite system which is described by a state function," and "While our theory justifies the personal use of the probabilistic interpretation as an aid to making practical predictions, it forms a broader frame in which to understand the consistency of that interpretation." (pages 9 and 118, respectively). I do not pretend to understand all that his paper encompasses, but to use the word "unscientific" fails to do that paper justice. In a letter to Bryce DeWitt he explains: "I must confess that I do not see this 'branching process' as the 'vast contradiction' that you do. The theory is in full accord with our experience (at least insofar as ordinary quantum mechanics is). It is in full accord just because it is possible to show that no observer would ever be aware of any 'branching,' which is alien to our experience, as you point out." (pbs.org/nova/manyworlds).

Gary,

The rumor for a very long time is that Everett wanted to be much more explicit about the many-worlds aspect, but that Wheeler forced him to tone it down.

From a physicist's viewpoint, the real problem is not that MWI is counter-intuitive (the Copernican model is counter-intuitive!) but that there are severe technical problems with MWI.

The best known are the so-called "preferred-basis" issue and the "probability-measure" issue.

I myself am bothered by the fact that "branching" does not really occur: all the possible states in Hilbert space are always there -- Hilbert space does not "branch." (The MWI enthusiast David Deutsch embraces that point and simply denies that time exists at all, even in the physicists' common "block-universe" picture: all the states, past, present, and future co-exist without any temporal dimension. I will leave it to you to see problems with Deutsch's perspective.)

Another problem is to explain what happens to "measurement." In textbook QM, measurement is a big deal: "observables" are represented by Hermitian operators, the measured value of an observable is always one of the eigenvalues of the corresponding Hermitian operator, a quantum state with a definite value is represented by a corresponding eigenvector, etc.

MWI enthusiasts assure us that all this is no longer much of an issue in MWI, but I, at least, have never seen the details of how to get from MWI to the textbook explication of "measurement."

Everett's original approach, and the metaphor of a "branching universe", can sometimes help us to feel more comfortable with the calculations we do. But if that metaphor is taken completely seriously, there are lots of problems that have not been solved in over sixty years.

Dave

Forgive me for interrupting the discussion with my ignorance, but: the problem of measurement? Well, let's use an analogy (they are never good, but): suppose that a ship's mast points to a star in the sky, but due to the waves, its projection really traces an 8 in the sky around that star. Suppose that nothing can be measured unless some energy is applied to the system. Then I decide to measure the orientation of the mast, and the energy that I apply is able to lay the sea flat; now I measure and see that it points exactly at that star, and the supposed 8 that I should have obtained collapses; I have really caused the entire system to collapse. If, to measure the quantum particle, I have to impose a field that organizes the space-particle system based on that new energy, then I have collapsed everything, including MWI, which is unnecessary.

Gary,

The "preferred-basis problem" isn't a problem only with MWI; it's a problem with any theory of quantum mechanics which says that "everything is quantum". And for all these theories, it's addressed by Zurek's theory of pointer bases and quantum Darwinism (which is still a work in progress).

And if you don't subscribe to the belief that "everything is quantum", you need to say where the classical world stops and the quantum world starts, which to me seems an even thornier problem.

Thanks for directing me to Zurek. I am enjoying his paper "Quantum Theory of the Classical-- Quantum Jumps, Born's Rule, and Objective Classical Reality via Quantum Darwinism" (arXiv:1807.02092). The paper begins: "The Relative-State interpretation set out 50 years ago by Hugh Everett is a convenient starting point for our discussion. Within its context, one can re-evaluate basic axioms of quantum theory (as extracted, for example, from Dirac)." Continuing with Zurek, his 2016 paper "Quantum probabilities from quantum entanglement-- Experimentally unpacking the Born rule," which says: "The Born rule establishes a connection between the wavefunction used to represent the state of a quantum system (a purely mathematical object), and the probabilistic outcomes of measurements made on that system as experienced by observers (the 'physical reality')."

Peter Shor wrote to Gary:

>The "preferred-basis problem" isn't a problem only with MWI; it's a problem with any theory of quantum mechanics which says that "everything is quantum".

Well, Peter, it is not a "problem" for Bohmian mechanics or Nelson's "stochastic mechanics" or other similar approaches to quantum mechanics, simply because they start out with position as the variables, as their first assumption.

Now, of course, you could object that MWI could also just start out making that assumption, and so it could. But most MWI proponents seem to want to avoid that and to claim that they do not need that assumption. And, therefore, they set up for themselves the problem of how to derive the preferred basis if they do not just assume it from the get-go.

As you say, Zurek's work is a "work in progress", and I predict that it always will be, just like the other problems with MWI! In my experience, this sort of project in MWI always ends up with a conclusion along the lines of "Well, it would certainly be nice if MWI did work this way..." without ever showing that it actually does. Dave Deutsch (who is certainly a very bright guy) has been doing this for a long time.

Gary, thanks for the cite to the arXiv: it does help to facilitate communication to know what the other guy is looking at.

All the best,

Dave

Peter and Gary,

There does seem to be a rather obvious counter-example to Zurek's "proof" of the Born rule in Section IV of the paper Gary cites.

Suppose, for the sake of argument, that all possible states of the system are equiprobable, always. As far as I can see, that scenario is obviously consistent with all of Zurek's premises. In which case, as a matter of logic, his premises cannot be used to derive the Born rule. QED.

Perhaps I have misunderstood the premises, but, to show I am wrong, someone has to show that universal equiprobability violates one of his premises.

There are numerous other problems: for example, decoherence never really occurs: decoherence is just a matter of noticing that in many situations it becomes practically impossible for humans to measure phases. And his definition (or is it an axiom?) of measurement will rule out exact position measurements in the non-relativistic QM of a free particle: measure the position exactly at any point in time and, due to the uncertainty principle (or use the Feynman propagator if you prefer), at any later time, no matter how small a non-zero Δt you use, the position is completely uncertain. So, in the simplest possible case his postulate iii is false.

Perhaps those problems can be fixed in a more detailed exposition.

But either I am missing something with my counter-example to Zurek's proof of the Born rule, or he is wrong.

Dave

As might be predicted from the post and paper, some readers have complained "it is indeed a high price to pay, to "save locality" and deny conscious agency."

As a data point for them to consider:

A computer program operates in a totally deterministic environment, does it not? There are a finite number of basic instructions the computer's CPU can respond to, and the output of each is fixed for fixed input. Despite this, the desirable effects of unpredictable randomness can be simulated with pseudo-random functions or by counting the number of cycles between external inputs.

Within this deterministic environment, AlphaGo Zero taught itself to play the game of Go at a level beyond that of human grandmasters. Now consider AGZ as it is about to make its next move in a game. It must place a black or white stone somewhere on the board. Who makes that decision, if not AGZ? Who is responsible for it, and wins or loses the game as a consequence?

It seems to me that AGZ has as much agency in its narrow field as any human who has to make a decision. Furthermore, if the same situation occurs a year later, would AGZ make the same decision? Not necessarily, because in the intervening time it may have learned a better move, or in fact the earlier move might have been a pseudo-random choice among several alternatives. AGZ can learn by trial and error, which I believe is also a major factor in how humans learn to make decisions.

So as I see it, determinism does not imply any loss in agency, responsibility, or the ability to learn from mistakes and not repeat them. It merely gives a firm basis for how these things can be achieved. (Whereas, a computer without fixed rules could only accomplish any calculation randomly and with extreme unlikelihood.)
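The point can be made concrete with a toy "decision maker" (entirely an illustration; the move names and seeding scheme are invented, and this is not AlphaGo Zero's actual algorithm): a fully deterministic function whose choices are reproducible, yet change when its inputs, its "experience", change.

```python
import random

# Hypothetical toy decision maker: deterministic, yet its choices look
# unpredictable from the outside.
def choose_move(board_state: str, experience: int) -> str:
    """Same board state + same accumulated experience -> same move."""
    rng = random.Random(f"{board_state}|{experience}")  # deterministic seed
    candidate_moves = ["D4", "Q16", "C3", "R17"]
    return rng.choice(candidate_moves)

# Same state and experience: always the same "choice".
assert choose_move("empty board", 1) == choose_move("empty board", 1)

# Different experience (the program learned in the meantime): the decision
# may differ, which is all that learning from mistakes requires.
print(choose_move("empty board", 1), choose_move("empty board", 2))
```

String seeds are hashed deterministically by `random.Random`, so the "decision" is fixed by the inputs alone, exactly the kind of agency-without-indeterminism the comment describes.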

As for conscious agency, how do you know your decisions are made consciously and not by unmonitored neurons firing away in the background? Personally, I suspect that the magic associated with consciousness is due to having solutions suddenly become conscious after being crafted by those unmonitored neurons (whose activity an MRI could have detected).

Anyway, not only do I not feel any loss in considering myself a deterministic entity, I don't have any idea of how I could possibly exist any other way.

I think the same as you. There are those who want to establish our free will directly from QM. The following experiment (a joke) explains it well: we have a magnet, a bale of grass, and a vessel of water; then we launch an electron and it will always seek the magnet: 100 throws, 100 times it seeks the magnet. Now, that same electron is a constituent part of a camel, and we release the camel. What happens? Only twice, out of curiosity, does the camel go to the magnet; the rest of the time it is divided between the grass and the water. That is called free will.

It's interesting to consider this from the perspective of a quantum computer, such as the Google one (Sycamore?) which many feel has demonstrated "quantum supremacy" (see Scott Aaronson's blog for good discussion on this). Just how deterministic are its outputs?

"Superdeterminism requires encoding the outcome of every quantum measurement in the initial data of the universe, which is clearly outrageous. Not only that, it deprives humans of free will, which is entirely unacceptable."

A purely stochastic process, while not involving a single possible "fate", deprives humans of free will just as effectively as a superdeterministic universe does.

As an analogy: in superdeterminism, the oracle at Delphi can tell you that your son will have no choice but to fight in a war when he turns nineteen, and it will happen regardless of the child's choices. But if, by a perfectly fair random lot, your son's draft number comes up when he turns nineteen while a draft is in place, he still has no free will concerning whether he serves.

Lack of free will is a weaker threshold than complete determinism and fate.

I'm a MWI fan, but I note that the MWI is often discussed in the context of the possible existence of totally different branches. I think that it's far more interesting and useful to focus the attention on the astronomically large number of "micro branches" that are indistinguishable to observers.

If we consider a classical computer that implements an AI, then even at the level of the classical bits there would not be a one-to-one correspondence between states of consciousness and the classical information present in the computer. The subjective state of the AI would not be located in any branch with a definite bit string; rather, it would be a superposition of a large number of different bit strings that describe slightly different inputs and outputs of the computer.

The quantum state is an entangled superposition with the environment due to fast decoherence, which makes the system behave in a classical way. Nevertheless, the consciousness of the system will remain in a superposition of these states that falls within the resolution of perception of the conscious awareness. In fact, conscious awareness is nothing more than this entangled superposition, the range of the states defines the resolution of the awareness while the different inputs and outputs nail down the algorithm that is running.

A problem with the classical single-world view is that an AI or brain will at any given moment be in some precise bit string. This makes the program that is running not well defined at any given instant. So an AI simulated in a virtual environment that evolves deterministically through a set of states should be conscious at any time. But all that happens is that it moves through a sequence of states. There is no physical difference between such a system and a trivial system, like a clock, that also moves through some set of states.

Yet the AI is supposed to be conscious of the processed information at any given time. One may object that it does process information, but the transition of the computer from one state to the next, from one moment to the next, is all that can be involved in the consciousness during that moment. As far as that particular transition is concerned, this should remain invariant under replacement of the AI by a trivial system that always jumps to the next state no matter what the first state was.

One can get to many other similar paradoxes by exploiting the microscopic description of the dynamics of the AI. But they all have in common that it's paradoxical to be able to describe both the precise bit state of the AI and also the algorithm that is being run to process the bits at a single moment. You need a range of states, and to specify how different initial states evolve to different final states.

In the MWI you get around this problem as you're never located on a precise micro branch.

SUPERDETERMINISM AND QED

When, on July 28, 2019, Dr. Hossenfelder promoted superdeterminism as a way to solve the measurement problem in quantum mechanics, she noted that: "Collapse models solve the measurement problem, but they are hard to combine with quantum field theory which for me is a deal-breaker."

Quantum field theory, as exemplified by the Feynman version of QED, has a delightfully intuitive graphical interpretation that is known (of course) as Feynman diagrams.

The building blocks and rules of Feynman diagrams are few in number and simple to apply. Electrically charged, infinitely small particles move in lines through space and time, and photons form bridges between these points. Yet by using these rules to build an increasingly detailed repertoire of such figures, it is possible to create highly specific computational algorithms capable of exquisitely accurate prediction of how real particles behave.

This accuracy carries enormous weight, since any theory that seeks to supplant QED must demonstrate at least the same level of predictive accuracy.
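To make that accuracy concrete: even the single lowest-order Feynman diagram for the electron's magnetic moment, Schwinger's one-loop vertex correction, predicts the measured anomaly to within a fraction of a percent. The numerical values below are the standard published ones, quoted only to the precision shown; this is a sanity check, not a precision computation.

```python
import math

# One-loop QED prediction for the electron's anomalous magnetic moment:
# Schwinger's single vertex-correction diagram gives a_e = alpha / (2*pi).
alpha = 1 / 137.035999          # fine-structure constant (standard value)
a_e_one_loop = alpha / (2 * math.pi)

a_e_measured = 0.00115965218    # experimental value, quoted to the digits shown

relative_error = abs(a_e_one_loop - a_e_measured) / a_e_measured
# A single diagram already lands within ~0.2% of experiment; summing
# higher-order diagrams extends the agreement to many more digits.
```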

That is why Dr. Hossenfelder and many others disdain interpretations of quantum mechanics, such as wave collapse, that lack any obvious opportunities for integration with quantum field theory. The extraordinary predictive power of QED and its siblings must be preserved when they are finally integrated with a mathematically precise theory of measurement.

A DIFFERENT PATH

There is, however, one oddly simple approach to QED preservation that I've never seen explored, one that has roots in a subtle but impactful form of cognitive bias. The bias is this: It is not the figures and abstract equations of QED that provide the actual predictive power of QED. Instead, that predictive power resides in the iterative algorithms that implement QED — and those algorithms are fully capable of supporting radically different physical interpretations, including ones that abandon even such traditionally obvious concepts as point-like particles and infinitely smooth space. Algorithmic reinterpretations of QED, and of other theories such as general relativity, provide opportunities for new approaches both to measurement and to quantization, while at the same time trivially keeping the predictive sanctity of such theories fully intact.

Wavefunction collapse is a property of standard QM, like the Schrödinger equation: e.g., when an excited atom de-excites, it releases energy to the environment in the form of a photon. We need to break with unitary evolution at this moment because the description used did not include that environment ... but if we imagine expanding the description to the wavefunction of the Universe, there is no longer an external environment, and the evolution should be unitary (?) and time-symmetric.

In perturbative QFT such as QED or QCD, we include such a release of a photon in the description: there is no need to break with the model, as there is during wavefunction collapse. The CPT theorem says that such models have to satisfy this symmetry; we have just a pure Lagrangian formalism.

So there (only?) remains the question of the standard QM description of finite systems: it misses some information, e.g. regarding the environment, and hence can only provide random answers wherever this missing information matters, e.g. during collapse or measurement.

Assuming this missing information in fact has some objective but unknown value ("hidden variables") leads to the problem with Bell inequalities.

Superdeterminism tries to resolve it by saying that this hidden information was already chosen in accordance with all future measurements. That sounds magical unless we assume that the history of the Universe was already chosen in a time/CPT-symmetric way (as in QFT: unitary evolution, Lagrangian formalism, path ensembles):

- through the least action principle for Lagrangian formalism, or

- from an ensemble of Feynman or Boltzmann paths from -infinity to +infinity, or

- like in two-state vector formalism: present moment being a result of meeting of propagators from -infinity and +infinity.
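The least-action idea in the first bullet can be illustrated numerically: discretize the histories of a free particle with fixed endpoints and check that every randomly wiggled history has a larger action than the straight-line (classical) one. This is only a toy sketch of the variational principle itself, not of superdeterminism; the discretization and constants are arbitrary choices.

```python
import random

# Free particle (mass 1) with fixed endpoints x(0) = 0, x(1) = 1,
# time discretized into N equal steps of length dt.
N = 20
dt = 1.0 / N

def action(path):
    # Discrete action: sum of (1/2) * velocity^2 * dt over each step.
    return sum(0.5 * ((path[i + 1] - path[i]) / dt) ** 2 * dt for i in range(N))

straight = [i / N for i in range(N + 1)]  # the classical (least-action) path

rng = random.Random(0)
wiggled_actions = []
for _ in range(200):
    # Same endpoints, random wiggles applied only to interior points.
    path = [x + (0.05 * rng.uniform(-1, 1) if 0 < i < N else 0.0)
            for i, x in enumerate(straight)]
    wiggled_actions.append(action(path))

# For a quadratic action the straight line is the strict minimum, so every
# wiggled history comes out with a larger action than the classical one.
```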

Terry Bollinger wrote:

"The building blocks and rules of Feynman diagrams are few in number and simple to apply."

Yes, sort of simple. :-)

"Electrically charged, infinitely small particles move in lines through space and time, and photons form bridges between these points."

No. Of course you know that we should not think of electrons and photons as lines in space-time. But Feynman diagrams do capture "elements of reality". I believe that what is real are not photons and electrons, but only the interactions between them. QED is a theory of events in space-time and the correlations between them. The vertices in a particular patch of space-time can be connected in many different ways, and there is not one true diagram that represents reality. There is no fact of the matter whether "this" or "that" electron moved here or there, because electrons are identical. Only by adding up the contributions of all possible diagrams do we arrive at probabilities, i.e., how likely that particular pattern of events is.

"supporting radically different physical interpretations, including ones that abandon even such traditionally obvious concepts as point-like particles"

Yes. I wish I had a better grasp of the mathematics of point processes. There must surely be a close correspondence with QED, but I haven't been able to complete the mapping.

Terry Bollinger wrote:

> Instead, that predictive power resides in the iterative algorithms that implement QED — and those algorithms are fully capable of supporting radically different physical interpretations, including ones that abandon even such traditionally obvious concepts as point-like particles and infinitely smooth space.

How do you know? It would be nice to have an example.

In any case, the actual experimental predictions of QED certainly violate the Bell inequalities and therefore violate Bell locality.

So how can that happen?

You are still left with the need for superdeterminism or some other solution (or of course you can just bite the bullet and go with an explicitly non-local theory such as Bohmian mechanics, but such theories violate special relativity at the level of the hidden variables).

The key point here -- and this is Bell's real achievement -- is that the problem does not lie in any particular formulation of QM or QFT: the problem lies with the observational results, which violate the Bell inequality. In some sense, nature itself, and not just our theories, is truly non-local. The problem is how to deal with that.
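For readers who want to see the violation itself in numbers, here is a minimal sketch: the quantum prediction for the spin correlations of a singlet pair, E(a, b) = -cos(a - b), fed into the standard CHSH combination at the angle settings that maximize it, exceeds the bound of 2 that every local hidden-variable model must obey.

```python
import math

# Quantum correlation for a spin-singlet pair measured along directions a, b:
#   E(a, b) = -cos(a - b)
def E(a, b):
    return -math.cos(a - b)

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), evaluated at
# the angle settings that maximize the quantum value (Tsirelson's bound).
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))
# Any local hidden-variable model obeys |S| <= 2; the quantum prediction,
# confirmed by experiment, reaches 2 * sqrt(2), about 2.83.
```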

In order to solve the measurement problem, we need to know what exactly time and evolution are. I propose a new path, called "hyperdeterminism". There is also a many-worlds interpretation in the hyperdeterministic approach, but these possibly infinitely many worlds are static (stationary); they don't evolve; they are like still frames of reality. Then what is time? Time is the passage of a consciousness through a conformal line of worlds. An interval of a conformal line of worlds is like a cinematographic long take (a scene with no cuts), where each world differs from the next only by a small (infinitesimal) variation. So an observer is a consciousness traveling through an interval of a conformal line of worlds. Each world (frame) is like a 3D fractal, where an observer (a consciousness) can travel through any of the three spatial dimensions, but also through a fourth dimension, called scale. Free will would then mean that a consciousness can sometimes choose between different paths in the line of worlds.

I have never been interested in superdeterminism (SD) because it does not smell right. I may be accused of appealing to naturalness, but so be it. One problem is with quantum entropy: SD introduces more degrees of freedom, and if these are observable it means there is a disconnect between information physics and QM. Also, with all we know about QM, it runs counter to things. Of course one can't disprove a theory with a theory, so this is no proof, but SD just seems contrary to QM.

Sabine, thank you for writing and sharing that beautiful paper. The way it's written is really good for people trying to get to grips with the philosophy of quantum mechanics and what it means for its interpretations, gently defining the concepts and guiding the reader to a proper defense of your hypothesis.

Thank you for Lost In Math too. I'm eager to keep reading your ramblings about philosophy of science.

So, we cannot measure because we do not fully understand the laws of the universe, and what we call free will is the product of a random-generating machine called the brain.

Count me in.

I can't access the actual paper, wouldn't understand it anyway, being non-mathematical. Can the concept be meaningfully expressed in words? And if not - shouldn't I be suspicious?

This is a hilarious comment. Basically, "Hey, if even I can understand it... it must be shoddy work!"

@manyoso - it seems you misread my post.

"... shouldn't I be suspicious?"

The direct answer is: not necessarily. If one has studied a field for eight years, and practiced in it for many more, there might be no verbal description of its advanced concepts that would take less than years to read. That is, the specialized words and equations in a paper might each need many pages or books of description in more familiar terms. The people such papers are intended for have the necessary years of background to understand them.

For another, simpler reason, which I do not assume applies to you, try explaining to a dog how to play poker. Should it be suspicious of your intent if you do not succeed?

(I read the paper and got something from it, but did not understand all of it, with only an undergraduate training in physics and math. I did not see a reason to be suspicious of it, other than the fact that no one is infallible.)

PhysicistDave,

To be clear, I am the exact opposite of a localist. I accept the numerous well-done and repeatable experimental demonstrations of Bell inequalities as proven facts, so well proven in fact that you can order equipment over the Internet that is based on this physics. I would further assert that the most concise and computationally efficient models of Bell violations treat nonlocality as the norm, not as the exception.

Locality in this view is purely emergent, a subset of complete physics. We just think it's universal because, like fish in water, we live and swim only within that subset.

To be even more specific, locality exists only for systems that contain information, with information itself being a rare and specific emergent quantity that can only exist when a narrow set of constraints is present. That might seem like an overly stingy definition of locality, one that could lead to causal paradoxes. However, since causality is necessarily historical and thus information-based, linking locality to the existence of information turns out to be a powerful constraint.

Regarding superdeterminism: One superbly delicious irony of fully abandoning locality is that it makes superdeterminism easier, not harder. That is because with full entanglement the configuration of the entire entangled universe becomes available to each wave collapse. The final outcome of each such collapse thus can be a very large but ultimately deterministic function of the configuration of all of those entangled particles.

The other reason why wave collapse is important in full non-locality is that it becomes the specific mechanism (an interaction of quantum with classical) by which new information is created and entropy is advanced.

Terry Bollinger wrote to me:

>To be clear, I am the exact opposite of a localist.

That's fine, but, prima facie, non-locality disagrees with special relativity. So, the problem is: how are you going to resolve that? And, by "resolve" I mean a detailed mathematical model.

Terry also wrote:

> I would further assert that the most concise and computationally efficient models of Bell violations treat nonlocality as the norm, not as the exception.

Well... exactly which "computationally efficient models of Bell violations" are you thinking of?

Somehow, Nature is happy enough with how it does it, but we humans would like to know!

Terry also wrote:

>One superbly delicious irony of fully abandoning locality is that it makes superdeterminism easier, not harder.

No, you do not need to worry about superdeterminism at all if you are sufficiently cavalier about non-locality.

Bohmian mechanics is explicitly non-local, and it works perfectly well if you do not mind its non-local nature and its violation of special relativity at the level of the hidden variables (observationally, it obeys special relativity -- quite weird).

So... if none of that bothers you, just go with Bohmian mechanics, which I think few people would describe as "superdeterministic" (it's just old-fashioned deterministic!), and don't worry about the whole thing.

Terry also wrote:

>The other reason why wave collapse is important in full non-locality is that it becomes the specific mechanism (an interaction of quantum with classical) by which new information is created and entropy is advanced.

So, which part of the universe is truly "classical" vs. which part is truly "quantum", and how do you tell the difference? The problem is that, in principle, measuring devices, human sense organs, etc. are described by quantum mechanics (in some cases, you have to do that): there is no place to hide a classical world!

Sorry, but while your words sound nice, you are just evading all of the actual issues. Take each point I have made above and try to put your response in a precise mathematical form and... well, you can't.

No-signalling makes nonlocality of QM consistent with the causal structure of relativity.

Lawrence Crowell wrote:

>No-signalling makes nonlocality of QM consistent with the causal structure of relativity.

Yes and no.

Yes, the (anti)commutativity rules of QFT imply the no-signalling theorem, which guarantees that no observations will contradict special relativity.

On the other hand, if you have any realist interpretation of QM, or some trans-quantum theory that gives the same observational predictions as QM -- whether deterministic like Bohmian mechanics or statistical like Nelson's "stochastic mechanics" -- then Bell's theorem seems to show that special relativity will be maximally violated at the "hidden-variables" level, even though this cannot be detected observationally.

This is, of course, utterly bizarre: the underlying dynamics has a preferred rest frame in which interactions at a distance are simultaneous, and yet Nature conspires to make this completely undetectable to us.

Nearly all physicists -- from Lubos to Sabine to me -- find this quite unacceptable and therefore reject such theories. On the other hand, if I understand his position correctly, the philosopher Tim Maudlin, who is indeed a bright guy and who does understand the relevant physics, seems to think Bohmian mechanics is the obvious solution.

In any case, Bell showed that the relation between QM and special relativity is too complicated to just be dismissed by the no-signalling theorem, although the no-signalling theorem is indeed true.

Dave

I re-read the lovely little book of Bell's papers: "Speakable and Unspeakable in Quantum Mechanics." Bell appears to advocate De Broglie-Bohm pilot wave theory (page 160, On the Impossible Pilot Wave). A recent paper: "As with relativity before Einstein, there is then a preferred frame in the formulation of the theory...but it is experimentally indistinguishable. It seems an eccentric way to make a world." (page 180, Beables for Quantum Field Theory). A more recent paper claims: "In our view, the significance of the Bell theorem, both in its deterministic and stochastic forms, can only be fully understood by taking into account the fact that a fully Lorentz-covariant version of quantum theory, free of action-at-a-distance, can be articulated in the Everett interpretation." (Bell on Bell's Theorem, arXiv:1501.03521).


Terry Bollinger wrote to PhysicistDave:

“To be clear, I am the exact opposite of a localist. I accept the numerous well-done and repeatable experimental demonstrations of Bell inequalities as proven facts, so well proven in fact that you can order equipment over the Internet that is based on this physics. I would further assert that the most concise and computationally efficient models of Bell violations treat nonlocality as the norm, not as the exception.

Locality in this view is purely emergent, a subset of complete physics. We just think it's universal because, like fish in water, we live and swim only within that subset.”

----------------

Precisely! And very well stated, Terry.

It’s as if we are all unwitting participants in some vast video game whose underlying (“non-local”) informational script (software) spells out precisely how the phenomenal details of what appears on a two-dimensional screen will play out.

Except in the case of the universe, the “screen” is the three-dimensional context of the “local” conditions of what we call "spacetime."

The quirky thing is, if we delve even deeper into this idea of our entire (video game-like) reality being the emergent explication of that which is encoded in its informational underpinning, then it can be deduced that the existence of space itself is simply an aspect of the information.

In other words, what we see as empty space between the stars and planets is founded upon fields of information that are every bit as entangled with the information that forms the stars and planets themselves.

I’m talking about a situation where the information itself simply declares that in the context of "local" reality, there will be the “appearance” of a separate emptiness in one area (space) and the “appearance” of a separate somethingness in another area (stars), when, in truth, there is no actual separation of anything at the "non-local" (informational) level.

Indeed, what is written in the informational script of “non-local” reality is the very reason why we even have a “local” reality in the first place.

The bottom line is that the entire universe seems to exist in a seamless and interpenetrating state of “oneness” at the deepest level of reality.

I am afraid that (super)determinism is just the same cognitive bias as thinking that religion and politics are different issues.
