Naturalness has its roots in human experience. If you go for a walk and encounter a delicately balanced stack of stones, you conclude someone constructed it. This conclusion is based on your knowledge that stones distributed throughout landscapes by erosion, weathering, deposition, and other geological processes aren’t likely to end up in neat piles. You know this quite reliably because you have seen a lot of stones, meaning you have statistics from which you can extract a likelihood.

As the example hopefully illustrates, naturalness is a good criterion in certain circumstances, namely when you have statistics, or at least means to derive statistics. A solar system with ten planets in almost the same orbit is unlikely. A solar system with ten planets in almost the same plane isn’t. We know this both because we’ve observed a lot of solar systems, and also because we can derive their likely distribution using the laws of nature discovered so far, and initial conditions that we can extract from yet other observations. So that’s a case where you can use arguments from naturalness.

But this isn’t how arguments from naturalness are used in theory-development today. In high energy physics and some parts of cosmology, physicists use naturalness to select a theory for which they do not have – indeed cannot ever have – statistical distributions. The trouble is that they ask which values of parameters in a theory are natural. But since we can observe only one set of parameters – the one that describes our universe – we have no way of collecting data for the likelihood of getting a specific set of parameters.

Physicists use criteria from naturalness anyway. In such arguments, the probability distribution is unspecified, but often implicitly assumed to be almost uniform over an interval of size one. There is, however, no way to justify this distribution; it is hence an unscientific assumption. This problem was already made clear in a 1994 paper by Anderson and Castano.

The Standard Model of particle physics, or the mass of the Higgs boson more specifically, is unnatural in the way described above, and this is currently considered ugly. This is why theorists invented new theories to extend the Standard Model so that naturalness would be reestablished. The most popular way to do this is by making the Standard Model supersymmetric, thereby adding a bunch of new particles.

The Large Hadron Collider (LHC), like several previous experiments, has not found any evidence for supersymmetric particles. This means that, according to the currently used criterion of naturalness, the theories of particle physics are, in fact, unnatural. That’s also why we presently have no reason to think that a larger particle collider would produce so-far unknown particles.

In my book “Lost in Math: How Beauty Leads Physics Astray,” I use naturalness as an example of unfounded beliefs that scientists adhere to. I chose naturalness because it’s timely, with the LHC now ruling it out, but I could have used other examples.

A lot of physicists, for example, believe that experiments have ruled out hidden-variables explanations of quantum mechanics, which is just wrong (experiments have ruled out only certain types of local hidden-variables models). Or they believe that observations of the Bullet Cluster have ruled out modified gravity, which is similarly wrong (the Bullet Cluster is a statistical outlier that is hard to explain both with dark matter and with modified gravity). Yes, the devil’s in the details.

What is remarkable about these cases isn’t that scientists make mistakes – everyone does – but that they insist on repeating wrong claims, in many cases publicly, even after you have explained to them why they’re wrong. These and other examples like them leave me deeply frustrated because they demonstrate that even in science it’s seemingly impossible to correct mistakes once they have been adopted by sufficiently many practitioners. It’s this widespread usage that makes it “safe” for individuals to repeat statements they know are wrong, or at least do not know to be correct.

I think this highlights a serious problem with the current organization of academic research. That this can happen worries me considerably because I have no reason to think it’s confined to my own discipline.

Naturalness is an interesting case to keep an eye on. That’s because the LHC has now delivered data showing the idea was wrong – none of the predictions for supersymmetric particles, extra dimensions, tiny black holes, and so on came true. One possible way for particle physicists to deal with the situation is to amend the criteria of naturalness so that they are no longer in conflict with data. I sincerely hope this is not the way it’ll go. The more enlightened way would be to find out just what went wrong.

That you can’t speak about probabilities without a probability distribution isn’t a particularly deep insight, but I’ve had a hard time getting particle physicists to acknowledge this. I summed up my arguments in my January paper, but I’ve been writing and talking about this for 10+ years without much resonance.

I was therefore excited to see that James Wells has a new paper on the arXiv:

**Naturalness, Extra-Empirical Theory Assessments, and the Implications of Skepticism**

James D. Wells

arXiv:1806.07289 [physics.hist-ph]

So, now that a man has said it, I hope physicists will listen.

**Aside:** I continue to have technical troubles with the comments on this blog. Notification has not been working properly for several weeks, which is why I am approving comments with much delay and replying erratically. In the current arrangement, I can neither read the full comment before approving it, nor can I keep comments unread so as to remind myself to reply, as I did previously. Google says they’ll be fixing it, but I’m not sure what, if anything, they’re doing to make that happen.

Also, my institute wants me to move my publicly available files elsewhere because they are discontinuing the links that I have used so far. For this reason most images in the older blogposts have disappeared. I have to manually replace all these links which will take a while. I am very sorry for the resulting ugliness.

## 343 comments:

Tim,

You have now launched an attempt at distracting from the topic. To remind you, I say you are making wrong claims about superdeterminism.

Of course the hidden variables are not non-local for heaven's sake, the correlations are, can you please stop the nonsense.

You write

"You can falsify locality even though it is a property of a class of models rather than a model, because Bell demonstrates that *every possible* local theory must satisfy a set of constraints on its empirical predictions."

No, you can't. It's remarkable how careless you are both in reading and in writing. What Bell's theorem shows is that certain combinations of assumptions are in conflict with each other.

In any case, would you please stop the distractions and get back to the point. Do we agree that you cannot make a statement about whether or not a superdeterministic model is scientifically explanatory without evaluating its dynamical laws?

Best,

B.

Sabine

Every sentence you just wrote (save the one of mine you quoted) is either false or hopelessly confused. In order:

1) I have not "launched an attempt at distracting from the topic". Without a clear definition of terms like "superdeterminism" and "non-local hidden variables theory" there is no topic. It is you who are now trying to deflect from the fact that for months you have been writing very strong opinions in public without having the vaguest idea of what you are talking about.

2) "Of course the hidden variables are not non-local for heaven's sake, the correlations are, can you please stop the nonsense." What in the world is a "non-local correlation"? How do you tell if a correlation is "local" or "non-local"? For example, I do an EPR experiment at spacelike separation, checking for the outcomes when spins in the same directions are "measured". I find that the results are perfectly correlated. Is that a "non-local correlation" or a "local correlation" according to your usage? How did you tell?

3) "What Bell's theorem shows is that certain combinations of assumptions are in conflict with each other." No. Just no. What Bell's theorem shows is that certain observable, empirical, statistical correlations for experiments carried out at spacelike separation cannot be predicted by any Bell-local (or Einstein-local) theory, so long as the setting of the experimental apparatuses is statistically independent of the initial state of the particles that the experiment is carried out on. Period. That's the conclusion. This does not say that any "combinations of assumptions are in conflict with each other". But your statement demonstrates your lack of understanding beyond any possibility of mistake.

(Please, please do not respond to point 3 with the vacuous claim that every deductive argument "shows is that certain combinations of assumptions are in conflict with each other", namely the premises of the argument and the denial of the conclusion. Please don't waste more of our time. Bell showed that a whole huge class of theories, including every classical theory, is in conflict with *observations that have actually been carried out in laboratories*, not with some other "assumption".)

4) "Do we agree that you cannot make a statement about whether or not a superdeterministic model is scientifically explanatory without evaluating its dynamical laws? "

The only sense in which I have to "evaluate the dynamical laws" of a theory in this instance is to check if the laws are local in Bell's and Einstein's sense. That's it. If they are, then the theory cannot explain violations of Bell's inequality for experiments done at spacelike separation without resorting to "superdeterminism" in the sense that I have defined, namely hyperfine tuning, and such hyperfine tuning does not constitute a scientific explanation. Since exactly such violations of Bell's inequality have in fact been observed for experiments done at spacelike separation (and a lot of effort and skill has gone into designing and carrying out those experiments) it follows that the entire class of Bell-local theories—all of them, including the infinite number that have never even been thought up or written down by any human being—are dead. Kaput. Finished. They cannot be accurate accounts of the actual physics of the universe we live in.

That is what Bell, together with Clauser and Aspect and Zeilinger, proved. Demonstrated. Revealed about the nature of the physical world.

It is the most amazing and remarkable feat of scientific discovery in the history of humanity. Sorry that you missed understanding it.

Reading this thread is becoming like watching some low-budget horror movie; one wants to look away, but the image is too compelling...

"DON'T GO IN THE BASEMENT!"

"DON'T RESPOND TO THAT!"

... I need more popcorn ...

sean s.

Tim,

1) More distraction

2) More distraction

3)

"Please, please do not respond to point 3 with the vacuous claim that every deductive argument "shows is that certain combinations of assumptions are in conflict with each other", namely the premises of the argument and the denial of the conclusion. Please don't waste more of our time."

Ah, let me see. You notice that your statement is wrong, correct it, and then tell me that I am not allowed to tell you what's correct. Fun move. Also, more distraction.

4)

"The only sense in which I have to "evaluate the dynamical laws" of a theory in this instance is to check if the laws are local in Bell's and Einstein's sense. That's it. If they are, then the theory cannot explain violations of Bell's inequality for experiments done at spacelike separation without resorting to "superdeterminism" in the sense that I have defined, namely hyperfine tuning, and such hyperfine tuning does not constitute a scientific explanation."

Okay, then let me summarize your argument.

You define superdeterminism = hyperfine tuning, and hyperfine tuning = "an unnatural restriction on the phase space". You do not define "unnatural", so to begin with you don't have a definition. But more interestingly, your argument is then to simply declare "unnatural restriction on the phase space" = "unscientific" because, I dunno, everyone says so? That's what you call an "argument"?

I am leaving aside here that all these supposed definitions make absolutely no sense, but even so, that's underwhelming. I sincerely hope you'll finally manage to write down a clean argument. Best,

B.

Sabine

John Bell proved something profound. It is a pity that you don't have a clue what it is. It is even more of a pity that you have no interest in finding out because your ego is so involved. Every single demonstration that you don't know what you are talking about—and these are indeed demonstrations—is labeled a "distraction".

And by the way, in asking you not to give a vacuous response to your characterization of Bell's argument as showing that "certain combinations of assumptions are in conflict with each other" I was not admitting any mistake in what I said. I was trying to preempt a vacuous claim that I knew (correctly, as it turns out) that you were about to make.

Suppose someone said Pythagoras proved that "certain combinations of assumptions are in conflict with each other". And when you ask what she has in mind, she says "Well, he proved that the assumption that something is a right triangle is in conflict with assumption that the square on its hypotenuse is not the sum of the squares on its two sides." In some completely unhelpful lawyerly sense, the statement she has just given is not false, but it is also not what anyone who understood the Pythagorean theorem would say. It is blowing smoke at such a prodigious rate that the fire departments from not just her own town but from the half-dozen surrounding towns are all on their way.

But the best defense is a good offense, so instead of providing even a single syllable explaining what a "non-local correlation" (your own trademarked phrase) is, or providing a single example of a "non-local correlation", or even answering the yes/no question of whether the EPR correlations constitute "non-local correlations", you just ignore this request for clarification entirely.

And instead of providing an explanation of what your claim that "Superdeterministic theories postulate the existence of non-local hidden variables to reestablish determinism. They reproduce quantum mechanics when averaged over the additional variables by assumption, that was the point of doing it. That's what my phrase referred to." means, which is written as if the *hidden variables* are non-local (which multiple people commenting here cannot make head or tail of), you just ignore it. In fact, if we ignore the obscure suggestion that it is the hidden variables that are non-local, what we have is a description of Bohmian mechanics. And no one, ever, most certainly not John Bell, ever referred to Bohmian mechanics as a "superdeterministic" theory! So unless you can explain why Bohmian mechanics does not meet your definition of being "superdeterministic", we have yet another demonstration that you are completely out of your depth.

But instead of addressing these manifest problems in what you are claiming, you repeat (for the nth time) that I have not defined a hyperfine tuned theory to your satisfaction. And to show what it is to answer a question even when it is not posed in good faith, yet one more time:

When one writes down the dynamics for a theory, that dynamics automatically implies a natural space of initial conditions for the theory, namely the space of initial conditions for which the solution of the dynamical equations is a well-posed mathematical problem. Thus, for example, as soon as you write down F = ma as the master form of the dynamical equation of a particle theory, since it is second-order in time, you have defined the space of initial conditions for your theory as the space of all specifications of both the positions and momenta of the particles, i.e. phase space. In contrast, if you write down a first-order theory, such as the guidance equation of Bohmian mechanics, you implicitly define the space of initial conditions (given a wavefunction) as the configuration space of the particles rather than the phase space. Clear?
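A toy numerical sketch may make this concrete (my own illustration, with made-up dynamics, not anything from the exchange above): a second-order law such as the harmonic oscillator x'' = -x is only well-posed given a phase-space point (position and velocity), while a first-order law such as x' = -x needs only a configuration-space point.

```python
def step_second_order(x, v, dt):
    """One Euler step of x'' = -x: the state is (position, velocity)."""
    return x + dt * v, v - dt * x

def step_first_order(x, dt):
    """One Euler step of x' = -x: the state is position alone."""
    return x - dt * x

# Second-order dynamics: well-posedness demands a full phase-space point.
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = step_second_order(x, v, 0.001)

# First-order dynamics: a configuration-space point suffices.
y = 1.0
for _ in range(1000):
    y = step_first_order(y, 0.001)

print(x, v, y)  # approximately cos(1), -sin(1), exp(-1)
```

The only point here is the one made above: the order of the equation fixes what counts as a complete initial condition.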

Cont'd

Now in usual circumstances, every single one of those solutions to the dynamical equations that arises from a given set of initial conditions is to be regarded as *a physical possibility* according to theory. It is something that, for all the theory says, could physically occur.

Let's take an example, just to make sure everyone is following. Suppose I am analyzing Bohmian mechanics, and I start with three particles in the GHZ state tensored together with three other wavefunctions that represent the experimental situations that that triple of particles will eventually encounter. Good. Now treating the settings of the experimental apparatuses as "sufficiently free for the purposes at hand", to use Bell's phrase (why am I sure you never read Bell's article, as I recommended?), means that we can fill in the initial states of the three experimental apparatuses however we like, so long as they are states that count as states of spin-measuring devices whose orientations will be set some way or other. For example, they could all be attached to pseudo-random number generators, or each of their orientations could be contingent on the decay of a single uranium atom while the GHZ particles are in flight, or indeed the three apparatuses could be firmly welded in place in whatever configuration I choose to consider.

Now we can ask—and it is just a mathematical question—whether every physically possible running of such an experiment, i.e. every model that satisfies the general description of a test of the GHZ correlations, will, according to the theory, yield the GHZ correlations. And the answer for Bohmian mechanics is "yes". Every one of the infinity of possible initial configurations will yield the right result, the result predicted by quantum mechanics. But of course Bohmian mechanics is a non-local theory.

Now: suppose you have a *local* theory, in the sense of Bell and Einstein. That local dynamics will also implicitly define a space of possible initial conditions for an experiment. And for the sort of GHZ test experiments described above we can ask: does this local theory reproduce, for every possible initial condition, behavior in accord with the GHZ correlations? And the answer—as a matter of pure mathematical fact—is "no". Not every possible initial condition will give the right behavior. Indeed, with respect to the natural measure on initial conditions, at least ¼ of the possible initial conditions *in every single run of the experiment* will give a bad result, a result that does not respect the GHZ correlation. And so given a set of just 1,000 such runs of the experiment, no more than 1 × 10^-125 of the possible initial conditions will satisfy the GHZ correlations on every single run as quantum mechanics predicts. But in the real world, the world we live in, the correlations match the quantum-mechanical predictions. So what is the defender of local physics to do?
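The quoted fraction is just arithmetic (my own back-of-envelope check, not part of the original comment): if at least 1/4 of the initial conditions misbehave on each run, at most 3/4 survive per run, so over 1,000 independent runs at most (3/4)^1000 of the natural measure survives.

```python
from math import log10

# Fraction of initial conditions (by the natural measure) that give the
# quantum-mechanically correct result on all 1,000 runs, assuming at
# least 1/4 misbehave on each run.
surviving = 0.75 ** 1000
print(surviving)           # roughly 1.2e-125
print(1000 * log10(0.75))  # roughly -124.9, i.e. the quoted 10^-125
```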

There are two things. One is just to declare that, well, the local theory allows that the observed result is *possible* (which it does), and what more do you want from physics? Would you like to defend that move?

The other is to say that all of the "bad" initial conditions are just not physically possible. All but the tiny, tiny, tiny sliver of initial conditions that give the right behavior—the behavior predicted by quantum mechanics—are to be eliminated as physically possible. Now it is not correct to say that the "good" behavior is merely physically possible: it is now physically necessary. But the principle by which the bad initial conditions were excised was completely unprincipled and ad hoc: they were thrown out merely because they would lead to behavior that you don't like. Do you want to defend that move?

Those are the options available to the superdeterminist. Is it now clear, even to you, why superdeterminism is scientifically unacceptable as an explanation of any observed regularity?

Tim,

I am not responding to your attempts to question my knowledge because it's unproductive. I don't have to prove to you (or anyone for that matter) that I know quantum mechanics. You are the one who came here and made big claims that he can't back up. I read Bell of course but it's been a while.

I am sorry about the sentence with the non-local hidden variables. It was badly phrased. Look, I have written papers about the topic so excuse me for thinking it's obvious that I know it's not the variables that are non-local.

Also, as I said above, that's a distraction because the locality is not where we disagree.

You write

"The other is to say that all of the "bad" initial conditions are just not physically possible. All but the tiny, tiny, tiny sliver of initial conditions that give the right behavior—the behavior predicted by quantum mechanics—are to be eliminated as physically possible. Now it is not correct to say that the "good" behavior is merely physically possible: it is now physically necessary. But the principle by which the bad initial conditions were excised was completely unprincipled and ad hoc: they were thrown out merely because they would lead to behavior that you don't like. Do you want to defend that move?"

I already said this a few times previously, but I'll try it one more time. Your phrase "tiny, tiny, sliver" is a vague way of saying that you think such initial conditions are quantifiably unlikely. You have nothing to back up that claim. It's just words.

You are assuming that all the initial conditions *for subsystems of the universe* are necessarily uniformly distributed in any superdeterministic model, regardless of the initial conditions of the universe which you are free to choose, and regardless of the dynamics of the model. That's a big claim. Prove it.

I don't have to "defend" anything. You are the one who is trying to make an argument. I told you several times already why your argument is wrong but you still haven't understood it. So please go and try to write down a proof for that claim you just made. Start with the definitions of a superdeterministic model and tell me how you arrive at the conclusion that generically (all but a "tiny, tiny, sliver") the initial conditions for GHZ experiments are "bad". Best,

B.

Sabine

"Start with the definitions of a superdeterministic model and tell me how you arrive at the conclusion that generically (all but a "tiny, tiny, sliver") the initial conditions for GHZ experiments are "bad"."

I gave you the calculation, but maybe it was too telegraphic. Here it is in more detail.

Superdeterministic theories are supposed to be local: that's the point. So to get around Bell's theorem they have to somehow deny the statistical-independence postulate, i.e. the postulate that in the long run the settings of the experimental apparatuses are statistically independent of the state of the triple of particles when they are created. Let's see what that would entail.

In a GHZ set-up we know this: first, in order for a local theory to always, on every run, get a permissible result (i.e. a result allowed by quantum mechanics) the theory must be deterministic. If the theory were both local and indeterministic, with respect to the outcomes of any of the three experiments, then there would have to be a chance on every run of getting an impermissible result. Because in a local indeterministic theory, the indeterministic outcome can happen either way without in any way affecting the other distant experiments, but one of those two ways will yield an impermissible global outcome. So we have to be dealing with a local deterministic theory.

The GHZ argument proves that no local deterministic theory can yield a quantum-mechanically permissible outcome for all four of the possible relevant global experimental conditions, viz. XXX, XYY, YXY, and YYX. So for any given initial state of the particles, at least one of the four physically possible global conditions must be somehow avoided. We know that we can produce each of the four possible relevant experimental conditions in a way that they occur completely randomly: the setting of each device is statistically independent of the setting of the others. To mention just some of thousands of possible methods, hook each apparatus up to a deterministic pseudo-random number generator calculating the parity of the digits of, say, pi or phi or the 18th root of 31 or whatever. Or base the setting on the flips of three fair coins. Or use a coin for one, a random number generator for another, and the present value of the Dow Jones index for the third. In any of these cases the setting will verifiably pass every possible statistical test for randomness. (Of course you will get eight global settings: just ignore the results for the four ones we don't care about.)
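The combinatorial core of the GHZ argument can be checked by brute force. Here is a minimal sketch of my own (using the sign convention for the state (|000> + |111>)/√2; the opposite phase convention flips all four target signs but leaves the counting unchanged): enumerate all 64 local deterministic assignments of ±1 outcomes and confirm that each satisfies at most three of the four constraints, so under randomly chosen relevant settings at least 1 run in 4 must go wrong.

```python
from itertools import product

# Each particle carries predetermined +/-1 answers for an X or Y measurement,
# so a local deterministic assignment is six signs (ax, ay, bx, by, cx, cy).
# For the GHZ state (|000> + |111>)/sqrt(2), quantum mechanics fixes the
# product of the three outcomes: +1 for setting XXX, -1 for XYY, YXY, YYX.
TARGETS = {("x", "x", "x"): +1, ("x", "y", "y"): -1,
           ("y", "x", "y"): -1, ("y", "y", "x"): -1}

best = 0
for ax, ay, bx, by, cx, cy in product([+1, -1], repeat=6):
    out = {"x": (ax, bx, cx), "y": (ay, by, cy)}
    satisfied = sum(
        out[s][0] * out[t][1] * out[u][2] == target
        for (s, t, u), target in TARGETS.items())
    best = max(best, satisfied)

print(best)  # -> 3: every assignment fails on at least one of the four settings
```

The parity argument behind this: the product of the four triple-products is always +1, while the four targets multiply to -1, so no assignment can satisfy all four.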

So: if we use the pseudo-random number generators, the settings are fixed by pure math and cannot be tampered with by any physics. For any initial state of the triple of particles, there is an objective 1 in 4 chance that a bad global setting is chosen for that triple, a setting to which the particles will give an impermissible response. So somehow or other (and nobody has ever had a clue how, except by mere fiat) the theory must forbid—wipe out of possibility—all of those global initial conditions that would lead to an impermissible result. And since on each run there is a 1 out of 4 chance of the initial state of the triple of particles being "bad" (i.e. giving an impermissible result), *on each and every run* at least 1/4 of the possible initial conditions—relative to the natural space of initial conditions that allows any initial condition for the triple to be matched with any initial condition for the experimental apparatus—must be somehow suppressed. And that suppression must be repeated for every single run of the experiment. Hence my calculation. For a run of just 1000 experiments (or 2000, if half are ignored because the global settings were not one of the relevant four), only 10^-125 of the natural space of initial conditions (i.e. the space implied by the dynamics) can be allowed.

And by my lights, suppressing all but 10^-125 of the natural space of initial conditions counts as only allowing a "tiny sliver" of that space.

Tim

It is time that you produce a mathematical demonstration of your contentions.

Nothing less is going to further your arguments. Your verbiage has confused everyone attempting to follow the discussion.

Where have you published your work in this area of inquiry?

I wonder what this AI would conclude here...

https://www.eurekalert.org/pub_releases/2018-07/cu-atc072418.php

sean s.

Tim,

You write:

"if we use the pseudo random number generators, the settings are fixed by pure math and cannot be tampered with by any physics."

A pseudo random generator is not made of math but is a physical thing. Like the experimenter who set it up and so on and so forth. You have used the "pseudo random generator" to postulate a probability distribution. You failed to prove these distributions are independent of each other. Not so surprisingly, because it's impossible to prove without taking into account the time-evolution of the model. As I said, you have no argument.

Best,

B.

Lockley

I have given the math above. Sabine is so desperate that she first suggests that the sequences of parities of the digits I mention are not statistically independent, and then realizing what a ridiculous claim that is complains that although it is true that they are statistically independent I haven't proven it in the right way. So even she has no coherent complaint about the math.

You might want to look at my book Quantum Nonlocality and Relativity, Blackwell, 3rd edition. The original edition was published in 1994, so yes, I have been thinking about this for a little while.

Sabine

"A pseudo random generator is not made of math but is a physical thing. Like the experimenter who set it up and so on and so forth." I congratulate you on your perspicuity, but am puzzled why you would be under the impression that I thought computers and/or people were made of ectoplasm or, even worse, out of mathematics. I am not Max Tegmark.

What a pseudo-random number generator is made out of is neither here nor there. What is important is that the output of one is statistically independent of the output of another that is programmed to calculate the parity of the digits of a different irrational number. If you really want to maintain that you are unsure that the strings of binary digits generated according to the recipes I gave are statistically independent, I will take a bet for $1,000 at 1,000 to 1 odds that the strings I mention will pass any test of statistical independence that you care to specify. Put up or shut up.

Since the output of such a pseudo-random number generator is fixed by purely mathematical considerations, I don't need to "take into account the time evolution of the model": that's the whole point. I already know that the output of the machines—and hence the setting of the various devices—will satisfy a condition of statistical independence no matter what the computers are made of or what types of code they are running. They just have to be properly programmed and functioning.
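For what it's worth, the statistical-independence claim for digit-parity generators is easy to probe numerically. A toy sketch of my own construction (not from the thread): extract decimal digits of √2 and √3 exactly with integer arithmetic, map their parities to ±1 settings, and look at the sample correlation of the two streams, which for independent fair streams of length n should be of order 1/√n.

```python
from math import isqrt

def digit_parities(m, n):
    """First n decimal digits of sqrt(m), mapped to +1 (even) / -1 (odd)."""
    # isqrt(m * 10**(2*(n-1))) == floor(sqrt(m) * 10**(n-1)), computed exactly.
    digits = str(isqrt(m * 10 ** (2 * (n - 1))))[:n]
    return [1 if int(d) % 2 == 0 else -1 for d in digits]

n = 5000
a = digit_parities(2, n)  # settings stream for one apparatus
b = digit_parities(3, n)  # settings stream for another

# Sample correlation of the two +/-1 streams; for independent fair streams
# this is ~N(0, 1/sqrt(n)), i.e. about 0.014 here.
corr = sum(x * y for x, y in zip(a, b)) / n
print(corr)
```

This settles nothing about superdeterministic dynamics, of course; it only illustrates that independence of such streams is an ordinary, checkable statistical property.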

Since nothing about the state of the GHZ particles—or of the physical process leading up to their creation—can have any influence at all on the settings of the apparatuses, the only way there could be statistical dependence of the state of the particles on the apparatus settings is either retrocausation (which is not at issue here) or pure coincidence. See again "Waiting for Gerard" (https://www.facebook.com/tim.maudlin/posts/10155641145263398).

The only way to remove the charge of coincidence is to go and delete the bad initial conditions—the initial conditions that lead to behavior that contradicts the quantum-mechanical predictions—by hand, leaving only the good initial conditions and violating Statistical Independence. And that procedure spells the end of scientific practice.

I not only have an argument, I have an airtight argument that has resisted all of your feeble attempts to find a hole in it. The more you squirm at this point the more clearly you reveal that you are just psychologically incapable of admitting error or of learning anything that contradicts whatever random opinion you happen to have formed about something you have given no serious thought to. Early on, I recommended that you read Bell's "Free Variables and Local Causality", which is exactly on this topic. You did not. You inform us that you read Bell many years ago and can (obviously) not recall any of it. It would have been much less trouble and taken up much less of your time and been much less embarrassing to you if you had just gone back and read Bell with some attention and learned something. Whatever you may think of me, you better recognize that Bell was smarter than both of us put together. But for whatever reason you are more inclined to hurl baseless insults than to actually try to understand a topic that you have obviously not studied or thought deeply about, and I have devoted more than half my life to.

What is with the complete and total blind arrogance of physicists when it comes to foundational issues? You are never taught about them, you never read deeply about them, they are no part of your training, but you are passionately convinced that you know more about them than people who have devoted all their time and attention to them. The psychology here is just weird and, frankly, pathological.

Sabine,

You deleted your comment: „Correction: It is possible to prove but not the w…”

Please reveal the weapon that Tim can use in an entirely deterministic world that critically depends on the initial condition.

I only can guess: „… not the w[ay]”

[way you try ...]?

[way you try without providing a deterministic time-evolution …]?

[way you try without providing a deterministic time-evolution. If this evolution does not at all reproduce our world with all its complexity, then this would be kind of a prove…]?

No, you would not use our real world in a comparison - it must be something more mathematical …

Please Sabine, reveal it!

Please

Sabine,

in his last post, which went through his calculation in great detail, Tim did what you earlier accused him of: presupposing a measure of some sort on the space of initial conditions.

But I don't think this should be of any comfort to you, at all. Tim could let the superdeterminist pick whatever measure they like to quantify what counts as a "tiny, tiny sliver" of the space of possible initial conditions, as long as it is not specified in a cheating way, like "Uniform over all regions that are such that, throughout history, observable phenomena match the predictions of QM." The point is that there are so many different ways to set up a physical system (pseudo-random number generator; human; output of a QRNG; thermometer attached to a nice hot cup of tea; ...) to determine the measurement directions in a GHZ experiment that, if we restrict our gaze to the class of models that have humans in them running GHZ experiments, and put any non-cheating measure over the corresponding region of IC-space that you like, the part of that space in which the local, deterministic theory manages to reproduce the quantum predictions despite repeated and varied experiments attempting to falsify them will be -- by the measure YOU supplied -- a tiny, tiny sliver of that region. And it will be, moreover, one that cannot be specified in a non-cheating way. That's why it seems to me absolutely correct to say that superdeterminism as a way to salvage locality in the face of Bell tests is silly, ridiculous, and unscientific. We don't have to wait to see what specific dynamics 't Hooft or anyone else puts forward to make this determination; we can see already that the "theory" will have to place absurdly gerrymandered restrictions on the space of allowed initial conditions.

Reimond,

I deleted the comment with the correction and instead reposted the original comment with the correction already in it. As I have repeated like some dozen times, it's possible to make a sensible argument, but not without drawing on the law for the time-evolution (which is model-dependent).

Tim,

Again you produce many words (not to mention insults) but no argument. I don't care at all what you want to bet about your pseudorandom generators, because your bet isn't a quantum measurement.

Look, whatever is the initial condition and dynamical law of your superdeterministic theory produces (to good precision) all measurements we have made so far. That includes Bell-type experiments and GHZ and so on. You cannot argue against that by talking about other experiments.

Really I get the impression you don't understand what superdeterminism is about to begin with. Best,

B.

Tim,

I should add, you have changed your argument. You are no longer claiming that superdeterminism is unscientific as a matter of principle; you are now drawing on empirical knowledge (about statistical properties of subsystems). That's a step in the right direction, but you have a few more steps to go.

Carl,

"Non cheating" is just another word that replaces other vague notions like "unnatural" or "hyperfinetuned".

"one that cannot be specified in a non-cheating way. "Prove it. I have explained several times above why that's not possible without drawing on the time-evolution law, but apparently it's hard to grasp. I am sure the moment you try to write it down it'll occur to you that you cannot make a statement about the probability of finding subsystems with certain properties without making an assumption about the initial state and the law that gets you to the present time.

While you're at it, also try to calculate the probability that the prepared state in a quantum measurement is not correlated with the detector. Best,

B.

Sean S,

From the article:

"The computer model, which also considered Google's Perspective, a machine-learning tool for evaluating "toxicity," was correct around 65 percent of the time. Humans guessed correctly 72 percent of the time."

Not so impressive for AI.

But here's a quick way to bring up both statistics. Let anyone who is not a credentialed physicist challenge anyone who is (no matter what branch of physics they have expertise in) on any topic in the foundations of physics. Let the non-physicist be right and the physicist be wrong. Let the non-physicist be stubborn enough not to be cowed by empty assertions of competence or knowledge. Then instead of the physicist actually learning something they don't know, and actually being grateful for being taught something new, the physicist will become toxic more than 72% of the time. (Thank goodness for the exceptions, which do exist!)

In trying to correct a physicist about an error he was making concerning AdS/CFT, and sitting with a rather distinguished physicist friend who completely agreed with me, I have had (and this is not made up or exaggerated) the physicist simply announce out of thin air "You don't know what a microcanonical ensemble is". I had not (my hand on Speakable and Unspeakable in Quantum Mechanics) even mentioned the term "microcanonical ensemble", but all of a sudden (the argument was not going well for him), out of the blue, comes this assertion.

Somewhat taken aback, I responded that I did, indeed, know what a microcanonical ensemble is. To which he replied, with complete confidence, "No you don't". And—to repeat—I had not even mentioned microcanonical ensembles in any of the points I was making!

And then—I kid you not—he took out his iPad and announced that he was going to write down a partition function because obviously I had no idea what a partition function was. So he rather grandiosely wrote down the partition function for the microcanonical ensemble and displayed it to me, as if the very sight of it would work like the sight of a cross to a vampire and I would have to hide under the table in fear. I have a witness to all this.

Cont'd.

Another physicist told me to read a published paper she had written with a co-author. It was relevant to a point that was in dispute. So I found the paper, and glancing over it I spotted a fatal flaw in the argument (the authors had misapplied their own definition). It was not an accident or luck that I spotted the flaw: it was because I had been thinking about the topic and understood the logical dialectic. I thought that the ability to see the mistake that had eluded both of them (as well as the referees of a distinguished physics journal), and see it at a glance, might actually serve to convince her that I knew what I was talking about—maybe even better than she did—and might put her in a frame of mind to pay some attention to the points I was making.

But no. Instead, she continued undeterred, and when I brought up the error in her published paper she just got angry and asked if I wanted her to publish a correction! (Of course, it would never occur to her to *want* to publish a correction, because the paper would mislead anyone who read it and did not catch the error.) I had not mentioned publishing a correction, and that was not my point, but she simply could not comprehend the significance of what had happened and why I kept mentioning the error to try to get her to show some willingness to attend to what I was saying. I have all of this recorded in an e-mail exchange.

And just now, Sabine produced this: "Again you produce many words (not to mention insults) but no argument. I don't care at all what you want to bet about your pseudorandom generators, because your bet isn't a quantum measurement." I will, of course, comment on it. But ask yourself how serious someone is being who can write such a thing. (Some physicists also seem to have adopted the following scheme. If you write too briefly, skipping over some obvious steps in an argument, they either misread it or claim they cannot understand it or that it is not a valid argument. But if, to avoid that, you write everything out in painful detail with concrete examples and illustrations, then they complain that you use too many words. It's a cute rhetorical trick.)

I have never been rhetorically sharp with anyone showing good faith and an honest willingness to learn and grapple with an argument. I get rhetorically sharper and sharper with people who stubbornly refuse to discuss seriously. I won't apologize for that.

And then there are the eminent physicists who announce, after having spent a whole afternoon exploring the topic, that all of climate science is a fraud. Don't get me started.

Sabine,

""Non cheating" is just another word that replaces other vague notions like "unnatural" or "hyperfinetuned". "Actually, I think that all three of these terms are pretty easy to understand, and in fact we all do understand them.

" "one that cannot be specified in a non-cheating way. "Prove it. I have explained several times above why that's not possible without drawing on the time-evolution law, but apparently it's hard to grasp. I am sure the moment you try to write it down it'll occur to you that you cannot make a statement about the probability of finding subsystems with certain properties without making an assumption about the initial state and the law that gets you to the present time. "

I am indeed making some assumptions about the time-evolution law and about the sort of theory the putative superdeterminist model is intended to be. I assume that the time-evolution law is deterministic. And I assume that the model describes some sort of time evolution of things recognizable as physical states, i.e., that the theory is recognizably

a physical theory. So it has, prior to restrictions being added, a huge "space" of possible states, sufficiently huge to contain models that (at the observable level) capture what we see around us,and an enormous variety of other sorts of world, e.g. worlds with space-aliens obsessed with doing Bell-tests, worlds where nothing like human beings exist, worlds where humans exist but never develop high technology, etc.That's all I need to assume, however, and I take it that this is understood by everyone in the debate, and should be non-controversial. But it does rule out certain sorts of "theory", e.g., the theory that says we are all simply immortal Cartesian souls and our sense-experiences are chosen by God and directly put into our minds." ... you cannot make a statement about the probability of finding subsystems with certain properties without making an assumption about the initial state and the law that gets you to the present time."You seem to be curiously obsessed with the idea of

theinitial state, as if what superdeterminist theories are supposed to do is give us a deterministic dynamical law AND a precise initial state of the actual world. This is not the way I, and I believe most people, think of superdeterminism. Instead, it's supposed to be an attempt at giving ageneralexplanation of the violation of Bell inequalities, one based on the existencein generalof correlations between the physical states that determine detector settings and the physical states of systems that we measure.My claim is that, if we don't put on blinders of some sort, we can see very clearly that any physical theory of the type we're talking about (with, remember, a huge "space" of potential initial states) which is deterministic, can only

reliablyproduce QM-like correlations, across a wide range of different scenarios where humans and space-aliens do (say) GHZ experiments over and over using multifarious methods to set the measurement directions, if it restricts, enormously, the "space" of allowed initial states. This restriction is what Tim has been calling "hyperfine tuning".Can I quantify in a precise way

how muchof the "space" of possible initial conditions has to be thrown away, without having a specific superdeterminist theory in front of me? Of course not. That's irrelevant. If you are claiming that, for all you can see, a superdeterminist theory might work without any hyperfine tuning, then I respectfully submit that your faculty of physical intuition has a huge blind-spot."While you're at it, also try to calculate the probability that the prepared state in a quantum measurement is not correlated with the detector."This request is ill-defined nonsense.

best,

Carl

Sabine

My knowledge that the outputs of the pseudo-random number generators will be statistically independent of each other, and that the strings of 1s and 0s produced by each machine will be statistically random, with about 50% of each, is not "empirical knowledge" and it does not depend on studying the dynamical laws of any model. Your belief that it requires empirical input to know these things is just false, and seems to be the source of your continuing errors. If the apparatuses in the GHZ experiment were hooked up to such devices then each possible global configuration of measurements would occur one eighth of the time, completely randomly and unpredictably. No local physical theory could manage to predict that the initial state of the GHZ particles would be statistically correlated with the later detector settings without restricting the possible initial conditions in a way that—to use Carl's phrase—cannot be specified in a non-cheating way. By this he means that the physicists would have to just calculate the sequence of global experimental conditions and then adjust the initial states of the particles by hand.
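The one-eighth claim is easy to check numerically. Here is a minimal sketch (a hypothetical setup, not Tim's actual devices): three independently seeded pseudo-random generators stand in for the three apparatuses, each picking one of two settings per run, and we tally the eight possible global configurations.

```python
import random
from collections import Counter

def simulate_settings(trials, seed=0):
    """Three independent pseudo-random generators each pick one of two
    settings (X or Y); tally the eight possible global configurations."""
    rngs = [random.Random(seed + i) for i in range(3)]  # one PRNG per apparatus
    counts = Counter()
    for _ in range(trials):
        config = tuple(rng.choice("XY") for rng in rngs)
        counts[config] += 1
    return counts

counts = simulate_settings(80_000)
# All 8 configurations appear, each close to 1/8 of the runs.
for config, n in sorted(counts.items()):
    print("".join(config), round(n / 80_000, 3))
```

No knowledge of the generators' internal physics goes into predicting this; only their statistical independence matters, which is the point being made.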

You continue to claim that my knowledge of the statistical characteristics of the parities of the digits of pi is empirical in origin. It is not. How much clearer can I make that? And if you doubt it, take my bet. Use whatever computer running whatever code you like to calculate the parities. Since I don't know the physics of your machine, I cannot be basing my prediction on the physics of anything. Yet I make the prediction, and will bet you at handsome odds I am right. Put up or shut up.
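The bet is cheap to take up in software. A sketch using Gibbons' unbounded spigot algorithm for the decimal digits of pi (exact integer arithmetic throughout, so no floating-point issues); the roughly 50% share of odd digits is the statistical regularity being wagered on:

```python
def pi_digits(n):
    """First n decimal digits of pi via Gibbons' unbounded spigot algorithm."""
    digits = []
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            # Next digit is determined; emit it and rescale.
            digits.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # Consume another term of the underlying series.
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

digits = pi_digits(1000)  # 3, 1, 4, 1, 5, 9, ...
odd_fraction = sum(d % 2 for d in digits) / len(digits)
print(odd_fraction)  # close to 0.5
```

Any other correct method of computing the digits gives the same parities, which is exactly the point: the prediction does not depend on the physics of the machine doing the computing.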

Sabine

"Really I get the impression you don't understand what superdeterminism is about to begin with."

That is the very definition of chutzpah. I have defined hyperfine tuning, which is what "superdeterminism" is, over and over and over in more and more painful detail, probably boring the readers of this blog out of their skulls. And you have given at best some completely obscure and incomprehensible gibberish which you refuse to explicate or defend. My multiple attempts to force you to be clear about what (if anything) you have in mind have been repeatedly dismissed as "deflections" without so much as a syllable of clarification.

So do me a favor. Relieve me of my ignorance. Do tell what "superdeterminism" is and how it can manage to defeat Bell's assumption of Statistical Independence without making a mockery of the entire accepted scientific method. I'm all ears.

Tim,

You say:

"Since nothing about the state of the GHZ particles—or of the physical process leading up to their creation—can have any influence at all on the settings of the apparatuses, the only way there could be statistical dependence of the state of the particles on the apparatus settings is either retrocausation (which is not at issue here) or pure coincidence. "

You ignore again the counterexample I have provided.

The settings of the detectors are encoded in the state (position/momenta) of their internal particles (electrons and quarks).

This state is available at the location of the source in the form of the electric and magnetic fields generated by the detectors' internal particles.

The only assumption one needs to make is that the spins of the emitted particles are determined by the electric and magnetic fields at the location of the source. This is it, no need for retrocausation.

Carl,

" The point is that there are so many different ways to set up a physical system (pseudo-random number generator; human; output of a QRNG; thermometer attached to a nice hot cup of tea; . . .) to determine the measurement directions in a GHZ experiment that, if we restrict our gaze to the class of models that have humans in them running GHZ experiments, and put any non-cheating measure over the corresponding region of IC-space that you like, the part of that space in which the local, deterministic theory manages to reproduce the quantum predictions despite repeated and varied experiments attempting to falsify them will be -- by the measure YOU supplied -- a tiny, tiny sliver of that region. And it will be, moreover, one that cannot be specified in a non-cheating way."

All the possible ways to set up the detectors you mentioned above are fundamentally the same. A human, a computer, a thermometer, etc. are just groups of charged particles (electrons and quarks). There is a local and realistic theory (classical electromagnetism) that tells us that the state of any such number of charges is reflected in the electric and magnetic fields produced by them at any distance.

Once you realize that the state of the detectors is available at the location of the source in the form of those fields, the fact that the spins of the emitted particles are not independent of them does not look so strange anymore.

Sabine, Tim,

To drop the assumption of statistical independence, it is obviously superdeterminism that has to provide the deterministic, local time-evolution law. And if such a law exists, then Gerard, Godot or any master of coincidence can indeed choose the initial condition as "unlikely" as they want, as long as superdeterminism does not hold for them - otherwise no choosing.

Thus, it is the burden of Godot to provide such a law. While we are waiting for Godot, it would be unfair to call superdeterminism silly, but to continue with science I would say it is reasonable to open a window, assume that such a deterministic, local time-evolution law does not exist, and look for another time-evolution law that makes use of Bell's insight with the 'freedom of choice' loophole closed.

Andrei,

Aside from the absurdity that the state of a triple of particles a light year from a computer programmed to calculate the digits of pi would be systematically and infallibly different from the state of a triple of particles a light year from a computer programmed to calculate the digits of phi *irrespective of the entire state of the rest of the universe, and in precisely the right way to alter the reaction of the particle in just the right way to preserve the GHZ correlation*, there is the further fact (as pointed out in my book and now actually being funded) that by making the setting depend on CMB in the right direction, even the entire past light cone of the triple at creation does not contain information about what the setting will be. The inability to appreciate these absurdities is indicative of just how far these discussions have come from anything physically reasonable. And even so, I pointed all this out over 2 decades ago.

Tim,

Your definition of hyperfine-tuning, as I have told you several times, relies on another ill-defined word "unnatural" which, when asked to explain, you attempt to define with phrases like "tiny, tiny, sliver" which you then attempt to define with pseudorandom generators because you keep confusing the probabilities within subsystems with the probability of finding such a subsystem to begin with. Your "tiny, tiny, sliver" remains to be proved.

I have told you several times already why this argument doesn't work and what an actual argument would look like, but you aren't listening. It's a great puzzle to me because I think it should be in your interest more than in mine. I have little to do with the foundations of quantum mechanics aside from an occasional outburst of frustration about there being too much talk and not enough walk.

To sum it up again: There is nothing wrong with picking an initial condition "just because" it works. We always do that. If you don't understand this you should look at some papers where physicists make actual predictions. Say, for the CMB. Or for the GHZ experiments.

What justifies an initial state? That it works. What does it mean that it works? It means that the initial state together with a time-evolution explains observations. By explaining I mean it's a simplification over just writing down the measurement outcome.

So how do you show that a superdeterministic model isn't scientific? By showing that it does not provide any simplification over just writing down the data. You cannot do that without taking into account the time-evolution, because generically it's the time-evolution that provides the simplification.

Now look, as I said earlier, if you have a reversible time-evolution you can always find an initial state that will give you the suitable result (provided it's possible at all). But this state will generically violate statistical independence. That's where superdeterminism comes in, I hope that answers your question. Your task is then to show that there is no way that you can generate the necessary initial states for the subsystems (the experiments in question) from an initial condition for the universe with the given time evolution so that you actually explain something.

Quantum mechanics explains something because you do not have to put the information about the final correlations (for the examples above) into the initial state of the subsystem to begin with (the argument that Mateus tried but ultimately failed to make). On the other hand, you lose determinism. In a superdeterministic theory you gain determinism, but now you haven't simplified anything in your experiment (subsystem), which raises the question of whether you can show that there is any initial state for the universe (another "past hypothesis", so to speak) for which this *generically* happens. Again let me emphasize that you don't need to show that the initial state of the universe is generic (that's not possible), but that there's no initial state for which the subsystems are generic. (And of course the initial state should be compatible with observation...)

I suspect it's possible to show for certain models (like 't Hooft's) that this is not the case. In any case I'd be interested in seeing such a proof. I am guessing that the models which survive are the retro-causal ones. Best,

B.

Reimond,

I agree that it's up to those who claim it's possible to come up with a useful model. But it's not helpful to declare their efforts "silly" without having proved it's not possible. That's a particularly bad attitude because it discourages physicists from making experimental tests that could settle the theoretical debate.

Sabine

So your confusions all trace back to indefensible misunderstandings about explanation. OK, let's start that discussion.

You write: "What justifies an initial state? That it works. What does it mean that it works? It means that the initial state together with a time-evolution explains observations. By explaining I mean it's a simplification over just writing down the measurement outcome."

So, I do a GHZ experiment with the settings X,X,X and get the outcome Up,Up,Up. There: I just wrote down the measurement outcome. If you are under the impression that any relevant initial state of any deterministic theory can be written in a simpler way, you have to find an initial state simpler to write than "Up, Up, Up". Send us a postcard when you find it.

As I have said over and over, you have no expertise in foundations of physics or in philosophy of science or in the theory of explanation. Whatever opinions you have about these topics you have arrived at in a rather random and uncritical way, and it shows. It took me all of 5 seconds to refute your suggestion. But your amazing self-confidence on these topics is undeterred.

Sabine

You write:

"I agree that it's up to those who claim it's possible to come up with a useful model. But it's not helpful to declare their efforts "silly" without having proved it's not possible. That's a particularly bad attitude because it discourages physicists from making experimental tests that could settle the theoretical debate. "

No experiment can possibly refute superdeterminism, just as no possible experiment could refute the hypothesis that the world has a completely local physics but we are all just brains in vats being fed inputs that create the appearance of a world in which Bell's theorem is violated for experiments done at space-like separation. The proper analog to a superdeterministic theory is a theory like that.

Sabine

Here's another one. You write:

"Your definition of hyperfine-tuning, as I have told you several times, relies on another ill-defined word "unnatural" which, when asked to explain, you attempt to define with phrases like "tiny, tiny, sliver" which you then attempt to define with pseudorandom generators because you keep confusing the probabilities within subsystems with the probability of finding such a subsystem to begin with. Your "tiny, tiny, sliver" remains to be proved."

What in the world does the "probability of finding such a subsystem to begin with" have to do with anything? What is the probability that some scientist somewhere will set up a GHZ experiment? Who knows? Who cares? They have done so, and given that they have done so there are the probabilities for the various global experimental arrangements that I used for my calculations. Neither you nor anybody else ever in the history of mankind from beginning to end will be able to provide a probability for such an experiment to actually be done. And even if—per impossibile—you did come up with such a probability, it would be irrelevant. Quantum theory provides predictions for the outcomes of such an experiment whether it is ever done or not, and those predictions violate Bell's inequality. So quantum mechanics is a non-local theory.
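For readers keeping score, the GHZ prediction being invoked can be written out in a few lines (this is the standard Mermin presentation of the argument; the notation here is added for illustration):

```latex
% GHZ state; X and Y denote sigma_x and sigma_y measurements on each particle.
\[
  |\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|000\rangle - |111\rangle\bigr)
\]
% Quantum mechanics predicts these products of outcomes with certainty:
\[
  X_1 Y_2 Y_3 = Y_1 X_2 Y_3 = Y_1 Y_2 X_3 = +1, \qquad X_1 X_2 X_3 = -1 .
\]
% A local deterministic account assigns each particle pre-existing values
% a_x, a_y, b_x, b_y, c_x, c_y in {+1,-1}. Multiplying the first three
% constraints, every y-value appears squared and drops out:
\[
  (a_x b_y c_y)(a_y b_x c_y)(a_y b_y c_x) = a_x b_x c_x = +1,
\]
% contradicting the quantum prediction X_1 X_2 X_3 = -1.
```

So with settings X,X,X any local deterministic assignment of values is committed to the opposite product from the one quantum mechanics predicts, with no inequality or statistics needed for the contradiction.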

Sabine, you said:

". . . it's not helpful to declare their efforts "silly" without having proved it's not possible. That's a particularly bad attitude because it discourages physicists from making experimental tests that could settle the theoretical debate."What sort of experiments could help settle the debate between those who favor superdeterminism vs. those who think nature is nonlocal? The whole point of a superdeterminist theory or model is to generate phenomena that

looknon-local in all experimental tests (and, more generally, produce the same observable phenomena as QM), while in fact, underneath at the non-observable level, being both local and deterministic.I suppose someone might come up with a superdeterminist theory that captures all the EPR/GHZ/etc. nonlocal correlations of QM, but diverges from QM's predictions in some way regarding some other phenomena. But, obviously, physicists can't start designing experimental tests to check whether that theory or QM is correct,

until they have that theory in hand.To my knowledge, no such superdeterminist theory has ever been produced; yet you seem to think that experimental tests could be done right now that would bear on this debate. Please explain.best,

Carl

Tim,

"Aside from the absurdity that the state of a triple of particles a light year from a computer programmed to calculate the digits of pi would be systematically and infallibly different from the state of a triple of particles a light year from a computer programmed to calculate the digits of phi *irrespective of the entire state of the rest of the universe"...

In order to properly account for the change in the computer's programming you need to go back in time (at least two years, to when the field present at the source at the moment of emission originated) and perform the required modification in the positions/momenta of the charged particles. You also need to make a global change to the state of the fields across the universe as a result of your modification. At this point you can let nature follow its evolution. It remains to be seen how much those tiny changes would influence the spins, but I expect a significant change due to the chaotic behavior of complex EM systems.

At this point we have established that in the case of electromagnetism the independence assumption fails. We don't know in what way it fails (whether it fails, as you say, in the "precisely right way" or not), but it clearly fails. So we are in the presence of a theory that is local, realistic and scientifically respectable that cannot be ruled out by Bell. No need for fine-tuning, retrocausality, etc.

If you still want to argue that the statistics implied by the theory cannot reproduce the observed results you need to provide some other argument than Bell's theorem. I am waiting to see it.

"there is the further fact (as pointed out in my book and now actually being funded) that by making the setting depend on CMB in the right direction, even the entire past light cone of the triple at creation does not contain information about what the setting will be."

We don't have a theory that is capable of describing the Big-Bang so there is no way to know what correlations could be expected in regards to the particles created there. So I think that any appeal to that event is inherently question-begging.

Regards,

Andrei

Tim,

"So, I do a GHZ experiment with the settings X,X,X and get the outcome Up,Up,Up. There: I just wrote down the measurement outcome. If you are under the impression that any relevant initial state of any deterministic theory can be written in a simpler way, you have to find an initial state simpler to write than "Up, Up, Up". Send us a postcard when you find it."You are the one who claims it's not possible, you are the one who has the burden of proof.

"It took me all of 5 seconds to refute your suggestion."That's what you call to "refute" an argument? Proclaiming that you are allowed to make false statements because no one has proved the opposite?

"No experiment can possibly refute superdeterminism...That's right, but there are certainly experiments that can probe specific models for superdeterminism. Just that no one works on this because they have bought the story that it's just "silly". Best,

B.

Carl,

Once you have a dynamical law for the superdeterministic theory it will allow you to make predictions for single measurement outcomes, where quantum mechanics only gives predictions for probabilities. That you reestablish determinism is after all the whole point of doing this exercise. Whether you can measure this depends of course on what you have assumed the hidden variables are. 't Hooft I think claims they are somewhere at the Planck scale, which makes any test impossible (or so I would think). In a certain sense it would mean that the seeming indeterminism of quantum mechanics is due to a non-decoupling of scales.

I tend to think that it would be more reasonable to first try and probe superdeterminism that fits into the effective field theory framework. A particularly minimal model is that the "hidden variables" are unaccounted-for degrees of freedom of the detector itself. It is possible to test this idea, or at least constrain it, but no one even looks into it. Best,

B.

Sabine

In philosophy there is something we call "burden tennis". Someone playing burden tennis uses the rhetorical trick of shifting the burden of making an argument to the opponent while raising the level of justification required for an argument as high as possible. Meanwhile, the trickster steadfastly refuses to observe this same level of justification in making her own remarks. This is because the trickster has no sincere interest in the truth, but only in the sophistical goal of appearing to win an argument.

You are the Wimbledon champion of burden tennis. It is a pity that you don’t have the intellectual integrity to recognize that that is what you are doing and the common decency to stop it. You are acting as a role model for the sort of ridiculous behavior that I documented in the comment to Sean S. above (6:28 AM, July 26). The sort of physicist who is so self-involved and unable to acknowledge error that no rational discussion is possible. No wonder the field is such a mess.

Examples that anyone can check: just count the number of times you have falsely claimed that I did not define "superdeterminism", against my multiple patient and ever-more-detailed definitions above. Compare that to your flat refusal to explicate your own phrases "non-local correlation" and "quantum mechanics with extra variables sprinkled over it". You demand further explication—which I patiently provide—and call my request that you do the same a "distraction". Nice work if you can get it.

Let's look at one example; you can find others in your blog entries almost at random. I say that no deterministic theory will ever be able to postulate an initial condition simpler than the phrase "Up, Up, Up" (10 characters long) from which that outcome can be derived via a dynamical law. This is so blindingly self-evident that no one who was speaking seriously would be dumb enough to deny it. Your response? "You are the one who claims it's not possible, you are the one who has the burden of proof."

Did I mention the phrase "burden tennis"?

Now let's look at a claim you just made in a recent post:

"Indeed, it would be an interesting exercise to quantify how well modified gravity does in this set of galaxies compared to particle dark matter with the same number of parameters. Chances are, you’d find that particle dark matter too is ruled out at 5 σ. It’s just that no one is dumb enough to make such a claim. When it comes to particle dark matter, astrophysicists will be quick to tell you galaxy dynamics involves loads of complicated astrophysics and it’s rather unrealistic that one parameter will account for the variety in any sample."

Chances are? What chances? Where did you get these chances? How do you define them? You made the claim, you are the one who has the burden of proof to define what the mentioned chances refer to and then provide the detailed calculation that you used to arrive at 5 sigma. Etc. etc. ad nauseam.

Your sort of burden tennis is childish and unenlightening, and you engage in it because you are hopelessly egotistical. I have bent over backwards to answer your questions as if they were being asked seriously and in good faith even though it is obvious that they were not. I have provided painfully detailed arguments and calculations. I have raised point after point where your own statements are completely opaque, and you simply label these observations "distractions". I started off saying that you ought to read Bell's beautiful and relevant paper on this topic, and you are too lazy to do that.

Stop being such a bad role model. It benefits no one, most certainly not yourself. If you want to help change the practice of physics, as you loudly proclaim, start with yourself. Childish sophistical tricks are for trolls, not serious people.

Andrei,

So now you are suggesting that in order for a superdeterministic theory to make a prediction about the outcome of a particular actual GHZ experiment it has to reference a precise theory of the Big Bang? Then no such theory will ever exist.

Question: if I go into the past light cones of Alice, Bob, and Carl, who are conducting the GHZ experiment, and that of the apparatus that is preparing and emitting the entangled quantum states, sufficiently far back (no need to go all the way to the origin of the universe), and use that to construct the initial conditions on a Cauchy hypersurface, isn't that sufficient? And isn't everything after that determined by the current non-superdeterministic laws of physics, including what settings Alice, Bob, and Carl are going to use on their apparatus? How does the idea that initial conditions and laws of physics dictate the future square with the idea that Alice, Bob, and Carl have the ability to choose the apparatus settings? Leave aside superdeterminism.

Tim;

“Let anyone who is not a credentialed physicist challenge anyone who is (no matter what branch of physics they have expertise in) on any topic in the foundations of physics. Let the non-physicist be right and the physicist be wrong. Let the non-physicist be stubborn enough not to be cowed by empty assertions of competence or knowledge. Then instead of the physicist actually learning something they don't know, and actually being grateful for being taught something new, the physicist will become toxic more than 72% of the time. (Thank goodness for the exceptions, which do exist!)”

OK. Let’s amend that slightly:

Let anyone who is not a credentialed X challenge anyone who is on any topic in X. Let the non-X be right and the X be wrong. Let the non-X be stubborn enough not to be cowed by empty assertions of competence or knowledge. Then instead of the X actually learning something they don't know, and actually being grateful for being taught something new, the X will frequently become toxic.

Some thoughts on this.

1) X could be physics; X could be philosophy. X could be plumbing. What you describe is a human trait.

2) The party (the non-X or the X) who is right may also “become toxic”; perhaps because the party-in-error is not easily persuaded.

3) I believe it is an error to equate “becoming toxic” with “being wrong”. “Becoming toxic” equates to “behaving wrongly”. Pardon the crudity, but one can be perfectly right and be an ass-hole. One can be stubbornly wrong and be pleasant about it.

You provided some anecdotes that began with “In trying to correct a physicist about an error he was making ...” In reality, you were trying to correct a physicist about an error you thought he was making; perhaps he was; perhaps you were wrong. You may as well be open to that because, lacking other accounts of the event, no one should be inclined to decide who was correct or who to blame for “becoming toxic”. It may have been a joint effort. It usually is.

You asked me to consider a comment Sabine made (“Again you produce many words (not to mention insults) ...”) and you asked me to “ask yourself how serious someone is being who can write such a thing.” Based on some of the comments you have made to Sabine, I have to say that her comment does not detract from her seriousness any more than some of your harsh comments detract from your seriousness.

I’m not taking sides in this toxic conversation: I suggest you both take a break. Even if one of you is truly innocent, taking a break is advisable.

Reading this toxic conversation has, for me, only a forensic value: to see how a conversation between two intelligent people can spiral into this muck is ... AMAZING.

sean s.

Tim;

I just saw this recent comment from you to Sabine:

“As I have said over and over, you have no expertise in foundations of physics or in philosophy of science or in the theory of explanation.”

Only a day before, you proposed to me a scenario in which a heroic non-expert was “stubborn enough not to be cowed by empty assertions of competence or knowledge ...”

Gotta love irony! This conversation is not particularly enlightening, but it sure is entertaining in a morbid sort of way.

sean s.

To Reimond’s “waiting for Godot” comment, Sabine wrote:

“I agree that it's up to those who claim it's possible to come up with a useful model. But it's not helpful to declare their efforts ‘silly’ without having proved it's not possible. That's a particularly bad attitude because it discourages physicists from making experimental tests that could settle the theoretical debate.”

I have to agree with this completely. A conclusory comment like “it’s silly” is not useful. It may be true, but it’s not a logical argument. In science, that is a significant difference.

sean s.

"A particularly minimal model is that the "hidden variables" are unaccounted for degrees of freedom of the detector itself."

If memory serves, there was a post about such a model, and how to test for it, at this very site, a few years back. At the time, I think experimenters were very busy building and testing various particle accelerators, which were expected to produce great results. (Not an unduly biased expectation even in hindsight, but one which did not pan out as well as expected--see Dr. Hossenfelder's book for details.)

Personally, I think a universe with some randomness built in is better than one without, not that the universe cares what I think. So I too would have voted for the particle accelerators over a pure science experiment to close another loophole; although the latter would have been much less expensive, of course. Still, there used to be a lot of smaller experiments like that rather than a few huge ones; such as the experiments to test that protons and electrons have opposite charges of equal absolute magnitude, to higher and higher precision.

Tim,

Yes, burden tennis, I like that. You are making a claim "superdeterminism is silly." I ask you to prove it. You now just simply refuse to and instead want me to prove... what exactly? That you go around and make empty proclamations? This thread should be proof enough. I have several times debunked your attempts to sneak in a priori probability distributions. Wisely enough, you have given up on it. The current status is that you have no argument.

Unfortunately, if I extrapolate your past behavior, this will not prevent you from repeating your claim with confidence anyway.

You are making several false assertions above. I have stopped replying to those because you get easily distracted and I don't have the patience to constantly correct you.

"I say that no deterministic theory will ever be able to postulate an initial condition simpler than the phrase "Up, Up, Up" (10 characters long) from which that outcome can be derived via a dynamical law. This is so blindingly self-evident that no one who was speaking seriously would be dumb enough to deny it. Your response? "You are the one who claims it's not possible, you are the one who has the burden of proof."I ignored that your assertion is a straw man and instead tried to address what might have been its reasonable content, but no, you actually want me to debunk your straw man. Fine, here we go: "Data" is plural. You can't extract laws from one data point. In your example you'd have to collect a series of measurements.

Now look, quantum mechanics tells you there are correlations in that data. For that you use assumptions about the initial data and the dynamical law. That's the explanation I said it provides.

But quantum mechanics isn't deterministic, so it will never predict the actual outcome of a measurement. Superdeterminism can do that. To pick an example, it might tell you that two measurements are correlated in time. That, again, would be a simplification of the data by help of a dynamical law and an initial condition (if you can write down a model for it of course).

It is evident from your comments above that you don't have the faintest idea what to even do with a superdeterministic theory. Why do you even discuss the matter if you don't care about the science?

Your quote from my blog post on dark matter is entirely irrelevant to the current discussion. "Chances are" is not a technical phrase; it merely expresses that I think that's what they would find were they to look into it. And if they did, of course they could quantify their statistical likelihoods, as they did in the present paper.

I have to agree with Sean S. that it would be useful for Tim and Sabine to stop diagnosing each others' defects for a bit. Not that a good flame isn't entertaining now and then, and edifying about human nature; but I am more interested in understanding substantive things about physics, namely:

-- Why some physicists, like Sabine, do not agree that superdeterminism is obviously a blind alley;

-- How to express as strongly as possible that a superdeterministic theory will necessarily have to impose severe, unmotivated, and hence "conspiratorial" restrictions on the allowed initial conditions in order to reproduce the violation of Bell inequalities (as well as other quantum phenomena). To me, the arguments that Tim and I have offered in this thread are already damn convincing. But Andrei and Sabine seem to find them still lacking, so it would be nice to have a more unassailable argument.

Regarding the former: Sabine, you mention an effective field theory model in which the "hidden variables" are unaccounted-for degrees of freedom of the detector itself. Can you give references to such models?

Carl

Tim,

"So now you are suggesting that in order for a superdeterministic theory to make a prediction about the outcome of a particular actual GHZ experiment it has to reference a precise theory of the Big Bang? Then no such theory will ever exist."

Not at all. It is your assumption that superdeterminism can be ruled out by using photons originating soon after the big-bang. I do not agree with this, because we do not know anything about those photons. If they were created as predicted by classical EM, they should be correlated just like any other particles, as I have argued. If not, I would ask you to justify their lack of correlation.

Our physical theories do not deal with particles/fields appearing out of nowhere, which is why I find it necessary to have a theory describing the Big Bang.

There is a common syndrome that hinders progress in all fields. It probably has a name, but I don't know what it is. It is caused by using intelligence not to arrive at a consensus on the most likely possibility, but to defend one's own position.

Many years ago, a nephew of mine (who now has a PhD in bio-genetics or something like that) received as a gift "The Encyclopedia of Basketball" (an excellent gift--not from me--since he loved basketball and liked to study things). In a phone conversation I was giving him things to check, and mentioned Nate "the Great" Thurmond, whose height in the pre-game introductions was always given as six feet and eleven-and-a-half inches. He looked him up and said, "No, the Encyclopedia says he was six feet, one-and-a-half inches."

I replied, "That must be a misprint." No, the great EoB would not, could not include errors. "Well, doesn't it say he played center? How could a starting center in the NBA be only 6'1-1/2"?" He quickly answered, "Maybe he was a great leaper."

That illustrated the syndrome. Intelligence is used primarily as a tool to defend one's well-being, which includes the integrity of one's positions (and possessions). I guess, evolutionarily, that was its primary purpose.

(I could have fired the counter-shot, "When he played with Wilt Chamberlain, weren't they known as the Twin Towers?", but I didn't think of it in time. Darn.)

Sean S.

I appreciate your efforts to tone down this discussion. And I understand why—acting as a sort of mediator—you would not want to take sides. But that stance promotes the idea that "everybody does it" and "both sides are to blame" and so on. And that is itself a very specific and important claim if true, and even much more significant if false. I claim it is false. I have provided some evidence above and can supply more. Just count the times that Sabine asserts that I have not defined "hyperfine tuning" or given any argument that hyperfine tuning is scientifically unacceptable. Then go to the previous posts and count the number of times I did define "hyperfine tuning" and did give specific arguments against it, and that Mateus gave similar arguments against it before I got involved, and Carl gave arguments against it. See if Sabine is responsive to—or even deigns to acknowledge—these arguments.

Philosophers do not, as a matter of course, attempt to shut down conversations on the basis of credentials. This goes all the way back to Socrates, who famously engaged anyone anywhere, and always engaged the argument. That's what we are trained to do and what we do when we are acting up to professional norms. We encourage our students to push back against anything we—or the authors we are reading—claim, but to push back by questioning the premises or challenging the validity of the arguments with counterexamples. Philosophy education is, by its very nature, undogmatic. Physics education, in contrast, is unrelentingly dogmatic, especially when it comes to quantum mechanics. It was not a philosopher who characterized physicists' usual approach to trying to understand quantum mechanics as "Shut up and calculate", it was not philosophers who both strongly discouraged and actively prevented physics students from studying foundations to try to make clear physical sense of quantum theory. If there is one constant complaint about my posts here it is that I use "too many words", which reminds me of the scene in Amadeus in which the Emperor complains that Mozart's opera has "too many notes". If I go on at length it is to make my meaning and arguments as clear as possible, and to prevent misunderstanding.

Should you take my own characterization of what I do at face value? Of course not! These are claims that can be empirically checked, if you are interested enough. Go back and look over this exchange or—in fact—any exchange between Sabine and anyone challenging her assertions. Compare the effort I have made to answer questions to the efforts she has made. And then make an informed judgment.

If that judgment is that we equally engage in empty rhetorical ploys, so be it. I would be interested to see how you counted them and where you found them. And if that judgment is that one party is much, much more guilty than the other, make that claim and back it up as well. Instructive debate requires intellectual honesty on both sides: one side alone cannot carry it off no matter how much effort is put in. So when one party is not engaging honestly and openly, there is nothing to do but either call them out or call it a day. These issues are too important to call it a day.

Sabine

OK, so in place of "Up, up, up", let's use the detailed relevant phenomenon:

In a GHZ set-up, whenever the three apparatuses are all set to measure x-spins there are an odd number of "up" outcomes, and whenever one is set to measure X and the other two are set to measure Y there are an even number of "up" outcomes.

Now I'm sure you will agree that this statement describes data. Indeed, it describes a potentially unbounded set of data. And according to you, to explain this phenomenon the "superdeterminist" proposes to identify a *specific initial condition of the universe* from which, together with a local deterministic dynamical law, this phenomenon follows.

OK, have at it. In fact, give a shot at describing *any* universal initial state rich and variegated enough for there to even exist a GHZ set-up (irrespective of the outcome) that is more compact than the sentence above.

The notion that anyone could do that is—to use your own adjective—dumb. So by your own definition of what an explanation is (which is easily counterexampled in any case) the "superdeterminist" has zero chance of explaining the GHZ phenomenon. I would be astonished if any reader of this blog could fail to appreciate this point, and invite anyone who would dispute any claim I just made to do so.
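The parity pattern in the phenomenon stated above can be checked directly against the quantum prediction. A minimal numerical sketch, assuming the standard three-particle GHZ state (|000⟩ + |111⟩)/√2 and Pauli spin measurements; the mapping "product of outcomes = +1 ⇔ odd number of ups" follows from counting the −1 results:

```python
import numpy as np

# Pauli matrices and the GHZ state (|000> + |111>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def expectation(ops):
    """Expectation value of a threefold tensor-product operator in the GHZ state."""
    op = np.kron(np.kron(ops[0], ops[1]), ops[2])
    return float(np.real(ghz.conj() @ op @ ghz))

# The GHZ state is an eigenstate of each product operator, so the product
# of the three +/-1 outcomes is fixed on every single run:
print(round(expectation([X, X, X])))  # 1  -> odd number of "up" outcomes
print(round(expectation([X, Y, Y])))  # -1 -> even number of "up" outcomes
print(round(expectation([Y, X, Y])))  # -1
print(round(expectation([Y, Y, X])))  # -1
```

The usual GHZ contradiction for predetermined local values is visible here: multiplying the four constraints, each local ±1 value appears twice, forcing a product of +1, while the quantum products multiply to (+1)(−1)(−1)(−1) = −1.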

Now, to your false claims about my position. It is flatly untrue that I tried to "sneak in an a priori probability distribution". I have talked about a "tiny, tiny slice of phase space", and explained that this is quantified via the natural measure, i.e. the measure that is preserved by the dynamics. (This is often mistakenly called "Lebesgue measure".) That measure is not a priori because the dynamics is not a priori. I have also used a measure over the space of possible global configurations of the experimental equipment, and justified that by claims about the statistical properties of the parity of the digits of pi and phi, etc. The measure in this case is a priori, since it follows from purely mathematical considerations, but it is not a priori in its use, since that depends on the contingent claim that the settings of the apparatuses are determined by the sequence of parities of the digits. That is a sort of experimental arrangement that is manifestly possible, and one that any fundamental physical theory ought to make predictions about. And in such a situation, any local theory will get the wrong outcome in about one out of four runs.
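The digit-parity arrangement just described can be made concrete with a toy sketch. The digits of pi are hard-coded below, and the even→X / odd→Y mapping and the grouping into triples (one setting per experimenter per run) are illustrative choices, not details taken from the argument above:

```python
# First 30 decimal digits of pi after the decimal point (hard-coded).
PI_DIGITS = "141592653589793238462643383279"

# Illustrative mapping: even digit -> measure X, odd digit -> measure Y.
settings = ["X" if int(d) % 2 == 0 else "Y" for d in PI_DIGITS]

# Group into triples: settings for Alice, Bob, and Carl on each run.
runs = [tuple(settings[i:i + 3]) for i in range(0, len(settings) - 2, 3)]

print("".join(settings[:10]))  # YXYYYXXYYY
print(runs[0])                 # ('Y', 'X', 'Y')
```

The point such an arrangement makes is that the setting sequence is fixed by pure mathematics, yet any fundamental theory must still predict the outcomes of the runs it drives.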

Try to be serious and professional. This kind of nonsense is just destructive.

Tim;

“You are acting as a role model for the sort of ridiculous behavior that I documented in the comment to Sean S. above (6:28 AM, July 26). The sort of physicist who is so self-involved and unable to acknowledge error that no rational discussion is possible. No wonder the field is such a mess.”

Well, actually, you didn’t “document” anything; you told us a story about something you claim happened. There’s no reason for anyone to agree with your one-sided account of the event; what you provided falls far short of “documentation”.

“Stop being such a bad role model. It benefits no one, most certainly not yourself. If you want to help change the practice of physics, as you loudly proclaim, start with yourself. Childish sophistical tricks are for trolls, not serious people.”

Excellent advice; you should follow it yourself.

sean s.

Tim,

"to explain this phenomenon the "superdeterminist" proposes to identify a *specific initial condition of the universe* from which, together with a local deterministic dynamical law, this phenomenon follows.No, the superdeterminist simply assumes that this is the case. As I have told you several times, superdeterministic models explain GHZ (or other QM) correlations as good or as bad as QM *by assumption*. There is nothing interesting going on here. I know that this isn't very interesting, but it's also not the point. The point is that if you do this, you get a theory that can *also* make other predictions. That is why it's interesting.

"Now, to your false claims about my position. It is flatly untrue that I tried to "sneak in an a priori probability distribution"...

The measure in this case is a priori, since it follows from purely mathematical considerations, but it is not a priori in its use since that depends on the contingent claim that the settings of the apparatuses are determined by the sequence of parities of the digits. That is a sort of experimental arrangement that is manifestly possible, and that any fundamental physical theory ought to make predictions about. And in such a situation, any local theory will get the wrong outcome about one our of four runs."

It is just wrong to claim that this is not a priori. I already told you this above but you then refused to respond.

No experimental setting ever follows from "purely mathematical considerations". Who or what do you think set up the experiment, or the pseudorandom generator, who or what do you think the experiment is made of and so on and so forth, and what is the probability for that? There is your a priori assumption.

Maybe more importantly, you failed to show that in these cases a superdeterministic theory would likely not give the correct outcome because that's the claim you make. So please fill in the gaps: Tell us how to calculate the probability that a superdeterministic theory will give the wrong outcome, and do so without using a dynamical law and without using an a priori probability distribution.

Best,

B.

Carl,

The request I made is ill-defined nonsense indeed. It's as ill-defined as your idea that superdeterminism is "hyperfine-tuned". As I said, you don't calculate probabilities for initial states. You simply pick one that works.

I am not "obsessed" with the initial state of the universe, I am merely using this to explain why Tim's assertion is wrong. We always chose initial states just because they work and there is no way to justify a probability distribution a priori. The easiest way to see this is for the universe as a whole. (You can justify probability distributions empirically of course, but that's a different matter.)

No, I can't give you a reference to such a model, sorry. It is basically impossible to get funding to work on the topic so I had to give up on it.

"Can I quantify in a precise way how much of the "space" of possible initial conditions has to be thrown away, without having a specific superdeterminist theory in front of me? Of course not. That's irrelevant. If you are claiming that, for all you can see, a superdeterminist theory might work without any hyperfine tuning, then I respectfully submit that your faculty of physical intuition has a huge blind-spot."I am not claiming this. I am merely stating that no one has proved the opposite. As I have alluded to above, I suspect that for certain kinds of superdeterministic models such a proof might be possible, but no one has delivered it.

Best,

B.

Carl,

Let me add some more information on my remark about EFTs, though, because I personally think that's the interesting point. Any experiment tests physics at a certain resolution. If you assume that, whatever your superdeterministic theory looks like, the "hidden" degrees of freedom obey the EFT separation of scales, you can estimate how many there are and how quickly they evolve, the latter depending on the temperature.

Now, since the theory is deterministic this means the outcome of an experiment is determined by these additional degrees of freedom (of the detector, above the resolution scale). Consequently, if you manage to freeze in these dofs (reduce noise, basically) you should see time-correlations in the measurement outcomes where QM predicts there shouldn't be any.

This would test a certain class of superdeterministic models that I find plausible. But of course no one is doing such a test because they all think that superdeterminism is "silly". Instead they will do yet another Bell-type test because that will get published in Nature. Excuse the cynicism. If you read the original blogpost of this comment thread maybe you will understand where the cynicism is coming from.
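The kind of time-correlation signature described above can be illustrated with a toy simulation (my own sketch, not a model from the literature). Quantum-mechanically random ±1 outcomes should show no autocorrelation; outcomes driven by slowly evolving, "frozen-in" detector degrees of freedom would. Here that is mimicked by a repeat probability of 0.6:

```python
import random

def lag1_autocorrelation(outcomes):
    """Sample lag-1 autocorrelation of a sequence of +/-1 outcomes."""
    n = len(outcomes)
    mean = sum(outcomes) / n
    var = sum((x - mean) ** 2 for x in outcomes) / n
    cov = sum((outcomes[i] - mean) * (outcomes[i + 1] - mean)
              for i in range(n - 1)) / (n - 1)
    return cov / var

rng = random.Random(0)

# QM expectation: independent coin flips, autocorrelation ~ 0.
qm_run = [rng.choice([-1, 1]) for _ in range(100_000)]

# Toy "frozen detector dof": each outcome repeats the previous one
# with probability 0.6, giving a positive time-correlation (2*0.6 - 1 = 0.2).
toy_run = [1]
for _ in range(99_999):
    toy_run.append(toy_run[-1] if rng.random() < 0.6 else -toy_run[-1])

print(round(lag1_autocorrelation(qm_run), 2))   # close to 0
print(round(lag1_autocorrelation(toy_run), 2))  # close to 0.2
```

Comparing the measured lag-1 autocorrelation against the roughly 1/√n scatter expected for independent outcomes is the sort of low-noise test the comment suggests no one currently performs.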

All,

Regarding the initial state of the universe:

Perhaps we could take an example model, one I have been working on. I believe physicists would call it a "toy model", although I am not entirely sure whether the nomenclature carries the pejorative tone physicists are wont to direct at others, non-physicists in particular, although I have learned here that they are quite good at clubbing each other like cavepeople as well.

Anyway, in the model I have been working on there are cycles of cycles, but there is one point that seems to correspond with our view of the end of one major cycle and the start of the next. That is the point where the two fundamental particles s and t are arranged in an extremely energy absorbent configuration on an enormous scale. So, I think of a super massive black hole (is that the largest, if not, keep going) where the surface is s and t particles confined in a very dense matrix. Each particle is orbiting with a radius (Rs and Rt which are both close to 1 at this point) and each particle is spinning and each particle is vibrating (kinetic/thermal). The two particle types s and t are equal and opposite, charged magnetic dipoles. s is charge -1/3. t is charge +1/3. And they obey Maxwell’s equations. Anyway, as more and more energy and particles are absorbed, something eventually gives (there is no singularity), and bang, off we go. What is it that gives? Maximum energy density of that rigid dense hot plasma? Whatever it is that gives, well that is your initial state. Even so, that state is not completely defined. It could be that it is set off by absorbing something big, even for the smbh. On another cycle, it could be an accumulation of gas and dust that pushes past the limit and sets off the bang.

I’m not sure if this is entirely relevant to your discussion, because it has been too brutal for me to follow in detail. However, two points: 1) the initial condition can be different on each cycle or each occurrence of a bang, and 2) each bang is chaotic – can mathematicians prove that chaos precludes superdeterminism?

Best,

Marko

p.s. In case you are interested, the model also imagines those s and t particles go on to make a physical spacetime with one s and one t in each octant of a cubic matrix. s particles in the matrix mark space. t particles in the matrix mark time. Asymmetry has transferred energy between s and t, such that s orbit (Rs) in each octant >> Rt in that octant. (Rs = 1/Rt). Physical spacetime is elastic, thus yielding gravity and general relativity. All other matter is built from s and t particles. It’s fairly easy to work out the formulas for the standard model particles, with a few exceptions. Gen II/III fermions are higher energy states of the Gen I particle formula. Also, the photon is a pure wave in the spacetime matrix.

Sean S.

Okay, you got me. I said I have a witness in one case and the actual e-mail thread in the other. I suppose I can block out part of the e-mail thread to preserve anonymity (I feel like the Justice department at this point) and request—what?—a notarized statement from the witness? Or should I request that he get online and verify my account of the situation? Or will you then try to dismiss what he says as easily as you want to dismiss what I am saying?

Do you honestly believe that I would make up all of the precise details of that exchange? Or that I am misremembering something so singular? Are you suggesting that I do not have the documentation I claim to have, and cannot produce the exact quotes from the e-mail exchange? And if I do, are you then going to accuse me of inventing them?

I will provide the proof that you want to insinuate I do not have. I just want you to agree beforehand what evidence you will find acceptable, so I am not wasting my time providing this evidence just to have it questioned in turn. Just as I continue to answer Sabine's questions even though I have already answered them.

But here's what I want in exchange. If you name the evidence you find satisfactory that my account is accurate and I provide it, will you apologize for pushing the situation to this extreme? Sabine's non-responsiveness is there for you to verify in the thread above, as is my own responsiveness. But you seem so wedded to your false-equivalency claim that you can't be bothered to read that over with the requisite care and attention. So in your quest to supposedly improve the tone, you have decided to implicitly question my honesty and integrity. I will answer your challenge. Just make clear what—if anything—will satisfy you.

Andrei

"Not at all. It is your assumption that superdeterminism can be ruled out by using photons originating soon after the big-bang. I do not agree with this, because we do not know anything about those photons. If they were created as predicted by classical EM, they should be correlated just like any other particles, as I have argued. If not, I would ask you to justify their lack of correlation."

Good, we now have something to bet on. I think that what we are seeing here is that you do not understand what a statistical correlation is. But let's avoid the discussion and get straight to the bet.

Set up two photon detectors pointing in opposite directions. Let them detect photons from the CMB. Check the photons by using a birefringent medium for polarization in two directions that differ by 45° and record the results as a string of "H"s and "V"s, corresponding to Horizontal and Vertical polarization in the given directions. Match the two lists by time recorded. Now: I will wager $1,000 at 1,000-to-1 odds that these lists pass every test for statistical independence that has ever been devised. You say "they should be correlated just like any other particles". I say they will not be correlated. At all. Zero.

Are you willing to take the bet? Or did you have something else in mind when you claimed that they should be correlated? Can you clearly (as clearly as I have) explain what you have in mind by that claim? You write as if a single pair of particles can be correlated or not. That is nonsense. Please explain what you have in mind.
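The independence test behind this bet can be sketched in a few lines. This is a hypothetical illustration only: simulated coin flips stand in for the two detector records, and the function and variable names are mine, not part of the bet.

```python
# Chi-squared test of independence on two paired "H"/"V" records,
# matched by position (standing in for "matched by time recorded").
import random
from collections import Counter

def chi_squared_independence(a, b):
    """Chi-squared statistic for independence of two paired H/V records."""
    n = len(a)
    counts = Counter(zip(a, b))   # the 2x2 contingency table
    chi2 = 0.0
    for x in "HV":
        row = sum(counts[(x, v)] for v in "HV")       # marginal of list a
        for y in "HV":
            col = sum(counts[(u, y)] for u in "HV")   # marginal of list b
            expected = row * col / n
            if expected > 0:
                chi2 += (counts[(x, y)] - expected) ** 2 / expected
    return chi2

# Simulated stand-ins for the two detector records.
rng = random.Random(0)
a = [rng.choice("HV") for _ in range(100_000)]
b = [rng.choice("HV") for _ in range(100_000)]

# With one degree of freedom, a value above ~3.84 rejects independence at
# the 5% level. Independent records score ~1; identical records score ~n.
print(chi_squared_independence(a, b))   # small: records look independent
print(chi_squared_independence(a, a))   # huge: records are identical
```

This is, of course, only one of the "tests for statistical independence that have ever been devised" mentioned in the bet; real CMB data would replace the simulated lists.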

Sabine

I'm afraid that I am paying too much attention to what you have written for you to get away with this. Would that you would pay one tenth of that attention to what I have written.

" As I have told you several times, superdeterministic models explain GHZ (or other QM) correlations as good or as bad as QM *by assumption*."

Yes, you have said this several times. But you have also said that what it is to explain a phenomenon by appeal to initial conditions is to provide an initial condition that is simpler to write down than the phenomenon being explained is to write down. And no theory can do this "by assumption" (even if you emphasize "by assumption"): it has to be done by actually providing the goods, the simple initial condition and dynamical laws from which the phenomenon follows. I claim that no universal initial condition for a string of 100,000 runs of the GHZ experiment will ever be simpler to write down than the statement of the phenomenon, which I have given above. Do you actually intend to dispute this?

"No experimental setting ever follows from "purely mathematical considerations". Who or what do you think set up the experiment, or the pseudorandom generator, who or what do you think the experiment is made of and so on and so forth, and what is the probability for that? There is your a priori assumption."

Which part of "The measure in this case is a priori, since it follows from purely mathematical considerations, but it is not a priori in its use since that depends on the contingent claim that the settings of the apparatuses are determined by the sequence of parities of the digits" did you not understand? How can I break this down even simpler?

"Tell us how to calculate the probability that a superdeterministic theory will give the wrong outcome, and do so without using a dynamical law and without using an a priori probability distribution."

I have already done this, but one more time: it is a mathematical fact that any state of the three particles in a local theory can at best be predisposed to provide an acceptable reaction to 3 of the 4 measurement settings. It is a mathematical fact that the parities of the digits I have mentioned are not statistically correlated. It is a physical possibility to build an experimental arrangement that uses the parity of those digits to set the apparatuses. Hence the settings of the apparatuses will be statistically independent of the physical state of the particles, and hence 1 in 4 GHZ experiments, on average, will fail.
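The 3-of-4 fact invoked here can be verified by brute force over all local deterministic assignments. A minimal sketch, assuming one common sign convention (product of the three ±1 outcomes equal to +1 for the all-X setting and -1 for the mixed settings); the impossibility of satisfying all four constraints at once does not depend on the convention:

```python
# Each particle i carries predetermined answers (x_i, y_i) in {+1, -1} for
# an X or Y measurement. Enumerate all 64 local assignments and count how
# many of the 4 GHZ setting-constraints each can satisfy.
from itertools import product

def satisfied(assignment):
    """Number of the 4 GHZ constraints met by a local deterministic model."""
    (x1, y1), (x2, y2), (x3, y3) = assignment
    return sum([
        x1 * x2 * x3 == +1,   # all three apparatuses set to X
        x1 * y2 * y3 == -1,   # settings X, Y, Y
        y1 * x2 * y3 == -1,   # settings Y, X, Y
        y1 * y2 * x3 == -1,   # settings Y, Y, X
    ])

best = max(satisfied(a) for a in product(product((1, -1), repeat=2), repeat=3))
print(best)  # → 3: no local assignment satisfies all four constraints
```

The reason is a parity argument: the product of the three mixed-setting constraints algebraically equals the all-X product, so the four required signs are mutually inconsistent. With settings chosen independently and uniformly, a local model therefore fails at least 1 run in 4 on average.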

I have told you how. Just stop falsely saying I haven't.

Carl,

"To me, the arguments that I, and Tim, have offered already in this thread are already damn convincing. But Andrei and Sabine seem to find them still lacking, so it would be nice to have a more unassailable argument."

I certainly find the arguments against superdeterminism lacking and I was very clear why.

I have made the claim that classical electromagnetism (as it is, without any assumption about initial conditions) is an example of a superdeterministic theory.

I have explained multiple times that in such a theory the reason some states are excluded has nothing to do with fine-tuning, hyper-fine-tuning, etc.

You have not provided any argument against this claim. Mateus did not provide one either, presumably because he was too busy. Tim is the only one who gave an answer, saying that the fields are too weak to explain the correlations between objects situated at large distances.

I think this argument is wrong because it does not take into account the magnitude of the change for the entire system implied by even a tiny local modification.

Revisiting Tim's example about the computer programmed to set the detector according to the digits of PI, he claims that a computer programmed differently could not possibly influence the source, presumably because the difference between the EM fields generated by the two situations is insignificant at a large distance. Yes, it appears to be so until you look at what consequences such a tiny change implies for the past. A computer programmed differently requires a different past state at the beginning of the experiment. So, can you modify that past state in such a way that it affects only the computer and not the particle source? I have not seen any justification for such a claim.

If you replace classical EM, a field theory, with a set of Newtonian billiard balls, the answer is easy. You just change the initial state of the computer-billiard balls and leave the source-billiard balls unchanged. So, yes, such a theory does comply with the statistical independence assumption. But in EM you need to change the fields everywhere, otherwise you are in a forbidden physical state. There is no way your change of the detector could leave the source untouched. It might be a tiny change, but the system is chaotic and it will most likely evolve in an entirely different way.

I have also put forward another challenge:

Find a system containing two charge distributions at two distant locations such that the two subsystems are independent while still obeying the laws of EM. Try it with 2, 3, or 4 charged particles and see how it goes.

Andrei

Tim,

"you have also said that what it is to explain a phenomenon by appeal to initial conditions is to provide an initial condition that is simpler to write down than the phenomenon being explained is to write down"What I said is that the initial condition is always part of the model, so it's silly to complain that it's being chosen.

" I claim that no universal initial condition for a string of 100,000 runs of the GHZ experiment will ever be simpler to write down than the statement of the phenomenon, which I have given above. Do you actually intend to dispute this?"

I don't know what "universal" means and I don't know what "the statement of the phenomenon, which I have given above" refers to, so please clarify.

"It is a physical possibility that an experimental arrangement using the parity of those digits to set the apparatuses."That's both unproved and irrelevant.

"Hence the settings of the apparatuses will be statistically independent of the physical state if of the particles"Do I have to remind you yet one more time that the theory has hidden variables?

Sabine

From your latest:

""you have also said that what it is to explain a phenomenon by appeal to initial conditions is to provide an initial condition that is simpler to write down than the phenomenon being explained is to write down"

What I said is that the initial condition is always part of the model, so it's silly to complain that it's being chosen."

Please pay attention to the nested quotes here:

"You write: "What justifies an initial state? That it works. What does it mean that it works? It means that the initial state together with a time-evolution explains observations. By explaining I mean it's a simplification over just writing down the measurement outcome.""

As I said, you claimed that what explanation via an initial state amounts to is simplification: stating the initial condition is simpler than stating the measurement outcome. That is what you wrote. If you want to take it back (it's a pretty indefensible view) then take it back, but pretending that you never wrote it is really beyond the pale. The extra set of quotes indicates that this is not even the first time I have directly cited that very passage back to you: I already did just 3 days ago. How you can have the effrontery to suggest that you did not say it is quite beyond my comprehension. We are reaching Orwellian territory. We have always been at war with Eastasia.

"I don't know what "universal" means and I don't know what "the statement of the phenomenon, have given above" refers so, please clarify."

"universal" means the quantum state of the entire universe. That is what the dynamics runs off of. The statement of the phenomenon was the bit about when the apparatuses are all in the X configuration there is always an odd number of "up" outcomes, and when in the one-X-and-two-Ys configuration there is always an even number of "up" outcomes.

You don't think that the apparatuses can be hooked up to the pseudo-random number generators? You think that is not physically possible? Pray tell: what will happen if you try? Will the ghost of Bohr appear and block you? What in the world do you think would prevent it?

"Do I have to remind you yet one more time that the theory has hidden variables?"

Well, I have been asking what "quantum mechanics with extra variables sprinkled over it" means for some time, and you have never deigned to respond. But let me get this clear. These hidden variables, which are somehow associated with the particles, will have a distribution that is correlated with the distributions of the parities of the digits of pi and phi? You are claiming you can specify a theory like that? Have at it! That should be amusing.

Andrei,

"Revisiting Tim's example about the computer programmed to set the detector according to the digits of PI, he claims that a computer programmed differently could not possible influence the source, presumably because the difference between the EM fields generated by the two situations are insignificant at a large distance. Yes, it appears to be so until you look at what consequences such a tiny change implies for the past. A computer programmed differently requires a different past state at the beginning of the experiment. So, can you modify that past state in such a way that it affects only the computer and not the particle source? I have not seen any justification for such a claim."

Really?

So you have these three computers, all programmed to act as pseudo-random number generators. The computers are built on completely different base architectures, running different programs written in different programming languages. But you think that some theory can ensure that the electromagnetic state of these computers influences the initial state of the particle triple in just the right way to prepare them for the experimental condition they will eventually meet. And you can't provide any sort of a positive example of a theory like this. And you call this physics?

The Goldbach Conjecture is a claim. Claims are not proof. Replying that, "Okay I will bet you $X to a penny that you can't produce a counter-example to the GC" is still not a proof.

It seems to me there are two things being said here that were not necessarily unreasonable:

1) I can't think of any model for super-determinism that makes sense to me, so I am not interested in discussing or thinking about super-determinism.

2) I (different I) have some thoughts on how empirical evidence could be obtained by experiment to confirm or refute some models of super-determinism, and I think that is interesting and worth following up. At worst we might arrive at a definite proof that S-D cannot work, which would be more than we have now.

As I read the thread, proponents of (1) insulted position (2), and things went downhill from there. I don't think anything has been established that refutes either 1) or 2), not to my satisfaction, anyway. I am biased, however, because I think one should be polite to one's host and save the insults for some other medium (that I'm not on), such as Twitter. (This doesn't mean one can't make objections. Objections can be done politely.)

Speaking of amounts of money to be placed on propositions, I would contribute to a GoFundMe for research on this issue. That's the kind of bet I would enjoy making.

Tim,

There are two propositions that are relevant to this discussion:

1. Superdeterminism is not science (you say it requires fine-tuning, retrocausality, I say it does not, classical EM being an example)

2. Superdeterminism is not true.

It seems to me that we have moved from the first proposition to the second. Your proposed experiment involving CMB is supposed to experimentally falsify superdeterminism by providing an example of truly independent systems.

Those presumably independent systems are protons and electrons situated far away in opposite directions. It is implied that the laws of classical EM cannot be applied to them because their relative speed during the universal expansion was higher than the speed of light. There are two problems with this argument.

1. I did not claim that classical EM is a TOE. I don't think that it can explain gravity, let alone the Big-Bang and inflation. I have only claimed that it is an example of a scientifically respectable superdeterministic theory that is capable of providing at least some qualitative explanations for quantum experiments in a regime where both (QM and classical EM) are expected to work.

2. If we were in possession of a TOE, we could find that it, too, is superdeterministic, so your example with the CMB would not work as expected. For example, the Big-Bang might create the original particles in some correlated state, those correlations might have been preserved during inflation, etc.

"Set up two photon detectors pointing in opposite directions. Let them detect photons from the CMB. Check the photons by using a birefringent medium for polarization in two directions that differ by 45° and record the results as a string of "H"s and "V"s, corresponding to Horizontal and Vertical polarization in the given directions. Match the two lists by time recorded. Now: I will wager $1,000 at 1,000-to-1 odds that these lists pass every test for statistical independence that has ever been devised. You say "they should be correlated just like any other particles". I say they will not be correlated. At all. Zero."

If the emission of CMB radiation is correctly described by classical EM, or some other field theory that works on that regime, they will be correlated in the sense that they represent the magnitude of the electric and magnetic fields at the location of the detectors originating from the same system of charged particles. A change of the polarisation of one photon requires a change in the charge distribution/momenta which in turn would imply a change in the polarisation of the other photon.

I agree that no statistical test would find this correlation because:

1. The position/momenta of all the charged particles are not known to the software performing the tests, so it simply has no way to deduce the correlation.

2. The measurements can only reveal a limited information about the photons because of uncertainty.

By "correlated" I meant the opposite of "statistically independent". I did not imply that there is some trivial correlation (linear, exponential, etc) of the type that is expected to be found by a statistical test. Sorry if I misled you.

Part 1 of 2

Tim;

Regarding a pair of your recent comments:

To begin with; I am not trying to tone down this argument; only you and Sabine can do that.

And I don’t want to act as a mediator in your argument with Sabine; I doubt the both of you would accept any finding by me.

“Just count the times that Sabine asserts ...”[1]

Here’s the biggest reason acting as a mediator is not terribly appealing: it is not the role of a mediator to act as investigator; mediators evaluate evidence provided by the parties. If I’m supposed to be both investigator and mediator, the effort will be a lot of work for nothing.

Imagine I took on the role of investigator/mediator, and then concluded you were at fault; wouldn’t you want to see my compilation of evidence against you or Sabine? You should. And a wise mediator will say to you: “OK, provide that compilation yourself, so you can be sure I don’t miss anything.” I’d say the same to Sabine, but you’re the one commenting on mediation, so I’m saying it here, now, to you.

“Should you take my own characterization of what I do at face value? Of course not!”[3]

I am sure you know how to cite things; don’t expect me to hunt down your claims UNLESS you are willing to accept my conclusions without challenge (which would be very foolish of you). I’m sure you are busy, but so am I. If you want mediation, please provide your cites (I would double-check them). If you don’t provide cites, you don’t want mediation.

“Okay, You got me. I said I have a witness in one case and the actual e-mail thread in the other. I suppose I can ...”[5] “... I will answer your challenge. Just make clear what—if anything—will satisfy you.”[6]

I was not clear (I’ll own this one), so let me be clear now. Even if you provided all the evidence necessary to evaluate your anecdotes, and even if I or a jury determined that your conduct in THAT dispute was beyond reproach, what would that determination tell us about your conduct in THIS argument with Sabine? Nothing. Zilch.

It may be that the physicists in your anecdotes were entirely at fault; that does not make you innocent now. That makes your anecdotes irrelevant here.

In addition, your anecdotes border on ad hominem regarding Sabine. They imply that all physicists are “bad”; and therefore, Sabine must be “bad” (because she’s a physicist). If you deny this malevolent assertion about physicists, then I apologize to you again. But that only adds emphasis to their irrelevance re. your current argument with Sabine.

And I note that this is not the first time you’ve inserted irrelevant claims in this argument.[7][8]

“Philosophers do not, as a matter of course ...”[2]; “Philosophy education is, ...”[2]; “Physics education, in contrast, is ...”[2]; and so on.

These comments appear to be attempts to recast this argument between you and Sabine into an argument between philosophy and physics. They also border on ad hominem; Sabine’s position and behavior are not governed by unfounded generalizations about philosophy or physics training. (These generalizations are unfounded because no foundation was provided for them.)

Equating faultless conduct in the dispute with technical correctness about the topic in dispute is an error. Those two are not necessarily related.

End Part 1 of 2

Footnotes at the end of Part 2.

sean s.

Part 2 of 2

Continuing; Tim wrote:

“And then make an informed judgment. ... If that judgment is that we equally engage in empty rhetorical ploys, so be it. I would be interested to see how you counted them and where you found them. And if that judgment is that one party is much, much more guilty than the other, make that claim and back it up as well. Instructive debate requires intellectual honesty on both sides: one side alone cannot carry it off no matter how much effort is put in. So when one party is not engaging honestly and openly, there is nothing to do but either call them out or call it a day. These issues are too important to call it a day.”[4]

First of all, Sabine cannot carry off instructive debate on her own if the person she’s debating is not being intellectually honest. That may be what’s happening here. You may reject that, but the rest of us have reason to accept its possibility.

And producing the results you’d want would be a lot of work. I’d be willing to do it, but I have no confidence that my findings would be accepted. The lack of citation is a sign of that. Pointing at the thread and saying your evidence “is in there” is not a reasonable way to proceed. Telling me to “go count” things is not a reasonable way to proceed. Would you accept a finding that you were at fault based on nothing but a statement that the evidence “is in the thread”? I hope you’d say: Hell. No. So why would I or anyone else accept “it’s in the thread” or “go count for yourself”? If you want mediation, please provide your own compilation of evidence. “It’s in there” is not good enough. If you don’t want mediation, you need do nothing.

As for how Sabine feels about the idea of a mediation, that is her decision to make. Mediators must be accepted by both parties before they even consider the project, and she doesn’t know me from Adam. This is HER thread, after all. She would be within her rights to peremptorily give us both the boot. She has no reason to simply trust my intellectual honesty.

If, as you wrote, “These issues are too important to call it a day”, then they are too important to do sloppily. I volunteer to help--if there’s a commitment to doing it right. Otherwise, no. I got other things to do.

I still believe the best use of everyone’s time would be to declare this thread done and put a stake in it. Given the work required to do mediation properly, and the unlikelihood that both of you will want it, better you both should just go on with your lives.

sean s.

[1] comment by Tim at 08:43, July 29, 2018; para. 1.

[2] ibid.; para. 2.

[3] ibid.; para. 3.

[4] ibid.; para. 3-4.

[5] comment by Tim at 02:20, July 30, 2018; para. 1.

[6] ibid.; para. 4.

[7] comment by Tim at 07:33, July 28, 2018; para. 7-8. I considered commenting on this, but Sabine beat me to the punch.

[8] comment by Sabine at 01:52, July 29, 2018; para. 9.

Not exactly Blue Book, but good enough.

Tim,

You quote me correctly with saying:

SH: "the initial state together with a time-evolution explains observations"And then go on to say:

TM: "you claimed that... stating the initial condition is simpler than stating the measurement outcome"Do I really need to tell you that this is not what I said?

"How you can have the effrontery to suggest that you did not say it is quite beyond my comprehension."Because I didn't?

As to your statement that:

TM "I claim that no universal initial condition for a string of 100,000 runs of the GHZ experiment will ever be simpler to write down than the statement of the phenomenon, which I have given above. Do you actually intend to dispute this?" ... "universal" means the quantum state of the entire universe."Needless to say, if you want to explain (some aspects of) the GHZ measurements, you don't need the initial condition for the whole universe. Unless possibly you also want to measure the state of the whole universe together with the outcome of the experiment, in which case I wish you much fun.

"You don't think that the apparatuses can be hooked up to the pseudo-random number generators? You think that is not physically possible? Praytell: what will happen if you try? Will the ghost of Bohr appear and block you? What in the world do you think would prevent it?"This is a deterministic theory. Pseudo-random generators are not actually random, as the word pseudo might have told you. Sure, apparatuses can and have been hooked up to this and that. Did that make them random? No, it didn't. I hate to break the news but if the time-evolution is deterministic the outcomes were all predetermined long ago.

" I have been asking what "quantum mechanics with extra variables sprinkled over it" means for some time, and you have never deigned to respond"

I have responded to this several times already; you merely failed to notice. How about you stop assigning your mistakes to me? Here it is once again: superdeterministic theories reestablish determinism by hidden variables. When averaged over the hidden variables, you reproduce quantum mechanics by assumption (though of course you'd want to prove that this actually is the case in any given model). That is what "sprinkled over" refers to. The hidden variables also result in correlations between the prepared state and the detector, hence you can't use Bell-type tests to rule them out.

"These hidden variables, which are somehow associated with the particles, will have a distribution that is correlated with the distributions of the parities of the digits of pi and phi? You are claiming you can specify a theory like that?"I haven't claimed anything like that. You are the one who claims it is not possible. You still haven't proved your claim. Look, all your going on about pseudo-random generators is entirely irrelevant because in a superdeterministic theory whether or not you use them and how you initialize them is predetermined already. Besides this, you can't use them to initialize hidden variables if those variables are, well, hidden.

As I told you several times, it's not possible to prove your claim without referring to a dynamical law. I still hope you'll eventually get around to see that.

Sean,

I appreciate your efforts. It would be most helpful, I think, if you could remind Tim to stick to the topic, that being that he is trying to back up his claim that superdeterminism is unscientific.

Tim,

"So you have these three computers, all programmed to act as pseudo-random number generators. The computers are made on completely different base architectures, running different programs that compile into different programming languages. But you think that some theory can provide that the electro-magnetic state of these computers can influence the initial state of the particle triple in just the right way to prepare them for the experimental condition they will eventually meet. And you can't provide any sort of a positive example of a theory like this. And you call this physics?"

The derivation of Bell, GHZ, the free-will theorem, etc. requires that the spins of the particles and the states of the detectors are statistically independent (SI). In order for a theory to qualify as superdeterministic it needs to negate that proposition. It does not need to show that the SI assumption is violated in the "right way". Any way will do.

My claim was that classical EM is superdeterministic. In other words, the state of the particle triple and the state of the computers are not independent parameters. My claim was not that classical EM is true. So, my burden is not to show that the state of the particles depends on the detector settings in the "right way", but only that it depends in "some way". This is another way of saying that just because a theory is superdeterministic does not mean it is true.

Do you still deny that the state of the particles depends in some way (not necessarily the right way) on the detector settings? OK, then tell me how you change the way a computer is programmed without changing its past state, and without changing the fields implied by such a modification at the location of the source, while still complying with classical EM.

Sean S. (yet another anonymous commenter)

You seem to have piles of time on your hands that is being put to no productive use. First you start above remarking that you are going to get more popcorn. Great.

Then you suggest that we end this conversation completely, leaving the entire question apparently up in the air. Great.

Then you start adding in a bunch of weasel words like "may" as in "First of all; Sabine cannot carry off instructive debate on her own if the person she’s debating is not being intellectually honest. That may be what’s happening here. You may reject that but the rest of us have reason to accept its possibility." And you may be an ax-murderer. Great. And maybe you would like to provide this "reason" that the rest of you have. That would be enlightening.

Why do I say that Sabine is being intellectually dishonest? Because she is, in ways that I have already specifically documented and in which, despite that documentation, she continues. See the following comment addressed to her. And if you want to keep this thread focused on superdeterminism, then comment on that. If you want to introduce matters of etiquette, do it in a responsible way, not slathered over with false equivalence.

Sabine

There must be some limit to what you are willing to stoop to, but we have not found it yet. Until you acknowledge what you wrote, and what I have already cited—with complete quotation—at least twice I will not allow you to evade by giving you anything else to deflect with. Complete quote from your last post:

"You quote me correctly with saying:

SH: "the initial state together with a time-evolution explains observations"

And then go on to say:

TM: "you claimed that... stating the initial condition is simpler than stating the measurement outcome"

Do I really need to tell you that this is not what I said?

"How you can have the effrontery to suggest that you did not say it is quite beyond my comprehension."

Because I didn't?"

Here is the passage you wrote—directly cited yet again—in which you directly say what you now claim you did not say:

"What justifies an initial state? That it works. What does it mean that it works? It means that the initial state together with a time-evolution explains observations. By explaining I mean it's a simplification over just writing down the measurement outcome."

One more time: the key sentence.

"By explaining I mean it's a simplification over just writing down the measurement outcome."

The context in which you said this—and yes, no matter how much you seem to want to deny it, you said it—was one in which I was trying to make a key point about the nature of statistical explanation as it is used in physics. And that discussion is important because it touches on the question of what a scientific explanation of a phenomenon is, and why superdeterminism does not supply such an explanation and therefore is unscientific. This is a point Mateus also made, over and over, while you continued to assert falsely that he had made no argument.

Now: stop denying you said what you said. I am not adept at doublethink and have no intention to take up the practice now.

If you want to retract it, retract it. Then we can start a conversation about what scientific explanation of a phenomenon is and why superdeterminism does not provide it.

If you want to defend it, then the first step is to stop denying that you wrote it.

Which is it?

I renew my appeal for a theory of initial conditions.

As Bee just again pointed out - initial conditions + deterministic time evolution --> everything is in principle pre-determined. The 100,000 runs of GHZ experiments and the measurement settings that Alice, Bob and Carl chose - all are fixed by the initial conditions and deterministic time evolution.

We can get rid of deterministic time evolution. Or perhaps what is knowable, even in principle, about the initial conditions is limited. The physical act of determining some of the initial conditions vitiates them enough to alter the future. Or maybe one can show that the initial conditions are under-determined by the present. In any case, some clarification is needed about what it is that we are assuming about (initial conditions + physical laws) that permits us to do scientific experiments.

Ok, so everyone else here seems to have a time-zone advantage over me: a batch of new posts appears overnight while I’m sleeping (in Europe) and by the time I can write something, Sabine goes offline for the rest of (my) day; so I never can respond soon to anything. Oh well.

Here are some responses to the past few days of posts. I’ll ignore the issue of who is arguing disingenuously, sophistically, etc., even though I have my own impressions based on the whole thread (which I read from the beginning once I found out about it).

To Andrei: classical e-m is a deterministic theory, but not a superdeterministic theory. Even though there are indeed mathematical constraints on what fields can be like here given the configuration of particles over there (and these constraints are analogous to the constraint equations of GR, a theory that to my knowledge nobody has ever said was superdeterminist!), these constraints are natural (they follow from Maxwell’s equations) and do not reduce the allowed space of solutions to those equations in any way. Superdeterminism is a very different idea. The usual way of understanding it is as combining (i) a local deterministic dynamics, and (ii) a severe restriction of the allowed space of solutions, a restriction that is intended to guarantee that even though the dynamics is local, if agents do things like GHZ tests and Bell tests, the measurement-results correlations will make it look as though there was something nonlocal going on. (Or if you want to put it more neutrally: guaranteed to reproduce the predictions of (standard, nonlocal) QM.) Because of Bell’s theorem we know that if a local theory is to pull this trick off, there has to be a failure of the very natural presumption of statistical independence between the mechanism that chooses measurement directions and the pre-existing values of the particles to be measured.

Tim’s and my arguments above have aimed to convince you that the restrictions on the allowed space of solutions are extremely severe indeed, and thus – given the way superdeterminism is characterized – deserve to be called “conspiratorial”, and hence dismissed. Have we “proved” this, for all possible superdeterminist theories, in the mathematically rigorous sense that Bell proved his theorem? No, I guess not; I don’t think the very idea of a superdeterminist theory is well enough defined for rigorous proofs of anything. But what we’ve done is give enormously strong plausibility arguments for the claim that superdet theories involve severe, extreme, ad hoc restrictions of the solution space, and hence should be dismissed as unscientific conspiracy theories.

Worse, I think we are literally arguing about nothing. My suspicion is that the set of all possible, non-trivial, superdeterminist theories is the empty set. This brings me to a couple of overdue remarks about kinds of superdeterminist theories or models.

CONT.

Cont’d:

There’s what I would call trivial superdeterminism: the claim that maybe there is a fully deterministic theory out there that can, for at least one possible initial condition allowed by its dynamics, reproduce a quantum-looking world like we see around us. Trivial superdeterminism rests content with this vacuous observation as sufficient to dismiss the conclusion of Bell’s theorem as not forced on us. After all, who knows, maybe our world is like that and just happens to have that one special initial condition (or one of the set of them, if there’s more than one) that makes it look like QM is true and nonlocality a real feature of the world. Sometimes Sabine seems to slip into a way of thinking close to this trivial way of expressing and defending superdeterminism (meaning no offense – but her repeated allusions to how a superdet theory gets the Bell/GHZ phenomena right *by assumption*, and repeated falling back on the initial condition of the universe, do suggest this). I hope it is clear that trivial superdeterminism is a joke, not a scientific suggestion.

The second form of superdeterminism is what I described in part 1 of this post, superdeterminism that also works by selecting only part of the full space of solutions that the theory’s dynamics intrinsically allows. But now, moving slightly away from triviality, the idea is that there may be some non-trivial and non-conspiratorial way of specifying the needed restriction. (Something vague and hand-wavy is usually said about “correlations” between particle states in the early universe.) That sounds slightly more serious, I guess, but in the end it isn’t really. An analogy may help make clear why not.

Most of you believe that the existence of Carl Hoefer in our universe is naught but an accident, a matter of chance, but I hypothesize that you are all wrong. In fact, I claim that there is a deterministic theory such that the eventual existence of a being qualitatively identical to Carl Hoefer is inevitable, a feature of all allowed models of the theory. It’s not a conspiracy, mind you, it’s just that all allowed models of the theory feature correlations in the initial states of all the particles that guarantee the eventual evolution of Carl. Like my theory?

Finally, the 3rd type of superdeterminist theory seems to be what Andrei, and at times Sabine, think there might be out there. The 3rd type has a deterministic dynamics of some sort, and no restrictions on allowed states or initial conditions whatsoever. It just happens to be the case – for reasons we won’t be able to understand until we have the theory laid out before us, of course – that the dynamics always imposes correlations between particle states and locations such that the Bell/GHZ experiments give their misleading (to the weak-minded) impression of something nonlocal going on.

My response to type-3 superdet is to now advocate type-3 Carl-Hoefer-Inevitability theory. You may think that a deterministic dynamics could never be such as to have the existence of Carl Hoefer-like beings be a feature of every solution, but can you prove it? Go on, try!

So, Sabine and Andrei: which type of superdeterminism is it that you think should be taken as a live option? Please let us know. Or if you think my classification is wrong, or has left out some 4th kind of superdeterminism, please do explain it as clearly as you can.

--[the Inevitable] Carl

Tim,

Yes, you were "trying to make a point" but you didn't.

Writing down an initial state without a dynamical law doesn't explain anything. I never said it does, and no matter how many times you misquote me, I will not agree on having said it. I have no idea what you are trying to achieve here.

To reiterate the problem with your argument: You claim that postulating a "hyperfinetuned" initial condition is "unscientific".

First thing to note is that postulating initial conditions is something we always do. There is nothing unscientific about it (not unless you want to claim that most of science is unscientific, but if you want to go there, please go without me).

Second thing to note is that your notion of "hyperfinetuned", no matter how many times you rephrase it in terms of other words, requires a measure of probability which you cannot derive without a) postulating yet another initial condition and b) a dynamical law.

Once you realize this, you may try to prove that there is no initial condition for a theory's dynamical law that has explanatory power. This is not a complicated argument. What is it that you don't understand about this?

Sabine:

In order:

1) "Writing down an initial state without a dynamical law doesn't explain anything. I never said it does, and no matter how many times you misquote me, I will not agree on having said it."

What in the world are you talking about? I never claimed or suggested that you held that the initial conditions *alone* could explain anything! You made this bizarre accusation at least once already and I flatly denied it—I said that we were all granting some deterministic dynamics. Here is what I already said:

"First, when you say "Second, more importantly, you cannot tell whether an initial condition leads to what you call a "generic feature" without knowing what the dynamics of the model looks like."

If you honestly think that this is news to anyone, then you are very, very misinformed. Of course we are assuming, and have been assuming all along, that the dynamics is given. We have also been assuming (although this is not required) that the dynamics is deterministic. And in the case of Bell's theorem we are assuming that the dynamics is—in a perfectly well-specified sense—local. So when you make this comment as if anyone thought otherwise what it shows is that you have no idea what the debate is even about."

July 21, 2018.

Why should I bother to answer your questions if I have already answered them? And more to the point, why do you keep asking them if I have already answered them? What are you trying to achieve?

2)"First thing to note is that postulating initial conditions is something we always do. There is nothing unscientific about it (not unless you want to claim that most of science is unscientific, but if you want to go there, please go without me)."

Actually, postulating initial conditions as precise and detailed as the ones the superdeterminist needs is something we never do. What scientists do do is make typicality arguments about generic initial conditions using statistical-mechanical considerations. See: Maxwell and Boltzmann. If all of the potatoes in Farmer Brown's farm suddenly started looking like Donald Trump, no scientist in the world would say: "Well, this is a deterministic theory so there is some initial condition that leads to this. Let's just postulate that." As Mateus and Carl and I have been saying forever, such an "explanation" would not be considered scientific. Indeed it would not be considered an explanation. It would be tantamount to saying that this remarkable set of correlations just happened by chance. No one would ever accept that.

Con't

3) "Second thing to note is that your notion of "hyperfinetuned", no matter how many times you rephrase it in terms of other words, requires a measure of probability which you cannot derive without a) postulating yet another initial condition and b) a dynamical law."

I never said that you could derive it without a dynamical law. In fact, I said that the natural measure of initial conditions is "natural" exactly because it is preserved by the dynamics, i.e. is equivariant. Again:

"Now, to your false claims about my position. It is flatly untrue that I tried to "sneak in an a priori probability distribution". I have talked about a "tiny, tiny slice of phase space", and explained that this is quantified via the natural measure, i.e. the measure that is preserved by the dynamics. (This is often mistakenly called "Lebesgue measure".) That measure is not a priori because the dynamics is not a priori. I have also used a measure over the space of possible global configurations of the experimental equipment, and justified that by claims about the statistical properties of the parity of the digits of pi and phi, etc. The measure in this case is a priori, since it follows from purely mathematical considerations, but it is not a priori in its use since that depends on the contingent claim that the settings of the apparatuses are determined by the sequence of parities of the digits. That is a sort of experimental arrangement that is manifestly possible, and that any fundamental physical theory ought to make predictions about. And in such a situation, any local theory will get the wrong outcome about one out of four runs."

July 29. 2 days ago.
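[Ed.: the pi-parity arrangement in the quoted passage can be sketched in a few lines. The 50-digit prefix is hard-coded, and the odd/even-to-setting mapping is an illustrative choice of mine, not anything specified in the thread.]

```python
# A toy sketch of the arrangement described above: detector settings fixed
# in advance by the parity of the digits of pi. The digit prefix is
# hard-coded; the mapping (odd -> "A", even -> "B") is an arbitrary choice.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

settings = ["A" if int(d) % 2 else "B" for d in PI_DIGITS]
print("".join(settings[:10]))  # settings for the first ten runs
```

Any local theory must make some prediction for runs whose settings are fixed this way, which is the point of the quoted passage.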

Now: stop with your deflections and explain what your own sentence—which I did not misquote, elide, take out of context, or in any other way fiddle with—namely "By explaining I mean it's a simplification over just writing down the measurement outcome." means, and in what way any possible precise initial condition (together with the dynamics!) can be "a simplification" over writing down the description of the phenomenon, which I have given above.

At least it is less work writing these posts, since I have already answered the questions many times and just have to cut and paste. But that practice is still a waste of everyone's time.

Tim;

Regarding your comment (addressed to me) of 01:51, July 31, 2018:

OK then. You’re looking for reasons to not have me mediate; you could have just said “No” and that would have been that.

Among the reasons you find is that I have refused to take sides; implying that you’d not accept mediation except from a mediator who’d already taken your side. You should realize that mediation does not (and should not) work that way.

“I have specifically already documented and that ...”

Unfounded claims do not constitute documentation.

“If you want to introduce matters of etiquette do it in a responsible way, not slathered over with false equivalence.”

Etiquette? Where did that come from? What matters here is basic human respect, and intellectual honesty. Do you not see that? This conversation has become toxic, and to a great extent futile; that is from lack of respect.

False equivalence? Tim, the equivalence is not false. You don’t like that, but that’s the way it is.

I take this as a rejection of mediation by you; OK. That’s fine.

sean s.

BTW, Tim;

Regarding “Sean S. (yet another anonymous commenter)”: I think that word does not mean what you think it means. I am not an “anonymous commenter”; you see my name; you’ve actually USED my name.

Click on my name and you can send me an email. I know this works because others have already done it from here. Feel free; whenever. I won’t bite.

sean s.

None of my business but - I've previously at this site given a conceptual candidate for a hidden-variable theory which reproduces QM statistics and is deterministic: the simulation candidate; in another universe, in which the speed of light is many orders of magnitude faster, and other properties combine to make fantastically-powerful computers possible, this universe (or just our part of it) is being simulated. Just as we could simulate any (small enough) QM experiment on our computers, that computer simulates at least this whole solar system (the rest might be simulated at much less resolution). As I said before, I don't believe this to be the case, and no such simulation could be done on a computer in this universe, but I don't know of a way to prove it is not the case. (Dr. Hossenfelder might be able to rule it out if the study she proposed in a much-earlier post were funded.)

(Yes, this concept also rules out dualism which many will object to, since they are sure they have souls which no computer could simulate. Again, AlphaGo convinces me that it is possible, in principle, given enough computing resources. In any case, I know of no proof of dualism.)

If it were the case (that we are part of a simulation and all our futures are fixed) would that somehow make life and science not worth continuing? I don't think so. By definition, we would not feel any differently than we feel now, especially if we don't know that it is the case. (If it is a simulation and we are somehow able to determine that, that would be a huge accomplishment by itself. I for one would be proud of us for that; and also proud if we can determine it is not a simulation.)

Arun.

You write:

"As Bee just again pointed out - initial conditions + deterministic time evolution --> everything is in principle pre-determined. The 100,000 runs of GHZ experiments and the measurement settings that Alice, Bob and Carl chose - all are fixed by the initial conditions and deterministic time evolution."

Yes, of course. That's how deterministic (not "superdeterministic", whatever that is supposed to mean, but plain old deterministic) theories work. If either you or Sabine are under the impression that this is news to me, or Carl, or Mateus, or anybody else who has been engaged in this argument then you are not following what we are writing. Nor is it news to Bell, if you go and read "Free Variables and Local Causality". We all know this. So if you are attributing to us any beliefs that require not understanding it you are just mistaken about what we believe and what we are arguing.

The existence of some initial condition or other that, together with the deterministic dynamics, implies the phenomenon you are interested in explaining just means that the theory *permits* or *is compatible with* the phenomenon. It does not at all suggest that the theory *explains* the phenomenon, or indeed that the phenomenon might not rightly be taken to *refute* the theory FAPP, to render it *rationally indefensible* to accept the theory.

Example: Joe the gambler has just rolled 20 sevens in a row, winning millions from the casino.

Theory 1: Joe's dice are rigged. This theory is advocated by the casino.

Theory 2: Joe's dice are fair, and he has just been very, very lucky. This theory is advocated by Joe.

The casino confiscates the dice. They do not even bother to analyze them, since that would require more time and effort than they want to put into the case. When they go to court, they claim that the dice are rigged, and Joe claims the dice are fair. The casino lawyers set up a standard craps table, vigorously shake the dice in a dice cup and throw them so they bounce off the far wall, as is required. They proceed to throw 100 sevens in a row. (By the bye, always a 2 and a 5. In fact, always the 2 on one of the dice and the 5 on the other.) They point out that the chance of what just happened was 1 in 4 x 10^155 if the dice are fair. They rest their case. (I had originally written 1000 throws, but when I plugged that into the calculator to figure out the odds, the calculator literally returned the value "infinity".)

In Joe's defense, he argues that since the dice are fair, there must be *some* initial condition that leads to the sequence of 100 sevens, as well as the previous 20 sevens. All a 5 and a 2. All with the 5 on one particular die and the 2 on the other. Joe further points out that any specific set of 120 outcomes would have the same chance of 1 out of 5.7 x 10^186. He insists (having read Sabine's blog) that there can be no *scientific* objection to his theory. He asserts that we always have to posit an initial condition, and he is just doing what all scientists do, which counts as a perfectly acceptable scientific explanation of the phenomenon. He says that the casino does not understand scientific method, which has been ably exposited by Sabine in her defense of "superdeterminism". He proclaims that there is no scientific reason to reject his assertion that the dice are fair. He rests his case. Then he sits down, picks up the dice, and idly rolls another 50 sevens in a row. All with 2 and a 5.

You are on the jury. How do you find? Innocent or guilty?
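[Ed.: for what it's worth, the odds Tim quotes check out. This is just arithmetic on fair dice, assuming each specific ordered outcome (a given die showing 5, the other showing 2) has probability 1/36 per throw.]

```python
from math import log10

# Each throw of two fair dice: a specific ordered outcome (this die
# shows 5, that die shows 2) has probability 1/36.
# 100 identical courtroom throws:
odds_100 = 36 ** 100
print(f"1 in 10^{log10(odds_100):.1f}")  # ~ 1 in 4 x 10^155, as quoted

# All 120 throws (20 at the table plus 100 in court):
odds_120 = 36 ** 120
print(f"1 in 10^{log10(odds_120):.1f}")  # ~ 1 in 5.7 x 10^186, as quoted
```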

"the probability distribution is unspecified, but often implicitly assumed to be almost uniform over an interval of size one"

What's wrong with this: as long as you don't have the final answer (the complete theory able to predict what you measure), every possible value is a priori almost equiprobable within the current incomplete theoretical description (just because this description is not able to specify this value). So you use a uniform distribution, from which you can conclude that the measured value is indeed extremely unnatural, which merely means that a theoretical piece is missing in the current model.

This is a valuable motivation for trying to find the missing piece: when you have a candidate, you should also have a new probability distribution associated with it that might tell you whether the same measured value is now natural within the new, more complete theoretical description, or still not natural enough to be satisfied with it ...

What did I miss?
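[Ed.: one thing the comment above glosses over is that the verdict "extremely unnatural" depends entirely on which a-priori distribution is assumed. A minimal numerical sketch — the specific numbers, a 10^-34 observed ratio and a 60-decade log-uniform range, are illustrative assumptions, not derived from any theory:]

```python
import math

# Illustrative only: how "unnatural" a small observed value looks depends
# on the assumed prior. Take an observed dimensionless ratio of 1e-34
# (roughly the scale of (m_Higgs / m_Planck)^2) and ask for the prior
# probability of drawing a value at least that small.
observed = 1e-34

# Uniform prior on [0, 1]:
p_uniform = observed / 1.0
print(p_uniform)  # 1e-34: looks like an absurd coincidence

# Log-uniform prior over 60 decades, i.e. on [1e-60, 1]:
p_loguniform = (math.log10(observed) - (-60)) / 60
print(round(p_loguniform, 3))  # 0.433: nothing special at all
```

The same measurement is a one-in-10^34 fluke under one unjustified prior and utterly generic under another, which is exactly the objection raised in the head post.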

Carl,

„classical e-m is a deterministic theory, but not a superdeterministic theory. Even though there are indeed mathematical constraints on what fields can be like here given the configuration of particles over there (and these constraints are analogous to the constraint equations of GR, a theory that to my knowledge nobody has ever said was superdeterminist”

In the context of Bell’s theorem superdeterminism means the denial of statistical independence between the detectors’ settings and the hidden variable. Not more, not less.

Let’s define what this statistical independence means. Wikipedia reads:

„In probability theory, two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other.”

As you say above, given the state of the detector („the configuration of particles over there”) - „there are indeed mathematical constraints on what fields can be like here” (at the location of the source). But the fields at the location of the source will determine the spin of the emitted particle, so the probability that the source emits a particle with that particular spin does depend on the detector being in that particular state. Therefore, classical EM is superdeterministic. And yes, I claim that GR is superdeterministic as well.

„ these constraints are natural (they follow from Maxwell’s equations) and do not reduce the allowed space of solutions to those equations in any way”

What those constraints imply is that you cannot assume that the probability to get a particle with a certain spin is the same for any state of the detectors, so the derivation of the theorem is impossible.

„Superdeterminism is a very different idea. The usual way of understanding it is as combining (i) a local deterministic dynamics, and (ii) a severe restriction of the allowed space of solutions, a restriction that is intended to guarantee that even though the dynamics is local, if agents do things like GHZ tests and Bell tests, the measurement-results correlations will make it look as though there was something nonlocal going on.”

I think you are confusing the premises of Bell’s theorem with its conclusions. The premises are:

1. The theory allows for statistical independence between the hidden variable (say the spins of the particles) and the settings of the detectors.

2. The theory is local.

The conclusion is:

The theory cannot reproduce QM’s predictions for the GHZ experiment.

Once you deny at least one of the premises (in our case the first) the conclusion does not follow. So there is no need to bother with the GHZ experiment anymore, it is irrelevant. Our theory – classical EM, or GR is not ruled out as a possible explanation below QM.

Sure, if the theory is also true, it should explain the entire QM formalism (including GHZ), but this is a different discussion. Let’s see if we can reach an agreement up to this point.
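[Ed.: to make premise 1's role concrete, the GHZ contradiction itself can be verified by brute force. If statistical independence holds and pre-existing values exist, each particle carries fixed ±1 answers for both possible measurements, and one can check whether any such assignment reproduces the four QM predictions for the GHZ state — a standard exercise, sketched here:]

```python
from itertools import product

# Brute-force check of the GHZ argument: a local deterministic model with
# pre-existing values assigns fixed numbers x_i, y_i in {+1, -1} to each
# particle's X and Y measurements. QM predicts, for (|000> + |111>)/sqrt(2):
#   x1*y2*y3 = y1*x2*y3 = y1*y2*x3 = -1   and   x1*x2*x3 = +1
# No assignment satisfies all four: multiplying the first three equations
# gives x1*x2*x3 = -1, since each y_i appears squared.
count = sum(
    1
    for x1, x2, x3, y1, y2, y3 in product([+1, -1], repeat=6)
    if x1 * y2 * y3 == -1
    and y1 * x2 * y3 == -1
    and y1 * y2 * x3 == -1
    and x1 * x2 * x3 == +1
)
print(count)  # 0: no local assignment of pre-existing values works
```

Denying statistical independence evades this argument precisely because the pre-existing values are then allowed to depend on which measurements will be performed.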

„So, Sabine and Andrei: which type of superdeterminism is it that you think should be taken as a live option? Please let us know. Or if you think my classification is wrong, or has left out some 4th kind of superdeterminism, please do explain it as clearly as you can.”

I think classical EM is superdeterministic as it is, without any assumptions about initial states, etc. I do think that with a specific choice of initial conditions (an EM field playing the role of the QED vacuum) it has a good chance of reproducing QM’s formalism. This proposal is pretty old and it is known under the name of SED (stochastic electrodynamics). It has already explained a lot of stuff (black-body radiation, the quantum oscillator, the Lamb shift, specific heat, Van der Waals forces, the stability of atoms – to some extent).

"What justifies an initial state? That it works. What does it mean that it works? It means that the initial state together with a time-evolution explains observations. By explaining I mean it's a simplification over just writing down the measurement outcome."

I parse the above sentences to:

What justifies an initial state is that the initial state *together with a time-evolution* is simpler than just writing down the measurement outcome in the explanation of the observations.

The alternative parsing of the above, leaving out the part I put in bold in my parsing, is:

As I said, you claimed that what explanation via an initial state amounts to is simplification: stating the initial condition is simpler than stating the measurement outcome.

Carl Hoefer wrote:

....there has to be a failure of the very natural presumption of statistical independence between the mechanism that chooses measurement directions and the pre-existing values of the particles to be measured.

If initial conditions + laws of physics determine all subsequent physical states, then how is the presumption of statistical independence very natural any more? Isn't it something that has to be proven rather than assumed? (Along the lines that the influence of charges over here on charges over there can be made small enough in the initial conditions that statistical independence can perhaps be proven?) That is, you can create the states of the experimenters and apparatus independently of the preparation of the particles to be measured.

I went into the corporate rather than academic world. In the corporate world a savvy director with jurisdiction would require a joint integrated discussion document and tie it into clearly defining positions and articulating differences precisely.

But no one’s getting paid, so this is a blood sport. Anyone need a fresh club? I say that sarcastically because there has to be a better way to get clarity.

Hi Tim,

1) Great then, glad that's settled.

2) It is correct that in most cases we do not specify the exact initial condition to arrive at an exact final state, but make statements of the sort "if the initial condition is of type X and the dynamical law looks like that, then the final state has property Y". That's a corollary of what I said. You still, of course, make assumptions about the initial conditions, that being the relevant point.

TM: "initial conditions as precise and detailed as the ones the superdeterminist needs ..."

There, you just did it again, rephrased your idea of "hyperfinetuning" with yet another vague turn of words.

3)

"SH: your notion of "hyperfinetuned", no matter how many times you rephrase it in terms of other words, requires a measure of probability which you cannot derive without a) postulating yet another initial condition and b) a dynamical law.

TM: I never said that you could derive it without a dynamical law."

Great, then how about you stop declaring that you can make claims about superdeterministic theories without doing a derivation based on the dynamical law.

TM: "in what way any possible precise initial condition (together with the dynamics!) can be "a simplification" over writing down the description of the phenomenon"

You are once again trying to put words into my mouth that I didn't use. I never said anything about "precise initial conditions". You did.

What do I mean when I say that a dynamical law together with an initial condition provides a simplification? I explained this above, and I also explained this in an earlier blogpost. Think about the QM experiments that you like so much: the assumptions about the initial state *together with* assumptions about the dynamical law allow you to deduce that there must be correlations in the measurement. This simplifies the data set in a quantifiable way. The theory explains something.

Now when it comes to quantum mechanics, the only explanations that you can make are of statistical types. A superdeterministic theory reproduces those, but you don't gain anything from that by itself because quantum mechanics already did it.

In a superdeterministic theory, however, you can in principle make more predictions that could lead to further simplification. To repeat the example I already mentioned several times above, a superdeterministic theory (with certain properties) would lead to time-correlations in the measurement that according to standard QM should not be there. A theory that can do that would explain more than quantum mechanics.
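[Ed.: the kind of test described in this paragraph can be sketched numerically. The 70%-persistence toy sequence below is a stand-in for "a superdeterministic theory with certain properties", not anything derived from one:]

```python
import random

# A sketch of the proposed test: look for time-correlations in a sequence
# of +-1 measurement outcomes via the lag-1 autocorrelation.
def lag1_autocorr(xs):
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

random.seed(1)
# Standard QM: outcomes are i.i.d., no memory between runs.
qm_like = [random.choice([1, -1]) for _ in range(100_000)]

# Toy "superdeterministic" sequence: each outcome repeats the previous
# one 70% of the time, producing a detectable time-correlation (~0.4).
sd_like = [1]
for _ in range(99_999):
    sd_like.append(sd_like[-1] if random.random() < 0.7 else -sd_like[-1])

print(abs(lag1_autocorr(qm_like)) < 0.02)  # True: no memory
print(lag1_autocorr(sd_like) > 0.3)        # True: correlated in time
```

A statistically significant nonzero autocorrelation in real data would be evidence beyond what the quantum-mechanical statistics predict, which is the extra explanatory content being claimed.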

Now please tell me what you think is unscientific about this. Best,

B.

Sabine, Tim, Carl,

Sabine says:

“superdeterministic theories reestablish determinism by hidden variables. If averaged over the hidden variables you reproduce quantum mechanics by assumption (though of course you'd want to prove that this actually is the case in any given model). That is what "sprinkled over" refers to. The hidden variables also result in correlations between the prepared state and the detector hence you can't use Bell-type tests to rule them out.”

Since in superdeterminism (SD) the statistical independence is gone, all GHZ experiments work like a charm.

But this above also sounds an awful lot like Bohmian mechanics (BM) (*), and for sure you want to use QFT in BM. Here are three questions: 1.) Does “sprinkled over” refer to quantum equilibrium in BM? 2.) Doesn’t indeterminism creep in with the use of QFT (**) in BM? 3.) In BM the guiding eq. is non-local. But the determinism in SD just liberates you from this very “non-locality” (***). Isn’t this a bit weird?

How about pursuing the following tactics? We just assume - for the moment – SD is correct and have a look what the consequences will be. (I personally also think that SD is a blind alley, I believe that QM randomness in observer independent triggered reductions is the key in the dynamics for our world, but I try hard not to be biased.)

-----------------

(*) Bohmian mechanics (BM) is (q, ψ), i.e. Schrödinger eq., guiding eq. and quantum equilibrium hypothesis (ρ=|ψ|²). In BM the particle positions q, the local beables, are guided by the non-local, first order, non-linear guiding eq. As far as I know BM has no measurement problem, because particles always have a position and superpositions do not stay superposed because they are guided non-linearly. (How to derive classical physics in the limit is still an open question - from Dirk-André Deckert.)

(**) How to incorporate QFT (e.g. Lamb shift, Casimir effect) into BM, and whether it then still is deterministic or becomes indeterministic, is not yet decided; Roderich Tumulka did it indeterministically, where particle creation in Fock space is a random, Markov process. QFT is important, because e.g. the electrostatic 1/r, which in non-relativistic BM is just given, is an emergent property (one-photon exchange) of QFT.

(***) as always “non-locality” in quotes to code spooky-action-at-a-distance, but no faster than light signaling.

Spooky-action-at-a-distance and entanglement are the very essence of QM. The free Schrödinger eq. is the relativistic Klein-Gordon eq. (∂²+m²)φ=0 in slow motion. The Klein-Gordon eq. as well as its “square root”, the Dirac eq., just project out solutions that are not on mass shell, which ensures E²−p²=m² (E=mc² for p=0). Virtual “particles” usually are way off mass shell, because their propagators D obey (∂²+m²)D=δ. Their job in a Feynman diagram is to connect spacelike separated particle creation/annihilation events. Summed up in the path integral, the probability amplitudes match all experiments so far.

CONT.

Generally, in a deterministic theory like Newton’s clockwork universe the only room left for statistics (or statistical independence) is our (or the universe’s own) ignorance about the initial conditions. Sabine’s search for superdeterminism with just a single initial condition is only the consequent next step in an entirely deterministic view of the world.

Sabine:

“But quantum mechanics isn't deterministic, so it will never predict the actual outcome of a measurement. Superdeterminism can do that. To pick an example, it might tell you that two measurements are correlated in time.”

Thus, if SD works, there would be no spooky-action-at-a-distance, no “non-locality” and no problem with the QM measurement. A perfect world for a physicist who believes in an entirely deterministic world. All will be predictable (at least for the universe).

If QM is real at all, then at most only the deterministic, exclusively unitary evolution holds. Thus, let us bury the Born rule and let us hope for new physics that will solve the BH info loss problem. (With an indeterministic evolution the BH info loss problem does not even exist.)

Pseudorandom number generator and an algorithm running on a computer:

Sabine:

“No experimental setting ever follows from "purely mathematical considerations". Who or what do you think set up the experiment, or the pseudorandom generator, who or what do you think the experiment is made of and so on and so forth,…”

If an algorithm, e.g. one calculating the binary digits of π, is running on a computer, why do the electrons in the logic gates dance in the same rhythm as the algorithm, the program, the code? A believer in SD or even strict determinism/reductionism would I guess answer either 1.) this is the miracle of coincidence caused by the initial condition and a local deterministic time-evolution law or/and 2.) since only bottom-up causation is allowed, the electrons in the computer determine the code of the program and the particles in the programmer control his life and his writing of the code, and both match perfectly. This is achieved via hidden variables.

Thus, in SD the program code or even π from the Platonic realm cannot influence how the electrons in the physical realm dance through the logic gates to produce π - no top-down causation. This is weird.

Tim’s “Waiting for Gerard” is not only about correlation versus causation, it is also about reductionism and top-down causation - “Vladimir is controlling the light”.

Vladimir and the switch correspond to the programmer and the code. The light dancing in the same rhythm as the switch corresponds to the electrons dancing in the computer. This is the good old ‘mind body problem’ in an updated version.

Here is Sabine’s view in "Free will is dead, let’s bury it":

"... top-down causation (which doesn’t exist) ..."; "... there isn’t any known law of nature that lets you meaningfully speak of “free will”."

And yes, in an entirely deterministic world an element of randomness is forbidden and only bottom-up causation is allowed, otherwise it would not be consistent.

“This randomness cannot be influenced by anything, and in particular it cannot be influenced by you, whatever you think “you” are. There is no free will in such a fundamental law because there is no "will" - there is just some randomness sprinkled over the determinism.”

CONT.

Tim already wisely substituted Vladimir, the no-free-will zombie, with a random number generator, but in SD this device too becomes a zombie, controlled by hidden variables. (Sorry – I was biased in this last sentence, but exaggeration sometimes helps. Now the zombie “slowly walks over to the window, opens it, and flings himself out. … Curtain.”)

By the way [the Inevitable] Carl in SD is not only inevitable, but also a zombie ;-)

And since Tim, rightly for an SD disbeliever, is defending “non-locality” in QM and statistical independence, when it comes to SD he becomes so … well … biting.

SD is an extreme version of an entirely deterministic view of the world. And this deterministic view is actually very widespread. Most physicists accept only the exclusively unitary evolution in QM, because it is deterministic and the QM measurement would only break this beautiful linearity. That´s the reason why MWI is so widespread. They do not see that probability amplitudes are after all … well … probabilities when squared, and that probabilities have to be realized, have to become actualities (*), and then randomness enters.

Sabine writes in her essay "The Case for Strong Emergence", in "Limits of Reductionism":

“we could use effective theories derived from the standard model plus general relativity to calculate, say, election outcomes.” Sean Carroll also believes in a possible physical Brexit prediction - watch here the clash between Sean Carroll’s reductionism/determinism and George Ellis.

Now on to the question that is personally important to me: Can I control a light switch?

This is the question of top-down causation. (The big influences the small - this is just my shorthand definition.)

George Ellis, the defender of top-down causation, writes in "Recognising Top-Down Causation" (also an FQXi contest entry):

“The assumption that causation is bottom up only is wrong in biology, in computers, and even in many cases in physics, for example state vector preparation, where top-down constraints allow non-unitary behaviour at the lower levels. It may well play a key role in the quantum measurement problem (the dual of state vector preparation)”; “... top down causation. This paper proposes that recognising this feature will make it easier to comprehend the physical effects underlying emergence of genuine complexity, and may lead to useful new developments, particularly to do with the foundational nature of quantum theory. It is a key missing element in current physics.”

Sabine, on the contrary, writes in her essay:

“Top down causation is the idea that the laws of a system at low resolution can dictate the laws at high resolution. ... Again we don’t know any case in which this happens. But even if there was it wouldn’t make strong emergence possible; it would merely mean that in at least some range of resolution the existing theories must be equally fundamental. The reason is, as previously, that (a) we already have a bottom-up causation by way of effective field theory and (b) any other theory is either compatible with that or wrong.”

-----------------

(*) When probabilities are realized in a reduction and become actualities, this is similar to localized beables in the sense of GRW, where the wavefunction becomes localized around a single “particle” and thus is disentangled when a flash acts. And yes, admittedly, local beables, the positions q in BM, are a bit different – they always exist, but are guided non-locally.

Anonymous Sean S.

You write

"“I have specifically already documented and that ...”

Unfounded claims do not constitute documentation.

“If you want to introduce matters of etiquette do it in a responsible way, not slathered over with false equivalence.”

Etiquette? Where did that come from? What matters here is basic human respect, and intellectual honesty. Do you not see that? This conversation has become toxic; and to a great extent futile; that is from lack of respect.

False equivalence? Tim, the equivalence is not false. You don’t like that, but that’s the way it is.

I take this as a rejection of mediation by you; OK. That’s fine."

1) My claims about how Sabine has ignored answers given and also ignored questions asked of her are documented by direct quotations, with citations of the date on which the posts occurred. That is called "documentation" in English.

2) Yes, this is about basic human respect and intellectual honesty. You are displaying neither. You just called clearly documented claims "unfounded". Why? Because you want to attack me and are unconcerned that your attacks are demonstrably false. I told you to look at my most recent response to Sabine, which is documented in precisely the way I indicated. So which is it: a) you couldn't be bothered to look, but just wrote your false accusation for fun or b) you did look, saw the documentation, and just flat lied because it suited your purposes? Please indicate a or b, or if you can manage some c that does not show that you are either irresponsibly lazy or intellectually dishonest, I am curious what it will be. You want to know why threads get toxic? Exactly because of behavior like yours. Ironic, eh?

3) Yes the equivalence is false, your completely unsupported assertion to the contrary notwithstanding. For example, look at point 2 above: I have already shown that you are so set on finding something wrong with what I have said and absolving Sabine of her documented evasions, false assertions, and either sloppy unawareness or intentional misrepresentation of what others have been posting, that you will immediately produce your own demonstrably false claims.

4) From your other post: You are anonymous in the obvious sense—which has been in discussion elsewhere in this blog—that there is no way to tell who "Sean S." is in real life, so none of what you post can possibly come back to you in real life. It is my observation that it is exactly that anonymity that emboldens people to post things that are sloppy, false, inflammatory, and toxic. If you want to stand behind what you write here, stand behind it: reveal your complete real name and who you are. Carl Hoefer has. I have. I know who Mateus is, and several others, and I definitely do not know who "physphil" or "Paul Hayes" or "dark star" or "Black Hole Guy" are. Given what they have written here and how they have behaved, they have every practical reason not to reveal who they actually are. How about you? Look over what you have posted and make a decision: Am I willing to take public responsibility for what I have written? I am. Now how about you?

Sabine

The term "corollary" has a meaning you are perfectly aware of. When you say

"It is correct that in most cases we do not specify the exact initial condition to arrive at an exact final state, but make statements of the sort "if the initial condition is of type X and the dynamical law looks like that, then the final state has property Y". That's a corollary of what I said."

you are either unaware of what you said (and what you say below) or are intentionally misusing the term.

At no point in this blog post have you ever mentioned the sort of explanation that Maxwell and Boltzmann gave, which is a prototypical statistical-mechanical explanation that precisely *does not* require reference to any *particular* initial condition. What Maxwell and Boltzmann do is show that any initial condition will lead to a certain sort of behavior (such as non-decreasing entropy or approach to the Maxwell-Boltzmann velocity distribution) so long as an assumption of statistical independence holds. That assumption is called the Stosszahlansatz, and the proposition that it continues to hold over the relevant period of time (i.e. the period of time for which there are empirical data) is called the "Hypothesis of Molecular Chaos". These are the precise analogs of the statistical independence assumptions used by Bell and by PBR, and denying them just in order to save some pet claim that you are attached to (such as "Physics is local" or "The wavefunction is epistemic" or "Smoking does not cause cancer" (an example we have often used) or "Joe's dice are not loaded" (see my recent response to Arun) or "The closet light switch is on the left" (see "Waiting for Gerard")) is so manifestly unscientific as to be rightly called "silly" and "conspiratorial".
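The role the independence assumption plays can be made vivid with a toy computation (my own illustration in Python, not Boltzmann's or Bell's actual mathematics): sample "hidden states" and "detector settings" independently and their sample correlation vanishes, whereas a conspiratorial sampler that lets the settings track the states produces perfect correlation – the kind of dependence a superdeterminist must posit.

```python
import random

random.seed(0)  # reproducible toy run
N = 100_000

# Statistical independence (the Stosszahlansatz-style assumption, in toy form):
# "hidden state" and "detector setting" are drawn separately.
pairs_ind = [(random.choice([-1, 1]), random.choice([-1, 1])) for _ in range(N)]
corr_ind = sum(a * b for a, b in pairs_ind) / N

# A conspiratorial sampler: the setting is forced to track the hidden state.
pairs_dep = [(s, s) for s in (random.choice([-1, 1]) for _ in range(N))]
corr_dep = sum(a * b for a, b in pairs_dep) / N

print(corr_ind)  # close to 0: settings tell you nothing about the states
print(corr_dep)  # exactly 1.0: settings "know" the states
```

Nothing in the code enforces the near-zero correlation in the first case; it falls out of drawing the two variables separately, which is exactly what the independence assumption asserts about nature.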

Carl and Mateus and I and others have been giving examples of this sort over and over and over, and you have yet to respond to them. Indeed, you have yet to even acknowledge that they exist.

Con't.

So there is all the difference in the world between saying what I said about statistical explanation and saying what you constantly and repeatedly have said. For example:

"It's perfectly legitimate to chose the initial conditions that contain correlations that you observe at the final time" (July 13)

and (the first bit is you quoting me)

""some technique is used that extracts predictions from a model even though no precise initial condition is postulated. There are clear mathematical techniques for doing the latter, and the output of those techniques does depend on the totality of all initial conditions of the theory, not on some single initial condition."

Those techniques use probability distributions over initial states. You can use them for subsets of the universe (because you have an ensemble), not for the universe itself. I already told you above several times that you keep confusing the two things." (July 22).

(Note that what Maxwell and Boltzmann and Bell and PBR do works just fine for the universe itself. If you don't think so then you don't understand what they did.)

and

"The initial condition of the universe is a postulate that is contained in the model." (July 22)

The phrase "the initial condition of the universe" is naturally read as a reference to the precise, completely detailed initial condition of the universe, and that natural reading is further reinforced by your constant repetition that in a deterministic theory the laws and the initial conditions determined everything that happens. That is true, of course, only of the precise, exact, down-to-the-last-physical-detail initial conditions, which, I have said repeatedly, are *never* postulated as part of any serious model. This has been the heart of this dispute forever: I have been insisting that actual scientific explanations have a generic character, which is essential to their being explanations, and that appeal to the precise initial conditions *is not* the sort of thing that provides a scientific explanation. But now you are claiming that my account of statistical explanation is a *corollary* of your constant appeal to a single, precise initial condition! I think I have already mentioned "chutzpah".

No, my account of statistical explanation *is not* a "corollary" of what you have ever argued. It is, in fact, the direct *denial* of what you have constantly argued, as evidenced in the quotes above.

If you concede this, we can finally make progress. If not, then we will stick here until the issue is resolved.

Andrei,

OK, we have it down to a single sentence.

"The derivation of Bell, GHZ, free-will theorem, etc. requires that the spins of the particles and the states of the detectors are statistically independent (SI). In order for a theory to qualify as superdeterministic it needs to negate that proposition. It does not need to show that the SI assumption is violated in the "right way". Any way will do."

By "the right way" I mean, of course, that the failure of SI *determines how the particles will respond to the experimental arrangement that they later encounter, in just the way needed to reproduce the prediction*! So if the computers affect the initial state of the particles, but not in the right way, then noting the influence is pointless and does no explanatory work. Having a statistical dependence *that is irrelevant to the observed outcomes we are trying to explain* is tantamount to having no statistical dependence at all as far as we are concerned. So your idea that you somehow can avoid Bell's argument by positing *irrelevant* failures of statistical independence is manifestly absurd: only *relevant* failures make a difference, where "relevance" is defined as "relevant for the phenomena we are trying to explain".

I will plead one more time that no one post another word to this thread without having carefully read Bell's "Free Variables and Local Causality". Bell is, as usual, meticulous and uses phrases like "sufficiently free for the purposes at hand" (for example, for the purpose of accounting for the GHZ phenomena).

What you have done by saying that SI does not need to be violated in the right way is to concede that your supposed example of superdeterminism simply cannot do what the superdeterminist wants it to do. So your supposed example of a superdeterministic theory is pointless.

Re: the Lucky-Sevens dice example.

There are a number of things in the example that don't correspond with reality, such as how casinos deal with dice, but the most relevant is the question of how the dice always land on the same side. Loaded dice (one side weighted) do not always stop on that side. Magnetic dice would require a magnetic casino table. This should be studied further, not just by one side, but by an impartial team, with a control. There could be some new physics involved!

(My guess is that it was a setup by the casino, with a magnetic table, to get revenge against Joe for some reason, by getting him convicted of fraud.) (Ladies and gentlemen of the jury, as Joe's lawyer I ask you, who provides the dice? The casino does! Who provides the table? The casino! Who is more likely ... but I digress.)

One might well decide: I have more interesting or productive things to do, and don't want to be part of such a study; whatever the answer is, I am betting that no new physics is involved. But to go on to claim that studying an unanswered question empirically would be unscientific does not seem warranted, to me.

Tim´s “Joe the gambler” is a great example of how an explanation works.

Theory 1, "Joe's dice are rigged", is the simplest.

For me a good explanation/theory is the one that uses the fewest assumptions.

Tim,

To explain a final state in detail you need an initial state in detail. If you specify the initial states only by some generic properties, you get generic outcomes (and, if you have a useful theory, explanations or predictions for those). I am not distinguishing the two because we can only ever make measurements to finite precision, hence we always deal with collections of state. Stop blaming your misunderstandings on me.

"The phrase "the initial condition of the universe" is naturally read as a reference to the precise, completely detailed initial condition of the universe, and that natural reading is further reinforced by your constant repetition that in a deterministic theory the laws and the initial conditions determined everything that happens. That is true, of course,"

Yes, it's true. And of course no scientist ever in practice actually specifies the initial state of the universe, unless - as I told you above - you actually intend to measure the state of the universe. In practice you specify some properties of the state. Say, it has some distribution of fluctuations. You evolve the state forward. You read off the properties of the final state (say, the spectral index). That either explains or doesn't explain something.

The point is, as I have told you a dozen times or so, that the justification for choosing the initial state (or class of states) is that it works. There is nothing unscientific about this choice. You always do it.

"This has been the heart of this dispute forever: I have been insisting that actual scientific explanations have a generic character, which is essential to their being explanations, and that appeal to the precise initial conditions *is not* the sort of thing that provides a scientific explanation."

No, that's not the dispute. The dispute is that you are *claiming* that you need a precise initial condition, where "precise" is just another word for what you call "hyperfinetuned". We can all see that you "insist" on this, but that's not an argument. Go and prove it. That's the dispute. Just go and prove that it is true what you claim, that any superdeterministic theory, regardless of the dynamical law, requires an initial condition that contains so much information that the theory no longer has explanatory power. I am telling you that you cannot prove this without having a dynamical law.

I notice you are now trying to sneak in additional assumptions about the theory in question. I have no problem with that in principle. If what you want to say is that a superdeterministic theory has no explanatory power if it fulfills conditions A, B, C, and you can prove your claim using conditions A, B, C, then fine. That would be progress.

Jim V

The relevant hypotheses were "The dice are fair" and "The dice are rigged". Like any sane person, you went with "The dice are rigged", even though the data are *consistent* with the hypothesis that the dice are fair, i.e. there are initial conditions of the fair dice theory that lead to the observed result. The rest of your comment is, of course, irrelevant.
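The statistics behind that sane choice are easy to make explicit. A quick sketch (Python; the run of 170 consecutive sevens is the thread's dice example): under the fair-dice hypothesis the likelihood of the data is about 10^-132, which is why "there exist initial conditions consistent with the data" carries no explanatory weight.

```python
import math

p_seven = 6 / 36   # with two fair dice, 6 of the 36 outcomes sum to 7
n_rolls = 170      # Joe's streak in the example

# log10 of P(170 sevens in a row | the dice are fair)
log10_likelihood = n_rolls * math.log10(p_seven)
print(log10_likelihood)  # about -132.3
```

Any rigged-dice hypothesis that assigns the sevens even modest probability beats the fair-dice hypothesis by more than a hundred orders of magnitude, which is the whole content of "any sane person goes with rigged".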

But, to make the point, here is a direct quote from Sabine:

"It's perfectly legitimate to chose the initial conditions that contain correlations that you observe at the final time" (July 13)

All that "contain correlations that you observe at the final time" can mean is something like "contain correlations that, via the dynamical laws, give rise to the correlations in the data you want to explain". And my whole point, and Carl's, and Mateus's, is that it is *not* perfectly legitimate to do this! It is *not* a scientifically acceptable way to go about "accounting" for an observed correlation to just build it into the initial conditions.

So once you strip away the irrelevant part of your answer, we find that you agree with every other sane person and disagree with Sabine. If only Sabine could be brought to see the light.

Sabine:

First, read my response to Jim V and the loaded dice case.

Now to your present position. I apparently will have to document to you what you yourself have repeatedly said, because all of a sudden you want to shift ground and pretend it was what you were saying all along. I suppose the shifting ground is progress, but the pretending is just childish.

You now claim that

"To explain a final state in detail you need an initial state in detail. If you specify the initial states only by some generic properties, you get generic outcomes (and, if you have a useful theory, explanations or predictions for those). I am not distinguishing the two because we can only ever make measurements to finite precision, hence we always deal with collections of state. Stop blaming your misunderstandings on me."

First: the claim about making measurements to finite precision is both novel and completely irrelevant. But let's play along with your pretension that you understood all along that "we always deal with collections of states". If that is true, then the *wrong* thing to do is "not distinguish the two" (i.e. not distinguish between positing a specific initial state and positing some class of initial states); the right thing is rather to *sharply* distinguish the two and to forcefully deny that you are *ever* talking about a specific initial state! And even more important, if you have all along been thinking about predictions somehow derived from a *collection* of initial states and not from a *single* initial state, then a lot of what you have written as trivially obvious is either flatly false or in need of extensive defense that you have not attempted to give. That is, any *charitable* reader would assume that you must mean a specific state, not a generic state, simply because it is only under that assumption that your claims make sense. Whenever that occurs, I blame my misunderstandings on you.

How about some examples.

Con't

You wrote to Mateus on July 5th

"Any theory with a Hamiltonian evolution can produce any correlation. All you have to do is take the present state and evolve it backwards. This will give you an initial state from which you get whatever you observe. Your criticism is hence unscientific itself."

Now: if you mean the *precise* present state can be evolved back to some *precise* initial state, and if you add your famous claim that "It's perfectly legitimate to chose the initial conditions that contain correlations that you observe at the final time" (July 13), then we get something that is at least coherent, even though it is also completely wrong. But if you always had in mind *generic* states and not *specific* states, then the breeziness with which you assert these things is completely unjustified. Suppose I have a completely deterministic theory, but the dynamics is chaotic (like the dynamics of a pair of dice rattling around in a cup, to take a timely example). Then there is *no* "initial state" (i.e. state characterized only within the error bound of our measurements... which seems to be what you have in mind) that will "get you whatever state you observe". Given only finite precision and a chaotic dynamics, *no* "initial state" of the fair-dice deterministic dynamics will get you the observed behavior of 170 7s in a row. None. Zero.

So adopting your new explication of what you had in mind all along, a bunch of stuff you said before becomes flatly false. It is not true, given this new understanding, that "any theory with a Hamiltonian evolution can produce any correlation". And it is not a coincidence that one sort of deterministic "randomizer" uses a system with a chaotic dynamics, namely the devices by which winning lottery numbers are chosen (ping-pong balls bouncing around in a large chamber with elastic walls). Perhaps you would be willing to admit that to prepare the initial conditions of such a device so that it would spit out a specified sequence of balls would require hyper-hyper-hyper-hyper fine-tuning the initial conditions, far beyond our capacity to achieve and much, much, much, much beyond fixing the initial conditions within presently existing measurement bounds.
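The claim about chaotic dynamics is easy to check numerically. A minimal sketch (Python; the logistic map is my stand-in for dice rattling in a cup, not anything from the thread): two initial conditions that agree to twelve decimal places, far finer than any measurement, produce completely different trajectories within a few dozen steps.

```python
# Sensitive dependence on initial conditions in the chaotic logistic map
# x -> 4x(1-x), a toy model of a chaotic deterministic dynamics.
xa = 0.123456789          # one initial condition
xb = xa + 1e-12           # a second one, closer than any lab could resolve
max_gap = 0.0
for step in range(60):
    xa = 4 * xa * (1 - xa)
    xb = 4 * xb * (1 - xb)
    max_gap = max(max_gap, abs(xa - xb))
print(max_gap)  # grows from 1e-12 to order 1: the trajectories decorrelate
```

Since the gap roughly doubles each step, fixing a long run of outcomes in advance would require specifying the initial state to a precision that grows exponentially with the length of the run, which is the "hyper-fine-tuning" at issue.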

What you claim to have been thinking all along either 1) is not at all what you were thinking all along or 2) was what you were thinking all along, but you were so confused that you failed to see that the claims you were making were plainly false.

If you take that in—which I have just demonstrated with direct quotations—that will be real progress.

(cont'd)

2. You say:

"The point is, as I have told you a dozen times or so, that the justification for choosing the initial state (or class of states) is that it works. There is nothing unscientific about this choice. You always do it."

Usually this statement is fine, e.g. in boundary value (BV) problems, but you want to extend the idea to a superdeterministic theory. Now the BCs play a much larger role, as I understand it: they are responsible for "explaining" how a local, deterministic theory can consistently reproduce an (apparently) non-local phenomenon, no matter how many times and at how many locations the phenomenon is demonstrated. I think that places a much greater burden on you to show that "it works" -- you can't just extrapolate from experience with ordinary deterministic theories and assume it still applies.

In particular, can you show that the assumption that "it works" is self-consistent when applied collectively to all experimental demonstrations of the phenomenon?

In a BV problem you get to choose your BCs freely and evolve the system forward, like you said. Or, like you suggested elsewhere, you can start from a state you want to reproduce and evolve it backward to find the required BCs, assuming the governing equation is time-reversal invariant. In BV problems, one generally restricts attention to the smallest subsystem that manifests the phenomenon and ignores everything else, but in a superdeterministic theory things don't seem so simple to me: the BCs you posit for one subsystem aren't automatically consistent with the required BCs for another subsystem which demonstrates the same phenomenon. For example, when evolving the states of two subsystems back to where their light cones intersect, you might find that the inferred BCs for subsystem A are incompatible with the inferred BCs for subsystem B (or C, or D, or...). Trying to show they will always be consistent seems like a very daunting (impossible?) task; see comment #1 above.

Hi Sabine,

This is more a sociological comment than a direct response.

First, I have been following your blog from its earliest days because I have found your views interesting, often thought provoking, and a product of someone with a mind of her own rather than someone who follows the herd. "Naturally" I bought your book shortly after it came out and enjoyed it (clicked on the "buy" link in your blog to give you a little extra!), even though I didn't agree with some of your diagnosis of why fundamental physics has gone astray. So in general I've been interested and usually sympathetic to what you write; I definitely give you the benefit of the doubt, that you have good reasons for what you say and that you are open minded.

This comment thread has left me feeling differently, especially in how you have interacted with Mateus and Tim. Clearly you disagree with their position that superdeterminism is an unscientific approach to theory, and you try to explain why. But as I watched the back-and-forth between you and Mateus, I saw you making claims about him that I thought were wrong at first sight, such as your repeated assertion that he wasn't presenting an argument when in fact he was (as he noted, you just didn't like his argument). He restated his argument and you made the same claim; it was very confusing to see you denying what was in front of you, since that seemed out of character for you. The discussion between the two of you became more disturbing when you quickly dismissed some technical claims he made, even after he told you it was his area of research focus and that he had read many and written some papers on the topic. What made it disturbing is that you seemingly gave no weight to all the time he has spent studying and thinking about the issues you were discussing -- it's his research area! -- and instead trusted your own (much less extensive) experience and intuition in that area. Why no professional respect? In the end, you simply banned him from contributing.

When Tim jumped in, it looked like more of the same. Tim is often blunt but so are you, so no problem, right? In the past I haven't seen Tim go into full attack mode unless someone insults him first. I think that's what happened here. Tim "scolded" you a bit for not paying attention in his July 15 (4:16AM) comment (partly justified IMO), but then you lobbed the first ad hominem in response:

"You then accuse me of not reading what you wrote and continue with proclamations about your grandiosity. [...] How do you not notice this, oh Tim Maudlin, the great philosopher?"

The personal attacks escalated after that and still continue... Like it was with Mateus, you appear very confident that you have things figured out and that Tim is wrong if he disagrees. Even though Tim has devoted much of his career to foundational issues, and as a philosopher he has been trained (and trains others) to think carefully and critically about a topic, you seem to give little weight to his arguments. I don't understand what makes you so confident that he is the one missing something important. Maybe he is, but where is the professional respect and open-mindedness?

Let's just say I'm glad I bought your book before I read this thread...

Tim;

“1) My claims about how Sabine has ignored answers given and also ignored questions asked of her are documented by direct quotations, with citations of the date on which the posts occurred.”

Oh good! You have a document! Excellent! Would you please be so kind as to share it with us?

“2) Yes, this is about basic human respect and intellectual honesty. You are displaying neither. You just called clearly documented claims "unfounded".”

Disagreement is not the same as disrespect. I’m sure you know this.

“2) ... you want to attack me and are unconcerned that your attacks are demonstrably false. I told you to look at my most recent response to Sabine, ...”

I haven’t “attacked you” and I don’t want to; I just disagree with some of your claims. Disagreement is not attack. You know this.

Your posts don’t provide a satisfactory basis for blaming all things bad on Sabine; nor do they exculpate you.

“2) ... So which is it: a) you couldn't be bothered to look, but just wrote your false accusation for fun or b) you did look, saw the documentation, and just flat lied because it suited your purposes? Please indicate a or b, or if you can manage some c that does not show that you are either irresponsibly lazy or intellectually dishonest, I am curious what it will be.”

C: I’ve read much of the thread and find no reason not to blame both of you. It’s a long thread; I’m not done; but so far ...

“3) Yes the equivalence is false, your completely unsupported assertion to the contrary notwithstanding. For example, look at point 2 above ...”

I don’t know why I’d think your comments are blameless, or that your conduct has been any better than Sabine’s.

“3) ... I have already shown that you are so set on finding something wrong with what I have said and absolving Sabine of her documented evasions, false assertions, and either sloppy unawareness or intentional misrepresentation of what others have been posting, that you will immediately produce your own demonstrably false claims.”

More on this later. I have to wait for the room to stop spinning ...

“4) From your other post: You are anonymous in the obvious sense—which has been in discussion elsewhere in this blog—that there is no way to tell who "Sean S." is in real life, so none of what you post can possibly come back to you in real life. It is my observation that it is exactly that anonymity that emboldens people to post things that are sloppy, false, inflammatory, and toxic. If you want to stand behind what you write here, stand behind it: reveal your complete real name and who you are. Carl Hoefer has. I have. I know who Mateus is, and several others, and I definitely do not know who "physphil" or "Paul Hayes" or "dark star" or "Black Hole Guy" are. Given what they have written here and how they have behaved, they have every practical reason not to reveal who they actually are. How about you? Look over what you have posted and make a decision: Am I willing to take public responsibility for what I have written? I am. Now how about you?”

“... which has been in discussion elsewhere in this blog ...” Really? If someone’s curious about me personally, they could just ask. I’ve got little to hide; I’m actually pretty boring and unremarkable. And I’m not ashamed of anything I’ve written here.

Can you send me a link to these “discussions”? I’d be happy to take questions.

I have a question for you (and any other curious persons): what is my email address?

You should know; anyone can find it from the thread; I even told you precisely how.[1] Others on this blog site have emailed me from here so I know you can. I leave it to you as an exercise. Anyone here can figure out who I am if they care to. I cannot imagine why they’d care to, but the world is full of little mysteries.

I stand behind everything I write; that’s why I always sign my posts.

End part 1.

sean s.

[1] comment by sean s., 13:36, July 31, 2018; para. 3.

Part 2:

OK, now that the room has stopped spinning, I can deal with your paragraph 3).

3) is breathtaking.

In it, you again object to my position that you and Sabine are equally responsible for the toxicity of this thread; you again characterize it as a “false equivalency”. You again reject this equivalency and wish to place the blame squarely on Sabine.

These objections are not new. In the past, you’ve accused me of promoting “the idea that ‘everybody does it’ [engages in toxic behavior] and ‘both sides are to blame’” [2]; that I am “wedded to [a] false equivalency claim” [3]; and that my reasoning was “slathered over with false equivalence.” [4] And now in 3) you again reject equivalency as “false”. [5]

What is new, and breathtaking, is that suddenly, in that same paragraph, you accuse me of being set on “absolving Sabine”. So, after repeatedly objecting to my charge that both of you share the blame for toxic behavior, you suddenly insist I’m trying to pin it all on you.

In this one post, Tim, you complain for the fourth time about unreasonable impartiality and suddenly complain of unfair partiality. I am somehow achieving the remarkable feat of being impartial and partial at the same moment. I am guilty of taking no side and one side, all at the same moment.

The only response I can give to your remarkable inconsistency is ...

¯\_(ツ)_/¯

I’m taking no one’s side, Tim.

sean s.

[2] comment by Tim; 08:43, July 29, 2018; para. 1.

[3] comment by Tim; 02:20, July 30, 2018; para. 4.

[4] comment by Tim; 01:51, July 31, 2018; para. 4.

[5] comment by Tim, 02:50, August 01, 2018; point 3).

PS: I sent you a Facebook request; did you see it? (At least I think it was you.) I’m not hiding. -- ss.

Marty,

I am not confident about what I am stating; I am tired of having to deal with people who don't understand what I am stating to begin with. Look, we are not even having an argument here. 99.9% of this thread is people accusing me of what they want to believe I said.

Your vague accusations about what I supposedly ignored are yet another empty comment that contributes nothing but noise. You call it an "ad hominem attack" that I summarize what someone else said, but have no problem with the original comment. That's, erm, interesting.

Hi again Marty,

There seems to be a comment from you missing. There's one with number "2" but not with number "1" (it's neither in the queue nor in the junk folder).

Regarding your second comment. I actually agree with this, especially with this point:

"In particular, can you show that the assumption that "it works" is self consistent when applied collectively to all experimental demonstrations of the phenomenon?"

I don't know why you want me to show this. Let us say that someone who proposes a superdeterministic model should show this if they want to be taken seriously. I also said above that I suspect that for some models it's possible to show it can't be done.

My point is merely that Tim's claim that all superdeterministic models are unscientific is an unproved conjecture. It is beyond me why he (or anyone) goes around and makes such claims. If what they want to say is "I'll ignore superdeterminism until someone shows that it works because I don't believe it can work," that's all fine with me. That's an entirely different thing than claiming it's impossible for it to work from first principles.

I don’t know how you would manage the logistics, but kialo.com has a neat platform for laying out argument points pro and con in a tree hierarchy. A bit late for this slugfest, but it might be worth checking out for the future.

Tim,

Even in a chaotic deterministic system there is an initial state for your dice. If what you wanted to say is that getting a specific outcome in a chaotic system requires high precision for the initial state, that's correct. I am sure you know that. I don't see the relevance of any of this or how it's supposedly in contradiction with anything I said. Are you trying to say now that superdeterministic theories are chaotic?

Having said this, you seem to now have arrived at a definition of hyper-fine-tuning that at last makes sense. That's progress. I am very happy about this. If you re-read your own comment about the chaotic system, I hope you will note that it makes explicit use of (properties of) the dynamical law. Do we finally agree that you cannot make a statement about whether or not a superdeterministic model is scientific without looking at the dynamical law?

The insistence on a hyper-fine-tuned initial state as evidence against a totally deterministic set of physical laws is beginning to sound like the creationists' Lottery Fallacy.

They (some of them) say that for us (humans) to exist requires such a fine-tuned set of physical constants that the odds against it must be unimaginably high (although they have no way to calculate the odds of any set of physical constants), and therefore it could not have happened by chance and must have been planned.

Similarly, a lottery winner might say, the odds against me winning were millions to one, too much to happen by chance. However, those odds are only relevant if another person predicted the specific winner in advance. Otherwise, the applicable probability is that of anyone (unspecified) winning. Since numbers will be drawn until someone (or some two, or three, ...) wins, that probability is 1.
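The lottery point can be made concrete with a toy simulation. This is only a sketch with invented numbers (1000 players, 2000 drawings, a fixed seed; none of these figures come from the discussion above): the probability that a pre-specified player wins is tiny, while the probability that *someone* wins is 1 by construction.

```python
import random

def lottery_trials(players=1000, trials=2000, seed=1):
    """Compare P(a pre-specified player wins) with P(someone wins)
    when a winner is drawn in every lottery."""
    rng = random.Random(seed)
    specific_wins = 0
    for _ in range(trials):
        winner = rng.randrange(players)  # numbers are drawn until someone wins
        if winner == 0:                  # "player 0" was predicted in advance
            specific_wins += 1
    # A winner is drawn every time, so P(someone wins) is exactly 1.
    return specific_wins / trials, 1.0

p_specific, p_someone = lottery_trials()
print(p_specific, p_someone)  # p_specific is tiny, p_someone is 1.0
```

The asymmetry between the two probabilities is the whole content of the fallacy: the surprise attaches only to the prediction made in advance.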

The odds of some universe happening are less clear, but for most sets of initial conditions something will happen; and what we have is something.

For the specific deterministic model I have described (simulation in a higher universe), initial conditions would have been chosen pseudo-randomly (and quantum uncertainties also pseudo-randomly--hence the determinism), but we could speculate that simulations would be done until something interesting happened, so the probability of something as interesting as what we have might be 1 also. (And if the pseudo-random function was not of extremely high resolution, so as to compute faster, it could be detected empirically and therefore provide an explanation of that detection.)

(Also, in this model, detector settings determine how the quantum simulation will be done, so the two are not statistically independent. It is not a local model, however. It knows that entangled particles are entangled regardless of their separation, like a hidden variable.)

Such a scenario might be impossible, or too unlikely to be worth testing for, but the proof of that would at least be interesting, and I think quite an intellectual feat. An even greater feat would be to prove it for all possible deterministic models. Science and math have taught us that our innate intuitions are not always reliable guides.

Personally, I would prefer a finite, discrete universe with some inherent randomness (not too much, not too little), but I don't insist that the universe will be swayed by my preferences.

Sean S.

For someone who purports to want to reduce toxicity, you are either simply dissembling or are completely incompetent. By saying that I "documented" my claims I meant what any normal person would mean: I provided direct quotes and also the day that the post occurred so anyone could check that no relevant context is missing. But instead of taking the term in its obvious meaning you have to troll about it.

You are perfectly open to revealing who you are...but don't reveal it. That speaks for itself.

I do not respond to facebook friend requests from people who I do not know and who are not my friends.

As for your false equivalences, they speak for themselves as well.

Sabine

How many times do I have to repeat, over and over, that I never said or implied or suggested or in any way made any comment that presupposed that whether a theory is superdeterministic does not depend on what the dynamical equations are? You have made this completely false claim over and over and over, and I have denied it over and over and over in the strongest possible terms. I have documented my denials. But here you are making it again. What could account for this behavior?

Here are the only explanations I can think of:

1) You are just too lazy to actually read what has been written. You have convinced yourself so strongly that anyone questioning you must have made some ridiculous mistake that you figure it's not worth your time to pay any attention to what they have written. You just fire off some pre-programmed response, smugly certain that it must be on target.

2) You are a troll. You are doing this on purpose, and know you are doing it, because you enjoy making people upset at having their position mischaracterized and then attacking straw men.

3) You are incapable psychologically of admitting error. This means that when you have made an error and anyone (me, Mateus, Carl) starts pointing it out, you just have to respond with something that suggests the person pointing out the error has made a mistake. So we get the repetitious manifestly false claims because otherwise you would have to just admit that you made a mistake and you simply cannot stand to do that.

If I have missed an explanation I would be glad to hear it. Not only have I never, ever, in any context and in any way suggested that superdeterminism can be defined without reference to the dynamics, *I have in several different ways referred to the dynamics in defining it*. For example, I have said that the space of initial conditions of a theory is determined by the equations of motion: it is a mathematical fact which initial conditions provide the requisite information to specify a unique solution to those equations. Is the set of initial conditions for a theory a phase space or a configuration space? That depends on whether the dynamical equations are first-order or second-order in time. Obviously the whole concept of the "space of initial conditions" of a theory makes no sense without reference to the dynamics. I have also said that a natural measure over the space of initial conditions must be invariant under the dynamics. That obviously references the dynamics. But you continue to want to pretend that I was somehow confused about this and at long last you have straightened me out. That fable may serve your ego but it is a fable nonetheless, and I won't let it go unremarked.

Con't

Tim,

Yes, I have been wrong all the time with criticizing you. You never said that superdeterministic theories are unscientific. You said that superdeterministic theories with certain dynamical laws are unscientific. I am deeply sorry for my misunderstanding and I am glad we settled this.

Maybe some time in the future you can tell us which dynamical laws are unscientific, and for what reason. I would be interested to hear. But meanwhile I am happy to hear that we have reached an agreement.

John,

I tried kialo just to find that most of my comments were not approved. Why not? Because, as I was told, they have a policy that you're not supposed to discuss the definition of the phrases used in the question. Needless to say, this is an entirely stupid policy because it means that people will talk past each other endlessly. (The thread in question was about free will. You can see how that goes wrong.) It's a shame, really, because I liked the visualization.

Tim,

Let’s define what this statistical independence means. Wikipedia reads:

"In probability theory, two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other."

Please let me know if you agree with this definition or not.
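This definition has a standard operational form: for independent events, P(A and B) equals P(A)·P(B). A minimal sketch (my own illustration, not part of the argument above) that checks this numerically for two events generated by independent simulated coin flips:

```python
import random

def estimate_independence(n=100_000, seed=42):
    """Estimate P(A), P(B), and P(A and B) for two events generated
    independently, so P(A and B) should come out close to P(A) * P(B)."""
    rng = random.Random(seed)
    a = b = ab = 0
    for _ in range(n):
        ev_a = rng.random() < 0.5   # event A: first "coin" comes up heads
        ev_b = rng.random() < 0.5   # event B: second, independent "coin"
        a += ev_a
        b += ev_b
        ab += ev_a and ev_b
    return a / n, b / n, ab / n

p_a, p_b, p_ab = estimate_independence()
# For statistically independent events, this difference is near zero.
print(abs(p_ab - p_a * p_b))
```

A violation of statistical independence would show up as a persistent gap between the estimated P(A and B) and the product P(A)·P(B).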

Let's now see how Bell's theorem is formulated:

Premise 1: The theory allows for statistical independence between the hidden variable (say the spins of the particles) and the settings of the detectors.

Premise 2: The theory is local.

The conclusion is:

Conclusion: The theory cannot reproduce QM’s predictions for the GHZ experiment.

Please let me know if you agree with this description of the theorem or not.

I have already presented evidence that classical EM is not SI (according to the definition above).

Now, assuming that you agree with:

1. the definition of SI

2. the description of the theorem above

3. classical EM is not SI (according to the definition above)

-and with the logical principle that if at least one premise is false the conclusion does not follow, we have:

It does not follow that classical EM cannot reproduce QM’s predictions for the GHZ experiment.

In other words either one of the propositions:

P1: classical EM cannot reproduce QM’s predictions for the GHZ experiment.

or

P2: classical EM can reproduce QM’s predictions for the GHZ experiment.

-could be true.

So, in the light of my argument above let me answer your last post:

"So if the computers effect the initial state of the particles, but not in the right way, then noting the influence is pointless and does no explanatory work."

True, but notice the "if" above. Sure, if the theory fails to reproduce QM it's wrong and if it reproduces QM it could be right. The problem is that we don't know what case we are in and there is no simple way we can find it.

So your idea that you somehow can avoid Bell's argument by positing *irrelevant* failures of statistical independence is manifestly absurd: only *relevant* failures make a difference, where "relevance" is defined as "relevant for the phenomena we are trying to explain".

I did not make the claim that the failures of SI are "irrelevant". I also did not make the claim that they are relevant. I just don't know. And, of course I can avoid Bell's theorem because I have proven that in the case of classical EM its first premise (SI) is false. And the part with "is/is not relevant" does not appear in the premise, but in the conclusion. You are making the same logical fallacy as Carl. You do not want me to deal with Bell in the logically correct way (by attacking the premise) but you want to force me to go the hard way (attacking the conclusion). Sorry, there is no justification for such a request.

What you have done by saying that SI does not need to be violated in the right way is to concede that your supposed example of superdeterminism simply cannot do what the superdeterminist wants to do.

I need not concede such a thing. Just because I cannot prove that SI is violated in the "right way" does not mean that you have proven that SI is not violated in the right way. We need to be careful with this kind of logical implications.

"So your supposed example of a superdeterminsitic theory is pointless."

No, I have proven what I intended to prove: that superdeterminism need not be fine-tuned, retrocausal, non-scientific, etc. What I did not prove is that superdeterminism is true, but I did not make such a claim either.


Here's the first part of my initial comment that seems to have gotten lost somehow:

Hi Sabine,

I hope you and Tim won't mind me jumping in, but your most recent post (12:05 PM, August 01) repeats a couple of claims that have been bothering me in this comment thread. I acknowledge ahead of time that I may be mischaracterizing what you mean by a "superdeterministic theory," but since I haven't seen you give a careful definition of it I'll have to take that risk...

1. You say:

And of course no scientist ever in practice actually specify the initial state of the universe, unless - as I told you above - you actually intend to measure the state of the universe. In practice you specify some properties of the state. Say, it has some distribution of fluctuations.

I think you are *vastly* under-stating how precisely the initial state must be specified, given how important that state is to the "explanatory power" of superdeterminism. If you need a particular distribution of fluctuations, configuration of hidden variables, and so on, then you need a very precise model for how these variables, fluctuations and so on modify the evolution of the state; moreover, you need a very precise model for the origin of those fluctuations, configurations of variables, etc. if you want to be confident your posited initial state is internally consistent and is consistent with established physics. Surely the exact time an experimentalist starts the run is important to the spatio-temporal distribution of fluctuations, so what the experimentalist had for breakfast and how much traffic he/she encountered on the way to work are relevant because they affect when the experiment starts. The temperature may be relevant, the quality of the bi-refringent crystal is important (which in turn depends on its manufacturing conditions), the intensity fluctuations of the laser may be important, and so on. If all the governing equations are deterministic and there is no source of intrinsic (uncaused) randomness, then the initial conditions that ultimately determine all those variables both big and small must have entirely originated in the very early universe. I don't see how you can capture all that with an idealized model that considers only a manageable number of variables in a small region of spacetime.

Making matters much harder, it seems to me, is that one doesn't want to specify the initial state for just one run of the experiment, one wants to make sure that global state provides the necessary boundary/initial conditions (BCs) every time the experiment runs, both in the same lab and other labs throughout the world.

(cont'd)

Sabine,

Again, the weird pose in your most recent response either indicates some bizarre inability to read a post or some even more bizarre inability to respond in a relevant way to it. There are various arguments that I and Carl and Mateus have given against the possibility of a scientifically acceptable superdeterministic theory of the physics of the actual, physical universe we live in. Because the superdeterministic theory must somehow secure systematic, large and critically important violations of the Statistical Independence condition in Bell's proof, and since the scientific unacceptability of that depends on how, physically, the apparatuses are being set, there are a string of separate arguments that run off of various different possibilities for setting the apparatuses. I was just giving a brand-new explicit argument (which is intuitively obvious and should not need to be laid out in such detail) that if the apparatuses are set via a chaotic process (rolling dice) then trying to just "pick initial conditions with the right correlations" is manifestly unscientific and unacceptable. So any superdeterministic theory that can handle physical dice is out. So every superdeterministic theory that can handle the actual physical world is out. QED

I have long been using the deterministic pseudo-random number generator to construct a completely different argument, which also works. So: any superdeterministic theory that has the resources for the existence of any sort of Turing machine is out. So any superdeterministic theory that can handle the actual world is out. QED
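For readers unfamiliar with the object invoked here: a pseudo-random number generator is fully deterministic (the seed fixes the entire sequence) yet its output is statistically indistinguishable, for practical purposes, from fair coin flips. A minimal sketch (my own illustration, with an arbitrary seed; not any specific construction proposed in the thread) of using one to "randomize" binary detector settings:

```python
import random

def detector_settings(seed, n):
    """Generate n binary detector settings from a deterministic
    pseudo-random generator: the seed fixes the whole sequence,
    yet statistically it looks like a run of fair coin flips."""
    rng = random.Random(seed)
    return [rng.randrange(2) for _ in range(n)]

settings = detector_settings(2018, 10_000)
# Fully deterministic: same seed, same sequence, every time.
assert settings == detector_settings(2018, 10_000)
# Yet roughly half zeros and half ones, like a fair coin.
print(sum(settings) / len(settings))
```

The point at issue is exactly this combination: the settings are produced by a deterministic algorithm, so a superdeterministic theory must accommodate them, yet they carry no usable pattern.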

Is that sufficient specification for you? It completes the detailed argument that any living, breathing physicist pursuing a superdeterministic theory as a proposal to account for the actual physical world we live in is wasting her time. It completes it in two different ways. And it never says a word about free will.

Hi Sabine,

I expect you'd agree that writing is a tricky communication medium because words alone are limited in how well they can convey the mood and intent behind them, even when one tries to be careful. So my reason for writing the "sociological comment" may not have been clear; it was not to criticize you for the sake of being critical. Even so, it may have had zero value to you in spite of my intent, but you alone can rightfully be the judge of that because I addressed it to you.

Just to be clear, my main reason for the comment was a hope that you might reflect a little on your "sparring technique" with those who disagree with you. There have been plenty of accusations and personal attacks flying back and forth between you and Tim (and before that, between you and Mateus until you summarily banned him for no good reason that was obvious to me).

"I am not confident about what I am stating,"

I'll take this at face value, but please know that the way you sometimes state things, it looks to me (and apparently also to Mateus and Tim, judging from their reactions) like you are very sure you are right and your "opponent" is wrong, even when discussing something they have spent a lot of professional time on. But again, words alone often do a poor job of conveying intent.

"I am tired of having to deal with people who don't understand what I am stating to begin with."

I believe this completely!

"99.9% of this thread is people accusing me of what they want to believe I said."

You certainly have made it clear that others often misinterpret what you say. But if Tim and others can quote you verbatim (copy and paste) and then address what you say, yet still misinterpret what you mean, I think that means something. To you, it may mean that Tim just wants to argue, to twist your words to prove you wrong instead of having a good discussion. To me it means that you probably aren't being clear enough in how you say what you mean, and so others misunderstand you too easily. (Maybe using extra words to be extra clear would reduce misunderstanding, and immediately clarifying your meaning when someone misrepresents you would also help. But attacking the person, claiming they are carelessly or willfully misrepresenting what you say, will probably not help.)

Anyway, that was the main intent of my comment.

"Your vague accusations about what I supposedly ignored are yet another empty comment that contributes nothing but noise. You call it an "ad hominem attack" that I summarize what someone else said but have no problem with the original comment."

Like I already said, if you see my comment as empty noise, that is what counts -- I meant it to help you see how you came across (to me, anyway) as a non-participant, and if that's unhelpful then you're the rightful judge of its value.

As you noted, I thought your response to Tim that I quoted had a different "flavor" than his original comment. Tim bluntly accused you of not paying attention, which you understandably didn't appreciate. But he was criticizing what he saw as your behavior, not you as a person. Your response looked to me like an attack on Tim-the-person rather than Tim's behavior -- you mocked him personally for grandiose self-promotion and thinking highly of himself. I know I would take that as a personal attack if it were directed at me. If the distinction between his and your remarks looks insignificant to you I can explain further.

Tim,

I'm a bit befuddled by your analogy of Joe and the casino, because, as I understand it, the dice belong to the casino.

-Arun

Hi Sabine,

"I don't know why you want me to show this." [Where "this" was my question: "In particular, can you show that the assumption that "it works" is self consistent when applied collectively to all experimental demonstrations of the phenomenon?"]

I'm just trying to decide to what extent I think the idea of superdeterminism looks like a scientific one. To me, a scientific idea is one that can be, at a minimum, empirically distinguished someday from an appeal to unseen forces guiding the universe in just the right way to produce a phenomenon of interest (e.g., "God made it happen; just don't ask how"). Naively at least, superdeterminism looks to me like an appeal to unseen variables and boundary conditions, motivated by a desire to reproduce a phenomenon, i.e., a kind of "just so" story that is supposed to be a substitute for a deeper theory that explains how/why the phenomenon occurs. But maybe that naive view is wrong.

In my view, dressing up a "just so" story in mathematical clothing doesn't automatically make it scientific; stronger criteria are needed. Also (in my view), it is not enough to claim that we can simply assume the presence of the boundary/initial conditions needed to "explain" a phenomenon superdeterministically -- we also need to show that such an assumption is self-consistent to make it a scientific proposal. If that assumption is actually not self-consistent, we are left with nonsense, a piece of muddled thinking masquerading as a scientific idea.

It seems to me that it is also valid to ask for a strong argument showing that the idea of superdeterminism, at least as it applies to deterministic local explanations of violations of Bell's inequalities, is internally consistent -- the burden to show why superdeterminism is not a scientific idea doesn't automatically fall on those who think it is little more than a fundamentally untestable "just so" story. Exploring this line of thought was the basic reason for my quoted question.

"My point is merely that Tim's claim that all superdeterministic models are unscientific is an unproved conjecture."

Was Tim actually making that claim for all superdeterministic models? Maybe he was, but it seemed to me he was focusing more specifically on whether superdeterminism offers a "scientific explanation" for violations of Bell's inequalities. That would be a narrower question, I think, because the phenomenon and its theoretical description by QM are well known -- it seems easier to argue against a single well known case than the general and largely unexplored case.

Reimond,

Tim’s “Joe the gambler” is a great example of how an explanation works. Theory 1 (Joe's dice are rigged) is the simplest.

For me a good explanation/theory is the one that uses the least assumptions.

JimV, in the comment above you, has provided a good answer.

In addition, there is a hidden assumption that is not stated, which is that Joe the gambler provided the dice. By not stating all the assumptions, one might incorrectly conclude that a particular explanation has the fewest assumptions.

Tim,

Is it your contention that the selection of settings of the measurement apparatus is akin to the kinetic theory of gases (behavior quite independent of detailed initial conditions)?

Thanks!

-Arun

I too have tried kialo. Unfortunately it really all depends on the abilities of the particular moderator.

People may have noticed that there is a "Con't" at the end of my last post, but the continuation was never posted. In case it has gone astray, here it is:

Now of course there cannot be any real progress: I finally try to straighten out what you are now claiming to have meant all along, and you immediately make a claim that is inconsistent with that. You write: "Even in a chaotic deterministic system there is an initial state for your dice." Now: if by "initial state" you mean the most obvious thing—namely a *completely precisely specified* initial state, one that, according to the dynamics, will yield the observed correlations (170 throws of a 7 in a row), yes, there is such an initial state even if the dice are fair. But, as the whole example is meant to illustrate, just "choosing the right initial conditions to account for the data" is not an acceptable scientific move here. Anyone who tries to maintain that the dice are fair in the face of that data, merely by remarking that there is *some* initial condition that yields the data, is not acting scientifically. That person is acting like Gerard in "Waiting for Gerard". That person is a lunatic, dogmatically attached to some a priori view and unwilling to part with the dogma no matter what. Not a scientist. *That is the argument that Mateus and Carl and I have been making over and over and over.*

On the other hand, if by "initial condition" you meant what you recently claimed to mean—not a single initial state but a set of states delimited by some bounds that derive somehow from our present observational capacities—then your statement is just false. In that sense, there is no "initial state"—none—that predicts the phenomenon even though the dynamics is completely deterministic. That's due to the chaotic dynamics.

In your last post you suggested that maybe I have been referencing some property besides merely being superdeterministic in my argument. Well, I have certainly been assuming all along that we want a superdeterministic theory *that can account for the world we live in*, i.e., one that is a possible theory of the actual world. And since the actual world contains chaotic systems, such as dice, and since those systems can be used to randomize apparatus settings, the superdeterministic theory must be able to yield a chaotic dynamics. And the "pre-established harmony" in the initial conditions must be sensitive enough to guarantee an infallible perfect correlation between the state of the GHZ particles on the one hand and, on the other, the dice starting in the unbelievably tiny and thin and spread-out sort of initial conditions that will yield a particular outcome. That is what *cannot* be done on the "loose" reading of "initial condition", and what leads to unscientific behavior on the infinitely precise reading. Take your pick.
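The sensitivity being invoked here is the defining feature of chaotic dynamics. A minimal sketch using the logistic map (a standard textbook example standing in for the dice, not anything proposed in the thread): two initial conditions differing by one part in ten billion, evolved under the same deterministic law, end up macroscopically far apart, so no finite-precision ("loose") specification of the initial state pins down the outcome.

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the chaotic logistic map x -> r * x * (1 - x),
    which is fully deterministic but sensitive to initial conditions."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-10)  # an indistinguishably close start
# Early on the orbits are still almost identical; after a few dozen
# steps the tiny initial difference has been amplified to order one,
# so the "loose" initial condition no longer predicts the outcome.
print(abs(a[-1] - b[-1]))
```

The amplification rate is exponential (roughly a doubling of the separation per step for this map), which is why only an infinitely precise initial condition can fix the long-run behavior.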

Sean S.,

Since Tim wants everyone to read Bell (i.e., we all must seek information) but is unwilling to click on a link (your signature) and then a second link (which shows your email address with your name and affiliation) to find out who you are (i.e., information needs to be spoon-fed), I am tempted to post something here of the spoon-feeding variety. I'll just say that if this blog were to have a mediator, you are the best-qualified person here to be one. :)

Tim,

I noticed the "cont" but there was no further comment from you in the queue. I kind of assumed you had something else to do. In any case, if anyone would prefer to continue this on a different platform, that would be fine with me. I meanwhile find the comment sections on Blogger seriously terrible myself. I'm sorry about this, but there's not much I can do.

Tim,

Regarding the (previously missing) cont. You write

"just "choosing the right initial conditions to account for the data" is not an acceptable scientific move here."As I said, you want an explanation which amounts to a simplification. If that's not the case, I would agree on calling this non-scientific. What I have said is that choosing an initial condition is not by itself unscientific. If it works in the sense of explaining something that's all justification it takes. By claiming that superdeterminism is unscientific you have claimed that there is no initial condition that, together with the dynamical law, has explanatory power. I'm still waiting for the proof.

Marty,

I guess we have differences as to what we consider personal attacks. If someone accuses me of talking out of my ass, I consider this more than enough justification to refuse further communication with that person.

As to Tim, well, as you have noticed he is still commenting. This means I haven't yet given up hope on him.

It seems to me we share the common interest of wanting to figure out if (and if so, which) superdeterministic models can work. I even agree with your point of criticism, though (as I noted earlier) it doesn't seem to apply to retrocausal models.

Andrei

Good, this is useful. You have indeed misunderstood the structure of Bell's theorem. Looking things up on Wikipedia for a specific case has its limitations, and you have bumped up against them.

The statistical independence assumption made by Bell is formally analogous to the Stosszahlansatz, or more precisely the Hypothesis of Molecular Chaos made by Boltzmann. These assumptions do not in the least concern the thing you have found on Wikipedia, namely "In probability theory, two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other.”

Note that this definition refers to only *two* events, and also makes essential reference to a "probability of occurrence" of the other. None of these notions even arises for the principle Bell uses. In fact, the phrase "This pair of events is statistically independent of each other" is, in the relevant meaning of the phrase, complete nonsense. You flip a coin once. I flip a coin once. Each of us gets heads or tails, and our two outcomes either match or they don't. To further ask "Were those two coin flips statistically independent?" is meaningless. So it is not the Wikipedia definition that Bell has in mind.

Now, let's change the situation. You and I each flip our coins 1,000 times. We each now have a list of 1,000 outcomes, either heads or tails. And we can perfectly well ask, as a purely mathematical matter, whether those lists are statistically independent of each other (within some degree epsilon). That means, roughly, that the proportion of heads that occurs on your list is (within epsilon) the same as the proportion of heads that occurs conditional on my coin coming up heads, and the same as the proportion conditional on mine coming up tails. In that sense, my coin coming up heads or tails supplies no information about which way yours landed. There are more sophisticated mathematical measures of statistical independence, but you get the idea.
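That rough criterion can be written out as a short check. This is only a sketch: the epsilon, the list length (10^5 rather than 1,000, to keep sampling noise well below epsilon), and the H/T encoding are all illustrative choices:

```python
import random

def independent(list_a, list_b, epsilon=0.05):
    """Rough check of the criterion above: the overall frequency of
    heads in list_a should match (within epsilon) its frequency
    conditional on each possible outcome in list_b."""
    overall = list_a.count("H") / len(list_a)
    for outcome in ("H", "T"):
        sub = [a for a, b in zip(list_a, list_b) if b == outcome]
        if sub and abs(sub.count("H") / len(sub) - overall) > epsilon:
            return False
    return True

random.seed(0)  # arbitrary seed, for reproducibility
flips_a = [random.choice("HT") for _ in range(100_000)]
flips_b = [random.choice("HT") for _ in range(100_000)]

print(independent(flips_a, flips_b))  # True: fair independent coins pass
print(independent(flips_a, flips_a))  # False: a list is maximally
                                      # correlated with itself
```

Note the check is a property of the two *lists*, not of any single pair of flips, which is exactly the distinction drawn above.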

Let's suppose that in a GHZ set-up, the particle source does not always produce particles in exactly the same state. (If it does, we are done.) Bell's assumption is that the frequency with which particles in a given state occur, over a long run, is statistically independent (in the sense just defined) of the settings of the experimental apparatuses. We can empirically check whether the apparatus settings on the two sides are statistically independent of each other. And the final assumption is that the initial particle states are statistically independent of those settings, as far as any characteristics that influence what the outcome will be.

Is that clear?
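For anyone who wants to verify the quantum predictions being appealed to throughout this thread (an odd number of "up" results for the all-X setting, an even number for XYY-type settings), here is a small self-contained sketch. The conventions are the standard textbook ones, chosen here for illustration: GHZ state (|000⟩+|111⟩)/√2, outcomes ±1 with +1 meaning "up":

```python
# Pauli matrices as 2x2 nested lists of complex numbers
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]

def expectation(ops, state):
    """<state| ops[0] (x) ops[1] (x) ops[2] |state> for an 8-amplitude
    three-qubit state, exploiting the fact that X and Y each map one
    basis state to exactly one basis state."""
    out = [0j] * 8
    for idx, amp in enumerate(state):
        if amp == 0:
            continue
        coeff, new_idx = 1, 0
        for q in range(3):
            bit = (idx >> (2 - q)) & 1
            new_bit = 1 - bit            # X and Y both flip the bit
            coeff *= ops[q][new_bit][bit]
            new_idx |= new_bit << (2 - q)
        out[new_idx] += coeff * amp
    return sum(a.conjugate() * b for a, b in zip(state, out)).real

s = 2 ** -0.5
ghz = [s, 0, 0, 0, 0, 0, 0, s]  # (|000> + |111>)/sqrt(2)

# the product of the three +/-1 outcomes equals the eigenvalue:
print(round(expectation([X, X, X], ghz), 6))  # 1.0  -> odd number of "up"
print(round(expectation([X, Y, Y], ghz), 6))  # -1.0 -> even number of "up"
```

Since the GHZ state is an eigenstate of both operators, these parity facts hold on every single run, not merely on average, which is what makes the GHZ argument so sharp.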

TM, RE: "The relevant hypotheses were "The dice are fair" and "The dice are rigged". Like any sane person, you went with "The dice are rigged"... The rest of your comment is, of course, irrelevant."

Thanks for the reply. Yes, I did consider the rigged hypothesis to be likely, but I also advocated further study before coming to a definite conclusion. As I understand it, Dr. Hossenfelder's main position and contention is that further study of super-determinism is not unscientific and that it remains unlikely but not definitively eliminated. So in my mind the rest of my comment was in fact relevant to this discussion.

I have tried to avoid giving subjective opinions on the course of this debate but since others have begun giving such opinions I will add this to the mix. There were a couple times, one in particular--the same one Arun noted in this segment of the thread--when you stated as Dr. Hossenfelder's position something which when I read I immediately thought, "That is not what she wrote or meant." Words are often ambiguous so I did not assume it to be a deliberate mis-statement but it did seem to be an uncharitable reading.

Dr. Hossenfelder has described, in another post, that she studied math extensively before segueing into physics. I think her arguments here are math-based, as to what constitutes a proof (that super-determinism is impossible). Arguments of plausibility are often used to guide scientific research, but she objects to claims that such arguments are mathematically rigorous--as I see it.

Sabine,

You just replied to Tim (i.e., it’s in the most recent batch of posts from you that I can now see):

Tim:

"just "choosing the right initial conditions to account for the data" is not an acceptable scientific move here."SH reply:

“As I said, you want an explanation which amounts to a simplification. If that's not the case, I would agree on calling this non-scientific. What I have said is that choosing an initial condition is not by itself unscientific. If it works in the sense of explaining something that's all justification it takes. By claiming that superdeterminism is unscientific you have claimed that there is no initial condition that, together with the dynamical law, has explanatory power. I'm still waiting for the proof.”You seem to have reverted to ambiguity when talking about initial conditions: I can’t tell whether in the above you mean to be talking about a precise, completely detailed initial condition, or instead about a general

typeof initial condition (i.e. one that can be instantiated by many many specific, fully detailed initial states). On the first reading your argument unravels completely; on the second reading, Tim has indeed given you the proof (for the Nth time) in his last few posts.First reading (precise IC): the problem is that a completely precise, detailed initial condition of the universe (or anyway a big chunk of it, e.g. our current causal past) is a monstrously big and complex thing, something that could never be expressed in writing or coded in a real computer. So even if I pretend I don’t know that and pretend that you could somehow “give me” that enormous initial condition, I would certainly not consider it a simplification of

anything smaller. Yet the thing to be simplified, by your account of explanation, is vastly smaller indeed. Tim expressed it in one sentence:“In a GHZ set-up, whenever the three apparatuses are all set to measure x-spins there are an odd number of "up" outcomes, and whenever one is set to measure X and the other two are set to measure Y there are an even number of "up" outcomes.”This is a statement of ageneralphenomenon – exactly the kind of thing we normally try to explain in science.But let’s suppose for a second that when you talked about “simplifying the data”, by ‘data’ you meant

the full expression of all the experimental data ever generated.Fine: your monstrously huge precise initial condition is still vastly bigger and more complicated and less simple than a complete specification of the detailed results of every GHZ experiment ever run or to-be-run in our world. So, in conclusion,according to your own account of what explanation is, choosing a precise and detailed initial condition cannot be seen as giving an explanation of the GHZ phenomena. [The initial condition plus dynamics mayentailthe phenomena, but (as philosophers have pointed out for over half a century) entailment is far from sufficient for explanation.] Now, I remind you of the first 2 sentences of your last reply:“As I said, you want an explanation which amounts to a simplification. If that’s not the case, I would agree on calling this non-scientific.”. So, are you ready to agree now that superdeterminism is non-scientific if it simply invokes the existence of some (precise, fully detailed) initial condition?CONT.

Cont’d:

The only way you can avoid it is by insisting that you meant the second reading of “initial condition” (generically-characterized IC). But if you go this way you really have to, finally, face up to the strength of the challenge: the superdeterminist theory has to be at least able to capture the behavior of stuff we have around us, like dice and Galton boards and computers using the digits of π to determine apparatus settings. And the near-infinite variety of ways that such chaotic and/or pseudo-random phenomena can be used to determine apparatus settings makes it clear as can be that no “loose” initial condition could possibly manage to infallibly get the GHZ experiment results to come out as QM predicts.

Is the problem that you think this (and all the variants that Mateus and Tim and I have given over the last hundreds of messages) is just not a strong enough argument, you want a “PROOF”?

The arguments already make clear that IF there are any initial conditions of a deterministic, local physical theory that can reproduce the GHZ data (plus all other Bell tests, plus what we see around us in general), they (if there’s more than one such precise IC!) would have to be incredibly thin slivers of the IC space, using whatever measure is “natural” for that space + dynamics (e.g., equivariant under dynamical evolution). A good analogy would be to the initial conditions in Boltzmannian stat mech that lead to the following behaviour of all gases in isolated boxes: “Oscillate back and forth between equilibrium and being such that the gas is all located in the left-hand side of the box, once per hour.” Do you really think that there might be a “loose” way of specifying all such initial conditions for gases in boxes displaying such behavior, one that isn’t trivial or cheating (i.e., something like “The class of ICs that lead to oscillation back and forth . . .”)? I doubt you do. But then, why would you think that there might be a non-cheating, non-trivial “loose” IC that could do the trick for a superdeterminist theory?

To me it is clear that the burden of proof that it is worthwhile to spend even a tiny bit of time or money on superdeterminism (or on experiments to rule out such theories) is on the would-be advocate, not on the critic.

JimV

As Aristotle noted more than two millennia ago, it is only appropriate to ask for as much rigor and proof as the subject matter admits: demanding more will of course end in pointless dissatisfaction. If Sabine is looking for mathematical certainty in an empirical subject like physics, then she is looking for what obviously can never be achieved. There is no point in that. Philosophers have recognized since the ancient skeptics that it is inappropriate to seek absolute certainty or rigorous proof in the empirical sciences. As your own "simulator" argument shows, there is a (trivial) way to recover locality in physics: assume we are all brains in vats, that the physics of the brains and the vats is local, and that some evil genius is feeding us experiences designed to create the false impression that we are all living in a world in which Bell's inequality is violated at spacelike separation. Welcome to the Descartes Evil Demon hypothesis. We all know that there is no logical refutation of that hypothesis, and we also all know that it would be crazy to believe it. If not everyone on this thread is starting from the obvious starting place then we need to have an even more foundational conceptual discussion.

Arun

No, it is my contention that because the apparatus settings can be made using so many completely different physical randomizing devices (including shaken dice, the parity of the digits of pi, the parity of the number of raindrops that fall on a given square inch of glass in a given minute during a rainstorm, and the polarization of photons coming from the last scattering surface after the Big Bang), the assumption of statistical independence used by Bell is secure, and the non-locality of the physical world is the only scientifically acceptable conclusion to draw.

Tim,

‘Joe the gambler’ is a great example of how an explanation based on statistical independence (SI) works.

Here, in my

"Generally, in a deterministic theory like Newton's clockwork universe the only room left for statistics (or statistical independence) is our (or the universe's own) ignorance about the initial conditions,"

I deliberately did not consider SI based on statistical mechanics, because I wanted to exemplify the consequences of superdeterminism depending on the initial condition(s) at the very beginning of time. For this I explicitly had to exclude any sign of randomness.

Boltzmann got SI from classical determinism via the H-theorem; or better, the Stosszahlansatz uses SI and thus injects randomness or a probability distribution.

QM (via Einstein’s light quanta hypothesis) was born from the conflict between Maxwell (smooth field) and Boltzmann (atomistic, granular matter). The QM measurement injects QM randomness.

Boltzmann did not know at his time that matter is not made of billiard balls. Might it be that the H-theorem (Boltzmann's SI and randomness) has something to do with QM randomness in observer-independently triggered QM reductions?

Again, solving the measurement problem is the key and would bring the foundations of statistical mechanics/thermodynamics and QM together.

So far QM calculates energy levels, and the Boltzmann factor e^(-βH) populates these. And there is this nice formal analytic continuation to QM, e^(-iHt) - coincidence?

Andrei,

I sort of agree with Tim’s last reply to you, although one can look at what Tim’s saying as simply fleshing out what the Wikipedia definition must mean, if we are to apply it to Bell’s theorem and the context of deterministic theories. In that sense, you’re free to invoke that definition in your reconstruction of Bell’s argument, just keep in mind what it amounts to.

The real issue with your stance about superdeterminism is that you’re still making the mistake of conflating logical non-/independence with statistical dependence/independence, and hence wrongly declaring Maxwellian EM to be a superdeterministic theory.

Let’s consider sources and detectors and experimentalists as made up of charged particles obeying classical EM. Imagine a full initial state of a large region to be given, big enough and early enough to entail what the properties of the particle emitted from the source will be, and also what the detector settings will be, even if the latter are determined by doing polarization measurements on CMB photons from opposite regions of the sky. (So notice: this region will be pretty damn large, and pretty early in time.) Now, your point has been: Suppose in the actual world the setting at A was “X”, and at B also “X”, and at C “Y”. In order for us to coherently imagine a world as close as possible to actuality, but where A, B and C measure XXX instead of XXY, something’s gotta be changed regarding the situation of everything at the time of emission of C’s CMB photon (or somewhere else in time, but all changes percolate to change things at all times). Hence, the particle positions and/or fields of the source-device must be at least slightly (perhaps very slightly indeed) different from what they were in actuality.

Great: we have established that the state of affairs at detector C is not *logically* independent of the state of affairs at the source, if we hold fixed Maxwell’s equations and try to hold fixed everything (all other particle positions and field values) as much as possible. But have we thereby established what you claim, namely that *statistical* independence fails for GHZ experiments modelled using classical EM? By no means! Keeping in mind how Tim explicated statistical dependence using coin flips: to establish this, you’d have to give a positive argument for the claim that under classical EM, in (“any”; or “typical”; or “most” (using the natural measure over the space of possible initial conditions)) runs of 1000 GHZ experiments, there will be correlations between apparatus settings and possessed properties of the source particles. And you’d have to give an argument for this that covers any of the myriad ways one might choose to try to “randomise” the detector settings. Good luck with this.

CONT.

Cont’d:

But wait, there’s more you need to do, if you want to have classical EM be a “superdeterminist” theory in the sense everyone else is using! You need not just to show that there will be a failure of statistical independence in those runs of 1000 GHZ experiments under EM; you have to also show that the failure will be “the right kind” of failure, namely one that somehow leads to that 1 out of 4 possible outcomes (for each combination of apparatus settings), that QM forbids but a local deterministic theory can’t help but allow as locally possible, *never* happening. That is a quite specific way for statistical independence to fail, just one out of zillions of logically possible ways for it to fail. Double good luck proving that this will generically happen.

So, to sum up: No, your observations about the constraints on initial conditions in EM have not shown anything about whether Bell’s statistical independence premise fails.

One further side-note. You have claimed that classical EM is superdeterminist but Newtonian particle mechanics under gravity is not. The reason you gave is that Newtonian gravity allows us to place the particles wherever we want in an initial state; so we can for example modify the position of one particle in C’s detector at an early time without thereby modifying the positions of anything in the particle source at that same time. This is true but irrelevant, because once we do modify the initial state in that way, the gravitational forces it entails (on every particle, everywhere) become different at that moment and forever after/before. So if we change the initial state in any way, this will (as a matter of logical entailment) have effects on the behavior of the source and of the particles-to-be-measured, whether we like it or not. So, by your own way of defining what makes a theory “superdeterministic” (which I hope you will abandon), Newtonian particle physics is superdeterministic too.

JimV,

Admitting, for now, that the multiple arguments that have been presented here for the implausibility of superdeterminism do not amount to PROOFs in the mathematician's rigorous sense, it is still possible, using analogies, to see why it is unscientific to pursue the idea. Let me try a new one.

Suppose we entertain superdeterminism as a general concept, and Prof. T. wants to devote his research time to developing superdet theories to explain violations of Bell inequalities, and Dr. G wants to do an experimental test that might rule out (or fail to) some class of such theories. But along comes Prof. H, who has his own variant of superdeterminism (non-local). Prof. H. doesn't care about Bell tests, rather his hypothesis is that, generically, the initial conditions of the true physics of our world are such that whenever more than 500 Harvard professors simultaneously wish for the stock market to go up, it will always go up that day. Prof. H. wants funding to test his hypothesis. What grounds can you give for denying Prof. H. any funding, that don't also constitute grounds for denying Prof. T and Dr. G. funding too? Remember that you've implicitly set the standard: rigorous mathematical PROOF.

Marty;

Since Sabine has offered an olive branch to Tim, I’ll say no more on their conversation. Let sleeping dogs lie.

“... between you and Mateus until you summarily banned him for no good reason that was obvious to me”

Mateus crossed the line at 06:36, July 16, 2018.

sean s.

Arun;

Thanks for your kind words.

sean s.

Carl,

I said above that I am not distinguishing a "precise, detailed" initial condition from a "general type" because the distinction is both ill-defined and irrelevant. I have already explained this above. The only thing that matters for the question of whether a theory does or doesn't explain something is the amount of information you put into the assumptions. Some initial states require much information (you would call these precise, detailed), others don't. This doesn't directly translate into the actual number of states, hence I don't speak about this. Hope that clarifies it.

You then write

"a completely precise, detailed initial condition of the universe (or anyway a big chunk of it, e.g. our current causal past) is a monstrously big and complex thing, something that could never be expressed in writing or coded in a real computer. So even if I pretend I don’t know that and pretend that you could somehow “give me” that enormous initial condition, I would certainly not consider it a simplification of anything smaller. "This is just inaccurate and not helpful. The question is not whether something is "smaller" but whether you need more or less information to detail it. Also, I have said above that of course you don't specify the initial condition for the whole universe if what you want to measure is not the whole universe, which is realistically always the case.

I have also already explained several times that the reason I spoke about the initial condition of the universe was to explain that there is no way to derive the probability of finding a specific subset from first principles. I have also said several times that, yes, you can draw on empirical knowledge instead (sample evidence) but a) this is an argument that Tim didn't make and b) it's difficult to make an argument for the distribution of variables you have never measured.

" you really have to, finally, face up to the strength of the challenge: the superdeterminist theory has to be at least able to capture the behavior of stuff we have around us, like dice and Galton boards and computers using the digits of π to determine apparatus settings. And the near-infinite variety of ways that such chaotic and/or pseudo-random phenomena can be used to determine apparatus settings makes it clear as can be that no “loose” initial condition could possibly manage to infallibly get the GHZ experiment results to come out as QM predicts."You have no argument for this claim.

"The arguments already make clear that IF there are any initial conditions of a deterministic, local physical theory that can reproduce the GHZ data (plus all other Bell tests, plus what we see around us in general), they (if there’s more than one such precise IC!) would have to be incredibly thin slivers of the IC space, using whatever measure is “natural” for that space + dynamics (e.g., equivariant under dynamical evolution)."Oh, there are your "thin slivers" again. Have you yet found a way to quantify their thin-ness. No, you haven't? How am I not surprised.

And which arguments are you even referring to? All this going on about dice? This has nothing to do with anything. I have already said this several times above; why the heck is anyone talking about dice? I am talking about the degrees of freedom of the detector and the prepared state. What do you want with your dice?

1/2

2/2

" is just not a strong enough argument, you want a “PROOF”? "Yes, I want proof. Here is what a proof should look like:

Definition: A theory is superdeterministic if it is a deterministic theory with hidden variables that reproduces quantum mechanics when averaged over the variables. In such theories, the prepared state is generically correlated with the detector, and if there are multiple detectors, these are generically correlated with each other. The interactions of the theory should be local (not because that's a requirement, but because otherwise I'm not sure why one would bother), but the theory isn't locally causal (in Bell's sense).

Claim: No superdeterministic theory (in the above defined sense) has explanatory power, where by explanatory power we refer to a simplification provided by the theory over just collecting data.

Proof: It follows from the requirement that the theory be local that.... Well, what follows from it?

Look, I have said several times, I am happy to discuss the definitions and assumptions. If you want further additions to what superdeterminism is, or if you want to declare that superdeterminism is something else entirely, that's all fine with me. If you want to use a different definition of "explanatory power", that's also fine with me. What I am asking for is conceptual clarity. Write down the assumptions. Draw conclusions from them. Spare me the talk about unfair dice and "thin slivers" and "unnatural restrictions" and other fuzzy ideas.

I don't think I am asking for too much. Bell did this. That's why we're still talking about his theorem.

Everybody:

As I have mentioned a few times, I have some trouble with the comment feature here. If you want I can explain what's up, but really I don't think it matters. Point is, it works badly and it's taking up more time than I currently have.

Reimond has kindly offered to set up a thread on his blog, which you can find here. You can continue there if you wish. I have subscribed to the comments on his new thread, so don't worry that I'll miss what you write.

I am doing this because I suspect that most of you will find it more convenient not to have to wait until I get up to see your comments appear.

Since I don't want this comment thread to split in two, I'll be closing the comments here.

Thanks everyone for the interesting exchange.
