How to Make Sense of Quantum Physics

Superdeterminism, a long-abandoned idea, may help us overcome the current crisis in physics.

BY SABINE HOSSENFELDER & TIM PALMER

Quantum mechanics isn’t rocket science. But it’s well on its way to taking rocket science’s place as the go-to metaphor for unintelligible math. Quantum mechanics, you have certainly heard, is infamously difficult to understand. It defies intuition. It makes no sense. Popular science accounts inevitably refer to it as “strange,” “weird,” “mind-boggling,” or all of the above.

We beg to differ. Quantum mechanics is perfectly comprehensible. It’s just that physicists abandoned the only way to make sense of it half a century ago. Fast forward to today and progress in the foundations of physics has all but stalled. The big questions that were open then are still open today. We still don’t know what dark matter is, we still have not resolved the disagreement between Einstein’s theory of gravity and the standard model of particle physics, and we still do not understand how measurements work in quantum mechanics.

How can we overcome this crisis? We think it’s about time to revisit a long-forgotten solution, Superdeterminism, the idea that no two places in the universe are truly independent of each other. This solution gives us a physical understanding of quantum measurements, and promises to improve quantum theory. Revising quantum theory would be a game changer for physicists’ efforts to solve the other problems in their discipline and to find novel applications of quantum technology.

Head over to Nautilus to read the whole thing. It’s a great magazine, btw, and I warmly recommend you follow it.

If you found that interesting, you may also be interested in my contribution to this year’s essay contest from the Foundational Questions Institute on Undecidability, Uncomputability, and Unpredictability:

Math Matters

By Sabine Hossenfelder

Gödel taught us that mathematics is incomplete. Turing taught us some problems are undecidable. Lorenz taught us that, try as we might, some things will remain unpredictable. Are such theorems relevant for the real world or are they merely academic curiosities? In this essay, I first explain why one can rightfully be skeptical of the scientific relevance of mathematically proved impossibilities, but that, upon closer inspection, they are both interesting and important.

It is really "convenient" to disregard or belittle mathematical results when they appear to be in the way of deeply entrenched beliefs, such as the belief that knowing some finite set of "fundamental principles" is enough to fully describe Reality; that belief has been expressed on this blog multiple times.

But obviously this belief is rooted in the axiomatic method, and that axiomatic method has been studied extensively by mathematicians, with striking results such as Gödel's incompleteness theorems and the extension of these results by Chaitin and others. But some theoreticians are still stuck with the implicit vision of the axiomatic method expounded by Euclid 2000 years ago, and later by Galileo, Newton or even Einstein.

These new results clearly imply that "complexity" is a source of incompleteness, or, in other words, that complex systems will exhibit new irreducible properties (new physics) that will be as fundamental as the underlying properties of their elementary components.

This has already been acknowledged explicitly by some physicists; just remember P. W. Anderson's well-known 1972 article "More is Different".

The "rigidity" in galaxies' rotational speeds is really not that different from the rigidity of solids when seen through this new lens; the nowhere-to-be-found "dark matter" is the result of applying "reductionist" thinking to a very complex system.

P. W. Anderson quote:

"I saw the "theory of everything" as the theory of almost nothing. The actual universe is the consequence of layer upon layer of emergence, and the concepts and laws necessary to understand it are as complicated, subtle and, in some cases, as universal as anything the particle folks are likely to come up with. This also makes it possible to believe that the structure of science is not the simple hierarchical tree that the reductionists envision, but a multiply connected web, each strand supporting the others. Science, apparently, like everything else, has become qualitatively different as it has grown."

"deeply entrenched beliefs, as the belief that knowing some finite set of "fundamental principles" is enough to fully describe Reality, that belief had been expressed in this blog multiple times"

If you think this is a belief that I have expressed, then you have misread me multiple times. How is it that people constantly invent opinions that I have never voiced and do not hold?

Just one example:

Sabine Hossenfelder in her article "How to live without free will":

1- "Physics deals with the most fundamental laws of nature, those from which everything else derives. These laws are, to our best current knowledge, differential equations. Given those equations and the configuration of a system at one particular time, you can calculate what happens at all other times."

2- "In quantum mechanics, some things that happen are just not determined, and nothing you or I or anyone can do will determine them. Taken together, this means that the part of your future which is not already determined is due to random chance. It therefore makes no sense to say that humans have free will."

Implicitly all of this assumes that the known physical laws are "complete": that they can describe/explain with arbitrary precision any natural phenomena or behavior.

But there are multiple "layers" of complexity between the world of Quantum Mechanics and the reality of living beings, in particular human beings.

Each of these layers of complexity exhibits new emergent properties that can't be fully described by Quantum Mechanics, not even "in principle". For example, "identity" is a basic emergent property of the "classical" world, and hence "space" is very likely emergent as a classical concept, since measurements are impossible without rigid, uniquely identifiable "markers".

And since complex living beings evolve with time, some of these emergent properties will also depend on time, which leaves a wide margin for "free will" to be present even in very simple living beings.

The known "fundamental principles" do not uniquely determine the behavior or properties of all complex systems, all the more so since we really don't know all the irreducible fundamental principles, as complex systems will exhibit new irreducible ones.

Jeremy,

"Implicitly all of this assumes that the known physical laws are "complete": that they can describe/explain with arbitrary precision any natural phenomena or behavior."

No, this is patently false. Look, I usually make quite some effort to express myself clearly and it is extremely frustrating that people nevertheless insist on inventing things I did not say. I recommend that you do not jump to conclusions thinking you know what I mean, but actually read what I wrote.

In your first quote, I wrote explicitly "to our best current knowledge". This is the very opposite of what you claim, namely that I supposedly assume that what we currently know is all there is to know.

The second quote is taken out of context; you can see this from the first sentence. I have stated clearly, over and over again, that this conclusion is based on the laws of nature that we currently use. It makes no sense to say that humans have free will, given our best current knowledge of the laws of nature.

Having said that, your further statements about supposedly complex layers and so on are empty words and document that you clearly do not understand the argument.

"Each of these layers of complexity exhibit new emergent properties that can't be fully described by Quantum Mechanics, not even "in principle"."

That's wrong. Effective field theory does that for you. Please stop going around and making wrong statements about things you clearly do not understand. If you do not know what effective field theory is, how it works, and why it matters, then you are not qualified to comment. I am really tired of people confusing their opinions with expertise.

Can we do molecular biology using effective field theory? Or cell biology? Etc. The "in principle" claim made loosely by theoretical physicists is really meaningless in Reality. There are many levels of complexity between the quantum world and the reality of multicellular living beings, and indirectly the hierarchy of the Natural Sciences is an expression of the layered structure of Reality.

The naive reductionist approach espoused here several times is ineffective in the real world, but obviously deeply entrenched beliefs are blinders for perception.

Hi Sabine,

I don't see how superdeterminism is compatible with quantum computation.

Suppose we eventually build ion trap quantum computers big enough to factor a large number. Now, suppose I choose a random large number by taking a telescope in Australia and finding random bits by looking at the light from 2000 different stars. I feed these bits into a quantum computer, and factor the number. There's a reasonable likelihood that this number has some very large factors.

Just where and how was the number factored? Did the universe take two 1000-bit numbers at some time in the distant past, multiply them, distribute the bits in the product to 2000 different stars, and store the factors in the Canadian ore that provided the rubidium ions for the quantum computer?

Or did the universe somehow factor the number after I gave it to the quantum computer, despite the fact that we don't know an efficient algorithm for this that doesn't use spooky action at a distance in it somehow?

Or do you think that quantum computation doesn't work?

I don't think any of these explanations are very believable. Do you have a better one?
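Peter's thought experiment can be sketched at toy scale. In the following minimal Python sketch (not from the comment itself), a seeded PRNG stands in for the starlight-derived random bits, and naive trial division stands in for the factoring step, which is classically infeasible for the 2000-bit numbers of the actual thought experiment; the names `starlight_bits` and `factor` are invented for illustration:

```python
import random

def starlight_bits(n_bits, seed=2000):
    """Stand-in for the starlight source: in the thought experiment each
    bit comes from the light of a different star; here a seeded PRNG is
    used so the example is reproducible."""
    rng = random.Random(seed)
    return [rng.randrange(2) for _ in range(n_bits)]

def factor(n):
    """Naive trial division -- a toy-scale stand-in for the quantum
    computer's factoring step; returns the prime factors of n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Assemble the "random large number" from the bits and factor it.
bits = starlight_bits(32)
n = int("".join(map(str, bits)), 2) | 1  # force the number to be odd
print(n, factor(n))
```

The point of the thought experiment survives the toy scale: the bits are fixed before the machine runs, so one can ask at what point the factorization "happened".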

Superdeterminism reproduces quantum mechanics when averaged over the hidden variables. It is therefore as compatible with quantum computation as quantum mechanics. If you think otherwise, you didn't understand what Superdeterminism is.

I cannot make sense of the question where and how the number was factored if you are not referring to the operation done by a certain man-made machine, in which case the answer is obvious. You could, in any deterministic theory, rightfully claim that the information at the time when the calculation completes is exactly the same as it was at the first moment of the universe, so in that sense you could well say that the universe factored the number some billion years ago, just that you didn't know anything about it.

> Just where and how was the number factored?

My impression is that in superdeterminism, you're just not allowed to ask that question. Or, if really pressed, one just mumbles something about the initial conditions of the universe permitting it to happen.

This is, of course, why most people don't like superdeterminism as an explanation...

Kevin,

This has nothing to do with superdeterminism; it's a badly posed question. If Peter would bother formulating it unambiguously, that is, in terms of macroscopic concepts, first defining what he means by "computer" and what it means to factorize a number, then the answer would be obvious, and obviously the same as in ordinary quantum mechanics.

Mr. Shor, I deeply admire you, but your comment is such a shocking mess that it is just embarrassing.

WHICH version of superdeterminism are you referring to? I have these:

1-The protonic

2-The Lunatic/Neutronic

3-And another one I can't recall immediately (!!! S E CODE!!!)

I switch between and among them without even thinking about it. The least-reliable one so far is the SE Code, because of the complexity families that get merged in a weird rev-log way that requires further computation. Of families. Of complexities. In rev-log fashion...

(Bubble sorts are beneath me at this point. Even bubble sorting families of complexities is beneath me.)

So, which superdeterministic thinking technology is that one that gets in the way of quantum computing?

SE Code?

Protonic?

Neutronic/Lunatic?

And Sabine is right: your question of where and how made no sense whatsoever.

Speaking of mathematicians, did you know that Hell has an unwritten rule in bright red fiery letters all over its stones that All Mathematicians' Blogs Must Necessarily Suck?

Sabine, while I agree with you, I just have to say that this is how _every_ interpretation of quantum mechanics takes care of its sticking points. How does collapse on measurement happen, according to Bohr? To him, that's a badly posed question: outcomes for classical observers are definite, by definition. Where is the particle before a position measurement? Also badly posed, in that interpretation.

The whole enterprise of interpreting quantum mechanics is in coming up with new ways of viewing the same mathematics, where questions that are badly posed or don't have answers in one interpretation gain answers in another. In this sense, saying "superdeterminism rejects question X as badly posed, but I really feel like X should have an answer" is a perfectly valid philosophical criticism, as valid as any criticism of the Copenhagen interpretation.

Kevin,

You are confusing two different things. Bohr says this question has no answer, it makes no sense. I am saying that this question would make sense if it was properly formulated, and then it would have an answer, but it was not properly formulated. If you properly formulate it, the answer is obvious, and has nothing to do with superdeterminism.

Let me simplify my question. Does superdeterminism say that every small region of space (say a cubic nanometer) can only contain a finite amount of information? Or can there be an arbitrarily large amount of information in a cubic nanometer? Or is this a badly posed question that I shouldn't be asking?

The idea that no two points or events in the universe are completely independent is a nonlocality expected of quantum gravity. Anything approaching a black hole is never seen to cross the event horizon, but at the same time Hawking radiation occurs. This means that quantum states do not have a unique location in space, but rather have a nonlocality that is probably a salient feature of quantum gravitation.

Peter,

"Does superdeterminism say that every small region of space (say a cubic nanometer) can only contain a finite amount of information? Or can there be an arbitrarily large amount of information in a cubic nanometer? Or is this a badly posed question that I shouldn't be asking?"

For this question to be well-posed you have to (a) define what you mean by "information", (b) define what you mean by a region of space "containing" information, and (c) specify what scale of resolution you are talking about, because, as I am sure you know, the number of degrees of freedom which you can resolve depends on the resolution you use.

@Sabine: You're avoiding answering questions here by complaining that they're not well-posed. I think you may be doing this to disguise the fact that you don't actually have a theory, but something more like a vague hope. (There's nothing wrong with having a vague hope rather than a theory, but you should be honest about it.) Can you find a variation of the question that is well-posed and answer it?

And about your last comment:

"as I am sure you certainly know, the number of degrees of freedom which you can resolve depends on the resolution you use."

I certainly don't know that. If I had to bet, I would say that it is impossible to resolve the universe on a scale finer than the Planck scale, and that any bounded volume can contain at most a finite number of bits of information. So if superdeterminism says that as you go to finer and finer scales, you get arbitrarily many bits, you've answered my question.

Peter,

"You're avoiding answering questions here by complaining that they're not well-posed."

This is a false accusation. I am telling you that your question is not answerable because you are using terms that you have not defined. I cannot answer questions if I do not know what they mean.

A variation of that question which would be answerable could go like this. You define your computer to be a certain collection of particles that sits in your lab (add any details you want), which operates on certain input that you give it (add details if you want about what you mean by "input" and "giving"), and you say it performs a calculation if, after receiving the input and executing an algorithm, it gives you a result. You will then see that the answer to the question of whether the computer did the calculation is trivially yes, regardless of whether it runs on quantum mechanics or whether quantum mechanics derives from an underlying superdeterministic theory.

You could alternatively define computer to mean anything that executes an equation, in which case the whole universe and anything in it can be said to be a computer, and the answer to your question is therefore, yes, the universe did the calculation. But again the answer does not depend on whether you use normal quantum mechanics or whether you think quantum mechanics derives from superdeterminism.

Peter,

"If I had to bet, I would say that it is impossible to resolve the universe on a scale finer than the Planck scale, and that any bounded volume can contain at most a finite number of bits of information. So if superdeterminism says that as you go to finer and finer scales, you get arbitrarily many bits, you've answered my question."

The question of how finely you can resolve structures has absolutely nothing to do with superdeterminism, and I don't know why you even bring it up. Superdeterminism is not the new string theory. It doesn't quantize gravity and it doesn't unify the interactions.

Also, as I already told you above, you should make some more effort to define what you talk about. What do you mean by a "bit of information"? What is the information content of a continuous function, just to name a random example? And what do you mean by "resolving finer than the Planck scale"? Do you mean probing distances shorter than the Planck length? Then this statement would violate Lorentz invariance. Do you mean curvature larger than the inverse of the Planck length squared? In this case I don't know what this has to do with anything, because this only happens inside black holes (probably).

Again, I am afraid I have to say your questions are not well thought through and I don't get the impression you even understand what superdeterminism is.

"Let me simplify my question. Does superdeterminism say that every small region of space (say a cubic nanometer) can only contain a finite amount of information? Or can there be an arbitrarily large amount of information in a cubic nanometer? Or is this a badly posed question that I shouldn't be asking?"

There are very, very few people in the world who can escape the car-crash feeling of weirdness at seeing a non-mathematician use non-mathematics to make a mathematical point to a mathematician. I don't go pat myself on the back later on. I keep thinking I am going to turn a corner somewhere and someone is bound to whisper "Wow, Ivan, you fooled them".

But the question was simplified to my satisfaction, that is, to the extent that I can answer it I have no reason not to try.

Sabine answered:

"For this question to be well-posed you have to (a) define what you mean by "information" (b) define what you mean by a region of space "containing" information and (c) specify what scale of resolution you are talking about because, as I am sure you certainly know, the number of degrees of freedom which you can resolve depends on the resolution you use."

Let's NOT define "information" for now; I don't care. Therefore, no resolution is needed.

This is itself a super deterministic approach, a technical starting position (an irreducible “a hungry tiger will eat you”.) We are left with a missing definition of “containing”.

"Does superdeterminism say that every small region of space (say a cubic nanometer) can only contain a finite amount of information?"

No. It says the definite information (a tiger, let's say, or a number) has a start, middle and end, at unspecified resolution. Therefore, the second question is answered: arbitrarily large amounts of information fit any space and are "contained" by it.

Now, the definition of “containment” is very, very clear. That which is contained is coordinate-dependent on that which contains.

That is a technical definition too, not subject to debate. The definition of "information" is STILL not needed for that.

(Sabine, I am writing as fast as possible, and it ain't quite working. Will be back asap.)

("Can you find a variation of the question that is well-posed and answer it?")

Yes, I can.

You're right ... after reading your essay above, I don't know what superdeterminism is.

If superdeterminism requires an infinite amount of information at every point of space-time (something which you tell me you cannot answer because I haven't defined what I mean by "information"), I wouldn't call it a more satisfying or more comprehensible theory than our current interpretations of quantum mechanics.

Let's make things simpler. Take a spin chain of spin-1/2 particles with a Hamiltonian where only spin-1/2 particles within distance 100 (or less) interact. How much information (define "information" any way you like) needs to be associated (define "associated" any way you like) with each particle in a superdeterministic theory?

There is only so much technical detail you can convey in a popular science essay. I recommend you read our paper if the essay left you confused (it is also cited at the bottom of the essay).

"Let's make things simpler. Take a spin chain of spin-1/2 particles with a Hamiltonian where only spin-1/2 particles within distance 100 (or less) interact. How much information (define "information" any way you like) needs to be associated (define "associated" any way you like) with each particle in a superdeterministic theory?"

Fine, I define information to be a carrot. There's no carrot in the spin chain.

Seriously, if you want me to think about your questions, I would appreciate if you make a little more effort.

@Sabine: if I'm trying to understand your theory, and you respond to all my questions with either "you haven't specified your question precisely enough" (how can I specify it precisely enough when I don't understand your theory????) or with flippant answers, I don't see how we're ever going to get anywhere.

Peter,

I am doing my best trying to make sense of your comments and frankly I think I have been very patient and polite. How about you stop blaming me for your own shortcomings.

Hi Sabine,

Let me try to make my question as concrete as possible.

If I understand correctly, your paper implies that there is some "information" (which you call λ, and which is not necessarily accessible to an observer) that is "located" in the immediate neighborhood of an "event" and which determines the outcome of an "event". Is this correct?

Here, an "event" is something measurable by human beings, like whether a spin is up or down, or what kind of particle is created by a collision.

I don't know exactly what is meant by "located" here, but it's whatever makes your theory be "local".

Similarly, I can't tell you what you mean by "information" here, because I don't understand your theory. But it's whatever you are representing by λ.

The question I was trying to ask was: how much "information" is there in λ? Again, I don't understand your theory, so I can't specify what measure should be used to quantify this information. Is it finite or infinite? Can it be represented as a sequence of bits, a set of real numbers, a collection of qubits, or something else?

And since you're asking me questions about how much resolution I want to take into account, let's make it simpler by just considering a spin chain of n spin-1/2 particles, with a Hamiltonian that is a sum of local terms. This is an abstraction that is a quantum mechanical system, as well, so I assume superdeterminism should be able to describe it, and for this system we don't need to worry about questions of resolution.
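The abstraction Peter describes can be made concrete. As a minimal sketch (my own illustration, not from the thread), here is a transverse-field Ising chain in Python/NumPy: nearest-neighbour couplings, i.e. a special case of his "within distance 100" condition, with assumed couplings `J` and `h`. The state space has dimension 2^n, while the Hamiltonian is a sum of O(n) local terms:

```python
import numpy as np

# Single-spin operators: identity and two Pauli matrices.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on_site(op, site, n):
    """Embed a single-site operator at position `site` in an n-spin chain
    via Kronecker products with identities elsewhere."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def local_hamiltonian(n, J=1.0, h=0.5):
    """H = J * sum_i sz_i sz_{i+1} + h * sum_i sx_i: a sum of local
    (nearest-neighbour) terms, as in Peter's abstraction."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):  # n-1 two-site interaction terms
        H += J * op_on_site(sz, i, n) @ op_on_site(sz, i + 1, n)
    for i in range(n):      # n single-site field terms
        H += h * op_on_site(sx, i, n)
    return H

H = local_hamiltonian(4)
print(H.shape)  # the full matrix acts on a 2^4 = 16 dimensional space
```

The contrast the example makes visible: the Hamiltonian is specified by a handful of local parameters, while the quantum state it acts on lives in an exponentially large space, which is one way of phrasing Peter's "how much information per particle" question.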

Hi Peter,

"If I understand correctly, your paper implies that there is some "information" (which you call λ, and which is not necessarily accessible to an observer) that is "located" in the immediate neighborhood of an "event" and which determines the outcome of an "event". Is this correct?"

No, and I am baffled that you say you got this from our paper because we state explicitly:

"It is important to realise that these hidden variables are not necessarily properties intrinsic to or localised within the particle that one measures; they merely have to determine the outcome of the measurement."

"The question I was trying to ask was: how much "information" is there in λ? Again, I don't understand your theory, so I can't specify what measure should be used to quantify this information. Is it finite or infinite? Can it be represented as a sequence of bits, a set of real numbers, a collection of qubits, or something else?"

That depends on the details of the model. It does not follow just from the definition of superdeterminism. In my model, λ has countably infinitely many entries. However, of these countably infinitely many entries, only finitely many actually matter FAPP (for all practical purposes). Just how many there are depends, as I said earlier, on the resolution. The higher the resolution, the more details.

I believe in Tim's model everything is finite anyway. I don't know about 't Hooft's, but since it's all about discretization it's probably also countable. All the toy models (see references in paper) are just finite.

"And since you're asking me questions about how much resolution I want to take into account, let's make it simpler by just considering a spin chain of n spin-1/2 particles, with a Hamiltonian that is a sum of local terms. This is an abstraction that is a quantum mechanical system, as well, so I assume superdeterminism should be able to describe it, and for this system we don't need to worry about questions of resolution."

Fine, but I don't know what the question is here. The purpose of Superdeterminism is to describe the measurement. You need a measurement device if you want to see any difference between quantum mechanics and superdeterminism. λ is what determines the measurement outcome. It makes no sense to even speak about it if you don't measure anything.

So superdeterminism allows for unobservable non-local information that determines the dynamics. I don't see why Bohm's pilot wave theory doesn't satisfy your criteria for a superdeterministic theory, then.

"Does superdeterminism say that every small region of space (say a cubic nanometer) can only contain a finite amount of information?"

At that, we take a tiny gratuitous ma non troppo walk through a spooky forest: AdS. We have to go there because it's not our space, where the ordering is well known: a point after the other, every tree visible.

The state space of AdS is not ours. Its ordering is different, its structure is different, and it requires specialized maths. Peter's question does not specify which state space he is talking about, except a generic classical "our space" that doesn't have any relationship with whatever "information" (any definition is fine) it contains. It's a forest without trees. It's literally a forest with no trees: in order to have a state space, the relationship between structure and content must be established (that is what makes Verlinde brilliant and Gödel and Chaitin irritating pricks).

If we have to get into state space, we necessarily have to have a hard-wired relationship between that which is contained and that which contains. (Sabine knows we don't throw a carrot into it!) Due to the nature of my idea management, note that I have no idea what AdS is supposed to be, nor do I CARE to know, in order to demonstrate where it fails.

It works unremarkably with classical objects. However, once cardinality and ordinality start to get distributed, all hell breaks loose. Let's try a state space right out of the primes:

Prime automaton: take diagonal, add 0, add to state space, repeat:

111111111111111…. This is the prime definition of the # 1. The very first 1 to the left takes a 0:

10101010101010…. Now we have a 11 on the diagonal, add 0:

110110110110110…. The next diagonal is a 101, so add 0:

1010101010101010…. Next diagonal is 1111, so add 0:

11110111101111011110…

(Minimal requirement: definition of unity. That goes for all state spaces.)

That is a super deterministic state space because it doesn’t fail.

You will get a definition of all primes, period. Impossible to fail. You don't do an unspecified operation on it, end up with 11111110, and say "Wow! NUMBER EIGHT IS PRIME!!!" It's more obvious that YOUR initial coordinate condition failed; 8 is not prime. State space CANNOT fail the information it holds, because that is an impossibility. The opposite is not necessarily true.

(Back asap)

Peter,

"So superdeterminism allows for unobservable non-local information that determines the dynamics. I don't see why Bohm's pilot wave theory doesn't satisfy your criteria for a superdeterministic theory, then."

First, I don't know what your issue is with "non-local information". Classical correlations are "non-local information" too. The relevant question for compatibility with relativity is not whether information is or isn't non-local, but whether it propagates locally.

Leaving aside that I get the impression you are mixing up different types of locality, Bohm's theory either doesn't solve the measurement problem or it's also superdeterministic, depending on whether you take the selection of a measurement variable to be part of the axioms (I wish you fun debating the matter with Bohmists). It's the same with collapse models. Like pilot waves, these only solve the measurement problem if you use a dynamical equation that already contains information about what variable will be measured (note reference to future). This is exactly what superdeterminism means.

However, both of these approaches are axiomatically unsatisfactory exactly because you need to write down the information about the measurement into the dynamical law by hand. This isn't a well-defined procedure. A fundamental theory should treat the detector the same as the prepared state because they're ultimately both made of the same stuff.

"So *superdeterminism* allows for unobservable non-local information that determines the dynamics"

State space allows that, not "superdeterminism".

Ivan,

Basically, yes. Let me add that there is no reason this information has to be unobservable. I don't know why Peter thinks it is.

This issue comes down to the meaning of "hidden variable". I have to admit that when I hear the term hidden variable, I feel ill at ease. The term nonlocal hidden variable gives me an uneasy sense in the same way I might get when I hear about Joe Biden, but the term local hidden variable causes horrendous nausea in the same way as when I hear about t’Rump.

The nonlocal hidden variable can be looked at as a gadget one uses to do analysis. It has no measurable degrees of freedom and it communicates no information. Thus, calling this superdeterminism is to my mind a bit of a misnomer. I don't see where this invokes the idea of local hidden variables, so with nothing communicated there is no cause and effect or physical process in any objective meaning. Hence when one talks about any location in spacetime as not independent of any other, this is a form of nonlocality not that unfamiliar in quantum physics. It is also in line with the nonlocality of the holographic principle. An exterior observer of a black hole witnesses quantum states asymptotically approach the event horizon and also occur as Hawking radiation removed from the black hole. This is a form of nonlocality that removes the concept of spatial or spacetime locality.

I read Palmer’s paper on the FQXi site that makes a clear point of using the Blum, Shub, and Smale (BSS) concept of computability [ https://en.wikipedia.org/wiki/Blum–Shub–Smale_machine ]. This is an odd concept, for it involves complete computation of the reals to infinite precision, where our usual idea of close approximation does not count as real computation. This is a certain definition of incomputability. Since these I_U fractal subsets for an underlying fractal system are forms of Cantor sets, the p-adic number or metric system is used to describe them. As fractal sets are recursively enumerable, their complements are what are incomputable in a standard Church-Turing sense. A fractal set of orbits within orbits I_U is such that while it can fragment enormously, as a fractal set it is recursively enumerable. This is not anything undecidable in the standard definition of RE sets and uncomputability. Chaotic systems, such as punctured tori in KAM theory, lead to cantori, which are fractal or Cantor sets that the dynamics lives on. Recursive sets or algorithms have complements that are recursive. Recursively enumerable sets or algorithms have complements that are incomputable. So this as a fractal set alone does not make it incomputable, at least in the sense of incompleteness of a first order recursive system.

The Gödel incompleteness comes with the p-adic number system used to define a field over this set or Cantor set. The set of frequencies or periodicities means this Cantor set in a certain “limit” has an unbounded set of primes for p-adic number systems. Then enter Martin Davis, Hilary Putnam, Julia Robinson and, in particular, Yuri Matiyasevich. Hilbert asked in his 10th problem whether all Diophantine equations could be solved by a single method. Diophantine equations were found to be equivalent to p-adic sets, and subsequently Yuri Matiyasevich proved these sets were not computable by a single method. This is a Gödel incompleteness result. Solutions to Diophantine equations, which are associated with the nested frequencies or periodicities of orbits, can be found locally, but there is no global solution method. This is a form of Szangolies’ epistemic horizon. The Cantor set here then has some undecidable properties in the p-adic setting, where any global field of numerical operations in a p-adic setting is incomplete.
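The local-vs-global solvability point can be made concrete: checking a proposed solution of any one Diophantine equation is mechanical, and searching for solutions gives a semi-decision procedure, but by Matiyasevich's theorem no single method decides solvability for all equations. A minimal sketch, with example equations that are my own illustrative choices, not anything from the proof:

```python
from itertools import product

def diophantine_search(f, nvars, bound):
    """Semi-decision procedure: enumerate integer tuples in growing
    boxes [-b, b]^nvars and return the first zero of f found, else
    None once the search bound is exhausted. Run with an unbounded
    b, this halts exactly when a solution exists -- which is why the
    solvable Diophantine equations form a recursively enumerable set."""
    for b in range(bound + 1):
        for tup in product(range(-b, b + 1), repeat=nvars):
            if f(*tup) == 0:
                return tup
    return None  # inconclusive: says nothing about larger solutions

# The Pell-type equation x^2 - 2y^2 = 1 has integer solutions ...
pell = lambda x, y: x * x - 2 * y * y - 1
# ... while x^2 + 1 = 0 has none, and the search can only ever
# report "inconclusive" for it, never a definite "no".
no_solution = lambda x: x * x + 1
```

Running the search on `pell` turns up a solution almost immediately; on `no_solution` it stays silent forever, and that asymmetry is the whole content of semi-decidability.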

continued:

I present this in an essay on the FQXi site. I submitted this maybe against my better judgment. Proposing some role for undecidability by Gödel’s theorems in physics is to risk serious umbrage. These matters tend to involve complicated questions of Zermelo-Fraenkel set theory. Invoking Alan Turing is maybe a safer bet in the paradigm of quantum computing. Of course, Gödel’s theorem and Turing’s proof against a universal Turing machine are the same thing. I suspect, though, that at some point in the future, unless we are soon in a radioactive graveyard, the big names in physics will be all over this trying to stake a claim. However, that time is not yet today.

Keeping it as 12-year-old-simple as possible: that *is* my educational level. I don’t do “sophistication”, I just can’t *afford* it.

(Am establishing future ground for yet another model and know where this is going.)

A few interesting things about the prime automaton as state space:

It’s physical, not material.

It is an algorithm that doesn’t make mistakes.

The algorithm becomes an integral part of the data it describes.

What we used to think of as complete and legitimate units called “primes” are now solid line segments. (The “primeness” got distributed!)

What we used to think of as complete and legitimate units called “not primes” are now also line segments, but not solid; and they are perfectly symmetrical.

If you think of it one way, it’s a unary graph. Think of it another way, it’s binary.

Its information is, however, more than unary and less than binary. Meaning the encoded information is fractal, even though all of it is expressed with only two symbols, 1 and 0, because now it also depends on coordinates.

It is what I call my proton, therefore it has 3 quarks with a gluon in the middle, all infinitely magnifiable.

ALL information, each and every dot in it, has a direction (left to right) and no movement.

It’s not possible for prime state space to betray any prime information; there is no prime information that it does NOT contain.

ALL complete information about prime qualities of a prime, not-prime, or idempotent ends exactly at the 0’s in the middle of the graph, in a line that reads 1000000000… in the diagonal.

However, the definition of the side of the graph is already established as 1. Trivial or not, those zeros at the end of “complete definition of prime qualities” of the analytical prime unit you may have at hand are going to end up exactly in the middle, nowhere else.

Reminds you of something else?

No, I didn’t solve it. The prime state space told me that, I just had to *show* it to you. And now you know why it is and how I didn’t do a single calculation to solve that problem. I didn’t need to.

KUDOS to Philip Thrift 4:18 for inspiration exactly when I needed instructions to follow. I HIGHLY recommend his suggestion:

Practical intractability: a critique of the hypercomputation movement

Aran Nayebi

https://arxiv.org/abs/1210.3304

Superdeterminism has characteristics very similar to hypercomputation. A hyper-Turing machine is one able to circumvent the restrictions of the Gödel-Turing uncomputability limit. A sort of Zeno machine may illustrate this. If I were to flip a switch at one second, then flip again at ½ second, then flip again at ¼ second and so forth, what is the state of the switch after 2 seconds? Well, the rapid increase in energy required to oscillate the switch means that even if the switch does not fly apart it will face an energy limit where it is a black hole. Hence, at least from the outside, general relativity appears to spoil the dream of circumventing Gödel and Turing.
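The switch schedule described here (a flip at 1 s, then after a further ½ s, ¼ s, and so on) can be sketched directly; the function and its names are my illustrative choices. Every flip happens strictly before t = 2, so asking for the state at 2 seconds just exhausts whatever flip budget one allows:

```python
def switch_state(t, max_flips=10_000):
    """Toggle a switch at times 1, 1 + 1/2, 1 + 1/2 + 1/4, ...
    and return (state, number_of_flips) at time t. The flip times
    converge to 2 from below, so for any t < 2 the state is well
    defined, while at t >= 2 the loop simply burns through the
    whole flip budget without the state ever settling."""
    state, flips = 0, 0
    next_flip, interval = 1.0, 1.0
    while next_flip <= t and flips < max_flips:
        state ^= 1            # toggle the switch
        flips += 1
        interval /= 2
        next_flip += interval
    return state, flips
```

At t = 1.9 only four flips have happened and the state is perfectly definite; at t = 2.0 the loop runs until the budget is gone, which is the Zeno-machine point in miniature.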

What if we go into the black hole? The inner event horizon is in the pure eternal solution continuous with I^∞. This would permit a sort of Zeno computation to be observed. This means one could in principle witness this end of switch toggling. So, if that switch is toggled or not toggled with each half-partitioned interval of time, any possible algorithm can be computed and its output logged. However, this idealism is spoiled by quantum mechanics, for the black hole will emit Hawking radiation and the inner horizon is no longer continuous with I^∞. Thus, quantum mechanics rescues Gödel and Turing, where both QM and GR appear to respect the Church-Turing thesis of computation, so computed outputs are from first order primitive recursive algorithms.

As Peter Shor keeps harping on, superdeterminism, with its real nesting of fractal loops or orbits, implies a vast or even infinite number of degrees of freedom. This does appear to be a serious minefield to go through. However, if these are nonlocal, they are not real degrees of freedom that communicate information. They respect the no-signaling theorem of QM. If we could really output all of these nested loops or paths, this would be a sort of hypercomputation. But we can’t. I illustrate this with the Matiyasevich theorem on the uncomputability of p-adic sets, while Palmer appeals to a funny idea of uncomputability of fractal or recursively enumerable sets due to Blum, Shub, and Smale.

Hypercomputing is an interesting concept, but spacetime configurations that permit it violate conditions such as Hawking-Penrose energy conditions or chronology protection. Quantum mechanics and general relativity have conditions that in some ways are equivalent. For instance, the no-cloning theorem of QM is equivalent to chronology protection, for a wormhole would permit quantum cloning of states.

For hypercomputation a spacetime with closed timelike curves would suffice as well. The answer to an undecidable problem could come from a time machine, if one sends the result back in time to oneself. Anti-de Sitter spacetimes permit this. Curiously, AdS and de Sitter (dS) spacetimes have the same quantum information, for the two spacetimes share asymptotic infinity. The de Sitter spacetime of inflation may then have a correspondence with an AdS in a quantum computing system. We may think of a computer with a closed timelike curve (CTC) register plus a chronology-respecting (CR) register. As the two are quantum states or waves, they constructively and destructively interfere with each other.

The CTC register corresponds to the chronology-violating AdS, and the CR register is chronology-respecting. The self-interference of quantum waves imprints a quantum computing output in R_{ctc}, and the quantum information in R_{cr} corresponding to it has constructive interference. This is then how quantum computing in quantum gravitation or cosmology may work.

However, event horizons protect us from observing the data stack in R_{ctc}, and this is then not hypercomputation. In this way spacetime, and in general classical einselected outcomes, occur, but we are shielded from knowing how it occurs. It is not Turing computable. If theorems of QM and those of GR are formally equivalent, then this carries over to superdeterminism. This means there is no extra information we can obtain from superdeterminism that beats Gödel and Turing.

Sabine,

I think I now understand what you're doing with superdeterminism.

I thought that at least part of λ was unobservable because otherwise you run into problems with Bell's theorem and non-locality. But on reflection, I think maybe all these say is that λ has to be *locally* unobservable.

But if λ is completely observable by local measurements, I think you run into the same no-go theorems that plague standard interpretations of quantum mechanics.

Peter,

Delete"I thought that at least part of Î» was unobservable because otherwise you run into problems with Bell's theorem and non-locality. But on reflection, I think maybe all these say is that Î» has to be locally unobservable."I don't know what you mean by that, sorry. What does it mean for something to be "locally observable". Is any correlation between different places "locally observable"? I would say, no. But I don't know how this matters. Since \lambda is by definition what determines the measurement outcome, you can of course observe its effects. Whether this will actually allow you to deduce *all* of lambda from the measurement, is a different question. Maybe not. Think of decoherence as an example. It's an aggregate effect from an interaction with many degrees of freedom. You observe decoherence, yet this doesn't allow you to actually deduce what all those individual degrees of freedom did. It'll be the same with lambda, I think.

"But if Î» is completely observable by local measurements, I think you run into the same no-go theorems that plague standard interpretations of quantum mechanics."Again, I don't know what you even mean by that, sorry. More generally, the theorems that I know of all assume that statistical independence is fulfilled, so they don't apply to superdeterministic theories (which is the very reason why superdeterminism is the obvious solution).

One last comment ... you say that you don't have to worry about no-go theorems because they all assume statistical independence. This is probably because nobody even thought of trying to prove no-go theorems that don't assume statistical independence. You might start thinking about that ... I think it's quite likely that such no-go theorems exist, and might be able to narrow down the parameter space that you need to look at in considering superdeterministic theories.

(Dr., if you don't really like the next post feel free to get rid of it.)

Speaking of hypercomputation, I am running in the very other direction.

This is the SE Code, a minimal-freedom state space system, therefore minimal computational requirements. If another more minimal-freedom state space exists, I will gladly correct and amend. At this point, I suspect it doesn’t.

1 moves right, 0 moves left; if they meet they *stop* forever. If they don’t meet, both *escape* and fall off their plane of existence.

00 - EE

01 - EE

10 - SS

11 - EE

100 - SSS •

101 - SSE

110 - SSS •

111 - EEE

1000 - SSSS •

1001 - SSSE

1010 - SSSS •

1011 - SSEE

1100 - SSSS •

1101 - SSSE

1110 - SSSS •

1111 - EEEE

10000 - SSSSS •

10001 - SSSSE

10010 - SSSSS •

10011 - SSSEE

10100 - SSSSS •

10101 - SSSSE

10110 - SSSSS •

10111 - SSEEE

11000 - SSSSS •

11001 - SSEEE

11010 - SSSSS •

11011 - SSSEE

11100 - SSSSS •

11101 - SSSSE

11110 - SSSSS •

11111 - EEEEE

100000 - SSSSSS •

100001 - SSSSSE

100010 - SSSSSS •

100011 - SSSSEE

100100 - SSSSSS •

100101 - SSSSSE

100110 - SSSSSS •

100111 - SSSEEE

101000 - SSSSSS •

101001 - SSSSSE

101010 - SSSSSS •

101011 - SSSSEE

101100 - SSSSSS •

101101 - SSSSSE

101110 - SSSSSS •

101111 - SSEEEE

110000 - SSSSSS •

110001 - SSSSSE

110010 - SSSSSS •

110011 - SSSSEE

110100 - SSSSSS •

110101 - SSSSSE

110110 - SSSSSS •

110111 - SSSEEE

111000 - SSSSSS •

111001 - SSSSSE

111010 - SSSSSS •

111011 - SSSSEE

111100 - SSSSSS •

111101 - SSSSSE

111110 - SSSSSS •

111111 - EEEEEE

1000000 - SSSSSSS •

1000001 - SSSSSSE

1000010 - SSSSSSS •

1000011 - SSSSSEE

1000100 - SSSSSSS •

1000101 - SSSSSSE

1000110 - SSSSSSS •

1000111 - SSSSEEE

1001000 - SSSSSSS •

1001001 - SSSSSSE

1001010 - SSSSSSS •

1001011 - SSSSSEE

1001100 - SSSSSSS •

1001101 - SSSSSSE

1001110 - SSSSSSS •

1001111 - SSSEEEE

1110000 - SSSSSSS •

1110001 - SSSSSSE

1110010 - SSSSSSS •

1110011 - SSSSSEE

1110100 - SSSSSSS •

1110101 - SSSSSSE

1110110 - SSSSSSS •

1110111 - SSSSEEE

1111000 - SSSSSSS •

1111001 - SSSSSSE

1111010 - SSSSSSS •

1111011 - SSSSSEE

1111100 - SSSSSSS •

1111101 - SSSSSSE

1111110 - SSSSSSS •

1111111 - EEEEEEE

Now I have to trust the superdeterministic model at hand, it’s the only one I am considering:

There is no minimal movement that escapes description by this state space; if there were, the state space would be betraying the information it is charged with carrying. The minimal universal movement information *must* be present and encoded there, in as much of its entirety as I can extract and analyze.

In the prime automaton, the present can only be in two places: on the diagonal that you are using to derive the next horizontal line, OR on the horizontal line that you are now writing. When the present is in one place, the future will be in the next, but the past is already written in both cases.

The SE Code doesn’t have a definite passage of time, it’s a bubble sort code. The present tense is on the random access data you are calculating, one place at a time. The very nature of time is different in the SE Code: a list of bubble sort items has a collective future as an ordered state, and no localized present. Worse, no past: you can’t *undo* a bubble sort, since that is not granted by bubble sort state space.

The two anatomies of time are different and mutually exclusive:

Bubble sort data on the SE Code is neither predictable nor postdictable. Its past is not solid, never was. You can only have a future of complete data order for the entire set, not its individual elements, and only when the whole set is classically ordered does it acquire classical solidity.
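The "no undo" observation about bubble sort is just many-to-one dynamics: distinct initial orderings land on the same sorted state, so the output alone does not contain the information needed to run the sort backwards. A minimal sketch in ordinary bubble sort (my own illustration):

```python
def bubble_sort(items):
    """Plain bubble sort, returning a new ordered list and the number
    of swaps performed (the 'work' the sort did)."""
    data = list(items)
    swaps = 0
    n = len(data)
    for i in range(n):
        for j in range(n - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
                swaps += 1
    return data, swaps

# Different pasts, one future: [3, 1, 2] and [2, 3, 1] both sort to
# [1, 2, 3], so the sorted output cannot be "un-bubble-sorted" back
# to a unique initial state.
```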

There was a second part/conclusion to this and it didn't enter:

(Prime automaton data doesn’t move, is already classical in the physical sense, and has a far higher complexity class than binary code and the SE Code.)

The model is built on the binary number base and can be neither larger nor smaller than it in physical size. Yet, the classes of complexity are significantly shortened on the right while on the left they remain as a binary count from 0 to 127, each one of which belongs to its own complexity class… bubble sort-wise, mind you, because that is what the state space is telling me. There is no immediately apparent reason why all 16 SE Codes from 64, 66, 68, 70, 72, 74… to 126 are the same SSSSSSS. The number of bits is the “hidden variable”.

Let’s consider an unspecified “hidden variable” a bit longer: a binary number n with its associated SE Code. This element of the set jumps to its own bit-size family upon a bubble sort operation, the other data gets shifted left or right, repeat for another n, end bubble sort. The only “speed” limit is its own number of bits. Meaning within the limits of the SE Code, there is no bubble-sortable upper limit to the speed of movement of set element n up or down the ordered list.

The minimal information of the minimal degree of freedom (movement) does NOT impose a speed limit for data movement. Data jumps. Suddenly. Even if it is not “material” this is still physical information and already has a pre-determined place for it somewhere. Just like gravity.

Exactly like gravity, minimal movement we know. (Of note: the handedness of the prime automaton is different from the handedness of the SE Code: their time anatomies are different. They are “held in time” in different positions and have different continuities.)

So… is there a collective speed of gravity data-as-empty-space that jumps from here to there? Evidently, it would be derivable from the whole set. That should be directly traceable to a bubble sort, some sort of “speed of light” of bubble sorts.

But is there a limit to the speed of individually sorted “numbers” (uh…. the unfortunately named “graviton”) in gravity?

That is not what this state space model tells me. That is not what my other model tells me. And especially, that is not what my language tells me, and it is gravity-based. (I haven’t made any secret that my language is faster than light, never did.)

I am not introducing Lunatic Gravity here, except to debunk AdS later on: I have introduced it to my modest satisfaction on YouTube. It was enough for me.

I will be trying to synthesize a middle ground between Sabine’s, Tim’s, and my own ideas in a way that is satisfactory to all three of us. They are very much the same ideas in different chocolate sauces, frankly!

Ultimately, Mr. Shor, superdeterminism isn’t yet and may never become a cure-all for the common cold, but it’s one hell of a thought technology: I don’t even need resolution to be coherent. Won’t give it up too soon.

Is it necessary to have (in the un**3 essay) a link to a HAARPologist? Especially when citation [18] can be linked to the very source. I thought you were a scientist (even though a contrarian by definition).

I still enjoy most of your posts (well, not those where you defend indefensible attacks against dark energy), yet now I feel quite strange here. OK, I'll assume it was because you had been sick, as you stated on gravity.

I have no idea what a HAARPologist is supposed to be. I was looking for the report but was not able to find the source. Thanks for the link, I will use this the next time.

A HAARPologist is someone who believes that this project https://en.wikipedia.org/wiki/High_Frequency_Active_Auroral_Research_Program is used to control the weather, earthquakes, and other natural phenomena, in order to help the Illuminati or something similar.

DeleteYou might say then that the universe is the set of all possible (allowable) configurations under the constraint-- and the constraint would be some particle at some location in space and time, and the choice is arbitrary. Any particle or configuration of particles in any position at any time would serve as the constraint and determine all other allowable/possible configurations. And if you wanted to drag in a multiverse, it would be the set of all possible sets of allowable configurations. In that case there would be configurations that can not ever appear in our universe that could appear in some other universe. At least that's how I'm interpreting some of what you say.

Dr. Hossenfelder,

My position is that today's physics needs an "antagonist" to speak up over the astounding number of "protagonists" out there today. So please keep up the good work that you are doing. I skimmed this piece and will read it in full later tonight. I will say that from what I read there are parts that I am not in complete agreement with. What does this mean? It means that I should respect your work and wait and see how things play out. One of the reviews on Nautilus was not very complimentary, so I did a little research. After this research I believe that the author of the comment is more interested in promoting the sales of their own work than actually having a thought-provoking discussion.

Unless given permission I will not promote another author on your blog. But I am currently reading a book, and the parallels between what occurred back in the early days of quantum theory and what is happening to you for having a different view are striking.

Criticism of work is perfectly acceptable when alternative information is also given so as to open up a discussion. Character attacks are never acceptable and I for one have no patience for those who wish to attack you rather than your work. If I may be so bold as to offer a life observation, being on opposite sides of a problem/situation was in essence my career(s). When attacks were made on my character rather than my work, I knew that I was right.

If I may be so bold as to make another observation, today's physics is not confined to a box. Rather, I see it as being in a trench, as all are marching in the same direction with very few people willing to look outside of the trench.

As an (ex-)statistician, I could never understand how one is supposed to justify statistical independence in a deterministic universe. The usual answer is along the lines of "as a statistician you know that it works, so get over it!". But that ignores the damping effects of the Law of Large Numbers, which itself depends on the assumption of statistical independence. So as far as I am concerned the basic question is: can we justify the LLN in a deterministic universe? I never came across a reasoned answer. Indeed, the very question is rarely considered at all. If I recall correctly, Sklar brings it up in "Physics and Chance", and then dismisses it (but it's been decades since I read the book :-))
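The worry here can be simulated directly: the damping the Law of Large Numbers provides evaporates the moment independence fails. A toy comparison (illustrative only, not a model of any physical hidden variable):

```python
import random

def sample_mean(xs):
    return sum(xs) / len(xs)

random.seed(42)
n = 100_000

# Independent fair coin flips: fluctuations around 1/2 damp
# like 1/sqrt(n), which is the LLN doing its work.
iid = [random.randint(0, 1) for _ in range(n)]

# Perfectly dependent "flips": one draw copied n times. The sample
# mean is 0 or 1 no matter how large n gets -- the averaging the
# LLN relies on does no work at all without statistical independence.
correlated = [random.randint(0, 1)] * n
```

The independent sample's mean sits very close to 1/2; the correlated sample's mean never gets any closer however many "flips" are added, which is the sense in which the LLN itself presupposes independence.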


Delete"As an (ex-)statistician"Old statisticians never die---they just get broken down by age and sex. :-)

It looks like you are the only person who entered the FQXi contest who is not a kook! You have a good chance of winning.

I'd enter myself but I couldn't understand the question.

Viel Glück!

"These hidden variables, it must be emphasized, are not necessarily properties of the particles themselves."

Maybe they are properties of space?

If statistical independence is violated because everything depends on everything else, but in a chaotic way (if that is not too simplistic a summary of your proposal), then the chaotic part implies unpredictability, which implies that in practice we may be little better off with the improved theory than without it. (Except of course it would be valuable to have a consistent theory even if the results were not practical.)

That is, if I remember vaguely from plotting Mandelbrot and Julia sets on my first personal computer (an Apple II), one iterates an equation at a given (x, y) point for some number of cycles to determine whether the point belongs to the set or not, and in some cases the result is inconclusive because more iterations were needed, but life is short, so in the end one only has a relatively small number of points to characterize the set. If a similar procedure is needed to predict QM outcomes more precisely depending on each experiment's exact configuration, it may not be practical to do much with it.
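The iteration loop described here is the standard escape-time test, and its asymmetry is worth spelling out: a finite iteration budget can certify that a point escapes, but never that it belongs to the set. A sketch, with the iteration cap being my own arbitrary choice:

```python
def mandelbrot_status(c, max_iter=100):
    """Iterate z -> z^2 + c from z = 0. Once |z| exceeds 2 the orbit
    provably diverges, so we can certify 'outside' along with the
    escape iteration; if the budget runs out we can only report
    'inconclusive', never a definite 'inside'."""
    z = 0 + 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return ("outside", i)
    return ("inconclusive", max_iter)
```

c = 1 escapes after a couple of iterations, while c = 0 and c = -1 sit on bounded orbits and stay "inconclusive" however many iterations one buys; that one-sidedness is the practical face of the membership problem.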

I think Dr. Scott Aaronson made a similar point a few years ago in an FQXi essay, to the effect that predictable or unpredictable was often a more practical way of describing things than deterministic or indeterministic.

My second reaction is that a new term is needed for this proposal, because many brilliant people have the notion stuck in their minds that superdeterminism is a conspiracy theory by which we are fooled into thinking that QM results are random because the conspiracy is hiding those (and only those) results which would teach us otherwise. Offhand, all I have is "enceessdee" (NCSD--Non-Conspiratorial Statistical Dependence). (Well, that won't work.)

After decades of shut up and calculate, the discussion about what really happens, the physical description of the measurement process, is being raised again. What would the situation be today if de Broglie had won the majority at the meetings in 1927!?

Bravo

I perused the Nautilus essay, where we read: "The big questions that were open then are still open today. We still don’t know what dark matter is, we still have not resolved the disagreement between Einstein’s theory of gravity and the standard model of particle physics, and we still do not understand how measurements work in quantum mechanics. How can we overcome this crisis?" I disagree that a "crisis" exists. There is presently no "crisis" in physics. It has (so far) remained an "assumption" that general relativity need necessarily be "quantized." There is a surfeit of theory in present-day physics (which is great: the more, the merrier), yet a lack of experimental findings from which to unambiguously discard many of those theories. I question the efficacy of any attempt to define what constitutes a "big question" in physics. Regarding the measurement problem, Julian Schwinger reminds us: "no independent quantum theory of measurement is required, it is part and parcel of the formalism" and it is "a non-existent problem." (pages 368 and 369, Climbing the Mountain). Now, reading the Nautilus essay: "The detector, after all, is also made of elementary particles, so we should be able to calculate what happens in a measurement" and "abandoning reductionism without proposing a better explanation is not only useless, it is outright anti-scientific." The first commenter (Jeremy Jr. Thomas) correctly invoked Anderson's 1972 essay "More is Different," as Steven Weinberg wrote "reductionism in this sense is not a program for the reform of scientific practice, it is a view of why the world is the way it is." (page 267, To Explain the World). Claiming that reductionism is "scientific" while abandoning it (without proposing a better explanation) is "outright anti-scientific" is an unfounded opinion. As the late Freeman Dyson wrote: "if something is predictable then it is not science." (Heretical Thoughts).

“There is presently no "crisis" in physics.”

- The sad story of physics over the last 40 years is that most of the resources have been spent mainly on string theory and inflation.

- String theory makes no predictions - cannot be falsified - works only with 10 spatial and 1 time dimension - works only if the cosmological constant is negative, while we measure it to be positive - and there are so many versions of this theory that some of them end up in a multiverse, to me the end of science.

- Inflation: so many papers have been written on this subject - it can predict everything = nothing - cannot be falsified - and it ends up in a multiverse.

- Could we not use some of the money spent on string theory and inflation to study other theories - among them a realistic quantum theory?

I very much like the interview with de Broglie from 1967; among other things he says that a theory much less dependent on uncertainty is needed:

https://youtu.be/stRrf4DB_3Y

I quote from Pais, Subtle is the Lord: "From 1859 to 1926, blackbody radiation remained a problem at the frontier of theoretical physics...from the experimental point of view the right answer had been found by 1900..." and yet "in 1905, there still is no revolution..." and "in order to recognize an anomaly, one needs a theory or a rule or at least a prejudice." (page 364, quantum theory-chapter 19). So, we take note that from Kirchhoff (1859) to Schrödinger (1926) there elapsed 67 years. So, merely 40 years of fostering string theory (or inflation theory) is hardly a reason for alarm, especially given its highly abstruse nature. As for the claim that "most resources" have been utilized for those theories during these last 40 years, that is not credible in light of the amount of funding provided for topics such as condensed matter or quantum computation. Sure, the money for strings and inflation could go elsewhere--but, as I time and again ask: who is so sage and confident in their predictions for the "future of physics" that they are more qualified to dole out the funds in a superior manner? Much good physics is being done every day and many good papers are being published every day--as has been happening for the past 40 years. The real, or imagined, "crisis in physics" has not prevented discoveries from marching on. If anything, this is a very exciting period of time to be "doing physics."

The problem with string theory is not so much the 40 years in itself, but the direction of the field over this period. It's hardly going anywhere when it comes to predictions and ways to examine them. And worse, the tendency among string theorists in recent years seems to be the multiverse: the safe place where you cannot be proven wrong, but this place is also outside science.

With “the most money” I meant money to study the foundations of physics.

You should wonder if the structure of doing science could be a reason that so many “choose” string theory. If you want a job and funding, string theory is the safe thing to do. Sad but true.

ReplyDelete"Since you have not spent any time studying the foundations of mathematics,"How do you know?

"Every mathematical result carries specific antecedents that must be fulfilled and every logic carries presupposition."That's correct and is exactly what I wrote in my essay.

"You have arrogantly stated that the only mathematics relevant to you is that which supports your beliefs and goals."I haven't said anything like that and I will not approve any further insults from you, good bye.

The hidden variables might be the variables we all know and love, but with precise and definite values. So the variables are not hidden, but their values are. Quantum mechanics might be something that emerges from the "inexorable phenomenology" of measurement. Compare that with the "ineluctable modality of the visible". Another passing thought: superdeterminism seems to fit neatly into the block universe picture. Another passing thought: taking GR seriously, then any arbitrary sample of spacetime, its precise geometry, determines the rest of the universe as completely as a particle at some specific place and time. Would the various quantum fields be emergent from that inexorable phenomenology of measurement I mentioned? That would have to be the case.

Dr. Hossenfelder;

Forgive my simplicity, but after reading your piece the one description that kept going through my mind was “Everett Interpretation meets Quantum Entanglement.” I am not sure how this sounds, but I believe it is a good thing as it combines two important concepts into one.

Your piece is the first time I have heard about superdeterminism and I found it interesting. I will be learning more about it. I must confess that I am not a big fan of the current crop of quantum interpretations, as they all have universal inclusion. That is, either the wave function exists everywhere or everything and nothing always exists in multiple universes. Superdeterminism follows along this same all-inclusive line. However, it seems to me that there can be some wiggle room for adapting superdeterminism that Copenhagen and Everett do not seem to have. I will admit that I believe “non-locality” is a very important concept, just not in an all-inclusive fashion. And I have the same position as John B.: if only de Broglie and later Bohm had had larger followings.

Speaking of de Broglie and Bohm, you seem to be running into the same “rigid physics acceptance” as they did. I am absolutely fascinated by the amount of defense and personal attacks that come out against anybody or anything that questions the physics status quo. The age of a quantum theory does not automatically make it right, especially since so many years of work have gone into the current theories only to raise more questions than they have answered. Furthermore, nothing new or groundbreaking that can lead to new ideas and directions has been discovered. It would seem to me that the physics community would be eager to find something new to research and investigate, rather than to continue on a path that has yet to yield new direction.

If I may make an observation, it seems that for almost 100 years now the unification of relativity and quantum theory has been the goal of physics. It also appears that the purpose of unification is to provide a deeper understanding of relativity and quantum theory; maybe this is why there is so much pushback on anything new. In any case, isn’t this approach backwards? Should we not be looking to answer ALL of the questions we have about relativity and quantum theory before trying to unify them? If we do not completely understand these things, how can we say they are right in their current form and move forward into something larger? If science cannot answer all of the questions associated with quantum theory and relativity, then we cannot presume that unification would be complete or correct. And here is the bigger kicker: what rule, theory, or law of physics says that there has to be unification? It seems to me that pushing for something that is not required, by uniting things we do not completely understand, is a recipe for failure.

Having said this I would like to pose a question, it is a simultaneous measurement question. If I have two measuring devices separated by a large distance, under the current wave theory a free particle will exist at both of these devices until it is measured. What if both devices take a measurement at exactly the same time relative to the particle? Which device sees the particle and which one does not? How can it be proven that this cannot occur, and what happens if it does occur? It seems that under superdeterminism this would not be a problem.

Currently physics seems to have many questions and not enough answers. It is time to look at all options, not simply write them off because they do not comply with the currently accepted beliefs. It appears to me that the physics trench continues to be dug forward. Thanks for trying to crawl out of it and for the new information Dr. H.

With superdeterminism, can you link two points in the past and future so that a closed timelike curve becomes feasible? If you don't have the intrinsic indeterminism of quantum mechanics, such an arrangement becomes possible. Also, you cannot change the future leading to the closed curve, as that would lead to a paradox. The universe would become a deterministic block, with handles.

On the subject of the "uncomputable",

Practical intractability: a critique of the hypercomputation movement

Aran Nayebi

https://arxiv.org/abs/1210.3304

argues that for the practical programmer, computer engineer, or scientist, it really is not a useful concept.

Though "a computer operating in a Malament–Hogarth spacetime or in orbit around a rotating black hole could theoretically perform non-Turing computations for an observer inside the black hole."

Wikipedia:Hypercomputation

Philip, that is gorgeous! Great find!

Same lines of computational-field thought as Sabine's paper on applicability of certain types of metamaths to Physics!

Philip Pearle wrote a paper, Models for Reduction, which reads: "the great success of quantum theory suggests making a minimal alteration in the theory, only invoking those changes that the single-system re-interpretation of the meaning of the state vector requires." Also read the paper Constructing a Bit-String Universe: "Mathematics has to face the challenge of whether or not its theorems are computable--if not, why not?--or, the relevance of the question and whether a 'proof by computer' that cannot be checked step-by-step by humans is a 'proof'." These papers are from 1984, published in Quantum Concepts in Space and Time (edited by Penrose and Isham, Oxford). My purpose in drawing attention to these papers is to state the obvious: these questions have been thought about for many years!

I noticed in reading (the superdeterminist) The Cellular Automaton Interpretation of Quantum Mechanics, Gerard 't Hooft, arXiv:1405.1548,

"In adopting the CAI, we except the idea that all events in this universe are highly correlated, but in a way that we cannot exploit in practice. It would be fair to say that these features still carry a sense of mystery that needs to be investigated more, but the only way to do this is to search for more advanced models."

there is the word "except" vs. "accept" (in the first line) - which is strange. :)

guessing it should have been "expect"

Once again you impress me with a link, thanks, Thrift! Chances are it (and 't Hooft) are way out of my league, but it resonates with me in the parts I understand. I will take my time reading and enjoying it.

Totally off topic...

Your name is popping up everywhere!

https://mindmatters.ai/2020/03/meet-the-serious-panpsychists/

There might be a way to produce the effects of superdeterminism without invoking determinism at all. Start with a block universe and suppose that one would like to construct a copy of that block universe. (Obviously hypothetical, but what the heck?) Superdeterminism would only require some arbitrary subset, and the rest of the universe is "determined". But we could also ask: where would one find the information required to produce a copy of some block universe? The answer would be the same: an arbitrary subset, and the information in that subset would be such that it would be consistent ONLY with the block universe of which it is a part. You should think of a hologram here, in which you can cut the hologram into pieces and each piece will reproduce the entire hologram, albeit with a loss of resolution.

The only thing at work here is a principle of consistency that occurs by default: if information in the block universe is distributed in this way, any subset will serve to represent the whole, but ONLY the whole of which it is a part. Yeah, some Bohmian implicate order here. But determinism implies causality, and what does that mean in a block universe? What we would expect is consistency, and the need for that is purely logical. A universe consistent in this sense is the only kind that can exist. And I'm not suggesting that the block universe is a hologram, only that information is distributed within that universe in the manner of a hologram.

A practical question might be whether there would really be a problem with a loss of resolution: if, say, one had access to all the information contained in an arbitrary subset of the block universe, could one model the entire block with perfect accuracy? It would also follow that having access to all of the information in any subset would enable you to model any other subset--easily done if the entire block universe is encoded in an arbitrary subset. In this view, the issue of causality is disposed of.

ReplyDeleteLawrence Crowell write:

"The answer to an undecidable problem could come from a time machine if one sends the results back in time to yourself later."

You still have to verify the answer; if you don't, then there is nothing to ensure that your "answer" isn't just random garbage. And if you get an answer from the future saying that program P does not halt, how do you verify this?

As far as I can tell, having access to a time machine at most lets you decide any problem that is in the intersection of NP and co-NP in polynomial time.
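The asymmetry between the two kinds of answer can be made concrete with a small sketch (a hypothetical illustration of my own, not from the discussion above): a "yes" answer to an NP problem such as SAT can be checked efficiently if it arrives together with a witness, whereas a claim that a program does not halt comes with no finite certificate one could verify.

```python
# Illustrative sketch: NP answers are checkable when they come with a witness.
# A claimed satisfying assignment for a Boolean formula in CNF can be verified
# in time linear in the size of the formula; a "does not halt" claim has no
# analogous finite certificate. (Hypothetical example names.)

def verify_sat_certificate(clauses, assignment):
    """clauses: list of clauses, each a list of nonzero ints; literal k means
    variable |k| is True if k > 0 and False if k < 0.
    assignment: dict mapping variable number -> bool."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]
print(verify_sat_certificate(formula, {1: True, 2: True, 3: False}))   # True
print(verify_sat_certificate(formula, {1: False, 2: True, 3: False}))  # False
```

The point is that a time machine handing you "satisfiable" plus an assignment gives you something you can check locally; a bare "does not halt" does not.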

I read some years ago that P = NP is true for closed timelike curves. I will have to look this up and find the paper reference. It was a decent paper. The extension to all PSPACE and undecidable propositions is of course difficult to prove explicitly. However, a spacetime that permits CTCs will present Cauchy horizons, and in principle an observer can in a finite time verify whether a Turing machine halts or does not halt, even if the proper time of that TM is infinite. This is of course an in principle argument.

It is potentially interesting, in the context of P = NP vs P ≠ NP, whether this result really does mean this is an undecidable proposition. P = NP appears true in a spacetime with CTCs, such as AdS or wormholes and so forth. We have no knowledge of whether P = NP can hold in our more normal dS-like spacetime with positive vacuum energy.

If Superdeterminism leads to less random/more predictable experimental outcomes than seen in standard QM, due to the action of hidden variables postulated in this model, then perhaps this could be a factor in unusual premonitory dreams that seem to defy statistical odds. The idea here is that the subconscious ‘mind’ of a human, animal, or maybe even a microbe, is somehow privy to information from these postulated hidden variables that give it a leg up on issues important to it; like finding food or evading danger – or winning the lottery! This isn’t to say that such a (hypothetical) facility would always guarantee a successful outcome, as even in superdeterminism there is inherent randomness, not absolute certainty.

Last Thursday I had such a rare, seemingly premonitory dream, as I described on the “Is Gravity a Force?” post at 10:41 AM, March 13, 2020, responding to a brief mention of the role of consciousness in the Universe by PhysicistDave, upthread at 5:00 AM, March 12, 2020. The setting of the dream had nothing to do with a lottery, but four numbers were ‘spoken’ to me in the dream. The details are over at the other thread, but suffice it to say three of the numbers in a local New Hampshire lottery game that evening matched the numbers in the dream. The bummer is I didn’t have reading glasses with me to see the fine print on the card you need to fill out. Also, which I didn’t mention in the other post, I ended up buying a second ticket, as I wasn’t sure of the order of the numbers ‘spoken’ to me in the dream. Thus I purchased a ticket with the sequence that interchanged the first two digits with the last two digits, so the ticket read 2898. Alas, the second ticket would have yielded a 50 dollar win had I played all the 2 digit combinations.

Had another ‘numbers dream’ overnight during the 22/23 of March. This time the venue was more relevant – a card game in which a 4 of spades and a 5 of spades were vividly displayed, with other cards in the background being out of focus. Once again these numbers matched the numbers that came up in the New Hampshire’s Pick 4 evening game. If I’m not mistaken the odds of guessing two of four numbers, with each number having a range of ten (0 to 9), are 50 to 1, or 2%.
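For what it's worth, the quoted odds depend on how a "match" is counted. A quick back-of-the-envelope sketch (my own check, under one possible reading: the two dream digits merely have to appear somewhere among four independent draws from 0-9) gives roughly 10%, by inclusion-exclusion and by Monte Carlo:

```python
# Back-of-the-envelope check of the quoted odds, under one reading:
# the two digits (here 4 and 5) must each appear somewhere among four
# independent draws from 0-9. Exact inclusion-exclusion vs. Monte Carlo.
import random

def exact_both_appear():
    # P(both appear) = 1 - P(no 4) - P(no 5) + P(neither 4 nor 5)
    return 1 - 0.9**4 - 0.9**4 + 0.8**4

def simulate(trials=200_000, seed=1):
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if {4, 5} <= set(rng.randrange(10) for _ in range(4))
    )
    return hits / trials

print(round(exact_both_appear(), 4))  # 0.0974, i.e. about 9.7%
```

If the two digits instead had to match two specific positions, the chance would be (1/10)^2 = 1%; the anecdote leaves the counting ambiguous, so neither reading settles the "2%" figure.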

As noted above, Superdeterminism improves predictability somewhat over standard QM, and if some of Earth’s earliest life forms could somehow tap into the hidden variables that underlie Superdeterminism’s enhanced future predictability, natural selection might tend to favor the survival of these organisms, ultimately passing this ability on to more complex life forms. The rub is that laboratory experiments probing the quantum realm are conducted near absolute zero to prevent decoherence, whereas biological organisms function far above this temperature, so any large-scale coherence would seem to be ruled out.

But, then again, maybe living organisms do utilize quantum effects at ambient temperatures as discussed in the article linked below. I didn’t really read it, just skimmed over it, as I’m eager to polish off the remaining 86 miles needed to reach a goal of 300 cycling miles this month, and the bright sun is finally melting our snowpack.

https://www.the-scientist.com/features/quantum-biology-may-help-solve-some-of-lifes-greatest-mysteries-65873

Sabine,

In the Nautilus article you say "Therefore, we do not believe that probing smaller and smaller distances with bigger and bigger particle accelerators will help solve the still-open fundamental questions." Do you have suggestions on where to look (CMB correlations, superfluid/BEC dynamics, ...) or do we need more theoretical work first?

HELP! HELP! HELP!!!

https://en.wikipedia.org/wiki/Superdeterminism

So many wrong things, I wouldn't know where to start!!!

(In a delightful family off-topic: right in the first few minutes of "The Plot Against America" the other day, there were the lines "Apartments?" and the answer from the husband, "Single family house"...

That is my house. I made and painted the gate in the back of the driveway, where the kid runs! There were 45-50 techs behind the house, and hundreds of thousands of dollars in machinery. We were counting on about 15 seconds of screen time!)

Of the algorithmic incompleteness of the present tense and Superdeterminism

Let’s move ourselves into SE Code as a state space for a few minutes. Your present consists of fetching the next number on a random list, SE Code classifying it by number of bits, sending it to its 2^(number of bits) neighborhood, testing the next bits, adjusting the final address correction, and storage.

It’s extremely important to ask at this point whether there exists a possibility of free will for “you”.

Denied. You follow, in your eternal unmodifiable present tense, the pre-written orders from an algorithm while IT approaches its final future data order, and therefore the end of its timeline.

The other name for that is “Universal Conspiracy”. “Eternal damnation” is fine with me too.

And that is what superdeterminists do on weekends: we plan the end of the world, gathered in our secret masked balls where the food and wine are lavish.

Unless…

Unless there is something wrong with this picture.

Dear Dr. Hossenfelder,

I was discussing superdeterminism and your paper with someone on an online forum. He brought my attention to a paper which he said proves that a superdeterministic theory would necessarily be completely uncomputable, in the sense that it would yield no equation at all.

This is the paper: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.118.130401

Have you read that paper before, and what would be your response to it, if it is relevant to your work?

The second sentence of the abstract is wrong. A superdeterministic model is, of course, distinguishable from quantum mechanics. If it were not, what would be the point of even thinking about it? We explained this very clearly in our paper. I recommend you read it, because we made a lot of effort to answer all the common questions.

Dr. Hossenfelder,

I did read your paper, and it also seemed to me, from my amateur perspective, that you were talking about a different class of models. Namely, the paper I linked seems to be talking about nonlocal deterministic models:

"In this article we consider the class of deterministic models for nonlocal correlations in which the hidden variables"

Whereas you are clearly after a local superdeterministic model. But the person I was talking to answered that:

"The formulation is generic enough that the "hidden signalling" includes nonlocal, retrocausal and superdeterministic theories. The point is that nonlocality, retrocausality or superdeterminism should give you access to much larger computational power than seems to actually exist."

I don't have enough background to assess the claim that the proof from the paper I linked is generic enough to include your own model, which is why I asked for your input. I thought that such discussion could only lead to positive research outcomes.

What do you think about the above claim?

experience,

I will look at the paper, though the abstract makes me think the authors don't know what they are talking about to begin with.

Having said that, it is almost certainly correct that if you replace quantum mechanics with a more fundamental, deterministic theory that allows you to make more predictions, then this will increase computational power (of, say, a quantum computer). I don't see how this is surprising.

Also, one has to be very careful in superdeterministic models about what one even means by information transfer. A measurement outcome does not actually transmit information from a causally disconnected location; it just makes apparent information that was there all along, albeit in a practically useless form.