*[Tim Palmer is a Royal Society Research Professor in Climate Physics at the University of Oxford, UK. He is only half as crazy as it seems.]*

[Screenshot from Tim’s public lecture at Perimeter Institute]

Our three great theories of 20th Century physics – general relativity theory, quantum theory and chaos theory – seem incompatible with each other.

The difficulty of combining general relativity and quantum theory into a common theory of “quantum gravity” is legendary; some of our greatest minds have despaired – and still despair – over it.

Superficially, the links between quantum theory and chaos appear to be a little stronger, since both are characterised by unpredictability (in measurement and prediction outcomes respectively). However, the Schrödinger equation is linear and the dynamical equations of chaos are nonlinear. Moreover, in the common interpretation of Bell’s inequality, a chaotic model of quantum physics, since it is deterministic, would be incompatible with Einstein’s notion of relativistic causality.

Finally, although the dynamics of general relativity and chaos theory are both nonlinear and deterministic, it is difficult to even make sense of chaos in the space-time of general relativity. This is because the usual definition of chaos is based on the notion that nearby initial states can diverge exponentially in time. However, speaking of an exponential divergence in time depends on a choice of time-coordinate. If we logarithmically rescale the time coordinate, the defining feature of chaos disappears. Trouble is, in general relativity, the underlying physics must not depend on the space-time coordinates.

So, do we simply have to accept that, “What God hath put asunder, let no man join together”? I don’t think so. A few weeks ago, the Foundational Questions Institute put out a call for essays on the topic of “Undecidability, Uncomputability and Unpredictability”. I have submitted an essay in which I argue that undecidability and uncomputability may provide a new framework for unifying these theories of 20th Century physics. I want to summarize my argument in this and a follow-on guest post.

To start, I need to say what undecidability and uncomputability are in the first place. The concepts go back to the work of Alan Turing who in 1936 showed that no algorithm exists that will take as input a computer program (and its input data), and output 0 if the program halts and 1 if the program does not halt. This “Halting Problem” is therefore undecidable by algorithm. So, a key way to know whether a problem is algorithmically undecidable – or equivalently uncomputable – is to see if the problem is equivalent to the Halting Problem.
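Turing's diagonal argument is short enough to sketch in code. In the following Python sketch (an illustration, not anything from the essay), `halts` is the hypothetical decider whose existence is assumed for contradiction, and `contrarian` is the program that defeats it:

```python
# Hypothetical decider assumed to exist for the sake of contradiction;
# Turing showed no real implementation is possible.
def halts(prog, data):
    """Return True iff prog(data) would halt -- cannot actually be written."""
    raise NotImplementedError("no such algorithm exists")

def contrarian(prog):
    # Do the opposite of whatever the oracle predicts about prog(prog).
    if halts(prog, prog):
        while True:          # oracle says "halts"?  Loop forever.
            pass
    return "halted"          # oracle says "loops"?  Halt immediately.

# contrarian(contrarian) is contradictory either way: if it halts, the
# oracle must have said it loops, and vice versa -- so `halts` cannot exist.
```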

Let’s return to thinking about chaotic systems. As mentioned, these are deterministic systems whose evolution is effectively unpredictable (because the evolution is sensitive to the starting conditions). However, what is relevant here is not so much this property of unpredictability, but the fact that no matter what initial condition you start from, there is a class of chaotic system where eventually (technically after an infinite time) the state evolves on a fractal subset of state space, sometimes known as a fractal attractor.

One defining characteristic of a fractal is that its dimension is not a simple integer (like that of a one-dimensional line or the two-dimensional surface of a sphere). Now, the key result I need is a theorem that there is no algorithm that will take as input some point *x* in state space, and halt if that point belongs to a set with fractional dimension. This implies that the fractal attractor *A* of a chaotic system is uncomputable and the proposition “*x* belongs to *A*” is algorithmically undecidable.

How does this help unify physics?

Firstly, defining chaos in terms of the geometry of its fractal attractor (e.g. through the fractional dimension of the attractor) is a coordinate-independent and hence more relativistic way to characterise chaos than defining it in terms of exponential divergence of nearby trajectories. Hence the uncomputable fractal attractor provides a way to unify general relativity and chaos theory.
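As an aside, a fractional dimension is straightforward to estimate numerically. The sketch below (my own illustration, not from Palmer's essay) computes a box-counting dimension estimate for the middle-thirds Cantor set, whose dimension is log 2 / log 3 ≈ 0.63; integer arithmetic is used so that box assignment is exact:

```python
import math

# Box-counting dimension of the middle-thirds Cantor set.  Work in
# integer units of 3**-depth so that box assignment is exact.

def cantor_endpoints(depth):
    # Integer left endpoints (in units of 3**-depth) of the 2**depth
    # intervals surviving at construction stage `depth`.
    pts = [0]
    for k in range(1, depth + 1):
        w = 3 ** (depth - k)              # sub-interval width in these units
        pts = [p for q in pts for p in (q, q + 2 * w)]
    return pts

def box_count(pts, depth, k):
    # Number of boxes of side 3**-k that contain a point of the set.
    return len({p // 3 ** (depth - k) for p in pts})

depth = 10
pts = cantor_endpoints(depth)
# Dimension estimate: slope of log N(eps) against log(1/eps) at eps = 3**-depth.
est = math.log(box_count(pts, depth, depth)) / math.log(3 ** depth)
print(round(est, 3))   # -> 0.631, i.e. log(2)/log(3)
```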

That was easy! The rest is not so easy which is why I need two guest posts and not one!

When it comes to combining chaos theory with quantum mechanics, the first step is to realize that the linearity of the Schrödinger equation is not at all incompatible with the nonlinearity of chaos.

To understand this, consider an ensemble of integrations of a particular chaotic model based on the Lorenz equations – see Fig 1. These Lorenz equations describe fluid dynamical motion, but the details need not concern us here. The fractal Lorenz attractor is shown in the background in Fig 1. These ensembles can be thought of as describing the evolution of probability – something of practical value when we don’t know the initial conditions precisely (as is the case in weather forecasting).

In the first panel in Fig 1, small uncertainties do not grow much and we can therefore be confident in the predicted evolution. In the third panel, small uncertainties grow explosively, meaning we can have little confidence in any specific prediction. The second panel is somewhere in between.
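The qualitative behaviour of such ensembles is easy to reproduce. The following minimal Python sketch (my own illustration, using the standard Lorenz parameters σ=10, ρ=28, β=8/3 and simple forward-Euler integration) evolves an ensemble of slightly perturbed initial conditions and reports how far the members have spread:

```python
import math
import random

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations.
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def ensemble_spread(center, n=50, eps=1e-3, steps=2000, dt=0.005):
    # Evolve n copies of `center`, each perturbed by up to +/- eps,
    # and return the r.m.s. distance of the members from their mean.
    rng = random.Random(0)
    members = [tuple(c + rng.uniform(-eps, eps) for c in center)
               for _ in range(n)]
    for _ in range(steps):
        members = [lorenz_step(m, dt) for m in members]
    mean = tuple(sum(m[i] for m in members) / n for i in range(3))
    return math.sqrt(sum((m[i] - mean[i]) ** 2
                         for m in members for i in range(3)) / n)

# A tiny initial uncertainty (1e-3) grows by orders of magnitude
# over 10 model time units:
print(ensemble_spread((1.0, 1.0, 1.0)))
```

How fast the spread grows depends on where on the attractor the ensemble starts, which is exactly the difference between the panels described above.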

Now it turns out that the equation which describes the evolution of probability in such chaotic systems, known as the Liouville equation, is itself a linear equation. The linearity of the Liouville equation ensures that probabilities are conserved in time. Hence, for example, if there is an 80% chance that the actual state of the fluid (as described by the Lorenz equation state) lies within a certain contour of probability at initial time, then there is an 80% chance that the actual state of the fluid lies within the evolved contour of probability at the forecast time.

The remarkable thing is that the Liouville equation is formally very similar to the so-called von Neumann form of the Schrödinger equation – too similar, in my view, to be a coincidence. So, just as the linearity of the Liouville equation says nothing about the nonlinearity of the underlying deterministic dynamics which generates such probabilities, so too the linearity of the Schrödinger equation need say nothing about the nonlinearity of some underlying dynamics which generates quantum probabilities.

However, as I wrote above, according to the common interpretation of Bell’s theorem, a chaotic model, being deterministic, would have to violate relativistic causality, seemingly thwarting the aim of trying to unify our theories of physics. At least, that’s the usual conclusion. However, the undecidable, uncomputable properties of fractal attractors provide a novel route that allows us to reassess this conclusion. I will explain how this works in the second part of this post.

Madness!

I am wondering whether there is a relation between the above post and this incomputability result:

https://www.nature.com/articles/d41586-020-00120-6

"MIP*=RE"

Almost certainly. Unfortunately I don't know what the relation is...

I am not sure if it makes sense to put chaos theory in a unification scheme. Chaos is about emergence from basic laws while QM and GR are part of the basic laws. This is like unifying biochemistry with QM and GR.

Daniel,

(a) Chaos is not "about emergence from basic laws", chaos is a behavior found in some non-linear systems, regardless of whether these are emergent or not. But more importantly

(b) the whole point of what he's saying is that QM is emergent.

What he's saying – the whole point, that QM is emergent – is as close as makes very little difference to what 't Hooft, Wolfram, and not a few others have been saying for 20 years. I'm not competent enough with chaos theory to see whether Tim's approach is compelling enough to carry the day where other attempts have not, though to me his approach seems perhaps more tenable; in any case, I think this needs approaches from multiple directions.

Where I think this has a problem is that any claim of "emergence" is a bottom-up approach with (with apologies for saying so) hand-waving. What I think is more likely to open the door to this kind of emergence is a top-down approach to classical theory that works at the measurement theory level, just as quantum theory does. I am of course talking about my "An algebraic approach to Koopman Classical Mechanics", or about whoever comes after me who puts the argument better than I can, but I see my approach as complementary to Tim's bottom-up approach, not as replacing it.

BTW, Hi, Tim! I remember you quite well from UK foundations of physics conferences of about 20 years ago.

Both 't Hooft and Wolfram discretize space and time which, if you ask me (though you didn't), is a non-starter.

Fair enough! I always work with manifestly Poincaré invariant models myself. If we work with an uncountable number of degrees of freedom, then quite a few analytic issues appear, but we don't have to worry about a lattice or an otherwise very-many-body theory. Tim's discussion here seems to be at the level of finite numbers of degrees of freedom.

DeleteSabine wrote:

"(b) the whole point of what he's saying is that QM is emergent."

If you are familiar with his work you would know that, but I did not see it mentioned anywhere in this post. Without context this post is very confusing.

Maybe he should add a part 0 that explains what he is trying to do.

I would think that people who come here and leave critical comments on somebody else's work have a look at that work first instead of blaming a blogger that a blogpost isn't a paper.

I think it's reasonable to have a summarization or more conversational presentation in a blogpost than in a paper, but if something could be described as "the whole point of what he's saying," that should make it into the post!

"the key result I need is a theorem that there is no algorithm that will take as input some point x in state space, and halt if that point belongs to a set with fractional dimension."

1. Could you give a more accessible reference for this theorem? The link is to a book, with no indication of where in the book the theorem is presented.

2. I'm puzzled by this statement, as it seems vacuous. I'm assuming you're talking about Type II computability, which is the appropriate notion of computability for domains like the reals. But then the question "is x an element of the set A" is undecidable for *any* nontrivial A that is a subset of R^n, as this amounts to a discontinuous function with range {0, 1}, and discontinuous functions are uncomputable.

Not sure I can follow. Are you saying you don't know if 0.5 lies in the interval {0,1}?

I am quickly reviewing that book. Note that it presents a different model of computation (real computation) from the standard one, with different uses of common terms like "decidable". So an exact reference within the book would be helpful, to check which definitions apply.

Sabine, I think that Kevin means that {0,1} is the two-element set, not the interval [0,1].

In Type II computability a real number is represented as an infinite nested sequence of rational intervals converging to the desired number. You can get a rational approximation to a real number *x*, to any desired degree of accuracy *epsilon*, by stepping through the sequence representing *x* until you come to an interval of length less than *epsilon*. A computable function on the reals is a process that takes one of these infinite sequences as input and produces another one as output, with any finite initial subsequence of the output being determined by some finite initial subsequence of the input (to make the computation physically realizable). (A slight generalization of this scheme applies to any uncountable domain on which a topology is defined.) Weihrauch's book *Computable Analysis* is a standard reference for this model.

You can then see the problem of answering the question, "Does *x* belong to [0,1]?". If *x* happens to be a value in the interior of the interval, everything is fine -- one of the bounding intervals will lie entirely within [0,1]. But if *x* turns out to be exactly equal to 0 or 1 -- say, you're computing *x* = tan(pi/4) -- then *every one* of the bounding intervals will include points within [0,1] and points outside of [0,1], and the inclusion test will never terminate.

In any event, after following the link to Palmer's essay I found that he is using a *different* notion of computability, due to Blum, Shub, and Smale, in which infinite precision real constants can be produced in constant, finite time, and infinite precision arithmetic operations can be done in constant, finite time. Many more things are computable in the BSS model than in Type II computability, and likewise uncomputability is a much stronger property in the BSS model.

Let me amend that last comment: some things are Type-II computable but not BSS-computable, and some things are BSS-computable but not Type-II computable. Trigonometric functions are examples of functions that are Type-II computable but not BSS-computable. So I'm still not sure that saying that a fractal set is not (BSS-)decidable is all that interesting, as it's not clear to me that even the set { (x,y) : y = sin(x) } is BSS-computable.

BTW, in Type-II computability one deals with the undecidability of nontrivial set membership by instead computing the minimal distance from a point to the set. The obvious question is then whether that function is Type-II computable.
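To illustrate the point about boundary values, here is a minimal Python sketch (an illustration of the idea, not Weihrauch's formal machinery) of the interval-enclosure membership test: an interior point is decided from the very first enclosure, while a boundary point straddles an endpoint at every stage and the test never terminates (the stream is capped to make this observable):

```python
from fractions import Fraction

# Interval-enclosure test "does x belong to [0, 1]?": x is given only
# as a stream of nested rational intervals that enclose it.

def membership(enclosures, lo=Fraction(0), hi=Fraction(1)):
    """Return True/False once an enclosure decides x in [lo, hi];
    return None if every supplied enclosure straddles an endpoint,
    i.e. the test failed to terminate within the stream given."""
    for a, b in enclosures:
        if lo <= a and b <= hi:
            return True       # enclosure lies entirely inside [lo, hi]
        if b < lo or hi < a:
            return False      # enclosure lies entirely outside
    return None               # still straddling an endpoint -- undecided

def nested_intervals(x, n=60):
    # Width-2**-k rational enclosures of an exactly known rational x.
    # (For a boundary value like x = 1 they straddle the endpoint forever.)
    return ((x - Fraction(1, 2 ** k), x + Fraction(1, 2 ** k))
            for k in range(1, n + 1))

print(membership(nested_intervals(Fraction(1, 2))))  # interior point -> True
print(membership(nested_intervals(Fraction(1))))     # boundary point -> None
```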

The points made by Kevin above are the reason why Tim Palmer's reference to the BSS book (and thus presumably its model of computation) is a bit problematic as a reference for physics computation. Normally Turing computation is what should be referred to. If any other model of computation is playing a role in the fractal argument (whether Type-II or BSS) then that needs to be made much clearer.

As Kevin says, in these other models familiar terms like "computable" and "decidable" – and hence "non-computable" and "undecidable" – now have different meanings.

It might still be true that (some) fractal sets are undecidable in the Turing sense, however, and there may be some results of that type around.

Just to be clear, Type-II computability *is* reducible to standard finitary computability in the Turing sense: it is defined in terms of Turing machines, and if you consider the processing of infinite sequences to be problematic, there is an equivalent definition of "Type-II computable" that only involves finite computations, with functions having an extra argument for the desired degree of precision, and real numbers represented as functions whose argument is also a desired degree of precision.

It seems to me impossible that one could ever produce infinite-precision reals in constant time. One would have to be doing analog computing in a non-quantum universe, or merely computing with a different radix or representation than one uses to write the real down (e.g., storing the result as a ratio).

Good luck on your essay at FQXI. I was not going to submit anything, but then was informed of this and the topic. The possible connection between quantum mechanics and Gödel’s theorem has been of considerable interest to me for years, though I have sidelined this because of the umbrage received over it. I will have an article here in a week or two after all. Look at Szangolies’ paper. His approach, which I have been familiar with prior to this essay contest, involves a form of epistemic horizon on any sort of Spekkens type of system.

My observation with the idea of the fractal set of orbits within orbits U_I is that while this can fragment enormously, as a fractal set it is recursively enumerable. This is not anything undecidable. Chaotic systems, such as punctured tori in KAM theory, lead to Cantori, which are fractal or Cantor sets that the dynamics lives on. Recursive sets or algorithms have complements that are recursive. Recursively enumerable sets or algorithms have complements that are incomputable. So being a fractal set alone does not make this incomputable.

The Gödel incompleteness comes with the p-adic number system used to define a field over this set or Cantor set. The set of frequencies or periodicities means this Cantor set in a certain “limit” has an unbounded set of primes for p-adic number systems. Then enter Martin Davis, Hilary Putnam, Julia Robinson and, in particular, Yuri Matiyasevich. Hilbert asked in his 10th problem whether all Diophantine equations could be solved by a single method. Diophantine equations were found to be equivalent to p-adic sets, and subsequently Yuri Matiyasevich proved these sets were not computable by a single method. This is a Gödel incompleteness result. Diophantine equations, which are associated with the nested frequencies or periodicities of orbits, can be solved locally, but there is no global solution method. This is a form of Szangolies’ epistemic horizon. The Cantor set here then has some undecidable properties in the p-adic setting, where any global field of numerical operations in a p-adic setting is incomplete. This is a part of the paper I hope to submit in the next week or so.

Perhaps I'm missing something--and I admit the article is a bit over my head--but if the fractal nature of the state space only manifests as t -> infinity, and we rely on that fractal nature to prove something about the real world, does that really tell us anything? How do we know that this mathematical manipulation translates into anything meaningful in the real world?

Q,

It's a very good question, and indeed a concern I have had too. Two things. One is that he isn't saying "the fractal nature of the state space only manifests as t -> infinity" but that the set of allowed states is *always* that fractal; it's just that its structure is uncomputable (because computing it would amount to integrating a function for an infinite amount of time).

Now you can ask what's the difference between having a continuous space and an arbitrarily dense yet discrete set in that space. Does this ever have any observational consequences?

The answer to this, as I gather from his papers, is yes because certain combinations of measurements correspond to states that are not on the set. This is basically where the violation of statistical independence comes from (and also the non-commutativity of observables): The probability of certain combined measurements is zero simply because the state is not on the set and hence isn't physically real (there is no time-evolution that will get you there).

Still, you can say, well that's just normal quantum mechanics. So you have exchanged normal quantum mechanics by some fancy fractal something just to get back the same thing. Why bother?

To answer this question let me just give you my personal perspective, which is that this theory ultimately must make predictions that differ from quantum mechanics simply because it is local and deterministic. Fractal or not, in a deterministic theory identical initial states will have identical measurement outcomes. This is not the case in quantum mechanics.

The fractal set is a particular ontology to realise superdeterminism. It may or may not be ultimately the correct one, I don't know. I don't have a strong opinion on that. But in any case, I think with some more work it may become a useful phenomenological model and that is really my main interest in this idea.

(Devilishly Advocating Department)

"Fractal or not, in a deterministic theory identical initial states will have identical measurement outcomes":

Yes, but you can't *measure* the present tense twice in a row and get the same result. If you could, there would be a trivial block of eternal present-tenses between Aleph Null and Aleph One... except the present tense is always different (more importantly, always a different content), which makes it not Aleph Null nor Aleph One.

Can't exactly develop the thought for the foreseeable future, but I suspect the reach and breadth of those "initial conditions" might be being overestimated.

Any future to these musings?

The simplest Bell-like inequality is Pr(A=B)+Pr(A=C)+Pr(B=C)>=1 for 3 binary variables, intuitively "tossing 3 coins, at least two are equal", but QM formalism allows to violate it (e.g. https://arxiv.org/pdf/1212.5214 ).
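The classical side of this inequality can be checked by brute force. A quick Python sketch (my own illustration) enumerating all eight definite-value assignments confirms the bound that any distribution over such assignments must satisfy:

```python
from itertools import product

# For any assignment of definite values to three binary variables A, B, C,
# at least one pair must agree.  Any probability distribution over such
# assignments therefore gives Pr(A=B) + Pr(A=C) + Pr(B=C) >= 1.

def agreements(a, b, c):
    return int(a == b) + int(a == c) + int(b == c)

sums = [agreements(*abc) for abc in product((0, 1), repeat=3)]
print(min(sums), max(sums))   # -> 1 3 : every assignment has an agreeing pair
```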

How fractal nature could explain that - lead to its violation?

Generally, isn't Lagrangian formalism e.g. in classical mechanics (leading to chaos) or Standard Model realistic (existence of state/field) and local (finite propagation speed)?

So does Bell theorem forbid the use of e.g. Standard Model?

I wouldn't conclude that, for example we can also violate above inequality with Ising model ( https://physics.stackexchange.com/questions/524856/violation-of-bell-like-inequalities-with-spatial-boltzmann-path-ensemble-ising ) - isn't Ising model local and realistic?

Jarek,

You find that explained in Tim's papers, which are on the arXiv and not hard to find. To make a long story short, the requirement that allowed states have to lie on the fractal subset enforces correlations between the prepared state and the detector, which violates statistical independence. Statistical independence is one of the assumptions you need to derive Bell's inequality and all related inequalities. Without it, they're easy to violate.

The standard model is a quantum field theory. It doesn't work with a principle of least action, it works with a path integral, but that isn't the relevant factor to consider when it comes to violations of Bell's inequality. Bell's inequalities are all about computing measurement outcomes. And that works the same in quantum field theory as it does in quantum mechanics.

I don't understand your issue with the Ising model, sorry.

Sabine,

Thank you, I will have to take a closer look, but generally we have inequalities derived from looking natural assumptions, which are violated by physics.

So the main goal here is pointing the problem with at least one of these assumption and modify it to make it agree with physics.

Sure e.g. Lorentz attractor discussed here is very complex, has fractional dimension ... but is a consequence of classical mechanics and its assumptions - so which of them disagrees with those of Bell's theorem?

If such classical mechanics system was solved with the least action principle, it has a bit different locality: which is time symmetric.

From QM perspective, we have such symmetric locality in Feynman path integrals (or TSVF).

From QFT perspective like the Standard Model, we have such (CPT) symmetric locality in Feynman diagram ensemble - including more complex scenarios like decay.

Ising model is Boltzmann sequence ensemble: mathematically Wick-rotated Feynman path integrals, now in space instead of time.

This mathematical similarity allows to also get e.g. Born rules, allowing for Bell violation construction, now with complete understanding.

If you don't like my construction, here is another Ising Bell violation paper: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.123.170604

Jarek,

I am terribly sorry for having to say this but your English grammar is so faulty it is incomprehensible. I simply cannot parse it.

Jarek,

I don't see the point of the PRL paper you cite: everyone knows you can get violations of Bell's inequality with only two spin-1/2 particles.

So, if you have a very large number of spin-1/2 particles, of course you will be able to get some sort of Bell-inequality violation. Why is that news?

Also, you wrote:

>So does Bell theorem forbid the use of e.g. Standard Model?

The physics used to get Bell-inequality violations, whether with photons or spin-1/2 particles, is just using (a tiny bit of) the Standard Model and so is certainly not inconsistent with the Standard Model. QED is part of the Standard Model.

Why do you think otherwise?

By the way, I hope you saw my comments on your paper in an earlier thread.

Dave

Dave,

I don't understand a need for restring e.g. to spin-1/2 regarding Bell theorem?

It shows a much more general issue: there are inequalities derived from natural assumptions, like realism, locality, Kolmogov probability theory.

The problem is that physics allows to violate them - therefore, physics does not satisfy at least one of these assumptions.

So the task is pointing and repairing the incorrect assumption used to derive such inequalities.

And locality is usually suggested as the suspected assumption.

So is Lagrangian mechanics e.g. the Standard Model a local theory?

Also, to find the incorrect assumption, it would be beneficial to have other models also allowing to violate such inequalities.

Like Ising model - which from one side is generally seen as realizable in condensed matter physics, from the other uses analogous mathematics: Boltzmann sequence ensemble as Wick-rotated Feynman path ensembles.

Jarek

ps. I was not able to find any your answer I did not reply to (?)

Jarek wrote:

>I don't understand a need for restring e.g. to spin-1/2 regarding Bell theorem?

Well, the original violations of the Bell inequality involve either photons or spin-1/2 particles. If you want to convince a lot of us that you can reproduce the relevant violations of Bell's inequality, you need to deal with that.

Jarek also wrote:

>So is Lagrangian mechanics e.g. the Standard Model a local theory?

The Standard Model is most assuredly "local" in the sense of the word used by quantum field theorists (has to do with (anti)commutators of the field operators, the no-signalling theorem, etc.). But the Standard Model is most assuredly *not* "local" in the sense used in Bell's theorem.

Sabine addressed this a while back.

Jarek also wrote:

>It shows a much more general issue: there are inequalities derived from natural assumptions, like realism, locality, Kolmogov probability theory.

>The problem is that physics allows to violate them - therefore, physics does not satisfy at least one of these assumptions.

Hmmm.... the problem presented by Bell's inequality is that *if* signals could be transmitted faster than light, it would be easy to get violations of Bell's inequality.

There are detailed, consistent models of quantum mechanics in which this happens -- e.g., Bohmian mechanics.

The problem is that all such models that have been developed to date have features that most of us physicists find unsatisfactory -- in particular, they have a privileged frame of reference, which violates special relativity, but it is impossible to detect this privileged frame of reference. That is weird.

Note, however, that to deal with the weirdness in Bell's inequality, you need to be able to talk about "faster than light," which of course means you must have a physical time dimension as in special relativity.

The paper you linked to, as well as your own work, seems not to have that. In which case, it is all just irrelevant to the real problem with Bell's inequality.

I did find your reply to my comments -- sorry for missing that. In your reply, you referred to "Wick-rotated quantum computers realized in Ising model."

We all know how to do Wick rotations in order to make various integrals converge. But the whole point of Wick rotations is to replace physical time with imaginary Euclidean time. And then you are no longer dealing with the real issue involved in Bell's inequality.

It's only in real physical time that we have a problem.

I'm afraid that the underlying communication problem here is that some of you who have tried to address the issues involving Bell's inequality just do not grasp why Bell's inequality has been a problem for physicists for over fifty years.

The problem is that we have a real physical theory – ordinary QM – that violates this specific inequality in ordinary spacetime, and we have had trouble coming up with a satisfactory “complete” theory that can reproduce these correct results that does not have serious problems of its own.

Doing a Wick rotation, going to four spatial dimensions, etc. just does not deal with the issue. You guys are just misunderstanding what the problem is.

You know that it is impossible to trisect an angle with compass and (unmarked) straight-edge? For a very long time, mathematicians have been deluged with “solutions” to this problem from amateurs that show that you *can* trisect any angle to arbitrary precision.

But of course everyone, going back to the ancient Greeks, was aware of that. It misses the point. The issue is whether any angle can, in principle, be trisected exactly with compass and unmarked straight-edge. And that cannot be done.

Similarly, your solutions to the Bell-inequality problem do not deal with the actual problem as I have outlined it above, which involves apparent faster-than-light effects, and therefore requires a real time dimension.

Dave

Dave,

Did you just say that in Bohmian mechanics signals can be transmitted faster than light? How can this possibly be if it makes the same predictions as the Copenhagen interpretation?

Bohm QM for the Klein-Gordon equation has some oddities. The relativistic form of the Hamilton-Jacobi equation has a term from the quantum potential that may define the interval of the particle outside the light cone. If this quantum potential is nonlocal there are no constraints on this, and in effect the particle or "beable" can move faster than light.

This is one reason Bohm QM is considered to be non-workable with relativity. There are people who are working doggedly on working around this, but it seems pointless when the standard approach to QM is so much more advanced in solving these problems.

Lawrence,

I am aware of this, but that wasn't the question I asked.

Sabine,

I should have been more careful in my wording to say that faster-than-light "effects" occur: I fixed this in one place, but not in the other. Sorry.

You're right: Bohmian mechanics does not violate the QFT no-signalling theorem. There are super-luminal effects on the "hidden variables"; however, it is impossible to detect these experimentally. The same is true of Nelson's "stochastic quantum mechanics," as he himself noted, and of some similar models I have developed myself.

So, anyway, no contradiction: all these theories do agree with QM in their experimental predictions and none violate the no-signalling theorem in terms of experimental observations.

But, if only one could observe all the hidden variables, then you would see super-luminal effects. Very, very weird.

All the best,

Dave

Lawrence,

Of course, one should never just use the Klein-Gordon equation: one should always "second-quantize" AKA use quantum field theory.

As to faster-than-light propagation of particles, it is too often forgotten that this really does occur in conventional quantum field theory.

As Tom Banks explains in detail in his *Modern Quantum Field Theory: A Concise Introduction*, Section 1.2:

>"Our formula for the emission/absorption amplitude is thus covariant, but it poses the following paradox: it is non-zero when the separation between the emission and absorption points is space-like. The causal order of two space-like separated points is not Lorentz-invariant (Problem 2.1), so this is a real problem.

>"The only known solution to this problem is to impose a new physical postulate: every emission source could equally well be an absorption source (and vice versa)."

Banks' is the first book I have seen on QFT that explicitly mentions this in detail, though it is implicit in older books. (I had a discussion about this with Steve Weinberg in the late 1970s.) It is obvious if you think about the VEVs for space-like separations.

The no-signalling theorem actually relies on this: as Banks explains, it is only because the particle that is going faster than light but forward in time in one frame of reference can be viewed in some other frames as the anti-particle going forward in time that the whole thing works.

I urge you to read Banks' explanation.

I have long had this nagging feeling that somehow there is a clue here as to how to deal with the violation of the Bell inequality and entanglement.

But I have never been able to work out the details. And, unlike some of our friends here, I know enough physics to know that my "nagging feeling" does not count without an actual detailed mathematical model!

Dave

Dave,

These inequalities are much more general, for example Pr(A=B)+Pr(A=C)+Pr(B=C) >= 1 – intuitively: "tossing 3 coins, at least 2 must be equal".

Assuming there exists any probability distribution Pr(ABC) on the 8 possibilities for 3 coins/spins, we can easily derive this inequality – but the QM formalism allows one to violate it.
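The derivation being sketched here is just the pigeonhole principle, which a few lines of Python (my illustration, not part of the original discussion) can confirm by brute force:

```python
from itertools import product

# Among any three binary values, at least two must agree (pigeonhole),
# so for every outcome 1[A=B] + 1[A=C] + 1[B=C] >= 1.  Averaging over
# ANY joint distribution Pr(ABC) then gives
# Pr(A=B) + Pr(A=C) + Pr(B=C) >= 1.
min_agreements = min((a == b) + (a == c) + (b == c)
                     for a, b, c in product([+1, -1], repeat=3))
print(min_agreements)  # 1
```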

So first of all, it is crucial that there is no hidden probability distribution – in QM, states are defined by amplitudes instead of probabilities: amplitudes are added over unmeasured variables, then multiplied in the Born rule.

While the inequality is more general, in an Ising realization we can indeed imagine that A, B, C are 3 spin-1/2 particles; we need a 1D lattice of such triples of spins.

For the Bell-violation construction, assume there are interactions forbidding the '+++' and '---' spin configurations.

In such a width-3 lattice we need to introduce a defect corresponding to measurement of exactly 2 out of 3 spins, without interaction for the unmeasured third spin. This way, from the Boltzmann ensemble among sequences, we get the Born rule and violation of the above inequality down to 3/5:

Pr(A=a,B=b) ~ (psi_{ab+} + psi_{ab-})^2

where one sum of amplitudes comes from the left, the second from the right – we get the Born rule from (spatial) symmetry.
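A minimal numerical sketch of one reading of this construction (the uniform-amplitude choice and the function names are my assumptions, not from the comment): forbid '+++' and '---', give the six remaining configurations equal amplitude, and apply the stated Born-like rule when two of the three spins are measured.

```python
from itertools import product

# Hypothetical realization of the construction described above: amplitudes
# psi_{abc} uniform over the six spin configurations, with '+++' and '---'
# forbidden; measuring two of the three spins sums the amplitude over the
# unmeasured spin and then squares it (the Born-like rule from the comment).
psi = {cfg: 0.0 if len(set(cfg)) == 1 else 1.0
       for cfg in product('+-', repeat=3)}

def pr_equal(i, j):
    """Probability that measured spins i and j agree (third spin unmeasured)."""
    unnorm = {}
    for pair in product('+-', repeat=2):
        amp = sum(psi[cfg] for cfg in psi
                  if cfg[i] == pair[0] and cfg[j] == pair[1])
        unnorm[pair] = amp ** 2
    total = sum(unnorm.values())
    return sum(p for pair, p in unnorm.items() if pair[0] == pair[1]) / total

s = pr_equal(0, 1) + pr_equal(0, 2) + pr_equal(1, 2)
print(s)  # ~0.6, i.e. the claimed 3/5, below the classical bound of 1
```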

Wick-rotated quantum mechanics replaces the Feynman path ensemble with a Boltzmann one – and the latter is generally believed to be realizable with the Ising model.

The Ising model uses a spatial direction instead of the temporal one of QM, but is mathematically analogous – exploiting this analogy we can recreate analogs of e.g. the Born rule, which allows for Bell-violation constructions.

At least in theory, we could also build Wick-rotated quantum computers: a width-w sequence of spins mathematically has a Feynman/Boltzmann ensemble of 2^w possibilities, and we could realize gates for such sequences – e.g. a ferromagnet corresponds to a NOT gate.

Regarding the Standard Model, indeed it is local, but in a different way – understanding/specifying this subtle difference could be very helpful for understanding the Bell theorem issue.

Does the Standard Model allow for faster-than-light transmission? If not, then we need to search for another difference between their notions of locality.

There is the CPT theorem at the heart of QFT; to get a symmetric solution we can e.g. apply the CPT transform to all Feynman diagrams – we cannot determine the direction of time from such a diagram (ensemble).

So QFT has a symmetric locality – is the same true of the locality used in the Bell theorem?

Jarek

ps. Regarding "people working for fifty years" – one could have said the same about Huffman and arithmetic coding ... in recent years both have been replaced by my ANS coding. It is extremely difficult to get out of "well known" hermetic ways of thinking.

As far as I know, Bohmian QM matches the predictions of standard QM in the nonrelativistic case. I am not sure about the relativistic case. Most relativistic QFT is done in a second-quantization setting, and Bohmian mechanics is not, as I see it, very amenable to that.

Relativistic QFT has a bit of a fix put on it. In particular, the Wightman conditions impose zero commutators on amplitudes separated by spacelike intervals. This clips the wing tips off from quantum nonlocality some. The intention is to keep nonlocal quantum physics from interfering in the computation of amplitudes. The statement by Banks seems to reflect how relativistic QM will have spacelike correlations that occur in amplitudes, and these muddle things up. Various conditions are imposed to "clean this up." Since these correlations communicate no information, and QFT is devised to look at the statistics of events, this is removed in the experimental wash anyway.

A part of the problem, though, is that relativistic QFT is a bit of a kluge where the seams are visible. It is an indication of how our understanding of quantum mechanics and spacetime physics has been somewhat incomplete, even with special relativity.

John Bell writes of Bohmian mechanics and Quantum Field Theory in his paper "Beables For Quantum Field Theory." His conclusion: "I am unable to prove, or even formulate clearly, the proposition that a sharp formulation of quantum field theory, such as that set out here, must disrespect Lorentz invariance. But, it seems that this is probably so." (1984, CERN-TH. 4035).

Lawrence wrote:

>Most relativistic QFT is done in a second quantization setting. Bohmian mechanics is not as I see very amenable to that.

For some reason, that misconception is widespread, even though Bohm dealt with it in the second of his famous papers way back in 1952: see the abstract.

The hidden variables are now the values of the fields themselves, rather than the positions of particles. This works fine, at least modulo the familiar problems with QFT (regularization, renormalization, and all that). And, of course, you still have the weird but undetectable super-luminal effects.

There happens to be an embarras de richesse as to how to handle fermion fields: what is probably the most common approach explicitly (but of course in an undetectable way!) breaks rotational invariance. Bell suggested another, stochastic approach in his book, and I have a couple of alternate approaches I prefer.

By the way, I am not an advocate for the Bohmian approach: I view it, in Dr. Johnson's words, "like a dog's walking on his hind legs. It is not done well; but you are surprised to find it done at all."

I.e., I do not think Bohmian mechanics is the answer to quantum weirdness, but I think it is interesting that such an approach can be formulated that agrees with experiment.

Dave

Jarek,

Sabine wrote to you above:

>”I am terribly sorry for having to say this but your English grammar is so faulty it is incomprehensible. I simply cannot parse it.”

I fear she was too diplomatic: I can parse what you write, but you are so certain of the value and novelty of your work that you will not try to understand why physicists will tend to find your work lacking in both novelty and value.

You wrote to me:

>Regarding "people working for fifty years" - you can similarly say about Huffman and arithmetic coding ... in recent years both replaced by my ANS coding.

You misunderstood my point: I was not saying that we physicists are thinking the way we did fifty years ago. I was implying that you have not yet caught up with where we were fifty years ago!

I guess I too should have said that more bluntly and not been so diplomatic.

An example is your saying:

>Wick-rotated quantum mechanics replaces Feynman with Boltzmann path ensemble – and the latter is generally believed to be realizable with Ising model.

Yes, physicists (including me) have all known about this since long before you were born. You mention such things as if they have informational value and give value to your work.

They don't.

You also wrote:

>Ising uses spatial direction instead of temporal of QM, but mathematically is analogous – exploiting this analogy we can recreate analogs e.g. of Born rule, which allows for Bell violation constructions.

Your attempts to do that have failed.

You claimed in the paper you referred me to that you reproduced Born's rule. You did not.

What you did do was, by taking a different physical system, get the same probability distribution that Born's rule gives. That is not Born's rule.

Born's rule says: use the (real-time) Schrödinger equation to get the wave function. Then,

if you measure the position, the probability density function for a particular value x of the position will be ψ*(x)ψ(x). And similarly for the momentum, if you express ψ in the momentum basis, etc.

That is weird: we allow the Schrödinger equation to evolve for an indefinite time, and then, all of a sudden, we do something else, defined by a couple of other postulates of the theory: we make a measurement. And the probability of getting different values for that measurement is given by Born's rule.

What your work shows is other physical problems that happen to end up with the same pdf as occurs in various quantum problems.

That is worthless.

We already know multiple ways of solving Schrödinger's equation: we do not need your help.

What we do want to understand is why we have this strange dualism in quantum theory: a continuous Schrödinger equation occasionally interrupted by this apparently completely different operation of “measurement.”

And your work sheds no light on that at all. Your work is of no value for physicists.

A similar point applies to the other issues you raise.

Terry Bollinger says that you are a brilliant contributor to computer science, and I will take his word for that. Perhaps you are much smarter than I.

But, genius though you may be, you are unwilling to stand back and consider that you really do not get the central points that are problematic about quantum mechanics.

You seem like a nice enough fellow, and, as I have said earlier, I have not noticed actual calculational errors in your work that you have pointed me to.

But you are not going to communicate clearly with physicists unless and until you try to actually understand the issues they are dealing with. Until you do this, your work is going to have zero value for physics.

Again, I do not wish to be cruel nor am I disparaging your intelligence. But, sometimes even geniuses (especially geniuses?) need a friendly word of advice.

All the best,

Dave

That Bohm's QM has troubles with relativity might really be telling us something. As a nonrelativistic interpretation of QM it is equivalent to other ways of doing QM. It is remarkable as well that the Hamiltonian for FLRW cosmology, except for the topological part k/a^2, can be derived with just plain Newtonian mechanics. In other words, both of these are acceptable physics in some sort of highly boosted frame as seen in a lab frame, with the boost direction contracted to near zero. As the time delay increases, the motion along the nonboosted directions is very nonrelativistic and Galilean relativity is recovered. This is in a sense holography.

Dave wrote: "I do not think Bohmian mechanics is the answer to quantum weirdness, but I think it is interesting that such an approach can be formulated that agrees with experiment." In reading Bohm's paper #2 (1952, Phys. Rev. Vol. 85, No. 2), he presents the Compton and photoelectric effects utilizing--as he terms it--his "new interpretation." The sentence after equation A4 is interesting: "... q is assumed to refer to the actual value" and "there is present an objectively real superfield..." and "this hypothesis is based on the simple assumption that the world as a whole is objectively real." (page 189). Yes, Bohm's is an interesting paper, but, as far as I am able to ascertain, Bohm introduces more "weirdness" into quantum mechanics than was ever present before! What little I know of his "new interpretation" stems from reading John Bell, Max Jammer (1974) and a paper entitled "On the Formulation of Quantum Mechanics Associated with Classical Pictures" (Takabayasi, Progress of Theoretical Physics, Vol. 8, No. 2, August 1952).

Dave,

I am only arguing that the time/CPT symmetry of physics – which is at the heart of the formalisms we use (like the Lagrangian formalism) – implies a different (symmetric) locality than the one in the Bell theorem.

Switching to an asymmetric picture, like "Feynman path integrals -> Schrödinger equation," then requires superdeterminism.

I don’t remember claiming novelty; symmetric views on QM are nearly a century old.

I am only debating this way of resolving especially the Bell theorem issue, using arguments which work for me and for other physicists I am in contact with who hold a similar view (e.g. the TSVF community).

In contrast, you have used an authority "argument" that this way just didn’t work for you – without providing any real arguments ... and similarly for most of what you write. As you are so smart, maybe you could use substantive arguments instead?

QM foundations discussions in the last fifty years were focused on philosophy, defending free will, and SF concepts like MWI.

Accepting that CPT-symmetric physics might solve its equations in a CPT-symmetric way is just too nonintuitive for most, as our intuition is built on a time-asymmetric way of thinking.

That is why I am bringing the Ising model into the discussion – it is believed to be physically realizable and uses analogous mathematics, but this time in a spatial instead of a temporal direction.

The nonintuitive time symmetry becomes a natural spatial symmetry here.

E.g. the Born rule, starting with the Pr(u) = (psi_u)^2 probability of a value inside the Ising model, comes directly from symmetry – one psi comes from the left, the second from the right – analogously to TSVF.

We can also construct Ising analogues of "measurement": leading to addition of amplitudes over unmeasured variables, then multiplication to get probabilities – a Born rule allowing for Bell-violation constructions.

Sure, the Ising model is different from QM, but its analogous mathematics (Feynman <-> Boltzmann ensemble) leads to many analogous properties.

Why do you think we cannot transfer conclusions through this analogy – like the conclusion that physics has the symmetric locality instead of the asymmetric one in the Bell theorem?

Why exactly don’t you like the idea that CPT-symmetric physics could solve its equations in a CPT-symmetric way?

So what do you think is the solution to the Bell theorem issue?

Why does the Bell theorem not discredit a Lagrangian formalism like the Standard Model – what is the difference between their properties like locality?

All the best,

Jarek

>"The only known solution to this problem is to impose a new physical postulate: every emission source could equally well be an absorption source (and vice versa)."

This is why it is better to think of 'currents' rather than emission and absorption. For a two-particle interaction, it doesn't matter which you choose to be the source or the sink: you still get the same propagator!

Interaction is continuous.

Best greetings.

It has been a long time since I read Bohm’s 1952 papers. The idea that there is some connection between Bohm’s QM and second quantization can be seen in the following. For ψ = √{ρ}e^{iS/ħ} we can, for √{ρ} = e^r, define the vector

P = ∇S + i∇r,

from ∇ψ. This has some properties such as P^†P = (∇S - i∇r)(∇S + i∇r) and

P^†P = (∇S)^2 + (∇r)^2.

Here ∇S = p and so it is suggestive to consider the operator A = P + i√{ω}x so that A^†A = P^2 + x^2 and so

A^†A = (∇S)^2 + (∇r)^2 + ω^2x^2.

This is close to the Hamiltonian for a harmonic oscillator with a quantum potential that has ∇^2r = 0. This then illustrates that the harmonic oscillator Hamiltonian can be written, say with A → (1/√2)A and other modifications, in general as

A^†A = H_{ho} + (ħ^2/2m) (∇r)^2,

where H_{ho} is a classical Hamiltonian and the second term is the quantum potential.

However, this is still non-relativistic. A full second quantization, as found in classic texts such as Bjorken & Drell, is not available to my understanding.

Lawrence Crowell wrote to me:

>Where H_{ho} is a classical Hamiltonian and the second term is the quantum potential.

You do not really need the "quantum potential" idea to do Bohmian mechanics. You just say that the rate of change of a generalized "hidden variable" with respect to time is

(1) dq/dt = i/(2 ϰ) ( ψ ∂ψ*/∂q - ∂ψ/∂q ψ*)/((ψ*)ψ)

where ϰ/2 is the coefficient of the (dq/dt)^2 term in the Lagrangian.
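As a quick sanity check of equation (1) (my own illustration, with arbitrary numerical values): for a plane wave ψ(q) = e^{ikq}, the guidance velocity should reduce to dq/dt = k/ϰ, i.e. p/m for a single particle. A finite-difference evaluation confirms this:

```python
import cmath

# Check of equation (1) for psi(q) = exp(i k q); the guidance
# velocity should come out as k / kappa (i.e. p / m).
k, kappa, h = 1.3, 2.0, 1e-6  # arbitrary illustrative values

def psi(q):
    return cmath.exp(1j * k * q)

def dpsi(q):
    # central finite difference for d(psi)/dq
    return (psi(q + h) - psi(q - h)) / (2 * h)

def velocity(q):
    # equation (1): dq/dt = i/(2 kappa) (psi dpsi*/dq - dpsi/dq psi*) / (psi* psi)
    p, dp = psi(q), dpsi(q)
    return (1j / (2 * kappa)) * (p * dp.conjugate() - dp * p.conjugate()) / (p.conjugate() * p)

v = velocity(0.7)
print(v.real, k / kappa)  # both ~0.65; the imaginary part is ~0
```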

The "quantum potential" was really just an unnecessary complication Bohm employed to try to show how his theory compared to classical mechanics.

I think you will find that this is typical of a lot of modern presentations of Bohmian mechanics, in that they do not mention the "quantum potential" at all. See Tumulka's equation 1, which is basically what I gave above. See also his Section 12 for a discussion of Bohmian field theory.

Lawrence also wrote:

>However, this is still non-relativistic. A full second quantization, say found in classic texts as Bjorken & Drell, is not available to my understanding.

Well, the idea is that you basically just do Bjorken and Drell volume 2 to get the wave function for the fields: this is not too hard for free fields, but of course you get into the usual problems with interactions. If you put the system on a lattice to regularize it, it's not so bad.

And then once you have the wave function ψ, you just apply my equation 1 above to see how the fields change (q is of course a generalized coordinate -- i.e., the value of the field at a specific lattice point).

You will get the same experimental results as for conventional QFT: everything is nicely relativistic except for the final stage where you apply my equation 1 above.

It is only that last step that violates relativistic invariance, and, as always in Bohmian mechanics, that occurs only at the level of the hidden variables, not at the level of actual observations.

So, Bohmian field theory (at least for bosons) is really no more and no less weird than Bohmian QM in general.

Again, I myself view Bohmian mechanics as simply an interesting "toy model" that shows that the interpretation of quantum mechanics is less determinate than people used to think.

But I do think it is worth understanding what the real problems are with Bohmian mechanics: it can handle relativistic theories, such as QFT, fine at the level of experimental observations, but relativistic invariance is broken in a maximal (but undetectable!) way at the level of the hidden variables.

Which is indeed weird.

Dave

Jarek wrote to me:

>In contrast, you have used authority "argument" that this way just didn’t work for you – without providing any real arguments ... and similarly for most of what you write - as you are so smart, maybe you could use meritorious arguments instead?

Jarek, in the comment to which you replied I went to some trouble to explain to you why your merely finding other physical systems that give pdf's that occur in some problems in QM is not a matter of deriving Born's rule. I also explained in some detail what Born's rule is and how the issue of deriving Born's rule connects to the measurement problem. That is hardly an argument from authority. It is simply information that I am providing to you (for free!) that you are choosing not to go to the trouble to understand.

There is a common ploy on the Internet that boils down to: “I got in an argument on the Internet with a physicist, and (s)he could not convince me that I was wrong, so I win!”

We have seen a lot of that in Sabine's comments section.

I'm pretty sure you can see how silly that is.

In any case, arguments from authority are not per se wrong: indeed, they are a completely necessary part of human society.

For example, I take it from Terry Bollinger that you have made a major contribution to data compression/source encoding theory. I am a bit of an expert on error detection and correction theory (co-inventor on some patents for BCH/Reed-Solomon systems) and so I naturally know a bit about source encoding – e.g., I know what Huffman encoding is.

However, I am not at all as expert on the subject as you are.

So suppose I posted something here about source encoding and you, as an authority on the subject, informed me that I was mistaken.

Don't you think I would be wise to presume that you were correct and that I had indeed made a mistake and that I should go to work trying to understand my error?

Similarly, since I was born with a melanin deficiency that makes me prone to skin cancer, I have a dermatologist whom I see regularly: I have checked him out in various ways so that I am confident that he is an expert.

When he tells me (this has happened more than once) that something on my skin is pre-cancerous and needs to be dealt with, I presume that he is probably correct, since he is an authority in the field and I am not.

I am an authority in physics and you are not.

I know that Jagiellonian University granted you a Ph.D. in physics as well as in computer science, but I have looked at your physics thesis, and, to put it diplomatically, your work in physics is neither new nor significant.

Rather, your thesis is filled with gobbledygook like this:

>"The disagreement of standard stochastic models (approximating thermodynamical principles) with thermodynamical predictions of quantum mechanics is one of many reasons of reluctance for imagining electron as a particle - undividable charge carrier, of radius so small that it is practically unmeasurable in particle collider experiments. Orthodox view on quantum mechanics lead physicists to ignoring this half of wave-particle duality."

All your thesis really shows is that there is a formal mathematical analogy between quantum mechanics and statistical mechanics, which is not new and which you did not discover: it has been known for a very, very long time.

You do not want to go to the trouble to understand my criticisms and see what is wrong with your work?

Fine. Don't. I really do not care.

It is not my job to prove to your satisfaction that your work in physics is neither original nor significant and that your work sheds no light on foundational issues in quantum mechanics.

You can do with this advice as you wish.

Dave

Dave wrote: "Bohmian field theory (at least for bosons) is really no more and no less weird than Bohmian QM in general." Takabayasi noted these issues with Bohmian mechanics (1952): it cannot handle the spin degree of freedom, the electron as a quantum field cannot be represented in this picture, and the formulation is applicable to bosons but not to fermions (Progress of Theoretical Physics, "On the Formulation of Quantum Mechanics Associated with Classical Pictures"). Now, read Bohm's PR 2: "simultaneous measurements of position and momentum having unlimited precision would be possible if the mathematical formulation needs to be modified at very short distances in certain ways that are consistent with our interpretation but not with the usual interpretation." Now, John Bell: "But, it might be that this apparent freedom is illusory. Perhaps experimental parameters and experimental results are both consequences, or partially so, of some common hidden mechanism. Then, the apparent non-locality could be simulated." (1975 GIFT Seminar).

I have not kept up with Bohmian developments of late. As with you, I regard this as more of a curiosity than something serious.

It is easy to derive a Bohm version of the Klein-Gordon equations. The q-potential means the particle is moving outside the light cone on a spacelike interval, and by special relativity this means its interval can be anything, including time-reversed or instantaneous motion. As pointed out, we have off-shell conditions in QFT such that a massive particle can be on a spacelike interval. In standard QFT, though, we have better “control” over this. With Bohmian QM, or a Bohmian version of QFT, not so much. The BCFW approach to QFT removes gauge redundancies and these off-shell conditions, and has some advantages as a result.

As I said above, Bohmian QM as a strictly nonrelativistic physics suggests something about holography. Say our spacetime is on a holographic screen in 5 dimensions – think of a causal wedge junction in AdS_5 or something – then nonrelativistic physics is a sort of emergent or observed physics. It is possible to derive the FLRW Hamiltonian from Newtonian mechanics, and Bohm has a clear preference for non-relativistic physics. I question whether both indicate some sort of holographic result on our observable universe.

Dave,

Again I am asking concrete physical questions, but instead of even trying to respond, you write a long, purely authority-based/personal response completely off topic.

Ok, my data is error-corrected with your methods, your data is encoded with my ANS coding ... but could we maybe focus on physics here?

You have agreed that locality in a Lagrangian formalism like the Standard Model is different from locality in the Bell theorem.

Does this mean that the Lagrangian formalism does not satisfy the assumptions of the Bell theorem?

If so, can we build "hidden variable" models ignoring the Bell theorem – if they are models governed by a Lagrangian formalism?

Where else can we ignore the Bell theorem? In the Ising model?

All the best,

Jarek

Locality in QFT is not determined by a Lagrangian formalism. QFT field locality is the imposition of a condition that field amplitudes on a spatial surface, or with spatial separation, have zero commutator. Such a surface is given by a coordinate condition or frame where clocks are synchronized. This is Wightman’s equal-time commutator condition. It is imposed to avoid mixing nonlocal aspects of QM into the computation of propagators. There are virtual particles that, on an off-shell condition, can jump along a spacelike interval, but these do not communicate information. The Wightman conditions for QFT locality sweep these away from any contribution to a field propagator.

This condition is not something inherent in quantum mechanics, nor is it determined by the nature of a Lagrangian. Lagrangian dynamics is used in QM or QFT because with QM we work in either coordinate or momentum variables. Hence, a Lagrangian formalism with half the phase-space variables, say configuration variables, suits QM well. The Wightman conditions in some sense reduce the nonlocality of QM, but this has no observable consequence for QFT calculations. With path integration a time-ordered sequence of field operators is computed in the integral. Thus, what damage is done is largely avoided in calculations.

Where this restriction on quantum nonlocality becomes a problem is quantum gravitation. The problem with quantum gravitation can be seen as a situation where the field propagated in spacetime is a quantum amplitude for spacetime. There is an uncertainty in what is timelike and what is spacelike, and this nonlocality removed by equal time commutation mingles into spacelike separations. This is rather clear with twistor theory that maintains null directions, but with an uncertainty in what is meant by a point as a projective geometry.

Another issue not often brought up is that with entanglement we have a funny meaning for configuration variables. We can well enough understand what is meant by a wave function at a spatial location or with a configuration variable. We can represent a quantum state in some coordinate condition, and this works well enough. Two such quantum states in an entanglement, though, are strange. Entanglement means the information of the two individual states no longer exists; only quantum numbers associated with the entanglement exist. Hence, if two quantum states at their individual configuration variables enter into an entangled state through some interaction, that entangled state has no clear meaning at the two configuration variables. It is also problematic to assign a single configuration variable. What configuration variable do we use? With the Bell state |ψ⟩ = A|+⟩|-⟩ + B|-⟩|+⟩ it makes sense on the left-hand side to contract with a bra for a coordinate r, but this makes little sense on the right-hand side. The nonlocality of the entangled state muddles issues with coordinates or configuration variables. So, what is meant by field locality?
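The point that entanglement destroys the individual states can be made concrete with a small illustration (mine, not from the comment): for the Bell state above with A = B, the reduced density matrix of either spin is the maximally mixed I/2, so no pure single-spin state survives.

```python
# Reduced density matrix of one spin of the Bell state
# |psi> = A|+>|-> + B|->|+> with A = B (unnormalized amplitudes below).
amp = {(0, 1): 1.0, (1, 0): 1.0}          # basis labels: 0 = |+>, 1 = |->
norm = sum(v * v for v in amp.values())   # normalization <psi|psi>

# rho_A[a][a'] = sum_b psi(a,b) * conj(psi(a',b)) / norm  (partial trace over b)
rho_A = [[sum(amp.get((a, b), 0.0) * amp.get((ap, b), 0.0) for b in (0, 1)) / norm
          for ap in (0, 1)] for a in (0, 1)]
print(rho_A)  # [[0.5, 0.0], [0.0, 0.5]] -- the maximally mixed state
```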

In QFT, physicists do not in general worry about entanglement. This is not something that occurs in most QFT texts, and the theory is concerned only with computing amplitudes for the outgoing scattered fields. There is, though, a place where this is a growing issue. Quarks in a hadron are increasingly being thought of as having entanglements with respect to some quantum numbers. Of course, this occurs in the IR domain, where quarks at low energy might be considered nonrelativistic.

QFT is a sort of Frankenstein’s monster where the bolts and seams holding spacetime and the quantum together are visible. While it has served well for decades it is problematic for future work.

Lawrence,

Bell inequalities are derived from the assumptions of realism and locality, but they are violated by physics – so we need to identify the incorrect assumption(s): the ones that differ from physics.

Any Lagrangian formalism assumes the existence of some objective situation, e.g. a field – which can be seen as realism: "hidden variables" whose effects we are observing.

Therefore, the hope is that the assumption of Bell's that differs from physics is locality – that real physics uses a different type of locality, like the one in QFT.

We also successfully use the classical Lagrangian formalism: in classical mechanics (e.g. for chaos in Tim Palmer's post above) and in classical field theories – especially electromagnetism (with the Malus law similar to the Born rule) and general relativity.

The question is which locality they are using: the one forbidden by the Bell theorem, or one analogous to QFT's? – especially if we want to unify QFT with GR.

> There is an uncertainty in what is timelike and what is spacelike

Indeed – while locality in the Bell theorem is time-asymmetric (it emphasizes the past->future direction), general relativity is not only time-symmetric, but additionally mixes spatial and temporal directions.

Also, for such a unification: what happens in spacetime with QM as temporal Feynman path integrals – in spatial directions are they still a Feynman ensemble, or maybe Wick-rotated: Boltzmann as in the Ising model?

The uncertainty relation in QM for the electric and magnetic field, ΔEΔB = ħ, stems from the commutator [E, B] = iħ. The electric and magnetic fields are conjugate variables and they are not commutative in general. Now, for a simplified situation with a plane wave A = A0e^{ikr-iωt} we have

E_i = -∂_tA_i = ωA_i

B_i = -(∇×A)_i = -(k×A)_i = -ε_{ijk}k_jA_k.

This means the commutator [E, B] = iħ is

[E_i, B_i] = -ε_{ijk}ωk_j[A_i, A_k] = iħ.

This means the commutator [A_i, A_k] is non-zero, and this is a noncommutative coordinate condition. This is related to Connes’ work. There is nothing here that specifies the coordinates of the vector fields. The Wightman conditions then impose Heaviside θ-functions so that this commutator is nonzero only for timelike or null separations.

This is often cited as different from nonlocality in QM, but clearly it is related. That this is imposed on spatial separations means there is no information theoretic content to this. So, for most QFT calculations this leaves things unscathed. However, if we were to consider entangled states in high energy physics, we might start to have some troubles. With quantum gravitation there are clear problems that emerge. Anyway, I call this as I see it.

I suppose I fail to see how a Lagrangian implies anything about hidden variables. Further, as I indicated, configuration variables for entangled systems are not entirely clear. What is the configuration variable for an entangled system?

erratum: I forgot to pull down the i = sqrt{-1}, but the argument still holds

But the Lagrangian formalism assumes the existence of an objective state, e.g. of the field – how does this field differ from the "hidden variables" in the Bell theorem?

For example, general relativity is usually seen through Einstein's equation, determining the intrinsic curvature of spacetime from the stress-energy tensor.

How does the intrinsic geometry of spacetime differ from a "hidden variable" in the Bell theorem?

Solving GR through Einstein's equation is a time-symmetric approach (like the least-action principle): spacetime can be imagined as a 4D jello satisfying such a local condition on tension.

It is hard even to imagine solving GR in an asymmetric way, e.g. through Euler-Lagrange evolution in which the geometry of spacetime "unrolls" toward the future.

Also, solving QM through Feynman path integrals, or QFT through Feynman diagrams, is time/CPT symmetric: such paths/diagrams do not determine a direction of time.

In contrast, locality in Bell theorem is not time symmetric: hidden variables are only in the past, before measurement.

Isn't this symmetry the difference between locality in Bell's theorem and locality in QFT?

And locality in a classical field theory like general relativity?

If they are the same, why does Bell's theorem not disqualify e.g. QFT or GR?

Jarek Duda wrote to me:

>You have agreed that locality in Lagrangian formalism like the Standard Model is different than locality in Bell theorem.

>Does it mean that Lagrangian formalism does not satisfy assumptions of Bell theorem?

>If so, can we build "hidden variable" models ignoring Bell theorem - if they are models governed by Lagrangian formalism?

Those questions do not really make sense, Jarek. It's a bit like asking whether Broadway musicals taste as bitter as the moon. Each individual word makes sense, but you are making a bunch of assumptions, due to your lack of knowledge, that make the questions nonsensical.

Or, if you prefer an American colloquial expression, you are "talking apples and oranges."

I generally agree with what Lawrence told you on this.

Jarek also wrote:

>Again I am asking concrete physical questions, but instead of even trying to respond, you write long purely authority/personal response completely out of topic.

>Ok, my data is error corrected with your methods, your data is encoded with my ANS coding ... but could we maybe focus on physics here?

No, we can't, simply because you won't: what you think are "concrete physical questions" are not. You have some gross misconceptions about physics, and when we try to tell you this, you will not listen and you just keep repeating your misconceptions.

Because of that, the subject of discussion necessarily must turn away from physics to why you are unwilling to consider that you have some serious misconceptions about physics.

Look: I do not think you are a bad fellow, nor do I think you are stupid. But, if I started making grandiose pronouncements about, say, organic chemistry or gourmet cooking, I would no doubt make a fool of myself, since I am woefully ignorant in those fields.

You know a small amount about quantum mechanics, and you are making a lot of errors and saying a lot of nonsensical things. But you are not willing to accept correction.

And that is a problem. (Admittedly, a very common human problem!)

Dave

Dave,

So how do we handle the Bell theorem issue: that physics can violate inequalities derived from assumptions of realism and locality?

Does the Lagrangian formalism satisfy these assumptions? You agreed that QFT has a different type of locality, so what about general relativity?

Jarek Duda asked me:

>So how to handle the Bell theorem issue: that physics can violate inequalities derived from assumptions of realism and locality?

Well, I think the key to dealing with this question is to get clear that “Bell locality” is much more general than the specialized sense of locality discussed in quantum field theory.

The idea is that if you have two separate spatial regions such that in the course of the experiment there is not enough time for effects or influences from one region to affect the other region, then Bell's inequality must hold. Bell's argument is very, very general: it is intended to include physical experiments of any sort, but also experiments that could be done with human beings, etc.

Are you familiar with Bell's argument as reprinted in his

Speakable and Unspeakable in Quantum Mechanics and in Dave Mermin's famous Physics Today article? By “familiar” with the argument, I mean that you yourself understand it well enough that you could go up to a whiteboard, give an impromptu lecture, and prove Bell's theorem yourself in its intended generality (no reference to physics).

People still debate exactly what assumptions are implicit in the argument: as you say, Bell-locality and also probably “counter-factual definiteness.” Bell's argument definitely does not assume determinism, and I do not think it even assumes realism.

Anyway, if you really understand and can derive Bell's inequality itself, it is hard to see how it can be false.

Again, critically, it is intended to apply to physics but not just to physics.

Now, of course, both simple theoretical calculations and experimental results show that quantum mechanics does violate Bell's inequality.

How can this happen?

Bell's proof seems to show that there must be effects or influences transmitted faster than light (if the time of the experiment is less than the time required for light to be transmitted between the two observers).

Therefore, quantum mechanics must involve super-luminal effects or influences. End of discussion.

Okay – I'm being facetious: that seems to be the conclusion, but for fifty years people have been arguing about it!

Now, as I have pointed out, the same result can be achieved in quantum field theory – the part of quantum mechanics needed to show the violation of Bell's inequality is essentially the same in quantum field theory. You do not, however, have to use quantum field theory: the violation of Bell's inequality can be calculated in non-relativistic quantum mechanics, and indeed, it does not require any sort of Hamiltonian or Lagrangian or path-integral formulation at all. The violation follows from some very simple facts about quantum spin.

You do, however, have to be dealing with real quantum mechanics, where you can talk, for example, about time passing and the time required for light to get from one region to another. You cannot go through Bell's argument in imaginary time (AKA Euclidean time).

Again, what does this have to do with quantum field theory? Very little at all – the part of quantum mechanics you need to see the violation of Bell's inequality is a tiny part of quantum field theory, but most of the complicated apparatus of quantum field theory is irrelevant.

I have pointed out that, as Tom Banks has explained, there are indeed super-luminal particles in quantum field theory. Does this have anything at all to do with the violation of Bell's inequality?

No, at least no one has made a credible connection. Remember: the violation of Bell's inequality still occurs in non-relativistic quantum mechanics. Also, the super-luminal effects in quantum field theory die off exponentially as distance increases: they are negligible at macroscopic scales.

So why does it happen?

I don't know.

Dave

Gary Allen wrote to me:

>Takabayasi noted these issues with Bohmian mechanics (1952): it cannot handle spin degree of freedom, electron as quantum field cannot be represented in this picture, the formulation is applicable to Bosons but not to Fermions (Progress of Theoretical Physics, On the Formulation of Quantum Mechanics Associated with Classical Pictures).

That's true of Bohm's original 1952 papers, but various ways of dealing with fermions have been worked out since then. Antisymmetrization of the wave function, if you want a particle picture, is straightforward. The spin state of the particle can be handled easily by letting there be two discrete states the particle can be in with appropriate amplitudes (spin up and down along some axis, of course): this of course breaks rotational invariance at the level of the hidden variables, but, hey, we've already broken Lorentz invariance!

Bell suggests a stochastic approach that supposedly becomes deterministic in the continuum limit.

Some people seem to think they can make it work with Berezin anticommuting Grassmann variables: I confess that I have never understood that approach.

Some of this can be found in Peter Holland's The Quantum Theory of Motion and in lots of papers on the arXiv.

I have a couple of ways of doing it by having amplitudes defined over an inner spinor space: I've never published it because, after all, I do not think Bohmian mechanics is true!

Anyway, yes, fermions were not covered in Bohm's original papers, but later developments give easy ways to include fermions.

Dave

Dave,

Indeed, to answer whether e.g. general relativity or QFT satisfy the assumptions of Bell's theorem, the key is clarifying its abstract locality assumption.

You have mentioned Mermin, so maybe let’s focus on his inequality, which is more general than Bell’s or CHSH – for 3 binary variables A, B, C, we can imagine that these are anything, e.g. coins:

Pr(A=B) + Pr(A=C) + Pr(B=C) >= 1

To derive it, just assume that there is some probability distribution Pr(ABC) on these 8 possibilities, then one easily gets this inequality.

This derivation does not assume some abstract unclear “locality”, just existence of some probability distribution … before the measurement.

The QM formalism allows one to violate it. We couldn’t do it if we measured all three variables – it is crucial that we measure only two.
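The two sides of this inequality can be sketched in a few lines. The classical part below enumerates all eight assignments to show the bound holds for any joint distribution; the quantum numbers (120° measurement axes, cos² agreement probability for a maximally correlated spin-1/2 pair) are the standard textbook illustration, not anything specific to this thread:

```python
from itertools import product
from math import cos, radians

# Classical side: in ANY assignment of values to three binary
# variables A, B, C, at least two of them must agree, so for any
# joint distribution Pr(A=B) + Pr(A=C) + Pr(B=C) >= 1.
for a, b, c in product([0, 1], repeat=3):
    agreements = (a == b) + (a == c) + (b == c)
    assert agreements >= 1  # holds in all 8 cases

# Quantum side (textbook numbers): for a maximally correlated
# spin-1/2 pair measured along axes 120 degrees apart, QM gives
# Pr(agree) = cos^2(60 deg) = 1/4 for each pair of settings.
p_agree = cos(radians(60)) ** 2
total = 3 * p_agree
print(total)  # about 0.75, below the classical bound of 1
```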

We need to understand the difference between just “not knowing” some variable and “not measuring” it – they are the same in classical physics, but somehow QM distinguishes them.

So does general relativity satisfy the assumption required to derive this inequality: just the existence of a probability distribution?

But this is a probability distribution before the measurement – where is “before” in Einstein’s equation?

It is analogous to the Ising model: the probability of a value inside the sequence is Pr(u) = (psi_u)^2 – there is not just a single probability distribution, but two by symmetry: amplitudes from the left and from the right direction.

Jarek Duda wrote to me:

>You have mentioned Mermin, so maybe let’s focus on his inequality...

Nope.

Look, Jarek. I am tired of playing silly games with you.

I did not simply "mention" Mermin: I strongly suggested that you should stop spouting nonsense on this subject unless and until you had mastered Bell's inequality: that is, the original inequality discovered by John Bell. And I told you that a good place from which you could learn this was a very specific article by Dave Mermin to which I provided a link, as well as Bell's own original paper. On looking again at Mermin's article, I see that he does not really give a clear algebraic derivation of the Bell inequality. So, focus on Bell's original paper.

Until you do that, it is really a waste of time to talk to you. You literally do not know what you are talking about.

(Yes, yes, I know you will call this the "ad hominem fallacy." It is not. The ad hominem fallacy is starting from the idea that someone is bad and therefore concluding that his ideas are bad. Pointing out that someone's ideas are bad and concluding that therefore he needs to do something to improve his ideas is not a fallacy at all. It is the "ad hominem truism.")

Jarek also asked:

>So does general relativity satisfy assumption required to derive this inequality: just existence of probability distribution?

All classical theories that obey the speed-of-light limitation – which includes GR, as long as you do not have funny topologies (e.g., no wormholes, no closed time-like curves, etc.) – are going to obey the original Bell inequality. And, no, I have no desire to prove that. Call it a strong conjecture that I and most physicists share because no one has ever found a counter-example. If you think you can come up with a counter-example, well, go for it. Good luck. I think you will fail.

Jarek also wrote:

>It is analogous to Ising model: probability of value inside is Pr(u) = (psi_u)^2 – there is not just a single probability distribution, but two from symmetry: amplitudes from left and right direction.

Gobbledygook.

As I said earlier, the whole point of the Bell inequality is situations where you expect a speed-of-light limitation on transmission of influences between different regions.

Which cannot occur in the Ising model -- no time coordinate, therefore no meaning to "speed," much less to "speed of light."

If you want to talk to physicists about this stuff, you need to first really learn what is already known along the lines I discussed earlier. Otherwise, you are going to find most physicists are a good deal less patient than I am.

They will tend simply to conclude, as Sabine did:

>”I am terribly sorry for having to say this but your English grammar is so faulty it is incomprehensible. I simply cannot parse it.”

Now, you and I have conversed long enough that I know your English is fine – the real confusion lies not in your English but in your ideas, whether expressed in English or any other language.

Jarek also asked:

>But this is probability distribution before the measurement - where is “before” in Einstein’s equation?

Jarek, your question makes no sense. Where is “bitterness” in the sky?

Again, I know you will object to my appealing to the “ad hominem truism.” But your question makes no sense because you refuse to learn the basic physics in this area. No one can really talk to you till you do.

Dave

Dave,

It is really hard to find physics in your responses, but I have managed to do it:

> the whole point of the Bell inequality is situations where you expect a speed-of-light limitation on transmission of influences between different regions.

But Lagrangian formalism has no faster-than-light communication.

So are you saying that it satisfies assumptions used to derive Bell inequalities?

But physics violates them - so does physics contradict Lagrangian formalism e.g. GR, QFT?

Hint: faster-than-light transmission is not the only way to fail Bell's locality assumption.

Another way is solving these models in a symmetric way, e.g. through the least-action principle or Feynman path/diagram ensembles ... or as we calculate probabilities inside an Ising sequence.

Jarek,

I have somewhat followed this discussion. I’m also a physicist and I’m trying my best to understand your arguments, but I cannot fully make sense of them; it really seems like we are talking different languages. Let me explain – I promise that my comments will be strictly about physics.

You wrote:

“But Lagrangian formalism has no faster-than-light communication.”

I am not sure what you mean by Lagrangian formalism. We can write a Lagrangian for two point masses (let’s say Sun and Earth) with classical gravitational interaction. This Lagrangian is non-local and therefore has faster than light communication. The two theories you mention later, GR and QFT, indeed do not have faster than light communication.

“So are you saying that it satisfies assumptions used to derive Bell inequalities?”

There are two sides to Bell inequalities. On one side you have quantum correlations; QFT can clearly describe these. On the other side you have some undefined hidden-variable theory with classical probabilities and all the limitations that Bell mentions, including locality.

Do you understand why your question does not make sense? I am guessing that you are asking if the Lagrangian formalism can be used for the hidden variable theory. Yes, it can. But this does not mean that every Lagrangian describes a hidden variable theory. Specifically, QFT is not a hidden variable theory – it is a quantum theory.

“But physics violates them - so does physics contradict Lagrangian formalism e.g. GR, QFT?”

By physics, I’m guessing that you mean experiments. There are no experiments that contradict QFT. Any experiment that exposes the quantum properties of nature, would clearly contradict classical theories.

Udi,

Thanks for the help!

I do not think Jarek is a bad guy, but it is hard to get through to him that he is making a whole lot of assumptions that physicists know to be untrue.

All the best,

Dave

Jarek,

I hope you found Udi's response clearer than mine.

To summarize, you wrote:

>But Lagrangian formalism has no faster-than-light communication.

No, that is a mistaken assumption. Lots of examples of the Lagrangian formalism do have faster-than-light communication, but lots don't.

The big debate between you and me has hinged on your believing things of this sort that all competent physicists know to be false. But when I tell you, you will not believe me. And you know way too little physics to figure it out for yourself.

Will you believe Udi?

No, I suppose not -- understanding that some other people know more than you do on some subject is not something you are good at.

Jarek also wrote:

>Hint: faster-than-light transmission is not the only way not to satisfy the Bell's locality assumption.

Well... that is debatable, I suppose. I'd say that lots and lots of theories with faster than light transmission do not violate the original Bell inequality for the trivial reason that those theories do not have things like photons or spin 1/2 particles needed to carry out the experiment!

Jarek also wrote:

>Another way is solving these models in symmetric way, like through the least action principle, Feynman path/diagram ensembles ... or as we calculate probability inside Ising sequence.

Well, least-action, Feynman diagrams, etc. give the same answer as more traditional methods: they will not make any difference.

As to the Ising model, there is no faster-than-light transmission for the trivial reason that "faster" makes no sense in a model with no time coordinate (not to mention no light!).

You just keep talking about the Ising model as if it is obvious that it has some relevance to faster-than-light effects. Of course, you never can explain how, since it obviously does not.

Some of the things you are saying are so obviously and trivially false that you really should be able to get it.

Dave

Dave wrote:

"Will you believe Udi?"

Jarek, please do NOT believe anything I say. Check the facts and do the calculations yourself. Belief is a very bad approach to doing science.

Physicist Dave wrote: "...after all, I do not think Bohmian mechanics is true!" I must concur with that remark, Dave. Thanks for reminding me to peruse later papers regarding the topic of Fermions and Bohmian mechanics. I read a paper by Struyve: "Pilot Theory and Wave Fields" (ArXiv:0707.3685v4).

His concluding line was intriguing: "it might be worthwhile to consider a supersymmetric extension of standard quantum field theory, maybe this symmetry allows for a more unified approach for introducing beables." I have not kept up with later (or, further) developments on hidden-variables, this, in part, from a remark in Piron's 1976 monograph: "it is not logically necessary to introduce hidden-variables into physics" (page 2, Foundations of Quantum Physics).

Udi wrote:

>Jarek, please do NOT believe anything I say. Check the facts and do the calculations yourself. Belief is a very bad approach to doing science.

Ah, but the problem is that Jarek has not granted what physicists tell him even the initial presumption that it might possibly be right, and so worth checking out for himself.

If JeanTate tells me that I have a misconception on some matter of astronomy, I do not assume that Jean is correct. But, I do have enough respect for Jean's knowledge to be worried that I might well be wrong and that I better check it out.

On the other hand, if our friend bud rap tells me I am wrong, well, it means nothing, given bud's proven lack of knowledge concerning science.

In any case, Jarek's misunderstanding on the issue of theories based on Lagrangians that you and I have both pointed out suggests that he lacks even an undergrad physics major's knowledge of physics. I and my fellow physics students knew this before we finished our junior year.

In short, I am afraid it may be unduly optimistic to assume Jarek can check it out for himself. He seems to know a lot of buzzwords with no knowledge of their meaning.

We'll see: since the issue of the Lagrangians is quite obvious, his further comments on this will tell the tale.

All this is more than passing strange since Jarek claims on his webpage that he has a Ph.D. in physics from Jagiellonian University!

This is getting very weird.

Gary Allen wrote to me:

> I have not kept up with later (or, further) developments on hidden-variables, this, in part, from a remark in Piron's 1976 monograph: "it is not logically necessary to introduce hidden-variables into physics" (page 2, Foundations of Quantum Physics).

Well, obviously we do not need hidden-variable theories if our goal, in Dave Mermin's immortal phrase, is just to "shut up and calculate"!

But, as Sabine, Steve Weinberg, and many others have pointed out, existing textbook QM has two radically different processes: the continuous Schrödinger equation and then the separate process of "measurement." There are several, certainly non-obvious, postulates relating to that measurement process: Born's rule, the eigenvalue/eigenvector rule, etc.

No one has ever managed to explain how the measurement process follows from Schrödinger's equation in a way that most physicists accept.

Worse than that, at least superficially the measurement process seems to give a privileged place to human (or at least conscious) beings, which, at the very least, would imply that physics is incomplete.

Hidden-variable theories are one possible way of dealing with those dilemmas, possibly the only way yet presented that clearly makes sense, even if they are ugly. E.g., it is highly debatable whether the many-world theories are actually well-formulated.

So, anyone seriously interested in quantum foundations must deal with hidden-variable theories, even if we rather hope that they turn out not to be true.

Dave

I always say to myself (when reading discussions on general relativity) that Einstein was never happy with GR; but that never stops academia romping ahead anyway.

If we accept gravity as a purely (~higher-level) communication domain, it reduces GR to special relativity, and the Universe begins to be computable, allowing sensible progress.

Seems like fractals require infinite subdivision of lengths. The previous blog post is about a minimum length.

We're talking about state space here, not about space-time. It isn't even clear in this context what you mean by "length".

In state space, distance is given by the Fubini-Study metric. This, however, only makes sense for a finite-dimensional Hilbert space. For black holes and quantum gravity with Bekenstein/Bousso bounds this is fine. An infinite-dimensional Hilbert space has no meaningful metric distance.

An interesting possible toy model is the projective Hilbert space CP^1 ~ S^2, with the Mandelbrot set mapped onto it. A point on this sphere will evolve according to the potential curves around the Mandelbrot set. The closer the "particle" comes to the black Mandelbrot set, the wilder the motion is, with more cycles within cycles.
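For concreteness, membership in the Mandelbrot set invoked in this toy model is decided (up to an iteration cutoff) by the standard escape-time test; a minimal sketch, with the cutoff parameters chosen arbitrarily:

```python
def in_mandelbrot(c: complex, max_iter: int = 200, bound: float = 2.0) -> bool:
    """Standard escape-time test: c is (tentatively) in the Mandelbrot
    set if the iteration z -> z**2 + c stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False  # escaped: definitely outside the set
    return True  # no escape within max_iter: treated as inside

print(in_mandelbrot(0j))      # True: the origin never escapes
print(in_mandelbrot(1 + 0j))  # False: 0 -> 1 -> 2 -> 5 -> ... diverges
```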

The recent article in Quanta:

https://www.quantamagazine.org/mathematicians-prove-batchelors-law-of-turbulence-20200204/

seems to be relevant to the last two blogs here. [Though the article and the two blogs are over my head to a large extent.]

“A little randomness allows you to smear out the difficulties,” (... in supposedly proving Batchelor's Law, with respect to hydrodynamic turbulence).

Planck's constant may perhaps be taken to represent randomness, and/or a limit to exactitude. It came into the calculations of a (potential) minimum length in the previous blog post. However, one would expect a little randomness to be variable rather than constant? It is fortunate that Planck's constant is a constant [is that certain everywhere and everywhen?], as if it were a variable then using h squared in a QM formula would make it non-linear?

Fractal effects are also mentioned in the article: "The pattern looks the same at every scale, just as it does in hydrodynamic turbulence, where each vortex contains other vortices."

Laws of nature are constructed out of categories like relative mass, position and energy which are mathematically related to other such categories. The numbers that apply to the categories only exist from a point of view. The categories, numbers and relationships only exist from the point of view of the micro-world: we human beings only found out about these relationships in the past few hundred years.

But the yellow blobs are a human-made numerical representation of behaviour in time. These blobs might represent a sort-of category from the point of view of human beings, but they can never become the sort of category found in the law-of-nature relationships. There is no mathematics that can turn a blob into a micro-world category.

I got the idea that the state space here is basically coords + momentum + time for everything? I think of the analogy that the coast of Britain gets longer and longer without limit the shorter your measuring stick, so it's fractal. But with a minimum measuring stick...
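The coastline analogy can be made concrete with the Koch curve, a standard stand-in for a fractal coastline (my example, not the post's): each refinement of the ruler multiplies the measured length by 4/3, so without a minimum stick the length diverges.

```python
def koch_length(level: int, base: float = 1.0) -> float:
    """Measured length of the Koch curve when the ruler is (1/3)**level
    of the base segment: each refinement multiplies the length by 4/3."""
    return base * (4.0 / 3.0) ** level

# The measured length grows without bound as the ruler shrinks...
for n in (0, 2, 4, 8):
    print(n, koch_length(n))
# ...whereas a minimum measuring stick would stop the refinement at
# some fixed level and keep the total length finite.
```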

ReplyDeleteThank you for the interesting post.

Having unified all of physics, can you tell us any new fact about the physical universe that we don't already know and that can be checked?

Are these fractal attractors physically real? I suspect not. I suspect the situation is like calculus being a useful tool, but continuity implies unphysical consequences such as Banach-Tarski.

So, "undecidable uncomputable properties of fractal attractors" is probably not a physical property, just like physical reality is not known to be continuous.

Is it possible that these ideas could ever be confirmed by observation? From the abstract of your paper with Dr. H. it seems that currently there is no known way of confirming them (just an idea about a partial confirmation exists apparently i.e. not a confirmation).

Steven,

Certainly you have heard that it is impossible to verify a hypothesis. "Partial confirmation" is as good as it will ever get, and as you correctly point out, we have explained in our paper how to do that.

It would be good to read an outline of this partial confirmation in principle covered in the next post.

It seems that to derive the conclusion of undecidability/uncomputability, the fractal attractor needs to be a precise physical artefact, but apparently it takes an infinite amount of time for the relevant chaotic system to reach the point that it is evolving on a fractal subset. So in reality, at best we get physical artefacts on the way to being a fractal attractor, and we can't conclude uncomputability? I mean, those diagrams aren't fractals, are they? They are just on the way to being fractals. You can't have a fractal physically, just like you can't write out sqrt(2) completely in decimal notation. Saying it "technically" takes infinite time just means that physically it never happens. Or is that rubbish?

Re "your paper with Dr. H" (on superdeterminism):

I hope Dr. H and Dr. P are enjoying the smell of burnt koala flesh and fur wafting over from Australia. It was all superdetermined, according to them.

If you start on the attractor, you remain on the attractor. It does not take an infinite amount of time.

Lorraine,

You think you are witty. You are wrong.

Sabine,

I DON'T think I'm witty. I'm Australian, living with smoke-hazy skies, the horror of a billion animal deaths, let alone the people who have died, and more than 10 million acres of land burnt. You are saying that this was all superdetermined.

I see.

But these fractal attractors are artefacts of the model. They are objects that exist in the platonic mathematical realm, but not in reality. Wouldn't you require actual continuity in reality to define a fractal, and wouldn't that be physically meaningless once you fall below the Planck length? And if you don't have *precisely* a fractal attractor physically, then the critical uncomputability condition cannot necessarily be claimed. More rubbish?

Lorraine,

Correct. If you have a point to make, then make it and stop wasting our time.

Steven,

Models are not reality, they describe reality. Whether this particular model describes reality remains to be seen; pending experimental test and all that.

Sure. But this state space is a representation of the physically underlying reality, yes? And the fractal attractors in this state space are precisely definable, yes, so that you can claim uncomputability? So something exhibiting as a fractal is actually there in reality, according to the model, which hasn't been the case until now. True?

DeleteSabine Hossenfelder 4:12 AM, February 05, 2020

"Models are not reality, they describe reality. Whether this particular model describes reality remains to be seen; pending experimental test and all that."

But in the case of Tim Palmer's model, the fractal actually exists in the real state space and is required for the critical conclusion of uncomputability. The uncomputability and therefore the fractal are required to be real, not just some property of the model which doesn't necessarily apply to reality (the real attractor may have additional structure, but it must at least be fractal to make the critical assumption of uncomputability). It is difficult to imagine that the actual existence in the real state space of a beast like a fractal won't have strong consequences in reality which may contradict quantum experimental results. (Something clearly unphysical like Banach-Tarski.)

And given that the model can apparently only be partially verified experimentally even in principle, wouldn't checking the consequences of the properties the theory assumes reality to have be a good idea? Has any such checking been done?

Lorraine, these fires are a huge tragedy for us humans, but insignificant on a cosmic scale and of no meaning for the fundamental laws of nature.

I am afraid that trying to make sense of nature is just "Kurieren am Symptom" (treating the symptom) of the incompatibility of existing theories of nature.

The best one can hope for would be a model that accounts for all past and current observations, and predicts the results of new observations over and over again, with a perfect track record. Of course, one can never know whether the next observation, in some new energy regime for instance, will agree with the model's prediction. This would be the end of physics. And the model in question might be a formal scheme so abstract that any connection to "reality" will always be a puzzle. The model would be a black box with a known mechanism linking input and output, but the mechanism might be a pure mathematical abstraction. I suppose then the question might be: is there another formalism that would work as well? Another question might be: is the model equivalent to a Turing device, in that the question of its future validity would be akin to the Halting Problem? In other words, we can't know whether the model will continue to be validated by future observations. So physicists could puzzle over that, at least.

Personally, I think most current unification approaches are doomed to fail. This isn't some half-assed guess; rather, I base it on circumstantial evidence surrounding the theories, coupled with an understanding of human behavior. Likely GR and QM are great approximations of the nature they predict but fall short of mimicking those phenomena. Most probably, for GR a drastic revamping of its mathematics will be necessary to more closely represent the physical way in which, in nature, space and time are inseparable and malleable in the presence of the different energy potentials of relative velocities and gravity.

Lorraine Ford, as I and others have stated before, determinism does not mean we can't learn from our mistakes and are not responsible for our actions (as demonstrated by the deterministic computer program AlphaGoZero and other examples). It just means there are firm rules in place, which gives us the ability to evaluate choices and make decisions. Without at least some determinism, how could we determine anything? So don't blame physics for not giving us choices. It is what makes our choices possible and meaningful.

A few brilliant people have been telling us for 200 years that releasing, within a couple of hundred years, all the fossilized carbon that was accumulated over hundreds of millions of years was going to have adverse consequences which we should take steps to prevent. Those people, such as Dr. Edward Teller in his 1959 speech to the American Petroleum Institute, did so based on their understanding of physics.

I appreciate your concern and am very sorry for what is happening in your country. (I have donated to some charities which support your fire-fighters and to the Australian Red Cross organization. Please list some places you know of which will accept international donations, i.e. USA dollars--some organizations I tried do not.)

You are very wrong there JimV.

Re “determinism does not mean we can't learn from our mistakes and are not responsible for our actions (as proved by the deterministic computer program AlphaGoZero and other examples)”:

According to physics there are no IF…THEN…. algorithmic steps in the laws of nature, there are only lawful relationships that are representable by equations. Try to do IF…THEN… with equations. You can’t. So according to deterministic physics, you CAN’T learn from your mistakes, and you CAN’T be responsible for your actions.

Where are the models showing how IF…THEN… is done, using nothing but equations?? Come on, where are they? IF…THEN… is about outcomes (the THEN… bit) that arise from logical analysis of situations (the IF… bit), but equations can’t represent logical analysis. IF…THEN… is about non-deterministic outcomes, because logical analysis of situations is non-deterministic: there are no laws covering logical analysis.

Computers are 100% deterministic: they don’t do IF…THEN… logical analysis. They deterministically process symbolic representations of IF…THEN… steps, which deterministically process symbolic representations of information.

Thanks for your concern about the fire devastation in Australia. I fear that things are going to get a lot worse before politicians and people really understand the facts about climate, and are forced to act and mend their ways.

Lorraine,

JimV is exactly right. Learning from mistakes and being responsible for our actions is perfectly possible in determinism. Even computers learn from mistakes (that's how artificial intelligence works) and we do make computers responsible for mistakes (by throwing them away or fixing them, basically).

Every differential equation is an if-then cause-effect relation. If the state at time t is X, then the state at time t+\Delta t is f(X). (This is not to say I suggest you actually integrate a differential equation this way. I am answering your question in spirit and not by way of recommending an algorithm.)
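This if-then reading of a differential equation can be sketched numerically (a minimal illustration of my own, using the example equation dx/dt = -x):

```python
# Minimal sketch: stepping the example equation dx/dt = -x with Euler's
# method. Each step is a deterministic "if the state is X at time t,
# then it is X + dt*f(X) at time t + dt" rule.
def f(x):
    return -x                # example right-hand side (exponential decay)

def euler_step(x, dt):
    return x + dt * f(x)     # the if-then update

x, dt = 1.0, 0.01
for _ in range(100):         # integrate up to t = 1
    x = euler_step(x, dt)
# x is now close to the exact solution exp(-1)
```

The update rule is a conditional only in the cause-effect sense: given the current state, the next state follows deterministically.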

"Computers are 100% deterministic: they don’t do IF…THEN… logical analysis."

What, then, do you think an IF-THEN command in a computer program is, if not an IF-THEN command??

Sabine,

I was a computer programmer and analyst for a long period of time. Computers are 100% deterministic, they do exactly what the computer programmer “tells” them to do, even in so-called “artificial intelligence”. All computer programs are, or should be, extensively tested to make sure they do what they are supposed to do, and if they don’t, it is because the programmer has made mistakes in the programs, or has not anticipated all possible inputs and programmed-in ways to handle these inputs. Computers don’t ever make mistakes: it’s the computer programmers and analysts that make mistakes.

The computer algorithms and the input words, numbers and equations are broken down into almost anonymous sets of “ones” and “zeroes”. “Ones” and “zeroes” are actually just a concept which is implemented by high and low voltages; these voltages are steered through transistors. Transistors, and formerly mechanical relay switches and vacuum tubes, can implement the concept of an IF…THEN… logic gate. An IF…THEN… “logic gate” is just a concept: materials with suitable properties to implement the concept need to be found e.g. vacuum tubes. So the transistor or vacuum tube is not doing IF…THEN… logical analysis of a situation, it’s just doing its thing with anonymous high and low voltages.
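The claim that an IF…THEN… "logic gate" is just deterministic signal routing can be made concrete with a small sketch (my illustration): a 2-to-1 multiplexer, the hardware form of "if c then a else b", built entirely from NAND gates acting on 0/1 "voltage" levels.

```python
# Sketch (illustrative only): "if c then a else b" built purely from NAND
# gates acting on 0/1 "voltage" levels. No logical analysis happens here,
# only deterministic routing of highs and lows through three gates.
def nand(x, y):
    return 1 - (x & y)

def mux(c, a, b):
    # standard 2-to-1 multiplexer: output equals a when c is 1, else b
    not_c = nand(c, c)
    return nand(nand(c, a), nand(not_c, b))
```

Whether one calls what the circuit does "logical analysis" or mere voltage steering is exactly the point under dispute in this thread; the circuit itself is the same either way.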

I agree that every equation and differential equation (representing a law of nature) represents a cause-effect (in the real world) relationship. But it is not a genuine conditional relationship. A genuine conditional relationship specifies different outcomes for different numeric values of the variables.

"According to physics there are no IF…THEN…. algorithmic steps in the laws of nature":

Then let the "according to physics" come and tell us whether IF/THEN statements are extra-universal and exclusive to our minds and algorithms, OR show us where they arise or emerge from. "According to physics" doesn't save you.

Sabine,

To explain more fully what I meant by different outcomes for different numeric values of the variables:

Laws of nature handle all light and sound information the same way. This is representable by equations.

Living things acquire light and sound information via their senses. This light and sound information is analysed, and outcomes will depend on the analysis of the information. This is representable by algorithms. So:

If a butterfly is approaching (colour is blue, no beak, no teeth, no sound) then a person might stop and look (speed is zero);

If a bird is approaching (colour is grey, has a beak, no teeth, tweeting sound) then a person might continue walking (speed is the same, direction is the same);

If a tiger is approaching (colour is yellow and black, no beak, has teeth, roaring sound) then a person might run in the opposite direction (triple the speed, opposite direction).
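The three scenarios above translate directly into an IF…THEN… algorithm; here is a hypothetical sketch (the attribute names and reactions are my own illustrative labels, not anything specified in the thread):

```python
# Hypothetical sketch of the three scenarios as an IF...THEN... algorithm.
# The attributes and reaction strings are illustrative labels only.
def react(colour, has_beak, has_teeth, sound):
    if colour == "blue" and not has_beak and not has_teeth and sound is None:
        return "stop and look"          # butterfly: speed drops to zero
    if has_beak and not has_teeth and sound == "tweet":
        return "continue walking"       # bird: same speed, same direction
    if has_teeth and sound == "roar":
        return "run the opposite way"   # tiger: triple speed, reverse course
    return "continue walking"           # default: nothing alarming detected
```

The outcome depends on a branch over the values of the inputs, which is precisely the structure the comment argues a single equation cannot express.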

Ivan from Union,

Re: are IF…THEN… statements universal?

The question is: did logical analysis of situations exist in a primitive form before living things arose (living things might be considered to be specialists in logical analysis of incoming information); or did logical analysis of situations not exist at all before living things arose?

Perhaps it could be argued that the double slit experiment demonstrates that a primitive logical analysis of situation has occurred.


Re: "are IF…THEN… statements universal?" Yes, Lorraine. More: they are fundamental to the universe. I covered the subject on YouTube some time ago.

I can't help if Einstein already proved that gravity can't be quantized and the proof is invisible to normal people. I have no control over that. (What I will not do is vandalize the Dr.'s blog with an endless off-topic.)

Interesting. Just end of last year I found a way that connects quantum correlations with uncomputability. Can't wait for the second part of this essay.

Richard Feynman: "it's going to be necessary that everything that happens in a finite volume of space and time would have to be exactly analyzable with a finite number of logical operations. The present theory of physics is not that way, apparently." Feynman added: "We...don't really have any 'real' freedom..." (quotes from "Simulating Physics with Computers", 1982). It is interesting that Feynman touched on nearly every issue that might be applicable to "unification," as he describes "...how we might modify physical law..." (for example, by discretizing space and time). He reminds us that "Physical knowledge is, of course, always incomplete." Physicist John F. Donoghue writes: "So what is the problem of quantum gravity, and what did we think it was? Historically, one can find very many quotes in the literature to the effect that 'General Relativity and quantum mechanics are incompatible.' Such phrasing is still found today... However, this is just wrong." (page 65, Lectures on General Relativity as a Quantum Field Theory, arXiv:1702.00319v1). In his thoughtful essay, Dr. Palmer writes: "General relativity is primarily a geometric theory" (page 9).

I recall Steven Weinberg: "too great an emphasis on geometry can only obscure the deep connections between gravitation and the rest of physics." (1972, Preface, Gravitation and Cosmology). When I read that line many years ago, I believed Weinberg was wrong. I now believe he is correct. The question remains: how much "emphasis on geometry" is "too great" an emphasis?

At the human level it doesn't matter if the future is deterministic since you don't know what the future is. You just take your best shot at what you believe is rational and moral and let the chips fall where they may.

I have been wanting to post something about what Tim Palmer is writing. I think one of the implications of his theory is that it cannot be proven with complete mathematical rigor that quantum mechanics is non local, even if it is, or perhaps because it is. Boy, I hope I don't get slapped on the wrist for that!

All this is obviously just another extension of a proven theory. Such things have not led to anything really new. I am well aware of other approaches, such as loop quantum gravity or string theory, which, despite all efforts, have yet to resolve the open questions.

The question of what a theory must look like so that QM and GR can be deduced from it has already been asked. Maybe the answer will look somewhat crazy from today's perspective, as QM did to classical physicists.

My point is to take a fundamentally different perspective on the role of gravity in QM.

A model like the “GenI Process” (see Wikipedia for a definition, or “The Source of the Universe” by Siegfried Genreith), on the other hand, indeed requires a rethink. According to such a model, our universe, as we perceive it, evolves according to a collapse of its wave function. This clearly contradicts the not explicitly justified assumption of leading physicists that it develops along a Schrödinger equation. But why is that so? Is there a clear justification either way? What, in essence, speaks against assuming a collapse? I have not even seen a discussion among physicists about this aspect. Even with well-known authors like Penrose, Greene and Hawking, who otherwise like to indulge in the wildest speculations, nowhere is there any hint that the collapse of the wave function might be the source of reality in our universe.

Can anyone help me here? Are there any works that consider this perspective?

At least in a nutshell, one can perform the calculations of a space-time metric for a spin-1/2 particle and actually show that the dynamics during the measurement satisfy Einstein's field equations. That should justify at least a discussion of this view.

I guess that by "Liouville equation" Palmer means the equation which describes the evolution of the distribution function in the phase space of a Hamiltonian system? The problem is that a Hamiltonian system cannot be chaotic, at least not according to the standard definition of chaotic systems.

Another problem is that Palmer first talks about computability in the sense of Turing, but then refers to the book by Blum, Shub and Smale, whose definition of computability and whole model of computation is completely different.

One is reminded of the joke of different proof techniques, one of which was:

proof by semantic shift: Some of the standard but inconvenient definitions are changed for the statement of the result.