Friday, September 27, 2019

The Trouble with Many Worlds

Today I want to talk about the many worlds interpretation of quantum mechanics and explain why I do not think it is a complete theory.

But first, a brief summary of what the many worlds interpretation says. In quantum mechanics, every system is described by a wave-function from which one calculates the probability of obtaining a specific measurement outcome. Physicists usually take the Greek letter Psi to refer to the wave-function.

From the wave-function you can calculate, for example, that a particle which enters a beam-splitter has a 50% chance of going left and a 50% chance of going right. But – and that’s the important point – once you have measured the particle, you know with 100% probability where it is. This means that you have to update your probability and with it the wave-function. This update is also called the wave-function collapse.
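
This bookkeeping can be shown in a minimal numerical sketch (a toy example of my own, not part of the post's argument): the state is a vector of two complex amplitudes, the predicted probabilities are the squared magnitudes, and the measurement update replaces the state with the basis state of the observed outcome.

```python
import numpy as np

# Toy two-outcome state: amplitudes for "left" and "right".
# A 50/50 beam-splitter gives equal magnitudes.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Born rule: the probability of each outcome is |amplitude|^2.
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5]

# Suppose we then observe the particle going left. The prediction is
# updated: the wave-function "collapses" to the corresponding basis state,
# and the probability of "left" is now 100%.
psi_collapsed = np.array([1, 0], dtype=complex)
print(np.abs(psi_collapsed) ** 2)  # [1. 0.]
```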

The wave-function collapse, I have to emphasize, is not optional. It is an observational requirement. We never observe a particle that is 50% here and 50% there; that's just not a thing. If we observe the particle at all, it is either here or it is there. Speaking of 50% probabilities really makes sense only as long as you are talking about a prediction.

Now, this wave-function collapse is a problem for the following reason. We have an equation that tells us what the wave-function does as long as you do not measure the particle. It's called the Schrödinger equation. The Schrödinger equation is a linear equation. What does this mean? It means that if you have two solutions to this equation and you add them with arbitrary prefactors, then this sum will also be a solution to the Schrödinger equation. Such a sum, by the way, is also called a "superposition". I know that superposition sounds mysterious, but that's really all it is: a sum with prefactors.
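
Linearity is easy to check numerically. In the sketch below (a toy two-level system with an arbitrary Hermitian Hamiltonian of my choosing, units with ħ = 1), evolving a superposition gives exactly the same result as superposing the individually evolved states:

```python
import numpy as np

# A toy Hermitian Hamiltonian for a two-level system (hbar = 1).
H = np.array([[1.0, 0.5], [0.5, -1.0]])

def evolve(psi0, t):
    """Schrödinger evolution psi(t) = exp(-iHt) psi(0), built from the
    eigendecomposition of the Hermitian matrix H."""
    E, V = np.linalg.eigh(H)  # eigenvalues E, eigenvectors in columns of V
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
    return U @ psi0

psi1 = np.array([1, 0], dtype=complex)
psi2 = np.array([0, 1], dtype=complex)
a, b = 0.3 - 0.2j, 0.7 + 0.1j  # arbitrary prefactors

# Linearity: evolving the sum with prefactors equals the same sum of the
# individually evolved states.
lhs = evolve(a * psi1 + b * psi2, t=2.0)
rhs = a * evolve(psi1, 2.0) + b * evolve(psi2, 2.0)
print(np.allclose(lhs, rhs))  # True
```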

The problem is now that the wave-function collapse is not linear, and therefore it cannot be described by the Schrödinger equation. Here is an easy way to understand this. Suppose you have a wave-function for a particle that goes right with 100% probability. Then you will measure it right with 100% probability. No mystery here. Likewise, if you have a particle that just goes left, you will measure it left with 100% probability. But here’s the thing. If you take a superposition of these two states, you will not get a superposition of probabilities. You will get 100% either on the one side, or on the other.
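
The failure of linearity can be made concrete in code. In this toy sketch (my construction, using the standard Born-rule update), the measurement map sends each definite state to itself, but it sends the superposition to one basis state or the other, never to the superposition of the two outputs that linearity would demand:

```python
import numpy as np

rng = np.random.default_rng(0)

def collapse(psi):
    """Measurement update: pick one outcome with Born-rule probability and
    return the corresponding basis state, i.e. probability 100% afterwards."""
    probs = np.abs(psi) ** 2
    k = rng.choice(len(psi), p=probs / probs.sum())
    out = np.zeros_like(psi)
    out[k] = 1
    return out

left = np.array([1, 0], dtype=complex)   # particle that just goes left
right = np.array([0, 1], dtype=complex)  # particle that just goes right

# Each definite state is measured with certainty, as expected...
assert np.allclose(collapse(left), left)
assert np.allclose(collapse(right), right)

# ...but the superposition does not map to the superposition of the results.
# Linearity would demand (left + right) / sqrt(2); instead the update always
# yields one basis state or the other.
superposition = (left + right) / np.sqrt(2)
print(collapse(superposition))  # one basis state or the other, never a mixture
```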

The measurement process therefore is not only an additional assumption that quantum mechanics needs to reproduce what we observe. It is actually incompatible with the Schrödinger equation.

Now, the most obvious way to deal with that is to say, well, the measurement process is something complicated that we do not yet understand, and the wave-function collapse is a placeholder that we use until we figure out something better.

But that’s not how most physicists deal with it. Most sign up for what is known as the Copenhagen interpretation, which basically says you’re not supposed to ask what happens during measurement. In this interpretation, quantum mechanics is merely a mathematical machinery that makes predictions, and that’s that. The problem with Copenhagen – and with all similar interpretations – is that they require you to give up the idea that what a macroscopic object, like a detector, does should be derivable from the theory of its microscopic constituents.

If you believe in the Copenhagen interpretation, you have to buy that what the detector does just cannot be derived from the behavior of its microscopic constituents. Because if you could do that, you would not need a second equation besides the Schrödinger equation. That you need this second equation, then, is incompatible with reductionism. It is possible that this is correct, but then you have to explain just where reductionism breaks down and why, which no one has done. And without that, the Copenhagen interpretation and its cousins do not solve the measurement problem; they simply refuse to acknowledge that the problem exists in the first place.

The many worlds interpretation, now, supposedly does away with the problem of the quantum measurement, and it does this by just saying there isn’t such a thing as wave-function collapse. Instead, many worlds people say, every time you make a measurement the universe splits into several parallel worlds, one for each possible measurement outcome. This universe splitting is also sometimes called branching.

Some people have a problem with the branching because it’s not clear just exactly when or where it should take place, but I do not think this is a serious problem; it’s just a matter of definition. No, the real problem is that after throwing out the measurement postulate, the many worlds interpretation needs another assumption, and that assumption brings the measurement problem back.

The reason is this. In the many worlds interpretation, if you set up a detector for a measurement, then the detector will also split into several universes. Therefore, if you just ask “what will the detector measure”, then the answer is “The detector will measure anything that’s possible with probability 1.”

This, of course, is not what we observe. We observe only one measurement outcome. The many worlds people explain this as follows. Of course you are not supposed to calculate the probability for each branch of the detector. Because when we say detector, we don’t mean all detector branches together. You should only evaluate the probability relative to the detector in one specific branch at a time.

That sounds reasonable. Indeed, it is reasonable. It is just as reasonable as the measurement postulate. In fact, it is logically entirely equivalent to the measurement postulate. The measurement postulate says: Update probability at measurement to 100%. The detector definition in many worlds says: The “Detector” is by definition only the thing in one branch. Now evaluate probabilities relative to this, which gives you 100% in each branch. Same thing.
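
The two pieces of bookkeeping can be put side by side. In this toy sketch (the amplitudes and branch labels are made up purely for illustration), the Copenhagen update and the many-worlds branch-relative evaluation land on the same number:

```python
# Joint state after the interaction: each branch pairs a particle outcome
# with the matching detector reading. The amplitudes are invented for
# illustration; their squared magnitudes sum to 1.
branches = {
    ("particle left", "detector reads left"): 0.6,
    ("particle right", "detector reads right"): 0.8,
}

# Before measurement, the Born rule gives the prediction for "left":
p_left = branches[("particle left", "detector reads left")] ** 2
print(p_left)  # 0.36

# Measurement postulate: after observing "left", update the probability to 1.
p_after_collapse = 1.0

# Many worlds: "the detector" refers to the detector in one branch only, so
# the probability is evaluated relative to that branch and renormalizes to 1.
p_left_branch = p_left / p_left
print(p_left_branch == p_after_collapse)  # True
```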

And because it’s the same thing, you already know that you cannot derive this detector definition from the Schrödinger equation. It’s not possible. What the many worlds people are now trying instead is to derive this postulate from rational choice theory. But of course that brings macroscopic terms back in, like actors who make decisions and so on. In other words, this reference to agents and their knowledge is equally in conflict with reductionism as the Copenhagen interpretation is.

And that’s why the many worlds interpretation does not solve the measurement problem and therefore it is equally troubled as all other interpretations of quantum mechanics. What’s the trouble with the other interpretations? We will talk about this some other time. So stay tuned.


  1. Sabine,

    I don't disagree with your criticisms, but it does seem to me that you have more or less reiterated the probability-measure problem and, perhaps, the preferred-basis problem.

    In concrete terms, how does MWI lead to the Born rule?

    I think you have mentioned on some other occasion the issue of how you get the "observer" out of MWI: an actual human being, after all, is not, it would seem, in a single quantum state but rather some sort of mixture. So, how does this work? I.e., how many universes in the many-worlds multiverse does an actual human being span, etc.?

    I've also mentioned before the fallacy of the "branching" metaphor. All the basis states of the Hilbert space are, in some sense, already there, and the evolution controlled by the Schrödinger equation is just sloshing probabilities back and forth among these different basis states (hence, MWI = "the sloshing probability interpretation").

    I myself first got interested in MWI back in the mid-1970s when DeWitt's book came out. MWI seemed to me worth taking seriously, and I think all of us physicists occasionally think in MWI terms, just as all of us, in practice, often act as if we believe in the Copenhagen interpretation.

    In the end, though, I think both approaches are more of heuristic value than anything else: neither interpretation really works if taken completely seriously.


    1. PhysicistDave,

      It's not the probability measure problem, as I am not worried about how to weigh different branches.

      It's not the preferred basis problem as that is basically solved by decoherence.

      I am asking why the forward evolution of what is a detector at t_0 is no longer a detector at t_1 > t_0. The answer to this is that, by assumption, the forward-evolved detector is not what many worlds fans want to call a detector. So you need an additional assumption, and this assumption is virtually equivalent to the measurement postulate in Copenhagen. I use "virtually" to mean "up to interpretation".

      I have asked several people whether this point has been discussed somewhere in the literature. It seems to me this pretty much must have been said before, but I haven't been able to dig up a reference. If you have one, please let me know, and I'll be happy to give appropriate credits where due.

    2. I wonder if you would consider a classical model of duplication when a measurement is made as equally problematic. The observer tosses a coin, and at that point he and the coin are duplicated, with one copy seeing heads and the other tails. Would you say he has a 1/2 probability of seeing either outcome prior to duplication?

    3. As PhysicistDave explains, the branching metaphor is unhelpful. The wave function evolves continuously in a multi-dimensional space. It assigns phased amplitudes (not probabilities) to every point. When you consider a simple quantum mechanical phenomenon like the double slit, the interference looks like waves. When you involve a big system, there are so many amplitudes, and they interfere in such a way, that tiny threads of the space have high amplitude and other huge areas have near zero. It's like how you approximate a sharp image or audio signal by adding many frequency components, but it's continuous; there's no branching anywhere.

      This is decoherence, and it's how you select only the consistent outcomes, e.g. only left or right is macroscopically observed. Vast parts of the space that are inconsistent, like the needle pointing half-left, are eliminated by destructive interference. That fully explains how you get crisp outcomes. The wave functions of big systems with lots of interactions are extremely sparse, but they are valued everywhere and continuous.

      What's left to explain is where the probabilities come from. Why the amplitude squared? I don't have command of the math, but there's a clue: The amplitude of the configuration before the measurement is unimaginably tiny to begin with. It's 10^-10^100 or so, because the wave function has been spreading out and distributing amplitudes since the beginning of the universe. So the amplitude of all the outcomes is not 100% in your example; it's very, very tiny. And those are the possible paths. The impossible paths have an amplitude that's something like 10^-N lower than that, where N is the number of particles in the big system.

      Yet the probability of the configuration prior to the measurement is 100% as we observe it, and the probability after is also 100% for whatever we observe. To explain this we can reframe the Born rule as a kind of relativity in the wave function. Just as there's no preferred Lorentz frame, there's also no preferred Schrödinger frame. Wherever a system is on the wave function, the laws of physics behave as if its amplitude is 1 prior to an interaction, and as the wave function evolves, the future states also behave as if they have amplitude 1 when you consider future interactions. The laws of physics are independent of the absolute amplitude and of where you are on the wave function.

      There may be a residual mystery: how do probabilities emerge as the wave function squared? I can't do the math, and the experience of uncertainty may have to do with emergent conscious brains, like the experience of color or anything else, without being mystical. All that physics has to do is explain why amplitudes evolve as they do.

      Did that improve our understanding? I think so. We went from saying that measurement is something mysterious to explaining how we get sharp macroscopic states by destructive interference. And we went from two rules that you have to choose between ad hoc to two principles that apply everywhere, all the time: The wave function evolves all the time, and the local physics acts as if the amplitude is 1 prior to an interaction, all the time. Looks like a complete theory to me.

  2. Dear Dr. Hossenfelder
    "the detector will also SPIT into several universes"? You should talk to those detectors, that's not what well behaved detectors should do.

    1. Haha :p Thanks for spotting, I fixed this.

    2. Well, the question remains, if the detector would lower the frequency of the wave function in that other universe, because it is all yucky and later on, when it is dry again, it would go to the old frequency. Or would it collapse the wave completely and therefore the yucky universe, too? That would combine the two theories. CERN could use the FCC for this experiment, because I am sure the LHC can't spit that far.
      I'll get my coat.

    3. But Sabine was right in a way: such detectors that play this sort of game to fool us are quite ill-educated.

  3. Hi Sabine - very thoughtful video. I understand your words but I don’t understand your objection. MWI says that when you perform an experiment, you and the experimental device (and all your MWI doppelgängers) are in a single basis vector of that device, where the basis is somehow defined by your experiment, which means you will measure the eigenvalue of that basis vector with certainty. Your doppelgängers in other basis vectors will measure other eigenvalues with certainty. I don’t see how this is equivalent to the Copenhagen collapse hypothesis, beyond the well-known observation that to each of you it only looks like the wave function collapsed. Can you elaborate or give a different phrasing?

    Of course it is somewhat sticky how the relative probabilities you get from repeated measurements come about, and as others have pointed out those basis vectors have always been there though that is also conceptually sticky.

    1. Steve,

      Write down the assumptions that you need to describe what you observe (including the fact that, after measurement, you know with 100% probability what has happened). I hope then you will see that you need an assumption, next to the Schrödinger equation, to replace the measurement postulate. You write "MWI says", but it is unclear to me what you mean by that. Best,


  4. Sabine wrote: "In the many worlds interpretation, if you set up a detector for a measurement, then the detector will also split into several universes. Therefore, if you just ask “what will the detector measure”, then the answer is “The detector will measure anything that’s possible with probability 1.”"

    One should perhaps not look at what happens in the future, but at what happened in the past: for the past there is a unique, well-defined branch, and one can check whether the outcomes that have already been realized satisfy the Born rule in a frequentist sense, and how closely.

    1. Pascal,

      The Schrödinger equation gives you a unique relation between the present, the past, and the future. If you think -- as many worlds fans like to argue -- that the Schrödinger evolution is all there is, then you should be able to make statements about the future.

    2. Pascal,

      The standard way to prepare a state is to measure. You can thus wonder what happens as you evolve a state backwards in time past incompatible Stern-Gerlach measurements. The result is that your unique final state now can be obtained from any one of an infinite set of initial states.

      In the end, you have no choice but to realise that you really only get to know what is happening between the initial and final state, nothing more, and often a lot less.

    3. If I have understood your argument correctly: If the detector in our world touches the superposition by the act of measurement, then at that moment the detectors and their corresponding possibilities split into many worlds; that is, copies of the same detector are made but each copy being linked to a unique possibility, and there will be as many copies of the same detector as there are remaining possibilities.

      One copy is associated with one unique possibility in each of the separate worlds. When measurement takes place in our world, simultaneous measurements take place in all the worlds. All the copies are detecting at the same time in their respective worlds, and in each world the corresponding unique possibility is of 100% probability.

      Let us say one such possibility is "flying". Can the human program realize this possibility 100%? No. Then what happens to the flying possibility? Is it hidden? Does it disappear? So we are back to the pilot wave theory or the Copenhagen interpretation, which ask similar questions.

    4. The unique state is observer dependent. If the observer or the program cannot realize the unique state, example a human flying, then what happens?

  5. I can follow this, except when "reductionism" appears. What is the definition of "reductionism" here, in this quantum theory context?

    1. Ontological reductionism. Large things are made of smaller things. The laws of the large things follow from the laws of the smaller things.

    2. Sabine,

      Doesn't ontological reductionism lead to infinite regression?

    3. If there is the semblance of the observer, then there is the semblance of reductionism. It is the observer who divides, fragments, fractures the whole. When the observer disappears, then there is one whole thing, but then you cannot describe it because it is the observer who describes.

  6. Many worlds should be interpreted as a charge-parity symmetric, limited set of anti-copy universes with a raspberry shape.

  7. Thank you for writing this. It has always seemed to me obvious that merely changing your interpretation from "only one branch is real" to "all branches are real but the other ones are unobservable from this one" couldn't actually solve the problem. What I would like to see is an answer to the question "What is a measurement?" I'd accept as an answer "With my interpretation, we don't need to define the notion of measurement" but every account of MWI that I've read talks about measurements, so it doesn't seem to fit the bill. (Speaking as a total non-expert here, so I may have said some things that are obviously wrong.)

    1. gowers,

      Yes, it's obvious if you look at it from a purely axiomatic perspective. If it were possible to derive the measurement process using only the Schrödinger equation in the many worlds interpretation, then it would be possible to derive the measurement process using only the Schrödinger equation, period. But we already know that this isn't possible, because using only the Schrödinger equation you will never get a non-linear process. Hence, you need at least a second assumption.

    2. I don't understand this "non-linear" objection. Yes, the Schrödinger equation is linear, and measurement is sloppily defined, but it looks very non-linear.

      In a measurement, first the particle under test is entangled with a particle of the apparatus, and these are in a superposition of left/right. Then all the other particles of the apparatus, and you, and the Earth are entangled with that, so that the whole Earth agrees it's either left or right, not something in between. This looks like a very non-linear process: We started with something that was recognizably a wave and amplified it like crazy to look like a square yes/no function, although it's built from a nearly infinite sum of frequencies.

      At the same time, the linearity of the Schrödinger equation is alive and well, because the Earth is in a sum of "Earth thinks left" and "Earth thinks right" states. Nothing irreversible happened. I don't think the unobservable nature of the other state is a problem, although a clearer formalism of how amplitudes scale to 1 from the point of view of any local system may be needed.

    3. Pavlos,

      Because what you say does not explain what we observe. Where is the observable that corresponds to our observation?

    4. Forgive my ignorance. I understand that decoherence and the whole many worlds argument are only about explaining why all parts of big macroscopic things, like instruments or people, agree on an outcome. Why there's a discrete interaction in the first place has to be explained another way. Isn't quantization a prediction of the Schrödinger equation?

  8. Nice point, Dr. H. Never heard that one before.
    I have 2 questions.
    Surely it's also a problem for Many-Worlds that these purported other branches haven't been observed and maybe can't be, even if the issue you have pointed out was resolved? Without observation it isn't physics?

    Also, how does the wave-like interference seen in the double slit experiment fit into what you say (I know it's a deliberately concise post)? Do we just consider the wave-like interference an expression of the fact that a measurement hasn't been made to determine which slit photons went through, and leave it at that for now because no-one has a good interpretation?

    1. Steven,

      No, this isn't a problem for Many-Worlds in the sense that it doesn't make the theory wrong. It's just that believing that the other worlds really exist is not scientific but equivalent to religious belief. I have explained this in an earlier video.

      I don't know what problem you see with wave-like interference. You have this in Copenhagen and many worlds and pilot wave likewise.

    2. Dear Sabine,
      "In fact, it is logically entirely equivalent to the measurement postulate". OK for this formal and interesting equivalence.

      But in no case is it reasonable. Not only because we definitively cannot verify that the other branches exist. More strongly, because the "ontology" of MW is the extreme opposite of an economical one. It is totally crazy. It looks much worse than the scholastic discussions on the sex of angels. We should just look at it as a funny speculation for a laugh, but we are compelled to speak about it as if it were a serious option only because too many academics take it as a serious one.

      I have read the argument of Sean Carroll. Against the foolish branching, he says that "all the branches exist before the measurements". That means we have to believe that all the superposed states or possibilities really exist (in a sort of parallel worlds) before the reduction, in standard QM, to only one state. So we might see MW and standard QM as also "ontologically equivalent". But this manner of pushing the problem upstream is equally crazy.

      We obviously have to search for an economical interpretation where the measurement problem is solved and where superposition's physical effects can be attributed to just one entity... follow my gaze. The De Broglie-Bohm theory also has problems, but we ought to try to repair them rather than get rid of this much more reasonable possibility. It is still a research program, not a complete theory.

    3. Jean-Paul,

      Prof. Carroll would point out that your perception of economy is way off. MWI only assumes Schrödinger evolution. Every other interpretation assumes something else.

      For example, you want "the reduction in standard QM to only one state". It is not actually necessary to do this; we do it because we literally don't care about the other branches, and so this makes our calculations easier. But if you insist that this needs to be done, then you need to assume something other than Schrödinger evolution to get it.

      It is actually rather difficult to use Occam's razor. One really needs a lot of training to see which of two alternatives is the ontologically simpler one.

      On the topic of de Broglie-Bohm pilot waves, I think that the best view of it is to see it as an attempt at a post-quantum theory. If it succeeds it will be really wonderful.

    4. Probably I have not been clear. Of course I know that MWI avoids the reduction by multiplying the realities (the worlds) where the detectors are. I was only saying that S. Carroll defends this idea by saying that "the multiplication of worlds" is not specific to MWI, but that it is already present in QM's standard vision, namely the real existence of all the possibilities defined by the wave function. This would make MWI "ontologically equivalent" to this standard vision. And that, in my opinion, also discredits such a standard vision (the reality of possibilities). By saying this, Carroll simply shifts the problem of the branching upstream of the measurement, to a realistic view of the wave function.
      But this supposed standard vision is (fortunately) not so common. Unlike Carroll, many physicists stick to the formalism and avoid saying that the wave function (the quantum state) univocally refers to a physical reality. They agree that QM still has no satisfactory interpretation.

      As for Occam's razor, the PWT also needs to eliminate "empty waves", among other problems. In my opinion, it is constrained because it must integrate that the pilot wave also collapses during quantum interactions (energy-momentum exchanges, of which measurements are an example), which redefine its shape, and in particular its center, from which interference is calculated in optics.

      On the equivalence MWI <==> Copenhagen there is more to say. Standard QM (without reification of the possibilities) is a probabilistic theory where the notion of probability makes sense (but not the notion of probability amplitude, which is only a formal tool). On the other hand, the fact that reality is multiplied in MWI seems to me to drain the notion of probability of all meaning, since each possibility is realized each time. We cannot define a frequency associated with the probability (such a result happens in 30% of cases, etc.).

    5. Steve, on this slit problem, the measurement on the screen will tell you nothing about which slit the particle went through, because that is not the purpose of this specific quantum math model of the slit experiment. If you want to know which slit a particle went through, that's another model and another experiment. I don't know if I follow the Copenhagen interpretation or not, but the way I see the thing is that each specific experiment has its own quantum linear model, its own specific observable. The model is not the model of a particle only; it's the model of an experiment. To access reality, the experiment must include a detection process, which could also be modeled by the components of the output state vector, and so on. Talking about the reality of the particle before the detection is useless because we cannot verify the prediction true or false; it's philosophy, and it leads to contradictions and false statements like spooky action at a distance (Bell experiments). Needless to say, MW is useless total nonsense.

    6. What is reality? What is actuality? To the bat, the Doppler effect of light "really" does not exist, and to the human it "really" does exist. By relativity both are true. Whether the phenomenon is insensible or sensible depends on the observer or the program. The snake's biological program cannot sense the Doppler effect of light, but the human biological program can. This clearly means that the observer or the program dictates reality. How can the same Doppler effect be both sensible and insensible? The actuality is a wave disturbance which, based on the observer, can be sensed or not sensed. This is another way of putting Wigner's friend paradox. The same thing presents two realities, which means each reality is the virtue of the program.

    7. continued. . .
      There can be no reality independent of the observer because reality is the virtue of the observer. There is one actuality and many interpretations based on the observer or program. For it is the program that interprets or describes. Therefore, the description is the program, in that there is no description separate from the program. Remove the program and the description goes away. In such a case, what does Sean Carroll mean when he says that the possibilities already exist as realities before the split? It is measurement that describes or defines a reality; that is, reality comes into being--ontology--during measurement, because there can be no measurement without the observer. Measurement is the movement of the observer. That being the case, how can the possibilities exist as realities in corresponding branches prior to the act of measurement?

    8. A relative effect such as the Doppler effect, which is a function of the relative movement of the receiver (in classical physics, relative to the propagation medium of the wave type concerned), does not need any consciousness to exist: it is objectively recordable by a device.

      More generally, there are tons of definitive arguments against this subjective definition of reality (i.e., one that requires a Subject in order to exist). I think that's why you're having trouble finding interlocutors on this blog. Examples of arguments:
      - Do you think that we, the speakers on this blog, need your consciousness to exist? You obviously do not believe it, otherwise you would not go looking for the discussion. Consistent subjectivism leads to solipsism.
      - Do you think that the world has waited for your consciousness to exist, or more generally the consciousness of the living beings of our planet?
      Where does consciousness begin? Einstein mocked the recourse to consciousness in QM by asking whether the gaze of a mouse was enough to modify a wave function.
      When one bumps into a pole BECAUSE one has not seen it, that is because it exists independently of consciousness. Etc.

      And, it must be remembered, the fact that theories are human constructions does not mean that the reality they seek to describe is a co-construction of reality and consciousness.

  9. (Long time lurker here - thank you Dr. Hossenfelder for your long series of explanatory articles, shining light on the IMHO most exciting and most important aspects of physics in an approachable language - it is much appreciated. I bought your book as well.)

    A stupid question, and an apology in advance for the imprecise language I'm using: that the Schrödinger equation is not "real" in its usual mathematical form in our universe appears to be plainly obvious from the fact that we see a dot on the double-slit-experiment screen on one side of the screen or on the other.

    I.e. which slit the particle passed through in hindsight is a 100% discrete event with trillions of followup probabilistic events building on it as that dot fades from the fluorescent screen.

    Why is it then such a big leap to require all past events in our Universe to have a fixed 100% probability - i.e. what happened happened - while future events are probabilistic and we'll only ever experience one specific outcome of it?

    I.e. cannot we picture our universe as a processing machine that takes the deterministic history of all past events (i.e. the current full quantum state of the universe) and branches off into one of the probable directions, of which we'll only ever be able to experience a single branch, if we ever look back at what happened in the past?

    In that super-deterministic view the "measurement problem" and "observation" never arise: the superdeterministic processing machine is compatible with the Schrödinger equation for every observable quantum experiment, and we are only doing measurements because doing so directly arose from the probabilistic execution of the universe's quantum state, starting from the Big Bang, progressing forwards in a deterministic fashion according to probabilistic decisions.

    In such a model the "many universes" interpretation isn't required, because the propagation function of the Universe is self-sufficient in itself and only a single version of the Universe exists.

    Admittedly super-determinism is not a particularly happy thought to advocates of "free will", but the math seems self-consistent to me, and resolves most of the philosophical paradoxes around quantum mechanics.

    (I hope my imprecise language didn't make my arguments 100% illegible to you. Not that I could do much about it if the universe is indeed superdeterministic - but I'm trying.)
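The "processing machine" picture above can be sketched in a few lines of code (a toy illustration only; the branch labels, probabilities, and seed are made up, and nothing here is meant as actual physics): once an outcome is drawn and appended to the history, it is fixed, and no other branch is ever stored.

```python
import random

# Toy sketch of the commenter's superdeterministic "processing machine":
# each step draws one outcome and appends it to a fixed history, so only
# a single branch of the universe ever exists.
random.seed(42)  # a deterministic seed stands in for "the Big Bang"
history = ["big bang"]
for step in range(5):
    outcome = random.choices(["left", "right"], weights=[0.5, 0.5])[0]
    history.append(outcome)  # once recorded, the event has probability 1

print(history)  # one definite sequence; no other branches are kept
```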

    1. Schrödinger's Cat,

      you wrote: "Admittedly super-determinism is not a particularly happy thought to advocates of "free will", but the math seems self-consistent to me, and resolves most of the philosophical paradoxes around quantum mechanics"

      I agree with you that if we lived in a super-deterministic world, this would really be a hard challenge for the arguments in favour of free will.

      But anyway, in this case I would argue that the occurrence of free will was already determined from the very beginning of the universe.

      Although our physical laws are symmetric in time, we can only remember the past; we don't remember the future.

      "The art of prophecy is very difficult, especially with respect to the future".

      Knowledge about super-determinism does not help you manage your personal life anyway. In order to live your life successfully, you still need to make your own personal decisions. You had better believe in free will! It's a very scientific view. Believing in freedom of will is, in my view, to a large extent simply the opposite of "believing in fate".

    2. You are assuming incompatibilist free will, which is a minority position among philosophers and, in usage, among laypeople.

    3. Stathis,

      I have no clue what an "incompatibilist free will" is supposed to mean.

      In my view freedom of will does exist, and it is pretty much compatible with determinism. Determinism in this context means that the state X(t) completely determines the state X(t+dt). But this is just the mechanism by which nature works. In order to understand "freedom of will" one has to go beyond the basic mechanism.

      Nobody with an open mind could deny that "freedom of will" can be experienced in our personal daily lives; thus it should be possible to define "freedom of will" in such a way that it is completely compatible with determinism.

    4. A simple question: what is the mathematical definition of a "free will decision" of a human, if every future state is a super-deterministic F(X(t=0)) function of the previous state of the universe?

      Because almost by definition it's the identity function if I read the math right, which doesn't have too much philosophical meaning.

    5. It is quite obvious that there can't be any mathematical definition.

      You would do better to find an AI algorithm that allows for the occurrence of free will. Have you ever watched Stanley Kubrick's 2001: A Space Odyssey? HAL, the spaceship's main computer, developed a quite dangerous attitude while achieving free will. At least SF fans are able to imagine free will. Needless to say, the computer itself is a purely deterministic device.

      The neuronal structure of the human brain is surely more complex than that of silicon computers. Imagine, for example, trying to explain to a psychiatrist that you have not been able to control yourself because of a lack of free will! You were simply determined to do some very stupid things and therefore could not have done otherwise; it was not your fault. I am quite sure they will have a place for you, in order to help you do better.

      Obviously free will does exist! There is no common definition yet. The lack of an accepted definition is also a severe problem if you want to study the neural processes that constitute the mechanisms of free will.

      But I am quite sure that fundamental physics won't be helpful. What quantum information physicists say about free will is not much more than a religious belief; it does not have any measurable consequences. You could equally replace "physical laws" with the "divine laws of the Holy Lord", who caused our existence by applying a big, big bang.

  10. I never understood how it should help if the word salad around "measurement" and "collapse" is replaced by another salad around "branching" and "worlds".

    These concepts are not part of the mathematical theory, so what is the goal: having the most believers?

    1. Lehrer, but in MWI "measurement" really is replaced with "entanglement with the environment by interaction with it" (following the same Schrödinger equation) and "decoherence", while "collapse" is not replaced with anything; it's just gone, as it is no longer required.

    2. Dmitry,

      No, it's not, that's the whole point.

  11. Prof Sabine,

    What you are saying plagues all decoherence, not just MWI, but of course you already know that. I don't like MWI either, but I think decoherence is already correct.

    You also already know that the proper thing to do is, in your words, "You should only evaluate the probability relative to the detector in one specific branch at a time."

    I am not sure why you seem to still have an objection. Decoherence should clearly be seen to improve the measurement problem over Copenhagen because
    1) is smooth evolution, not instantaneous shenanigans
    2) does not assume classical observers
    3) derived from Schroedinger evolution

    That means, even if you wish to insist that MWI is equivalent to the measurement postulate, you should agree that this new measurement postulate is a lot smaller an assumption than the Copenhagen one.

    But I deny even that. Feynman points out that the only physical observables, which hence should be objectively agreed upon by all observers, are the transition probability amplitudes (take the density-operator version to get rid of phase). That is, thinking about the wavefunction alone is somewhat wrong: all predictions require you to state the initial wavefunction AND the final wavefunction.

    In the final wavefunction, you need to state which detectors observed what outcome, and that immediately destroys any superpositions that are forbidden.

    Of course, I do not attempt to explain the probabilities, seeing as others seem to think this is yet unsolved. Not relevant to the point we are considering at the moment.

    Maybe there is still a measurement problem, but I think decoherence already explains a whole lot. The size of the problem is now a lot smaller than Copenhagen's.

    I do not see why Copenhagen gets a get-out-of-jail-free card when it literally defines measurement as forever outside quantum theory's purview, yet when decoherence explains so much more and leaves only tiny ghosts of "I don't know", it is somehow no longer acceptable to postulate that something special happens.

    1. B.F.

      "You also already know that the proper thing to do is, in your words, "You should only evaluate the probability relative to the detector in one specific branch at a time."

      I am not sure why you seem to still have an objection."

      I do not have an "objection", I am merely pointing out that this is an additional assumption which means that many worlds is not any simpler than the Copenhagen interpretation. This supposed simplicity is why many worlds fans think their approach is superior. I am pointing out that this is because they are not careful writing down the assumptions which are necessary to arrive at a description of the world that agrees with what we see.

    2. Prof Sabine,

      Thank you for the clarification.

      Shock and awe at physicists being sloppy!?!?!

      When my teachers and profs tell me I am sloppy, I do not even intend to defend myself. Too many have told me that; my response tends to be, "I don't know where I am being sloppy, please tell me, so I can improve."

      Needless to say, I am not buying any of the standard arguments by MWI advocates. Instead of simplicity, I think it is far more fruitful to consider that it demystifies the measurement-entanglement-observation process, which represents an actually objective improvement in understanding, rather than "I like this more, it is simpler".

      But do you think it would be fruitful to consider the Feynman view as a better alternative?

      I mean, I am intending, over the next decade, to gestate a textbook that starts teaching quantum theory from a highly simplified QFT, from scratch. The postulatory basis would be included in the middle (because I am no monster who would condemn students to missing out on what everybody else is doing) and would follow more of Feynman-Hibbs and then some Dirac.

      It would therefore be a real problem if I don't get this rigorously correct and mislead students. And yet I am, by nature, simply not the rigorous type, so I cannot help myself. I literally require external cross-checks.

    3. B.F wrote: "thinking about the wavefunction alone is somewhat wrong"

      Yes, it is clearly wrong. The continuous and deterministic evolution of the wavefunction has confused many people. The Schrödinger equation is clearly at odds with the jumps and randomness that lie at the heart of quantum physics. It is a mistake to think of the Schrödinger equation as describing an *individual* quantum system.

      "decoherence already explains a whole lot"

      No. Decoherence is just a word. It papers over the real discontinuities and suggests a "gradual" evolution from the quantum to the classical world. Coherence theory is based on classical optics and requires statistical machinery to describe the superposition of random waves. Yet some people seem to think that decoherence applies to individual systems, rather than ensembles.

      "starts teaching quantum theory from a highly simplified QFT"

      Yes, such a textbook is badly needed! Quantum field theory and quantum statistical mechanics are much closer to the core of the "measurement problem". Are you aware of the closed time-path (Schwinger/Keldysh) formalism? It combines unitary evolution and the measurement postulate in a seamless way. It's the transactional interpretation fully fleshed out.

    4. Werner,

      Don't know why comments sometimes get lost.

      Let us begin with agreements. Thank you for adding to my motivation for writing the book.

      I am also myself an enthusiast of transactional interpretation.

      I had learnt the Schwinger-Keldysh closed time-path formalism before, in many-electron physics. But I have zero idea what you mean by its having anything to do with measurement, and Google is equally stumped. For all I know, the formalism is merely supreme rigour: realising that it is not OK to assume that future infinity has the same Hilbert space as past infinity, and that we ought to impose the zeroing of the vacuum state in only one time slice. So they simply evolve the future-infinity states back to past infinity and do all evaluations there.

      That is totally inappropriate for many-electron physics, since that level of rigour is totally washed out by all the horrible approximations. It is only sensible in few-particle QFT.

      Anyway, let's move on. I think your biggest problem is "It is a mistake to think of the Schroedinger equation as describing an individual quantum system"

      It was known to the pioneers that we do not have a choice in this. Dirac pointed out early on that the double-slit experiment, done with single photons or single electrons at a time, needs the wavefunction to describe single particles in order for the particles to avoid locations of destructive interference.

      These arguments are so powerful that Born could successfully convince Schroedinger that his wave equation had to describe probability-amplitude waves, that the theory had to have quantum jumps, etc. It is also the reason why, in pilot-wave theory, the driving wave passes through both slits to produce the interference pattern, even though the single dot only ever passes through one.

      "[Decoherence] papers over the real discontinuities and suggests a gradual evolution from the quantum to the classical world"

      Isn't this a plus? Why would you want to have sudden jumps that you literally postulate to not be able to explain? What "real discontinuities" are you saying have been experimentally observed and require explanation?

      Decoherence is so advanced now that we consider things like the preferred basis to be solved. You only need Schroedinger evolution to get alpha-ray tracks in bubble chambers, even though the decay ought to be spherically symmetric. Is that discontinuous enough for you? We can also explain why you only see one result in measurements. What more do you want?

    5. "What 'real discontinuities' are you saying have been experimentally observed and require explanation?"

      Don't you believe in atoms, in some graininess of matter? Doesn't a counter register clicks? (I am not claiming that we should be able to predict when atoms decay!)
      Where does this craving for continuity come from? Schrödinger wanted to get rid of quantum jumps, and he failed. And his mythical wave function afflicts the thinking of almost every physicist.

      "I think your biggest problem is ..."

      Of course I know about Dirac's dictum that a single particle interferes only with itself. But even then the wave function describes only a statistical ensemble (the experiment has to be performed with many particles). If you insist that the time-dependent Schrödinger equation describes an *individual* particle, the statistical character of QM is lost, or has to, somewhat artificially, be put back in using the measurement postulate.

      "I have zero idea what you mean by that having to do with measurement"

      The Keldysh formalism can deal with irreversible processes. And isn't photon absorption an irreversible process? At least in everyday life absorbed light turns into heat. But in the Aspect et al. experiments photon absorption acquires special status as a "measurement" process? Only by adding the confirmation waves can we arrive at a unified description of elementary processes. I've explained this in more detail in Sabine's blog post on the problem with quantum measurements. It was probably too late for you to notice it.

    6. Werner,

      Nitpick: Dirac's dictum is NOT "that a single particle interferes ONLY with itself".

      Also, it would be better if you could just give me some links on what you mean by Keldysh v.s. measurement.

      Like, if you meant that photon absorption is irreversible, then ordinary QED is able to deal with this, and you do not actually need Keldysh. You suddenly bring up confirmation waves (yes, this is standard transactional interpretation, I know) in this context, and I am totally lost.

      It is also important to note that whether photon absorption is irreversible or not is actually dependent upon the final state. The electron could easily have coherently re-emitted the absorbed photon, in which case the absorption has to be reversible. This is also why I really take Feynman's view that the only sensible things to talk about are when you have specified both initial and final states, to get only transition probability amplitudes.

      More importantly, if you insist that wave functions only describe statistical ensembles, then you need to explain 2 things:
      1) Physicists universally deduce wave functions by solving for the eigenfunctions of operators or whatnot. This Hilbert-space trickery implicitly or explicitly assumes one single particle (a Slater determinant or better to do more, and that changes things). Those who want psi-epistemic pictures, and also your statistical ensembles, need to explain why such mathematical games produce experimental outcomes.

      2) If quantum theory only describes statistical ensembles, then how does it explain measurement outcomes either? We do a lot of stuff with single particles, and we also ought to have an answer to how detectors work. If detectors work statistically, why should the results exactly agree with quantum predictions so well that Bell's inequalities get violated?

      Finally, I was already telling you that Schroedinger evolution happily explains why counters register clicks and all that. Atoms and the graininess of matter reflect discrete conserved quantities and are not conceptually difficult. Heck, Schroedinger wanted to introduce waves precisely because self-consistent waves naturally give rise to discreteness.

      Yes, of course Schroedinger failed in his quest to get rid of quantum jumps. But the same Schroedinger evolution, with Born's probabilistic interpretation, begets all the correct results.

      It is not so much that I am against your scheme. If you want to get rid of wave functions in all of physics entirely, well, you should point me to some of your publications on that topic. I am just trying to tell you that decoherence with mere Schroedinger evolution already sufficiently explains how you get counter clicks and all that. I am not even into MWI; anything that has decoherence will suffice. And you know I am into transactional interpretation too.

      However, I am not able to see how a statistical interpretation, and/or doing without wave functions, is supposed to work, let alone help us understand more about quantum theory. Heck, I already pointed out that I am even open to entertaining post-quantum ideas, e.g. regarding pilot waves. I think I have already invested a lot of work into this accursed topic, so if you want me to understand your point, please do not ask me to put in more maths myself. Send me some links instead. I'll read them.

    7. The moment you map one copy of the detector to one unique possibility there is discrimination. This discrimination is the fallout of measurement. There can be no discrimination without measurement. To say that each copy of the observer is mapped to a unique possibility in each unique branch before the measurement took place is not reasonable. Before measurement, superposition is something undefined whose outcomes are uncertain; that being the case, how can the branching and the association of copies to possibilities be predetermined?

    8. Superposition is when measurement is not, period. Any attempt to describe or define superposition is still measurement. Measurement is always with reference to something. That something is the measurer or the observer-program. It is measurement that determines the outcome and resolves the uncertainty. Measurement destroys the superposition, in that it destroys the uncertainty. When this happens the outcome is 100% probable.

    9. Werner,

      I cannot help myself. After you mentioned that Keldysh formalism does something more, I read up, and indeed my initial views are wrong. Yes, Keldysh allows you to do non-equilibrium, adding time variance, etc. That is wonderful.

      Not sure how I could do hyper-simplified QFT from that, though.

      I would still need some pointers to how Keldysh deals with measurement. I don't know how to search for that.

      Gokul, just go away. You haven't learnt anything since we last met. Still rambling to yourself on everybody else's threads.

    10. B.F wrote: "I think I have already invested a lot of work into this accursed topic."

      Sorry, I certainly do not want to waste your time. I'm a victim of the "accursed topic" myself (dropped out astrophysicist trying to understand Quantum Theory). Surely there must be a reason why this topic is still being debated so much?

      "you need to explain 2 things"

      1) I have no problem with the time-independent Schrödinger equation and its eigenfunctions. They are of course useful, but still they represent only statistical information. Or is an energy eigenstate something "real" for you?

      "why mathematical games could produce experimental outcomes"

      Nobody can explain the "unreasonable effectiveness" of mathematics. The models that survive somehow capture essential features of reality. The Maxwellian velocity distribution, for example, is not forced on atoms by pure thought, but is a useful expression of our experience.

      2) "how does it explain measurement outcomes"

      Shouldn't we be happy to have a working statistical *description* at all? Surely you are not demanding that we supply not only an exponential decay curve but also individual decay times?

      "Why should the results exactly agree with quantum predictions so well that Bell's inequalities get violated?"

      Isn't that what the transactional interpretation achieves? There are intricate correlations, and we can "explain" them by using waves traveling backwards in time. But first and foremost we should be happy that we have at least a consistent *description* of the experiment.

      You are right that it is a key feature of the Keldysh formalism that it applies to non-equilibrium systems. Detectors are always far away from thermal equilibrium; in equilibrium the measurement signal and thermal noise are indistinguishable.

      Our views on practical matters are essentially the same. I don't want to "get rid of wave functions in all of physics entirely". I'm not suggesting any new formalism, or a mechanism to explain the measurement process.
      What I do urge is to throw out the classical concepts with which QM has always been formulated: "particles" and "measurement". The formulation of QM/QFT should be based on a more fundamental notion: an event. QFT is a statistical theory of events and correlations between them.

      "send me some links instead"

      I'm not sure if my last publication (1988!) is available online, but it likely contains very little that is new to you. What I suggested was that you look at Sabine's blog; I don't know how to extract the URL for the relevant post (and it would probably be deleted automatically anyway). But you can search for "Keldysh".
      (Remember to press "load more" twice.)

  12. >you already know that you cannot derive this detector definition from the Schrödinger equation. It’s not possible.

    I believe Sean Carroll, in the book you recently reviewed, provides the answer. Remember the part where he talks about observables with a continuum spectrum of eigenvalues. Decoherence shows how the wave function evolves into parts that "don't interact" (he skips the math, unfortunately), and then the way we divide the whole wave function into "branches" is about as arbitrary as our division of the matter around us into chairs, tables and keyboards. There are no two distinct detectors with the beam splitter; we just choose to call similar parts of the state vector "detector with arrow up" and "detector with arrow down". And the rescaling of probabilities when you select one of them is just "conditional probability"; we do it for our convenience.

    1. No, decoherence does not solve the problem.

    2. Then it's an idea for another important and helpful pop-sci blog post: what exactly is decoherence and why exactly it doesn't solve the measurement problem. I think many readers, me included, will be happy to read it.

    3. Decoherence is a process that happens due to the Schrödinger equation alone. It cannot solve the measurement problem because the measurement process is non-linear whereas the Schrödinger equation is linear. I explained that in my video. Another way to put this is that decoherence will not bring a system into a detector eigenstate, which is what we observe. Decoherence (suitably interpreted) gives you a statistical mixture.
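The point that decoherence yields a statistical mixture rather than a detector eigenstate can be checked numerically. The following sketch (illustrative only; the two-level "environment" and equal amplitudes are my assumptions) entangles a system with an environment by purely linear evolution and then traces the environment out:

```python
import numpy as np

# System starts in the superposition a|0> + b|1>; environment in |E0>.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# After a linear, unitary interaction the environment records the outcome:
# psi = a|0>|E0> + b|1>|E1>  (ordering: system (x) environment).
psi = a * np.kron(up, up) + b * np.kron(down, down)

# Reduced density matrix of the system: trace out the environment.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices (s, e, s', e')
rho_sys = np.einsum('ikjk->ij', rho)

print(np.round(rho_sys, 3))         # diag(0.5, 0.5): a statistical mixture
print(np.trace(rho_sys @ rho_sys))  # purity ~ 0.5, not the 1 of a pure eigenstate
```

The off-diagonal (interference) terms vanish because the environment states are orthogonal, but what remains is the 50-50 mixture, not one definite outcome.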

    4. Sorry, I still fail to understand the issue fully, that's why more elaboration (possibly in another post, if there isn't one already) would be helpful.

      In every branch the measurement looks non-linear and the measured part of the system looks to be in an eigenstate. Meanwhile the whole system continues to evolve linearly, there is no non-linear measurement/collapse in MWI. There's no contradiction or problem, as MWI folks see it.

      Say we have an electron in a superposition state a*|up> + b*|down>, where |up> and |down> are eigenstates of the spin operator along a chosen direction. Then we measure it with a detector. By interacting with the detector and the world around it, the system evolves (linearly) into the superposition a*|spin up, detector saw it up> + b*|spin down, detector saw it down>. No collapse, no non-linearity. Every branch of the detector sees the electron in an eigenstate. For each branch, the measurement looks like a non-linear change of the electron wavefunction from the original superposition to either |up> or |down>. But globally nothing of that kind happened; the global superposition remains. If we take this global superposition and only look at the electron part, discarding the |detector saw it down> part, then of course the electron state is not just a*|up> + b*|down> anymore; now the electron is entangled with the detector, so the electron itself can only be described by a mixed state. But that's only if you keep the detector out of the picture.

      So far I couldn't quite see which part exactly of that MWI reasoning described in Carroll's book you find troublesome.

      What I personally find troublesome is the question of probabilities, what they mean in MWI and deriving Born rule. When Carroll starts talking about "credence" variant of probabilities, I don't see how it leads to frequencies of events in MWI following necessary distributions that Born rule predicts...
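The entangling step described in the comment above can be made concrete with a small numerical sketch (the amplitudes a=0.6, b=0.8 and the CNOT-style coupling are my illustrative choices, not anything from the thread): the joint evolution is one linear map, and the superposition of inputs evolves into exactly the superposition of the two "branch" outputs.

```python
import numpy as np

a, b = 0.6, 0.8  # hypothetical real amplitudes with a**2 + b**2 = 1
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# CNOT-style interaction: flips the detector state iff the spin is down.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Joint initial state: (a|up> + b|down>) (x) |detector ready>
initial = np.kron(a * up + b * down, up)
final = U @ initial  # a|up, saw up> + b|down, saw down>

# Linearity: evolving the superposition equals superposing the evolved branches.
branch_up = U @ np.kron(up, up)
branch_down = U @ np.kron(down, up)
print(np.allclose(final, a * branch_up + b * branch_down))  # True: no collapse
```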

    5. How is a statistical mixture of 50% left and 50% right different from two non-interacting worlds, in one of which the photon goes left and in the other of which the photon goes right?

    6. In the former case the system is not in a detector eigenstate. In the latter case it is.

    7. "Another way to put this is that decoherence will not bring a system into a detector eigenstate, which is what we observe."

      Sabine, you might have here a hidden assumption that whenever we make a measurement the system ends up in a pure detector eigenstate. To experimentally prove that the system is in a detector eigenstate you have to prove that there is not even the smallest admixture of other states. If someone had such evidence then decoherence would not suffice to explain our experience in the framework of the MWI, as you point out, but I suspect that such evidence is lacking.

    8. Ripi,

      No, I am not assuming any such thing. I have instead spelled out very clearly previously that the irreversibility of the measurement process (which you would have if you did indeed end up in an eigenstate) is *not* the problem, because you can (and most plausibly do have) small remainders in the other states.

      The issue is that decoherence generically doesn't get you anywhere close to such a state, as illustrated by Peter's 50-50 example. The reason it doesn't is that the Schrödinger equation is linear. The problem is not that it's reversible. The problem is that it's linear. Measurement is a non-linear process.

    9. Prof Sabine,

      But didn't we just come to an agreement that it is not necessary to work with (what these essentially are) the collapsed states? As in, there is no problem with working with the 50%-50% mixed-state density operator for the rest of your calculation; the collapsing is really just syntactic sugar that makes it easier to keep track of the part of the universal wavefunction relevant to our physical world's actualised branch?

      I am, of course, not denying if you say this requires a new measurement postulate; I don't know what postulate would be good, or even is needed (please explain more!), only am interested in why you think it is necessary to be in detector eigenstate.


      I am also going to have to complain about the sloppiness in asserting non-linearities.

      What do you mean that measurement is a non-linear process?

      Is Schrödinger evolution linear (as needed for superpositions) or non-linear (when you impose single independent particle approximation)? Is decoherence linear (derived from Schrödinger evolution) or non-linear (master equation style evolution terms aren't linear, are they? They destroy superpositions too, so is that sufficiently non-linear for your taste?)

      I am inclined to think that measurement should be a linear process, since decoherence is sufficient to get branches that are individually detector eigenstates. Again, I am not claiming that this solves all issues. I am just asking you what you mean.

    10. B.F.,

      If you want to "keep on working" with the mixed state, you are taking on a neo-Copenhagen interpretation (the state describes knowledge and is not a real thing). In this case you are in conflict with reductionism, as I said.

      The measurement process is non-linear because if you have one prepared state that evolves into eigenstate 1 and another prepared state that evolves into eigenstate 2, then the superposition of these prepared states will not evolve into a superposition of eigenstates. I explained this in my video.

      The Schrödinger evolution is linear.

      Tracing out part of the system does not bring you into an eigenstate (and generically not even close by one) either. As I already said.
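The non-linearity argument above can be phrased as a two-line check. Here is a sketch with a toy collapse map (choosing the largest Born weight is my own deterministic stand-in for the random measurement update; any rule that outputs a single eigenstate fails the same way):

```python
import numpy as np

def collapse(psi):
    # Toy collapse: project onto the basis state with the largest Born
    # weight and renormalize. (A stand-in for the measurement update.)
    out = np.zeros_like(psi)
    out[np.argmax(np.abs(psi) ** 2)] = 1.0
    return out

e1 = np.array([1.0, 0.0])  # prepared state that evolves into eigenstate 1
e2 = np.array([0.0, 1.0])  # prepared state that evolves into eigenstate 2
sup = (e1 + e2) / np.sqrt(2)

# A linear map L would satisfy L(sup) == (L(e1) + L(e2)) / sqrt(2).
lhs = collapse(sup)
rhs = (collapse(e1) + collapse(e2)) / np.sqrt(2)
print(np.allclose(lhs, rhs))  # False: collapse cannot be a linear map
```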

    11. Prof Sabine,

      I did watch and read, and rewatch and reread, multiple times.

      Let's use your words again. Decoherence derives that "if you have one prepared state that evolves into eigenstate 1 and another prepared state that evolves into eigenstate 2, then the superposition of these prepared states" will evolve into a mixed state of the two detector eigenstates.

      All that is needed to guarantee that you either see one branch or the other, is that the detector's own eigenstates are orthogonal. Mind you, not the system in detector eigenstates, but rather the detector in detection results eigenstates.

      It is not actually tracing anything out (unless you mean within decoherence itself).

      Since decoherence treats the potentially macroscopic detector as a quantum system too, I do not see how a conflict with reductionism arises from merely having mixed states that we might later partially ignore here and there.

      Note (for others) that I am not even deciding whether to update to a collapsed density matrix, or use the mixed density matrix.


      I think I am getting to understand what you say the needed new measurement postulate is. I think you simply mean that Born probabilities, the probabilistic interpretation of the diagonal coefficients of the density matrix when expressed in detector eigenstates, are a postulate regardless of Copenhagen/MWI/PWT.

      If you just mean that, then I wholeheartedly agree. There is no improvement upon this aspect by moving from Copenhagen to decoherence, and it is not possible to get it from within Schrödinger evolution either.

      I don't think anybody is even suggesting that this could be solved. (Except maybe Prof Carroll and some other MWI extremists.)

      I really do not think this should be called the measurement problem, not least because calling it the Born-probabilities problem far better pinpoints where the difficulty is: decoherence already explains the details of measurement except for this and maybe some other stuff.

      At least I won't spend so much time confused about what is meant by "the measurement problem remains open with decoherence" if this is all you meant. I would just give up, move on, and accept that it is a postulate.
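The "Born probabilities problem" discussed above can be stated operationally: the linear evolution supplies the amplitudes, but sampling outcomes with weight |amplitude|^2 has to be put in by hand. A sketch (the amplitudes and sample size are my illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.6, 0.8                           # hypothetical amplitudes
p = np.array([abs(a) ** 2, abs(b) ** 2])  # Born weights 0.36 and 0.64

# Nothing in the Schroedinger evolution tells us to draw outcomes with
# these weights; the Born rule is assumed here as a separate postulate.
outcomes = rng.choice([0, 1], size=100_000, p=p)
freq_up = np.mean(outcomes == 0)
print(freq_up)  # close to 0.36 over many single-particle runs
```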

    12. B.F.

      You completed my quote with your own words, and wrongly so. Think about it. You seem to know the math.

      "All that is needed to guarantee that you either see one branch or the other, is that the detector's own eigenstates are orthogonal."

      Write down the assumptions you need to define "what you see" and "detector".

    13. Prof Sabine,

      OMG, sorry, it looks like I made you say something else.

      As in, I know you know that decoherence explains all the way to the decohered mixed states. That is what is needed, and meant. Please reply to the other stuff in that comment.

      I kind of do not know what you want me to do. But I can start.

      A detector is any quantum system prepared in an initially inert state that can be entangled with the system we wish to study, such that the studied system sends the detector into a different state. It is often desirable that the detector's different states amplify the perturbation from the studied system to make a permanent mark that allows for repeat measurements by yet larger systems later. Pointer states are included, simply by having a video or photograph. A human is not really important; as long as the permanent mark is made, observation of the results is to be considered completed, and the human can forget to collect the data entirely.

      In the Mott problem, each H atom as detector has its own energy eigenstates (not exactly right, since the electrons share the same underlying field), so that the alpha ray passing by will excite them to states orthogonal to the ground state; that is the definition of detection. Each of these is also orthogonal to any other atom being excited. Later, the H atoms reradiate, and that could be photographed as cloud-chamber photographs.

      A1: All systems are quantum mechanical in nature.
      A2: It is possible to have weakly interacting systems, such that the actually exact multiple-system Hilbert space is well-approximated by the tensor product of the individual system Hilbert spaces. (Needed or else orthogonality of detector eigenstates is not well-defined.)
      A3: System and detector are weakly coupled enough to have the almost-separable Hilbert spaces as above, yet not weakly interacting enough to forbid entanglement.
      A4: All standard assumptions of QM _minus_ Hermitian operators and expectation values.
      A5: Subsequent detectors are not independent. In particular, re-detection of energy-compatible observables (which may be incompatible with each other) necessarily begets the same value.
      A6: Upon incompatible detection, Born probabilities appear.

      Assumptions A5 and A6 are sufficient to tell us that projectors are involved. The projection is coming not from Hermitian observable operators acting upon the system, but from detectors having orthogonal states.

      This also explains why, in the double-slit experiment, putting a detector at the slits is different from not doing so. The detector at the slits will decohere into orthogonal detector states and thereby spoil the superposition when you later decide to specify the final states, whereas without the detectors there, there will be no orthogonality coming from the slits.

      Since the detectors furnish the projectors, it is then natural that the measurement operators inherit the projectors onto the detector eigenstates. Since detector eigenstates can always be labelled by the measurement outcomes (and other auxiliary variables if needed), this means that measurement operators are measurement outcome values multiplied with projection operators.

      Since Born probabilities are directly assumed in A6, the expectation value need not be postulated.
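      As a toy illustration of the point above, that orthogonal detector states furnish the projectors and kill interference: the sketch below uses made-up amplitudes for two slit paths and dials the overlap between the two detector states (this is a generic decoherence cartoon, not the actual Mott calculation).

```python
import numpy as np

def intensity(x, detector_overlap):
    """Screen intensity at position x for a two-path toy model.

    The joint state is a1|path1>|D1> + a2|path2>|D2>; tracing out the
    detector weights the interference cross term by <D1|D2>.
    """
    a1 = np.exp(1j * x)    # amplitude via slit 1 (made-up phase)
    a2 = np.exp(-1j * x)   # amplitude via slit 2 (made-up phase)
    return (abs(a1)**2 + abs(a2)**2
            + 2 * np.real(np.conj(a1) * a2 * detector_overlap)) / 2

# Identical detector states (<D1|D2> = 1): full interference fringe
print(intensity(0.0, detector_overlap=1.0))
# Orthogonal detector states (<D1|D2> = 0): flat, no interference
print(intensity(0.0, detector_overlap=0.0))
```

      With overlap 1 the intensity oscillates as 1 + cos(2x); with orthogonal detector states it is flat at 1 for every x, which is the "spoiled superposition" in the double-slit remark above.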

  13. Happily enough, the Many-Worlds Interpretation does not mean that the worlds have to be interpreted sequentially, or alternatively all at the same time, but just as an idea. We have not finished interpreting our own world yet.

  14. 100% possibility is mapped to one observer in each world, right. But the observer is a program, and therefore limited to a finite set of possibilities; for example, I cannot fly or walk on water. If I could do that, it would not be me, a human program, but an alien program. But MWI says there are different copies of the same program or observer. So, if the 100% possibility is flying, then the observer-program cannot accommodate the possibility. Then what happens in that world? Would Bohm come along and say it is the hidden variable? Would Bohr come along and say the "flying" possibility disappears? So, we are back to square one: What are the hidden variables? What happens to the "flying" possibility? The realization of the possibility depends on the scope or domain of the observer-program. The observer, being a program, has limited domain and range.

  15. Quantum interpretations tend to invoke something that is not quantum mechanical. We have in all of them something which breaks apart the quantum-ness of the world. Heisenberg realized this with the Copenhagen interpretation, where he saw a problem with the definition of the "cut-off" between the quantum and classical domains. Bohr's insistence on there being a dualism between the quantum world and the classical world not only means the classical world has no quantum description, but also that there exists a boundary between these two domains. Yet it is difficult to know where this boundary is. Experimentalists are currently working on this, and I recently read an article about molecular beams of molecules hundreds of daltons in mass that show quantum behavior.

    The Many Worlds Interpretation (MWI) has a funny issue with localization. We are not able to put our fingers on where this eigen-branching of the world occurs. Measurements in the Copenhagen setting are a localization. With MWI this lack of localization is maybe more commensurate with quantum gravitation. However, this is in a sense a position representation of the same problem that Heisenberg pointed out, which is more associated with the large scale in mass, momentum, energy or action of the measurement system. The issue of localization in MWI is then found in the momentum representation of the same problem in Copenhagen interpretation (CI) pointed out by Heisenberg.

    The matter of reconfiguring the probabilities based on the "phenomenon" of the observer is inherent in all quantum interpretations. QBism takes this further and says this IS the basis for measurement as a Bayesian update. If we think of all physics as a form of convex sets of states, then there are dualisms of measures p and q that obey 1/p + 1/q = 1. For quantum mechanics this is p = 2, an L^2 measure theory. It then has a corresponding q = 2 measure system that I think is spacetime physics. A straight probability system has p = 1, the sum of probabilities as unity, and the corresponding q → ∞ has no measure or distribution system. This is any deterministic system, think completely localized, that can be a Turing machine, Conway's Game of Life, or classical mechanics. The CI tells us there is a dualism between p = 2 and q → ∞ on a fundamental level, or noumena to use this definition by Kant, while MWI tells us that p remains p = 2 on the noumena and the shift is with the observer phenomena. QBism puts CI on steroids and says all that exists are Bayesian updates, and in effect there is none of this sort of dualism.

    I doubt there is any quantum interpretation that has either a theoretical proof for its truth value or an empirical hook that gives it an observable advantage. We might however think of over-complete coherent states, such as laser states of light, as those which have a classical-like symplectic structure. These states are amenable to Wigner's quasi-probability, where we can define a basis |p, q), or really |z) for z = p + iq. There is a whole range of related physics with condensates, superfluids, and states that occur often with a Ginzburg-Landau quartic potential or Bogoliubov coefficients. Then there are states removed from this condition that are mixed or maximally mixed states. This is what Einstein formulated with his photon emission coefficients! In a quantum-gravitation setting we may then see this as a way of looking at a classical background manifold with gravitons. In this way we might then also be able to look at how spacetime is "built up" from entanglements of states. The emission of Hawking radiation and the "damage" done to quantum states is remarkably similar to how Tr(ρ^n) for n larger than unity is not preserved.

  16. continued due to space limits

    The paper "Quantum Theory of the Classical: Quantum Jumps, Born's Rule, and Objective Classical Reality via Quantum Darwinism" by Zurek, arXiv:1807.02092v1 [quant-ph], 5 Jul 2018, offers the hypothesis that unitarity is broken by nonlinearities that occur in systems with large quantum number N. This then holds that the classical states of the world are maintained against environmental decoherence by a sort of environmental supersymmetry. This too hints at some sort of conservation of quantum phase, so quantum information or qubits are conserved, and there is maybe something similar to the connection to Einstein coefficients with coherent and maximally mixed states.

    There are plenty of things to think about here, and I have thought for many years that this issue is connected in part to problems with the quantization of gravity. The connection between how Tr(ρ^2) is not preserved in measurement and Hawking radiation, the possibility that spacetime is emergent from quantum entangled states, convex sets of states, and other connections indicate there may well be overlaps between quantum gravity and this subject of quantum decoherence or measurement.
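    The non-preservation of Tr(ρ^2) under measurement-like decoherence can be checked in a few lines (a generic single-qubit illustration, nothing specific to Hawking radiation):

```python
import numpy as np

psi = np.array([1, 1]) / np.sqrt(2)       # equal superposition of two states
rho_pure = np.outer(psi, psi.conj())      # pure-state density matrix
rho_mixed = np.diag(np.diag(rho_pure))    # decoherence removes the off-diagonals

def purity(rho):
    return np.trace(rho @ rho).real

print(purity(rho_pure))    # 1.0 for the pure state
print(purity(rho_mixed))   # 0.5 after full decoherence
```

    The purity Tr(ρ^2) drops from 1 to 1/2, which is the quantity that unitary evolution alone would have preserved.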


  17. How about the delayed-choice quantum-eraser experiment? How else do we explain that unless we use many worlds? Surely a retroactive, backwards-in-time influence is more religion than creating many worlds!?

  18. Wave functions are not real, physical things. Wave functions do not collapse.

    The belief in collapsing wave functions should be our prime concern.

    1. That is a bit of an inversion. The usual is to find ψ-epistemic interpretations with a collapse and the ψ not existing, and ψ-ontic interpretations, with ψ existing, that have no collapse. Stochastic and ensemble QM or QED have this feature, and consistent histories as well. Though CH has a lot of problems.

    2. There is no wave function or epistemic language (as if nothing was real until a human brain evolved to "observe" the universe) in quantum measure theory. QMT's focus is on the nature of the (quantum) measure space, which seems a more natural thing to do for probabilists, not that it is necessarily the "wave" of the future for quantum theory.

      Evolving Realities for Quantum Measure Theory
      Henry Wilkes
      "whilst Hilbert space quantum mechanics uses the Hamiltonian and collapse for its dynamics, in QMT we use the quantum measure, which measures the sum of quantum interferences between pairs of histories in an event"

  19. Hi Sabine,

    you wrote: "If you believe in the Copenhagen interpretation you have to buy that what the detector does just cannot be derived from the behavior of its microscopic constituents. Because if that was so, you would not need a second equation besides the Schrödinger equation. That you need this second equation, then is incompatible with reductionism"

    In my view, your blogpost gives really convincing arguments with respect to this view. We should face the idea that there might be a problem with the assumption of reductionism!

    In an earlier blogpost, you argued that the measurement process is about information loss. I am not so sure any more if this is really the case. One could argue that the wave function is only an approximation of the unknown true physical state in all its particular details. If you take this view, the measurement process would be about gaining additional information about the real world within classical terms.

    Learning algorithms, biological evolution, etc. are all about accumulating or gaining some kind of information, which seems almost impossible to deduce from the quantum mechanical wave function, which has quite limited **predictive** power in explaining our real-world observations.

  20. I thought this article by Chad Orzel was a more palatable view of MWI, though I'm not even a layman in the field (a pagan perhaps?):

    1. From the article by Chad Orzel: "There’s only one universe, in an indescribably complex superposition, and we’re choosing to carve out a tiny piece of it, and describe it in a simplified way"

      It seems to me that this statement explains the origin of the problem with the reductionistic approach.

    2. Saw that, Prof. Hossenfelder: "carve out a tiny piece of it [the indescribable]". This carving out is what I call "abstraction", "to draw away". Abstraction is according to the observer or program. Different observers abstract different realities. Because you are abstracting according to a background, a frame of reference, which is the observer-program, "abstraction" is "measurement". If you remove the observer-program, the abstraction or description goes away; rather, the abstraction merges into the whole, and it is one whole thing, the indescribable (Chad Orzel) or the undefined or the uninterpreted primordial. Looks like I have not been talking nonsense.

    3. Continued... with this statement by Chad Orzel, what I have been saying about reductionism falls into place; that is, out goes the observer and in comes the indescribable... and reductionism ends.

    4. Sorry Gokul, but I do not understand anything of what you are talking about.

  21. I still find it very worrisome that established scientists turn to MWI for explaining QM. That may be a basic fact, but its importance is underestimated as we simply try to understand the behavior of >this< universe we are living in.

    1. Marc,

      I also think it is bad that people simply turn to it.

      However, I cannot agree with your particular complaint. That would be like asking what it means for the first few planetary orbits to just so happen to be almost nicely related to the Platonic solids.

  22. A human is programmed to detect optical light; he or she cannot sense infrared light or ultraviolet light. In that sense the human program is limited to optical light. I need to create a device to sense either infrared or ultraviolet light. Such a device is a hardware program because it performs the well-defined function of detecting ultraviolet light. Then whatever it detects is translated into a format that is intelligible to us humans: maybe a numerical or graphical representation. This device cannot detect infrared light; it is simply not programmed to. Let us say there are only light waves. We don't know what they are. These waves are undefined. But we know that in our absence, or in the absence of any program like an infrared or ultraviolet detector, they are just waves; we can't describe or define or discriminate them as visible, infrared, ultraviolet, etc. That indescribable, undefined thing, whatever it is, is what I call "actuality".

    Now, these indescribable waves being everywhere, I first introduce an infrared detector. What will I detect? Infrared light, right. I remove this detector. Then again there are indescribable waves. Second, I introduce an ultraviolet detector, what will I detect? Ultraviolet light, right. I remove this detector. The actuality returns: there are only indescribable, undefined waves. Third, I enter among these waves. What will I see? Bright visible light, optical light. I jump out from among the waves, and the actuality returns.

    Detection or seeing is measurement because I detect or see based on or with reference to the detector or human, which is a program. Any description, any definition with respect to a frame of reference is measurement. I am describing what "the other" is with respect to the frame of reference. If the frame of reference is an infrared detector or program, then I describe infrared light; alternately, if the frame of reference is an ultraviolet detector or program then I describe ultraviolet light. The rest of the waves are simply undefined.

    I say what applies to the macrocosm also applies to the quantum world. Let us consider superposition as the undefined. First, if I introduce a detector or program that detects "left", then what the detector detects is "left". When I remove the detector, the actuality returns, which is undisturbed superposition; because there is superposition the interference pattern returns. Second, I introduce a detector that detects "right", then what the detector detects is "right". I remove the detector, the actuality, the interference pattern returns.

    There are two ways of looking at this measurement. First is that when left or right is detected the rest is undefined. Second, the rest is undefined because the whole wave contracts or coagulates when the electron or photon "records as memory" its interaction with the detector. The entire superposition of the states is used up to record that one state of interaction with the detector. This creates a memory-load which is responsible for the particulate behavior of the photon or electron, which till the point of interaction or measurement behaved like a wave or existed as a superposition.

    What is happening during measurement? Recording, memorization, and programming. The act of measurement programs the electron or photon. Thereafter, the programmed electron or photon acquires a particulate nature.

    1. Gokul,

      regarding electromagnetic waves, your arguments are a bit misleading. Electromagnetic phenomena are predicted by Maxwell's equations. Radio waves were predicted successfully long before we were able to turn on our radio in order to enjoy some music from our favourite radio station.

      It is sometimes quite helpful to have an appropriate theory before trying to build a detector. It helps to discriminate the waves in terms of their wavelengths (frequencies). The waves are not as indescribable as you might believe.

  23. Where does reductionism end? Why? The undefined or the uninterpreted primordial is the ultimate actuality. It is indescribable, beyond measurement. Description, definition, measurement begin the moment I introduce the observer or the program as a frame of reference. What happens when I remove the observer? Measurement ends. There is the undefined or the uninterpreted primordial. When measurement stops, you can't reduce any further: if measurement is possible, and it is possible because there is the inkling of the observer, there is reductionism. This implies that the moment I remove the observer, measurement ends, therefore, reductionism ends.

  24. The moment measurement begins and the observer cuts out a reality from the actuality we enter the classical world.

  25. "Most sign up for what is known as the Copenhagen interpretation"

    Do you have data on this? I remember reading that, while the Copenhagen interpretation was indeed the favoured one decades ago, this has now changed, with the many-worlds interpretation now much higher in, possibly at the top of, the polls.

    Of course, the correct interpretation isn't decided by vote (and one could also argue that if it could be decided at all, then it would no longer be just an interpretation), but you seem to be painting the many-worlds interpretation as some sort of fringe position (at least with respect to the fraction of scientists who subscribe to it).

    Joke du jour courtesy of Roger Penrose: "There are probably more different attitudes to quantum mechanics than there are quantum physicists. This is not inconsistent because certain quantum physicists hold different views at the same time."

    1. One philosopher said, "When you are with the majority, that is the time to think." How many people understood that the Sun is the center of the solar system when it was first stated? A handful, maybe. How many people understood that gravity is not a force but a curvature of space-time when it was first stated? Maybe two, because Arthur Eddington asked who the third man was. A fact is non-democratic, in that numbers don't count; either you see it or you don't. Even if everybody on planet Earth says that the Earth is flat, is that the truth?

  26. What is your explanation as to why many people who are obviously very smart, such as Max Tegmark, David Deutsch, Sean Carroll, etc, subscribe to the many-worlds interpretation?

    1. I'm a physicist, not a psychologist.

    2. Phillip,

      I think the explanation is quite easy. It is a very rare event that a massive change in the current physical paradigm arises from the work of a single person, like e.g. the game-changing contributions to physics from Albert Einstein in his miracle year 1905.

      Most physicists tend to pick up ideas with a tendency to be just below the surface of awareness. MWI is actually very trendy, so it's quite obvious that clever and smart people become attracted.

    3. Sabine wrote: "I'm a physicist, not a psychologist."

      Shouldn't a theory be judged by the *physical* arguments put forward?

      Of course there is a strong psychological force at work here: physicists are hooked to theoretical preconceptions. Many cannot even conceive of quantum theory without the wave function.

    4. Phillip,

      They use their brain power in the wrong places...

      Einstein's principles (like Mach's) are mostly forgotten in favor of new fancy QM interpretations. Reality should be realistic... :O)

    5. "Shouldn't a theory be judged by the *physical* arguments put forward?"

      Hi Werner,

      you are definitely completely right with this. But with MWI there is nothing to be judged at all. So far, no one has any clue how to set up an experiment in order to verify MWI. In other words, MWI is "not even wrong".

    6. We are not brushing aside MWI, nor are we finding fault with it; rather, we are going into it very deeply to find out the facts of QM. If I start with a prejudice or a bias, then I cannot go very far, very deep. This prejudice or bias is the psychological observer. When the observer is active, I will see what I am programmed to see and not the fact. The observer influences observation, which is no observation at all. This psychological observer is the program put together by scientific tradition and orthodoxy. Berzelius was a big name in chemistry. He said organic compounds cannot be synthesized in the laboratory. Chemists of his day were programmed to this conclusion, which then became their tradition or background. The tradition or background is the observer or the program. If Wöhler had started off with this background or observer interfering in his enquiry, could he have synthesized urea in the laboratory? Newton was an authority on gravity; if Einstein had given in to the authority of Newton by allowing the observer or the psychological program of Newton's tradition to interfere in his enquiry, could he have discovered relativity? "Einstein broke the Newtonian orthodoxy" (Aldous Huxley). This is because the activity of the observer was in suspension, and therefore he saw. Observation is when the observer is not.

    7. "A former LEP experimentalist," yes, there is a way to falsify MWI. If it turns out the quantum computing is not possible because of some currently-unknown mechanism that causes wave function collapse no matter how carefully we isolate a system from its environment, then MWI is out, and QM as we know it needs to be modified to account for this mechanism.

  27. This is a live experiment about observation of waves:

  28. Probabilities “jump” from 50% to 100% in MWI because we are talking about conditional probabilities.

    We can apply the conditional probabilities before the measurement, but then it would be an empty statement: “Given that the detector measured an up spin, the probability that the spin is up is 50%.” At this point, the detector has no knowledge about the state of the spin.

    Once the detector makes a measurement, this would become: “Given that the detector measured an up spin, the probability that the spin is up is 100%.”

    We need to make a similar statement for the spin down state. Then the time evolution between the initial state before the measurement and the final state after the measurement can be fully described by a linear, unitary equation.

    The measurement problem is solved.
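    The conditional-probability bookkeeping described above can be written out with the Born rule on a maximally correlated spin-detector state (a sketch; the basis ordering |u1 u2>, |u1 d2>, |d1 u2>, |d1 d2> is my own choice, and whether this dissolves the measurement problem is exactly what the rest of the thread disputes):

```python
import numpy as np

# Post-measurement entangled state (|u1 u2> + |d1 d2>) / sqrt(2)
psi_final = np.array([1, 0, 0, 1]) / np.sqrt(2)
p = np.abs(psi_final)**2              # Born probabilities of the joint outcomes

p_detector_up = p[0] + p[2]           # detector (particle 2) reads "up"
p_spin_up_and_det_up = p[0]           # spin up AND detector reads "up"

# P(spin up | detector read up) = P(up ∧ det-up) / P(det-up)
print(p_spin_up_and_det_up / p_detector_up)   # 1.0
```

    The unconditional probability of "detector up" stays at 50%, while the conditional probability jumps to 100%, which is the "update" being debated.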

    1. Udi,

      Making any statement about what a detector measures or doesn't measure requires that you define what you mean by "detector."

    2. Sabine wrote:

      “Making any statement about what a detector measures or doesn't measure requires that you define what you mean by "detector." “

      In this context a detector can be simply a two state system: |detector measured up spin> and |detector measured down spin>. I can explicitly write the initial state, the final state and the unitary transformation between the two. It is just tedious to put it in a comment with no support for writing equations.

    3. Udi,

      Good, now you have a detector. Now please calculate what you observe using nothing but the Schrödinger equation.

    4. Sabine wrote:

      “Good, now you have a detector. Now please calculate what you observe using nothing but the Schrödinger equation.“

      This is going to be ugly. I will write it as a two particle system. The first is the spin we are measuring and the second is the detector. In the initial state the particles are set up to be independent of each other:

      psi_initial = (|u1> + |d1>)(|u2> + |d2>)/2

      In vector notation it can be written as:
      psi_initial = [1 1 1 1] / 2

      In the final state the particles are coupled:

      psi_final = (|u1>|u2> + |d1>|d2>) / sqrt(2) = [1 0 0 1] /sqrt(2)

      The unitary matrix that transforms between the initial and final state is
      U = (1/sqrt(2)) ×
      [ 1  0  1  0 ]
      [ 1  0 -1  0 ]
      [ 0  1  0 -1 ]
      [ 0  1  0  1 ]

      psi_final = U psi_initial

      You can diagonalize it and write down the Hamiltonian if you want. I don’t think it will give you any more insight.

      The final state obeys exactly the conditional probability that I wrote: “Given that the detector measured an up spin, the probability that the spin is up is 100%.”
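      For readers who want to check, the states and matrix above can be transcribed directly into numpy (same basis ordering as in the comment):

```python
import numpy as np

s = 1 / np.sqrt(2)
psi_i = np.array([1, 1, 1, 1]) / 2          # (|u1> + |d1>)(|u2> + |d2>)/2
psi_f = np.array([1, 0, 0, 1]) * s          # (|u1>|u2> + |d1>|d2>)/sqrt(2)
U = s * np.array([[1, 0,  1,  0],
                  [1, 0, -1,  0],
                  [0, 1,  0, -1],
                  [0, 1,  0,  1]])

print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: U is unitary
print(np.allclose(U @ psi_i, psi_f))           # True: U maps psi_initial to psi_final
```

      Both checks pass, so the evolution from the product state to the entangled state is indeed unitary; the dispute in the thread is over what that buys you.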

    5. Udi,

      "“Given that the detector measured an up spin, the probability that the spin is up is 100%.”"

      Do I really need to say that this is a circular argument?

    6. Sabine wrote:

      “Do I really need to say that this is a circular argument?”

      I don’t see any circular argument here. The statement: “Given that the detector measured an up spin, the probability that the spin is up is 100%.”

      is just a description of the equation:

      P(u1|u2) = P(u1 ∧ u2) / P(u2) = (1/2) / (1/2) = 1

      The detector can measure spin up or spin down. It cannot measure a superposition of the two spin states, because this is how I decided to build the detector. If I wanted to measure something else, I would choose a different detector.

    7. Udi Fuchs, please provide an interpretation of the state [1 0 0 0], and the final state after the action of your unitary matrix upon it, namely [1 1 0 0]/sqrt(2).

    8. Arun wrote:

      “please provide an interpretation of the state [1 0 0 0], and the final state after the action of your unitary matrix upon it, namely [1 1 0 0]/sqrt(2).”

      [1 0 0 0] = |u1>|u2>
      [1 1 0 0] = |u1>(|u2> + |d2>)

      There is nothing special about this transformation. There are many unitary operators that would do the measurement I wanted (anything in the U(3) subgroup of U(4)), I just wrote the first one that came to my mind.

    9. Udi,

      If you do not understand that an if-then statement doesn't prove the conditional, I cannot help you.

    10. Udi Fuchs, physically what does this time evolution mean?

    11. Sabine wrote:

      “If you do not understand that an if-then statement doesn't prove the conditional, I cannot help you.”

      The conditional clause is “the detector measured an up spin”. Of course it cannot be proven true because it is not necessarily true. What is true is that either “the detector measured an up spin” or “the detector measured a down spin”.

      We have set up the detector to measure spin in the up/down direction, that is why it is our preferred basis. In this basis there are exactly two independent answers “100% up” and “100% down”. You can have a superposition of the detector measuring both options, but there is no state where the detector measures “50% up”.

      You can try asking our up/down detector about left/right spin, but it is not sensitive to this question, so the answer here would always be “50% up and 50% down”.
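      That last point is just the Born rule in the detector's up/down basis; a sketch, taking |left> = (|up> + |down>)/sqrt(2) as a conventional choice of left/right eigenstate:

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
left = (up + down) / np.sqrt(2)   # an eigenstate of the left/right observable

# Born probabilities in the detector's preferred (up/down) basis
print(abs(up @ left)**2)     # 0.5
print(abs(down @ left)**2)   # 0.5
```

      An up/down apparatus fed a left/right eigenstate returns each of its two allowed answers half the time; it never returns "50% up" as a single reading.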

    12. Arun wrote:

      “physically what does this time evolution mean?”

      I’m not sure it has any physical meaning. Sabine complained that in MWI you need to “Update probability at measurement to 100%”. So I gave the simplest example I could think of that demonstrates how this “update” works with a unitary operator.

      If you want an example that is more physically realistic, you can look at a Hamiltonian that describes the Stern-Gerlach experiment.

  29. Do you have a way to explain the power of quantum computing other than the many worlds interpretation?

    1. The postulates you need to derive the speed-up of quantum computing compared to classical computing do not depend on the interpretation. That derivation is explanation enough for me.

    2. The sum-over-histories formulation of quantum computing
      Ben Rudiak-Gould
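      The path-sum idea in that reference can be illustrated with a toy two-gate circuit: the amplitude <0|HH|0> computed as a sum over intermediate basis states, i.e. over "histories" (a minimal sketch, not the paper's full formalism):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

# Amplitude <0| H H |0> as a sum over the intermediate basis state k:
# history 0 -> 0 -> 0 contributes 1/2, history 0 -> 1 -> 0 contributes 1/2.
amp = sum(H[0, k] * H[k, 0] for k in (0, 1))
print(amp)                               # ≈ 1.0: constructive interference
print(np.allclose(amp, (H @ H)[0, 0]))   # True: matches matrix multiplication
```

      The interference between histories is the same structure that quantum algorithms exploit, and nothing in the calculation depends on an interpretation.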

  30. " But we already know that this isn't possible because using only the Schrödinger equation you will never get a non-linear process."

    That is wrong because you do not need to assume a nonlinear quantum process.

    The Schrödinger equation, in its nonrelativistic world, is sufficient to describe how the detector works! And it's linear. "Detection" of a single particle is just a cascading set of standard processes that proceed from the microscopic to the macroscopic. A photon creates an electron-hole pair in silicon. That pair separates in an electric field, the electron ending up on the gate of an FET. This controls the flow of other electrons onto a wire to a second transistor, the output of which is connected to a gong that makes a sound heard by a whole lecture hall. There is one gong for right, one for left.

    No one disputes that. No one disputes that transistors are made of atoms described by quantum mechanics. No one disputes the quantum theory of semiconductor band structure. No one disputes the quantum theory of how transistors work. Apparently some people dispute that you can construct a quantum mechanical operator that describes the position of the clanger in the gong, but I consider that silly ... a sum of operators projecting out the positions of the metal atoms in the gong will do. One can even dope it with atoms of an element otherwise unused in the system, and project out those only. The uncertainty principle operating on such sums generates negligible uncertainty.

    This of course implies that the original state of the detector system matters ... but we cannot measure it exactly. It does NOT dispute that in fact the wavefunction of the original particle (if a fermion or composite boson) actually DOES collapse ... it does. It does this because it is entangled with the wavefunction of the detector, and it is the projection of the original particle out of the wavefunction of the whole apparatus at the time of the gong that matters. For photons as particles, of course, the "wavefunction" of the photon does not collapse, it disappears ... but here we enter the realm of field theory.

    It's also true, of course, that if you expect to see 50% one way, 50% the other, adding to 100%, for a particle, you have to KNOW IN ADVANCE that there is just one, not zero or many, particles. The stuff that determines that is part of the "apparatus" you have to include in the Schrödinger equation.

    I find it very odd that many people insist on denying unitary evolution!

    Some say "but you have to PROVE that this results in the Born Rule". I say, no I'm perfectly free to use the Born rule on the probability generated on the classical size measurement at the gong ... we all agree on unitary evolution. There really is only one world, and observing such a classical size result per se leads to consequences which are best worried about by philosophers. This is the crux of the matter.

    For some reason lots of bloggers don't like to consider such simple explanations. Some actually censor out my comments. Oddly, all of my colleagues, when I or my (NAS member) department head explain this, seem to understand just fine. But then, they are neither fancy physicists nor philosophers. Yes, this is too long.

    1. dtvmcdonald,

      "For some reason lots of bloggers don't like to consider such simple explanations. Some actually censor out my comments."

      Yes, because what you say is obviously wrong. It doesn't matter how many linear processes you line up after each other, that still doesn't make a non-linear process. I strongly suggest you try to actually write down the equations because that will make it immediately clear. I am not saying this because I am dismissive but because I started at the same point as you 25 years ago.
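The point about chained linear processes can be checked in a few lines: compose any number of linear stages and the whole cascade still acts linearly on superpositions. A minimal numpy sketch (the random 4×4 matrices and the ten-stage cascade are arbitrary stand-ins for the detector chain described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# A "cascade" of linear processes: each stage is just a matrix.
stages = [rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
          for _ in range(10)]

def cascade(state):
    """Apply the whole chain of linear stages to a state vector."""
    for M in stages:
        state = M @ state
    return state

psi_left = np.array([1, 0, 0, 0], dtype=complex)
psi_right = np.array([0, 1, 0, 0], dtype=complex)
a, b = 0.6, 0.8j  # arbitrary prefactors

# Linearity survives composition: cascading a superposition equals
# the superposition of the cascaded states, for any number of stages.
lhs = cascade(a * psi_left + b * psi_right)
rhs = a * cascade(psi_left) + b * cascade(psi_right)
print(np.allclose(lhs, rhs))  # True
```

However long the chain, no non-linear "collapse" map ever appears; that is the content of the reply above.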

  31. Let us take four cases of the observer and the observed.

    1. A scale, the observer, and a straight line of finite length, the observed.

    2. A human on the railway platform, the observer, and Doppler's effect of sound, the observed. This together with a human inside a moving train blowing its horn, the observer, and the monotonous sound of the horn, the observed.

    3. A fly, the observer, and rotten stuff, the observed. This together, with a human, another observer.

    4. A Hindu, the observer, and the belief in reincarnation, the observed. This together with a Christian, the observer, and the belief in resurrection, the observed.

    In case 1, the scale is not a standard, say; it has lost its absoluteness. When I alter the length of the scale, the length of the straight line changes. A shorter scale means a longer line, and a longer scale means a shorter line. That is, when I alter the observer, the scale's length, then the observed, the length of the line, changes. By relativity all lengths are true, because different observers mean different lengths.

    In case 2, the observer on the railway platform experiences Doppler's effect, but the one in the moving train experiences a single monotonous sound of the horn. By relativity the presence or absence of Doppler's effect is true to the respective observers. When the observer changes, the phenomenon appears or disappears.

    In case 3, to the fly biological program, the rotten stuff is attractive, and to the human biological program, it is offensive. How can the same stuff be both attractive and offensive? So the offense is not out there in the stuff but it is in here in the program. If I swap the human program for the fly program, the human finds the stuff attractive, and the fly finds it offensive. The program is subjective, so, the observers are subjective. But we can apply relativity here too. By relativity both offense and attraction are true. The fly "really" finds the stuff attractive, and the human "really" finds the stuff offensive. So both offense and attraction are real or true according to relativity.

    In case 4, one human programmed as a Hindu believes in reincarnation, which is the observed. And another human programmed as a Christian believes in resurrection, which is the observed. Now, the Hindu-program and the Christian-program are the observers, and the belief in reincarnation or resurrection is the observed. We swap the programs: the Hindu now reprogrammed as a Christian will believe in resurrection, and the Christian now reprogrammed as a Hindu will believe in reincarnation. The Hindu-program and the Christian-program are psychological programs, software programs. And when I swap the programs, the observed, say, belief in reincarnation changes to belief in resurrection. By relativity the Hindu-program "really" believes in reincarnation, and the Christian-program "really" believes in resurrection; therefore, both are real, but the actuality is that these realities are illusions.

    In all the four cases we can safely apply relativity, and we see that when we alter or change the observer, the observed also changes. Therefore, "the observer" is "the observed". This is true for all measurement in the classical world as well as the measurement problem or the measurement in the double slit experiment.

  32. "The observer is the observed" J Krishnamurti

  33. A physical device doing quantum measurements is composed of linear systems performing linear transformations on state waves (or signals), like beam-splitting polarizers. All of that is modeled in QM by a linear observable matrix acting on state vectors or signals.
    As long as you don't do any detection, the process is linear; but once you start the detection process, it's irreversible.

    Indeed, the detector is a device performing a completely non-linear process. First, it provides the modulus of the wave function, which cancels out the phase of the signals; this process is of course irreversible. Second, it usually applies a threshold to the modulus: detection/no detection, yes or no. Each measurement is like an answer to a question, so it brings some information which is a function of its probability of occurrence, given by the modulus squared of the wave function.
    I am not sure what you mean by reductionism. If the linear part of the quantum experiment has something to do with reductionism, I don't see why it shouldn't be the same for the detection devices.
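The claim above that taking the modulus squared is non-linear and phase-destroying can be seen in two lines. A toy sketch (the relative phase 0.7 is an arbitrary illustrative choice):

```python
import numpy as np

def detect(amplitude):
    """Toy detection: output is the squared modulus; the phase is discarded."""
    return abs(amplitude) ** 2

a = 1 / np.sqrt(2)                  # amplitude to go left
b = np.exp(1j * 0.7) / np.sqrt(2)   # amplitude to go right, with some phase

# Detection is not linear: detect(a + b) != detect(a) + detect(b).
print(detect(a) + detect(b))   # probabilities add to one
print(detect(a + b))           # 1 + cos(0.7): depends on the relative phase
```

The mismatch between the two outputs is exactly the interference cross term, and it is why no sum of Schrödinger-linear stages alone can reproduce this map.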

  34. continued. . . the 4 cases. . .

    Out of measurement after measurement, out of programming after programming, out of pattern forming and more and more complex pattern forming through the ever increasing complexity of the observer or the program at various levels, the classical world emerged. . . going on this way, that is, recording, memorizing, and programming, life or the biological programs emerged. A stream of photons kick-starts photosynthesis quantum mechanically, and after many, many layers of complex programming an apple tree bears an apple. To reduce or shrink the response time to a challenge, psychological astuteness like thinking, ideation, and imagination emerged, so much so that while it might take centuries upon centuries for the human biological program, a hardware program, to mutate to be resistant to the polio virus, with the capacity of the psychological program, a software program, man discovers the polio vaccine and eradicates the virus. Time is always shrinking as evolution progresses.

  35. Sometimes I get the impression that the measurement problem is an artifact associated with an over-dependence on the Schrödinger equation -- first of all, it might be better interpreted in a statistical sense. But if we worked entirely in the framework of something more phenomenological, like the Heisenberg/Dirac matrices, we would not be talking about a wave function "collapse". Measurement would be reduced to phenomenological proportions. I'm well aware of the difficulties of the matrix methods, and the level of abstraction required. But maybe that is the best approach, simply for those very reasons, of difficulty, and abstraction.

    1. "(the Schrödinger equation) might be better interpreted in a statistical sense"

      You are right. In the Heisenberg picture the state vector is constant. And in that picture it would never occur to anybody to associate a matrix with an *individual* system - only with an ensemble. But apparently many people believe that the Schrödinger equation describes the evolution of an *individual* quantum system. The "deterministic" evolution applies only to the ensemble, not to its individual members. To arrive at a result, the ket (wave function) must always be combined with a bra, and a sum (trace) be taken over the entire ensemble.

      It would help a lot to say that every quantum system is described not by a wave function, but by a density matrix. A pure state is a very special case.

  36. That there exists a probability distribution for the possible outcomes means that repeating the same experiment many times over will yield a frequency distribution that corresponds to that probability distribution. There is then a fluctuation around the expected normalized frequency distribution, and this tends to zero in the limit of an infinite number of measurements.

    One can then consider a hypothetical system that consists of an infinite number of copies of the original system and then define the observable for measuring the normalized frequency distribution. The frequency distribution will of course be found to be given by the Born rule with probability 1. This means that the state of the frequency distribution always corresponds to the Born rule and that this is an eigenstate of the observable.

    What this means is that the general Born rule follows from the special case of the Born rule that says that if a system is in an eigenstate of an observable then the system will be found with certainty in that eigenstate upon measurement of that observable.
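The frequency argument above can be illustrated by sampling: simulate repeated measurements of a qubit and watch the observed frequency converge to the Born-rule probability. A sketch assuming a state with |a|² = 0.3, |b|² = 0.7 (arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(42)

# State a|0> + b|1> with Born-rule probabilities |a|^2 and |b|^2.
a, b = np.sqrt(0.3), np.sqrt(0.7)
p1 = abs(b) ** 2

for n in (100, 10_000, 1_000_000):
    outcomes = rng.random(n) < p1   # one simulated measurement per sample
    freq = outcomes.mean()          # observed frequency of outcome |1>
    print(n, freq)  # fluctuation around 0.7 shrinks roughly like 1/sqrt(n)
```

In the limit of infinitely many copies the fluctuation vanishes, which is the sense in which the frequency observable has the Born-rule distribution as an eigenstate.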

  37. Corrections:

    Here is easy way to understand -> Here is an easy way to understand

    splits into several parallel words -> splits into several parallel worlds

    1. Thanks, I have fixed this. This is the actual copy of the transcript that I used for the video. I have read this out loud a dozen times and didn't notice these typos.

  38. Sabine,

    I always thought that a big problem with Many Worlds was that say you take a spin 1/2 particle in a magnetic field, you can measure that state and you split into an up 'world' and a down world. However without the magnetic field - the degenerate case - you have an infinite set of possible wave functions, and thus an infinite number of worlds.

    Is that a faulty way of looking at Many Worlds?

    1. David,

      Maybe, or maybe not. Whether anything in nature is truly continuous and/or infinite is somewhat questionable. Usually there's a way to turn infinity into "large but finite". In any case, however, I don't see how this is a problem for many worlds.

    2. Well if rotations are continuous, then talking about a continuum of 'worlds' (actually universes) branching out of degenerate wave function collapses seems to stretch my imagination to breaking point - maybe that's my limitation.

  39. It has always seemed to me the MWI is non-scientific, because it fails to predict anything at all about what we observe. It is tautological, "you see what you see." It is an empty "explanation", as much as saying a child being struck by lightning was "God's Will."

    It just isn't science, scientific knowledge has to limit something, either the range of things that did happen to produce what we presently observe (like astronomy or forensics or geology) or the range of what will happen given what we presently observe. MWI is incapable of doing that. Schrodinger's is at least capable of predictions, even if we don't understand the mechanisms.

    I am not in the "shut up and compute" camp, nor do I buy the Copenhagen interpretation. There may be some non-linearity yet to discover. But the answer isn't to throw out science (Schrodinger) for non-science (MWI).

    1. Dr Castaldo,

      Your complaint does not make sense for the following reasons:
      1) MWI is assuming nothing more than "Schroedinger evolution works for everything." Every other interpretation assumes something extra. You cannot claim that "Schroedinger's is at least capable of predictions" against MWI.

      2) The essence of your argument is akin to complaining that the continuous possibilities given by Newtonian orbit theory are an empty explanation compared to Kepler's early Platonic-solid values of the planetary orbital radii.

  40. Sabine (and Lawrence, since you mentioned it): The obvious solution is that there IS some non-linear factor involved, that isn't accounted for by the Schrödinger equation, and one candidate may be gravity. Perhaps it is not quantizable, and has acted as the non-linear "detector" since the whole universe was nothing but a dense quantum soup (if that was ever true).

    And as others have suggested (Penrose I think), the "collapse" is triggered by the gravitational behavior of masses in superposition; i.e. there is some threshold to be discovered at which massive superpositions become mutually exclusive.

    1. Dr Castaldo,

      All such schemes are doomed because we have macroscopic systems put into quantum superpositions.

    2. “... we have macroscopic systems put into quantum superpositions.”
      Do we?
      The biggest one I know is this here, and 2000 atoms I would not call macroscopic.

      If you now want to name things like BEC, SQUID, ... then first think about whether these are just a bunch of bosons (or Cooper pairs) sitting in the same (ground) state. This can indeed be a macroscopic number N of them, but they just form a huge product state (of tiny superpositions or entangled states) and not one huge entangled state or macroscopic superposition.

      You can see a macroscopic BEC precisely because N is so huge (N≈N-1) that it does not matter when one particle is measured. (By the way this defines the chemical potential μ=dE/dN in the limit.)
      If it were a single huge entangled or superposed state, then one measurement would collapse the whole state, and this is not what happens.
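The product-versus-entangled distinction drawn above can be made concrete for three qubits: measuring one qubit of a product state leaves the rest untouched, while measuring one qubit of a GHZ state collapses all of them. A minimal numpy sketch (the three-qubit size, with the GHZ state standing in for a "macroscopic superposition", is an illustrative assumption):

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)

# Product state: each qubit separately in (|0> + |1>)/sqrt(2).
product = np.kron(np.kron(plus, plus), plus)   # 8 amplitudes

# "Macroscopic superposition": GHZ state (|000> + |111>)/sqrt(2).
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def measure_first_qubit(state, outcome):
    """Project qubit 1 onto |outcome> and renormalize the remaining two."""
    state = state.reshape(2, 4)[outcome]
    return state / np.linalg.norm(state)

# Measuring one qubit of the product state leaves the others unchanged ...
rest = measure_first_qubit(product, 0)
print(np.allclose(rest, np.kron(plus, plus)))   # True

# ... but measuring one qubit of the GHZ state collapses all of them.
rest = measure_first_qubit(ghz, 0)
print(np.allclose(rest, [1, 0, 0, 0]))          # True: the others are now |00>
```

This is the operational sense in which a huge product state of tiny superpositions is not one huge superposition.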

    3. Reimond,

      I doubt my pitiful cases would meet your challenge, but I am much more interested in learning from you how "a huge product state (of tiny superpositions or entangled states)" differs from "one huge entangled state or macroscopic superposition".

      Obviously my own study into entanglement is lacking, and if you could help remedy that, I would be very thankful.

      Technically, I subscribe to decoherence (but not MWI) so that I would not speak of collapsing whole states. I am not sure how we would experimentally determine the difference between a macroscopic entangled state being observed in part, v.s. huge product state being observed in part.

      If you were curious, I was originally thinking of things like polaritons, cavity QED, superconducting rings or superfluids i.e. your BEC, and even something as simple as having a double-slit photon over an entire wall---for the short time between the photon reaching the wall and the atoms in the wall decohering the single result out, there is a short timeframe whereby the universal wavefunction is a superposition of many different single-atom-absorption states. You need the superposition there only because only one atom actually gets to absorb.

      Needless to say, I am aware that this last argument is weak, and that better experiments that can give unambiguous results that we are observing macroscopic entangled states would be far better.

      But at least I know I am not talking pure nonsense. Anderson's More is Different paper stated that the ammonia molecule tends to be entangled, but before the 100-atom mark, the entanglement tends to get washed out, so if you have something like, say, a GRW idea, you need to make it collapse quite frequently. Yet, you already mentioned that 2000 atoms could be put into superposition, so the constraints on GRW are contradictory. That is sufficient for my initial assertion to be correct, even if it is possibly still not enough to rule out Penrose's gravitational collapse entirely.

    4. B.F,

      the difference between product and entangled states is explained here.
      For SQUID, BEC and “The meaning of the wave function” please refer to chap. 21-4, 5, ... in here.

    5. Reimond,

      That was super underwhelming. I am currently doing some quantum info so I do know about the basics regarding entanglement. I thought you would bring up something about entanglement witness, or some measure. Your link didn't work; Google covers the part.

      Also, Feynman lectures? I read that long ago as an undergrad.

      I am not sure why superconductivity isn't considered entangled. As in, sure, Feynman's argument about how light is such that its "Schroedinger wavefunction, the vector potential A" is observable is because they are "non-interacting Bosons" at least kind of makes sense. I worry about how they actually ought to have 4th order and higher interaction terms, but I'll give him that.

      But for BCS, the Cooper pairs are literally entangled across space. I get that you call that a "huge product state of tiny entanglements", but entanglement is still an important property of the entire system. Not to mention that it really is many electrons and phonons interacting to produce this effect.

      I'll take some time to think more about how a huge product state of tiny entanglements is not entangled enough to be a huge entangled state. I mean, just writing it out, I am already inclined to agree with you.

      But do spend some time about my examples above. Not all of them require this.

  41. The math/physics of the Other World Interpretation is far, far beyond most of us -- and especially me. Assuming that some form of OWI is "true", then from a purely nuts-and-bolts perspective, has anyone respected in physics theorized what mechanism can instantaneously generate infinite amounts of mass/energy an infinite number of times each nanosecond, and has done so for at least 13.8 billion years' worth of quantum events?

  42. Recording, memorizing, and responding from that memory is the fundamental design pattern of nature, and it plays out at every level, starting from the quantum level. In understanding the mind and how it works, we get an insight into not only evolution but also how nature herself works. "Thought is the response of memory. If you had no memory, no knowledge, you cannot think." J Krishnamurti. We now have experimental evidence, obtained while demonstrating the "recording of copies" posited by Quantum Darwinism, which tells us that, yes, recording is going on at the quantum level. This implies that the electron or photon in the double slit experiment, during the act of measurement or detection, must "record" its interaction with the detector apparatus. If we can experimentally demonstrate such a recording, then we can easily show that the electron or photon is programmed by the act of measurement, and therefore takes on a specific state and acquires a well-emphasized particulate nature.

  43. I suppose these are obvious questions. Why doesn't creating all these new universes violate the conservation of matter and energy? Also, if these alternate realities are all around us, then why can't we observe them? They should also have a classical aspect because of the correspondence principle. The people living in these alternate realities are observing things around them. So how can all this extra matter be squeezed into the same space we are in? It can't remain as waves, because then they would not observe their own realities.

    People might not call them alternate realities, but why should these branching probabilities not experience a reality of their own? For example, if Sabine in another reality did not fix her Premiere problem, then why can't she see this reality around her where it was fixed? This branching has to be local because of the speed of light, so it can't be anywhere else.

  44. Very good video, Sabine.

    Could you please make another one about Superdeterminism? I've tried to find information about it on the internet but there is so little to watch or read.

    Thank you!!

  45. "then is incompatible with reductionism. It is possible that this is correct, but then you have to explain just where reductionism breaks down and why, which no one has done."

    Humpty Dumpty sat on a wall
    Humpty Dumpty had a great fall
    All the King's horses and all the King's men
    Couldn't put Humpty together again.

    The second law of thermodynamics is incompatible with reductionism.

    1. No, it's not. The second law of thermodynamics is derivable by way of statistical mechanics from the underlying microscopic laws.

    2. In the twentieth century, the seemingly insoluble nature of the foundational problems of theoretical physics has lent credibility to the paradigm that a sufficiently insane theory should be the solution. A regrettable new myth.
      For example, a theory that gets rid of reductionism, or of the uniqueness of our reality, or even of classical logic (and then we could say everything and its opposite).

    3. Sabine wrote:

      “No, it's not. The second law of thermodynamics is derivable by way of statistical mechanics from the underlying microscopic laws.“

      I am sure that you are fully aware that for a microscopic system with a unitary time evolution, the entropy is constant.

      The point is that entropy is a measure of how much we know about the system, it depends on our perspective. I just published a blog post explaining this in the context of quantum mechanics. It seems that when I try to put a link, my comments get filtered, so just google “Goldilocksism” to find it.

    4. Udi,

      Thank you, I know what entropy is.

  46. "Update probability at measurement to 100%. The detector definition in many worlds says: The “Detector” is by definition only the thing in one branch. Now evaluate probabilities relative to this, which gives you 100% in each branch. Same thing."
    Sabine, obviously "same thing" regarding the probability to 100%. But isn't this just trivial? The important thing as I understand it is that the state of the detector is different. MWI claims continued superposition and thus unitarity holds.
    I may have missed your key point.

  47. Hello Sabine,

    you made a very clear statement, that Copenhagen is incompatible with reductionism. I tend to buy this immediately. On the other hand, we have nothing convincing going beyond Copenhagen.

    So far, the measurement problem is not understood at all. In my view, it is quite hopeless heading for a TOE, before we really understand, what's going on there.

  48. Sabine,

    in your video, you are explaining your problem with accepting MWI as a physical theory.

    In an earlier comment regarding your blog post, Thomas Lindgren mentioned a Forbes article by Chad Orzel. He writes about the bookkeeping problem with MWI. It seems to me that his arguments are quite similar to yours. Am I right with this assumption?

    Taking him seriously, his arguments also reveal some limitations of the reductionist approach.

    Perhaps you have not read the article before. So I will post the link here again for your convenience.

    1. No, I do not in this video explain why MWI is not a physical theory. I explain why, in contrast to what is often stated, it does not solve the measurement problem. This does not make it unphysical.

    2. Sorry, of course you are right. My statement was not exact enough! Anyway, you did not answer my question!

    3. Sabine,

      it is not your explanation in the video that makes MWI unphysical. It's the lack of measurable consequences that makes it unphysical. It's the same with the assumption of free will. It seems to make no difference whether the basic physical theory is of probabilistic, deterministic, super-deterministic or whatever nature. Nobody can explain so far what the observable consequences of applying these different assumptions would be. It seems to me that "freedom of will" does not depend too much on the very details considered within fundamental physics.

  49. Sabine said "If you believe in the Copenhagen interpretation you have to buy that what the detector does just cannot be derived from the behavior of its microscopic constituents. Because if you could do that, you would not need a second equation besides the Schrödinger equation. That you need this second equation, then is incompatible with reductionism. It is possible that this is correct, but then you have to explain just where reductionism breaks down and why, which no one has done. And without that, the Copenhagen interpretation and its cousins do not solve the measurement problem, they simply refuse to acknowledge that the problem exists in the first place."

    But the 2nd equation you are talking about is simply P = |a|²; this is of course a non-linear operation where you are losing the phase.

    In any quantum experiment you have 2 parts:
    1 a linear part where an observable M operate.
    if |Ψ>=1/√2.(|0⟩+|1⟩) then M|Ψ>=1/√2.(M|0⟩+M|1⟩)

    2 the detection part
    Say for the sake of simplicity that |0⟩, |1⟩ are eigenstates of M with eigenvalues m0 & m1, and |Ψ⟩ = 1/√2 [exp(jφ0)|0⟩ + exp(jφ1)|1⟩].
    In this case, detection gives the signal power shared over the 2 outputs as |⟨0|Ψ⟩|² = 1/2, associated with the measure m0, and |⟨1|Ψ⟩|² = 1/2, associated with the measure m1.
    Then the mean value of the measure is ⟨M⟩ = (m0+m1)/2; the Copenhagen interpretation is rather consistent.
    In the meantime you are able to evaluate the gain of information from the a priori model to the a posteriori measurement, in this case either m0 or m1. The gain of info in this case is log(2), or 1 bit.

    2 remarks:
    First, I agree of course that detection is a non-linear process: you cannot retrieve the initial state |Ψ⟩; you have obviously lost the phases of the signals. Since you are doing detection it's irreversible. Everything is okay!
    But let's go into detail: the detection process provides real detection bips 0/1 that can be stored for further processing, leading to a measure with physical units. The detection process is the sole physical means of access to the reality of a quantum experiment.
    State vectors don't have any physical units; they can't be stored on computers for further processing without detection; they have no reality by themselves, so it is irrelevant to talk of information associated with the state itself.
    Only a measure following detection carries physical information.

    BTW, here is the major flaw of the Bell rationale leading to the supposed Bell paradox.
    In the Bell rationale it is supposed that there are detections ±1, bits of reality, where there are none.

    Second, these detection bips 0/1 depend of course on real microscopic events whose a priori probabilities are given by the components of the state vector, which is an a priori mathematical model of the reality.
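The worked numbers above can be reproduced directly: equal-modulus amplitudes with arbitrary phases give |⟨0|Ψ⟩|² = |⟨1|Ψ⟩|² = 1/2, a mean of (m0+m1)/2, and one bit of information gained. A minimal sketch (the phases and the eigenvalues m0, m1 are arbitrary illustrative choices):

```python
import numpy as np

phi0, phi1 = 0.4, 1.9          # arbitrary phases (they drop out on detection)
psi = np.array([np.exp(1j * phi0), np.exp(1j * phi1)]) / np.sqrt(2)

p0 = abs(psi[0]) ** 2          # |<0|psi>|^2
p1 = abs(psi[1]) ** 2          # |<1|psi>|^2

m0, m1 = -1.0, 1.0             # illustrative eigenvalues of the observable M
mean = p0 * m0 + p1 * m1       # expectation value, here (m0 + m1)/2

info = -(p0 * np.log2(p0) + p1 * np.log2(p1))  # Shannon information in bits

print(p0, p1)    # 0.5 0.5 -- the phases are gone
print(mean)      # (m0 + m1)/2
print(info)      # 1.0 bit
```

Note that `p0` and `p1` are the same for every choice of phases, which is the non-linear, irreversible phase loss described above.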

    1. Hi Fred,

      "That you need this second equation, then is incompatible with reductionism. It is possible that this is correct, but then you have to explain just where reductionism breaks down and why, which no one has done"

      If someone were able to provide unquestionable evidence that reductionism breaks down, he would surely be a candidate for the upcoming Nobel Prize.

      I am not expecting that Sabine would publish this knowledge in one of her blog posts!

    2. @Fred Harmand: You have some of this right. The measurement apparatus couples or entangles with the system. We may take this a bit further and look at the two slit experiment. We have two slits aligned vertically at y = 0 and y = d. A quantum wave approaches this along the x direction. The wave has two eigenstates for entering the slit at y = 0 and y = d which we write as

      ψ = A(e^{ikx} + e^{ikx'})

      for x' = √(x^2 + d^2). Now compute the modulus square of this wave to find the probabilities for the particle in the x slit or the x' slit:

      ψ^*ψ = A^2(2 + e^{ik(x - x')} + e^{ik(x' - x)}) = 2A^2(1 + cos(k(x - x'))).

      This result gives a wave pattern, from the cross term in the multiplication, which in an ensemble of experiments is observed.

      Now consider a spin state at one of the slits, say the x slit, which is done to try to find which slit the particle really went through. These two states are given by the eigenvalues of the σ_z matrix with the states represented as |+) and |-). These states are orthogonal so (+|-) = (-|+) = 0. These states become entangled with the two-slit wave function so that

      ψ → A(e^{ikx}|+) + e^{ikx'}|-))

      Now if we compute the modulus square we get

      ψ^*ψ = 1 = 2A^2,

      and the cross term disappears because of the orthogonality of the spin or needle states. This is pretty much what is expected and is what experimentally is found.

      What the MWI maven is going to say is that the world split into two worlds according to the |±) needle states. In both of these split worlds the observer witnesses the periodic structure in an ensemble of experiments disappear when she tries to measure where the particle is. The CI upholder will say instead that there is this entanglement, and if the needle state is entangled with other states for larger systems, then the quantum state of the world is absorbed into an entanglement of an einselected state that is stable. In other words the classical world is an entanglement.

      Which is correct? It is not really easy to say. Laser coherent states are classical-like states with a symplectic structure. So the MWI panegyric will say there is this underlying set of over-complete coherent states that defines the classical world, though it is still quantum mechanical. The CI defender will say the classical world is like any entangled state, where the underlying quantum numbers effectively do not exist. For two spin states entangled in a Bell state, the degrees of freedom of the constituents are replaced by those of the entangled state --- the spins no longer really exist! So the CI side would say the classical world emerges, it emerges from the einselected stable quantum states, and the quantum-ness of the system no longer exists, or at least cannot be observed.

      Much to think about.
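The disappearance of the cross term derived above can be verified numerically: compute |ψ|² for the bare two-slit superposition versus the state entangled with orthogonal needle states. A sketch assuming arbitrary illustrative values k = 5, d = 1 and treating the |±) components as orthogonal channels whose probabilities add:

```python
import numpy as np

k, d = 5.0, 1.0
x = np.linspace(0.1, 10, 500)
xp = np.sqrt(x**2 + d**2)      # path length x' through the second slit

A = 1 / np.sqrt(2)

# Without which-path marking: the cross term survives -> interference fringes,
# analytically 2A^2 (1 + cos(k(x - x'))).
intensity_plain = np.abs(A * (np.exp(1j*k*x) + np.exp(1j*k*xp)))**2

# With orthogonal needle states: psi = A(e^{ikx}|+) + e^{ikx'}|-)).
# The modulus square is a sum over the orthogonal spin components,
# so the cross term cancels and the pattern is flat.
up = A * np.exp(1j*k*x)        # component along |+)
down = A * np.exp(1j*k*xp)     # component along |-)
intensity_marked = np.abs(up)**2 + np.abs(down)**2

print(intensity_plain.std() > 0.1)          # True: visible fringes
print(np.allclose(intensity_marked, 1.0))   # True: the fringes are gone
```

Both interpretations in the comment above agree on this arithmetic; they differ only on what the surviving flat pattern means.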

  50. What I think is most salient about what Sabine is trying to say is that with a measurement there is some sort of nonlinearity that sets into the system. Classical nonlinear systems can exhibit chaotic dynamics. For a large measurement system with many quantum numbers or atoms, say a mole of them, the deBroglie wavelength is λ = h/p, where in a relativistic setting p_0 = mc and it is clear the wavelength is nearly zero. This means the frequency with νλ = c is ν → ∞. So this happens on some time scale that is much shorter than the frequencies of the system.

    For large quantum numbers the system is thought to converge to a classical system as N → ∞. Let me write the Schrodinger equation as

    iψ_t = Hψ.

    The wave function though is a sequence of perturbed functions I write as

    Ψ = ψ + sum_{n=1}^∞ε^nφ_n

    I am thinking of this as a singular perturbation. The reason is that wave function collapses are almost instantaneous. They tend to occur on time scales far smaller than the frequencies of the system. I will also consider the Hamiltonian as having a perturbing part so that H → H + εK. Also consider the time evolution as ∂_t → ∂_t + εδ/δt to account for the two time scales, or the two conjugate energy scales and their evolutions. Let me try an example where I just consider n = 1 in this series and let φ_1 = |ψ|^2. I then get two differential equations

    iψ_t = Hψ: O(1)

    iψ*ψ_t + iψ*_tψ + iδψ/δt = Kψ + H|ψ|^2: O(ε).

    By conservation of probability iψ*ψ_t + iψ*_tψ = 0 and we are left with a nonlinear differential wave equation. It is not hard to see this O(ε) equation bears some similarities to the logistic equation of chaos theory. This would say the quantum wave on a longer time scale has this tiny perturbed part that obeys chaotic dynamics. With a little more creativity this can be made into the nonlinear Schrödinger equation. That is a soliton equation. This would mean we have a quantum wave on one time scale that is perturbed by a small soliton wave on a much shorter time scale. By playing with different singular perturbation models it is possible to have various models of a quantum system perturbed by a set of short time scale perturbations.

    For many nonlinear systems there is a violation of unitarity as well. Zurek makes this point in his paper arXiv:1807.02092 [quant-ph] (5 Jul 2018). The einselected states are those that are stable under these singular perturbations on a tiny time scale. Other states are not stable, and they result in this collapse. We may think of this as a lowering of the entropy of a system in the case where such a collapse results in the emission of a mixed-state boson, which dumps entropy into the environment.

    I tend to think there are connections between gravitation and wave function collapse. The similarity between the nonconservation of Tr(ρ^2) in decoherence and in Hawking radiation has always made me suspect a connection. We can perform a Cavendish experiment on masses on the kilogram scale, and most measurement apparatus are on the scale of grams on up. So it is not unreasonable to say that some superposition of spacetimes is established in a measurement, one that is nonlinear and not quantum mechanically stable. It is not an einselected state that is stable against environmental perturbation or quantum noise. This quantum noise then has nonlinearities, which should not be too surprising with gravitation, that abruptly adjust the wave function. To carry this further, I would argue that if we have conservation of qubits there is some gravitational response. This might be in the form of gravitons or very weak gravity waves. If a needle state in a measurement apparatus has a growing superposition of spacetime metrics that is not einselected or stable, the collapse should then produce gravitons.

  51. Given a wavefunction, how does one read off the many worlds from it?

    E.g., no one would read two worlds in the wave function of a single particle - a |spin up> + b |spin down>.

    But we are supposed to detect two worlds in

    a |spin up> |detector indicates up> + b |spin down> |detector indicates down>.

    General challenge to MWIers: given a many-particle wave function, count how many worlds it represents.

    1. Note that since time evolution can be interpreted as a change of basis of the present state, all the future worlds will also be present in that state.

    2. The number of worlds equals the number of eigenstates that are in a superposition. So if you have spin up/down states in a superposition ψ = (1/√2)(|up> + |down>), the entire world splits into two upon a measurement. The probability for each of these is p_up = ½ and p_dn = ½, and |ψ|^2 = ½ + ½ = 1.

      Now there is some confusion people have that if the world splits there is a violation of conservation of mass and energy. However, globally each of those two branches has a ½ probability, so from this "bird's eye" perspective there is no violation. From the perspective of the "frog's eye" that is carried along one of these branches, in a sort of Hilbert space frame dragging, the probability is reset to unity.

      That gets to the issue Sabine raises on whether MWI really solves the measurement problem. It does not tell us how an observer is quantum frame dragged along one branch, or at least that is the phenomenon observed.

      Within the observable universe there are some 10^{80} elementary particles. There are maybe around 10^{20} wave function collapses occurring for each of these per second. This is certainly the case for particles inside stars and other thermal bodies. I can't say much about dark matter. So this means the observable universe may split 10^{100} times every second on the Hubble frame. The observable universe may account for only 1 in 10,000 or so of the universe out to what is causally accessible, at a redshift reaching back to the Planck scale. Now consider the multiverse prospect.

      MWI is nifty in some ways, but I have never been compelled to "drink the MWI Koolaid."

    3. First- so the claim is that the initial a |spin up> + b |spin down> particle already exists in two worlds?

      Second- if I measure spin, the world splits into two, but if I measure momentum, the world splits into a continuous infinity of worlds? "The number of worlds equals the number of eigenstates that are in a superposition" cannot be right.

    4. Lawrence Crowell writes "The number of worlds equals the number of eigenstates that are in a superposition."

      If so:
      The dimension of the Hilbert space describing the universe, and hence the number of eigenvectors/eigenstates, does not change. So this MWI branching is an illusion: the number of worlds cannot change and is given by the initial wave function of the universe.
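
      The branch-counting dispute above can be made concrete with a small numerical sketch (an illustration only, not any MWI proponent's official definition of "world"; numpy and the `world_count` helper are my own assumptions): counting nonzero-amplitude components gives a different answer depending on the basis in which the same state is expanded.

```python
import numpy as np

# A qubit in the z-basis: psi = a|up> + b|down>
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi_z = np.array([a, b])

# Naive "world count": number of components with nonzero squared amplitude.
def world_count(amplitudes, tol=1e-12):
    return int(np.sum(np.abs(amplitudes) ** 2 > tol))

print(world_count(psi_z))  # 2 components in the z-basis

# The same state rewritten in the x-basis (|+>, |->) via the Hadamard matrix
# is a single eigenstate, so the naive count drops to one.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi_x = H @ psi_z
print(world_count(psi_x))  # 1 component in the x-basis
```

      The same wave function thus yields two "worlds" or one depending on the chosen basis, which is exactly the point of the objection.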

  52. I agree that to make predictions in MWI one needs something more than Schrödinger's equation, but not that much more.

    Namely, it follows from MWI that the Born rule will be satisfied in "most" branches of the multiverse, where "most" means "outside of an exceptional set with vanishingly small sum of squared amplitudes".
    To extract a prediction from this, one has to disregard this tiny exceptional corner of the universal wavefunction. Yes, it is an additional assumption, but it seems rather benign and natural.

    1. The assumption one needs to make here is that measuring the eigenstate of an observable will yield that eigenstate with certainty. This special case of the Born rule implies the general case. Disregarding the "exceptional corner" shouldn't be a problem, as probabilities only become rigorously defined by the normalized frequencies in the limit of an infinite number of measurements, and in that limit the probability of deviations from the Born rule goes to zero.

    2. Pascal wrote:

      “one has to disregard this tiny exceptional corner of the universal wavefunction. Yes, it is an additional assumption, but it seems rather benign and natural.”

      There is no need to make any such assumption in MWI, and luckily so. Such an assumption would be far from benign. I am not aware of any way to introduce such a cut-off in a consistent way.
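
      Pascal's "vanishingly small exceptional set" can be illustrated with a toy calculation (a sketch under the usual assumption of repeated independent measurements; the function name is mine): for N repeated measurements of a qubit with Born probability p, each branch is a bit string whose squared amplitude is p^k (1-p)^(N-k) for k "up" outcomes, and the total weight of branches whose observed frequency deviates from p shrinks as N grows.

```python
from math import comb

# Total squared amplitude of branches whose observed frequency k/N
# deviates from the Born probability p by more than eps.
def deviant_weight(N, p=0.5, eps=0.1):
    total = 0.0
    for k in range(N + 1):
        if abs(k / N - p) > eps:
            total += comb(N, k) * p**k * (1 - p)**(N - k)
    return total

for N in (10, 100, 1000):
    print(N, deviant_weight(N))
# The printed weights decrease toward zero as N grows.
```

      Whether discarding that shrinking corner counts as a benign assumption or a disguised Born rule is the point under dispute in this thread.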

  53. Different people in "this world" can also be interpreted as MWI copies that split off a long time ago (around or before the time we were born). The inverse time evolution of the multiverse will lead to merging of copies. Is it then true that two arbitrary people will always merge under an inverse time evolution? This has to be true, because we grew out of a fertilized egg that didn't have a brain. The difference between what any two people are aware of will thus get smaller as we turn the clock back, until it completely vanishes.

    So, we start out with zero awareness and we gradually accumulate information. We thus branch out becoming all the conscious agents in the entire multiverse, including dinosaurs here on Earth, strange aliens in far away galaxies, intelligent AIs and also the persons posting on this blog.

    This then means that different people in "this world" are actually the same persons in different worlds.

  54. Hi Sabine,

    Another (more amusing) problem with the Many Worlds Interpretation is that if we were to take the theory seriously, then we need to face the possibility that our own universe may not have originated 13.8 billion years ago in a Big Bang.

    No, because if the Many Worlds theory is true, then we may owe our existence to a “branching” that might have occurred - (perhaps a mere 10 minutes ago) - due to the quantum events that took place in the methane from a bear farting in the woods in an alternate universe.

    In which case, we are not here as the result of a “Big Bang,” but from a “tiny toot.”

    (I call it – “the tiny-toot theory”)

  55. All good physicists love those linear theories.

    Quantum Mechanics is an explicitly non-linear theory. The Many Worlds Theory only looks at the linear part. So claiming that the MWT is quantum mechanics is false.

  56. Dear Prof. Hossenfelder,

    "Measurement is a non-linear process."

    If there is clear experimental evidence backing up this statement then the MWI cannot explain quantum measurements, agreed.

    But the gedanken-experiment you offer does not seem enough as it might be missing some of the required finer details.

    Thank you for your reply and your patience in trying to communicate your knowledge.

    1. Ripi: Every experiment that results in a detector eigenstate for three different prepared states Psi_1, Psi_2 and Psi_1 + Psi_2 (appropriately normalized) is the evidence you ask for. It's been done millions of times. How can I possibly make this any clearer? Look up any textbook on the measurement postulate. It's a normalized projection operator. It is not linear!
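
      The nonlinearity of the measurement update can be checked in a few lines (a toy sketch; the `collapse` helper is my own construction, numpy assumed): a normalized projection maps each prepared state to a detector eigenstate, and its output for Psi_1 + Psi_2 is not the sum of its outputs for Psi_1 and Psi_2.

```python
import numpy as np

# Measurement postulate as a map: project onto the detector eigenspace,
# then renormalize. The renormalization step breaks linearity.
def collapse(psi, P):
    out = P @ psi
    n = np.linalg.norm(out)
    return out / n if n > 0 else out

P_right = np.diag([1.0, 0.0])           # projector onto |right>
psi_right = np.array([1.0, 0.0])        # goes right with certainty
psi_left = np.array([0.0, 1.0])         # goes left with certainty
superpos = (psi_right + psi_left) / np.sqrt(2)

lhs = collapse(superpos, P_right)
rhs = (collapse(psi_right, P_right) + collapse(psi_left, P_right)) / np.sqrt(2)

print(lhs)                    # [1. 0.]  -- certainty after the collapse
print(rhs)                    # ~[0.707 0.] -- what linearity would demand
print(np.allclose(lhs, rhs))  # False
```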

  57. Does this make any sense?

  58. "Pmer,

    The idea behind MWI is that there is one Universal WaveFunction (Psi) which follows the Schrodinger equation. The many worlds arise because Psi gives alternative possible outcomes, which are sometimes measured. In order not to introduce some theory as to why a specific measurement has occurred, the MWI just says that all Psi-possible outcomes have been measured. Thus Psi (and Schrodinger) live in the Multiverse and not in any Universe."

    I don't think I quite believe that. What if you have two different universes (worlds) that are evolving in completely different--and incompatible?--bases?

  59. Hi Sabine
    I cannot agree with your conclusion about the MWI. As far as I know, the interpretation has been revitalized in the last 20 years and rendered more "plausible":
    1. The wavefunction evolving according to the Schrödinger equation is ontologically real and so are the splitting worlds (PBR Theorem)
    2. Branching or splitting into parallel worlds plus observers is caused by irreversible decoherence. The decoherence approach enters the global picture and there is objectively no collapse.
    3. Since we as observer experience only one specific world or reality out of many, the wave function collapse on our branch can therefore only be a subjective illusion.
    Moreover, it seems that the MWI can be made compatible with the Born rule. Sean Carroll and Charles Sebens have introduced self-locating uncertainty (SLU) before any measurement in any branch is done. Carroll calls the revised theory "Everettian Quantum Mechanics" (EQM).
    However, MWI may not entirely solve the measurement problem for a specific observer, but evades it due to the construction of the theory.
    The real problems of MWI are: apparently no freedom of choice of measurement and no free will (but this could be countered by incorporating the anthropic selection principle).
    Feynman once said that MWI was the only way he could think to resolve difficulties like the measurement problem in quantum mechanics, but that he didn't like it. This is my opinion, too.

    1. rhkail,

      Look, if you cannot find a fault in my argument, you have to accept the conclusion. That's how science works. You can't just come here and disagree with the conclusion without even looking at my argument, that doesn't make any sense whatsoever.

    2. Sabine,
      We could make that more precise: "That's how GOOD science works." Because a big part of speculative physics doesn't seem to obey this basic principle.

  60. "But first, a brief summary of what the many worlds interpretation says." She lost me after that.

  61. Interestingly, if dogs can always catch a frisbee then what is time to a dog? What is time to a mite? By relativity both the times are true. Time to a dog is as much "real" as time to a mite or a human. How can the same movement present two different times, that is, two different realities? Therefore, the dog-biological-program or the mite-biological-program dictates dog-time or mite-time respectively. The program or the observer dictates reality. Then what is actuality? The movement is the actuality, but how it is interpreted is the virtue of the program or the observer, that is, the human-program or the dog-program. There is one actuality, movement, but many realities, and each reality is a description of the movement based on the observer or the biological program. But, the bat-program does not see movement, rather it hears movement. So, what is time to a bat?

    You introduce time when you notice change. You cannot notice change if you did not register or record the previous instant as an image that you use as a reference to look at the present image. The present image when registered or recorded as memory becomes the reference to the future image. In this way, in memory, you have a series of recorded images. When you connect or link these images, in the act of thinking, time is born. If the images match or if the same set of different images repeat, you notice a pattern, if they don't you notice change. When you notice a pattern or change, you introduce time. You also sense an interval between the images when you notice change; if not for this interval, this space between two recorded images you can't tell them apart. This interval or space between two similar images that is recorded as different positions helps discern change as change in position, and therefore you notice movement. Time is in the interval, and time is in the movement as much as time is in the memory or the recording. This interval or space may be different for a dog when compared to a human. The dog-program dictates how the dog senses this interval or space. This sensing in turn dictates what is movement and time to a dog. The program or the observer dictates reality.

  62. I'd appreciate, Sabine, a clear piece about what you see as the flaws in Sean Carroll's reasoning.

  63. The detectors are copies in MWI, and the possibility associated with each copy is unique. MWI starts off with "Up" associated with copy A, and "down" associated with copy B. Now, both Copy A and Copy B are copies of the same thing. That same thing is the observer-program, which means that copy A and copy B are running the same program. But the program can detect only "Up". In the other branch too, it will detect only "Up". So, what happens to "Down" possibility? Is it a hidden variable or does the possibility simply disappear? These are questions the PWT and Copenhagen interpretation ask. We are back to square one.

  64. Dear Prof. Hossenfelder,

    "Look up any textbook on the measurement postulate. It's a normalized projection operator. It is not linear!"

    So your argument goes:

    - The measurement postulate is a non-linear normalized projection operator.
    - In the MWI the evolution is always linear.
    - Therefore there is a problem with the MWI.

    But the whole point of the MWI is to reject the measurement postulate, isn't it?

    Please forgive me if I misunderstood. This is all very confusing; I do not intend to distort your argument.

    Thanks again for your reply

    1. Ripi,

      The reason we have a measurement postulate is that it is necessary to describe what we observe. MWI people claim they can describe what we observe without the measurement postulate. I am pointing out that this only works because they bring in an assumption that's equivalent to the measurement postulate (equivalent in terms of observable consequences), therefore MWI is not any simpler than other interpretations of QM. Which, as I said above, is obvious if you think of it from a purely axiomatic perspective. If you could derive the measurement process from the Schrödinger equation in MWI, you could do it in any interpretation.

    2. In at least some versions of the MWI, the measurement postulate is replaced by assumptions about some kind of measure on observers in the wave function that gives the Born rule.

      You're saying that the MWI isn't any better than the Copenhagen interpretation because they both require additional assumptions. But shouldn't you actually compare these additional assumptions, and decide which one is simpler, more intuitive, or easier to work with? Comparing the number of additional assumptions (one in each case) and concluding that they are equally good interpretations is not a logically sound procedure.

      Look at the Bohm pilot-wave interpretation. I would claim that the additional mechanism is much more unwieldy and difficult to calculate with than the Copenhagen, and thus that, at least for the purpose of performing calculations, the Copenhagen interpretation is much better than the Bohm pilot-wave interpretation. But couldn't you argue that it also has just one additional assumption, and so is equally simple?

    3. Peter,

      I am saying that since they both use the same number of (logically equivalent) assumptions neither is any simpler than the other, in contrast to what MWI defenders claim.

    4. I'm saying that simply counting assumptions is an incredibly bad measure of simplicity. I won't argue with you about whether MWI is indeed simpler than Copenhagen, but I do strongly disagree with this methodology for measuring "simplicity".

  65. One can make a dead end street longer, wider, higher, paint a marvelous panorama on the wall at the end of it, but it stays a dead end street.
    Thinking in terms of technical specifications instead of thinking in functional specs in the first place, is not the right methodology. It's typical for do-oriented people, not for process-oriented ones.
    Space, seen in terms of technical specs, is l x b x h. But l x b x h don't do anything. What is space in functional terms?

  66. Is the MWI branching layer only spacelike, or what? How does it handle the relativity of simultaneity?

    Where does the effect of a pure guess come from? Interaction from another world?

    The whole concept of MWI is internally contradictory.

    1. You are raising one interesting issue. It is in part an aspect of QM in general. If I make a measurement of an entangled state here, the measured entangled state I find determines the other state somewhere else. However, there is no information communicated along a spatial surface. One way to see this is that the quantum numbers of the entangled state appear in a measurement as another pair of quantum numbers, but this is just a subjective shift. Nothing is communicated; there are no transmissions of qubits or information.

      With the Copenhagen interpretation (CI) and most others, the wave function collapse is due to a local process. I can say with my detector that the event happened “here.” There is still the nonlocality that a superposition or entanglement is reduced everywhere, but the localization of where a particle is lies in the small space of my detector, or a spot on a CCD pixel. With the Many Worlds interpretation (MWI) my device makes the measurement, but where the splitting happens is ambiguous. Nonlocality means the splitting of the wave function happens potentially in all spacetime. This is made rather apparent with the Wheeler delayed choice experiment (WDCE). Here a measurement of quantum waves after they have passed through a double slit collapses the wave to appearing at a slit after the wave has passed through. So this reduction is not just nonlocal in space, but in time as well. With the WDCE there is no way to assign a probability to branches in this splitting, say if the density matrix or probabilities are evolving. This nonlocality is interesting for quantum gravitation, which is a quantum field that, because it involves the dynamics of space, can't have the locality conditions imposed on other quantum fields.

      Then in MWI there is no unique localization of where probability eigen-branching occurs, which in a curious sense means with a spatial interval for branching D → ∞ there is in the dual momentum-energy perspective k → 0 for k the reciprocal length and momentum p = ħk. However, in CI there is a rapid adjustment of a wave function that is tightly localized so D → 0 and k → ∞. From a phenomenological perspective an observer in the MWI setting also appears to localize a wave and the conditions D → 0 and k → ∞ are found. This is seen in all the analysis of resetting the measured wave probability to unity or one and so forth. So MWI does offer something interesting here in the way of a sort of duality involved with how branching occurs with the evolution of probabilities p_i = ψ_i*ψ_i. It is a duality that carries with it the Fourier transform nature of QM. With CI this is not as apparent, though we have wave function reduction that occurs nonlocally. With MWI this involves on a certain level a splitting of spatial surfaces and implies aspects on the quantization of space or spacetime.

      A part of what this is wrapped in is the subjective shift in quantum numbers and measured quantities. I alluded to this in the first paragraph. This has some elements of QBism in it. That is the ultimate ψ-epistemic interpretation, which ultimately places the subject (experimenter or apparatus etc.) as primary, with no objective reality to anything quantum outside of observation. Of course most statements in the world have a relationship between subject and object, or a predicate with reference to an object that becomes a sentence with the inclusion of a subject. QBism, however, has an almost Gödel-numbering aspect to it, for if all that exists is a subject that makes Bayesian updates, then predicates ultimately involve the subject, with subjective outcomes, and the subject is then the predicate as well. This has a sense similar to W. V. O. Quine's “Is false when appended by itself, 'Is false when appended by itself, ''Is false ...” and so on. This odd statement has a self-referential quirkiness, and this leads in some ways into quantum physics being self-referential. A quantum measurement involves quantum states that encode a Gödel numbering of quantum states.

    2. continued due to 4096 character limit:

      CI is ψ-epistemic, though not on the steroids QBism is on. MWI is ψ-ontological and is preferred by physicists who dislike the idea that QM ultimately does not have any reference to something objective “out there” as physical reality. With CI the collapse of wave functions appears to demolish quantum information, but qubit conservation I think can be maintained if we include quantum gravitation and identify nonlocal spacetime with large-N entanglements. MWI does not appear to demolish qubits, but an observer is left with a phenomenology not that different from CI, plus appeals to this global perspective of branching. However, all observations are local. So in the end we can, I think, shift around between quantum interpretations and think of things accordingly. As Penrose said, there are more ideas about quantum foundations than there are physicists, because many of them switch their perspective.

    3. Thank you for your detailed view.

      Is there any measurement-specific entanglement other than dual antipodality?

      I would see it as possible for the world to be divided into two opposing causalities, of which measurement always chooses one. Hence we could preserve full physicality and causal information, but in QM handedness-independent logics...

  67. Hi Sabine
    I'm sorry about the misunderstanding in my previous post. I have looked again at your arguments concerning the measurement postulate and I revised my conclusion as follows. From a purely theoretical standpoint within MWI my comment might be valid. But considering the practicality of any measurement at any detector (i.e., updating the probabilities to 100%), the measurement problem reappears and your statement is correct.

  68. Hello Sabine!
    Pardon me but I notice a problem in your rationale.
    This is very important because, IMHO, this leads to contradiction and moreover to a misinterpretation of the EPR experiment, for example.

    You said: "From the wave-function you can calculate, for example, that a particle which enters a beam-splitter has a 50% chance of going left and a 50% chance of going right. But – and that’s the important point – once you have measured the particle, you know with 100% probability where it is.
    This means that you have to update your probability and with it the wave-function... the wave-function collapse."

    Here is the problem: you are talking about a particle's presence before making any measurement, but a measurement presupposes a detector, and once your wave function has gone through a detector, the wave function (the state) no longer exists; there is nothing left to update, neither a state nor its probability.
    The other way around, if you are talking about updating a state, it means you have not performed any detection, so you cannot speak of probabilities of particle detection; you just have a new state, not even a measurement.

    Here comes the contradiction; you say: "you have a wave-function for a particle that goes right with 100% probability. Then you will measure it right with 100% probability. No mystery here. Likewise, if you have a particle that just goes left, you will measure it left with 100% probability.
    But here’s the thing. If you take a superposition of these two states, you will not get a superposition of probabilities. You will get 100% either on the one side, or on the other."

    The two experiments you quote are completely different, following what I said above.
    In the first, you have a detector on each branch, and it is true that your particle must be either right or left, each with a certain probability. This is the logic of particle detection.
    In the second, you recombine two existing states (not particles) from the two outputs of each branch, left and right, without any detection; it is important that you keep your wave function. Then, downstream, you can go through detectors right and left. This is a completely different experiment, and the associated probabilities of detection will differ accordingly.

    Let me insist that the position of your detector in the wave function processing chain is crucial; talking about particle detection probabilities upstream of a detection process is meaningless, and worse, it leads to contradictions.
    Upstream of the detector there is a linear state model of the experiment, and downstream there is no wave function anymore, only real probabilities of detection of real particles.
    We must conclude that we should take care not to mix concepts of real detected particles with the linear Hilbert state vector model of the experiment (the wave function), or we will face contradictions.
    This is the only way to avoid contradiction and, above all, to have CI compatible with causal relativity in EPR experiments.

  69. Sabine,

    You seem resolutely determined to treat the measurement apparatus and observer as "outside" of the system, rather than treating them all as part of a combined quantum system. You state that the measurement process is nonlinear because a definite outcome is produced, but how could it possibly *appear* otherwise? What region of Hilbert space for the combined system (including observer and measurement apparatus) corresponds to the observer perceiving a non-definite measurement after decoherence has occurred?

    To simplify things, let's replace the human observer with a computer system programmed to respond in some way to the measurement. Follow the evolution of the wave function for the combined system through the measurement and for some time beyond. Once decoherence has occurred, what do you expect it to look like? Is there any point or subspace of the Hilbert space for this combined system that corresponds to the computer registering an indefinite measurement?

    To simplify even further, and make things more concrete, pick any quantum circuit you like using only "classical" gates (e.g. no Hadamard operators, and an input state corresponding to a definite bit pattern produces an output state corresponding to a definite bit pattern). Let this circuit take the measurement as input and produce some output. What are you going to get? A superposition of states each corresponding to a definite measurement and definite output.

    Now let's do the same for a really large, complex quantum circuit that implements an artificial intelligence. Again, the result will be a superposition of states each corresponding to the AI perceiving a definite measurement.
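
    The smallest instance of the circuit argument above can be run explicitly (a sketch with amplitudes chosen by me; numpy assumed): a CNOT is a "classical" gate in the stated sense, and feeding it a measurement qubit in superposition yields a superposition of states, each with a definite measurement and a definite record.

```python
import numpy as np

# CNOT in the basis |00>, |01>, |10>, |11>: flips the second (recorder)
# bit when the first (measured) bit is 1. Definite inputs -> definite outputs.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

a, b = 0.6, 0.8                     # amplitudes with |a|^2 + |b|^2 = 1
qubit = np.array([a, b])            # a|0> + b|1>, the "measurement"
recorder = np.array([1.0, 0.0])     # recorder starts in |0>
state = np.kron(qubit, recorder)    # (a|0> + b|1>) |0>

out = CNOT @ state
print(out)  # [0.6 0.  0.  0.8]  ->  a|00> + b|11>
```

    Every nonzero component pairs a definite measurement with a definite record; no component corresponding to an "indefinite" registration appears.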

    1. Kevin,

      No, the opposite is the case. I am saying the detector is made of the same stuff as the prepared state, hence they should be describable in the same way.

  70. "Instead, many worlds people say, every time you make a measurement, the universe splits into several parallel worlds, one for each possible measurement outcome. This universe splitting is also sometimes called branching." This is nonsense! In my understanding, if you have a wavefunction of a system with different probable outcomes upon measurement, you have the branching without (maybe before) measurement. It will be fully deterministic what you will measure in your own branch, but you will not know it beforehand, only a probability. There is no measurement problem in this many worlds theory.

    1. Arcturus,

      The measurement causes decoherence and hence causes the different branches to become noticeably different. It is correct that you also have a large number of different "realities" before and without measurement, but that's not what people normally refer to as "branching". Instead, they use the word to refer to something, vaguely speaking, macroscopic. As I emphasize in my video, that's a matter of definition. I would appreciate it if you paid some more attention to what I say before proclaiming that I am talking nonsense.

      "It will be fully deterministic what you will measure in your own branch, but you will not know it beforehand, only a probability. There is no measurement problem in this many worlds theory"

      That's wrong and clearly demonstrates you didn't understand what I say. Write down a definition for "what you will measure" by using the Schrödinger equation only, and you will hopefully see what I am talking about.

  71. I think a lot of the problem is that we tend to confuse interaction or entanglement with measurement or observation. When we talk about measurement or observation, we are usually talking about classical phenomena with thousands, millions and even more particles and their wavefunctions involved. At the human scale, detecting a photon involves inducing a cascade of effects to produce an electrical signal, whether the flow of electrons in a wire or as part of a chemical cascade along an axon. A single cyanide molecule won't kill a cat. It would take millions of cyanide molecules even to leave a cat short of breath. When that many particles are involved, the statistics aggregate to classical.

    arXiv had a paper on a "Wigner's friend" experiment which examined the statistics of a system with multiple quantum-scale "observers". Preliminary results indicate that quantum Wigner's observations and his little friend's observations do not need to be consistent. Quantum-scale measurement doesn't mean decoherence. It just means entanglement. It's only when too much stuff gets entangled that the quantum statistics turn into classical statistics. It's like the way a Poisson distribution, repeatedly aggregated, turns into a Gaussian.
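
    The closing analogy can be checked numerically (a minimal sketch; the grid and mean are arbitrary choices of mine): a sum of many i.i.d. Poisson variables is again Poisson with a large mean, and its pmf is then close to a Gaussian with matching mean and variance.

```python
import math

# Poisson pmf computed via logs to avoid overflow for large means.
def poisson_pmf(k, mu):
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

# Gaussian pdf with mean mu and variance mu (matching the Poisson).
def gauss_pdf(x, mu):
    return math.exp(-(x - mu) ** 2 / (2 * mu)) / math.sqrt(2 * math.pi * mu)

mu = 1000.0  # e.g. the sum of 1000 Poisson(1) variables
max_err = max(abs(poisson_pmf(k, mu) - gauss_pdf(k, mu))
              for k in range(900, 1101))
print(max_err)  # small compared to the peak value of roughly 0.0126
```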

  72. The Psychological Observer:

    Programmed as a Hindu, the observer believes in reincarnation. Programmed as a Christian, the observer believes in resurrection. The observer is the Hindu-Program or the Christian-Program. Belief is the product of the program. If it is the Hindu-Program, then the belief is reincarnation, and if it is the Christian-Program, then the belief is resurrection. How do we know? Replace the Hindu-Program with the Christian-Program--conversion--and the Hindu, now reprogrammed as a Christian, believes in resurrection. The replacement betrays the underlying program. The program is the observer. With the Hindu-Program running, how do I look at a Muslim? The Muslim looks offensive or suspicious. Replace the Hindu-Program with the Muslim-Program, and the Hindu, now reprogrammed as a Muslim, develops a great affinity for the Muslim. Again, the replacement betrays the underlying program, which influences how I look at a human being, that is, whether I take offense at or a great liking to a human being. Clearly, the observer is influencing the reaction or the measurement, which is how I look at, discriminate or classify a human being. The offense or the attraction is in the observer or the program and not out there in the human being. The observer is the offense or the attraction. The observer is the observed, because if the program stalls or stops running, the offense or the attraction disappears, right? Then there is only the human being, not the Hindu or the Christian or the Muslim.

    How does the observer come into being? The moment one is born, one is conditioned or programmed by culture, tradition, orthodoxy, religion, family, caste, nationality, political ideology, linguistic patriotism, superstitions, rituals, dogmas, male chauvinism, feminism etc. The observer is the result of this conditioning or programming. And the observer in turn influences one's likes, dislikes, hate, offense, attraction, affinity, attitude, outlook, worldview, how one sees, how one listens, how one feels etc. The observer comes into being or is born out of conditioning or programming. Brainwashing or indoctrination by religion or political parties or communism or any other ism including patriotism is conditioning or programming. We all know how indoctrination terribly influences the world view of a terrorist and prepares him to kill or get killed.

    After all, Hinduism is a program as much as Christianity and Islam are. After all, political or linguistic affinity, fanaticism, affiliation or identity is as much a program as religion and caste are. After all, patriarchy and male chauvinism are as much a program as feminism is.

    Now how do you receive these statements or listen to these statements? Is there the influence of the observer? Is the program running? Observation is when the observer is not. When there is observation then there is an opportunity to see what is as it is. Observation does not guarantee discovery but there can be no discovery without observation.


