Friday, February 03, 2017

Testing Quantum Foundations With Atomic Clocks

Funky clock at Aachen University.
Nobel laureate Steven Weinberg has recently drawn attention by disliking quantum mechanics. Besides an article for The New York Review of Books and a public lecture to bemoan how unsatisfactory the current situation is, he has, however, also written a technical paper:
    Lindblad Decoherence in Atomic Clocks
    Steven Weinberg
    Phys. Rev. A 94, 042117 (2016)
    arXiv:1610.02537 [quant-ph]
In this paper, Weinberg studies the use of atomic clocks for precision tests of quantum mechanics – specifically, to search for an unexpected, omnipresent decoherence.

Decoherence is the process that destroys quantum-ness. It happens constantly and everywhere. Each time a quantum state interacts with an environment – air, light, neutrinos, what have you – it becomes a little less quantum.

This type of decoherence explains why, in everyday life, we don’t see quantum-typical behavior, like cats being both dead and alive and similar nonsense. Trouble is, decoherence takes place only if you consider the environment a source of noise whose exact behavior is unknown. If you look at the combined system of the quantum state plus environment, that still doesn’t decohere. So how come on large scales our world is distinctly un-quantum?

It seems that besides this usual decoherence, quantum mechanics must do something else, namely explain the measurement process. Decoherence merely converts a quantum state into a probabilistic (“mixed”) state. But upon measurement, this probabilistic state must suddenly change to reflect that, after observation, the state is in the measured configuration with 100% certainty. This update is also sometimes referred to as the “collapse” of the wave-function.
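In formulas, for a single qubit – a minimal illustration of the distinction, not tied to any particular model:

    |\psi\rangle = \alpha|0\rangle + \beta|1\rangle
      \;\xrightarrow{\text{decoherence}}\;
    \rho = |\alpha|^2\,|0\rangle\langle 0| + |\beta|^2\,|1\rangle\langle 1|
      \;\xrightarrow{\text{measure, find }0}\;
    |0\rangle\langle 0|

Decoherence gets you from the superposition to the probabilistic mixture in the middle; the final update is the extra step that a measurement requires.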

Whether or not decoherence solves the measurement problem then depends on your favorite interpretation of quantum mechanics. If you don’t think the wave-function, which describes the quantum state, is real but merely encodes information, then decoherence does the trick. If you do, in contrast, think the wave-function is real, then decoherence doesn’t help you understand what happens in a measurement because you still have to update probabilities.

That is so unless you are a fan of the many-worlds interpretation, which simply declares the problem nonexistent by postulating that all possible measurement outcomes are equally real. It just so happens that we find ourselves in only one of these realities. I’m not a fan of many worlds because defining problems away rarely leads to progress. Weinberg finds all the many worlds “distasteful,” which also rarely leads to progress.

What would really solve the problem, however, is some type of fundamental decoherence, an actual collapse prescription basically. It’s not a particularly popular idea, but at least it is an idea, and it’s one that’s worth testing.

What has any of that to do with atomic clocks? Well, atomic clocks work thanks to quantum mechanics, and they work extremely precisely. And so, Weinberg’s idea is to use atomic clocks to look for evidence of fundamental decoherence.

An atomic clock trades the precise measurement of time for the precise measurement of a wavelength, or frequency respectively, which counts oscillations per time. And that is where quantum mechanics comes in handy. A hundred years or so ago, physicists found that the energies of electrons which surround the atomic nucleus can take on only discrete values. This also means the electrons can absorb and emit light only at energies that correspond to the differences between the discrete levels.

Now, as Einstein demonstrated with the photoelectric effect, the energy of light is proportional to its frequency. So, if you find light of a frequency that the atom can absorb, you must have hit one of the differences in energy levels. These differences in energy levels are (at moderate temperatures) properties of the atom and almost insensitive to external disturbances. That’s what makes atomic clocks tick so regularly.
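To give a sense of the numbers, here is a back-of-the-envelope sketch of my own – not taken from Weinberg’s paper – using a rounded textbook value for the energy splitting of the Cesium-133 ground state:

    h  = 6.62607015e-34   # Planck constant in J*s
    eV = 1.602176634e-19  # one electron volt in J

    delta_E = 3.8e-5 * eV     # hyperfine splitting of Cs-133, a rounded value
    frequency = delta_E / h   # transition frequency in Hz, from E = h*f

    print(f"transition frequency ~ {frequency:.2e} Hz")
    # prints roughly 9.2e+09 Hz -- the microwave transition that defines the second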

So, it comes down to measuring atomic transition frequencies. Such a measurement works by tuning a laser until a cloud of atoms (usually Cesium or Rubidium) absorbs most of the light. The absorption indicates that you have hit the transition frequency.

In modern atomic clocks, one employs a two-pulse scheme, known as the Ramsey method. A cloud of atoms is exposed to a first pulse, then left to drift for a second or so, and then comes a second pulse. After that, you measure how many atoms were affected by the pulses, and use a feedback loop to tune the frequency of the light to maximize the number of atoms. (Further reading: “Real Clock Tutorial” by Chad Orzel.)

If, however, some unexpected decoherence happens between the two pulses, then the frequency tuning doesn’t work as well as it does in normal quantum mechanics. And this, so Weinberg’s argument goes, would have been noticed already if such decoherence were relevant for atoms on the timescale of seconds. This way, he obtains constraints on fundamental decoherence. And, as a bonus, he proposes a new way of testing the foundations of quantum mechanics by use of the Ramsey method.
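To see how the argument works, here is a toy version of the Ramsey signal – my own sketch, with a made-up function name and a purely phenomenological damping rate gamma standing in for the hypothetical extra decoherence; it is not Weinberg’s Lindblad calculation:

    import numpy as np

    def ramsey_probability(delta, T, gamma=0.0):
        """Transition probability after two ideal pi/2 pulses.

        delta : detuning of the laser from the atomic transition (rad/s)
        T     : free-evolution time between the two pulses (s)
        gamma : phenomenological decoherence rate during the free evolution (1/s)
        """
        return 0.5 * (1.0 + np.exp(-gamma * T) * np.cos(delta * T))

    T = 1.0                              # one second of free evolution
    for d in np.linspace(-10, 10, 5):    # a few detunings, in rad/s
        p_qm  = ramsey_probability(d, T)              # ordinary quantum mechanics
        p_dec = ramsey_probability(d, T, gamma=0.5)   # with extra decoherence
        print(f"delta = {d:+5.1f}  P_QM = {p_qm:.3f}  P_dec = {p_dec:.3f}")

The feedback loop tunes the detuning to maximize the transition probability; an extra damping of this kind would wash out the fringes, so the excellent performance of real clocks puts a bound on how large gamma can be.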

It’s a neat idea. It strikes me as the kind of paper that comes about as spin-off when thinking about a problem. I find this an interesting work because my biggest frustration with quantum foundations is all the talk about what is or isn’t distasteful about this or that interpretation. For me, the real question is whether quantum mechanics – in whatever interpretation – is fundamental, or whether there is an underlying theory. And if so, how to test that.

As a phenomenologist, you won’t be surprised to hear that I think research on the foundations of quantum mechanics would benefit from more phenomenology. Or, in summary: A little less talk, a little more action please.

Saturday, December 19, 2015

Ask Dr B: Is the multiverse science? Is the multiverse real?

Kay zum Felde asked:
“Is the multiverse science? How can we test it?”
I added “Is the multiverse real” after Google offered it as autocomplete:


Dear Kay,

This is a timely question, one that has been much on my mind in the last years. Some influential theoretical physicists – like Brian Greene, Lenny Susskind, Sean Carroll, and Max Tegmark – argue that the appearance of multiverses in various contemporary theories signals that we have entered a new era of science. This idea however has been met with fierce opposition by others – like George Ellis, Joe Silk, Paul Steinhardt, and Paul Davies – who criticize the lack of testability.

If the multiverse idea is right, and we live in one of many – maybe infinitely many – different universes, then some of our fundamental questions about nature might never be answered with certainty. We might merely be able to make statements about how likely we are to inhabit a universe with some particular laws of nature. Or maybe we cannot even calculate this probability, but just have to accept that some things are as they are, with no possibility to find deeper answers.

What bugs the multiverse opponents most about this explanation – or rather lack of explanation – is that succumbing to the multiverse paradigm feels like admitting defeat in our quest for understanding nature. They seem to be afraid that merely considering the multiverse an option discourages further inquiries, inquiries that might lead to better answers.

I think the multiverse isn’t remotely as radical an idea as it has been portrayed, and that some aspects of it might turn out to be useful. But before I go on, let me first clarify what we are talking about.

What is the multiverse?

The multiverse is a collection of universes, one of which is ours. The other universes might be very different from the one we find ourselves in. There are various types of multiverses that theoretical physicists believe are logical consequences of their theories. The best known ones are:
  • The string theory landscape
    String theory doesn’t uniquely predict which particles, fields, and parameters a universe contains. If one believes that string theory is the final theory, and there is nothing more to say than that, then we have no way to explain why we observe one particular universe. To make the final theory claim consistent with the lack of predictability, one therefore has to accept that any possible universe has the same right to existence as ours. Consequently, we live in a multiverse.

  • Eternal inflation
    In some currently very popular models for the early universe our universe is just a small patch of a larger space. As a result of a quantum fluctuation, the initially rapid expansion – known as “inflation” – slows down in the region around us and galaxies can be formed. But outside our universe inflation continues, and randomly occurring quantum fluctuations go on to spawn off other universes – eternally. If one believes that this theory is correct and that we understand how the quantum vacuum couples to gravity, then, so the argument goes, the other universes are as real as ours.

  • Many worlds interpretation
    In the Copenhagen interpretation of quantum mechanics the act of measurement is ad hoc. It is simply postulated that measurement “collapses” the wave-function from a state with quantum properties (such as being in two places at once) to a distinct state (at only one place). This postulate agrees with all observations, but it is regarded as unappealing by many (including myself). One way to avoid this postulate is to instead posit that the wave-function never collapses. Instead it ‘branches’ into different universes, one for each possible measurement outcome – a whole multiverse of measurement outcomes.

  • The Mathematical Universe
    The Mathematical Universe is Max Tegmark’s brain child, in which he takes the final theory claim to its extreme. Any theory that describes only our universe requires the selection of some mathematics among all possible mathematics. But if a theory is a final theory, there is no way to justify any particular selection, because any selection would require another theory to explain it. And so, the only final theory there can be is one in which all mathematics exists somewhere in the multiverse.
This list might raise the impression that the multiverse is a new finding, but that isn’t so. New is only the interpretation. Since every theory requires observational input to fix parameters or pick axioms, every theory leads to a multiverse. Without sufficient observational input any theory becomes ambiguous – it gives rise to a multiverse.

Take Newtonian gravity: Is there a universe for each value of Newton’s constant? Or General Relativity: Do all solutions to the field equations exist? And Loop Quantum Gravity, much like string theory, has an infinite number of solutions with different parameters – its own multiverse. It’s just that Loop Quantum Gravity never tried to be a theory of everything, so nobody worries about this.

What is new about the multiverse idea is that some physicists are no longer content with having a theory that describes observation. They now have additional requirements for a good theory: for example, that the theory have no ad hoc prescriptions like collapsing wave-functions; no unexplained small or large (or, in fact, any) numbers; and initial conditions that are likely according to some currently accepted probability distribution.

Is the multiverse science?

Science is what describes our observations of nature. But this is the goal and not necessarily the case for each step along the way. And so, taking multiverses seriously, rather than treating them as the mathematical artifact that I think they are, might eventually lead to new insights. The real controversy about the multiverses is how likely it is that new insights will emerge from this approach eventually.

Maybe the best example for how multiverses might become scientific is eternal inflation. It has been argued that the different universes might not be entirely disconnected, but can collide, thereby leaving observable signatures in the cosmic microwave background. Another example for testability comes from Mersini-Houghton and Holman, who have looked into potentially observable consequences of entanglement between different universes. And in a rather mind-bending recent work, Garriga, Vilenkin and Zhang have argued that the multiverse might give rise to a distribution of small black holes in our universe, which also has consequences that could become observable in the future.

As to probability distributions on the string theory landscape, I don’t see any conceptual problem with that. If someone could, based on a few assumptions, come up with a probability measure according to which the universe we observe is the most likely one, that would for me be a valid computation of the standard model parameters. The problem is of course to come up with such a measure.

Similar things could be said about all other multiverses. They don’t presently seem very useful to describe nature. But pursuing the idea might eventually give rise to observable consequences and further insights.

We have known since the dawn of quantum mechanics that it’s wrong to require all mathematical structures of a theory to directly correspond to observables – wave-functions are the best counter example. How willing physicists are to accept non-observable ingredients of a theory as necessary depends on their trust in the theory and on their hope that it might give rise to deeper insights. But there isn’t a priori anything unscientific with a theory that contains elements that are unobservable.

So is the multiverse science? It is an extreme speculation, and opinions differ widely on how promising a route it is to deeper understanding. But speculations are a normal part of theory development, and the multiverse is scientific as long as physicists strive to eventually derive observable consequences.

Is the multiverse real?

The multiverse has some brain-bursting consequences. For example that everything that can happen does happen, and it happens an infinite amount of times. There are thus infinitely many copies of you, somewhere out there, doing their own thing, or doing exactly the same as you. What does that mean? I have no clue. But it makes for an interesting dinner conversation through the second bottle of wine.

Is it real? I think it’s a mistake to think of “being real” as a binary variable, a property that an object either has or has not. Reality has many different layers, and how real we perceive something depends on how immediate our inference of the object from sensory input is.

A dog peeing on your leg has a very simple and direct relation to your sensory input that does not require much decoding. You would almost certainly consider it real. In contrast, evidence for the quark model contained in a large array of data on a screen is a very indirect sensory input that requires a great deal of decoding. How real you consider quarks thus depends on your knowledge of, and trust in, the theory and the data. Or trust in the scientists dealing with the theory and the data, as it were. For most physicists the theory underlying the quark model has proved reliable and accurate to such high precision that they consider quarks as real as the peeing dog.

But the longer the chain of inference, and the less trust you have in the theories used for inference, the less real objects become. In this layered reality the multiverse is currently at the outer fringes. It’s as unreal as something can be without being plain fantasy. For some practitioners who greatly trust their theories, the multiverse might appear almost as real as the universe we observe. But for most of us these theories are wild speculations and consequently we have little trust in this inference.

So is the multiverse real? It is “less real” than everything else physicists have deduced from their theories – so far.

Thursday, June 18, 2015

No, Gravity hasn’t killed Schrödinger’s cat

There is a paper making the rounds which was just published in Nature Physics, but has been on the arXiv for two years:
    Universal decoherence due to gravitational time dilation
    Igor Pikovski, Magdalena Zych, Fabio Costa, Caslav Brukner
    arXiv:1311.1095 [quant-ph]
According to an article in New Scientist the authors have shown that gravitationally induced decoherence solves the Schrödinger’s cat problem, i.e. explains why we never observe cats that are both dead and alive. Had they achieved this, that would be remarkable indeed, because the problem was solved half a century ago. New Scientist also quotes the first author as saying that the effect discussed in the paper induces a “kind of observer.”

New Scientist further tries to make a connection to quantum gravity, even though everyone involved told the journalist it’s got nothing to do with quantum gravity whatsoever. There is also a Nature News article, which is more careful as far as the connection to quantum gravity, or absence thereof, is concerned, but still wants you to believe the authors have shown that “completely isolated objects” can “collapse into one state”, which would contradict quantum mechanics. If that could happen it would be essentially the same as the information loss problem in black hole evaporation.

So what did they actually do in the paper?

It’s a straight-forward calculation which shows that if you have a composite system in thermal equilibrium and you push it into a gravitational field, then the degrees of freedom of the center of mass (com) get entangled with the remaining degrees of freedom (those of the system’s particles relative to the center of mass). The reason for this is that the energies of the particles become dependent on their position in the gravitational field by the standard redshift effect. This means that if the system’s particles had quantum properties, then these quantum properties mix together with the com position, basically.

Now, decoherence normally works as follows. If you have a system (the cat) that is in a quantum state, and you get it in contact with some environment (a heat bath, the cosmic microwave background, any type of measurement apparatus, etc), then the cat becomes entangled with the environment. Since you don’t know the details of the environment, however, you have to remove (“trace out”) its information to see what the cat is doing, which leaves you with a system that now has a classical probability distribution. One says the system has “decohered” because it has lost its quantum properties (or at least some of them, those that are affected by the interaction with the environment).
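Here is what that looks like in practice – a minimal numpy sketch of my own, with one qubit standing in for the cat and one for the environment:

    import numpy as np

    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])

    # system alone in a superposition: off-diagonal (interference) terms present
    psi_sys = (ket0 + ket1) / np.sqrt(2)
    print("isolated system:\n", np.outer(psi_sys, psi_sys.conj()))

    # system entangled with one environment qubit: (|00> + |11>)/sqrt(2)
    psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices: s, e, s', e'
    rho_sys = np.trace(rho, axis1=1, axis2=3)             # trace over the environment
    print("after tracing out the environment:\n", rho_sys)  # diag(0.5, 0.5)

The isolated superposition still has off-diagonal entries; after the entanglement with the environment is traced out they are gone, and what remains is an ordinary 50:50 probability distribution.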

There are three important things to notice about this environmentally induced decoherence. First, the effect happens extremely quickly for macroscopic objects, even for the most feeble of interactions with the environment. This is why we never see cats that are both dead and alive, and also why building a functioning quantum computer is so damned hard. Second, while decoherence provides a reason we don’t see quantum superpositions, it doesn’t solve the measurement problem in the sense that it just results in a probability distribution of possible outcomes. It does not result in any one particular outcome. Third, nothing of that requires an actually conscious observer; that’s an entirely superfluous complication of a quite well understood process.

Back to the new paper then. The authors do not deal with environmentally induced decoherence but with an internal decoherence. There is no environment, there is only a linear gravitational potential; it’s a static external field that doesn’t carry any degrees of freedom. What they show is that if you trace out the particle’s degrees of freedom relative to the com, then the com decoheres. The com motion, essentially, becomes classical. It can no longer be in a superposition once decohered. They calculate the time it takes for this to happen, which depends on the number of particles of the system and its extension.

Why is this effect relevant? Well, if you are trying to measure interference it is relevant, because this relies on the center of mass moving on two different paths – one going through the left slit, the other through the right one. So the decoherence of the center of mass puts a limit on what you can measure in such interference experiments. Alas, the effect is exceedingly tiny, smaller even than the decoherence induced by the cosmic microwave background. In the paper they estimate the time it takes for 10^23 particles to decohere is about 10^-3 seconds. But the number of particles in composite systems that can presently be made to interfere is more like 10^2 or maybe 10^3. For these systems, the decoherence time is roughly 10^7 seconds – that’s about a year. If that was the only decoherence effect for quantum systems, experimentalists would be happy!
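To make the scaling explicit – this is my own estimate, anchored to the two numbers above and assuming the decoherence time falls like one over the square root of the particle number; the full expression in the paper also depends on the temperature, the gravitational acceleration, and the size of the superposition, which are held fixed here:

    import math

    def decoherence_time(N, N_ref=1e23, tau_ref=1e-3):
        """Decoherence time in seconds, scaled as 1/sqrt(N) from the reference point."""
        return tau_ref * math.sqrt(N_ref / N)

    for N in (1e23, 1e6, 1e3, 1e2):
        print(f"N = {N:8.0e}  ->  tau ~ {decoherence_time(N):.1e} s")
    # N = 1e3 gives ~1e7 s, i.e. about a year, as quoted in the text.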

Besides this, the center of mass isn’t the only quantum property of a system, because there are many ways you can bring a system into superpositions that don’t affect the com at all. Any rotation around the com, for example, would do. In fact there are many more degrees of freedom in the system that remain quantum than there are that decohere by the effect discussed in the paper. The system itself doesn’t decohere at all, it’s really just this particular degree of freedom that does. The Nature News feature states that
“But even if physicists could completely isolate a large object in a quantum superposition, according to researchers at the University of Vienna, it would still collapse into one state — on Earth's surface, at least.”
This is just wrong. The object could still have many different states, as long as they share the same center of mass variable. A pure state left in isolation will remain in a pure state.

I think the argument in the paper is basically correct, though I am somewhat confused about the assumption that the thermal distribution doesn’t change if the system is pushed into a gravitational field. One would expect that in this case the temperature also depends on the gradient.

So in summary, it is a nice paper that points out an effect of macroscopic quantum systems in gravitational fields that had not previously been studied. This may become relevant for interferometry of large composite objects at some point. But it is an exceedingly weak effect, and I for sure am very skeptical that it can be measured anytime soon. This effect doesn’t teach us anything about Schrödinger’s cat or the measurement problem that we didn’t know already, and it for sure has nothing to do with quantum gravity.

Science journalists work in funny ways. Even though I am quoted in the New Scientist article, the journalist didn’t bother sending me a link. Instead I got the link from Igor Pikovski, one of the authors of the paper, who wrote to me to apologize for the garble that he was quoted with. He would like to pass on the following clarification:
“To clarify a few quotes used in the article: The effect we describe is not related to quantum gravity in any way, but it is an effect where both, quantum theory and gravitational time dilation, are relevant. It is thus an effect based on the interplay between the two. But it follows from physics as we know it.

In the context of decoherence, the 'observer' are just other degrees of freedom to which the system becomes correlated, but has of course nothing to do with any conscious being. In the scenario that we consider, the center of mass becomes correlated with all the internal constituents. This takes place due to time dilation, which correlates any dynamics to the position in the gravitational field and results in decoherence of the center of mass of the composite system.

For current experiments this effect is very weak. Once superposition experiments can be done with very large and complex systems, this effect may become more relevant. In the end, the simple prediction is that it only depends on how much proper time difference is acquired by the interfering amplitudes of the system. If it's exactly zero, no decoherence takes place, as for example in a perfectly horizontal setup or in space (neglecting special relativistic time dilation). The latter was used as an example in the article. But of course there are other means to make sure the proper time difference is minimized. How hard or easy that will be depends on the experimental techniques. Maybe an easier route to experimentally probe this effect is to probe the underlying Hamiltonian. This could be done by placing clocks in superposition, which we discussed in a paper in 2011. The important point is that these predictions follow from physics as we know, without any modification to quantum theory or relativity. It is thus 'regular' decoherence that follows from gravitational time dilation.”

Sunday, August 31, 2014

The ordinary weirdness of quantum mechanics

Raymond Laflamme's qubit.
Photo: Christina Reed.
I’m just back from our 2014 Workshop for Science Writers, this year on the topic “Quantum Theory”. The meeting was both inspiring and great fun - the lab visit wasn’t as disorganized as last time, the muffins appeared before the breaks and not after, and amazingly enough we had no beamer fails. We even managed to find a video camera, so hopefully you’ll be able to watch the lectures on your own once uploaded, provided I pushed the right buttons.

Due to popular demand, we included a discussion session this year. You know that I’m not exactly a big fan of discussion sessions, but then I didn’t organize this meeting for myself. Michael Schirber volunteered to moderate the discussion. He started by posing the question why quantum mechanics is almost always portrayed as spooky, strange or weird. Why do we continue to do this, and is it beneficial for communicating the science behind the spook?

We could just blame Einstein for this, since he famously complained that quantum mechanics seemed to imply a spooky (“spukhafte”) action at a distance, but that was a century ago and we learned something since. Or some of us anyway.

Stockholm's quantum optics lab.
Photo: Christina Reed.
We could just discard it as headline making, a way to generate interest, but that doesn’t really explain why quantum mechanics is described as weirder or stranger than other new and often surprising effects. How is time dilation in a gravitational field less strange than entanglement? And it’s not that quantum mechanics is particularly difficult either. As Chad pointed out during the discussion, much of quantum mechanics is technically much simpler than general relativity.

We could argue it is due to our daily life being dominated by classical physics, so that quantum effects must appear unintuitive. Intuition however is based on experience and exposure. Spend some time calculating quantum effects, spend some time listening to lectures about quantum mechanics, and you can get that experience. This does not gain you the ability to perceive quantum effects without a suitable measuring device, but that is true for almost everything in science.

The explanation that came up during the discussion that made the most sense to me is that it’s simply a way to replace technical vocabulary, and these placeholders have become vocabulary in their own right.

The spook and the weirdness, they stand in for non-locality and contextuality, they replace correlations and entanglement, pure and mixed states, non-commutativity, error correction, path integrals or post-selection. Unfortunately, all too often the technical vocabulary is entirely absent rather than briefly introduced. This makes it very difficult for interested readers to dig deeper into the topic. It is basically a guarantee that the unintuitive quantum behavior will remain unintuitive for most people. And for the researchers themselves, the lack of technical terms makes it impossible to figure out what is going on. The most common reaction to supposed “quantum weirdness” that I see among my colleagues is “What’s new about this?”

The NYT had a recent opinion piece titled “Why We Love What We Don’t Understand” in which Anna North argued that we like that what isn’t understood because we want to keep the wonder alive:
“Many of us may crave that tug, the thrill of something as-yet-unexplained… We may want to get to the bottom of it, but in another way, we may not — as long as we haven’t quite figured everything out, we can keep the wonder alive.”
This made me think because I recall browsing through my mother’s collection of (the German version of) Scientific American as a teenager, always looking to learn what the scientists, the big brains, did not know. Yeah, it was kinda predictable I would end up in some sort of institution. At least it’s one where I have a key to the doors.

Anyway, I didn’t so much want to keep the mystery alive as I wanted to know where the boundary between knowledge and mystery currently was. Assume for a moment I’m not all that weird but most likely average. Is it surprising then that the headline-grabbing quantum weirdness, instead of helping the reader, misleads them about where this boundary between knowledge and mystery is? Is it surprising then that everybody and their dog has solved some problem with quantum mechanics without knowing what problem?

And is it surprising, as I couldn’t help noticing, that the lecturers at this year’s workshop were all well practiced in forward-defense, and repeatedly emphasized that most of the theory is extremely well understood? It’s just that the focus on new techniques and recent developments highlights exactly that which isn’t (yet) well understood, thereby giving more weight to the still mysterious in the news than it has in practice.

I myself do not mind the attention-grabbing headlines, and that news focuses on what’s new rather than what’s been understood for decades is the nature of the business. As several science writers, at this workshop and also at the previous one, told me, it is often not them who invent the non-technical terms; it is vocabulary that the scientists themselves use to describe their research. I suspect though that the scientists use it trying to adapt their explanations to the technical level they find in the popular science literature. So who is to blame really, and how do we get out of this loop?

A first step might be to stop assuming all other parties are more stupid than one’s own. Most science writers have some degree in science, and they are typically more up to date on what is going on in research than the researchers themselves. The “interested public” is perfectly able to deal with some technical vocabulary as long as it comes with an explanation. And researchers are not generally unwilling or unable to communicate science, they just often have no experience with what the right level of detail is in situations they do not face every day.

When I talk to some journalist, I typically ask them first to tell me roughly what they already know. From their reply I can estimate what background they bring, and then I build on that until I notice I lose them. Maybe that’s not a good procedure, but it’s the best I’ve come up with so far.

We all can benefit from better science communication, and a lot has changed within the last decades. Most notably, there are many more voices to hear now, and these voices aim at very different levels of knowledge. What is still not working very well though is the connection between different levels of technical detail. (Which we previously discussed here.)

At the end of the discussion I had the impression opinions were maximally entangled and pure states might turn into mixed ones. Does that sound strange?

Tuesday, February 18, 2014

A drop makes waves – just like quantum mechanics?

My prof was fond of saying there are no elementary particles, we should really call them “elementary things” - “Elementardinger”. After all the whole point of quantum theory is that there’s no point - there are no classical particles with a position and a momentum, there is only the wave-function. And there is no particle-wave duality either. This unfortunate phrase suggests that the elementary thing is both a particle and a wave, but it is neither: The elementary thing is something else in its own right.

That quantum mechanics is built on mathematical structures which do not correspond to classical objects we can observe in daily life has bugged people ever since quantum mechanics came, saw, and won over the physics departments. Attempts to reformulate quantum mechanics in terms of classical fields or particles go back to the 1920s, to Madelung and de Broglie, and were later continued by Bohm. This alternative approach to quantum mechanics has never been very popular, primarily because it was unnecessary. Quantum mechanics and quantum field theory as taught in the textbooks proved to work enormously well and there was much to be done. But despite its unpopularity, this line of research never went extinct and carried on until today.

Today we are reaching the limits of what can be done with the theories we have and we are left with unanswered questions. “Shut up and calculate” turned into “Shut up and let me think”. Tired of doing loop expansions, still not knowing how to quantize gravity, the naturalness-issue becoming more pressing by the day, most physicists are convinced we are missing something. Needless to say, no two of them will agree on what that something is. One possible something that has received an increasing amount of attention during the last decade is that we got the foundations of quantum mechanics wrong. And with that the idea that quantum mechanics may be explainable by classical particles and waves is back en vogue.

Enter Yves Couder.

Couder spends his days dropping silicone oil. Due to surface tension and chemical potentials the silicone droplets, if small enough, will not sink into the oil bath of the same substance, but hover above its surface, separated by an air film. Now he starts oscillating the oil up and down and the drops start to bounce. This simple experiment creates a surprisingly complex coupled system of the driven oscillator that is the oil and the bouncing droplets. The droplets create waves every time they hit the surface and the next bounce of the droplets depends on the waves they hit. The waves of the oil are both a result of the bounces as well as a cause of the bounces. The drops and the waves, they belong together.

Does it smell quantum mechanical yet?

The behavior is interesting even if one looks at only one particle. If the particle is given an initial velocity, it will maintain this velocity and drag the wave field with it. The drop will anticipate walls or other obstacles and make turns before reaching them, because the waves in the oil have already been reflected. The behavior of the drop is very suggestive of quantum mechanical effects. Faced with a double-slit, the drop will sometimes take one slit, sometimes the other. A classical wave by itself would go through both slits and interfere with itself. A classical particle would go through one of the slits. The bouncing droplet does neither. It is a clever system that converts the horizontal driving force of the oil into vertical motion by the drops bouncing off the rippled surface. It is something else in its own right.

You can watch some of the unintuitive behavior of the coupled drop-oil system in the video below. The double-slit experiment is at 2:41 mins.


Other surprising findings in these experiments have been that the drops exert an attractive force on each other, that they can have quantized orbits, and that they mimic tunneling and Anderson localization. In short, the droplets show behavior that was previously believed to be exclusively quantum mechanical.

But just exactly why that would be so, nobody really knew. There were many experiments, but no good theory. Until now. In a recent paper, Robert Brady and Ross Anderson from the University of Cambridge delivered the theory:

While the full behavior of the drop-oil system is so far not analytically computable, they were able to derive some general relations that shed much light on the physics of the bouncing droplets. This became possible by noting that in the range in which the experiments are conducted the speed of the oil waves is to good approximation independent of the frequency of the waves, and the equation governing the waves is linear. This means it obeys an approximate Lorentz-symmetry, which enabled them to derive relations between the bounce-period and the velocity of the droplet that fit very well with the observations. They also offer an explanation for the attractive force between the droplets, due to the periodic displacement of the cause of the waves and the source of the waves, and tackle the question of how the droplets are bounced off barriers.
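The underlying observation can be stated in one line – my paraphrase of a generic textbook fact, not a formula taken from their paper: a linear wave equation with a frequency-independent wave speed c_s,

    \frac{1}{c_s^2}\,\frac{\partial^2 \phi}{\partial t^2} - \nabla^2 \phi = 0,

is left unchanged by boosts of the usual Lorentz form with c_s taking the place of the speed of light, which is why relations familiar from special relativity carry over to the droplet dynamics.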

These are not technically very difficult calculations; their value lies in making the connection between the observations and the theory, which now opens the possibility of using this theory to explain quantum phenomena as emergent from an underlying classical reality. I can imagine this line of research becoming very fruitful also for the area of emergent gravity. And if you turn it around, understanding these coupled systems might give us a tool to scale up at least some quantum behavior to macroscopic systems.

While I think this is interesting fluid dynamics and pretty videos, I remain skeptical of the idea that this classical system can reproduce all achievements of quantum mechanics. To begin with, it gives me pause that the Lorentz-symmetry is only approximate, and I don’t see what this approach might have to say about entanglement, which for me is the hallmark of quantum theory.

Ross Anderson, one of the authors of the above paper, is more optimistic: “I think it's potentially one of the most high-impact things I've ever done,” he says, “If we're right, and reality is fluid-mechanical at the deepest level, this changes everything. It consigns string theory and multiple universes to the dustbin.”

Wednesday, January 08, 2014

Why quantize gravity?

When I first read DNLee’s story, I mistakenly thought “ofek” is a four letter word, an internet slang for oh-what-the-fucken-heck. Ofek, then, is all I have to say about my recent grant proposals being declined, one after the other. Ah, scratch the word “recent” – the Swedish Research Council hasn’t funded any one of my project proposals since I moved to Sweden in 2009. So all I can do is to continue to write papers as what feels like the only person working on quantum gravity in Northern Europe. Most painfully, I have had to turn away many a skilled and enthusiastic student wanting to work on quantum gravity phenomenology.

To add insult to injury, the Swedish Research Council publicly lists the titles of the winning proposals. You win if your research contains either “nano” or “neuro”, promises a cure for cancer, green energy, or a combination of the above. The strategy is designed for a bad return on investment. Money goes where lots of people keep poking at the same questions. If many flies circle the same spot there must be shit to find, the thinking is, so let’s throw money at it. Our condensed matter people have no funding issues.

As nations face economic distress and support for basic research dwindles, why would anybody want to work on quantum gravity. Srsly. This question keeps coming back to me; its recurrence time conspicuously coincides with the funding agencies’ call cycles. It factors into my reflection index that women, I read, are drawn to occupations that help others, also occupations where they can use their allegedly superior social and language skills. What’s wrong with me? Why quantize gravity if I could cure cancer instead? Or at least write proposals promising I will, superior language skills and all.

Modern medicine wouldn’t exist without the technologies that have become possible by breakthroughs in physics. There wouldn’t be any nano or neuro without imaging and manipulating quantum things and without understanding atoms and nucleons. Without basic research in physics, there wouldn’t be CT scans, there wouldn’t be NMR, nuclear power, digital cameras, and there wouldn’t be optical fibers for endovenous laser treatment.

At this point in history we still build on the new ground discovered by physicists a century ago. But the only way we can continue improving our circumstances of living is to increase our understanding of the fundamental laws of nature. And at the very top of the list there’s the question of what space and time are, and how we can manipulate quantum objects. In my mind, these questions are intimately related. In my mind, that’s the ground the technologies of the next centuries will be built upon. In my mind, that’s how my occupation contributes to society – not to this generation maybe, but to the coming ones. Quantum gravity, quantum information, and the foundations of quantum mechanics are what will keep medicine advancing when nano and neuro have peaked and busted. Which will happen, inevitably, sooner or later.

So why quantum gravity? Because we know our knowledge of nature is incomplete. There must be more to find than we have found so far.

The search for quantum gravity is often portrayed as a search for unification. All other interactions besides gravity are quantized, there’s no unifying framework and that’s what physicists are looking for. It’s an argument from aesthetics, and it’s an argument I don’t like. Yes, it is unaesthetic to have gravity stand apart, but the reason we look for a quantum theory of gravity is much stronger than that: We know that unquantized gravity is incomplete and it is inconsistent with quantum theory. It isn’t only that we don’t know how to quantize gravity and that bugs us, we actually know that the combination of theories we presently have does not describe space and time at the fundamental level.

The strongest evidence for this inconsistency is the occurrence of singularities in unquantized gravity and the black hole information problem. The singularities are a sign that the unquantized theory breaks down and is incomplete. The black hole information problem shows that combining unquantized gravity with quantized matter is inconsistent – the result of combining them is incompatible with quantum theory.

Most importantly, we know that quantum particles can exist in superposition states, they can be neither here nor there. We also know that all particles carry energy and all energy creates a gravitational field. We thus know that the gravitational field of a superposition must exist, but we don’t know what it is. If the electron goes through both the left and the right slit, what happens to its gravitational field? Infuriatingly, nobody knows.

“Nobody knows” isn’t to say that nobody has an answer. Everybody seems to have an answer, the flies are circling happily. So I’ve made it my job to find out how we can ever know, which leads me to the question of how to experimentally test quantum gravity. Without finding observational evidence, quantum gravity should be taught in the math or philosophy departments, not in the physics departments.

The irony is that quantum gravity phenomenology is as safe an investment as it gets in science. We know the theory must exist. We know that the only way it can be scientific is to make contact to observation. Quantum gravity phenomenology will become reality as surely as volcanic ash will drift over Central Europe again.

Every time I go down this road of self-doubts, I come out at the same place, which is right here in my office with my notepad and the books and the piles of papers. Quantum gravity is the next level of fundamental laws. The theory has to be connected to experiment. Quantum gravity is my contribution to the future of our societies and to help advance life on planet Earth. And, so I hope, to space exploration, eventually. Because I really want to ask those aliens a few things.

Today I talked to a professional photographer. Between the apertures and external flash settings and my attempt to produce a smile, I learned that he too has to write proposals for project funding. In his case, that’s portraits taken by a method which, I gather, isn’t presently widely used and not very popular with the Swedes. It’s neither nano nor neuro and it wasn’t funded.

Money is time, and time flies, and so in the end the most annoying part is all the waste of time that I could have used better than searching for pretty adjectives to decorate my proposals. Your tax money at work. Neuro-gravity anybody? Nano is also a four-letter word.

Wednesday, November 27, 2013

Cosmic Bell

On the playground of quantum foundations, Bell’s theorem is the fence. This celebrated theorem – loved by some and hated by others – shows that correlations in quantum mechanics can be stronger than in theories with local hidden variables. Such local hidden variables theories are modifications of quantum mechanics which aim to stay close to the classical, realist picture, and promise to make understandable what others have argued cannot be understood. In these substitutes for quantum mechanics, the ‘hidden variables’ serve to explain the observed randomness of quantum measurement.

Experiments show however that correlations can be stronger than local hidden variables theories allow, as strong as quantum mechanics predicts. This is very clear evidence against local hidden variables, and greatly diminishes the freedom researchers have to play with the foundations of quantum mechanics.
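For concreteness, here is the textbook CHSH version of that statement – my addition, not part of the paper discussed below; all that goes in is the singlet-state correlation E(a,b) = -cos(a-b) and the standard choice of detector angles:

    import math

    def E(a, b):
        """Singlet-state correlation for detector angles a and b (radians)."""
        return -math.cos(a - b)

    a, a2 = 0.0, math.pi / 2               # Alice's two settings
    b, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(f"|S| = {abs(S):.3f}  (local hidden variable bound: 2)")
    # prints |S| = 2.828, i.e. 2*sqrt(2)

Any local hidden variable theory keeps |S| at or below 2; the measured values agree with the quantum prediction of 2√2.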

But a fence has holes and Bell’s theorem has loopholes. These loopholes stem from assumptions that necessarily enter every mathematical proof. Closing all these loopholes by making sure the assumptions cannot be violated in the experiment is challenging: Quantum entanglement is fragile and noise is omnipresent.

One of these loopholes in Bell’s theorem is known as the ‘freedom of choice’ assumption. It assumes that the settings of the two detectors which are typically used in Bell-type experiments can be chosen ‘freely’. If the detector settings cannot be chosen independently, or are both dependent on the same hidden variables, this could mimic the observed correlations.

This loophole can be addressed by using random sources for the detector settings and putting them far away from each other. If the hidden variables are local, any correlations must have been established already when the sources were in causal contact. The farther apart the sources for the detector settings, the earlier the correlations must have been established because they cannot have spread faster than the speed of light. The earlier the correlations must have been established, the less plausible the theory, though how early is ‘too early’ is subjective. As we discussed earlier, in practice theories don’t so much get falsified as that they get implausified. Pushing back the time at which detector correlations must have been established serves to implausify local hidden variable theories.

In a neat recent paper, Jason Gallicchio, Andrew Friedman and David Kaiser studied how to use cosmic sources to set the detector, sources that have been causally disconnected since the big bang (which might or might not have been ‘forever’). While this had been suggested before, they did the actual work, thought about the details, the technological limitations, and the experimental problems. In short, they breathed the science into the idea.

    Testing Bell's Inequality with Cosmic Photons: Closing the Settings-Independence Loophole
    Jason Gallicchio, Andrew S. Friedman, David I. Kaiser
    arXiv:1310.3288 [quant-ph]

The authors look at two different types of sources: distant quasars on opposite sides of the sky, and patches of the cosmic microwave background (CMB). In both cases, photons from these sources can be used to switch the detectors, for example by using the photon’s arrival time or their polarization. The authors come to the conclusion that quasars are preferable because the CMB signal suffers more from noise, especially in Earth-based telescopes. Since this noise could originate in close-by sources, it would spoil the conclusions for the time at which correlations must have been established.

According to the authors, it is possible with presently available technology to perform a Bell-test with such distant sources, thus pushing back the limit on conspiracies that could allow hidden variable theories to deliver quantum mechanical correlations. As always with such tests, it is unlikely that any disagreement with the established theory will be found, but if a disagreement can be found, it would be very exciting indeed.

It remains to be said that closing this loophole does not constrain superdeterministic hidden variables theories, which are just boldly non-local and not even necessarily realist. I like superdeterministic hidden variable theories because they stay as close to quantum mechanics as possible while not buying into fundamental non-determinism. In this case it is the measured particle that cannot be prepared independently of the detector settings, and you already know that I do not believe in free will. This requires some non-locality but not necessarily superluminal signaling. Such superdeterministic theories cannot be tested with Bell’s theorem. You can read here about a different test that I proposed for this case.

Thursday, November 21, 2013

The five questions that keep physicists up at night

Image: Leah Saulnier.

The internet loves lists, among them the lists with questions that allegedly keep physicists up at night. Most recently I spotted one at SciAm blogs, About.com has one, sometimes it’s five questions, sometimes seven, nine, or eleven, and Wikipedia excels in listing everything that you can put a question mark behind. The topics slightly vary, but they have one thing in common: They’re not the questions that keep me up at night.

The questions that presently keep me up are “Where is the walnut?” or “Are the street lights still on?” I used to get up at night to look up an equation, now I get up to look for the yellow towel, the wooden memory piece with the ski on it, the one-eyed duck, the bunny’s ear, the “white thing”, the “red thing”, mentioned walnut, and various other household items that the kids Will Not Sleep Without.

But I understand of course that the headline is about physics questions...

The physics questions that keep me up at night are typically project-related. “Where did that minus go?” is for example always high on the list. Others might be “Where is the branch cut?”, “Why did I not run the scheduled backup?”, “Should I resend this email?” or “How do I shrink this text to 5 pages?”, just to mention a few of my daily life worries.

But I understand of course that the headline is about the big, big physics questions...

And yes, there are a few of these that keep coming back and haunt me. Still they’re not the ones I find on these lists. What you find on the lists in SciAm and NewScientist could be more aptly summarized as “The 5 questions most discussed on physics conferences”. They’re important questions. But it’s unfortunate how the lists suggest physicists all more or less have the same interests and think about the same five questions.

So I thought I’d add my own five questions.

Questions that really bother me are the ones where I’m not sure how to even ask the question. If a problem is clear-cut and well-defined it’s a daylight question - a question that can be attacked by known methods, the way we were taught to do our job. “What’s the microscopic origin of dark matter?” or “Is it possible to detect a graviton?” are daylight questions that we can play with during work hours and write papers about.

And then there are the night-time questions.
  • Is the time-evolution of the universe deterministic, indeterministic or neither?

    How can we find out? Can we at all? And, based on this, is free will an illusion? This question doesn’t really fall into any particular research area in physics as it concerns the way we formulate the laws of nature in general. It is probably closest to the foundations of quantum mechanics, or at least that’s where it gets most sympathy.
  • Does the past exist in the same way as the present? Does the future?

    Does a younger version of yourself still exist, just that you’re not able to communicate with him (her), or is there something special about the present moment? The relevance of this question (as Lee elaborated on in his recent book) stems from the fact that none of our present descriptions of nature assigns any special property to the ever-changing present. I would argue this question is closest to quantum gravity since it can’t be addressed without knowing what space and time fundamentally are.
  • Is mathematics the best way to model nature? Are there systems that cannot be described by mathematics?

    I blame Max Tegmark for this question. I’m not a Platonist and don’t believe that nature ultimately is mathematics. I don’t believe this because it doesn’t seem likely that the description of nature that humans discovered just yesterday would be the ultimate one. But if it’s not then what is the difference between mathematics and reality? Is there anything better? If so, what? If not, what does this mean for science?
  • Does a theory of everything exist and can it be used, in practice (!), to derive the laws of nature for all emergent quantities?

    If so, will science come to an end? If not, are there properties of nature that cannot be understood or even modeled by any conscious being? Are there cases of strong emergence? Can we use science to understand the evolution of life, the development of complex systems, and will we be able to tell how consciousness will develop from here on?
  • What is the origin and fate of the universe and does it depend on the existence of other universes?

    That’s the question from my list you are most likely to find on any ‘big questions of physics’ list. It lies on the intersection of cosmology and quantum gravity. Dark matter, dark energy, black holes, inflation and eternal inflation, the nature and existence of space-time singularities all play a role to understand the evolution of the universe.
(It's not an ordered list because it's not always the same question that occupies my mind.)

I saw that Ashutosh Jogalekar at SciAm blogs also was inspired to add his own five mysteries to the recent SciAm list. If you want to put up your own list, you can post the link in this comment section, I will wave it through the spam filter.

Tuesday, October 15, 2013

Shut up and let me think

I recently attended a conference on the foundations of quantum mechanics in Vienna. It was a very interesting and well organized event. The food was good, the staff efficient, and everybody got a conference bag with an umbrella.

I don’t normally have a lot to do with quantum foundations, especially not since I left Perimeter Institute. And so I learned many new things and got feedback on my paper. It was a useful meeting for me – but it was also a little strange.

Most of the feedback I got was people telling me they don’t believe in superdeterminism, wanting to know why I believe in it, not that I’m sure I do. Discussions turned towards final causes and theology. I’m a phenomenologist, I heard myself saying, I couldn’t care less what other people believe, I want to know how it can be tested. Faintly, I heard an echo of a conversation I had with Joao Magueijo at PI some years ago. Boy, I thought back then, does this guy get explosive when asked about his beliefs. Now I think he must have been spending too much time with the quantum foundations folks. Suddenly I’m very sympathetic to Joao’s attitude.

Quantum foundations polarizes like no other area in physics. On the one hand there are those actively participating who think it’s the most important thing ever but no two of them can agree on anything. And then there’s the rest who thinks it’s just a giant waste of time. In contrast, most people tend to agree that quantum gravity is worthwhile, though they may differ in their assessment of how relevant it is. And while there are subgroups in quantum gravity, there’s a lot of coherence in these groups (even among them, though they don’t like to hear that).

As somebody who primarily works in quantum gravity, I admit that I’m jealous of the quantum foundations people. Because they got data. It is plainly amazing for me to see just how much technological progress during the last decade has contributed to our improved understanding of quantum systems. Be it tests of Bell’s theorem with entangled pairs separated by hundreds of kilometers, massive quantum oscillators, molecule interferometry, tests of the superposition principle, weak measurements, using single atoms as a double slit, quantum error correction, or the tracking of decoherence, to only mention what popped into my head first. When I was a student, none of that was possible. This enables us to test quantum theory now much more precisely and in more circumstances than ever before.

This technological progress may not have ignited the interest in the foundations of quantum mechanics but it has certainly contributed to the field drawing more attention and thus drawing more people. That however doesn’t seem to have decreased the polarization of opinions, but rather increased it. The more attention research on quantum foundations gets, the more criticism it draws.

“Shut up and let me think” is the title of an essay by Pablo Echenique-Robba which you can find on the arxiv at 1308.5619 [quant-ph]. In his personal account Pablo addresses common arguments for why research on quantum foundations is a waste of time. I’ve encountered most of these and I largely agree with his objections. But let me add some points Pablo didn’t mention.

I do have my issues with much of what I’ve seen in quantum foundations. To begin with, most of it seems to be focused on non-relativistic quantum mechanics. That’s like trying to improve the traffic in NYC by breeding better horses. If you can’t make it Lorentz-invariant and second quantized I don’t know why I should think about it. More important, I can’t fathom what most of the interpretation-pokers are aiming at. It’s all well and fine with me to try to find another formulation for the theoretical basis of quantum theory. But in the end I want to see either exactly what the observable differences are or I want to see a proof of equivalence. Alas, there seems to be a lot of talk about, well, interpretations which do neither one nor the other. Again the phenomenologist lacks the motivation to think about it.

Despite these reservations I think that research on the foundations of quantum mechanics is of value, again for a reason that Pablo did not address in his paper, which is why I want to add it here.

I’ve been educated in the “shut up and calculate” philosophy with my profs preaching Feynman’s mantra that nobody understands quantum mechanics, so don’t bother trying. Needless to say, I, like probably most students, was not so much deterred as encouraged by this, and so we dug a little into the literature. If you dig, it gets into philosophy very quickly. That’s not necessarily a bad thing, but most students come around to realize they wanted to study physics, not philosophy, and they move on to calculate. I’m among those who feel comfortable with a mathematical framework that “just” delivers results and that can be used to describe nature. To me science is “just” about making good models.

But those who criticize research on the foundations of quantum mechanics on the grounds that everything has already been understood are dismissing a way to arrive at an improved description of nature, and they are dismissing it based on unjustified arrogance about the superiority of their own motives.

Science progresses by evaluating the usefulness of models of nature in the form of specific hypotheses. What we call the ‘scientific method’ is a set of procedures that have proved efficient in creating good hypotheses and tests thereof. Not only do these methods change (hopefully improve) over time, what constitutes a ‘good’ hypothesis also depends on beliefs and social dynamics. In the end what matters is not how somebody arrived at a hypothesis, but whether it works. That’s the essence of scientific progress.

The action principle, gauge-symmetry, and unification, for example, have proved dramatically useful in the construction of theories. And that they have been useful in the past is a good reason to employ them in the future search for improved theories. The same goes for naturalness. A theory that isn’t ‘natural’ is typically believed to be incomplete and in need of improvement or at least additional explanation. Yet all that says is that it’s a criterion which researchers draw upon to arrive at better theories. There’s no proof that this will work. It’s a reasonable guess, that’s all. How reasonable depends on your attitude, your beliefs and on whether you think it’ll land you a job.

And so some may guess there is something to be gained by poking around on the foundations of quantum mechanics. You might not believe that the reasons for their interest are good reasons, much like I don’t believe in naturalness and others don’t believe in a theory of everything. But in the end it doesn’t matter. In the end what matters is not what motivated people to study some research question, but only whether it led to something.

My support for quantum foundations thus comes from a live-and-let-live attitude. Maybe studying the foundations of quantum theory will improve our understanding of the fundamental nature of reality. Maybe it won’t. I don’t understand most of their motivations. But then they don’t understand mine either.

Those who dismiss quantum foundations as a waste of time I want to ask to consider what it would mean if this research did in fact reveal a different theory underlying quantum mechanics, one that allowed us to manipulate quantum processes in novel ways. The potential is enormous. It’s not a stone that should be left unturned.

I’ll shut up now and let you think.

Tuesday, October 01, 2013

Testing Conspiracy Theories

I'm about to fly to Vienna where I'll be attending a conference on Emergent Quantum Mechanics. I'm not entirely sure why I was invited to this event, but I suspect it's got something to do with me being one of the three people on the planet who like superdeterministic hidden variables theories, more commonly known as "conspiracy theories".

Leaving aside some loopholes that are about to be closed, tests of Bell's theorem rule out local hidden variables theories. But any theorem is only as good as the assumptions that go into it, and one of these assumptions is that the experimenter can freely choose the detector settings. As you know, I don't believe in free will, so I have an issue with this. You can see though why theories in which this assumption does not hold are known as "conspiracy theories". While they are not strictly speaking ruled out, it seems that the universe must be deliberately mean to prevent the experimentalists from doing what they want, and this option is thus often not taken seriously.

But really, this is a very misleading interpretation of superdeterminism. All that superdeterminism means is that a state cannot be prepared independently of the detector settings. That's non-local of course, but it's non-local in a soft way, in the sense that it's a correlation but doesn't necessarily imply a 'spooky' action at a distance because the backwards lightcones of the detector and state (in a reasonable universe) intersect anyway.

That having been said, whether you like superdeterministic hidden variables theories or not, the real question is whether there is some way to test if that's how nature works, because one can't use Bell's theorem here. After some failed attempts, I finally came up with a possible test that is almost model-independent, and it was published in my paper "Testing super-deterministic hidden variables theories".

I actually wrote this paper in the hospital when I was pregnant. The nurses kept asking me if I was writing a book. They were quite disappointed to be drowned in elaborations on the foundations of quantum mechanics rather than hearing a vampire story. In any case, in the expectation that the readers of this blog are somewhat more sympathetic to the question whether the universe is fundamentally deterministic or not, here is a brief summary of the idea.

The central difference between standard quantum mechanics and superdeterministic hidden variables theories is that in the former case two identically prepared states can give two different measurement outcomes, while in the latter case that's not possible. Unfortunately, "identically prepared" includes the hidden variables and it's difficult to identically prepare something that you can't measure. That is after all the reason why it looks indeterministic.

However, rather than trying to prepare identical states we can try to make repeated measurements on the same state. For that, take two non-commuting variables (for example the spin or polarization in two different directions) and measure them alternately. In standard quantum mechanics the measurement outcomes will be uncorrelated. In a superdeterministic hidden variables theory, they'll be correlated, provided you can make a case that the hidden variables don't change in between the measurements. The figure below shows an example for an experimental setup.

A particle (electron/photon) is bounced back and forth between two mirrors (grey bars). The blue and red bars indicate measurements of two non-commuting variables; only one eigenvalue passes, the other leaves the system. The quantity to measure is the average time it takes until the particle leaves. In a superdeterministic theory, it can be significantly longer than in standard quantum mechanics.


The provision that the hidden variables don't change is the reason why the test is only 'almost' model independent, because I made the assumption that the hidden variables are due to the environment (the experimental setup) down to the relevant scales of the interactions taking place. This means, basically, that if you make the system small and cool and measure quickly enough, you have a chance to see the correlation between subsequent measurements. I made some estimates (see paper) and it seems possible with today's technology to make this test.
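If the logic sounds abstract, here is a toy Monte Carlo that illustrates it (my own sketch, not the model in the paper): in standard quantum mechanics each passage through a filter for the complementary observable is an independent coin flip, while in a caricature superdeterministic model a frozen hidden variable fixes the outcome of both filters, so a fraction of the particles keeps passing and the average survival time goes up.

```python
import random

def qm_survival(max_bounces=1000):
    """Standard QM: after passing a filter for one observable, the outcome for
    the complementary (non-commuting) observable is 50/50, so every further
    filter is an independent coin flip."""
    t = 0
    while t < max_bounces:
        if random.random() < 0.5:    # fails the next filter and leaves the setup
            return t
        t += 1
    return t

def toy_superdet_survival(max_bounces=1000):
    """Caricature superdeterminism: a hidden variable, drawn once and frozen,
    fixes the outcome of both filters; about a quarter of the particles then
    pass every filter and stay until the cutoff."""
    passes_first = random.random() < 0.5    # hidden outcome for observable 1
    passes_second = random.random() < 0.5   # hidden outcome for observable 2
    t = 0
    while t < max_bounces:
        passes = passes_first if t % 2 == 0 else passes_second
        if not passes:
            return t
        t += 1
    return t

N = 100_000
print("standard QM   :", sum(qm_survival() for _ in range(N)) / N)            # about 1 bounce
print("toy superdet. :", sum(toy_superdet_survival() for _ in range(N)) / N)  # hundreds of bounces
```

The real test is of course subtler than this, since actual hidden variables would not stay frozen forever, which is exactly why the system has to be small, cold, and measured quickly.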

Interestingly, after I had finished a draft of the paper, Chris Fuchs sent me a reference to a 1970 article by Eugene Wigner where, in a footnote, Wigner mentions Von Neumann discussing exactly this type of experiment:
“Von Neumann often discussed the measurement of the spin component of a spin-1/2 particle in various directions. Clearly, the possibilities for the two possible outcomes of a single such measurement can be easily accounted for by hidden variables [...] However, Von Neumann felt that this is not the case for many consecutive measurements of the spin component in various different directions. The outcome of the first such measurement restricts the range of values which the hidden parameters must have had before that first measurement was undertaken. The restriction will be present also after the measurement so that the probability distribution of the hidden variables characterizing the spin will be different for particles for which the measurement gave a positive result from that of the particles for which the measurement gave a negative result. The range of the hidden variables will be further restricted in the particles for which a second measurement of the spin component, in a different direction, also gave a positive result...”
Apparently there was a longer discussion with Schrödinger following this proposal, which could be summarized by saying that the experiment cannot test generic superdeterminism, but only certain types, as I already said above. If you think about it for a moment, you can never rule out generic superdeterminism anyway, so why even bother.

I'm quite looking forward to this conference, to begin with because Vienna is a beautiful city and I haven't been there for a while, but also because I'm hoping to meet some experimentalists who can tell me if I'm nuts :p

Update: Slides of my talk are here.

Thursday, June 20, 2013

Testing spontaneous localization models with molecular level splitting

Gloria's collapse model.
We in the quantum gravity groups all over the planet search for a unified framework for general relativity and quantum theory. But I have a peripheral interest also in modifications of general relativity and quantum mechanics, since altering one of these two ingredients can change the rules of the game. General relativity and quantum mechanics however work just fine as they are, so there is little need to modify them. In fact, modifications typically render them less appealing to the theoretician, not to say ugly.

Spontaneous localization models for quantum mechanics are, if you ask me, a particularly ugly modification. In these models, one replaces the collapse upon observation in the Copenhagen interpretation by a large number of little localizations that have the purpose of producing eigenstates upon observation. These localizations, which essentially focus the spread of the wave-function, are built into the dynamics by some stochastic process, and the rate of collapse depends on the mass of the particles (the higher the mass, the higher the localization rate). The purpose of these models is to explain why we measure the effects of superposition, but never a superposition itself, and never experience macroscopic objects in superpositions.

Unfortunately, I have no reason to believe that nature gives a damn what I find ugly or not, and quite possibly you don’t care either. And so, as a phenomenologist, the relevant question that remains is whether spontaneous localization models are a description of nature that agrees with observation.

And, to be fair, on that account spontaneous localization models are actually quite appealing. That is because their effects, or the parameters of the model respectively, can be bounded both from above and below. The reason is that the collapse processes have to be efficient enough to produce eigenstates upon observation, but not so efficient as to wash out the effects of quantum superpositions that we observe.

The former bound on the efficient production of observable eigenstates becomes ambiguous however if you allow for a many worlds interpretation because then you don’t have to be bothered by macroscopic superpositions. Alas, the intersection of the groups of many worlds believers and spontaneous localization believers is an empty set. Therefore, the spontaneous localization approach has a range of parameters with macroscopic superpositions that is “philosophically unsatisfactory,” as Feldman and Tumulka put it in their (very readable) paper (arXiv:1109.6579). In other words, if you allow for a many worlds situation whose main feature is the absence of collapse, then there really is no point to add stochastic localization on top of that. So it’s either-or, and thus requiring absence of macroscopic superpositions bounds possible parameters.

Still, the notion of what constitutes “macroscopic reality” is quite fuzzy. Just to give you an idea of the problem, the estimates by Feldman and Tumulka go along these lines:
“To obtain quantitative estimates for the values [of the model parameters] that define the boundary of the [philosophically unsatisfactory region], we ask under which conditions measurement outcomes can be read off unambiguously... For definiteness, we think of the outcome as a number printed on a sheet of paper; we estimate that a single digit, printed (say) in 11-point font size, consists of 3 x 10^17 carbon atoms or N = 4 x 10^18 nucleons. Footnote 1: Here is how this estimate was obtained: We counted that a typical page (from the Physical Review) without figures or formulas contains 6,000 characters and measured that a toner cartridge for a Hewlett Packard laser printer weighs 2.34 kg when full and 1.54 kg when empty. According to the manufacturer, a cartridge suffices for printing 2 x 10^4 pages...”
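As an aside, the footnote’s arithmetic does check out; here it is redone in a few lines (my numbers, treating the toner as essentially carbon, as the atom count in the quote suggests):

```python
AVOGADRO = 6.022e23                    # atoms per mole
M_CARBON = 12.0                        # g/mol for carbon-12

toner_used_g = (2.34 - 1.54) * 1000    # grams of toner used per cartridge
pages_per_cartridge = 2e4
chars_per_page = 6000

toner_per_char_g = toner_used_g / pages_per_cartridge / chars_per_page
carbon_atoms = toner_per_char_g / M_CARBON * AVOGADRO
nucleons = 12 * carbon_atoms           # 12 nucleons per carbon-12 atom

print(f"toner per printed character: {toner_per_char_g * 1e6:.1f} micrograms")
print(f"carbon atoms per digit: {carbon_atoms:.1e}")   # ~3e17, as quoted
print(f"nucleons per digit: {nucleons:.1e}")           # ~4e18, as quoted
```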
The paper continues in this vein. They also discuss the question of whether chairs exist:
“One could argue that the theory actually becomes empirically refuted, as it predicts the nonexistence of chairs while we are sure that chairs exist in our world. However, this empirical refutation can never be conclusively demonstrated because the theory would still make reasonable predictions for the outcomes of all experiments...”
Meanwhile on planet earth, particle physicists calculate next-to-next-to-next-to leading order corrections to the Higgs cross-section.

Sarcasm aside, my main problem with this, and with most interpretations and modifications of quantum mechanics, is that we already know that quantum mechanics is not fundamentally the correct description of nature. That’s why we teach second quantization to students. To make matters worse, most of such modifications of quantum mechanics deal with the non-relativistic limit only. I thus have a hard time getting excited about collapse models. But I’m digressing – we were discussing their phenomenological viability.

In fact, Feldman and Tumulka’s summary of experimental (ie non-philosophic) constraints isn’t quite as mind-enhancing as the nonexistent chair I’m sitting on. (Hard science, my ass.) Some experimental constraints they are discussing: The stochastic process of these models contributes to global warming by injecting energy with each collapse, and since there’s some cave in Germany which doesn’t noticeably warm up in July, this gives a constraint. And since we have not heard any “spontaneous bangs” around us that would accompany the collapses in certain parameter ranges, we get another constraint. Then there’s atom interferometry. And then there’s this very interesting recent paper:


In this paper the authors calculate how spontaneous localization affects quantum mechanical oscillation between two eigenstates. If you recall, we previously discussed how the observation of such oscillations makes it possible to put bounds on decoherence induced by coupling to space-time foam. For the space-time foam, neutral kaons make a good system for experimental tests. Decoherence from space-time foam should decrease the ability of the kaons to oscillate into each other. The bounds on the parameters are meanwhile getting close to the Planck scale.

For spontaneous localization the effect scales differently with the mass though, and is thus not testable in neutral kaon oscillation. Since the localization effects get larger with larger masses, the authors recommend looking instead for the effects of collapse models in chiral molecules.

Chiral molecules are pairs of molecules with the same atomic composition but with a different spatial arrangement. And some of these molecules can exist in superpositions of such spatial arrangements that can transform into each other. In the low temperature limit, this leads to an observable level splitting in the molecular spectrum. The best known example of such tunneling-induced level splitting may be the inversion doubling of ammonia.

Now if collapse models were correct, then these spatial superpositions of chiral molecules should localize, and the level splitting, which is a consequence of the superposition of the two eigenstates, should become unobservable. The authors estimate that with current measurement precision the bound from molecular level splitting is about comparable to that from atom interferometry (where interference should become unobservable if spontaneous localization is too efficient, thus leading to a bound). Molecular spectroscopy is presently a very active research area, and with better resolution and larger molecules this bound could be improved.
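To get a feeling for the effect, here is a minimal two-state sketch (my own toy, not the calculation in the paper): the left-handed and right-handed configurations are coupled by a tunneling splitting, and a Lindblad-type dephasing in the left/right basis stands in for the mass-dependent localization. When the localization rate is small compared to the splitting, the molecule coherently oscillates between the two configurations; when it is large, the oscillation, and with it the observable doublet, is frozen out.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(delta, lam, t_max=10.0, dt=0.001):
    """Euler-integrate d(rho)/dt = -i[H, rho] + lam*(sz rho sz - rho):
    tunneling with splitting `delta` plus dephasing ('localization')
    of rate `lam` in the left/right basis."""
    H = 0.5 * delta * sx
    rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start in the 'left' state
    left_population = []
    for _ in range(int(t_max / dt)):
        comm = H @ rho - rho @ H
        diss = sz @ rho @ sz - rho
        rho = rho + dt * (-1j * comm + lam * diss)
        rho = rho / np.trace(rho).real                # keep trace 1 against Euler drift
        left_population.append(rho[0, 0].real)
    return np.array(left_population)

for lam in (0.0, 0.05, 5.0):     # no, weak, and strong localization
    p = evolve(delta=1.0, lam=lam)
    print(f"localization rate {lam:4.2f}: left population stays in "
          f"[{p.min():.2f}, {p.max():.2f}]")
```

The doublet in a real spectrum is the frequency-domain signature of exactly this oscillation, which is why a washed-out splitting in large, heavy molecules would be a hint of spontaneous localization.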

In summary, this nice paper gives me hope that in the near future we can put the ugly idea of spontaneous localization to rest.

Saturday, September 01, 2012

Questioning the Foundations

The submission deadline for this year’s FQXi essay contest on the question “Which of Our Basic Physical Assumptions Are Wrong?” has just passed. They got many thought-provoking contributions, which I encourage you to browse here.

The question was really difficult for me. Not because nothing came to my mind but because too much came to my mind! Throwing out the Heisenberg uncertainty principle, Lorentz-invariance, the positivity of gravitational mass, or the speed of light limit – been there, done that. And that’s only the stuff that I did publish...

At our 2010 conference, we had a discussion on the topic “What to sacrifice?” addressing essentially the same question as the FQXi essay, though with a focus on quantum gravity. For everything from the equivalence principle through unitarity and locality to the existence of space and time, you can find somebody willing to sacrifice it for the sake of progress.

So what to pick? I finally settled on an essay arguing that the quantization postulate should be modified, and if you want to know more about this, go check it out on the FQXi website.

But let me tell you what my runner-up was.

“Physical assumption” is a rather vague expression. In the narrower sense you can understand it to mean an axiom of the theory, but in the broader sense it encompasses everything we use to propose a theory. I believe one of the reasons progress on finding a theory of quantum gravity has been slow is that we rely too heavily on mathematical consistency and pay too little attention to phenomenology. I simply doubt that mathematical consistency, combined with the requirement to reproduce the standard model and general relativity in the suitable limits, is sufficient to arrive at the right theory.

Many intelligent people spent decades developing approaches to quantum gravity, approaches which might turn out to have absolutely nothing to do with reality, even if they would reproduce the standard model. They pursue their research with the implicit assumption that the power of the human mind is sufficient to discover the right description of nature, though this is rarely explicitly spelled out. There is the “physical assumption” that the theoretical description of nature must be appealing and make sense to the human brain. We must be able to arrive at it by deepening our understanding of mathematics. Einstein and Dirac have shown us how to do it, arriving at the most amazing breakthroughs by mathematical deduction. It is tempting to conclude that they have shown the way, and we should follow in their footsteps.

But these examples have been exceedingly rare. Most of the history of physics instead has been incremental improvements guided by observation, often accompanied by periods of confusion and heated discussion. And Einstein and Dirac are not even good examples: Einstein was heavily guided by Michelson and Morley’s failure to detect the aether, and Dirac’s theory was preceded by a phenomenological model proposed by Goudsmit and Uhlenbeck to explain the anomalous Zeeman effect. Their model didn’t make much sense. But it explained the data. And it was later derived as a limit of the Dirac equation coupled to an electromagnetic field.

I think it is perfectly possible that there are different consistent ways to quantize gravity that reproduce the standard model. It also seems perfectly possible to me for example that string theory can be used to describe strongly coupled quantum field theory, and still not have anything to say about quantum gravity in our universe.

The only way to find out which theory describes the world we live in is to make contact with observation. Yet, most of the effort in quantum gravity is still devoted to the development and better understanding of mathematical techniques. That is certainly not sufficient. It is also not necessary, as the Goudsmit and Uhlenbeck example illustrates: Phenomenological models might not at first glance make much sense, and their consistency might only become apparent later.

Thus, the assumption that we should throw out is that mathematical consistency, richness, or elegance are good guides to the right theory. They are desirable of course. But neither necessary nor sufficient. Instead, we should devote more effort to phenomenological models to guide the development of the theory of quantum gravity.

In a nutshell that would have been the argument of my essay had I chosen this topic. I decided against it because it is arguably a little self-serving. I will also admit that while this is the lesson I draw from the history of physics, I, like most of my colleagues I believe, am biased towards mathematical elegance, and the equations named after Einstein and Dirac are the best examples for that.

Wednesday, June 15, 2011

Nonlocal correlations between the Canary Islands

Bell's inequality is the itch on the back of all believers in hidden variables. Based on only a few assumptions it states that some correlations in quantum mechanics cannot be achieved by local realistic hidden variables theories. The correlations in hidden variables theories of that type have to fulfill an inequality, now named after John Bell, violations of which have been observed in experiment; thus local realistic hidden variables don't describe reality. But as always, the devil is in the details, and if one doesn't pay attention to the details, loopholes remain. For Bell's inequality, there are actually quite a few of them, and to date no experiment has managed to close them all.

The typical experiment for Bell's theorem makes use of a pair of photons (or electrons), entangled in polarization (or spin). The two particles are sent in different directions and their polarizations are measured along different directions. The correlation among the pairs of repeated measurements is subject to Bell's inequality (or the more general CHSH inequality).
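For reference, the CHSH combination is easy to compute for the singlet state: any local realistic hidden variables theory has to stay at or below 2, while quantum mechanics reaches 2√2 at the optimal settings. A quick check with the textbook correlation function (nothing here is specific to the experiment discussed below):

```python
import math

def E(a, b):
    """Singlet-state correlation for spin measurements along angles a and b (radians)."""
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2              # Alice's two detector settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two detector settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2.828... = 2*sqrt(2), above the local realistic bound of 2
```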

Maybe the most obvious loophole, called the locality loophole, is that information could be locally communicated from one measurement to the other. Since information can be transmitted at most at the speed of light, this is possible if, for example, the second measurement is made with a delay relative to the first, such that the second measurement is in the forward lightcone of the first. Another loophole is that the detector settings may possibly be correlated with the prepared state without any violation of locality if they are in the forward lightcone of the preparation. Since in this case the experimenter cannot actually set the detector as he wishes, it's called the freedom-of-choice loophole.

A case where both loopholes are present is depicted in the space-time diagram below. The event marked with "E" is the emission of the photons. The red lines are the worldlines of the entangled electrons or photons (in an optical fiber). "A" and "B" are the two measurements and "a" and "b" are the events at which the detector settings are chosen. Also in the image are the forward lightcones of the events "E" and "A".


So that's how you don't want to make your experiment if you're aiming to disprove locally realistic hidden variables. Instead, what you want to do is an experiment as in the second figure below, where not only the measurement events "A" and "B" are spacelike to each other (ie they are not in each other's lightcone), but also the events "a" and "b" at which the detector settings are chosen are spacelike to each other and to the emission of the photons.

Let us also recall that the lightcone is invariant under Lorentz transformations, and thus whether two events are spacelike, timelike or lightlike to each other does not depend on the reference frame. If you manage to do it in one frame, it's good for all frames.
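Since the whole argument rests on this invariance, here is the criterion spelled out in code (a small sketch of mine with made-up numbers, not the actual timing of the experiment): two events are spacelike to each other if the invariant interval s^2 = c^2 Δt^2 - |Δx|^2 is negative.

```python
C = 299_792_458.0    # speed of light in m/s

def interval_squared(event_a, event_b):
    """Minkowski interval s^2 = c^2*dt^2 - |dx|^2 between two events,
    each given as (t in seconds, x, y, z in meters)."""
    dt = event_b[0] - event_a[0]
    dx2 = sum((b - a) ** 2 for a, b in zip(event_a[1:], event_b[1:]))
    return (C * dt) ** 2 - dx2

def spacelike(event_a, event_b):
    """True if neither event can causally influence the other; since s^2 is
    Lorentz-invariant, the verdict is the same in every reference frame."""
    return interval_squared(event_a, event_b) < 0

# Toy numbers: two detection events separated by 144 km (roughly the
# La Palma-Tenerife distance) and 0.3 ms in time are spacelike, because
# light needs about 0.48 ms to cover that distance.
A = (0.0,    0.0,   0.0, 0.0)
B = (0.3e-3, 144e3, 0.0, 0.0)
print(spacelike(A, B))   # True
```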

Looks simple enough in a diagram, less simple to actually do it: Entanglement is a fragile state and the speed of light, which is the maximum speed by which (hidden) information might travel, is really, really fast. It helps if you let the entangled particles travel over long distances before you make the measurement, but then you have to be very careful to get the timing right.

And that's exactly what a group of experimentalists around Anton Zeilinger did and published in November in their paper "Violation of local realism with freedom of choice" (arXiv version here). They closed for the first time both of the two above mentioned loopholes by choosing a setting that disabled communication between the measurement events as well as between the preparation of the photons and the choice of detector settings. The test was performed between two Canary Islands, La Palma and Tenerife.


[Image Source: Lonely Planet]

The polarization-entangled pairs of photons were produced in La Palma. One photon was guided to a transmitter telescope and sent over a distance of 144 km to Tenerife, where it was received by another telescope. The other photon went through 6 km of coiled optical fibre in La Palma. The detector settings in La Palma were chosen by a quantum random number generator 1.2 km away from the source, and in Tenerife by another, similar but independent random number generator. The measurements violated Bell's inequality by more than 16 standard deviations.

What a beautiful experiment!

But if you're a believer in local realistic hidden variable theories, let me scratch your itch. You can't close the freedom-of-choice loophole in superdeterministic hidden variables theories with this method because there's no true randomness in that case. It doesn't matter where you locate your "random" generator, its outcome was determined arbitrarily long ago in the backward lightcone of the emission.