Thursday, August 27, 2015

Embrace your 5th dimension.

What does it mean to live in a holographic universe?

“We live in a hologram,” the physicists say, but what do they mean? Is there a flat-world-me living on the walls of the room? Or am I the projection of a mysterious five-dimensional being and beyond my own comprehension? And if everything inside my head can be described by what’s on its boundary, then how many dimensions do I really live in? If these are questions that keep you up at night, I have the answers.

1. Why do some physicists think our universe may be a hologram?

It all started with the search for a unified theory.

Unification has been enormously useful for our understanding of natural law: Apples fall according to the same laws that keep planets on their orbits. The manifold appearances of matter as gases, liquids, and solids can be described as different arrangements of molecules. The huge variety of molecules themselves can be understood as various compositions of atoms. These unifying principles were discovered long ago. Today, physicists use the word unification specifically for a common origin of different interactions. The electric and magnetic interactions, for example, turned out to be two different aspects of the same electromagnetic interaction. The electromagnetic interaction, or rather its quantum version, has further been unified with the weak nuclear interaction. Nobody has yet succeeded in unifying all presently known interactions: the electromagnetic with the strong and weak nuclear ones, plus gravity.

String theory was conceived as a theory of the strong nuclear interaction, but it soon became apparent that quantum chromodynamics, the theory of quarks and gluons, did a better job at this. String theory gained a second wind, however, after physicists discovered it may serve to explain all the known interactions including gravity, and so could be a unified theory of everything, the holy grail of physics.

It turned out to be difficult, however, to get specifically the Standard Model interactions back from string theory. And so the story goes that in recent years the quest for unification has slowly been replaced with a quest for dualities, which demonstrate that all the different types of string theories are actually different aspects of the same theory, a theory that is yet to be fully understood.

A duality in the most general sense is a relation that identifies two theories. You can understand a duality as a special type of unification: In a normal unification, you merge two theories together to a larger theory that contains the former two in a suitable limit. If you relate two theories by a duality, you show that the theories are the same, they just appear different, depending on how you look at them.

One of the most interesting developments in high energy physics during the last decades is the finding of dualities between theories in a different number of space-time dimensions. One of the theories is a gravitational theory in the higher-dimensional space, often called “the bulk”. The other is a gauge-theory much like the ones in the standard model, and it lives on the boundary of the bulk space-time. This relation is often referred to as the gauge-gravity correspondence, and it is a limit of a more general duality in string theory.

To be careful: strictly speaking, this correspondence hasn't been proved. But there are several examples in which it has been so thoroughly studied that there is very little doubt it will be proved at some point.

These dualities are said to be “holographic” because they tell us that everything allowed to happen in the bulk space-time of the gravitational theory is encoded on the boundary of that space. And because there are fewer bits of information on the surface of a volume than in the volume itself, fewer things can happen in the volume than you’d have expected. It might seem as if particles inside a box are all independent of each other, but they must actually be correlated. It’s as if you were observing a large room with kids running and jumping, but suddenly you’d notice that every time one of them jumps, for some mysterious reason ten others must jump at exactly the same time.
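To get a feeling for how restrictive this is, here is a rough back-of-the-envelope sketch, not part of any actual holographic calculation: assume, as the holographic entropy bound suggests, roughly one bit per Planck-area patch on the boundary, and compare that to the naive count of one bit per Planck-volume cell in the bulk.

```python
import math

L_PLANCK = 1.616e-35  # Planck length in meters (approximate)

def holographic_bits(radius_m):
    """Bits if information scales with the boundary area: one bit per Planck area."""
    return 4 * math.pi * radius_m**2 / L_PLANCK**2

def naive_volume_bits(radius_m):
    """Bits if information scaled with volume: one bit per Planck-length cube."""
    return (4 / 3) * math.pi * radius_m**3 / L_PLANCK**3

R = 1.0  # a one-meter region, say
area_bits = holographic_bits(R)
vol_bits = naive_volume_bits(R)

print(f"area bits:   {area_bits:.2e}")
print(f"volume bits: {vol_bits:.2e}")
# the two counts differ by a factor of order R / L_PLANCK, about 10^34 here
print(f"ratio:       {vol_bits / area_bits:.2e}")
```

The point of the toy numbers: for any macroscopic region, the boundary count is smaller than the naive volume count by an enormous factor, and that factor is the amount of independence holography takes away.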

2. Why is it interesting that our universe might be a hologram?

This limitation on the amount of independence between particles due to holography would only become noticeable at densities too high for us to test directly. The reason this type of duality is interesting nevertheless is that physics is mostly the art of skillful approximation, and using dualities is a new skill.

You have probably seen these Feynman diagrams that sketch particle scattering processes? Each of these diagrams makes a contribution to an interaction process. The more loops a diagram has, the smaller its contribution. And so what physicists do is add up the largest contributions first, then the smaller ones, and even smaller ones, until they’ve reached the desired precision. This is called “perturbation theory,” and it only works if the contributions really do get smaller the more interactions take place. If that is so, the theory is said to be “weakly coupled” and all is well. If it ain’t so, the theory is said to be “strongly coupled,” and you’d never be done summing all the relevant contributions. If a theory is strongly coupled, the standard methods of particle physicists fail.
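The logic of weak versus strong coupling can be mimicked with a toy geometric series, where the term at order n stands in for the diagrams with n interactions. This is a caricature, not an actual quantum field theory calculation:

```python
def partial_sums(g, n_terms=30):
    """Partial sums of the toy series sum_n g**n, where the n-th term
    stands in for the diagrams with n interactions at coupling g."""
    total, sums = 0.0, []
    for n in range(n_terms):
        total += g**n
        sums.append(total)
    return sums

weak = partial_sums(0.1)    # "weakly coupled": terms shrink, the sum settles down
strong = partial_sums(2.0)  # "strongly coupled": terms grow, adding orders never converges

print(weak[-1])    # approaches 1 / (1 - 0.1) = 1.111...
print(strong[-1])  # keeps growing with every added order
```

With g below one, truncating the sum at a few orders already gives a good answer; with g above one, no truncation is ever close, which is the toy version of why perturbation theory fails for strongly coupled theories.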

The strong nuclear force for example has the peculiar property of “asymptotic freedom,” meaning it becomes weaker at high energies. But at low energies, it is very strong. Consequently, nuclear matter at low energies is badly understood, as is for example the behavior of the quark gluon plasma, or the reason why single quarks do not travel freely but are always “confined” to larger composite states. Another interesting case in this category is that of “strange” metals, which include high-temperature superconductors, another holy grail of physicists. The gauge-gravity duality helps deal with these systems because when one theory is strongly coupled and difficult to treat, the dual theory is weakly coupled and easy to treat. So the duality essentially serves to convert a difficult calculation into a simple one.

3. Where are we in the holographic universe?

Since the theory on the boundary and the theory in the bulk are related by the duality they can be used to describe the same physics. So on a fundamental level the distinction doesn’t make sense – they are two different ways to describe the same thing. It’s just that sometimes one of them is easier to use, sometimes the other.

One can give meaning to the question though if you look at particular systems, as for example the quark gluon plasma or a black hole, and ask for the number of dimensions that particles experience. This specification of particles is what makes the question meaningful because identifying particles isn’t always possible.

The theory for the quark gluon plasma is placed on the boundary because the plasma is described by the strongly coupled theory. So if you consider it part of your laboratory, then you have located the lab, with yourself in it, on the boundary. However, the notion of ‘dimensions’ that we experience is tied to the freedom of particles to move around. This can be made more rigorous with the definition of the ‘spectral dimension,’ which measures, roughly speaking, in how many directions a particle can get lost. But the very property that makes a system strongly coupled means that one can’t properly define single particles that travel freely. So while you can move around in the laboratory’s three spatial dimensions, the quark gluon plasma first has to be translated to the higher dimensional theory to even speak of individual particles moving. In that sense, part of the laboratory has become higher dimensional, indeed.
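The scaling behind the spectral dimension can be illustrated with the heat kernel in ordinary flat space: a diffusing particle returns to its starting point with probability P(t) proportional to t to the power of minus half the spectral dimension, so the dimension can be read off the log-log slope. A minimal sketch, for flat space only and nothing strongly coupled:

```python
import math

def return_probability(t, d):
    """Heat-kernel return probability for diffusion in d flat dimensions."""
    return (4 * math.pi * t) ** (-d / 2)

def spectral_dimension(t, d):
    """Read the spectral dimension off the scaling P(t) ~ t**(-d_s / 2),
    i.e. d_s = -2 * d(log P)/d(log t), via a small finite difference."""
    eps = 1e-6
    slope = (math.log(return_probability(t * (1 + eps), d))
             - math.log(return_probability(t, d))) / math.log(1 + eps)
    return -2 * slope

print(spectral_dimension(1.0, 3))  # recovers 3 for three flat dimensions
```

In flat space this just hands back the number of dimensions you put in; the definition becomes interesting precisely when the diffusion happens on something that is not flat space, where the answer can differ from the naive dimension.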

If you look at an astrophysical black hole, however, the situation is reversed. We know that particles in its vicinity are weakly coupled and experience three spatial dimensions. If you wanted to apply the duality in this case, then we would be situated in the bulk, and there would be lower-dimensional projections of us and the black hole on the boundary, constraining our freedom to move around, but in such a subtle way that we don’t notice. However, the bulk space-times relevant in the gauge-gravity duality are so-called Anti-de Sitter spaces, and these always have a negative cosmological constant. The universe we inhabit, to our best current knowledge, has a positive cosmological constant. So it is not clear that there actually is a dual system that can describe the black holes in our universe.

Many researchers are presently working on expanding the gauge-gravity duality to include spaces with a positive cosmological constant or none at all, but at least so far it isn’t clear that this works. So for now we do not know whether there exist projections of us in a lower-dimensional space-time.

4. How well does this duality work?

The applications of the gauge-gravity duality fall roughly into three large areas, plus a diversity of technical developments driving the general understanding of the theory. The three areas are the quark gluon plasma, strange metals, and black hole evaporation. In the former two cases our universe is on the boundary, in the latter we are in the bulk.

The studies of black hole evaporation are examinations of mathematical consistency, conducted to unravel just exactly how information may escape a black hole, or what happens at the singularity. In this area there are presently more answers than there are questions. The applications of the duality to the quark gluon plasma initially caused a lot of excitement, but recently some skepticism has spread. It seems that the plasma is not as strongly coupled as originally thought, and using the duality is not as straightforward as hoped. The applications to strange metals and other classes of materials are making rapid progress as both analytical and numerical methods are being developed. The behavior of several observables has been qualitatively reproduced, but it is at present not very clear exactly which systems are the best to use. The space of models is still too big to extract useful predictions. In summary, as scientists say, “more work is needed.”

5. Does this have something to do with Stephen Hawking's recent proposal for how to solve the black hole information loss problem?

That’s what he says, yes. Essentially he is claiming that our universe has holographic properties even though it has a positive cosmological constant, and that the horizon of a black hole also serves as a surface that contains all the information of what happens in the full space-time. This would mean in particular that the horizon of a black hole keeps track of what fell into the black hole, and so nothing is really forever lost.

This by itself isn’t a new idea. What is new in this work with Malcolm Perry and Andrew Strominger is that they claim to have a way to store and release the information in a dynamical situation. Details of how this is supposed to work, however, are so far not clear. By and large the scientific community has reacted with much skepticism, not to mention annoyance over the announcement of an immature idea.

[This post previously appeared at Starts with a Bang.]

Tuesday, August 25, 2015

Hawking proposes new idea for how information might escape from black holes

So I’m at this black hole conference in Stockholm, and at his public lecture yesterday evening, Stephen Hawking announced that he has figured out how information escapes from black holes, and he will tell us today at the conference at 11am.
As your blogger on location I feel a certain duty to leak information ;)

Extrapolating from the previous paper and some rumors, it’s something with AdS/CFT and work with Andrew Strominger, so likely to have some strings attached.

30 minutes to 11, and the press has arrived. They're clustering in my back, so they're going to watch me type away, fun.

10 minutes to 11, some more information emerges. There's a third person involved in this work: besides Andrew Strominger, also Malcolm Perry, who is sitting in the row in front of me. They started their collaboration at a workshop in Herefordshire at Easter 2015.

10 past 11. The Awaited is late. We're told it will be another 10 minutes.

11 past 11. Here he comes.

He says that he has solved a problem that has bothered people for 40 years, and so on. He now understands that information is stored on the black hole horizon in the form of "supertranslations," which were introduced in the early 1960s by Bondi, Metzner, and Sachs. This makes much sense because Strominger has been onto this recently. It occurred to Hawking in April, when listening to a talk by Strominger, that black hole horizons also have supertranslations. The supertranslations are caused by the ingoing particles.

That's it. Time for questions. Rovelli asking: Do supertranslations change the quantum state?

Just for the record, I don't know anything about supertranslations, so don't ask.

It's taking a long time for Hawking to compose a reply. People start mumbling. Everybody trying to guess what he meant. I can see that you can use supertranslations to store information, but don't understand how the information from the initial matter gets moved into other degrees of freedom. The only way I can see how this works is that the information was there twice to begin with.

Oh, we're now seeing Hawking's desktop projected by beamer. He is patching together a reply to Rovelli. Everybody seems confused.

Malcolm Perry mumbling that he'll give a talk this afternoon and explain everything. Good.

Hawking is saying (typing) that the supertranslations are a hologram of the ingoing particles.

It's painful to watch actually, seeing that I'm easily typing two paragraphs in the time he needs for one word :(

Yes, I figure he is saying the information was there twice to begin with. It's stored on the horizon in form of supertranslations, which can make a tiny delay for the emission of Hawking particles. Which presumably can encode information in the radiation.

Paul Davies asking if the argument goes through for de Sitter space or only asymptotically flat space. Hawking saying it applies to black holes in any background.

Somebody else asks if quantum fluctuations of the background will be relevant. 't Hooft answering with yes, but they have no microphone, I can't understand them very well.

I'm being told there will be an arxiv paper some time end of September probably.

Ok, so Hawking is saying in reply to Rovelli that it's an effect caused by the classical gravitational field. Now I am confused, because the gravitational field doesn't uniquely encode quantum states. It's something I myself have tried to use before. The gravitational field of the ingoing particles does always affect the outgoing radiation, in principle. The effect is exceedingly weak of course, but it's there. If the classical gravitational field of the ingoing particles could encode all the information about the ingoing radiation, then this alone would do away with the information loss problem. But it doesn't work. You can have two bosons of energy E on top of each other and arrange it so that they have the same classical gravitational field as one boson of twice this energy.

Rovelli nodding to my question (I think he meant the same thing). 't Hooft saying in reply that not all field configurations would be allowed. Somebody else saying there are no states that cannot be distinguished by their metric. This doesn't make sense to me because then the information was always present twice, already classically and then what would one need the supertranslations for?

Ok, so, end of discussion session, lunch break. We'll all await Malcolm Perry's talk this afternoon.

Update: After Malcolm Perry's talk, some more details have emerged. Yes, it is a purely classical picture, at least for now. The BMS group essentially provides classical black hole hair in the form of an infinite number of charges. Of course you don't really want an infinite number, you want a finite number that fits the Bekenstein-Hawking entropy. One would expect that this necessitates a quantized version (at least geometrically quantized, or with a finite phase-space volume). But there isn't one so far.

Neither is there, at this point, a clear picture for how the information gets into the outgoing radiation. I am somewhat concerned actually that once one looks at the quantum picture, the BMS charges at infinity will be entangled with charges falling into the black hole, thus essentially reinventing the black hole information problem.

Finally, to add some context to 't Hooft's remark, Perry said that since this doesn't work for all types of charges, not all models for particle content would be allowed, as for example information about baryon number couldn't be saved this way. He also said that you wouldn't have this problem in string theory, but I didn't really understand why.

Another Update: Here is a summary from Jacob Aron at New Scientist.

Another Update: A video of Hawking's talk is now available here.

Yet another update: Malcolm Perry will give a second, longer lecture on the topic tomorrow morning, which will be recorded and made available on the Nordita website.

Monday, August 24, 2015

Me, Elsewhere

A few links:

Friday, August 21, 2015

The origin of mass. Or, the pion’s PR problem.

Diagram depicting pion exchange
between a proton and a neutron.
Image source: Wikipedia.

When the discovery of the Higgs boson was confirmed by CERN, physicists cheered and the world cheered with them. Finally they knew what gave mass to matter, or so the headlines said. Except that most of the mass carried by matter doesn’t come courtesy of the Higgs. Rather, it’s a short-lived particle called the pion that generates it.

The pion is the most prevalent meson, composed of a quark and an anti-quark, and the reason you missed the headlines announcing its discovery is that it already took place in 1947. But the mechanism by which pions give rise to mass still holds some mysteries; notably, nobody really knows at which temperature it happens. In a recent PRL, researchers at RHIC in Brookhaven have taken a big step towards filling in this blank.
    Observation of charge asymmetry dependence of pion elliptic flow and the possible chiral magnetic wave in heavy-ion collisions
    STAR Collaboration
    Phys. Rev. Lett. 114, 252302 (2015)
    arXiv:1504.02175 [nucl-ex]
There are three variants of pions: one positively charged, one negatively charged, and one neutral. The mechanism of mass generation by pions is mathematically very similar to the Higgs mechanism, but it involves entirely different particles and symmetries, and an energy about one thousandth of that where the Higgs mechanism operates.

In contrast to the Higgs, which gives masses to elementary particles, the pion is responsible for generating most of the mass of the composite particles found in atomic nuclei, protons and neutrons, collectively called nucleons. If we only added up the masses of the elementary particles, the up and down quarks that they are made up of, we would get a badly wrong answer. Instead, much of the mass is in a background condensate, mathematically analogous to the Higgs field’s vacuum expectation value.
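Just how badly wrong the naive sum is can be checked with rough current-quark masses; the values below are approximate, rounded for illustration:

```python
# Approximate current-quark masses and proton mass, in MeV (rounded)
M_UP, M_DOWN = 2.2, 4.7
M_PROTON = 938.3

# A proton is made of two up quarks and one down quark (uud)
quark_sum = 2 * M_UP + M_DOWN
fraction = quark_sum / M_PROTON

print(f"sum of quark masses: {quark_sum:.1f} MeV")
print(f"that is only {fraction:.1%} of the proton mass")
```

The sum of the three quark masses comes to roughly one percent of the proton's mass; the other ninety-nine percent has to come from somewhere else, which is the condensate the text describes.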

The pion is the Goldstone boson of the “chiral symmetry” of the standard model, a symmetry that relates left-handed with right-handed particles. It is also one of the best examples of technical naturalness. The pion’s mass is suspiciously small, smaller than one would naively expect, and therefore technical naturalness tells us that we ought to find an additional symmetry when the mass is exactly zero. And indeed, it is the chiral symmetry that is recovered when the pions’ masses all vanish. The pions aren’t exactly massless because chiral symmetry isn’t an exact symmetry; after all, the Higgs does create masses for the quarks, even if they are only small ones.

Mathematically all this is well understood, but the devil is in the details. The breaking of chiral symmetry happens at an energy where the strong nuclear force is strong indeed. This is in contrast to the breaking of electroweak symmetry that the Higgs participates in, which happens at much higher energies. The peculiar nature of the strong force has it that the interaction is “asymptotically free,” meaning it gets weaker at higher energies. When it’s weak, it is well understood. But at low energies, such as close to chiral symmetry breaking, little can be calculated from first principles. Instead, one works on the level of effective models, such as one based on pions and nucleons rather than quarks and gluons.

We know that quarks cannot float around freely; they are always bound together into composites that mostly neutralize the “color charge” that makes quarks attract each other. This requirement that quarks form bound states at low energies is known as “confinement,” and exactly how it comes about is one of the big open questions in theoretical physics. Particle physicists deal with their inability to calculate it by using tables and various models for how the quarks find and bind each other.

The breaking of chiral symmetry, which gives mass to nucleons, is believed to take place at a temperature close to the transition at which quarks stop being confined. This deconfinement transition has been the subject of much interest and has led to some stunning insights about the properties of the plasma that quarks form when no longer confined. In particular, this plasma turned out to have much lower viscosity than originally believed, and the transition turned out to be much smoother than expected. Nature is always good for a surprise. But the chiral phase transition hasn’t attracted much attention, at least so far, though maybe this is about to change now.

These properties of nuclear matter cannot be studied in collisions of single highly energetic particles, like proton-proton collisions at the LHC. Instead, one needs to bring together as many quarks as possible, and for this reason one collides heavy nuclei, for example gold or lead nuclei. RHIC at Brookhaven is one of the places where these studies are done. The GSI in Darmstadt, Germany, another one. And the LHC also has a heavy ion program, another run of which is expected to take place later this year.

But how does one find out whether chiral symmetry is restored together with the deconfinement transition? It’s a tough question that I recall being discussed already when I was an undergraduate student. The idea that emerged over long debates was to make use of the coupling of chiral matter to magnetic fields.

The heavy ions that collide move at almost the speed of light, and they are electrically charged. Since moving charges create magnetic fields, this generically causes very strong magnetic fields in the collision region. The charged pions that are produced in large amounts when the nuclei collide couple to the magnetic field. And their coupling depends on whether or not they have masses, i.e., on whether chiral symmetry was restored or not. So the idea is to measure the distribution of charged pions that come out of the collision of the heavy nuclei, and from that infer whether chiral symmetry was restored and, ideally, what the transition temperature and the type of phase transition were.

So much for the theory. In practice, of course, it isn’t that easy to find out exactly what gave rise to the measured distribution. And so the recent results have to be taken with a grain of salt: even the title of the paper carefully speaks of a “possible” chiral magnetic wave rather than declaring it has been observed. It will certainly take more study to make sure they are really seeing chiral symmetry restoration and not something else. In any case, I find this an interesting development because it demonstrates that the method works, and I think this will be a fruitful research direction about which we will hear more in the future.

For me chirality has always been the most puzzling aspect of the standard model. It’s just so uncalled for. That molecules come in left-handed and right-handed variants, and that biology on Earth settled on mostly the left-handed ones, can be put down as a historical accident, one that might not even have repeated on other planets. But fundamentally, on the level of elementary particles, where would such a distinction come from?

The pion, I think, deserves a little bit more attention.

Tuesday, August 18, 2015

Tuesday distraction: New music video

When a few weeks ago someone in a waterpark jumped into my left side and cracked a rib, I literally got a crash course in human anatomy. I didn't really appreciate just how often we use our torso muscles, which I was now painfully made aware of with every move. I could neither cough, nor run, nor laugh without wincing. Sneezing was the worst, turning over in bed a nightmare. I also had to notice that one can't sing with a cracked rib. And so, in my most recent song, I've left the vocal track to Ed Witten.

If you want to know more about the history of string theory, I can recommend watching the full lecture, which is both interesting and well delivered.

The rib has almost healed, so please don't expect Ed to become a regular, though he does have an interesting voice.

Friday, August 14, 2015

Superfluid Dark Matter

A new paper proposes that dark matter may be a quantum fluid that has condensed in puddles to seed galaxies.

If Superfluid were a superhero, it would creep through the tiniest door slits and flow up the walls to freeze the evil villain to death. Few things are cooler than superfluids, an utterly fascinating state that some materials, such as helium, reach at temperatures close to absolute zero. Superfluid’s superpower is its tiny viscosity, which measures how strongly a medium sticks to itself and resists flowing. Honey has a larger viscosity than oil, which has a larger viscosity than water. And at the very end of this line, at almost vanishing viscosity, you find superfluids.

There are few places on Earth cold enough for superfluids to exist, and most of them are beleaguered by physicists. But outer space is cold and plentiful. It is also full of dark matter, whose microscopic nature has so far remained mysterious. In a recent paper (arXiv:1507.01019), two researchers from the University of Pennsylvania propose that dark matter might be puddles of a superfluid that condensed in the first moments of the universe, and then caught the matter we readily see with its gravitational pull.

This research reflects how much our understanding of quantum mechanics has changed in the century that has passed since its inception. Contrary to what our experience tells us, quantum mechanics is not a theory of the microscopic realm. We do not witness quantum effects with our own senses, but the reason is not that human anatomy is coarse and clumsy compared to the teensy configurations of electron orbits. The reason is that our planet is a dense and noisy place, warm and thriving with thermal motion. It is a place where particles constantly collide with each other, interact with each other, and disturb each other. We do not witness quantum effects not because they are microscopic, but because they are fragile and get easily destroyed. But at low temperatures, quantum effects can enter the macroscopic range. They could, in fact, span whole galaxies.

The idea that dark matter may be a superfluid has been proposed before, but it had some shortcomings that the new model addresses; it does so by combining the successes of existing theories, which cures several problems these theories have when looked at separately. The major question about the nature of dark matter is whether it is a type of matter, or whether it is instead a modification of gravity, MOG for short. Most of the physics community presently favors the idea that dark matter is matter, probably some kind of as-yet-unknown particle, and leaves gravity untouched. But a stubborn few have insisted on pursuing the idea that we can amend general relativity to explain our observations.

MOG, an improved version of the earlier MOdified Newtonian Dynamics (MOND), has some things going for it; it captures some universal relations that are difficult to obtain with dark matter. The velocities of stars orbiting the centers of galaxies, the galactic rotation curves, cannot be explained by visible matter alone but can be accounted for by adding dark matter. And yet, many of these curves can also be explained by stunningly simple modifications of the gravitational law. On the other hand, the simple modification of MOND fails for clusters of galaxies, where dark matter still has to be added, and requires some fudging to get the solar system right. It has been claimed that MOG fits the bill on all accounts, but at the expense of introducing more additional fields, which makes it look more and more like some type of dark matter.

Another example of an observationally found but unexplained connection is the Tully-Fisher relation between galaxies’ brightness and the velocity of the stars farthest away from the galactic center. This relation can be obtained with modifications of gravity, but it is hard to come by with dark matter. On the other hand, it is difficult to reproduce the separation of visible matter from dark matter, as seen for example in the Bullet Cluster, by modifying gravity. The bottom line is, sometimes it works, sometimes it doesn’t.

It adds to this that modifications of gravity employ dynamical equations that look rather peculiar and hand-made. For most particle physicists, these equations appear unfamiliar and ugly, which is probably one of the main reasons they have stayed away from it. So far.

In their new paper, Berezhiani and Khoury demonstrate that the modifications of gravity and dark matter might actually point to the same origin, which is a type of superfluid. The equations determining the laws of condensates like superfluids at lowest temperatures take forms that are very unusual in particle physics (they often contain fractional powers of the kinetic terms). And yet these are exactly the strange equations that appear in modified gravity. So Berezhiani and Khoury use a superfluid with an equation that reproduces the behavior of modified gravity, and end up with the benefits of both, particle dark matter and modified gravity.

Superfluids aren’t usually purely super; instead they are generally a mixture of a normally flowing component and a superfluid component. The ratio between these components depends on the temperature: the higher the temperature, the more dominant the normal component. In the new theory of superfluid dark matter, the temperatures can be determined from the observed spread of velocities in the dark matter puddles, putting galaxies at lower temperatures than clusters of galaxies. And so, while the dark matter in galaxies like our Milky Way is dominantly in the superfluid phase, the dark matter in galactic clusters is mostly in the normal phase. This model thus naturally explains why modified gravity works only on galactic scales, and should not be applied to clusters.

Moreover, on scales like that of our solar system, gravity is strong compared to the galactic average, which causes the superfluid to lose its quantum properties. This explains why we do not measure any deviations from general relativity in our own vicinity, another fact that is difficult to explain with the existing models of modified gravity. And since the superfluid is matter after all, it can be separated from the visible matter, and so it is not in conflict with the observables from colliding clusters of galaxies. In fact, it might fit the data better than single-particle dark matter because the strength of the fluid dark matter’s self-interaction depends on the fraction of normal matter and so depends on the size of the clusters.

Superfluids have another stunning property which is that they don’t like to rotate. If you try to make a superfluid rotate by spinning a bucket full of it, it just won’t. Instead it will start to form vortices that carry the angular momentum. The dark matter superfluid in our galaxy should contain some of these vortices, and finding them might be the way to test this new theory. But to do this, the researchers first have to calculate how the vortices would affect normal matter.

I find this a very interesting idea that has a lot of potential. Of course it leaves many open questions, for example how the matter formed in the early universe, so, as scientists always say: more work is needed. But if dark matter were a superfluid, that would be amazingly cool – a few milli-Kelvin, to be precise.

When it comes to superpowers, I’ll choose science over fiction anytime.

Monday, August 10, 2015

Dear Dr. Bee: Why do some people assume that the Planck length/time are the minimum possible length and time?

“Given that the Planck mass is nothing like a minimum possible mass, why do some people assume that the Planck length/time are the minimum possible length and time? If we have no problem with there being masses that are (much) smaller than the Planck mass, why do we assume that the Planck length and time are the fundamental units of quantisation? Why could there not be smaller distances and times, just as there are smaller masses?”
This question came from Brian Clegg, who has written a stack of popular science books, who is also to be found on Twitter, and who uses the same Blogger template as I do.

Dear Brian,

Before I answer your question, I first have to clarify the relation between the scales of energy and distance. We test structures by studying their interaction with particles, for example in a light microscope. In this case the shortest structures we can resolve are of the order of the wavelength of the light. This means that if we want to probe increasingly shorter distances, we need shorter wavelengths. The wavelength of light is inversely proportional to the light’s energy, so we have to reach higher energies to probe shorter distances. An x-ray microscope, for example, can resolve finer structures than a microscope that uses light in the visible range.
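If you want to check this inverse relation with numbers, the conversion is simply λ = hc/E. The energies below are my own illustrative textbook values, not figures from the question:

```python
# Photon wavelength from energy: lambda = h*c / E.
# Back-of-the-envelope numbers for illustration.

HC_EV_NM = 1239.84  # h*c in eV·nm (CODATA value, rounded)

def wavelength_nm(energy_ev: float) -> float:
    """Photon wavelength in nanometers for a given photon energy in eV."""
    return HC_EV_NM / energy_ev

visible = wavelength_nm(2.0)      # a ~2 eV photon: red visible light
xray = wavelength_nm(10_000.0)    # a ~10 keV photon: a hard x-ray

print(f"2 eV photon:   {visible:.1f} nm")
print(f"10 keV photon: {xray:.4f} nm")
```

A factor of 5000 in energy buys a factor of 5000 in resolution – which is why the x-ray microscope beats the visible-light one.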

That the resolution of shorter distances requires higher energies isn’t only so for measurements with light. Quantum mechanics tells us that all particles have a wavelength, and the same applies to measurements made by interactions with electrons, protons, and all other particles. Indeed, using heavier particles is advantageous for reaching short wavelengths, which is why electron microscopes reveal more details than light microscopes. Besides this, electrically charged particles are easier to speed up to high energies than neutral ones.

The reason we build particle colliders is because of this simple relation: larger energies resolve shorter distances. Your final question thus should have been “Why could there not be smaller distances [than the Planck length], just as there are larger masses?”

One obtains the Planck mass, length, and time when one estimates the scale at which quantum gravity becomes important, that is, when the quantum uncertainty in the curvature of space-time would noticeably distort the measurement of distances. One then obtains a typical curvature, and a related energy density, at which quantum gravity becomes relevant. And this scale is given by the Planck scale, an energy of about 10^15 times that of the LHC, or 10^-35 meters respectively. This is such a tiny distance/high energy that we cannot reach it with particle colliders, not now and not anytime soon.
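These numbers are easy to reproduce from the fundamental constants alone. A rough sketch, where the ~13 TeV LHC collision energy is my own round value for comparison:

```python
import math

# Planck scale from fundamental constants (SI CODATA values).
hbar = 1.054571817e-34  # reduced Planck constant, J·s
G = 6.67430e-11         # Newton's constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)        # ~1.6e-35 m
m_planck = math.sqrt(hbar * c / G)           # ~2.2e-8 kg, i.e. ~1e-5 g
E_planck_GeV = m_planck * c**2 / 1.602176634e-10  # 1 GeV = 1.602e-10 J

E_lhc_GeV = 1.3e4  # ~13 TeV, assumed LHC collision energy for comparison
print(f"Planck length: {l_planck:.2e} m")
print(f"Planck energy: {E_planck_GeV:.2e} GeV, "
      f"about {E_planck_GeV / E_lhc_GeV:.0e} times the LHC")
```

The same square roots also give the Planck mass of about 10^-5 grams that comes up further down in this post.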

“Why do some people assume that the Planck length/time are the minimum possible length and time?”

It isn’t so much assumed as it is a hint we take from estimates, thought experiments, and existing approaches to quantum gravity. It seems that the very definition of the Planck scale – it being where quantum gravity becomes relevant – prevents us from resolving structures any shorter than this. Quantum gravity implies that space and time themselves are subject to quantum uncertainty and fluctuations, and once you reach the Planck scale these uncertainties of space-time add to the quantum uncertainties of the matter in space-time. But since gravity becomes stronger at higher energy density, going to even higher energies no longer improves the resolution, it just causes stronger quantum gravitational effects.
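A common way to make this heuristic concrete is the so-called generalized uncertainty principle, Δx ≈ ħ/Δp + l_P²·Δp/ħ, where the second term stands for the quantum gravitational distortion. This is a toy model often used in the literature, not something derived here; a small numerical scan shows that the achievable resolution bottoms out near the Planck length instead of improving forever:

```python
# Toy model: generalized uncertainty principle
#   delta_x ~ hbar/delta_p + l_P^2 * delta_p / hbar
# The second (gravity) term GROWS with energy, so delta_x has a minimum.

l_p = 1.616e-35         # Planck length in meters
hbar = 1.054571817e-34  # reduced Planck constant, J·s

def delta_x(delta_p):
    """Position uncertainty for a given momentum uncertainty (kg·m/s)."""
    return hbar / delta_p + l_p**2 * delta_p / hbar

# Scan momentum uncertainties around the Planck momentum hbar/l_P:
p_planck = hbar / l_p
best = min(delta_x(p_planck * 10**k) for k in range(-3, 4))
print(f"best resolution ~ {best:.2e} m (compare 2*l_P = {2 * l_p:.2e} m)")
```

Cranking the momentum up past the Planck momentum makes the gravity term dominate, so the resolution gets worse again – exactly the behavior described in the paragraph above.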

You have probably heard that nobody yet knows what the complete theory of quantum gravity looks like, because quantizing gravity can’t be done the same way as one quantizes the other interactions – this just leads to meaningless infinite results. However, a common way to tame infinities is to cut off integrals or to discretize the space one integrates over. In both cases this can be done by introducing a minimal length or a maximal energy scale. And since the Planck length and Planck energy are the only scales that we know of as far as quantum gravity is concerned, it is often assumed that they play the role of this “regulator” that deals with the infinities.

More important than this general expectation though is that indeed almost all existing approaches to quantum gravity introduce a short-distance regulator in one way or the other. String theory resolves short distances by getting rid of point particles and instead deals only with extended objects. In Loop Quantum Gravity, one has found a smallest unit of area and of volume. Causal Dynamical Triangulation is built on the very idea of discretizing space-time in finite units. In Asymptotically Safe Gravity, the resolution of distances below the Planck length seems prohibited because the strength of gravity depends on the resolution. Moreover, the deep connections that have been discovered between thermodynamics and gravity also indicate that Planck length areas are smallest units in which information can be stored.

“Why do we assume that the Planck length and time are the fundamental units of quantisation?”

This isn’t commonly assumed. It is assumed in Loop Quantum Gravity, and a lot of people don’t think this makes sense.

“Why could there not be smaller distances and times, just as there are larger masses?”

The question isn’t so much whether there are distances shorter than the Planck length (this question might not even be meaningful), but whether you can ever measure anything shorter than it. And it’s this resolution of short distances that seems to be spoiled by quantum gravitational effects.

There is a very relevant distinction you must draw here, which is that in most cases physicists speak of the Planck length as a “minimum length scale,” not just a “minimum length.” The reason is that not every quantity with the dimension of a length is the length of something, and not every quantity with the dimension of an energy is the energy of something. It is not the distance itself that matters but the space-time curvature, which has the dimension of an inverse distance squared. So if a physicist says “quantum gravity becomes important at the Planck length,” what they actually mean is “quantum gravity becomes important when the curvature is of the order of an inverse Planck length squared.” Or “energy densities in the center-of-mass frame are of the order of the Planck mass to the fourth power.” Or at least that’s what they should mean...

And so, that the Planck length acts as a minimal length scale means that particle collisions that cram increasingly more energy into smaller volumes will not reveal any new substructures once past Planckian curvature. Instead they may produce larger extended objects such as (higher-dimensional) black holes or stringballs and other exotic things. But note that in all these cases it is perfectly possible to exceed Planckian energies; this is referred to as “superplanckian scattering.”

Some models, such as deformed special relativity, do have an actual maximal energy. In this case the maximum energy is that of an elementary particle, which seems okay since we have never seen an elementary particle with an energy exceeding the Planck energy. In these models it is assumed that the upper bound for energies does not apply to composite particles, which is probably what you had in mind. The Planck energy is a huge energy for an elementary particle to have, but its mass equivalent of 10^-5 grams is tiny on every-day scales. Just exactly how it should come about that in these models the maximum energy bound does not apply to composite objects is not well understood, and some people, such as I, believe that it isn’t mathematically consistent.

Thanks for an interesting question!

Saturday, August 08, 2015

To the women pregnant with my children: Here is what to expect [Totally TMI – Proceed on your own risk]

Last year I got a strange email from a person entirely unknown to me, letting me know that one of their acquaintances seemed to be pretending an ultrasound image from my twin pregnancy was their own. They sent along the following screen capture, which shows a collection of ultrasound images. It is immediately obvious that these images were not taken with the same device, as they differ in contrast and color scheme. It seems exceedingly unlikely you would get this selection of ultrasound images from one screening.

In comparison, here is my ultrasound image at 14 weeks pregnancy, taken in July 2010:

You can immediately see that the top right image from the stranger is my ultrasound image, easily recognizable by the structure in the middle that looks like an upside-down V. The header containing my name is cropped. I don’t know where the other images came from, but I’d put my bets on Google.

I didn’t really know what to make of this. Why would some strange woman pretend my ultrasound images are hers? Did she fake being pregnant? Was she indeed pregnant but didn’t have ultrasound images? Did she just not like her own images?

My ultrasound images were tiny, glossy printouts, and to get them online I first had to take a high resolution photo of the image, straighten it, remove reflections, turn up contrast and twiddle some other software knobs. I’m not exactly an award-winning photoshopper, but of the images that Google brings up, mine is one of those with the highest resolution.

So maybe somebody just wanted to save time, thinking ultrasound images all look alike anyway. Well, they don’t. Truth be told, to me reading an ultrasound is somewhat like reading tea leaves, and I’m a coffee drinker. But the days in which ultrasound images all looked alike are long gone. If you do an inverse image search, it identifies my ultrasound flawlessly. And then there’s the upside-down V that my doctor said was the cord, which might or might not be correct.

The babies are not a boy and a girl, as is claimed in the caption of the screenshot; they are two girls with separate placentas. With two placentas, the twins might be fraternal – stemming from two different eggs – or identical – stemming from the same egg that divided early on. We didn’t know they were two girls, though, until 20 weeks, at which age you should be able to see the dangling part of the genitals, if there is one.

If I upload an image to my blog, I do not mind it being used by other people. What irked me wasn’t that somebody used my image, but that they implicitly claimed my experience was theirs.

In any case, I forgot all about this bizarre story until last week, when I got another note from a person I don’t know, alerting me that somebody else is going around pretending to carry my children. Excuse me if I didn’t make too much of an effort in blurring out the picture of the supposedly pregnant woman.

This case is even more bizarre as I’ve been told the woman apparently had her uterus removed and is claiming the embryos have attached to other organs. Now, it is indeed possible that a fertilized egg implants outside the uterus and the embryo continues to grow, sometimes for several months. The abdomen for example has good blood circulation that can support a pregnancy for quite a while. Sooner or later though the supply of nutrients and oxygen will become insufficient, and the embryo dies, triggering a miscarriage. That’s a major problem because if the pregnancy isn’t in the uterus the embryo has no exit through which to leave. Such out-of-place pregnancies are medical emergencies and, if not discovered early on, normally end fatally for the mother: Even if the dead embryo can be surgically removed, the placenta has grown into the abdomen and cannot detach the way it can cleanly separate from the rather special lining of the uterus, resulting in heavy inner bleeding and, often, death.

Be that as it may, if you’ve had your uterus removed you can’t get pregnant because the semen has no way to fertilize an egg.

I do not have the faintest clue why somebody would want to fake a twin pregnancy. But then the internet seems to breed what I want to call, in absence of a better word, “experience theft”. Some people pretend to suffer from an illness they don’t have, to have traveled to places they’ve never been, or to have grown up as members of a minority when they didn’t. Maybe pretending to be pregnant with twins is just the newest trend.

Well, ladies, so let me tell you what to expect, so you will get it right. At 20 weeks you’ll start getting preterm contractions, several hours a day, repeating stoically every 10 minutes. They’ll turn out to be what is called “not labor active”, pushing inwards but not downwards, still damned painful. Doctors warn that you’ll have a preterm delivery and issue a red flag: No sex, no travel, no exercise for the rest of the pregnancy.

At 6 months your bump will have reached the size of a full-term single pregnancy, but you still have 3 months to go. People start making cheerful remarks that you must be almost due! Your cervix has started to shorten, and it is highly recommended you stay in bed with your hips elevated, so you’ll go on sick leave following the doctor’s advice. The allegedly so awesome Swedish health insurance will later refuse to cover this, and you’ll lose two months’ worth of salary.

By 7 months your cervix has shortened to 1 cm and the doctors get increasingly nervous. By 8 months it’s dilated 1 cm. You’re now supposed to visit your doctor every day. Every day they record your contractions, which still come, “not labor active,” in 10-minute intervals. They still do when you’ve reached full term, at which point you’ll start developing a nasty kidney problem accompanied by substantial water retention. And so, after warning you of a preterm delivery for 4 months, the doctors now insist that you have labor induced.

Once in the hospital, they put you on Cytotec, which after 36 hours hasn’t had any effect other than making you even more miserable. But since the doctors expect that you will need a Cesarean section eventually, they don’t want you to eat. After 48 hours of mostly lying in bed, not being allowed to eat more than cookies – while being 9 months pregnant with twins! – your blood pressure will give in and one of the babies’ heartbeats will drop from a steady 140 to 90. And then it’s entirely gone. An electronic device starts beeping wildly, a nurse pushes a red button, and suddenly you will find yourself with an oxygen mask on your face and an Epinephrine shot in your vein. You use the situation to yell at a doctor to stop the Cytotec nonsense and put you on Pitocin, which they promise to do the next morning.

The next morning you finally get your epidural and the Pitocin does its work. Within an hour you’ll go from 1 cm to 8 cm dilation. Your waters will never break – a midwife will break them for you. Both. The doctor insists on shaving off your hair “down there,” because he still expects you’ll need a Cesarean. These days, you don’t deliver twins naturally any more, is the message you get. Eventually, after eternity has come and gone, somebody will ask you to push. And push you will, 5 times for two babies.

I have no scars and I have no stretch marks. The doctor never got to use his knife. I’m living proof you don’t need a Cesarean to give birth to twins. The children whose ultrasound image you’ve used are called Lara Lily and Gloria Sophie. At birth, they had a low weight, but full Apgar score. They are now 4 years old, beat me at memory, and their favorite food is meatballs.


If there are two cases that have been brought to my attention that involve my images, how many of these cases are there in total?

Update: Read comments for some more information about the first case.

Wednesday, August 05, 2015

No, you cannot test quantum gravity by entangling mirrors

Last week I did an interview for a radio program named “Equal Time for Freethought,” which wasn’t as left-esoteric as it sounds. I got to talk about quantum gravity, black holes, the multiverse, and everything else that is cool in physics. I was also asked why I spend time cleaning up after science news outlets that proclaim the most recent nonsense about quantum gravity, to which I didn’t have a good answer other than that I am convinced only truth will set us free. We have nothing to gain from convincing the public that quantum gravity is any closer to being tested than it really is, which is very, very far away.

The most recent example of misleading hype comes, depressingly, from an otherwise reliable news outlet, the Physical Review A synopsis where Michael Shirber explains that
“A proposed interferometry experiment could test quantum gravity theories by entangling two mirrors weighing as much as apples.”

A slight shortcoming of this synopsis is that it does not explain which theories the experiment is supposed to test. Neither, for that matter, does the author of the paper, Roman Schnabel, mention any theory of quantum gravity that his proposed experiment would test. He merely uses the words “quantum” and “gravity” in the introduction and the conclusion. One cannot even blame him for overstating his case, because he makes no case.

Leaving aside the introduction and conclusion, the body of Schnabel’s paper (which doesn’t seem to be on the arXiv, sorry) is a detailed description of an experiment to entangle two massive mirrors using photon pressure, which means that one creates macroscopically heavy objects displaying true quantum effects. The idea is that you first let a laser beam go through a splitter to produce an entangled photon state, and then transfer momentum from the photons to the mirrors by letting the photons bounce off them. In this way you entangle the mirrors, or at least some of the electrons in some of their atomic orbits.

Similar experiments have been done before (and have previously been falsely claimed to test quantum gravity), but Schnabel proposes an improved experiment that he estimates to be less noise-sensitive which could thus increase the time for which the entanglement can be maintained and the precision by which it can be measured.

Now as I’ve preached numerous times, just because an experiment has something to do with quantum and something to do with gravity doesn’t mean it’s got something to do with quantum gravity. Quantum gravity is about the quantization of the gravitational field. It’s about quantum properties of space and time themselves, not about quantum properties of stuff in space-time. Schnabel’s paper is remarkable in that it doesn’t even bring in unquantized gravity. It merely discusses mass and quantization – no quantum gravity in sight anywhere.

The closest that the paper gets to having something to do with quantum gravity is mentioning the Schrödinger-Newton equation, which is basically the exact opposite of quantum gravity – it’s based on the idea that gravity remains unquantized. And then there are two sentences about Penrose’s hypothesis of gravitationally induced decoherence. Penrose’s idea is that gravity is the reason why we never observe quantum effects on large objects – such as cats that can’t decide whether to die or to live – and his model predicts that this influence of gravity should prevent us from endowing massive objects with quantum properties. It would be nice to rule out this hypothesis, but there is no discussion in Schnabel’s paper of how the proposed experiment would compare to existing bounds on gravitationally induced decoherence or the Schrödinger-Newton equation. (I contacted the author and asked for details, but got no reply.)

Don’t get me wrong, I think this is a worthwhile experiment, and one should definitely push tests of quantum mechanics on macroscopically large and heavy objects. But this is interesting for the purpose of testing the foundations of quantum mechanics, not for the purpose of testing quantum gravity.

And the weight of the mirrors, btw, is about 100 g. If the American Physical Society can’t trust its readers to know how much that is, so that one has to paraphrase it with “weighing as much as apples,” then maybe they shouldn’t be discussing quantum gravity.

Altogether, the paper is a sad demonstration of the utter disconnect between experiment and theory in quantum gravity research.

Monday, August 03, 2015

Dear Dr. B: Can you make up anything in theoretical physics?

“I am a phd-student in neuroscience and I often get the impression that in physics "everything is better". E.g. they replicate their stuff, they care about precision, etc. I've always wondered to what extend that is actually true, as I obviously don't know much about physics (as a science). I've also heard (but to a far lesser extent than physics being praised) that in theoretical physics you can make up anything bc there is no way of testing it. Is that true? Sorry if that sounds ignorant, as I said, I don't know much about it.”

This question was put forward to me by Amoral Atheist at Neuroskeptic’s blog.

Dear Amoral Atheist:

I appreciate your interest because it gives me an opportunity to lay out the relation of physics to other fields of science.

About the first part of your question. The uncertainty in data is very much tied to the objects of study. Physics is such a precise science because it deals with objects whose properties are pretty much the same regardless of where or when you test them. The more you take apart stuff, the simpler it gets, because to our best present knowledge we are made of only a handful of elementary particles, and these few particles are all alike – the electrons in my body behave exactly the same way as the electrons in your body.

If the objects of study get larger, there are more ways the particles can be combined and therefore more variation in the objects. As you go from elementary particle physics to nuclear and atomic physics to condensed matter physics, then chemistry and biology and neuroscience, the variety in construction becomes increasingly important. It is more difficult to reproduce a crystal than it is to reproduce a hydrogen atom, and it is even more difficult to reproduce cell cultures or tissue. As variety increases, expectations for precision and reproducibility go down. This is the case already within physics: Condensed matter physics isn’t as precise as elementary particle physics.

Once you move past the size range where the messy regime of human society lies, things become easier again. Planets, stars, or galaxies as a whole can be described with high precision too, because for them the details (of, say, organisms populating the planets) don’t matter much.

And so the standards for precision and reproducibility in physics are much higher than in any other science, not because physicists are smarter or more ambitious, but because the standards can be higher. Lower standards for statistical significance in other fields are nothing that researchers should be blamed for; they come with the data.

It is also the case, though, that since physicists have been dealing with statistics and experimental uncertainty at such high precision for hundreds of years, they sometimes roll their eyes about erroneous handling of data in other sciences. It is for example a really bad idea to only choose a way to analyze data after you have seen the results, and you should never try several methods until you find a result that crosses whatever level of significance is standard in your field. In that respect I suppose it is true that in physics “everything is better,” because the training in statistical methodology is more rigorous. In other words, one is led to suspect that the trouble with reproducibility in other fields of science is partly due to preventable problems.

About the second part of your question. The relation between theoretical physics and experimental physics goes both ways. Sometimes experimentalists have data that needs a theory to explain it. And sometimes theorists have come up with a theory that they need new experimental tests for. This way, theory and experiment evolve hand in hand. Physics, as any other science, is all about describing nature. If you make up a theory that cannot be tested, you’re just not doing very interesting research, and you’re not likely to get a grant or find a job.

Theoretical physicists, when they “make up theories,” are not free to just do whatever they like. The standards in physics are high, both in experiment and in theory, because there are so many data that are known so precisely. New theories have to be consistent with all the high-precision data that we have accumulated over hundreds of years, and theories in physics must be cast in the form of mathematics; this is an unwritten rule, but one that is rigorously enforced. If you come up with an idea and are not able to formulate it in mathematical terms, nobody will take you seriously and you will not get published. This is for good reasons: Mathematics has proved to be an enormously powerful way to ensure logical coherence and prevent humans from fooling themselves by wishful thinking. A theory lacking a mathematical framework is today considered very low standard in physics.

The requirements that new theories both be in agreement with all existing data and be mathematically consistent – i.e. do not lead to internal disagreements or ambiguities – are not easy to fulfil. Just how hard it is to come up with a theory that improves on the existing ones and meets these requirements is almost always underestimated by people outside the field.

There is for example very little that you can change about Einstein’s theory of General Relativity without ruining it altogether. Almost everything that you can imagine doing to its mathematical framework has dire consequences that lead to either mathematical nonsense or to crude conflict with data. Something as seemingly innocuous as giving a tiny mass to the normally massless carrier field of gravity can entirely spoil the theory.

Of course there are certain tricks you can learn that help you invent new theories that are not in conflict with data and are internally consistent. If you want to invent a new particle for example, as a rule of thumb you better make it very heavy or make it very weakly interacting, or both. And make sure you respect all known symmetries and conservation laws. You also better start with a theory that is known to work already and just twiddle it a little bit. In other words, you have to learn the rules before you break them. Still, it is hard and new theories don’t come easily.

Dark matter is a case in point. Dark matter was first spotted in the 1930s. 80 years later, after the work of tens of thousands of physicists, we have but a dozen possible explanations for what it may be that are now subject to further experimental tests. If it were true that in theoretical physics you “can make up anything,” we’d have hundreds of thousands of theories for dark matter! It turns out though that most ideas don’t meet the standards, and so they are discarded very quickly.

Sometimes it is very difficult to test a new theory in physics, and it can take a lot of time to find out how to do it. Pauli for example invented a particle, now called the “neutrino,” to explain some experiments that physicists were confused about in the 1930s, but it took almost three decades to actually find a way to measure this particle. Again this is a consequence of just how much physicists know already. The more we know, the more difficult it becomes to find unexplored tests for new ideas.

It is certainly true that some theories that have been proposed by physicists are so hard to test they are almost untestable, like for example parallel universes. These are extreme outliers though and, as I have complained earlier, that they are featured so prominently in the press is extremely misleading. There are few physicists working on this and the topic is very controversial. The vast majority of physicists work in down-to-earth fields like plasma physics or astroparticle physics, and have no business with the multiverse or parallel universes (see my earlier post “What do most physicists work on?”). These are thought-stimulating topics, and I find it interesting to discuss them, but one shouldn’t mistake them for being central to physics.

Another confusion that often comes up is the relevance of physics to other fields of science, and the discussion at Neuroskeptic’s blog post is a sad example. It is perfectly okay for physicists to ignore biology in their experiments, but it is not okay for biologists to ignore physics. This isn’t so because physicists are arrogant; it is because physics studies objects in their simplest form, when their more complicated behavior doesn’t play a role. But the opposite is not the case: The simple laws of physics don’t just go away when you get to more complicated objects, they still remain important.

For this reason you cannot just go and proclaim that human brains somehow exchange signals and store memory in some “cloud,” because there is no mechanism, no interaction, by which this could happen that we wouldn’t already have seen. No, I'm not narrow-minded, I just know how hard it is to find an unexplored niche in the known laws of nature to hide some entirely new effect that has never been observed. Just try yourself to formulate a theory that realizes this idea, a theory which is both mathematically consistent and consistent with all known observations, and you will quickly see that it can’t be done. It is only when you discard the high-standard requirements of physics that you really can “make up anything.”

Thanks for an interesting question!