Tuesday, October 06, 2015

Repost in celebration of the 2015 Nobel Prize in Physics: Neutrino masses and angles

It was just announced that this year's Nobel Prize in physics goes to Takaaki Kajita from the Super-Kamiokande Collaboration and Arthur B. McDonald from the Sudbury Neutrino Observatory (SNO) Collaboration “for the discovery of neutrino oscillations, which shows that neutrinos have mass.” On this occasion, I am reposting a brief summary of the evidence for neutrino masses that I wrote in 2007.

Neutrinos come in three known flavors. These flavors correspond to the three charged leptons: the electron, the muon and the tau. A neutrino's flavor can change while it travels, one flavor periodically converting into another. These flavor oscillations have a certain wavelength and an amplitude, where the amplitude sets the probability for the conversion to happen. The amplitude is usually quantified by a mixing angle θ; sin²(2θ) = 1, or θ = π/4, corresponds to maximal mixing, which means one flavor changes completely into another, and then back.

This neutrino mixing happens when the mass eigenstates of the Hamiltonian are not the same as the flavor eigenstates. The wavelength λ of the oscillation turns out to depend (in the relativistic limit) on the difference in the squared masses Δm² (not the square of the difference!) and the neutrino's energy E as λ = 4πE/Δm². The larger the energy of the neutrinos, the larger the wavelength. For a source with a spectrum of different energies around some mean value, one has a superposition of various wavelengths. On distances larger than the typical oscillation length corresponding to the mean energy, this will average out the oscillation.
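For two flavors, these quantities combine into the familiar survival probability P(ν_e → ν_e) = 1 − sin²(2θ) sin²(Δm²L/4E). Here is a minimal sketch of how this looks in the units experimentalists typically use; the numbers are merely illustrative, roughly in the KamLAND ballpark, not the collaboration's fit values:

```python
import numpy as np

# Two-flavor survival probability in "experimentalist's units":
# dm2 in eV^2, baseline L in km, energy E in GeV.
def survival_probability(L_km, E_GeV, dm2_eV2=8e-5, sin2_2theta=0.86):
    # P(nu_e -> nu_e) = 1 - sin^2(2θ) sin^2(1.27 Δm² L / E)
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

E = 0.004                          # ~4 MeV reactor anti-neutrinos (assumed value)
for L in (50, 100, 180, 300):      # baselines in km
    print(L, survival_probability(L, E))

# The oscillation wavelength λ = 4πE/Δm², which in these units is π E / (1.27 Δm²):
print("lambda ~", np.pi * E / (1.27 * 8e-5), "km")
```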

The plot below from the KamLAND Collaboration shows an example of an experiment to test neutrino flavor conversion. The KamLAND neutrino sources are several Japanese nuclear reactors that emit electron anti-neutrinos with a very well known energy and power spectrum, with a mean value of a few MeV. The average distance to the reactors is ~180 km. The plot shows the ratio of the observed electron anti-neutrinos to the number expected without oscillations. The KamLAND result is the red dot. The other data points are from earlier experiments at other locations that did not find a drop. The dotted line is the best fit to this data.

[Figure: KamLAND Collaboration]

One sees however that there is some degeneracy in this fit, since one can shift around the wavelength and stay within the error bars. These reactor data however are only one of the measurements of neutrino oscillations that have been made during the last decades. There are a lot of other experiments that have measured deficits in the expected solar and atmospheric neutrino flux. Especially important in this regard was the SNO data, which confirmed that not only were there fewer solar electron neutrinos than expected, but that they actually showed up in the detector with a different flavor, and the KamLAND analysis of the energy spectrum, which clearly favors oscillation over decay.

The plot below depicts all the currently available data for electron neutrino oscillations, which places the mass-square difference at around 8×10⁻⁵ eV², and θ at about 33.9° (i.e. the mixing is with high confidence not maximal).

[Figure: Hitoshi Murayama, see here for references on the used data]

The lines on the top indicate regions excluded by earlier experiments, the filled regions are allowed values. You see the KamLAND 95% CL area in red, and SNO in brown. The remaining island in the overlap is pretty tightly constrained by now. Given that neutrinos are such elusive particles, and this mass scale is incredibly tiny, I am always impressed by the precision of these experiments!

To fit the oscillations between all three known neutrino flavors, one needs three mixing angles and two mass differences (the overall mass scale factors out and does not enter; neutrino oscillations are thus not sensitive to the total neutrino masses). All the presently available data has allowed us to tightly constrain the mixing angles and mass-square differences. The only outlier (which was thus excluded from the global fits) is famously LSND (see also the above plot), so MiniBooNE was designed to check on their results. For more info on MiniBooNE, see Heather Ray's excellent post at CV.

This post originally appeared in December 2007 as part of our advent calendar A Plottl A Day.

Friday, October 02, 2015

Book Review: “A Beautiful Question” by Frank Wilczek

A Beautiful Question: Finding Nature's Deep Design
By Frank Wilczek
Penguin Press (July 14, 2015)

My four-year-old daughter recently discovered that equilateral triangles combine to larger equilateral triangles. When I caught a distracted glimpse of her artwork, I thought she had drawn the baryon decuplet, an often-used diagram to depict relations between particles composed of three quarks.

The baryon decuplet doesn’t come easy to us, but the beauty of symmetry does, and how amazing that physicists have found it tightly woven into the fabric of nature itself: Both the standard model of particle physics and General Relativity, our currently most fundamental theories, are in essence mathematically precise implementations of symmetry requirements. But besides being instrumental for the accurate description of nature, the appeal of symmetries is a human universal that resonates in art and design throughout cultures. For the physicist, it is impossible not to note the link, not to see the equations behind the art. It may be a curse or it may be a blessing.

For Frank Wilczek it clearly is a blessing. In his most recent book, “A Beautiful Question,” he tells the success story of symmetries in physics, and goes on to answer his question whether “the world embodies beautiful ideas” with a clear “Yes.”

Lara’s decuplet
Wilczek starts from the discovery of basic mathematical relationships like Pythagoras’ theorem (not shying away from explaining how to prove it!) and proceeds through the history of physics along selected milestones such as musical harmonies, the nature of light and the basics of optics, Newtonian gravity and its extension to General Relativity, quantum mechanics, and ultimately the standard model of particle physics. He briefly touches on condensed matter physics, graphene in particular, and has an interesting digression about the human eye’s limited ability to decode visual information (yes, the shrimp again).

In the last chapters of the book, Wilczek goes into quite some detail about the particle content of the standard model, and in just which way it seems to be not as beautiful as one may have hoped. He introduces the reader to extended theories, grand unification and supersymmetry, invented to remedy the supposed shortcomings of the standard model. The reader who is not familiar with the quantum numbers used to classify elementary particles will likely find this chapter somewhat demanding. But whether or not one makes the effort to follow the details, Wilczek gets his message across clearly: Striving for beauty in natural law has been a useful guide, and he expects it to remain one, even though he is careful to note that relying on beauty has on various occasions led to plainly wrong theories, such as the attempt to explain planetary orbits with the Platonic solids, or the idea to develop a theory of atoms based on the mathematics of knots.

“A Beautiful Question” is a skillfully written reflection, or “meditation” as Wilczek puts it. It is well structured and accompanied by many figures, including two inserts with color prints. The book also contains an extensive glossary, recommendations for further reading, and a timeline of the discoveries mentioned in the text.

My husband’s decuplet.
The content of the book is unique in the genre. David Goldberg’s book “The Universe in the Rearview Mirror: How Hidden Symmetries Shape Reality,” for example, also discusses the role of symmetries in fundamental physics, but Wilczek gives more space to the connection between aesthetics in art and science. “A Beautiful Question” picks up and expands on the theme of Steven Weinberg’s 1992 book “Dreams of a Final Theory,” which also expounded the relevance of beauty in the development of physical theories. More than 20 years have passed, but the dream is still as elusive today as it was back then.

For all his elaboration on the beauty of symmetry though, Wilczek’s book falls short of spelling out the main conundrum physicists face today: We have no reason to be confident that the laws of nature which we have yet to discover will conform to the human sense of beauty. Neither does he spend many words on aspects of beauty beyond symmetry; Wilczek only briefly touches on fractals, and never goes into the rich appeal of chaos and complexity.

My mother used to say that “symmetry is the art of the dumb,” which is maybe a somewhat too harsh criticism of the standard model, but seeing that reliance on beauty has not helped us within the last 20 years, maybe it is time to consider that the beauty of the answers might not reveal itself as effortlessly as does the tiling of the plane to a four-year-old. Maybe the inevitable subjectivity in our sense of aesthetic appeal that has served us well so far is about to turn from a blessing to a curse, misleading us as to where the answers lie.

Wilczek’s book contains something for every reader, whether that is the physicist interested to learn how a Nobel Prize winner thinks of the connection between ideas and reality, or the layman wanting to know more about the structure of fundamental law. “A Beautiful Question” reminds us of the many ways that science connects to the arts, and invites us to marvel at the success our species has had in unraveling the mysteries of nature.

[An edited version of this review appeared in the October issue of Physics Today.]

Service Announcement: Backreaction now on facebook!

Over the years the discussion of my blogposts has shifted over to facebook. To follow this trend and to make it easier for you to engage, I have now set up a facebook page for this blog. Just "like" the page to get the newest blogposts and other links that I post :)

Thursday, October 01, 2015

When string theorists are out of luck, will Loop Quantum Gravity come to rescue?

Tl;dr: I don’t think they want rescuing.

String theorists and researchers working on loop quantum gravity (LQG) like to each point out how their own attempt to quantize gravity is better than the others’. In the end though, they’re both trying to achieve the same thing – consistently combining quantum field theory with gravity – and it is hard to pin down just exactly what makes strings and loops incompatible. Other than egos that is.

The obvious difference used to be that LQG works only in 4 dimensions, whereas string theory works only in 10 dimensions, and that LQG doesn’t allow for supersymmetry, which is a consequence of quantizing strings. However, several years ago the LQG framework was extended to higher dimensions, and it can now also include supergravity, so that objection is gone.

Then there’s the issue with Lorentz-invariance, which is respected in string theory, but whose fate in LQG has been the subject of much debate. Recently though, some researchers working on LQG have argued that Lorentz-invariance, used as a constraint, leads to requirements on the particle interactions, which then have to become similar to some limits found in string theory. This should come as no surprise to string theorists, who have been claiming for decades that there is one and only one way to combine all the known particle interactions...

Two doesn’t make a trend, but I have a third, which is a recent paper that appeared on the arxiv:
Bodendorfer argues in his paper that loop quantization might be useful for calculations in supergravity and thus relevant for the AdS/CFT duality.

This duality relates certain types of gauge theories – similar to those used in the standard model – with string theories. In the last decade, the duality has become exceedingly popular because it provides an alternative to calculations which are difficult or impossible in the gauge theory. The duality is normally used only in the limit where one has classical (super)gravity (λ to ∞) and an infinite number of color charges (Nc to ∞). This limit is reasonably well understood. Most string theorists however believe in the full conjecture, which is that the duality remains valid for all values of these parameters. The problem is though, if one does not work in this limit, it is darned hard to calculate anything.

A string theorist, they joke, is someone who thinks three is infinitely large. Being able to deal with a finite number of color charges is relevant for applications because the strong nuclear force has 3 colors only. If one keeps the size of the space-time fixed relative to the string length (which corresponds to fixed λ), a finite Nc however means taking into account string effects, and since the string coupling gs ~ λ/Nc goes to infinity with λ when Nc remains finite, this is a badly understood limit.
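To see the tension in numbers, here is a trivial sketch of the scaling gs ~ λ/Nc (my own illustration, not from the paper): in the usual ’t Hooft limit one sends Nc to infinity at fixed λ, so the string coupling stays small, but at Nc = 3 the same λ drives gs up.

```python
# Illustrative only: the scaling g_s ~ λ/N_c, with all numerical prefactors dropped.
def string_coupling(lam, Nc):
    return lam / Nc

for lam in (10, 100, 1000):
    print(f"λ={lam:4d}:  g_s ~ {string_coupling(lam, 3):6.1f} at N_c=3,"
          f"  g_s ~ {string_coupling(lam, 10**4):6.2f} at N_c=10^4")
```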

In his paper, Bodendorfer looks at the limit of finite Nc and λ to infinity. It’s a clever limit in that it gets rid of the string excitations, and instead moves the problem of small color charges into the realm of super-quantum gravity. Loop quantum gravity is by design a non-perturbative quantization, so it seems ideally suited to investigate this parameter range where string theorists don’t know what to do. But it’s also a strange limit in that I don’t see how to get back the perturbative limit and classical gravity once one has pushed gs to infinity. (If you have more insight than me, please leave a comment.)

In any case, the connection Bodendorfer makes in his paper is that the limit of Nc to ∞ can also be obtained in LQG by a suitable scaling of the spin network. In LQG one works with a graph that has a representation label, l. The graph describes space-time and this label enters the spectrum of the area operator, so that the average quantum of area increases with this label. When one keeps the network fixed, the limit of large l then blows up the area quanta and thus the whole space, which corresponds to the limit of Nc to infinity.
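As a rough sketch of that last point (my own toy example, not taken from the paper): in LQG the area contributed by an edge with representation label l grows like √(l(l+1)) in Planck units, so scaling up all labels on a fixed graph inflates every area quantum and with it the total geometry.

```python
import numpy as np

# Toy spin network: total area in Planck units, with the usual 8πγ prefactor
# and an assumed value for the Immirzi parameter γ.
def total_area(labels, gamma=0.24):
    l = np.asarray(labels, dtype=float)
    return 8 * np.pi * gamma * np.sum(np.sqrt(l * (l + 1)))

base_labels = [1, 1, 2, 3]               # labels on a toy graph with four edges
for scale in (1, 10, 100):               # scaling the labels scales the geometry
    print(scale, total_area([scale * l for l in base_labels]))
```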

So far, so good. If LQG could now be used to calculate certain observables on the gravity side, then one could further employ the duality to obtain the corresponding observables in the gauge theory. The key question though is whether the loop quantization actually reproduces the same limit that one would obtain in string theory. I am highly skeptical that this is indeed the case. Suppose it was. This would mean that LQG, like string theory, must have a dual description as a gauge theory also outside the classical limit in which they both agree (they had better). The supersymmetric version of LQG used here has the matter content of supergravity. But it is missing all the framework that in string theory eventually gives rise to branes (stacks thereof) and compactifications, which seem so essential to obtain the duality to begin with.

And then there is the problem that in LQG it isn’t well understood how to get back classical gravity in the continuum limit, which Bodendorfer kind of assumes to be the case. If that doesn’t work, then we don’t even know whether in the classical limit the two descriptions actually agree.

Despite my skepticism, I think this is an important contribution. In the absence of experimental guidance, the only way we can find out which theory of quantum gravity is the correct description of nature is to demonstrate that there is only one way to quantize gravity that reproduces General Relativity and the Standard Model in the suitable limits while being UV-finite. Studying how the known approaches do or don’t relate to each other is a step towards understanding whether one has any options in the quantization, or whether we do indeed already have enough data to uniquely identify the sought-after theory.

Summary: It’s good someone is thinking about this. Even better that this someone isn’t me. For a theory that has only one parameter, string theory sure seems to have a lot of parameters.

Monday, September 28, 2015

No, Loop Quantum Gravity has not been shown to violate the Holographic Principle

Didn't fly.
Tl;dr: The claim in the paper is just wrong. Read on if you want to know why it matters.

Several people asked me for comments on a recent paper that appeared on the arxiv, “Violation of the Holographic Principle in the Loop Quantum Gravity” by Ozan Sargın and Mir Faizal. We have met Mir Faizal before; he is the one who explained that the LHC would make contact to parallel universes [spoiler alert: it won’t]. Now, I have recently decided to adopt a strict diet of intellectual veganism: I’ll refuse to read anything produced by making science suffer. So I wouldn’t normally have touched the paper, not even with a fork. But since you asked, I gave it a look.

The claim in the paper is that Loop Quantum Gravity (LQG), the most popular approach to quantum gravity after string theory, must be wrong because it violates the Holographic Principle. The Holographic Principle requires that the number of different states inside a volume is bounded by the surface of the volume. That sounds like a rather innocuous and academic constraint, but once you start thinking about it it’s totally mindboggling.

All our intuition tells us that the number of different states in a volume is bounded by the volume, not the surface. Try stuffing the Legos back into your kid’s toy box, and you will think it’s the volume that bounds what you can cram inside. But the Holographic Principle says that this is only approximately so. If you would try to pack more and more, smaller and smaller Legos into the box, you would eventually fail to get anything more inside. And if you would measure what bounds the success of your stuffing of the tiniest Legos, it would be the surface area of the box. In more detail, the logarithm of the number of different states has to be less than a quarter of the surface area measured in Planck units. That’s a huge number and so far off our daily experience that we never notice this limit. What we notice in practice is only the bound by the volume.
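To get a feeling for just how huge, here is a back-of-the-envelope estimate with numbers of my own choosing (a toy box half a meter on a side, nothing from the paper):

```python
# Holographic bound S <= A / (4 l_P^2) for a cube of side 0.5 m.
l_P = 1.616e-35            # Planck length in meters
side = 0.5                 # box side length in meters (assumed toy value)
area = 6 * side**2         # total surface area in m^2
S_max = area / (4 * l_P**2)
print(f"holographic bound: ~{S_max:.1e} (natural entropy units)")
# That is ~1.4e69, vastly above the roughly 1e26 or so of entropy carried by
# the actual contents of any real toy box - which is why we never run into
# the surface bound in everyday life.
```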

The Holographic Principle is a consequence of black hole physics, which does not depend on the details of quantizing gravity, and it is therefore generally expected that the entropy bound must be obeyed by all approaches to quantum gravity.

Physicists have tried, of course, to see whether they can find a way to violate this bound. You can consider various types of systems, pack them as tightly as possible, and then calculate the number of degrees of freedom. In this, it is essential that you take into account quantum behavior, because it’s the uncertainty principle that ultimately prevents arbitrarily tight packing. In all known cases however, it was found that the system will collapse to a black hole before the bound can be exceeded. And black holes themselves saturate the bound. So whatever physicists tried, they only confirmed that the bound holds indeed. With every such thought-experiment, and with every failure to violate the entropy bound, they have grown more convinced that the holographic principle captures a deep truth about nature.

The only known exceptions that violate the holographic entropy bound are the super-entropic monster-states constructed by Hsu and collaborators. These states however are pathological in that not only will they inevitably go on to collapse to a black hole, they also must have come out of a white hole in the past. They are thus mathematically possible, but not physically realistic. (Aside: That the states come out of a white hole and vanish into a black hole also means you can’t create these super-entropic configurations by throwing in stuff from infinity, which should come as a relief to anybody who believes in the AdS/CFT correspondence.)

So if Loop Quantum Gravity violated the Holographic Principle, that would be a pretty big deal, making the theory inconsistent with all that’s known about black hole physics!

In the paper, the authors redo the calculation for the entropy of a particular quantum system. With the usual quantization, this system obeys the holographic principle. With the quantization technique from Loop Quantum Gravity, the authors get an additional term but the system still obeys the holographic entropy bound, since the additional term is subdominant to the first. They conclude “We have demonstrated that the holographic principle is violated due to the effects coming from LQG.” It’s a plain non-sequitur.

I suspect that the authors mistook the maximum entropy of the quantum system under consideration, previously calculated by ‘t Hooft, for the holographic bound. This is strange because in the introduction they have the correct definition for the holographic bound. Besides this, the claim that in LQG it should be more difficult to obey the holographic bound is highly implausible to begin with. LQG is a discretization approach. It reduces the number of states, it doesn’t increase them. Clearly, if you go down to the discretization scale, the number of states should drop to zero. This makes me think that not only did the authors misinterpret the result, they probably also got the sign of the additional term wrong.

(To prevent confusion, please note that in the paper they calculated corrections to the entropy of the matter, not corrections to the black hole entropy, which would go onto the other side of the equation.)

You might come away with the impression that we have here two unfortunate researchers who were confused about some terminology, and that I’m being an ass for highlighting their mistakes. And you would be right, of course; they were confused, and I’m an ass. But let me add that after having read the paper I did contact the authors and explained that their statement that LQG violates the Holographic Principle is wrong and does not follow from their calculation. After some back and forth, they agreed with me, but refused to change anything about their paper, claiming that it’s a matter of phrasing and in their opinion it’s all okay even though it might confuse some people. And so I am posting this explanation here because then it will show up as an arxiv trackback. Just to avoid that it confuses some people.

In summary: Loop Quantum Gravity is alive and well. If you feed me papers in the future, could you please take into account my dietary preferences?

Thursday, September 24, 2015

The power of words – and its limit

What images does the word “power” bring to your mind? Weapons? Bulging muscles? A mean-looking capitalist smoking a cigar? Whatever your mind’s eye brings up, it is most likely not a fragile figure hunched over a keyboard. But maybe it should be.

Words have power, today more than ever. Riots and rebellions can be arranged by text-messages, a single word made hashtag can create a mass movement, 140 characters will ruin a career, and a video gone viral will reach out to the world. Words can destroy lives or they can save them: “If you find the right tone and the right words, you can reach just about anybody,” says Cid Jonas Gutenrath who worked for years at the emergency call center of the Berlin police force [1]:

“I had worked before as a bouncer, and I was always the shortest one there. I had an existential need, so to speak, to solve problems through language. I can express myself well, and when that’s combined with some insights into human nature it’s a valuable skill. What I’m talking about is every conversation with the police, whether it’s an emergency call or a talk on the street. Language makes it possible for everyone involved to get out of the situation in one piece.”

Words won’t break your bones. But more often than not, words – or their failure – decide whether we take to weapons. It is our ability to convince, cooperate, and compromise that has allowed us to conquer an increasingly crowded planet. According to recent research, what made humans so successful might indeed not have been superior intelligence or skills, but instead our ability to communicate and work together.

In a recent SciAm issue, Curtis W. Marean, Professor of Archeology at Arizona State University, lays out a new hypothesis to explain which decisive development allowed humans to dominate Earth. According to Marean, it is not, as previously proposed, handling fire, building tools, or digesting a large variety of food. Instead, he argues, what sets us apart is our willingness to negotiate a common goal.

The evolution of language was necessary to allow our ancestors to find solutions to collective problems, solutions other than hitting each other over the head. So it became possible to reach agreements between groups, to develop a basis for commitments and, eventually, contracts. Language also served to speed up social learning and to spread ideas. Without language we wouldn’t have been able to build a body of knowledge on which the scientific profession could stand today.

“Tell a story,” is the ubiquitous advice given to anybody who writes or speaks in public. And yet, some information fits badly into story-form. In popular science writing, the reader inevitably gets the impression that much of science is a series of insights building onto each other. The truth is that most often it is more like collecting puzzle pieces that might or might not actually belong to the same picture.

The stories we tell are inevitably linear; they follow one neat paragraph after the other, one orderly thought after the next, line by line, word by word. But this simplicity belies the complexity of the manifold interrelations between scientific concepts. Ask any researcher for their opinion on a news report in their discipline and they will almost certainly say “It’s not so simple…”

Links between different fields of science.
Image Source: Bollen et al, PLOS ONE, March 11, 2009.
There is a value to simple stories. They are easily digestible and convey excitement about science. But the reader who misses the entire context cannot tell how well a new finding is embedded into existing research. The term “fringe science” is a direct visual metaphor alluding to this disconnect, and it’s a powerful one. Real scientific progress must fit into the network of existing knowledge.

The problem with linear stories doesn’t only make writing about science difficult, but also writing in science. The main difficulty when composing scientific papers is the necessity to convert a higher-dimensional network of associations into a one-dimensional string. It is plainly impossible. Many research articles are hard to follow, not because they are badly written, but because the nature of knowledge itself doesn’t lend itself to narrative.

I have an ambivalent relation to science communication because a good story shouldn’t make or break an idea. But the more ideas we are exposed to, the more relevant good presentation becomes. Every day scientific research seems a little less like a quest for truth, and a little more like a competition for attention.

In an ideal world maybe scientists wouldn’t write their papers themselves. They would call for an independent reporter and have to explain their calculations, then hand over their results. The paper would be written up by an unbiased expert, plain, objective, and comprehensible. Without exaggerations, without omissions, and without undue citations to friends. But we don’t live in an ideal world.

What can you do? Clumsy and often imperfect, words are still the best medium we have to convey thought. “I think therefore I am,” Descartes said. But 400 years later, the only reason his thought is still alive is that he put it into writing.

[1] Quoted in: Evonik Magazine 2015-02, p 17.

Wednesday, September 23, 2015

Can dark matter cause cancer?

Image Credits: Agnis Schmidt-May
Tl;dr: Yes. But it’s exceedingly unlikely.

Yesterday, a new paper appeared on the arxiv, provocatively titled “Dark matter as a cancer hazard.” It is a comment on an earlier paper by Freese and Savage, which I previously wrote about here.

Freese and Savage in their 2012 paper estimated the interaction rate of dark matter with the human body for weakly interacting massive particles (WIMPs). They came to the conclusion that the risk of getting cancer from damage caused by dark matter to the genetic code is much smaller than the risk posed by the cosmic radiation we are constantly exposed to.

Yes, dark matter can cause cancer. That’s because literally everything can cause cancer: The probability that a particle collision breaks a molecular bond is never strictly speaking zero, and such damage can potentially turn a cell into a cancerous reproduction machine. Even doing nothing at all can cause cancer, just because a bond may break simply due to quantum fluctuations. It’s not fair, I know. It’s also so unlikely to happen that it didn’t even make it onto the Daily Mail’s List of Things That Can Give You Cancer. Should dark matter go onto the list? After all, the idea that dark matter may lead to “biological phenomena having sometimes fatal late effects” dates back at least to 1990.

In the new paper the authors estimate the interaction probability with the human body for a different type of dark matter. They looked specifically at mirror dark matter whereas Freese and Savage had looked at one of the presently most popular dark matter models, the WIMPs. I can see a whole industry growing out of this.

But what is mirror dark matter and why have you never heard of it?

Mirror dark matter is a complex type of dark matter, a complete copy of the standard model that describes our normal matter. The mirror dark matter interacts only gravitationally with us, or at least only very weakly. This sounds like a nice idea, the next best thing you may think of after just having a single particle. The problem is that we know dark matter does not behave just like normal matter, which renders mirror dark matter immediately implausible.

To begin with, there is more dark matter in the universe than normal matter. But more importantly, observations tell us that dark matter must be only weakly interacting with itself, otherwise the cosmic microwave background would not have the observed spectrum of temperature fluctuations. Our normal matter interacts much too strongly with itself to achieve that. Then there are case studies like the Bullet Cluster, whose gravitational lensing images reveal that dark matter does not have as much friction among itself as normal matter. Dark matter also doesn’t form galaxies in the same way that normal matter does, but rather it acts as a seed for our galaxies. If it didn’t, structure formation wouldn’t come out correctly.

So clearly, dark matter that just does the same as normal matter doesn’t work. On the other hand, a copy of the standard model is a large set of particles with many emergent parameters (like particle abundances) that allow a lot of freedom to make the model fit the data.

You can try, for example, to adapt the mirror matter model by making changes to the initial conditions, so that they differ from the initial conditions of normal matter. The mirror dark matter is assumed to start in the early universe from a specifically chosen configuration, which in particular implies that the two types of matter do not have the same temperature later on. This can solve some problems and make mirror dark matter fit many of our observations. It brings up the question though: Why these initial conditions?

As has been argued, probably most vocally by Paul Davies, the distinction between initial conditions and evolution laws is fuzzy. If you fabricate your initial conditions smartly enough, you can make pretty much any model fit the data. (You can take the state that we observe today and evolve it backwards in time. Then pick whatever you get as the initial state. Voila.) So I don’t actually doubt that it is possible to explain the observations with mirror dark matter. But cherry-picking initial conditions doesn’t seem very convincing to me.

In any case, leaving aside that mirror dark matter is not particularly popular because dark matter just doesn’t seem to behave anything like normal matter, it’s a model, and it has equations and so on, and now you can go and calculate things.

To estimate the cancer risk from mirror dark matter, the authors assume that the mirror dark matter forms atoms, which can bind together to “mirror micrometeorites” that contain about 10¹⁵ mirror atoms. They then estimate the energy deposited by the mirror micrometeorites in the human body and find that they can leave behind more energy than weakly interacting single-particle dark matter. These mirror-objects can thus damage multiple bonds on their path. The reason is basically that they are larger.

So how likely is mirror dark matter to give you cancer? Well, unfortunately the paper only estimates the energy deposited by the micrometeorites, but not the probability for these objects to hit you to begin with. I wrote an email to one of the authors and inquired whether there is an estimate for the flux of such objects through Earth, but apparently there is none. But one thing we know about dark matter is how much there has to be of it in total. So if dark matter is clumped into pieces larger than WIMPs, this means that there must be fewer of these pieces. In other words, the flux of the mirror nuclei relative to that of WIMPs should be lower. Without a concrete model though, one really can’t say anything more.
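Just to illustrate the scaling (with numbers of my own choosing, not from the paper): for a fixed local dark matter density, the number density and hence the flux drop as one over the mass of the chunks.

```python
# Flux ~ (rho_DM / M) * v. Illustrative values only; the micrometeorite mass
# assumes each of the ~10^15 mirror atoms weighs about a proton mass.
rho_dm = 0.4 * 1.78e-27 / 1e-6     # ~0.4 GeV/cm^3 converted to kg/m^3
v = 2.3e5                          # typical galactic velocity in m/s
m_p = 1.67e-27                     # proton mass in kg

for name, mass in [("100 GeV WIMP", 100 * 1.78e-27),
                   ("10^15-atom mirror micrometeorite", 1e15 * m_p)]:
    flux = rho_dm / mass * v       # particles per m^2 per second
    print(f"{name}: ~{flux:.1e} per m^2 per s")
```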

In the new paper, the authors further speculate that dark matter may account for some types of cancer:
“We can thus speculate that the mirror micrometeorite, when interacting with the DNA molecules, can lead to multiple simultaneous mutations and cause disease. For instance, there is an evidence that individual malignant cancer cells in human tumors contain thousands of random mutations and that normal mutation rates are insufficient to account for these multiple mutations found in human cancers [...]”
Whatever the risk of getting cancer from dark matter however, it probably hasn’t changed much for the last billion years or so. One could then try to turn the argument around and argue that if there were too many of such mirror micrometeorites then the dinosaurs would have died from cancer, or something like that. I am not very excited about such biological constraints, the uncertainties are much too large in this area. You almost certainly get better accuracy looking at traces in minerals or actual particle detectors.

In summary, the paper doesn’t actually estimate the cancer risk, and the model it uses is unconfirmed anyway. And in any case, short of moving to the center of the Earth there isn’t anything you could do about it.

Tuesday, September 15, 2015

10 Things you should know about Dark Matter

Dark matter filaments. Image Credits:
John Dubinski (U of Toronto).
1. “Dark” doesn’t just mean we don’t see it.

It means it doesn’t emit any electromagnetic radiation for all we can tell. Astronomers haven’t been able to find either light visible to the eye, or radiation in the radio range or X-ray regime, or anything at even higher energies.

2. “Matter” doesn’t just mean it’s stuff.

What physicists classify as matter must behave like the matter we are made of, at least as far as its motion in space and time is concerned. This means in particular that dark matter dilutes when it spreads into a larger volume, and causes the same gravitational attraction as ordinary, visible, matter. It is easy to think up “stuff” that does not do this. Dark energy for example does not behave this way.

3. It’s not going away.

You will not wake up one day and hear physicists declare it’s not there at all. The evidence is overwhelming: Weak gravitational lensing demonstrates that galaxies have a larger gravitational pull than visible matter can produce. Additional matter in galaxies is also necessary to explain why stars in the outer arms of galaxies orbit so quickly around the center. The observed temperature fluctuations in the cosmic microwave background can’t be explained without dark matter, and the structures formed by galaxies wouldn’t come out right without dark matter either.

Even if all of this was explained by a modification of gravity rather than an unknown type of matter, it would still have to be possible to formulate this modification of gravity in a way that makes it look pretty much like a new type of matter. And we’d still call it dark matter.

4. Rubin wasn’t the first to find evidence for dark matter.

Though she was the first to recognize its relevance. A few decades before Vera Rubin noticed that stars rotate inexplicably fast around the centers of galaxies, Fritz Zwicky pointed out that a swarm of about a thousand galaxies, bound together by gravity into the “Coma Cluster,” also moves too quickly. The velocity of the galaxies in a gravitational potential depends on the total mass in this potential, and the too-large velocities already indicated that there was more mass than could be seen. However, it wasn’t until Rubin collected her data that it became clear this isn’t a peculiarity of the Coma Cluster, but that dark matter must be present in almost all galaxies and galaxy clusters.
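The underlying estimate is simple, whether for galaxies in a cluster or stars in a galaxy: a velocity v at radius r implies an enclosed mass of roughly v²r/G. Here is a toy version with my own illustrative numbers for a Milky-Way-like rotation curve; velocities that stay high far out imply far more mass than the visible stars provide.

```python
# Enclosed mass implied by an orbital velocity: M ~ v^2 r / G.
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30               # solar mass in kg
kpc = 3.086e19                 # kiloparsec in meters

def enclosed_mass(v_km_s, r_kpc):
    v, r = v_km_s * 1e3, r_kpc * kpc
    return v**2 * r / G

print(enclosed_mass(220, 8) / M_sun)    # near the Sun's orbit: ~9e10 solar masses
print(enclosed_mass(220, 50) / M_sun)   # flat rotation curve further out: ~6e11
```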

5. Dark matter doesn’t interact much with itself or anything else.

If it did, it would slow down and clump too much, and that wouldn’t be in agreement with the data. A particularly vivid example comes from the Bullet Cluster, which actually consists of two clusters of galaxies that have passed through each other. In the Bullet Cluster, one can detect both the distribution of ordinary matter, mostly by emission of X-rays, and the distribution of dark matter, by gravitational lensing. The data demonstrates that the dark matter is dislocated from the visible matter: The dark matter parts of the clusters seem to have passed through each other almost undisturbed, whereas the visible matter was slowed down and its shape was noticeably distorted.

The same weak interaction is necessary to explain the observations on the cosmic microwave background and galactic structure formation.

6. It’s responsible for the structures in the universe.

Since dark matter doesn’t interact much with itself and other stuff, it’s the first type of matter to settle down when the universe expands and the first to form structures under its own gravitational pull. It is dark matter that seeds the filaments along which galaxies later form when visible matter falls into the gravitational potential created by the dark matter. If you look at some computer simulation of structure formation, what is shown is almost always the distribution of dark matter, not of visible matter. Visible matter is assumed to follow the same distribution later.

7. It’s probably not smoothly distributed.

Dark matter doesn’t only form filaments on supergalactic scales, it also isn’t entirely smoothly distributed within galaxies – at least that’s what the best understood models say. Dark matter doesn’t interact enough to form objects as dense as planets, but it does have ‘halos’ of varying density that move around in galaxies. The dark matter density is generally larger towards the centers of galaxies.

8. Physicists have lots of ideas what dark matter could be.

The presently most popular explanation for the puzzling observations is some kind of weakly interacting particle that doesn’t interact with light. These particles have to be quite massive to form the observed structures, about as heavy as the heaviest particles we know already. If dark matter particles weren’t heavy enough they wouldn’t clump sufficiently, which is why they are called WIMPs for “Weakly Interacting Massive Particles.” Another candidate is a particle called the axion, which is very light but leaves behind some kind of condensate that fills the universe.

There are other types of candidate particles that have more complex interactions or are heavier, such as Wimpzillas and other exotic stuff. Macro dark matter is a type of dark matter that could be accommodated in the standard model; it consists of macroscopically heavy chunks of unknown types of nuclear matter.

Then there are several proposals for how to modify gravity to accommodate the observations, such as MOG, entropic gravity, or bimetric theories. Though very different in motivation, the more observations have to be explained, the more similar the explanations through additional particles have become to the explanations through modifying gravity.

9. And they know some things dark matter can’t be.

We know that dark matter can’t be constituted by dim brown dwarfs or black holes. The main reason is that we know the total mass dark matter brings into our galaxy, and it’s a lot, about 10 times as much as the visible matter. If that amount of mass was made up from black holes, we should constantly see gravitational lensing events – but we don’t. It also doesn’t quite work with structure formation. And we know that neutrinos, even though weakly interacting, can’t make up dark matter either because they are too light and they wouldn’t clump strongly enough.

10. But we have no direct experimental evidence.

Despite decades of search, nobody has ever directly detected a dark matter particle and the only evidence we have is still indirectly inferred from gravitational pull. Physicists have been looking for the rare interactions of proposed dark matter candidates in many Earth-based experiments starting already in the 1980s. They also look for astrophysical evidence of dark matter, such as signals from the mutual annihilation of dark matter particles. There have been some intriguing findings, such as the PAMELA positron excess, the DAMA annual modulation, or the Fermi gamma-ray excess, but physicists haven’t been able to link any of these convincingly to dark matter.

[This article previously appeared at Starts With a Bang.]

Friday, September 11, 2015

How to publish your first scientific paper

I get a lot of email asking me for advice on paper publishing. There’s no way I can make time to read all these drafts, let alone comment on them. But simple silence leaves me feeling guilty for contributing to the exclusivity myth of academia, the fable of the privileged elitists who smugly grin behind the locked doors of the ivory tower. It’s a myth I don’t want to contribute to. And so, as a sequel to my earlier post on “How to write your first scientific paper”, here is how to avoid roadblocks on the highway to publication.

There are many types of scientific articles: comments, notes, proceedings, reviews, books and book chapters, to mention just the most common ones. They all have their place and use, but in most of the sciences it is the research article that matters most. It’s what we all want, to get our work out there in a respected journal, and it’s what I will focus on.

Before we start. These days you can publish literally any nonsense in a scam journal, usually for a “small fee” (which might only be mentioned at a late stage in the process, oops). Stay clear of such shady business, it will only damage your reputation. Any journal that sends you an unsolicited “call for papers” is a journal to avoid (and a sender to put on the junk list). When in doubt, check Beall’s list of Potential, Probable and Possible Predators.

1. Picking a good topic

There are two ways you can go on a road trip: Find a car or hitchhike. In academic publishing, almost everyone starts out as a hitchhiker, typically coauthoring a work with their supervisor. This moves you forward quickly at first, but sooner or later you must prove that you can drive on your own. And one day, you will have to kick your supervisor out of the copilot seat. While you can get lucky with any odd idea as topic, there are a few guidelines that will increase your chances of getting published.

1.1 Novelty

For research topics, as with cars, a new one will get you farther than an interesting one. If you have a new result, you will almost certainly eventually get it published in a decent journal. But no matter how interesting you think a topic is, the slightest doubt that it’s new will prevent publication.

As a rule of thumb, I therefore recommend you stay far away from everything older than a century. Nothing reeks of crackpottery as badly as a supposedly interesting find in special relativity or classical electrodynamics or the foundations of quantum mechanics.

At first, you will not find it easy to come up with a new topic at the edge of current research. A good way to get ideas is to attend conferences. This will give you an overview of the currently open questions, and an impression of where your contribution would be valuable. Every time someone answers a question with “I don’t know,” listen up.

1.2. Modesty

Yes, I know, you really, really want to solve one of the Big Problems. But don’t claim in your first paper that you did; it’s like trying to break a world record the first time you run a mile. Except that in science you don’t only have to break the record, you also have to convince others you did.

For the sake of getting published, by all means refer to whatever question it is that inspires you in the introductory paragraph, but aim at a solid small contribution rather than a fluffy big one. Most senior researchers have a grandmotherly tolerance for the exuberance and naiveté of youth, but forgiveness ends at the journal’s front door. As encouraging as they may be in personal conversation, a journal reference serves as a quality check for scientific standards, and nobody wants to be the one to blame for not keeping up the standard. So aim low, but aim well.

1.3. Feasibility

Be realistic about what you can achieve and how much time you have. Give your chosen topic a closer look: Do you already know all you need to know to get started, or will you have to work yourself into an entirely or partially new field? Are you familiar with the literature? Do you know the methods? Do you have the equipment? And lastly, but most importantly, do you have the motivation that will keep you going?

Time management is chronically hard, but give it a try and estimate how long you think it will take, if only to laugh later about how wrong you were. Whatever your estimate, multiply it by two. Does it fit in your plans?

2. Getting the research done

Do it.

3. Preparing the manuscript 

Many scientists dislike the process of writing up their results, thinking it only takes time away from real science. They could not be more mistaken. Science is all about the communication of knowledge – a result not shared is a result that doesn’t matter. But how to get started?

3.1. Collect all material

Get an overview of the material that you want your colleagues to know about: calculations, data, tables, figures, code, what have you. Single out the parts you want to publish, collect them in a suitable folder, and convert them into digital form if necessary, i.e. type up equations, make vector graphics of sketches, render images, and so on.

3.2. Select journals

If you are unsure what journals to choose, have a look at the literature you have used for your research. Most often this will point you towards journals where your topic will fit in. Check the website to see whether they have length restrictions and if so, if these might become problematic. If all looks good, check their author guidelines and download the relevant templates. Read the guidelines. No, I mean, actually read them. The last thing you want is that your manuscript gets returned by an annoyed editor because your image captions are in the wrong place or similar nonsense.

Select the four journals that you like best and order them by preference. Chances are your paper will get rejected at the first, and laying out a path to continue in advance will prevent you from dwelling on your rejection for an indeterminate amount of time.

3.3. Write the paper

For how to structure a scientific paper, see my earlier post.

3.4. Get feedback

Show your paper to several colleagues and ask for feedback, but only do this once you are confident the content will no longer substantially change. The amount of confusion returning to your inbox will reveal which parts of the manuscript are incomprehensible or, heaven forbid, just plainly wrong.

If you don’t have useful contacts, getting feedback might be difficult, and this difficulty increases exponentially with the length of the draft. It dramatically helps to encourage others to actually read your paper if you tell them why it might be useful for their own research. Explaining this requires that you actually know what their research is about.

If you get comments, make sure to address them.

3.5. Pre-publish or not?

Pre-print sharing, for example on the arxiv, is very common in some areas and less common in others. I would generally recommend it if you work in a fast moving field where the publication delay might limit your claim to originality. Pre-print sharing is also a good way to find out whether you offended someone by forgetting to cite them, because they’ll be in your inbox the next morning.

3.6. Submit the paper

The submission process depends very much on the journal you have chosen. Many journals now allow you to submit an arxiv link, which dramatically simplifies matters. However, if you submit source files, always check the compiled pdf “for your approval”. I’ve seen everything from half-empty pages to corrupted figures to pdfs that simply didn’t load.

Some journals allow you to select an editor to whom the manuscript will be sent. It is generally worth checking the list to see if there is someone you know. Or maybe ask a colleague whom they have had good or bad experiences with. But don’t worry if you don’t know any of them.

4. Passing peer review

After submission your paper will generally first be screened to make sure it fulfills the journal requirements. This is why it is really important that the topic fits well. If you pass this stage your paper is sent to some of your colleagues (typically two to four) for the dreaded peer review. The reviewers’ task is to read the paper, send back comments on it, and to assign it one of four categories: publish, minor changes, major changes, reject. I have never heard of any paper that was accepted without changes.

In many cases some of the reviewers are picked from your reference list, excluding people you have worked with yourself or who are in the acknowledgements. So stop and think for a moment whether you really want to list all your friends in the acknowledgements. If you have an archenemy who shouldn’t be commenting on your paper, let the editor know about this in advance.

Never submit a paper to several journals at the same time. Also don’t submit several papers that have even partly overlapping content. You might succeed, but trying to boost your publication count by repeating yourself is generally frowned upon and not considered good practice. The exception is conference proceedings, which often summarize a longer paper’s content.

When you submit your paper you will be asked to formally accept the ethics code of the journal. If it’s your first submission, take a moment to actually read it. If nothing else, it will make you feel very grown up and sciency.

Some journals ask you to sign a copyright form already with submission. I have no clue what they are thinking. I never sign a copyright form until the revisions are done.

Peer review can be annoying, frustrating, infuriating even. To keep your sanity and to maximize your chance of passing try the following:

4.1. Keep perspective

This isn’t about you personally, it’s about a manuscript you wrote. It is easy to forget, but in the end you, the reviewers, and the editor have the same goal: to advance science with good research. Work towards that common end.

4.2. Stay polite and professional

Unless you feel a reviewer makes truly inappropriate comments, don’t complain to the editor about strong wording – you will only waste your time. Inappropriate comments are everything that refers to your person or affiliation (or absence thereof), any type of ideological argument, and opinions not backed up by science. Go through all other comments and address them one by one, in a reply attached to the resubmission and by changes to the manuscript where appropriate. Never ignore a question posed by a referee; it provides a perfect reason to reject your manuscript.

In case a referee finds an actual mistake in your paper, be reasonable about whether you can fix it in the time given until resubmission. If not, it is better to withdraw the submission.

4.3. Revise, revise, revise

Some journals have a maximum number of revisions that they allow, after which an editor will make a final decision. If you don’t meet the mark and your paper is eventually rejected, take a moment to mull over the reason for rejection and revise the paper one more time before you submit it to the next journal.

Goto 3.6. Repeat as necessary.

5. Proofs

If all goes well, one day you will receive a note saying that your paper has been accepted for publication and you will soon receive page proofs. Congratulations!

It might feel like a red light five minutes from home when you have to urgently pee, but please read the page proofs carefully. You will never forgive yourself if you don’t correct that sentence which a well-meaning copy editor rearranged to mean the very opposite of what you intended.

6. And then…

… it’s time to update your CV! Take a walk, and then make plans for the next trip.

Monday, September 07, 2015

Macro Dark Matter

[Some background info on the recent New Scientist feature I wrote with Naomi Lubick.]

I'm the half-face 2nd from the right.
In June I attended the “Continuation of the Space-time Odyssey,” a meeting dedicated to the “celebration of the remarkable advances in the fields of particle physics and cosmology from the turn of the millennium to the present day.” At least that’s what the website said. In reality the meeting was dedicated to celebrating Katie Freese’s arrival in Stockholm. She spared no expense; pricey hotels, illustrious locations, and fancy food were involved. So of course I went.

It was an “invitation only” conference, but it wasn’t difficult to get on the list because, for reasons mysterious to me, the conference system listed me as a “chair” of the meeting, whatever that might mean. I assure you I didn’t have anything to do with either the chairs or the organization, other than pointing out that some organization might be beneficial. So please don’t blame me that there was no open registration.

So much for academia. Now let me say something about the science.
Two talks from the conference were particularly memorable for they stood in stark contrast to each other. The one talk was by Lisa Randall, the other by Glenn Starkman. Both spoke about dark matter, but that’s about where the similarities end.
Randall talked about her recent work on “Partially Interacting Dark Matter” (slides here). Her research in collaboration with JiJi Fan, Andrey Katz, and Matthew Reece is based on a slightly more complicated version of existing particle dark matter models. They consider some so-far undetected particles which are not in the Standard Model. But in contrast to most of the presently used models, these additional particles are not, as commonly assumed, unable to interact with each other. Instead, the particles exert forces among themselves, which has several consequences.

First, it allows them to explain why dark matter hasn’t been seen in direct detection experiments: the stuff just isn’t as simple as assumed for the estimates of detection rates. Second, it means that dark matter has friction and thus at least partly forms rotating disks during galaxy formation, the “dark disks.” If they are right, our galaxy too should have a dark disk, and our solar system would traverse it periodically, every 35 million years or so. If you trust Nature News, which maybe you shouldn’t, then this can explain the extinction of the dinosaurs. Right or wrong, it’s a story catchy enough to spread like measles, and equally spotty. Third, and most important, the partially interacting dark matter introduces additional parameters which you can use to fit unexplained data like the 130 GeV Fermi gamma-ray line.

If the model they are using has any theoretical motivation, Randall didn’t mention it. Instead her motivation was that it’s a model which might be testable in the near future. It’s a search under the lamp post, phenomenological model building at its worst. You do it because you can and you’ll get it published – it certainly won’t harm your peer review assessment if you are one of the best-cited particle physicists ever. But just exactly why this interaction model and not some other that won’t be observed until the return of the Boltzmann brains, I don’t know.

In other words, I wasn’t very convinced that partially interacting dark matter is anything more than something you can publish papers about.

 Now to Starkman (slides here).

He set out alluding to Odysseus’ odyssey, which led to strange and distant lands that Starkman likened to the proposed dark matter particles like WIMPs and axions. In the end though, Starkman pointed out, Odysseus returned to where he started from. Maybe, he suggested, things have gotten strange enough and it is time to return to where we started from: the Standard Model.

His talk was about “macro dark matter,” a dark matter candidate that has received little if any attention. I had become aware of it only shortly before, through a paper by Starkman together with David Jacobs and Amanda Weltman. Unlike the commonly considered particle dark matter, macro dark matter isn’t composed of single particles, but of macroscopically heavy chunks of matter with masses that are a priori anywhere between a gram and the mass of the Sun.

It is often said that observations indicate dark matter must be made of weakly interacting particles, but that is only true if the matter is thinly dispersed into light, individual particles. What we really know isn’t that the particles are weakly interacting but that they are rarely interacting; you never measure a cross-section without a flux. Dark matter could be rarely interacting because it is weakly interacting. That’s the standard assumption. Or it could be rarely interacting because it is clumped together into tiny and dense blobs that are unlikely to meet each other. That’s macro dark matter.

But what is macro dark matter made of? It might for example be a type of nuclear matter that hasn’t been discovered so far, blobs of quarks and gluons that were created in the early universe and have lingered around ever since. These blobs would be incredibly dense; at this density the Great Pyramid of Giza would fit inside a raindrop!
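
For scale, here is a quick back-of-the-envelope check of that comparison. This is a minimal sketch; the nuclear density and the pyramid mass are rough reference values I’m assuming for illustration, not numbers from Starkman’s paper.

```python
# Rough check: the Great Pyramid's mass compressed to nuclear density.
# Both input values are approximate and assumed for illustration only.
nuclear_density = 2.3e17   # kg/m^3, typical density of nuclear matter
pyramid_mass = 6.0e9       # kg, rough mass of the Great Pyramid of Giza

volume = pyramid_mass / nuclear_density   # resulting volume in m^3
volume_ml = volume * 1e6                  # convert m^3 to milliliters

print(f"Compressed volume: {volume_ml:.3f} mL")  # ~0.03 mL, about a raindrop
```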

If you think nuclear matter is last-century physics, think again. The phases and properties of nuclear matter are still badly understood and certainly can’t be calculated from first principles even today.

Physicists were stunned for example when the quark gluon plasma turned out to have lower viscosity than anybody expected. They still argue about the equations governing the behavior of matter in neutron stars. Nobody has any idea how to calculate lifetimes of unstable isotopes. I recently talked to a nuclear physicist who told me that the state-of-the-art for composites is 20 nucleons. Twenty. This brings you just about up to neon in the periodic table. And that is, needless to say, using an effective model, not quarks and gluons. The Standard Model interactions are well-understood at LHC energies or higher, yes. But once quarks start binding together physicists are back to comparing models with data, rather than making calculations in the full theory.

So matter of nuclear density containing some of the heavier quarks is a possibility. But Starkman and his collaborators prefer to not make specific assumptions and keep their search as model-independent as possible. They were simply looking for constraints on this type of dark matter, which are summarized in the figure below.

[Figure: Constraints on macro dark matter. Fig. 3 from arXiv:1410.2236]

On the vertical axis you have the cross-section, on the horizontal axis the mass of the macros. The grey and green diagonal lines are useful references marking atomic and nuclear density. In general the macro could be made up of a mixture, and so they wanted to keep the density a variable to be constrained by experiment. The shaded regions are excluded by various experiments.

To arrive at the experimental constraints one takes into account two properties of the macros that can be inferred from existing data. The one is the total amount of dark matter which we know from a number of observations, for example gravitational lensing and the CMB power spectrum. This means if we look at a particular mass of the macro, we know how many of them there must be. The other property is the macros’ average velocity which can be estimated from the mass and the strength of the gravitational potential that the particles move in. From the mass and the density one gets the size, and together with the velocity one can then estimate how often these things hit each other – or us.
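
To give a feeling for how such an estimate works, here is a minimal sketch with fiducial values I’m assuming for illustration (a local dark matter density of about 0.4 GeV/cm³ and a typical velocity of 250 km/s); these are not numbers taken from the paper.

```python
import math

rho_dm = 0.4 * 1.78e-27 * 1e6   # assumed local dark matter density, GeV/cm^3 converted to kg/m^3
v = 250e3                        # assumed typical halo velocity in m/s
R_earth = 6.4e6                  # Earth's radius in m

def hits_per_year(macro_mass_kg):
    """Estimated rate at which macros of a given mass pass through Earth."""
    n = rho_dm / macro_mass_kg           # number density of macros
    flux = n * v                         # macros per square meter per second
    rate = flux * math.pi * R_earth**2   # times Earth's geometric cross-section
    return rate * 3.15e7                 # seconds per year

print(hits_per_year(1e6))   # a 10^9 g macro: roughly one passage through Earth per year
```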

The grey-shaded left upper region is excluded because the stuff would interact too much, causing it to clump too efficiently, which runs into conflict with the observed large scale structures.

The red regions are excluded by gravitational lensing data. These would be the macros that are so heavy they’d result in frequent strong gravitational lensing, which hasn’t been observed. These constraints are also the reason why neutron stars, brown dwarfs, and black holes have long been excluded as possible explanations for dark matter. There are two types of lensing constraints from two different lensing methods, and right now there is a gap between them, but it will probably close in the near future.

The yellow shaded region excludes macros of small mass, which is possible because these would be hitting Earth quite frequently. A macro with mass 10^9 g for example would pass through Earth about once per year, the lighter ones more frequently. Searches for such particles are similar to searches for magnetic monopoles. One makes use of natural particle detectors, such as ancient mica, which forms neatly ordered layers in which a passing heavy particle would leave a mark. No such marks have been found, which rules out the lighter macros.

What about that open region in the middle? Could macros hide there? Starkman and his collaborators have some pretty cool ideas how to look for macros in that regime, and that’s what my New Scientist piece with Naomi is about. (Want me to keep interesting stories on this blog? Please use the donate button that’s in your face in the top right corner, thank you.)

Macro dark matter of course leaves many open questions. As long as we don’t really know what it’s made of, we have no way of knowing whether it can form in sufficient amounts or is stable enough. But its big advantage is that it doesn’t necessarily require us to conjure up new particles.

Do I like this idea? Holy shit, no, I hate it! Like almost all particle physicists, I prefer my interactions safely in the perturbative regime where I can calculate cross-sections the way I learned it in kindergarten. I fled from the place where I did my PhD because everybody there was doing nuclear physics and I wanted nothing of that. I wanted elementary particles, grand unification and fundamental truths. I would be deeply disappointed if dark matter wasn’t a hint for physics beyond the Standard Model, but would instead drift into the realm of lattice QCD.

But then I got to thinking. If everybody feels this way we might be missing the solution to an 80 year old puzzle because we focus on answers that we like and answers that are simple, not answers that are in our face yet complicated. Yes, macros are the most conservative and in a sense most depressing dark matter model around. But at least they didn’t kill the dinosaurs.

Thursday, September 03, 2015

More about Hawking and Perry’s new proposal to solve the black hole information loss problem

Malcolm Perry’s lecture that summarizes the idea he has been working on with Stephen Hawking is now on YouTube:

The first 17 minutes or so are a very well done, but also very basic introduction to the black hole information loss problem. If you’re familiar with that, you can skip to 17:25. If you know the BMS group and only want to hear about the conserved charges on the horizon, skip to 45:00. Go to 56:10 for the summary.

Last week there was furthermore a paper on the arxiv with a very similar argument: BMS invariance and the membrane paradigm, by Robert Penna from MIT, though this paper doesn’t directly address the information loss problem. One is led to suspect that the author was working on the topic for a while, then heard about the idea put forward by Hawking and Perry and made sure to finish and upload his paper to the arxiv immediately... Towards the end of the paper the author also expresses concern, as I did earlier, that these degrees of freedom cannot possibly contain all the relevant information: “This may be relevant for the information problem, as it forces the outgoing Hawking radiation to carry the same energy and momentum at every angle as the infalling state. This is usually not enough information to fully characterize an S-matrix state...”

The third person involved in this work is Andrew Strominger, who has been very reserved about the whole media hype. Eryn Brown reports for Phys.org
“Contacted via telephone Tuesday evening, Strominger said he felt confident that the information loss paradox was not irreconcilable. But he didn't think everything was settled just yet.

He had heard Hawking say there would be a paper by the end of September. It had been the first he'd learned of it, he laughed, though he said the group did have a draft.”
(Did Hawking actually say that? Can someone point me to a source?)

Meanwhile I’ve pushed this idea back and forth in my head and, lacking further information about what they hope to achieve with this approach, have tentatively come to the conclusion that it can’t solve the problem. The reason is the following.

The end state of black hole collapse is known to be simple and characterized only by three “hairs” – the mass, charge, and angular momentum of the black hole. This means that all higher multipole moments – deviations of the initial mass configuration from perfect spherical symmetry – have to be radiated off during collapse. If you disregard actual emission of matter, they will be radiated off in gravitons. The angular momentum related to these multipole moments has to be conserved, and there has to be an energy flux related to the emission. In my reading, the BMS group and its conserved charges tell you exactly that: the multipole moments can’t simply vanish, they have to be radiated off to infinity. Alternatively you can interpret this as the black hole not actually being hairless, if you count all that’s happened in the dynamical evolution.

Having said that, I didn’t know the BMS group before, but I never doubted this to be the case, and I don’t think anybody doubted this. But this isn’t the problem. The problem that occurs during collapse exists already classically, and is not in the multipole moments – which we know can’t get lost – it’s in the density profile in the radial direction. Take the simplest example: one shell of mass M, or two concentric shells of half the mass. The outside metric is identical. The inside metric vanishes behind the horizon. Where does the information about this distribution go if you let the shells collapse?
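
To make the example concrete, here is a minimal sketch using Birkhoff’s theorem; the shell radii R1 < R2 are just illustrative labels. For a single static shell of mass M at radius R, the metric is

$$ ds^2 = \begin{cases} \text{Schwarzschild with mass } M & r > R \\ \text{flat (Minkowski)} & r < R \end{cases} $$

while for two concentric shells of mass M/2 at radii R1 < R2 it is

$$ ds^2 = \begin{cases} \text{Schwarzschild with mass } M & r > R_2 \\ \text{Schwarzschild with mass } M/2 & R_1 < r < R_2 \\ \text{flat (Minkowski)} & r < R_1 \end{cases} $$

An outside observer at r > R2 sees exactly the same metric in both cases, and once the shells have collapsed, the region where the two configurations differ is hidden behind the horizon.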

Now as Malcolm said in his lecture, you can make a power-series expansion of the (Bondi) mass around the asymptotic value, and I’m pretty sure it contains the missing information about the density profile (which however already misses information about the quantum state). But this isn’t information you can measure locally, since you need an infinite number of derivatives or infinite space, respectively, to make your measurement. And besides this, it’s not particularly insightful: If you have a metric that is analytic with an infinite convergence radius, you can expand it around any point and get back the metric in the whole space-time, including the radial profile. You don’t need any BMS group or conserved charges for that. (The example with the two shells is not analytic, and it’s also pathological for various reasons.)

As an aside, that the real problem with the missing information in black hole collapse is in the radial direction and not in the angular direction is the main reason I never believed the strong interpretation of the Bekenstein-Hawking entropy. It seems to indicate that the entropy, which scales with the surface area, counts the states that are accessible from the outside and not all the states that black holes form from.

Feedback on this line of thought is welcome.

In summary, the Hawking-Perry argument makes perfect sense and it neatly fits together with all I know about black holes. But I don’t see how it gets one closer to solving the problem.

Tuesday, September 01, 2015

Loops and Strings and Stuff

If you tie your laces, loops and strings might seem like parts of the same equation, but when it comes to quantum gravity they don’t have much in common. String Theory and Loop Quantum Gravity, both attempts to consistently combine Einstein’s General Relativity with quantum theory, rest on entirely different premises.

String theory posits that everything, including the quanta of the gravitational field, is made up of vibrating strings which are characterized by nothing but their tension. Loop Quantum Gravity is a way to quantize gravity while staying as closely as possible to the quantization methods which have been successful with the other interactions.

The mathematical realization of the two theories is completely different too. The former builds on dynamically interacting strings which give rise to higher dimensional membranes, and leads to a remarkably complex theoretical construct that might or might not actually describe the quantum properties of space and time in our universe. The latter divides up space-time into spacelike slices, and then further chops up the slices into discrete chunks to which quantum properties are assigned. This might or might not describe the quantum properties of space and time in our universe...

String theory and Loop Quantum Gravity also differ in their ambition. While String Theory is meant to be a theory for gravity and all the other interactions – a “Theory of Everything” – Loop Quantum Gravity merely aims at finding a quantum version of gravity, leaving aside the quantum properties of matter.

Needless to say, each side claims their approach is the better one. String theorists argue that taking into account all we know about the other interactions provides additional guidance. Researchers in Loop Quantum Gravity emphasize their modest and minimalist approach that carries on the formerly used quantization methods in the most conservative way possible.

They’ve been arguing for 3 decades now, but maybe there’s an end in sight.

In a little-noted paper out last year, Jorge Pullin and Rodolfo Gambini argue that taking into account the interaction of matter on a loop-quantized space-time forces one to use a type of interaction that is very similar to that also found in effective models of string interactions.

The reason is Lorentz-invariance, the symmetry of special relativity. The problem with the quantization in Loop Quantum Gravity comes from the difficulty of making anything discrete Lorentz-invariant and thus compatible with special relativity and, ultimately, general relativity. The splitting of space-time into slices is not a priori a problem as long as you don’t introduce any particular length scale on the resulting slices. Once you do that, you’re stuck with a particular slicing, thus ruining Lorentz-invariance. And if you fix the size of a loop, or the length of a link in a network, that’s exactly what happens.
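
As a minimal illustration of the clash (just special relativity, nothing specific to Loop Quantum Gravity): a length l at rest in one slicing appears Lorentz-contracted in a frame moving with velocity v relative to it,

$$ l' \;=\; \frac{l}{\gamma} \;=\; l\,\sqrt{1 - v^2/c^2}, $$

so if l is supposed to be the smallest possible length, a boosted observer would seem to measure something even smaller, which singles out the frame in which l was defined.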

There have been twenty years of debate about whether or not the fate of Lorentz-invariance in Loop Quantum Gravity is really problematic, because it isn’t so clear just exactly how it would make itself noticeable in observations as long as you are dealing with the gravitational sector only. But once you start putting matter on the now quantized space, you have something to calculate.

Pullin and Gambini – both from the field of LQG it must be mentioned! – argue that the Lorentz-invariance violation inevitably creeps into the matter sector if one uses local quantum field theory on the loop quantized space. But that violation of Lorentz-invariance in the matter sector would be in conflict with experiment, so that can’t be correct. Instead they suggest that this problem can be circumvented by using an interaction that is non-local in a particular way, which serves to suppress unwanted contributions that spoil Lorentz-invariance. This non-locality is similar to the non-locality that one finds in low-energy string scattering, where the non-locality is a consequence of the extension of the strings. They write:

“It should be noted that this is the first instance in which loop quantum gravity imposes restrictions on the matter content of the theory. Up to now loop quantum gravity, in contrast to supergravity or string theory, did not appear to impose any restrictions on matter. Here we are seeing that in order to be consistent with Lorentz invariance at small energies, limitations on the types of interactions that can be considered arise.”

In a nutshell it means that they’re acknowledging they have a problem and that the only way to solve it is to inch closer to string theory.

But let me extrapolate their paper, if you allow. It doesn’t stop at the matter sector of course, because if one doesn’t assume a fixed background like they do in the paper, one should also have gravitons, and these need to have an interaction too. This interaction will suffer from the same problem, unless you cure it by the same means. Consequently, you will in the end have to modify the quantization procedure for gravity itself. And while I’m at it anyway, I think a good way to remedy the problem would be to not force the loops to have a fixed length, but to make them dynamical and give them a tension...

I’ll stop here because I know just enough of both string theory and loop quantum gravity to realize that technically this doesn’t make a lot of sense (among many other things because you don’t quantize loops, they are the quantization), and I have no idea how to make this formally correct. All I want to say is that after thirty years maybe something is finally starting to happen.

Should this come as a surprise?

It shouldn’t if you’ve read my review on Minimal Length Scale Scenarios for Quantum Gravity. As I argued in this review, there aren't many ways you can consistently introduce a minimal length scale into quantum field theory as a low-energy effective approximation. And pretty much the only way you can consistently do it is using particular types of non-local Lagrangians (infinite series, no truncation!) that introduce exponential suppression factors. If you have a theory in which a minimal length appears in any other way, for example by means of deformations of the Poincaré algebra (once argued to arise in Loop Quantum Gravity, now ailing on life-support), you get yourself into deep shit (been there, done that, still got the smell in my nose).
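
Schematically, the kind of suppression factor I mean looks like this; it’s a generic textbook-style form for this class of non-local Lagrangians, not a formula taken from the review or from the Pullin-Gambini paper:

$$ \mathcal{L} \;=\; \frac{1}{2}\,\phi\,(\Box - m^2)\,e^{-\Box/\Lambda^2}\,\phi $$

(the sign in the exponent depends on the metric convention). The exponential is an infinite series in derivatives and must not be truncated. In Euclidean momentum space the propagator picks up a factor exp(-p²/Λ²), so contributions from momenta far above the scale Λ are exponentially suppressed; and since the exponential of an entire function has no zeros, the propagator acquires no new poles, that is, no additional ghost-like degrees of freedom.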

Does that mean that the string is the thing? No, because this doesn’t actually tell you anything specific about the UV completion, except that it must have a well-behaved type of non-local interaction that Loop Quantum Gravity doesn’t seem to bring, or at least it isn’t presently understood how it would. Either way, I find this an interesting development.

The great benefit of writing a blog is that I’m not required to contact “researchers not involved in the study” and ask for an “outside opinion.” It’s also entirely superfluous because I can just tell you myself that the String Theorist said “well, it’s about time” and the Loop Quantum Gravity person said “that’s very controversial and actually there is also this paper and that approach which says something different.” Good thing you have me to be plainly unapologetically annoying ;) My pleasure.

Thursday, August 27, 2015

Embrace your 5th dimension.

[Photo found at entdeckungen.net]
What does it mean to live in a holographic universe?

“We live in a hologram,” the physicists say, but what do they mean? Is there a flat-world-me living on the walls of the room? Or am I the projection of a mysterious five-dimensional being and beyond my own comprehension? And if everything inside my head can be described by what’s on its boundary, then how many dimensions do I really live in? If these are questions that keep you up at night, I have the answers.

1. Why do some physicists think our universe may be a hologram?

It all started with the search for a unified theory.

Unification has been enormously useful for our understanding of natural law: Apples fall according to the same laws that keep planets on their orbits. The manifold appearances of matter as gases, liquids and solids can be described as different arrangements of molecules. The huge variety of molecules themselves can be understood as various compositions of atoms. These unifying principles were discovered long ago. Today, when physicists speak of unification, they specifically mean a common origin of different interactions. The electric and magnetic interactions, for example, turned out to be two different aspects of the same electromagnetic interaction. The electromagnetic interaction, or rather its quantum version, has further been unified with the weak nuclear interaction. Nobody has succeeded yet in unifying all presently known interactions, the electromagnetic with the strong and weak nuclear ones, plus gravity.

String theory was conceived as a theory of the strong nuclear interaction, but it soon became apparent that quantum chromodynamics, the theory of quarks and gluons, did a better job at this. But string theory gained a second wind after physicists discovered it may serve to explain all the known interactions including gravity, and so could be a unified theory of everything, the holy grail of physics.

It turned out to be difficult however to get specifically the Standard Model interactions back from string theory. And so the story goes that in recent years the quest for unification has slowly been replaced with a quest for dualities that demonstrate that all the different types of string theories are actually different aspects of the same theory, which is yet to be fully understood.

A duality in the most general sense is a relation that identifies two theories. You can understand a duality as a special type of unification: In a normal unification, you merge two theories together to a larger theory that contains the former two in a suitable limit. If you relate two theories by a duality, you show that the theories are the same, they just appear different, depending on how you look at them.

One of the most interesting developments in high energy physics during the last decades is the finding of dualities between theories in a different number of space-time dimensions. One of the theories is a gravitational theory in the higher-dimensional space, often called “the bulk”. The other is a gauge theory much like the ones in the Standard Model, and it lives on the boundary of the bulk space-time. This relation is often referred to as the gauge-gravity correspondence, and it is a limit of a more general duality in string theory.

To be careful: strictly speaking, this correspondence hasn’t been proved. But there are several examples where it has been so thoroughly studied that there is very little doubt it will be proved at some point.

These dualities are said to be “holographic” because they tell us that everything allowed to happen in the bulk space-time of the gravitational theory is encoded on the boundary of that space. And because there are fewer bits of information on the surface of a volume than in the volume itself, fewer things can happen in the volume than you’d have expected. It might seem as if particles inside a box are all independent from each other, but they must actually be correlated. It’s like you were observing a large room with kids running and jumping but suddenly you’d notice that every time one of them jumps, for a mysterious reason ten others must jump at exactly the same time.
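
To get a feeling for the numbers, here is a rough sketch using the Bekenstein-Hawking relation S = A/(4 l_P²); the one-meter box is an arbitrary choice just for illustration.

```python
import math

l_p = 1.6e-35   # Planck length in meters
L = 1.0         # edge length of an imaginary box, in meters

area = 6 * L**2    # surface area of the box in m^2
volume = L**3      # volume of the box in m^3

# Holographic bound: information scales with the area in Planck units.
bits_on_surface = area / (4 * l_p**2 * math.log(2))
# Naive expectation: one unit of information per Planck volume.
naive_volume_count = volume / l_p**3

print(f"surface bound:      ~10^{math.log10(bits_on_surface):.0f} bits")
print(f"naive volume count: ~10^{math.log10(naive_volume_count):.0f}")
# The surface bound (~10^70 bits) is vastly below the naive volume count (~10^104).
```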

2. Why is it interesting that our universe might be a hologram?

This limitation on the amount of independence between particles due to holography would only become noticeable at densities too high for us to test directly. The reason this type of duality is interesting nevertheless is that physics is mostly the art of skillful approximation, and using dualities is a new skill.

You have probably seen these Feynman diagrams that sketch particle scattering processes? Each of these diagrams makes a contribution to an interaction process. The more loops there are in a diagram, the smaller the contributions are. And so what physicists do is add up the largest contributions first, then the smaller ones, and even smaller ones, until they’ve reached the desired precision. It’s called “perturbation theory” and only works if the contributions really get smaller the more interactions take place. If that is so, the theory is said to be “weakly coupled” and all is well. If it ain’t so, the theory is said to be “strongly coupled” and you’d never be done summing all the relevant contributions. If a theory is strongly coupled, then the standard methods of particle physicists fail.
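
Here is a toy illustration of the difference; it’s not an actual quantum field theory calculation, just a made-up series whose n-th term is supposed to stand in for the n-loop contribution and scales with the n-th power of a coupling g.

```python
def partial_sums(g, orders=8):
    """Yield the running sum of a toy perturbative series 1 + g + g^2 + ..."""
    total = 0.0
    for n in range(orders):
        total += g**n   # stand-in for the n-loop contribution
        yield n, total

# Weakly coupled: each additional order changes the result less and less.
for n, s in partial_sums(g=0.1):
    print(f"g=0.1, up to order {n}: {s:.6f}")

# Strongly coupled: each additional order changes the result more, the sum never settles.
for n, s in partial_sums(g=2.0):
    print(f"g=2.0, up to order {n}: {s:.1f}")
```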

The strong nuclear force for example has the peculiar property of “asymptotic freedom,” meaning it becomes weaker at high energies. But at low energies, it is very strong. Consequently nuclear matter at low energies is badly understood, as for example the behavior of the quark gluon plasma, or the reason why single quarks do not travel freely but are always “confined” to larger composite states. Another interesting case that falls in this category is that of “strange” metals, which include high-temperature superconductors, another holy grail of physicists. The gauge-gravity duality helps in dealing with these systems because when the one theory is strongly coupled and difficult to treat, then the dual theory is weakly coupled and easy to treat. So the duality essentially serves to convert a difficult calculation into a simple one.
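
The statement about asymptotic freedom can be made concrete with the one-loop running of the strong coupling (a standard textbook formula, quoted here just for illustration):

$$ \alpha_s(Q^2) \;=\; \frac{12\pi}{(33 - 2 n_f)\,\ln(Q^2/\Lambda_{\rm QCD}^2)}, $$

where n_f is the number of quark flavors and Λ_QCD is around 200 MeV: the coupling shrinks logarithmically as the energy Q grows, and blows up as Q approaches Λ_QCD, which is where perturbation theory gives out.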

3. Where are we in the holographic universe?

Since the theory on the boundary and the theory in the bulk are related by the duality they can be used to describe the same physics. So on a fundamental level the distinction doesn’t make sense – they are two different ways to describe the same thing. It’s just that sometimes one of them is easier to use, sometimes the other.

One can give meaning to the question though if you look at particular systems, as for example the quark gluon plasma or a black hole, and ask for the number of dimensions that particles experience. This specification of particles is what makes the question meaningful because identifying particles isn’t always possible.

The theory for the quark gluon plasma is placed on the boundary because it would be described by the strongly coupled theory. So if you consider it to be part of your laboratory then you have located the lab, with yourself in it, on the boundary. However, the notion of ‘dimensions’ that we experience is tied to the freedom of particles to move around. This can be made more rigorous in the definition of ‘spectral dimension’ which measures, roughly speaking, in how many directions a particle can get lost. The very fact that makes a system strongly coupled though means that one can’t properly define single particles that travel freely. So while you can move around in the laboratory’s three spatial dimensions, the quark gluon plasma first has to be translated to the higher dimensional theory to even speak about individual particles moving. In that sense, part of the laboratory has become higher dimensional, indeed.
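
For what it’s worth, here is a minimal sketch of how a spectral dimension can be extracted in practice: let a random walker diffuse on a lattice and read off the dimension from how quickly the probability of returning to the starting point falls with diffusion time, P(σ) ~ σ^(-d_s/2). This is a generic toy on an ordinary lattice, not a calculation in any quantum gravity model.

```python
import math
import random

def return_probability(dim, steps, walkers=50000):
    """Fraction of random walks on a dim-dimensional lattice back at the origin after 'steps' steps."""
    returns = 0
    for _ in range(walkers):
        pos = [0] * dim
        for _ in range(steps):
            axis = random.randrange(dim)          # pick a random direction
            pos[axis] += random.choice((-1, 1))   # step forward or backward
        if all(x == 0 for x in pos):
            returns += 1
    return returns / walkers

# Estimate d_s from P(sigma) ~ sigma^(-d_s/2), comparing two diffusion times.
for dim in (2, 3):
    p_short, p_long = return_probability(dim, 20), return_probability(dim, 80)
    d_s = -2 * math.log(p_long / p_short) / math.log(80 / 20)
    print(f"lattice dimension {dim}: estimated spectral dimension ~ {d_s:.1f}")
```

On an ordinary lattice the spectral dimension just reproduces the lattice dimension; the point of the definition is that it remains meaningful when a notion of freely traveling particles does not.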

If you look at an astrophysical black hole however, then the situation is reversed. We know that particles in its vicinity are weakly coupled and experience three spatial dimensions. If you wanted to apply the duality in this case then we would be situated in the bulk and there would be lower-dimensional projections of us and the black hole on the boundary, constraining our freedom to move around, but in such a subtle way that we don’t notice. However, the bulk space-times that are relevant in the gauge-gravity duality are so-called Anti-de-Sitter spaces, and these always have a negative cosmological constant. The universe we inhabit however has to our best current knowledge a positive cosmological constant. So it is not clear that there actually is a dual system that can describe the black holes in our universe.

Many researchers are presently working on expanding the gauge-gravity duality to include spaces with a positive cosmological constant or none at all, but at least so far it isn’t clear that this works. So for now we do not know whether there exist projections of us in a lower-dimensional space-time.

4. How well does this duality work?

The applications of the gauge-gravity duality fall roughly into three large areas, plus a diversity of technical developments driving the general understanding of the theory. The three areas are the quark gluon plasma, strange metals, and black hole evaporation. In the former two cases our universe is on the boundary, in the latter we are in the bulk.

The studies of black hole evaporation are examinations of mathematical consistency conducted to unravel just exactly how information may escape a black hole, or what happens at the singularity. In this area there are presently more answers than there are questions. The applications of the duality to the quark gluon plasma initially caused a lot of excitement, but recently some skepticism has spread. It seems that the plasma is not as strongly coupled as originally thought, and using the duality is not as straightforward as hoped. The applications to strange metals and other classes of materials are making rapid progress as both analytical and numerical methods are being developed. The behavior of several observables has been qualitatively reproduced, but it is at present not very clear exactly which systems are the best to use. The space of models is still too big, which leaves too much freedom to make useful predictions. In summary, as the scientists say, “more work is needed”.

5. Does this have something to do with Stephen Hawking's recent proposal for how to solve the black hole information loss problem?

That’s what he says, yes. Essentially he is claiming that our universe has holographic properties even though it has a positive cosmological constant, and that the horizon of a black hole also serves as a surface that contains all the information of what happens in the full space-time. This would mean in particular that the horizon of a black hole keeps track of what fell into the black hole, and so nothing is really forever lost.

This by itself isn’t a new idea. What is new in this work with Malcolm Perry and Andrew Strominger is that they claim to have a way to store and release the information in a dynamical situation. Details of how this is supposed to work however are so far not clear. By and large the scientific community has reacted with much skepticism, not to mention annoyance over the announcement of an immature idea.

[This post previously appeared at Starts with a Bang.]