Thursday, October 29, 2015

What is basic science and what is it good for?

Basic science is, above all, a stupid word. It sounds like those onions we sliced in 8th grade. And if people don’t mistake “basic” for “everybody knows,” they might think instead it means “foundational,” that is, dedicated to questioning the present paradigms. But that’s not what the word refers to.

Basic science refers to research which is not pursued with the aim of producing new technologies; it is sometimes, more aptly, referred to as “curiosity driven” or “blue skies” research. The NSF calls it “transformative,” the ERC calls it “frontier” research. Quite possibly they don’t mean exactly the same, which is another reason why it’s a stupid word.

A few days ago, Matt Ridley wrote an article for the Wall Street Journal in which he argues that basic research, to the extent that it’s necessary at all, doesn’t need governmental funding. He believes that it is technology that drives science, not the other way round. “Deep scientific insights are the fruits that fall from the tree of technological change,” Ridley concludes. Apparently he has written a whole book on this theme, due to be published next week. The WSJ piece strikes me as shallow and deliberately provocative, published with the sole aim of drawing attention to his book, which I hope has more substance and not just more words.

The essence of the article seems to be that it’s hard to demonstrate a correlation, not to mention causation, between tax-funded basic science and economic growth. Instead, Ridley argues, in many examples scientific innovations originated not in one single place, but more or less simultaneously in various different places. He concludes that tax-funded research is unnecessary.

Leaving aside for a moment that measures of economic growth can mislead about a country’s prosperity, it is hardly surprising that a link between tax-funded basic research and economic growth is difficult to find. It must come as a shock to nationalists, but basic research is possibly the most international profession in existence. Ideas don’t stop at country borders. Consequently, to make use of basic research, you don’t need to finance it yourself. You can just wait until a breakthrough occurs elsewhere and then pay your people to jump on it. The main reason we so frequently see examples of simultaneous breakthroughs in different groups is that they build on more or less the same knowledge. Scientists can jump very quickly.

But the conclusion that this means one does not need to support basic research is just wrong. It’s a classic demonstration of the “free rider” problem. Your country can reap the benefits of basic research elsewhere, as long as somebody else does the thinking for you. But if every country did this, innovation would eventually run dry.

Besides this, the idea that technology drives science might have worked in the last century, but it no longer works today. The times when you could find new laws of nature by dabbling with some equipment in the lab are over. To make breakthroughs today, you need to know what to build, and you need to know how to analyze your data. Where will you get that knowledge if not from basic research?

The technologies we use today, the computer that you sit in front of – semiconductors, lasers, liquid crystal displays – are based on last century’s theories. We still reap the benefits. And we all do, regardless of whether our nation paid the salary of one of quantum mechanics’ founding fathers. But if we want progress to continue in the next century, we have to go beyond that. You need basic research to find out which direction is promising, which is a good investment. Otherwise you’ll waste lots of time and money.

There is a longer discussion to be had about whether some types of basic research have any practical use at all. It is questionable, for example, whether knowing about the accelerated expansion of the universe will ever lead to a better phone. To my mind, this materialistic focus is as depressing as it is meaningless. Sure, it would be nice if my damned phone battery wouldn’t die in the middle of a call, and, yeah, I want to live forever watching cat videos on my hoverboard. But I fail to see what it’s ultimately good for. The only meaning I can find in being thrown into this universe is to understand how it works and how we are part of it. To me, knowledge is an end unto itself. Keep your hoverboard, just tell me how to quantize gravity.

Here is a simple thought experiment. Suppose all tax-funded basic research were to cease tomorrow. What would go missing? No more stories about black holes, exoplanets, or loophole-free tests of quantum entanglement. No more string theory, no multiverses, no theories of everything, no higgsinos, no dark matter, no cosmic neutrinos, extra-dimensions, wormholes, or holographic universes. Except for a handful of lucky survivors at partly privately funded places – like Perimeter Institute, the KAVLI institutes, and some Templeton-funded initiatives, which could in no way continue all of this research – these fields would die quickly. The world would be a poorer place, one with no hope of ever understanding this amazing universe that we live in.

Democracy is a funny thing, you know, it’s kind of like an opinion poll. Basic research is tax-funded in all developed countries. Could there be any clearer expression of the people’s opinion? They say: we want to know. We want to know where we come from, and what we are made of, and what’s the fate of our universe. Yes, they say, we are willing to pay taxes for that, but please tell us. As someone who works in basic research, I see my task as delivering on this want.

Monday, October 26, 2015

Black holes and academic walls

Image credits: Paul Terry Sutton
According to Einstein you wouldn’t notice crossing a black hole horizon. But now researchers argue that a firewall or brick wall would be in your way. Have they entirely lost their minds?

Tl;dr: Yes.

It is hard, sometimes, to understand why anyone would waste time on a problem as academic as black hole information loss. And I say that as someone who spent a significant part of the last decade pushing this very problem around in my head. Don’t physicists have anything better to do, in a world that is suffering from war and disease, bad grammar even? What drives these researchers, other than the hope of making headlines for solving a 40-year-old conundrum?

Many physicists today work on topics that, like black hole information loss, seem entirely detached from reality. Black holes only succeed in destroying information once they entirely evaporate, and that won’t happen for the next 100 billion years or so. What drives these researchers is not making tangible use of their insights, but the recognition that someone today has to pave the way for the science that will become relevant in a hundred, a thousand, or ten thousand years from now. And as I scan the human mess in my news feed, the unearthly cleanliness of the argument, the seemingly inescapable logic leading to a paradox, admittedly only adds to its appeal.

If black hole information loss were a cosmic whodunit, then quantum theory would be the victim. Stephen Hawking demonstrated in the early 1970s that when one combines quantum theory with gravity, one finds that black holes must emit thermal radiation. This “Hawking radiation” is composed of particles that, besides their temperature, do not carry any information. And so, when a black hole entirely evaporates, all the information about what fell inside must ultimately be destroyed. But such destruction of information is incompatible with the very quantum theory one used to arrive at this conclusion. In quantum theory all processes can happen both forward and backward in time, but black hole evaporation, it seems, cannot be reversed.

This presented physicists with a major conundrum, because it demonstrated that gravity and quantum theory refused to combine. It didn’t help either to try to explain away the problem by alluding to the unknown theory of quantum gravity. Hawking radiation is not a quantum gravitational process, and while quantum gravity does eventually become important in the very late stages of a black hole’s evaporation, the argument goes that by this time it is too late to get all the information out.

The situation changed dramatically in the late 1990s, when Maldacena proposed that certain gravitational theories are equivalent to gauge theories. Discovered in string theory, this famed gauge-gravity correspondence, though still mathematically unproved, does away with the problem because whatever happens when a black hole evaporates is equivalently described in the gauge theory. The gauge theory however is known to not be capable of murdering information, thus implying that the problem doesn’t exist.

While the gauge-gravity correspondence convinced many physicists, including Stephen Hawking himself, that black holes do not destroy information, it did not shed much light on just exactly how the information escapes the black hole. Research continued, but complacency spread through the ranks of theoretical physicists. String theory, it seemed, had resolved the paradox, and it was only a matter of time until details would be understood.

But that wasn’t how things panned out. Instead, in 2012, a group of four physicists, Almheiri, Marolf, Polchinski, and Sully (AMPS) demonstrated that what was thought to be a solution is actually also inconsistent. Specifically they demonstrated that four assumptions, generally believed by most string theorists to all be correct, cannot in fact be simultaneously true. These four assumptions are that:
  1. Black holes don’t destroy information.
  2. The Standard Model of particle physics and General Relativity remain valid close to the black hole horizon.
  3. The amount of information stored inside a black hole is proportional to its surface area.
  4. An observer crossing the black hole horizon will not notice it.
The second assumption rephrases the statement that Hawking radiation is not a quantum gravitational effect. The third assumption is a conclusion drawn from calculations of the black hole microstates in string theory. The fourth assumption is Einstein’s equivalence principle. In a nutshell, AMPS say that at least one of these assumptions must be wrong. One of the witnesses is lying, but who?

In their paper, AMPS suggested, maybe not quite seriously, giving up on the least contested of these assumptions, number 4). Giving up on 4), the other three assumptions imply that an observer falling into the black hole would encounter a “firewall” and be burnt to ashes. The equivalence principle however is the central tenet of general relativity and giving it up really is the last resort.

For the uninitiated observer, the lying witness is obviously 3). In contrast to the other assumptions, which are consequences of theories we already know and have tested to high precision, number 3) comes from a so-far untested theory. So if one assumption has to be dropped, then maybe it is the assumption that string theory is right about the information content of black holes, but that option isn’t very popular with string theorists...

And so within a matter of months the hep-th category of the arxiv was cluttered with attempts to reconcile the disagreeable assumptions with one another. Proposed solutions included everything from just accepting the black hole firewall to the multiverse to elaborate thought experiments meant to demonstrate that an observer wouldn’t actually notice being burnt. Yes, that’s modern physics for you.

I too, of course, have an egg in the basket. I found the witnesses all to be convincing; none of them seemed to be lying. And taking them at face value, it finally occurred to me that what made the assumptions seemingly incompatible was an unstated fifth assumption. Just as witnesses’ accounts might suddenly all make sense once you realize the victim wasn’t killed at the same place the body was found, the four assumptions suddenly all make sense when you do not require the information to be saved in a particular way (namely, that the final state is a “typical” state). Instead, the requirement that energy be locally conserved near the horizon makes the firewall impossible, and at the same time shows exactly how black hole evaporation remains compatible with quantum theory.

I think nobody really liked my paper because it leads to the rather strange conclusion that somewhere near the horizon there is a boundary which alters the quantum theory, yet in a way that isn’t noticeable for any observer near the black hole. It is possible to measure its effects, but only at a far distance. And while my proposal did resolve the firewall conundrum, it didn’t do anything about the black hole information loss problem. I mentioned in a side-note that in principle one could use this boundary to hand information into the outgoing radiation, but that would still not explain how the information would get into the boundary to begin with.

After publishing this paper, I vowed once again to never think about black hole evaporation again. But then last month an arxiv preprint appeared by ‘t Hooft, one of the first to dabble in black hole thermodynamics. In his new paper, ‘t Hooft proposes that the black hole horizon acts like a boundary that reflects information, a “brick wall” as New Scientist calls it. This new idea was inspired by Stephen Hawking’s recent suggestion that much of the information falling into black holes continues to be stored on the horizon. If that is so, then giving the horizon a chance to act can allow the information to leave again.

I don’t think that bricks are much of an improvement over fire and I’m pretty sure that this exact realization of the idea won’t hold up. But after all the confusion, this might eventually allow us to better understand just exactly how the horizon interacts with the Hawking radiation and how it might manage to encode information in it.

Fast forward a thousand years. At the end of the road there is a theory of quantum gravity that will allow us to understand the behavior of space and time on shortest distance scales and, so many hope, the origin of quantum theory itself. Progress might seem incremental and sometimes history leads us in circles, but what keeps physicists going is the knowledge that there must be a solution.

[This post previously appeared at Starts with a Bang.]

Monday, October 19, 2015

Book review: Spooky Action at a Distance by George Musser

Spooky Action at a Distance: The Phenomenon That Reimagines Space and Time--and What It Means for Black Holes, the Big Bang, and Theories of Everything
By George Musser
Scientific American, To be released November 3, 2015

“Spooky Action at a Distance” explores the question Why aren’t you here? And if you aren’t here, what is it that prevents you from being here? Trying to answer this simple-sounding question leads you down a rabbit hole where you have to discuss the nature of space and time with many-world proponents and philosophers. In his book, George reports back what he’s found down in the rabbit hole.

Locality and non-locality are topics as confusing as they are controversial, both in- and outside the community, and George’s book is a great introduction to an intriguing development in contemporary physics. It’s a courageous book. I can only imagine how much headache writing it must have been, after I once organized a workshop on nonlocality and realized that no two people could agree on what they even meant by the word.

George is a very gifted writer. He gets across the most relevant concepts the reader needs to know on a nontechnical level with a light and unobtrusive humor. The historical background is nicely woven together with the narrative, and the reader gets to meet many researchers in the field, Steve Giddings, Fotini Markopoulou, and Nima Arkani-Hamed, to only mention a few.

In his book, George lays out how the attitude of scientists towards nonlocality has gone from acceptance to rejection and makes a case that now the pendulum is swinging back to acceptance again. I think he is right that this is the current trend (thus the workshop).

I found the book somewhat challenging to read because I was constantly trying to translate George’s metaphors back into equations and I didn’t always succeed. But then that’s a general problem I have with popular science books and I can’t blame George for it. I have another complaint though, which is that George covers a lot of different research in rapid succession without adding qualifiers about these research programs’ shortcomings. There’s quantum graphity and string theory and black holes in AdS and causal sets and then there’s many worlds. The reader might be left with the mistaken impression that these topics are somehow all related to each other.

Spooky Action at a Distance starts out as an Ode to Steve Giddings and ends as a Symphony for Arkani-Hamed. For my taste it’s a little too heavy on person-stories, but then that seems to be the style of science writing today. In summary, I can only say it’s a great book, so go buy it, you won’t regret it.

[Disclaimers: Free review copy; I know the author.]

Fade-out ramble: You shouldn’t judge a book by its subtitle, really, but whoever is responsible for this title-inflation, please make it stop. What’s next? Print the whole damn book on the cover?

Monday, October 12, 2015

A newly proposed table-top experiment might be able to demonstrate that gravity is quantized

Tl;dr: Experimentalists are bringing increasingly massive systems into quantum states. They are now close to masses where they might be able to just measure what happens to the gravitational field.

Quantum effects of gravity are weak, so weak that they are widely believed to not be measurable at all. Indeed, Freeman Dyson is fond of saying that a theory of quantum gravity is entirely unnecessary, arguing that we could never observe its effects anyway. Theorists of course disagree, and not just because they’re being paid to figure out the very theory Dyson deems unnecessary. Measurable or not, they search for a quantized version of gravity because the existing description of nature is not merely incomplete – it is far worse, it contains internal contradictions, meaning we know it is wrong.

Take the century-old double-slit experiment, the prime example of quantum behavior. A single electron that goes through the double slit is able to interact with itself, as if it went through both slits at once. Its behavior is like that of a wave which overlaps with itself after passing an obstacle. And yet, when you measure the electron after it went through the slit, it makes a dot on a screen, like a particle would. The wave-like behavior again shows up if one measures the distribution of many electrons that passed the slit. This and many other experiments demonstrate that the electron is neither a particle nor a wave – it is described by a wave-function from which we obtain a probability distribution, a formulation that is the core of quantum mechanics.
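The distinction between adding amplitudes and adding probabilities is easy to make concrete. Below is a numerical cartoon of the double slit, with made-up values for the wavelength, slit separation, and screen distance – not a simulation of any actual experiment:

```python
import numpy as np

# Numerical cartoon of the double slit: each slit contributes a complex
# amplitude at a screen position x. The quantum prediction squares the
# *sum* of the amplitudes; a classical particle picture sums the *squares*.
# Wavelength, slit separation, and screen distance are made-up values.

wavelength = 1.0
d = 5.0                       # slit separation
L = 100.0                     # distance from slits to screen
k = 2.0 * np.pi / wavelength

x = np.linspace(-30.0, 30.0, 201)   # positions on the screen (x[100] == 0)

r1 = np.sqrt(L**2 + (x - d / 2) ** 2)   # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)   # path length from slit 2

psi1 = np.exp(1j * k * r1) / r1         # amplitude through slit 1
psi2 = np.exp(1j * k * r2) / r2         # amplitude through slit 2

quantum = np.abs(psi1 + psi2) ** 2                 # interference fringes
classical = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # no cross term
```

At the center of the screen the two paths have equal length, so the amplitudes add in phase and the quantum intensity is twice the classical one; away from the center the cross term oscillates between enhancement and cancellation, producing the fringes.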

Well understood as this is, it leads to a so-far unsolved conundrum.

The most relevant property of the electron’s quantum behavior is that it can go through both slits at once. It’s not that half of the electron goes one way and half the other. Neither does the electron sometimes take this slit and sometimes the other. Impossible as it sounds, the electron goes fully through both slits at the same time, in a state referred to as quantum superposition.

Electrons carry a charge and so they have an electric field. This electric field also has quantum properties and moves along with the electron in its own quantum superposition. The electron also has a mass. Mass generates a gravitational field, so what happens to the gravitational field? You would expect it to also move along with the electron, and go through both slits in a quantum superposition. But that can only work if gravity is quantized too. According to Einstein’s theory of General Relativity though, it’s not. So we simply don’t know what happens to the gravitational field unless we find a theory of quantum gravity.

It’s been 80 years since the question was first raised, but we still don’t know what’s the right theory. The main problem is that gravity is an exceedingly weak force. We notice it so prominently in our daily life only because, in contrast to the other interactions, it cannot be neutralized. But the very reason that planet Earth doesn’t collapse to a black hole is that much stronger forces than gravity prevent this from happening. The electromagnetic force, the strong nuclear force, and even the supposedly weak nuclear force, are all much more powerful than gravity.

For the experimentalist this means they can either have an object heavy enough that its gravitational field can be measured, or an object light enough that its quantum properties can be measured. But not both at once.

At least that was the case so far. But the last decade has seen enormous progress in experimental techniques to bring heavier and heavier objects into quantum states and measure their behavior. And in a recent paper a group of researchers from Italy and the UK propose an experiment that might just be the first feasible measurement of the gravitational field of a quantum object.

Almost all researchers who work on the theory of quantum gravity expect that the gravitational field of the electron behaves like its electric field, that is, it has quantum properties. They are convinced of this because we have a well-working theory to describe this situation. Yes, I know, they told you nobody has quantized gravity, but that isn’t true. Gravity was quantized in the 1960s by DeWitt, Feynman, and others, using a method known as perturbative quantization. However, the result one gets with this method only works when the gravitational field is weak, and it breaks down when gravity becomes strong, such as at the Big Bang or inside black holes. In other words, this approach, while well understood, fails us exactly in the situations we are interested in the most.

Because of this failure in strong gravitational fields, perturbatively quantized gravity cannot be a fundamentally valid theory; it requires completion. It is this completion that is normally referred to as “quantum gravity.” However, when gravitational fields are weak, which is definitely the case for the little electron, the method works perfectly fine. Whether it is realized in nature though, nobody knows.

If the gravitational field is not quantized, one instead has a theory known as “semi-classical gravity,” in which matter is quantized but gravity isn’t. Though nobody can make much sense of this theory conceptually, it’s infuriatingly hard to disprove. If the gravitational field of the electron remained classical, the field would follow the probability of the electron taking either slit, rather than itself going through both slits with this probability.

To see the difference, consider you put a (preferably uncharged) test particle in the middle between the slits to see where the gravitational pull goes. If the gravitational field is quantized, then in half of the cases when the electron goes through the slit, the test particle will move left, in the other half of cases it would move right (it would also destroy the interference pattern). If the gravitational field is classical however, the test particle won’t move because it’s pulled equally to both sides.
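This thought experiment can be put in the form of a little simulation. Everything here is a cartoon with made-up numbers – a real electron’s gravitational pull is of course hopelessly small:

```python
import numpy as np

# Cartoon of the test-particle argument, with made-up numbers.
# Quantized gravity: in each run the field goes through one slit along
# with the electron, and the test particle midway between the slits gets
# a full kick toward that slit. Semi-classical gravity: the field is
# sourced by the probability distribution, half at each slit, so the
# kicks cancel in every single run.

rng = np.random.default_rng(0)
pull = 1.0        # magnitude of the kick toward the occupied slit (arbitrary)
runs = 10_000

# quantized: a full kick, left (-) or right (+), with probability 1/2 each
quantized_kicks = pull * rng.choice([-1.0, 1.0], size=runs)

# semi-classical: half a kick from each side in every run, i.e. zero
semiclassical_kicks = np.zeros(runs)

# The *average* kick vanishes in both cases, but run by run the two
# scenarios are easy to tell apart: quantized kicks always have magnitude
# `pull`, semi-classical kicks vanish identically.
```

The point of the sketch is that averaging over many runs hides the difference; only a per-run measurement distinguishes the two cases.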

So the difference between quantized and semi-classical gravity is observable. Unfortunately, even for the most massive objects that can be pushed through double slits, like large molecules, the gravitational field is far too weak to be measurable.

In the new paper, the researchers propose a different method. They consider a tiny charged disk of osmium with a mass of about a nanogram, held in a trap by electromagnetic fields. The disk is cooled down to around a hundred mK, which brings it into its lowest possible energy state. Above this ground level there are discrete energy levels for the disk, much like the electron orbits around the atomic nucleus, except that the level spacing is tiny. The important point is that the exact energy values of these levels depend on the gravitational self-interaction of the whole object. Measure the spacing of the energy levels precisely enough, and you can figure out whether the gravitational field was quantized or not.

Figure 1 from arXiv:1510.01696. Depicted are the energy levels of the disk in the potential, and how they shift with the classical gravitational self-interaction taken into account, for two different scenarios of the distribution of the disk’s wave-function.

For this calculation they use the Schrödinger-Newton equation, which is the non-relativistic limit of semi-classical gravity incorporated in quantum mechanics. In an accompanying paper they have worked out the description of multi-particle systems in this framework, and demonstrated how the system approximately decomposes into a center-of-mass variable and the motions relative to the center of mass. They then calculate how the density distribution is affected by the gravitational field caused by its own probability distribution, and finally the energy levels of the system.
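The logic – a classical potential sourced by the wave-function’s own probability density shifts the energy levels – can be illustrated in a heavily simplified one-dimensional toy model. The trap, the softened interaction kernel standing in for Newtonian gravity, the coupling strength, and the units are all invented for illustration; this is not the multi-particle osmium calculation of the paper:

```python
import numpy as np

# 1D caricature of the Schroedinger-Newton idea: solve the Schroedinger
# equation in a trap, let the resulting probability density source a
# classical attractive potential (a softened stand-in for gravity), and
# see how the energy levels shift. All numbers are made up.

n = 400
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

def solve(v):
    """Eigenvalues/eigenvectors of -1/2 d^2/dx^2 + v via finite differences."""
    h = np.diag(1.0 / dx**2 + v)
    h += np.diag(-0.5 / dx**2 * np.ones(n - 1), 1)
    h += np.diag(-0.5 / dx**2 * np.ones(n - 1), -1)
    return np.linalg.eigh(h)

v_trap = 0.5 * x**2                 # harmonic trap (levels 0.5, 1.5, ...)
e0, psi0 = solve(v_trap)

rho = np.abs(psi0[:, 0]) ** 2 / dx  # ground-state probability density

g = 0.05                            # made-up coupling strength
kernel = -g / np.sqrt((x[:, None] - x[None, :]) ** 2 + 1.0)  # softened -1/|x-y|
v_self = kernel @ (rho * dx)        # classical potential sourced by the density

e1, _ = solve(v_trap + v_self)

# change of the level spacing caused by the classical self-interaction
shift_01 = (e1[1] - e1[0]) - (e0[1] - e0[0])
```

With the attractive self-interaction switched on, all levels move down and the spacing between them changes slightly – in the actual proposal it is the presence or absence of this kind of spacing shift that distinguishes semi-classical from quantized gravity.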

I haven’t checked this calculation in detail, but it seems both plausible that the effect should be present, and that it is large enough to potentially be measurable. I don’t know much about these types of experiments, but two of the authors of the paper, Hendrik Ulbricht and James Bateman, are experimentalists and I trust they know what current technology allows to measure.

Suppose they make this measurement and, as expected, do not find the additional shift of energy levels that should exist if gravity were unquantized. This would not, strictly speaking, demonstrate that perturbatively quantized gravity is correct, but merely that the Schrödinger-Newton equation is incorrect. However, since these are the only two alternatives I am aware of, it would in practice be the first experimental confirmation that gravity is indeed quantized.

Tuesday, October 06, 2015

Repost in celebration of the 2015 Nobel Prize in Physics: Neutrino masses and angles

It was just announced that this year's Nobel Prize in physics goes to Takaaki Kajita from the Super-Kamiokande Collaboration and Arthur B. McDonald from the Sudbury Neutrino Observatory (SNO) Collaboration “for the discovery of neutrino oscillations, which shows that neutrinos have mass.” On this occasion, I am reposting a brief summary of the evidence for neutrino masses that I wrote in 2007.

Neutrinos come in three known flavors. These flavors correspond to the three charged leptons, the electron, the muon and the tau. The neutrino flavors can change during the neutrino's travel, and one flavor can be converted into another. This happens periodically. The neutrino flavor oscillations have a certain wavelength, and an amplitude which sets the probability of the change to happen. The amplitude is usually quantified in a mixing angle θ. In this, sin²(2θ) = 1, or θ = π/4, corresponds to maximal mixing, which means one flavor changes completely into another, and then back.

This neutrino mixing happens when the mass eigenstates of the Hamiltonian are not the same as the flavor eigenstates. The wavelength λ of the oscillation turns out to depend (in the relativistic limit) on the difference in the squared masses Δm² (not the square of the difference!) and the neutrino's energy E as λ = 4πE/Δm². The larger the energy of the neutrinos, the larger the wavelength. For a source with a spectrum of different energies around some mean value, one has a superposition of various wavelengths. On distances larger than the typical oscillation length corresponding to the mean energy, this averages out the oscillation.
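For the two-flavor case this can be played with numerically, using the standard survival probability P = 1 − sin²(2θ)·sin²(1.27·Δm²[eV²]·L[km]/E[GeV]). The parameter values below are roughly the ones quoted later in this post; this is an illustration, not a fit to any data:

```python
import numpy as np

# Two-flavor oscillation sketch with the standard survival probability
#   P(nu_e -> nu_e) = 1 - sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])
# Parameter values are roughly those quoted in the post; illustration only.

def survival_probability(L_km, E_GeV, dm2_eV2, theta_rad):
    """Probability that an electron (anti-)neutrino keeps its flavor
    after traveling L_km with energy E_GeV."""
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return 1.0 - np.sin(2.0 * theta_rad) ** 2 * np.sin(phase) ** 2

dm2 = 8e-5                  # eV^2, squared-mass difference
theta = np.deg2rad(33.9)    # mixing angle
E = 3e-3                    # a 3 MeV reactor anti-neutrino, in GeV

# distance after which the oscillation repeats (the wavelength lambda,
# i.e. 4*pi*E/dm2 in natural units, converted to km by the 1.27 factor)
osc_length_km = np.pi * E / (1.27 * dm2)

# a single sharp energy oscillates; averaging over a spread of energies,
# as described in the text, washes the oscillation out into an overall
# suppression of the electron-neutrino flux
energies = np.linspace(2e-3, 8e-3, 2000)   # 2-8 MeV, in GeV
p_avg = survival_probability(180.0, energies, dm2, theta).mean()
```

For these parameters the oscillation length comes out to roughly a hundred kilometers, which is why a baseline of ~180 km with MeV-scale reactor anti-neutrinos is sensitive to this Δm².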

The plot below from the KamLAND Collaboration shows an example of an experiment to test neutrino flavor conversion. The KamLAND neutrino sources are several Japanese nuclear reactors that emit electron anti-neutrinos with a very well-known energy and power spectrum, with a mean energy of a few MeV. The average distance to the reactors is ~180 km. The plot shows the ratio of the observed electron anti-neutrinos to the number expected without oscillations. The KamLAND result is the red dot. The other data points were earlier experiments in other locations that did not find a drop. The dotted line is the best fit to this data.

[Figure: KamLAND Collaboration]

One sees however that there is some kind of redundancy in this fit, since one can shift around the wavelength and stay within the errorbars. These reactor data however are only one of the measurements of neutrino oscillations that have been made during the last decades. There are a lot of other experiments that have measured deficits in the expected solar and atmospheric neutrino flux. Especially important in this regard was the SNO data, which confirmed that not only were there fewer solar electron neutrinos than expected, but that they actually showed up in the detector with a different flavor, and the KamLAND analysis of the energy spectrum, which clearly favors oscillation over decay.

The plot below depicts all the currently available data for electron neutrino oscillations, which places Δm² at around 8×10⁻⁵ eV², and θ at about 33.9° (i.e. the mixing is with high confidence not maximal).

[Figure: Hitoshi Murayama, see here for references on the used data]

The lines on the top indicate excluded regions from earlier experiments, the filled regions are allowed values. You see the KamLAND 95%CL area in red, and SNO in brown. The remaining island in the overlap is pretty much constrained by now. Given that neutrinos are such elusive particles, and this mass scale is incredibly tiny, I am always impressed by the precision of these experiments!

To fit the oscillations between all three known neutrino flavors, one needs three mixing angles and two mass differences (the overall mass scale factors out and does not enter; neutrino oscillations are thus not sensitive to the total neutrino masses). All the presently available data has allowed us to tightly constrain the mixing angles and squared-mass differences. The only outlier (which was thus excluded from the global fits) is, famously, LSND (see also the above plot), so MiniBooNE was designed to check on their results. For more info on MiniBooNE, see Heather Ray's excellent post at CV.

This post originally appeared in December 2007 as part of our advent calendar A Plottl A Day.

Friday, October 02, 2015

Book Review: “A Beautiful Question” by Frank Wilczek

A Beautiful Question: Finding Nature's Deep Design
By Frank Wilczek
Penguin Press (July 14, 2015)

My four-year-old daughter recently discovered that equilateral triangles combine to larger equilateral triangles. When I caught a distracted glimpse of her artwork, I thought she had drawn the baryon decuplet, an often-used diagram to depict relations between particles composed of three quarks.

The baryon decuplet doesn’t come easy to us, but the beauty of symmetry does, and how amazing that physicists have found it tightly woven into the fabric of nature itself: Both the standard model of particle physics and General Relativity, our currently most fundamental theories, are in essence mathematically precise implementations of symmetry requirements. But next to being instrumental for the accurate description of nature, the appeal of symmetries is a human universal that resonates in art and design throughout cultures. For the physicist, it is impossible not to note the link, not to see the equations behind the art. It may be a curse or it may be a blessing.

For Frank Wilczek it clearly is a blessing. In his most recent book “A Beautiful Question,” he tells the success story of symmetries in physics, and goes on to answer his question of whether “the world embodies beautiful ideas” with a clear “Yes.”

Lara’s decuplet
Wilczek starts from the discovery of basic mathematical relationships like Pythagoras’ theorem (not shying away from explaining how to prove it!) and proceeds through the history of physics along selected milestones such as musical harmonies, the nature of light and the basics of optics, Newtonian gravity and its extension to General Relativity, quantum mechanics, and ultimately the standard model of particle physics. He briefly touches on condensed matter physics, graphene in particular, and has an interesting digression about the human eye’s limited ability to decode visual information (yes, the shrimp again).

In the last chapters of the book, Wilczek goes into quite some detail about the particle content of the standard model, and in just which way it seems to be not as beautiful as one may have hoped. He introduces the reader to extended theories, grand unification and supersymmetry, invented to remedy the supposed shortcomings of the standard model. The reader who is not familiar with the quantum numbers used to classify elementary particles will likely find this chapter somewhat demanding. But whether or not one makes the effort to follow the details, Wilczek gets his message across clearly: Striving for beauty in natural law has been a useful guide, and he expects it to remain one, even though he is careful to note that relying on beauty has on various occasions led to plainly wrong theories, such as the attempt to explain planetary orbits with the Platonic solids, or the idea of developing a theory of atoms based on the mathematics of knots.

“A Beautiful Question” is a skillfully written reflection, or “meditation” as Wilczek puts it. It is well structured and accompanied by many figures, including two inserts with color prints. The book also contains an extensive glossary, recommendations for further reading, and a timeline of the discoveries mentioned in the text.

My husband’s decuplet.
The content of the book is unique in the genre. David Goldberg’s book “The Universe in the Rearview Mirror: How Hidden Symmetries Shape Reality,” for example, also discusses the role of symmetries in fundamental physics, but Wilczek gives more space to the connection between aesthetics in art and science. “A Beautiful Question” picks up and expands on the theme of Steven Weinberg’s 1992 book “Dreams of a Final Theory,” which also expounded the relevance of beauty in the development of physical theories. More than 20 years have passed, but the dream is still as elusive today as it was back then.

For all his elaboration on the beauty of symmetry though, Wilczek’s book falls short of spelling out the main conundrum physicists face today: We have no reason to be confident that the laws of nature which we have yet to discover will conform to the human sense of beauty. Neither does he spend many words on aspects of beauty beyond symmetry; Wilczek only briefly touches on fractals, and never goes into the rich appeal of chaos and complexity.

My mother used to say that “symmetry is the art of the dumb,” which is maybe a somewhat too harsh criticism of the standard model. But seeing that reliance on beauty has not helped us within the last 20 years, maybe it is time to consider that the beauty of the answers might not reveal itself as effortlessly as the tiling of the plane does to a four-year-old. Maybe the inevitable subjectivity in our sense of aesthetic appeal that has served us well so far is about to turn from a blessing to a curse, misleading us as to where the answers lie.

Wilczek’s book contains something for every reader, whether that is the physicist interested to learn how a Nobel Prize winner thinks of the connection between ideas and reality, or the layman wanting to know more about the structure of fundamental law. “A Beautiful Question” reminds us of the many ways that science connects to the arts, and invites us to marvel at the success our species has had in unraveling the mysteries of nature.

[An edited version of this review appeared in the October issue of Physics Today.]

Service Announcement: Backreaction now on facebook!

Over the years the discussion of my blogposts has shifted over to facebook. To follow this trend and to make it easier for you to engage, I have now set up a facebook page for this blog. Just "like" the page to get the newest blogposts and other links that I post :)

Thursday, October 01, 2015

When string theorists are out of luck, will Loop Quantum Gravity come to the rescue?

Tl;dr: I don’t think they want rescuing.

String theorists and researchers working on loop quantum gravity (LQG) each like to point out how their own attempt to quantize gravity is better than the other’s. In the end though, they’re both trying to achieve the same thing – consistently combining quantum field theory with gravity – and it is hard to pin down just exactly what makes strings and loops incompatible. Other than egos, that is.

The obvious difference used to be that LQG works only in 4 dimensions whereas string theory works only in 10, and that LQG doesn’t allow for supersymmetry, which is a consequence of quantizing strings. However, several years ago the LQG framework was extended to higher dimensions, and it can now also include supergravity, so that objection is gone.

Then there’s the issue of Lorentz-invariance, which is respected in string theory, but whose fate in LQG has been the subject of much debate. Recently though, some researchers working on LQG have argued that Lorentz-invariance, used as a constraint, leads to requirements on the particle interactions, which then have to become similar to some limits found in string theory. This should come as no surprise to string theorists, who have been claiming for decades that there is one and only one way to combine all the known particle interactions...

Two doesn’t make a trend, but I have a third, which is a recent paper that appeared on the arXiv:
Bodendorfer argues in his paper that loop quantization might be useful for calculations in supergravity and thus relevant for the AdS/CFT duality.

This duality relates certain types of gauge theories – similar to those used in the standard model – with string theories. In the last decade, the duality has become exceedingly popular because it provides an alternative to calculations which are difficult or impossible in the gauge theory. The duality is normally used only in the limit where one has classical (super)gravity (λ to ∞) and an infinite number of color charges (Nc to ∞). This limit is reasonably well understood. Most string theorists however believe in the full conjecture, which is that the duality remains valid for all values of these parameters. The problem is though, if one does not work in this limit, it is darned hard to calculate anything.

A string theorist, they joke, is someone who thinks three is infinitely large. Being able to deal with a finite number of color charges is relevant for applications because the strong nuclear force has only 3 colors. If one keeps the size of the space-time fixed relative to the string length (which corresponds to fixed λ), a finite Nc however means taking string effects into account, and since the string coupling gs ~ λ/Nc goes to infinity with λ when Nc remains finite, this is a badly understood limit.
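The parameter relations invoked here can be summarized as follows (I omit numerical factors, which depend on conventions):

```latex
\lambda \sim g_s N_c
\qquad\Longrightarrow\qquad
g_s \sim \frac{\lambda}{N_c}
\;\xrightarrow{\;\lambda \to \infty,\; N_c \text{ fixed}\;}\; \infty ,
```

so keeping the number of colors finite while cranking up the 't Hooft coupling λ necessarily pushes the string coupling gs into the strongly coupled, badly understood regime.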

In his paper, Bodendorfer looks at the limit of finite Nc and λ to infinity. It’s a clever limit in that it gets rid of the string excitations, and instead moves the problem of small color charges into the realm of super-quantum gravity. Loop quantum gravity is by design a non-perturbative quantization, so it seems ideally suited to investigate this parameter range where string theorists don’t know what to do. But it’s also a strange limit in that I don’t see how to get back the perturbative limit and classical gravity once one has pushed gs to infinity. (If you have more insight than me, please leave a comment.)

In any case, the connection Bodendorfer makes in his paper is that the limit of Nc to ∞ can also be obtained in LQG by a suitable scaling of the spin network. In LQG one works with a graph that carries a representation label, l. The graph describes space-time, and this label enters the spectrum of the area operator, so that the average quantum of area increases with the label. When one keeps the network fixed, the limit of large l then blows up the area quanta and thus the whole space, which corresponds to the limit of Nc to infinity.
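For orientation, the textbook form of the LQG area spectrum makes this scaling explicit (I quote the standard result with the Barbero-Immirzi parameter γ and half-integer labels j on the edges puncturing a surface, which the discussion above calls l):

```latex
A = 8\pi \gamma \, \ell_P^2 \sum_i \sqrt{j_i (j_i + 1)} ,
```

so scaling up the labels on a fixed graph scales up every quantum of area, and with it the total space, without changing the combinatorics of the network.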

So far, so good. If LQG could now be used to calculate certain observables on the gravity side, then one could further employ the duality to obtain the corresponding observables in the gauge theory. The key question though is whether the loop quantization actually reproduces the same limit that one would obtain in string theory. I am highly skeptical that this is indeed the case. Suppose it was. This would mean that LQG, like string theory, must have a dual description as a gauge theory even outside the classical limit in which they both agree (and they had better agree there). The supersymmetric version of LQG used here has the matter content of supergravity. But it is missing all the framework that in string theory eventually gives rise to branes (and stacks thereof) and compactifications, which seem essential to obtain the duality to begin with.

And then there is the problem that in LQG it isn’t well understood how to get back classical gravity in the continuum limit, which Bodendorfer kind of assumes to be the case. If that doesn’t work, then we don’t even know whether in the classical limit the two descriptions actually agree.

Despite my skepticism, I think this is an important contribution. In the absence of experimental guidance, the only way we can find out which theory of quantum gravity is the correct description of nature is to demonstrate that there is only one way to quantize gravity that reproduces General Relativity and the Standard Model in the suitable limits while being UV-finite. Studying how the known approaches do or don’t relate to each other is a step towards understanding whether one has any options in the quantization, or whether we do indeed already have enough data to uniquely identify the sought-after theory.

Summary: It’s good someone is thinking about this. Even better this someone isn’t me. For a theory that has only one parameter, it seems to have a lot of parameters.