Friday, June 24, 2016

Where can new physics hide?

Also an acronym for “Not Even Wrong.”

The year is 2016, and physicists are restless. Four years ago, the LHC confirmed the Higgs boson, the last outstanding prediction of the standard model. The chances were good, so they thought, that the LHC would also discover other new particles – naturalness seemed to demand it. But their hopes were disappointed.

The standard model and general relativity do a great job, but physicists know this can’t be it. Or at least they think they know: The theories are incomplete, not only disagreeable and staring each other in the face without talking, but inadmissibly wrong, giving rise to paradoxes with no known cure. There has to be more to find, somewhere. But where?

The hiding places for novel phenomena are getting smaller. But physicists haven’t yet exhausted their options. Here are the most promising areas where they currently search:

1. Weak Coupling

Particle collisions at high energies, like those reached at the LHC, can produce all existing particles up to the energy that the colliding particles had. The rate at which new particles are produced, however, depends on how strongly they couple to the particles that were brought to collision (for the LHC that’s protons, or rather their constituents, quarks and gluons). A particle that couples very weakly might be produced so rarely that it could have gone unnoticed so far.
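To make the scaling explicit with a back-of-the-envelope sketch (not tied to any particular model): for a new particle X produced through a coupling of strength g, the leading-order production cross section scales with the square of the coupling, and the expected number of events is that cross section times the integrated luminosity,

\[ \sigma(pp \to X + \ldots) \propto g^2\,, \qquad N_{\rm events} \approx \sigma \times L_{\rm int}\,. \]

Make g small enough and the expected event count drops below anything the detectors could have noticed, even if the particle is well within the accessible energy range.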

Physicists have proposed many new particles which fall into this category because weakly interacting stuff generally looks a lot like dark matter. Most notable are the weakly interacting massive particles (WIMPs), sterile neutrinos (neutrinos which don’t couple to the known leptons), and axions (proposed to solve the strong CP problem and also a dark matter candidate).

These particles are being looked for both by direct detection measurements – monitoring large tanks in underground mines for rare interactions – and by looking out for unexplained astrophysical processes that could make for an indirect signal.

2. High Energies

If the particles are not of the weakly interacting type, we would have noticed them already, unless their mass is beyond the energy that we have reached so far with particle colliders. In this category we find all the supersymmetric partner particles, which are much heavier than the standard model particles because supersymmetry is broken. Excitations of particles that exist in models with compactified extra dimensions could also hide at high energies. These excitations are similar to higher harmonics of a string and show up at certain discrete energy levels which depend on the size of the extra dimension.
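For illustration, in the textbook case of a single extra dimension compactified on a circle of radius R, these excitations form a tower whose masses are set by the inverse size of the extra dimension (a standard Kaluza-Klein result, quoted here only to show where the discrete levels come from):

\[ m_n \approx \sqrt{m_0^2 + \frac{n^2}{R^2}}\,, \qquad n = 0, 1, 2, \ldots \]

in natural units. The smaller the extra dimension, the heavier the first excitation, and the more collision energy is needed to produce it.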

Strictly speaking, it isn’t the mass that is relevant to the question whether a particle can be discovered, but the energy necessary to produce the particles, which includes binding energy. An interaction like the strong nuclear force, for example, displays “confinement,” which means that it takes a lot of energy to tear quarks apart even though their masses are not all that large. Hence, quarks could have constituents – often called “preons” – that have an interaction – dubbed “technicolor” – similar to the strong nuclear force. The most obvious models of technicolor ran into conflict with data decades ago. The idea however isn’t entirely dead, and though the surviving models aren’t presently particularly popular, some variants are still viable.

These phenomena are being looked for at the LHC and also in highly energetic cosmic ray showers.

3. High Precision

High precision tests of standard model processes are complementary to high energy measurements. They can be sensitive to the tiniest effects stemming from virtual particles with energies too high to be produced at colliders, but which still make a contribution at lower energies due to quantum effects. Examples are proton decay, neutron-antineutron oscillation, the muon g-2, the neutron electric dipole moment, or kaon oscillations. There are existing experiments for all of these, searching for deviations from the standard model, and the precision of these measurements is constantly increasing.
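A rough way to see why precision can compete with raw energy: if new physics sits at a mass scale Λ far above collider reach, its virtual contributions at low energies are suppressed by powers of the probed energy over Λ. Proton decay is the classic example; mediated by a dimension-six operator, the decay rate scales roughly as

\[ \Gamma \sim \frac{m_p^5}{\Lambda^4}\,, \]

so improving the experimental bound on the proton lifetime by a few orders of magnitude pushes the sensitivity to values of Λ that no collider will reach any time soon. (This is a generic effective-field-theory estimate, not a statement about any specific model.)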

A somewhat different high precision test is the search for neutrinoless double-beta decay, which would demonstrate that neutrinos are Majorana particles, an entirely new type of particle. (When it comes to fundamental particles, that is. Majorana particles have recently been produced as emergent excitations in condensed matter systems.)

4. Long ago

In the early universe, matter was much denser and hotter than we can hope to ever achieve in our particle colliders. Hence, signatures left over from this time can deliver a bounty of new insights. The temperature fluctuations in the cosmic microwave background (B-modes and non-Gaussianities) may be able to test scenarios of inflation or its alternatives (like phase transitions from a non-geometric phase), whether our universe had a big bounce instead of a big bang, and – with some optimism – even whether gravity was quantized back then.

5. Far away

Some signatures of new physics appear at long distances rather than at short ones. One outstanding question, for example, is the shape of the universe: Is it really infinitely large, or does it close back onto itself? And if it does, then how does it do this? One can study these questions by looking for repeating patterns in the temperature fluctuations of the cosmic microwave background (CMB). If we live in a multiverse, it might occasionally happen that two universes collide, and this too would leave a signal in the CMB.

New insights might also hide in some of the well-known problems with the cosmological concordance model, such as the overly pronounced galaxy cusps or the overabundance of dwarf galaxies, neither of which fits well with observations. It is widely believed that these problems are numerical issues or due to a lack of understanding of astrophysical processes, and not pointers to something fundamentally new. But who knows?

Another novel phenomenon that would become noticeable at long distances is a fifth force, which would lead to subtle deviations from general relativity. This might have all kinds of effects, from violations of the equivalence principle to a time-dependence of dark energy. Hence, there are experiments testing the equivalence principle and the constancy of dark energy to ever higher precision.

6. Right here

Not all experiments are huge and expensive. While tabletop discoveries have become increasingly unlikely simply because we’ve pretty much tried all that could be done, there are still areas where small-scale lab experiments reach into unknown territory. This is notably the case in the foundations of quantum mechanics, where nanoscale devices, single-photon sources and detectors, and increasingly sophisticated noise-control techniques have enabled previously impossible experiments. Maybe one day we’ll be able to settle the dispute over the “correct” interpretation of quantum mechanics simply by measuring which one is right.

So, physics isn’t over yet. It has become more difficult to test new fundamental theories, but we are pushing the limits in many currently running experiments.

[This post previously appeared on Starts With a Bang.]

Wissenschaft auf Abwegen

I was in Regensburg on Monday and gave a public lecture there on the topic “Wissenschaft auf Abwegen” (“Science Gone Astray”) for a series titled “Was ist Wirklich?” (“What is Real?”). The whole thing is now on YouTube. The video consists of about 30 minutes of talk followed by another hour of discussion. All in German. Only for true fans ;)

Saturday, June 18, 2016

New study finds no sign of entanglement with other universes

Somewhere in the multiverse
you’re having a good day.
The German Autobahn is famous for its lack of speed limits, and yet the greatest speed limit of all comes from a German: Nothing, Albert Einstein taught us, is allowed to travel faster than light. This doesn’t prevent our ideas from racing, but sometimes it prevents us from ticketing them.

If we live in an eternally inflating multiverse that contains a vast number of universes, then the other universes recede from us faster than light. We are hence “causally disconnected” from the rest of the multiverse, separated from the other universes by the ongoing exponential expansion of space, unable to ever make a measurement that could confirm their existence. It is this causal disconnect that has led multiverse critics to complain the idea isn’t within the realm of science.

There are however some situations in which a multiverse can give rise to observable consequences. One is that our universe might in the past have collided with another universe, which would have left a tell-tale signature in the cosmic microwave background. Unfortunately, no evidence for this has been found.

Another proposal for how to test the multiverse is to exploit the subtle non-locality that quantum mechanics gives rise to. If we live in an ensemble of universes, and these universes started out in an entangled quantum state, then we might be able to today detect relics of their past entanglement.

This idea was made concrete by Richard Holman, Laura Mersini-Houghton, and Tomo Takahashi ten years ago. In their model (hep-th/0611223, hep-th/0612142), the original entanglement present among universes in the landscape decays and effectively leaves a correction to the potential that gives rise to inflation in our universe. This corrected potential in return affects observables that we can measure today.

The particular way in which Mersini-Houghton and Holman include entanglement in the landscape isn’t by any means derived from first principles. It is a phenomenological construction that implicitly makes many assumptions about the way quantum effects are realized on the landscape. But, hey, it’s a model that makes predictions, and in theoretical high energy physics today that’s something to be grateful for.

They predicted back then that such an entanglement-corrected cosmology would in particular affect the physics on very large scales, giving rise to a modulation of the power spectrum that makes the cold spot a more likely appearance, a suppression of the power at large angular scales, and an alignment in the directions in which large structures move – the so-called “dark flow.” The tentative evidence for the dark flow, which was predicted in 2008, had gone by 2013. But this disagreement with the data didn’t do much to the popularity of the model in the press.

In a recent paper, William Kinney from the University at Buffalo put to test the multiverse-entanglement with the most recent cosmological data:
    Limits on Entanglement Effects in the String Landscape from Planck and BICEP/Keck Data
    William H. Kinney
    arXiv:1606.00672 [astro-ph.CO]
The brief summary is that not only has he not found any evidence for the entanglement-modification, he has ruled out the formerly proposed model for two general types of inflationary potentials. The first, a generic exponential inflation, is by itself incompatible with the data, and adding the entanglement correction doesn’t help to make it fit. The second, Starobinsky inflation, is by itself a good fit to the data, but the entanglement correction spoils the fit.

Much to my puzzlement, his analysis also shows that some of the predictions of the original model (such as the modulation of the power spectrum) weren’t predictions to begin with, because Kinney in his calculation found that there are choices of parameters in which these effects don’t appear at all.

Leaving aside that this sheds a rather odd light on the original predictions, it’s not even clear exactly what has been ruled out here. What Kinney’s analysis does is to exclude a particular form of the effective potential for inflation (the one with the entanglement modification). This potential is, in the model by Holman and Mersini-Houghton, a function of the original potential (the one without the entanglement correction). Rather than ruling out the entanglement-modification, I can hence interpret this result to mean that the original potential just wasn’t the right one.

Or, in other words, how am I to know that one can’t find some other potential that will fit the data after adding the entanglement correction? The only difficulty I see in this would be to ensure that the uncorrected potential still leads to eternal inflation.

Adding to this tale of an unfalsifiable idea that made predictions which weren’t, one of the authors who proposed the entanglement model, Laura Mersini-Houghton, is apparently quite unhappy with Kinney’s paper and is trying to use an intellectual property claim to get it removed from the arXiv (see comments for details). I will resist the temptation to comment on the matter and simply direct you to the Wikipedia entry on the Streisand Effect. Dear Internet, please do your job.

For better or worse, I have in recent years been dragged into a discussion about what is and isn’t science, which has forced me to think more about the multiverse than I and my infinitely many copies believe is good for their sanity. After this latest episode, the status is that I side with Joe Silk, who captured it well: “[O]ne can always find inflationary models to explain whatever phenomenon is represented by the flavour of the month.”

Monday, June 13, 2016

String phenomenology of the somewhat different kind

[Cat’s cradle. Image Source.]
Ten years ago, I didn’t take the “string wars” seriously. To begin with, referring to such an esoteric conflict as “war” seems disrespectful to millions caught in actual wars. In comparison to their suffering it’s hard to take anything seriously.

Leaving aside my discomfort with the nomenclature, the focus on string theory struck me as odd. String theory as a research area stands out in hep-th and gr-qc merely because of the large number of followers, not by the supposedly controversial research practices. For anybody working in the field it is apparent that string theorists don’t differ in their single-minded focus from physicists in other disciplines. Overspecialization is a common disease of academia, but one that necessarily goes along with division of labor, and often it is an efficient route to fast progress.

No, I thought back then, string theory wasn’t the disease, it was merely a symptom. The underlying disease was one that would surely soon be recognized and addressed: Theoreticians – as scientists whose most-used equipment is their own brain – must be careful to avoid systematic bias introduced by their apparatuses. In other words, scientific communities, and especially those which lack timely feedback by data, need guidelines to avoid social and cognitive biases.

This is so obvious it came as a surprise to me that, in 2006, everybody was hitting on Lee Smolin for pointing out what everybody knew anyway, that string theorists, lacking experimental feedback for decades, had drifted off in a math bubble with questionable relevance for the description of nature. It’s somewhat ironic that, from my personal experience, the situation is actually worse in Loop Quantum Gravity, an approach pioneered, among others, by Lee Smolin. At least the math used by string theorists seems to be good for something. The same cannot be said about LQG.

Ten years later, it is clear that I was wrong in thinking that just drawing attention to the problem would seed a solution. Not only has the situation not improved, it has worsened. We now have some theoretical physicists who argue that we should alter the scientific method so that the success of a theory can be assessed by means other than empirical evidence. This idea, which has sprung up in the philosophy community, isn’t all that bad in principle. In practice, however, it will merely serve to exacerbate social streamlining: If theorists can draw on criteria other than the ability of a theory to explain observations, the first criterion they’ll take into account is aesthetic value, and the second is popularity with their colleagues. Nothing good can come out of this.

And nothing good has come out of it, nothing has changed. The string wars clearly were more interesting for sociologists than they were for physicists. In the last couple of months several articles have appeared which comment on various aspects of this episode, which I’ve read and want to briefly summarize for you.

First, there is
    Collective Belief, Kuhn, and the String Theory Community
    Weatherall, James Owen and Gilbert, Margaret
    philsci-archive:11413
This paper is a very Smolin-centric discussion of whether string theorists are exceptional in their group beliefs. The authors argue that, no, actually string theorists just behave like normal humans and “these features seem unusual to Smolin not because they are actually unusual, but because he occupies an unusual position from which to observe them.” He is unusual, the authors explain, for having worked on string theory, but then deciding to not continue in the field.

It makes sense, the authors write, that people whose well-being to some extent depends on the acceptance by the group will adapt to the group:
“Expressing a contrary view – bucking the consensus – is an offense against the other members of the community… So, irrespective of their personal beliefs, there are pressures on individual scientists to speak in certain ways. Moreover, insofar as individuals are psychologically disposed to avoid cognitive dissonance, the obligation to speak in certain ways can affect one’s personal beliefs so as to bring them into line with the consensus, further suppressing dissent from within the group.”
Furthermore:
“As parties to a joint commitment, members of the string theory community are obligated to act as mouthpieces of their collective belief.”
I actually thought we had known this since 1895, when Le Bon published his “Study of the Popular Mind.”

The authors of the paper then point out that it’s normal for members of a scientific community to not jump ship at the slightest indication of conflicting evidence because often such evidence turns out to be misleading. It didn’t become clear to me what evidence they might be referring to; supposedly it’s non-empirical.

They further argue that a certain disregard for what is happening outside one’s own research area is also normal: “Science is successful in part because of a distinctive kind of focused, collaborative research,” and due to their commitment to the agenda “participants can be expected to resist change with respect to the framework of collective beliefs.”

This is all reasonable enough. Unfortunately, the authors entirely miss the main point, the very reason for the whole debate. The question isn’t whether string theorists’ behavior is that of normal humans – I don’t think that was ever in doubt – but whether that “normal human behavior” is beneficial for science. Scientific research requires, in a very specific sense, non-human behavior. It’s not normal for individuals to disregard subjective assessments and to not pay attention to social pressure. And yet, that is exactly what good science would require.

The second paper is
This paper is basically a summary of the string wars that focuses on the question whether or not string theory can be considered science. This “demarcation problem” is a topic that philosophers and sociologists love to discuss, but to me it really isn’t particularly interesting how you classify some research area; the question is whether it’s good for something. This is a question which should be decided by the community, but as long as decision making is influenced by social pressures and cognitive biases I can’t trust the community judgement.

The article has a lot of fun quotations from very convinced string theorists, for example by David Gross: “String theory is full of qualitative predictions, such as the production of black holes at the LHC.” I’m not sure what the difference is between a qualitative prediction and no prediction, but either way it’s certainly not a prediction that was very successful. Also nice is John Schwarz claiming that “supersymmetry is the major prediction of string theory that could appear at accessible energies” and that “some of these superpartners should be observable at the LHC.” Lots of coulds and shoulds that didn’t quite pan out.

While the article gives a good overview on the opinions about string theory that were voiced during the 2006 controversy, the authors themselves clearly don’t know very well the topic they are writing about. A particularly odd statement that highlights their skewed perspective is: “String theory currently enjoys a privileged status by virtue of being the dominant paradigm within theoretical physics.”

I find it quite annoying how frequently I encounter this extrapolation from a particular research area – be that string theory, supersymmetry, or multiverse cosmology – to all of physics. The vast majority of physicists work in fields like quantum optics, photonics, hadronic and nuclear physics, statistical mechanics, atomic physics, solid state physics, low-temperature physics, plasma physics, astrophysics, condensed matter physics, and so on. They have nothing whatsoever to do with string theory, and certainly would be very surprised to hear that it’s “the dominant paradigm.”

In any case, you might find this paper useful if you didn’t follow the discussion 10 years ago.

Finally, there is this paper

The title of the paper doesn’t explicitly refer to string theory, but most of it is also a discussion of the demarcation problem on the example of arXiv trackbacks. (I suspect this paper is a spin-off of the previous paper.)

ArXiv trackbacks, in case you didn’t know, are links to blogposts that show up on some papers’ arXiv sites when a blogpost has referred to the paper. Exactly which blogs get trackbacks and who makes the decision whether they do is one of the arXiv’s best-kept secrets. Peter Woit’s blog, infamously, doesn’t show up in the arXiv trackbacks, for the rather spurious reason that he supposedly doesn’t count as an “active researcher.” The paper tells the full 2006 story with lots of quotes from bloggers you are probably familiar with.

The arXiv recently conducted a user survey, among other things about the trackback feature, which makes me think they might have some updates planned.

On the question of who counts as a crackpot, the paper (unsurprisingly) doesn’t come to a conclusion other than noting that scientists deal with the issue by stating “we know one when we see one.” I don’t think there can be any other definition than that. To me the notion of “crackpot” is an excellent example of an emergent feature – it’s a demarcation that the community creates during its operation. Any attempt to come up with a definition from first principles is hence doomed to fail.

The rest of the paper is a general discussion of the role of blogs in science communication, but I didn’t find it particularly insightful. The author comes to the (correct) conclusion that blog content turned out not to have such a short life-time as many feared, but otherwise basically just notes that there are as many ways to use blogs as there are bloggers. But then if you are reading this, you already knew that.

One of the main benefits that I see in blogs isn’t mentioned in the paper at all, which is that blogs support communication between scientific communities that are only loosely connected. In my own research area, I read the papers, hear the seminars, and go to conferences, and I therefore know pretty well what is going on – with or without blogs. But I use blogs to keep up to date in adjacent fields, like cosmology, astrophysics and, to a lesser extent, condensed matter physics and quantum optics. For this purpose I find blogs considerably more useful than popular science news, because the latter often doesn’t provide a useful amount of detail and commentary, not to mention that they all tend to latch onto the same three papers that made big unsubstantiated claims.

Don’t worry, I haven’t suddenly become obsessed with string theory. I’ve read through these sociology papers mainly because I cannot not write a few paragraphs about the topic in my book. But I promise that’s it from me about string theory for some while.

Update: Peter Woit has some comments on the trackback issue.

Monday, June 06, 2016

Dear Dr B: Why not string theory?

[I got this question in reply to last week’s book review of Why String Theory? by Joseph Conlon.]

Dear Marco:

Because we might be wasting time and money and, ultimately, risk that progress stalls entirely.

In contrast to many of my colleagues I do not think that trying to find a quantum theory of gravity is an endeavor purely for the sake of knowledge. Instead, it seems likely to me that finding out what are the quantum properties of space and time will further our understanding of quantum theory in general. And since that theory underlies all modern technology, this is research which bears relevance for applications. Not in ten years and not in 50 years, but maybe in 100 or 500 years.

So far, string theory has scored in two areas. First, it has proved interesting for mathematicians. But I’m not one to easily get floored by pretty theorems – I care about math only to the extent that it’s useful to explain the world. Second, string theory has proved useful for pushing ahead with the lesser understood aspects of quantum field theories. This seems a fruitful avenue and is certainly something to continue. However, this has nothing to do with string theory as a theory of quantum gravity and a unification of the fundamental interactions.

As far as quantum gravity is concerned, string theorists’ main argument seems to be “Well, can you come up with something better?” Of course, if someone answers this question with “Yes,” they would never agree that something else might possibly be better. And why would they – there’s no evidence forcing them one way or the other.

I don’t see what one learns from discussing which theory is “better” based on philosophical or aesthetic criteria. That’s why I decided to stay out of this and instead work on quantum gravity phenomenology. As far as testability is concerned all existing approaches to quantum gravity do equally badly, and so I’m equally unconvinced by all of them. It is somewhat of a mystery to me why string theory has become so dominant.

String theorists are very proud of having a microscopic explanation for the black hole entropy. But we don’t know whether that’s actually a correct description of nature, since nobody has ever seen a black hole evaporate. In fact, one could read the firewall problem as a demonstration that this indeed cannot be a correct description of nature. Therefore, this calculation leaves me utterly unimpressed.

But let me be clear here. Nobody (at least nobody whose opinion matters) says that string theory is a research program that should just be discontinued. The question is instead one of balance – does the promise justify the amount of funding spent on it? And the answer to this question is almost certainly no.

The reason is that academia is currently organized so that it invites communal reinforcement, prevents researchers from leaving fields whose promise is dwindling, and supports a rich-get-richer trend. That institutional assessments use the quantity of papers and citation counts as a proxy for quality creates a bonus for fields in which papers can be cranked out quickly. Hence it isn’t surprising that an area whose mathematics its own practitioners frequently describe as “rich” would flourish. What does mathematical “richness” tell us about the use of a theory in the description of nature? I am not aware of any known relation.

In his book Why String Theory?, Conlon tells the history of the discipline from a string theorist’s perspective. As a counterpoint, let me tell you how a cynical outsider might tell this story:

String theory was originally conceived as a theory of the strong nuclear force, but it was soon discovered that quantum chromodynamics was more up to the task. After noting that string theory contains a particle that could be identified as the graviton, it was reconsidered as a theory of quantum gravity.

It turned out however that string theory only makes sense in a 25-dimensional space. To make that compatible with observations, 22 of the dimensions were moved out of sight by rolling them up (compactifying) them to a radius so small they couldn’t be observationally probed.

Next it was noted that the theory also needs supersymmetry. This brings down the number of space dimensions to 9, but also brings a new problem: The world, unfortunately, doesn’t seem to be supersymmetric. Hence, it was postulated that supersymmetry is broken at an energy scale so high we wouldn’t see the symmetry. Even with that problem fixed, however, it was quickly noticed that moving the superpartners out of direct reach would still induce flavor changing neutral currents that, among other things, would lead to proton decay and so be in conflict with observation. Thus, theorists invented R-parity to fix that problem.

The next problem that appeared was that the cosmological constant turned out to be positive instead of zero or negative. While a negative cosmological constant would have been easy to accommodate, string theorists didn’t know what to do with a positive one. But it only took some years to come up with an idea to make that happen too.

String theory was hoped to be a unique completion of the standard model including general relativity. Instead it slowly became clear that there is a huge number of different ways to get rid of the additional dimensions, each of which leads to a different theory at low energies. String theorists are now trying to deal with that problem by inventing some probability measure according to which the standard model is at least a probable occurrence in string theory.

So, you asked, why not string theory? Because it’s an approach that has been fixed over and over again to make it compatible with conflicting observations. Every time that’s been done, string theorists became more convinced of their ideas. And every time they did this, I became more convinced they are merely building a mathematical toy universe.

String theorists of course deny that they are influenced by anything but objective assessment. One noteworthy exception is Joe Polchinski who has considered that social effects play a role, but just came to the conclusion that they aren’t relevant. I think it speaks for his intellectual sincerity that he at least considered it.

At the Munich workshop last December, David Gross (in an exchange with Carlo Rovelli) explained that funding decisions have no influence on whether theoretical physicists choose to work in one field or another. Well, that’s easy to say if you’re a Nobel Prize winner.

Conlon in his book provides “evidence” that social bias plays no role by explaining that there was only one string theorist on a panel that (positively) evaluated one of his grants. To begin with, anecdotes can’t replace data, and there is ample evidence that social biases are common human traits, so by default scientists should be assumed susceptible. But even taking his anecdote at face value, I’m not sure why Conlon thinks leaving decisions to non-experts limits bias. My expectation would be that it amplifies bias, because it requires drawing on simplified criteria, like the number of papers published and how often they’ve been cited. And what does that depend on? On how many people there are in the field and how many peers favorably reviewed papers on the topic of your work.

I am listing these examples to demonstrate that it is quite common for theoretical physicists (not string theorists in particular) to dismiss the mere possibility that social dynamics influences research decisions.

How large a role do social dynamics and cognitive biases play, and how much do they slow down progress on the foundations of physics? I can’t tell you. But even though I can’t tell you how much faster progress could be, I am sure it’s being slowed down. I can tell that in the same way that I can tell you diesel in Germany is sold under market value even though I don’t know the market value: I know it because it’s subsidized. And in the same way I can tell that string theory is overpopulated and its promise is overestimated, because it’s an idea that benefits from biases which humans demonstrably possess. But I can’t tell you what its real value would be.

The reproduction crisis in the life sciences and psychology has spurred a debate about better measures of statistical significance. Experimentalists go to great lengths to put into place all kinds of standardized procedures so as not to draw the wrong conclusions from what their apparatuses measure. In theory development, we have our own crisis, but nobody talks about it. The apparatuses that we use are our own brains, and the biases we should guard against are cognitive and social biases, communal reinforcement, the sunk cost fallacy, wishful thinking, and status-quo bias, to mention just the most common ones. These however are presently entirely unaccounted for. Is this the reason why string theory has gathered so many followers?

Some days I side with Polchinski and Gross and don’t think it makes that much of a difference. It really is an interesting topic and it’s promising. On other days I think we’ve wasted 30 years studying bizarre aspects of a theory that doesn’t bring us any closer to understanding quantum gravity, and it’s nothing but an empty bubble of disappointed expectations. Most days I have to admit I just don’t know.

Why not string theory? Because enough is enough.

Thanks for an interesting question.

Monday, May 30, 2016

Book Review: “Why String Theory?” by Joseph Conlon

Why String Theory?
By Joseph Conlon
CRC Press (November 24, 2015)

I was sure I’d hate the book. Let me explain.

I often hear people speak about the “marketplace of ideas” as if science was a trade show where researchers sell their work. But science isn’t about manufacturing and selling products, it’s about understanding nature. And the sine qua non for evaluating the promise of an idea is objectivity.

In my mind, therefore, the absolutely last thing that scientists should engage in is marketing. Marketing, advertising, and product promotion are commercial tactics with the very purpose of affecting their targets’ objectivity. These tactics shouldn’t have any place in science.

Consequently, I have mixed feelings about scientists who attempt to convince the public that their research area is promising, with the implicit or explicit goal of securing funding and attracting students. It’s not that I have a problem with scientists who write for the public in general – I have a problem with scientists who pass off their personal opinion as fact, often supporting their conviction by quoting the number of people who share their beliefs.

In the last two decades this procedure has created an absolutely astonishing amount of so-called “science” books about string theory, supersymmetry, the multiverse and other fantasies (note the carefully chosen placement of commas), with no other purpose than asking the reader to please continue funding fruitless avenues of research by appealing to lofty ideals like elegance and beauty.

And indeed, Conlon starts by dedicating the book to “the taxpayers of the UK without whom this book could never have been written” and then states explicitly that his goal is to win the favor of taxpayers:
“I want to explain, to my wonderful fellow citizens who support scientific research through their taxes, why string theory is so popular, and why, despite the lack of direct empirical support, it has attained the level of prominence it has.”

That’s on page six. The prospect of reading 250 pages filled with a string theorist’s attempt to lick the butts of his “wonderful fellow citizens” made me feel somewhat nauseous. I put the book aside and instead read Sean Carroll’s new book. After that I felt slightly better and made a second attempt at Why String Theory?

Once I got past the first chapter, however, the book got markedly better. Conlon keeps the introduction to basic physics (relativity and quantum theory) to an absolute minimum. After this he lays out the history of string theory, with its many twists and turns, and explains how much string theorists’ understanding of the approach has changed over the decades.

He then gets to the reasons why people work on string theory. The first reason he lists is a chapter titled “Direct Experimental Evidence for String Theory” which consists of the single sentence “There is no direct experimental evidence for string theory.” At first, I thought that he might have wanted to point out that string theorists work on it despite the lack of evidence, but that the previous paragraph accidentally made it look as if he, rather cynically, wanted to say that the absence of evidence is the main reason they work on it.

But actually he returns to this point later in the book (in section 10.5), where he addresses “objections made concerning connection to experiment” and points out very clearly that even though these are prevalent, he thinks these deserve little or no sympathy. This makes me think, maybe he indeed wanted to say that he suspects the main reason so many people work on string theory is because there’s no evidence for it. Especially the objection that it is “too early” to seek experimental support for string theory because the theory is not fully understood he responds to with:
“The problem with this objection is that it is a time-invariant statement. It was made thirty years ago, it was made twenty years ago, it was made a decade ago, and it is made now. It is also, by observation, an objection made by those who are uninterested in observation. Muscles that are never used waste away. It is like never commencing a journey because one is always waiting for better modes of transportation, and in the end produces a community of scientists where the language of measurement and experiment is one that may be read but cannot be spoken.”
Conlon writes that he himself isn’t particularly interested in quantum gravity. His own research is on finding evidence for moduli fields in cosmology, and he has a chapter about this. He lists the usual arguments in favor of string theory: that it connects well to both general relativity and the standard model, that it’s been helpful in deriving some math theorems, and that now there is the AdS/CFT duality, by help of which one might maybe one day be able to describe some aspect of the real world.

He somehow forgets to mention that the AdS/CFT predictions for heavy ion collisions at the LHC turned out to be dramatically wrong, and by now very few people think that the duality is of much use in this area. I actually suspect he just plainly didn’t know this. It’s not something that string theorists like to talk about. This omission is my major point of criticism. The rest of the book is a quite balanced account, and he refrains from making cheap arguments of the type that the theory must be right because a thousand people with brains can’t be mistaken. Conlon even has a subsection addressing the Witten cult, which is rather scathing, and a hit on Arkani-Hamed gathering 5000 citations and a $3 million prize for proposing large extra dimensions (an idea that was quietly buried after the LHC ruled it out).

At the end of the book Conlon has a chapter addressing explicit criticisms – he manages to remain remarkably neutral and polite – and a “fun” chapter in which he lists different styles of doing research. Maybe there’s something wrong with my sense of humor but I didn’t find it much fun. It’s more like he is converting Kuhn’s phases of “normal science” and “revolution” into personal profiles, trying to reassure students that they don’t need to quantize gravity to get tenure.

Leaving aside Conlon’s fondness for mixing up sometimes rather odd metaphors (“quantum mechanics is a jealous theory... it has spread through the population of scientific theories like a successful mutation” – “The anthropic landscape... represents incontinence of speculation joined to constipation of experiment.” – “quantum field theorists became drunk on the new wine of string theory”) and an overuse of unnecessary loanwords (in pectore, pons asinorum, affaire de coeur, lebensraum, mirabile dictu, to mention just a few), the book is reasonably well written. The reference list isn’t too extensive. That is to say, in the couple of cases in which I wanted to look up a reference it wasn’t listed, and in the one case I wanted to check a quotation it didn’t have an original source.

Altogether, Why String Theory? gives the reader a mostly fair and balanced account of string theory, and a pretty good impression of just how much the field has changed since Brian Greene’s Elegant Universe. I looked up something in Greene’s book the other day, and found him complaining that the standard model is “too flexible.” Oh, yes, things have changed a lot since. I doubt it’s a complaint any string theorist would dare raise today.

In the end, I didn’t hate Conlon’s book. Maybe I’m getting older, or maybe I’m getting wiser, or maybe I’m just not capable of hating books.

[Disclaimer: Free review copy.]


Win a copy of Why String Theory by Joseph Conlon!

I had bought the book before I was sent the review copy, and so I have a second copy of the book, entirely new and untouched. You can win the book if you are the first to answer this question correctly: Who was second author on the first paper to point out that some types of neutrino detectors might also be used to directly detect certain candidate particles for dark matter? Submit answer in the comments, do not send an email. The time-stamp of the comment counts. (Please only submit an answer if you are willing to send me a postal address to which the book can be shipped.)

Update: The book is gone!

Away Note

I have a trip coming up to Helsinki. After this I'll be tied up in family business, and then my husband goes on a business trip and I have the kids alone. Then Kindergarten will be closed for a day (forgot why, I'm sure they must have some reason), I have to deal with an ant infestation in our apartment, and more family business follows. In summary: busy times.

I have a book review to appear on this blog later today, but after this you won't hear much from me for a week or two. Keep in mind that since I have comment moderation on, it might take some while for your comment to appear when I am traveling. With thanks for your understanding, here's a random cute pic of Gloria :)


Thursday, May 26, 2016

How can we test quantum gravity?

If you have good eyes, the smallest objects you can make out are about a tenth of a millimeter, roughly the width of a human hair. Add technology, and the smallest structures we have measured so far are approximately 10⁻¹⁹ m, that’s the wavelength of the protons collided at the LHC. It has taken us about 400 years from the invention of the microscope to the construction of the LHC – 400 years to cross 15 orders of magnitude.
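For the curious, the 10⁻¹⁹ m figure is just the de Broglie relation at work: the distance scale resolved by a collision with energy transfer of order a TeV is roughly

\[ \lambda \approx \frac{\hbar c}{E} \approx \frac{2\times 10^{-16}\,{\rm GeV\cdot m}}{10^{3}\,{\rm GeV}} \approx 2\times 10^{-19}\,{\rm m}\,, \]

give or take, depending on how much of the protons’ energy actually goes into a single parton collision.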

Quantum effects of gravity are estimated to become relevant on distance scales of approximately 10⁻³⁵ m, known as the Planck length. That’s another 16 orders of magnitude to go. It makes you wonder whether it’s possible at all, or whether all the effort to find a quantum theory of gravity is just idle speculation.
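The Planck length is the only length one can build from the constants governing quantum theory (ħ), gravity (G), and relativity (c):

\[ \ell_{\rm P} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,{\rm m}\,, \]

which is why it is generally taken as the scale at which quantum gravitational effects should become strong.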

I am optimistic. The history of science is full of people who thought things to be impossible that have meanwhile been done: measuring the deflection of light by the sun, heavier-than-air flying machines, detecting gravitational waves. Hence, I don’t think it’s impossible to experimentally test quantum gravity. Maybe it will take some decades, or maybe it will take some centuries – but if only we keep pushing, one day we will measure quantum gravitational effects. Not by directly crossing these 16 orders of magnitude, I believe, but instead by indirect detections at lower energies.

From nothing comes nothing, though. If we don’t think about what quantum gravitational effects could look like and where they might show up, we’ll certainly never find them. But fueling my optimism is the steadily increasing interest in the phenomenology of quantum gravity, the research area dedicated to studying how to best find evidence for quantum gravitational effects.

Since there isn’t any one agreed-upon theory for quantum gravity, existing efforts to find observable phenomena focus on finding ways to test general features of the theory, properties that have been found in several different approaches to quantum gravity. Quantum fluctuations of space-time, for example, or the presence of a “minimal length” that would impose a fundamental resolution limit. Such effects can be quantified in mathematical models, which can then be used to estimate the strength of the effects and thus to find out which experiments are most promising.
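One much-used example of such a model is the generalized uncertainty principle, in which the minimal length enters as a correction to the ordinary Heisenberg relation; schematically, with β a free dimensionless parameter,

\[ \Delta x\, \Delta p \;\gtrsim\; \frac{\hbar}{2}\left[1 + \beta\, \ell_{\rm P}^2 \left(\frac{\Delta p}{\hbar}\right)^2\right], \]

so that Δx can never shrink below roughly √β ℓ_P, however large Δp becomes. Parameterizations of this kind are what one uses to estimate how large the effects could at most be in a given experiment, and thus which bounds an experiment can reach.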

Testing quantum gravity has long been thought to be out of the reach of experiments, based on estimates that show it would take a collider the size of the Milky Way to accelerate protons enough to produce a measurable amount of gravitons (the quanta of the gravitational field), or that we would need a detector the size of planet Jupiter to measure a graviton produced elsewhere. Not impossible, but clearly not something that will happen in my lifetime.

One testable consequence of quantum gravity might be, for example, the violation of the symmetry underlying special and general relativity, known as Lorentz invariance. Interestingly, it turns out that violations of Lorentz invariance are not necessarily small even if they are created at distances too short to be measurable. Instead, these symmetry violations seep into many particle reactions at accessible energies, and these have been tested to extremely high accuracy. No evidence for violations of Lorentz invariance has been found. This might sound like not much, but knowing that this symmetry has to be respected by quantum gravity is an extremely useful guide in the development of the theory.

Other testable consequences might be in the weak-field limit of quantum gravity. In the early universe, quantum fluctuations of space-time would have led to temperature fluctuations of matter. And these temperature fluctuations are still observable today in the Cosmic Microwave Background (CMB). The imprint of such “primordial gravitational waves” on the CMB has not yet been measured (LIGO is not sensitive to them), but it is not so far off measurement precision.

A lot of experiments are currently searching for this signal, including BICEP and Planck. This raises the question whether it is possible to infer from the primordial gravitational waves that gravity must have been quantized in the early universe. Answering this question is one of the presently most active areas in quantum gravity phenomenology.

Also testing the weak-field limit of quantum gravity are attempts to bring objects into quantum superpositions that are much heavier than elementary particles. This makes the gravitational field stronger and potentially offers the chance to probe its quantum behavior. The heaviest objects that have so far been brought into superposition weigh about a nanogram, which is still several orders of magnitude too little to measure the gravitational field. But a group in Vienna recently proposed an experimental scheme that would allow measuring the gravitational field more precisely than ever before. We are slowly closing in on the quantum gravitational range.

Such arguments however merely concern the direct detection of gravitons, and that isn’t the only manifestation of quantum gravitational effects. There are various other observable consequences that quantum gravity could give rise to, some of which have already been looked for, and others that we plan to look for. So far, we have only negative results. But even negative results are valuable because they tell us what properties the sought-for theory cannot have.

[From arXiv:1602.07539, for details, see here]

The weak-field limit would prove that gravity really is quantized and finally deliver the much-needed experimental evidence, confirming that we’re not just doing philosophy. However, for most of us in the field the strong gravity limit is more interesting. By strong gravity limit I mean Planckian curvature, which (not counting those galaxy-sized colliders) can only be found close to the center of black holes and towards the big bang.

(Note that in astrophysics, “strong gravity” is sometimes used to mean something different, referring to large deviations from Newtonian gravity which can be found, eg, around the horizon of black holes. In comparison to the Planckian curvature required for strong quantum gravitational effects, this is still exceedingly weak.)

Strong quantum gravitational effects could also have left an imprint in the cosmic microwave background, notably in the type of correlations that can be found in the fluctuations. There are various models of string cosmology and loop quantum cosmology that have explored the observational consequences, and proposed experiments like EUCLID and PRISM might find first hints. Also the upcoming experiments to test the 21-cm hydrogen absorption could harbor information about quantum gravity.

A somewhat more speculative idea is based on a recent finding according to which the gravitational collapse of matter might not always form a black hole, but could escape the formation of a horizon. If that is so, then the remaining object would give us an open view onto a region with quantum gravitational effects. It isn’t yet clear exactly what signals we would have to look for to find such an object, but this is a promising research direction because it could give us direct access to strong space-time curvature.

There are many other ideas out there. A large class of models for example deals with the possibility that quantum gravitational effects endow space-time with the properties of a medium. This can lead to the dispersion of light (colors running apart), birefringence (polarizations running apart), decoherence (preventing interference), or an opacity of otherwise empty space. More speculative ideas include Craig Hogan’s quest for holographic noise, Bekenstein’s table-top experiment that searches for Planck-length discreteness, or searches for evidence of a minimal length in tritium decay. Some general properties that have recently been found and that we yet have to find good experimental tests for are geometric phase transitions in the early universe, or dimensional reduction.
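To give one concrete example of how such medium-like effects are commonly parametrized: an energy-dependent speed of light (dispersion) is often written, to first order, as

\[ v(E) \approx c\left(1 - \xi\,\frac{E}{E_{\rm P}}\right), \]

with E_P ≈ 10¹⁹ GeV the Planck energy and ξ a dimensionless parameter expected to be of order one if the effect exists at leading order. The factor E/E_P is tiny, but over cosmological distances – gamma-ray bursts are the standard source – the arrival-time differences between photons of different energies can add up to something measurable, which is how the current bounds on ξ have been obtained.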

Without doubt, there is much that remains to be done. But we’re on the way.

[This post previously appeared on Starts With A Bang.]

Thursday, May 19, 2016

The Holy Grail of Crackpot Filtering: How the arXiv decides what’s science – and what’s not.

Where do we draw the boundary between science and pseudoscience? It’s a question philosophers have debated for as long as there’s been science – and last time I looked they hadn’t made much progress. When you ask a sociologist, their answer is normally a variant of: science is what scientists do. So what do scientists do?

You might have heard that scientists use what’s called the scientific method, a virtuous cycle of generating and testing hypotheses which supposedly separates the good ideas from the bad ones. But that’s only part of the story because it doesn’t tell you where the hypotheses come from to begin with.

Science doesn’t operate with randomly generated hypotheses for the same reason natural selection doesn’t work with randomly generated genetic codes: it would be highly inefficient and any attempt to optimize the outcome would be doomed to fail. What we do instead is heavily filtering hypotheses, and then we consider only those which are small mutations of ideas that have previously worked. Scientists like to be surprised, but not too much.

Indeed, if you look at the scientific enterprise today, almost all of its institutionalized procedures are methods not for testing hypotheses, but for filtering hypotheses: Degrees, peer reviews, scientific guidelines, reproduction studies, measures for statistical significance, and community quality standards. Even the use of personal recommendations works to that end. In theoretical physics in particular the prevailing quality standard is that theories need to be formulated in mathematical terms. All these are requirements which have evolved over the last two centuries – and they have proved to work very well. It’s only smart to use them.

But the business of hypothesis filtering is a tricky one and it doesn’t proceed by written rules. It is a method that has developed through social demarcation, and as such it has its pitfalls. Humans are prone to social biases, and every once in a while an idea gets dismissed not because it’s bad, but because it lacks community support. And there is no telling how often this happens, because these are the stories we never get to hear.

It isn’t news that scientists lock shoulders to defend their territory and use technical terms like fraternities use secret handshakes. It thus shouldn’t come as a surprise that an electronic archive which caters to the scientific community would develop software to emulate the community’s filters. And that is, in a nutshell, basically what the arXiv is doing.

In an interesting recent paper, Luis Reyes-Galindo had a look at the arXiv moderators and their reliance on automated filters:


In the attempt to develop an algorithm that would sort papers into arXiv categories automatically, thereby helping arXiv moderators decide when a submission needs to be reclassified, it turned out that papers which scientists would mark down as “crackpottery” showed up as not classifiable or stood out by language significantly different from that in the published literature. According to Paul Ginsparg, who developed the arXiv more than 20 years ago:
“The first thing I noticed was that every once in a while the classifier would spit something out as ‘I don't know what category this is’ and you’d look at it and it would be what we’re calling this fringe stuff. That quite surprised me. How can this classifier that was tuned to figure out category be seemingly detecting quality?

“[Outliers] also show up in the stop word distribution, even if the stop words are just catching the style and not the content! They’re writing in a style which is deviating, in a way. [...]

“What it’s saying is that people who go through a certain training and who read these articles and who write these articles learn to write in a very specific language. This language, this mode of writing and the frequency with which they use terms and in conjunctions and all of the rest is very characteristic to people who have a certain training. The people from outside that community are just not emulating that. They don’t come from the same training and so this thing shows up in ways you wouldn’t necessarily guess. They’re combining two willy-nilly subjects from different fields and so that gets spit out.”
It doesn’t surprise me much – you can see this happening in comment sections all over the place: The “insiders” can immediately tell who is an “outsider.” Often it doesn’t take more than a sentence or two, an odd expression, a term used in the wrong context, a phrase that nobody in the field would ever use. It is only consequential that with smart software you can tell insiders from outsiders even more efficiently than humans. According to Ginsparg:
“We've actually had submissions to arXiv that are not spotted by the moderators but are spotted by the automated programme [...] All I was trying to do is build a simple text classifier and inadvertently I built what I call The Holy Grail of Crackpot Filtering.”
Trying to speak in the code of a group you haven’t been part of at least for some time is pretty much impossible, much like it’s impossible to fake the accent of a city you haven’t lived in for some while. Such in-group and out-group demarcation is subject of much study in sociology, not specifically the sociology of science, but generally. Scientists are human and of course in-group and out-group behavior also shapes their profession, even though they like to deny it as if they were superhuman think-machines.

What is interesting about this paper is that, for the first time, it openly discusses how the process of filtering happens. It’s software that literally encodes the hidden rules that physicists use to sort out cranks. From what I can tell, the arXiv filters work reasonably well, otherwise there would be much complaint in the community. Indeed, the vast majority of researchers in the field are quite satisfied with what the arXiv is doing, meaning the arXiv filters match their own judgement.

There are exceptions of course. I have heard some stories of people who were working on new approaches that fell through the cracks and were flagged as potential crackpottery. The cases that I know of could eventually be resolved, but that might tell you more about the people I know than about the way such issues typically end.

Personally, I have never had a problem with the arXiv moderation. I had a paper reclassified from gen-ph to gr-qc once by a well-meaning moderator, which is how I learned that gen-ph is the dump for borderline crackpottery. (How would I have known? I don’t read gen-ph. I was just assuming someone reads it.)

I don’t so much have an issue with what gets filtered on the arXiv; what bothers me much more is what does not get filtered and hence, implicitly, gets the community’s approval. I am very sympathetic to the concerns of John The-End-Of-Science Horgan that scientists don’t do enough to sweep their own doorstep. There is no “invisible hand” that corrects scientists if they go astray. We have to do this ourselves. In-group behavior can greatly misdirect science because, given sufficiently many people, even fruitless research can become self-sustaining. No filter that is derived from the community’s own judgement will do anything about this.

It’s about time that scientists start paying attention to social behavior in their community. It can, and sometimes does, affect objective judgement. Ignoring or flagging what doesn’t fit into pre-existing categories is one such social problem that can stand in the way of progress.

In a 2013 paper published in Science, a group of researchers quantified the likelihood of combinations of topics in citation lists and studied the cross-correlation with the probability of the paper becoming a “hit” (meaning in the upper 5th percentile of citation scores). They found that having previously unlikely combinations in the quoted literature is positively correlated with the later impact of a paper. They also note that the fraction of papers with such ‘unconventional’ combinations decreased from 3.54% in the 1980s to 2.67% in the 1990s, “indicating a persistent and prominent tendency for high conventionality.”

Conventional science isn’t bad science. But we also need unconventional science, and we should be careful to not assign the label “crackpottery” too quickly. If science is what scientists do, scientists should pay some attention to the science of what they do.

Sunday, May 15, 2016

Dear Dr B: If photons have a mass, would this mean special relativity is no longer valid?

Einstein and Lorentz.
[Image: Wikipedia]
“[If photons have a restmass] would that mean the whole business of the special theory of relativity being derived from the idea that light has to go at a particular velocity in order for it to exist/Maxwell’s identification of e/m waves as light because they would have to go at the appropriate velocity is no longer valid?”

(This question came up in the discussion of a recent proposal according to which photons with a tiny restmass might cause an effect similar to the cosmological constant.)

Dear Brian,

The short answer to your question is “No.” If photons had a restmass, special relativity would still be as valid as it’s always been.

The longer answer is that the invariance of the speed of light features prominently in the popular explanations of special relativity for historic reasons, not for technical reasons. Einstein was led to special relativity by contemplating what it would be like to travel with light, and then tried to find a way to reconcile an observer’s motion with the invariance of the speed of light. But the derivation of special relativity is much more general than that, and it is unnecessary to postulate that the speed of light is invariant.

Special relativity is really just physics in Minkowski space, that is, the 4-dimensional space-time you obtain after promoting time from a parameter to a coordinate. Einstein wanted the laws of physics to be the same for all inertial observers in Minkowski-space, ie observers moving at constant velocity. If you translate this requirement into mathematics, you are led to ask for the symmetry transformations in Minkowski-space. These transformations form a group – the Poincaré-group – from which you can read off all the odd things you have heard of: time-dilatation, length-contraction, relativistic mass, and so on.

The Poincaré-group itself has two subgroups. One contains just translations in space and time. This tells you that if you have an infinitely extended and unchanging space then it doesn’t matter where or when you do your experiment, the outcome will be the same. The remaining part of the Poincaré-group is the Lorentz-group. The Lorentz-group contains rotations – this tells you it doesn’t matter in which direction you turn, the laws of nature will still be the same. Besides the rotations, the Lorentz-group contains boosts, which are basically rotations between space and time. Invariance under boosts tells you that it doesn’t matter at which velocity you move, the laws of nature will remain the same. It’s the boosts where all the special relativistic fun goes on.

Deriving the Lorentz-group, if you know how to do it, is a three-liner, and I assure you it has absolutely nothing to do with rocket ships and lasers and so on. It is merely based on the requirement that the metric of Minkowski-space has to remain invariant. Carry through with the math and you’ll find that the boosts depend on a free constant with the dimension of a speed. You can further show that this constant is the speed of massless particles.
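
For those who want to see it spelled out, a sketch of the result in standard textbook notation (not used elsewhere in this post) is

$$x' = \gamma\,(x - v t), \qquad t' = \gamma\left(t - \frac{v\,x}{c_\ast^{\,2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c_\ast^{\,2}}},$$

where $c_\ast$ is the free constant with the dimension of a speed. One checks in a line that this leaves $c_\ast^{\,2} t^2 - x^2$ unchanged; that $c_\ast$ equals the speed of massless particles is a further step, not an input.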

Hence, if photons are massless, then the constant in the Lorentz-transformation is the speed of light. If photons are not massless, then the constant in the Lorentz-transformation is still there, but not identical to the speed of light. We already know however that these constants must be identical to very good precision, which is the same as saying the mass of photons must be very small.
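
A quick way to see the connection, using nothing but the usual relativistic energy of a massive particle, is

$$\frac{v}{c_\ast} = \sqrt{1 - \frac{m^2 c_\ast^{\,4}}{E^2}} \;\approx\; 1 - \frac{m^2 c_\ast^{\,4}}{2E^2} \qquad (E \gg m c_\ast^{\,2}),$$

so a photon with mass m would travel slightly below the invariant speed $c_\ast$, by an amount that shrinks with energy. Measurements showing that light of different energies travels at very nearly the same speed therefore translate into upper bounds on the photon mass.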

Giving a mass to photons is unappealing not because it violates special relativity – it doesn’t – but because it violates gauge-invariance, the most cherished principle underlying the standard model. But that’s a different story and shall be told another time.

Thanks for an interesting question!

Monday, May 09, 2016

Book review: “The Big Picture” by Sean Carroll

The Big Picture: On the Origins of Life, Meaning, and the Universe Itself
Sean Carroll
Dutton (May 10, 2016)

Among the scientific disciplines, physics is unique: Concerned with the most fundamental entities, its laws must be respected in all other areas of science. While there are many emergent laws which are interesting in their own right – from neurobiology to sociology – there is no doubt they all have to be compatible with energy conservation. And the second law of thermodynamics. And quantum mechanics. And the standard model better be consistent with whatever you think are the neurological processes that make you “you.” There’s no avoiding physics.

In his new book, The Big Picture, Sean explains just why you can’t ignore physics when you talk about extrasensory perception, consciousness, god, afterlife, free will, or morals. In the first part, Sean lays out what, to our best current knowledge, the fundamental laws of nature are, and what their relevance is for all other emergent laws. In the later parts he then goes through the consequences that follow from this.

On the way from quantum field theory to morals, he covers what science has to say about complexity, the arrow of time, and the origin of life. (If you attended the 2011 FQXi conference, parts will sound very familiar.) Then, towards the end of the book, he derives advice from his physics-based philosophy – which he calls “poetic naturalism” – for finding “meaning” in life and finding a “good” way to organize our living together (scare quotes because these words might not mean what you think they mean). His arguments rely heavily on Bayesian reasoning, so you better be prepared to update your belief system while reading.

The Big Picture is, above everything, a courageous book – and an overdue one. I have had many arguments about exactly the issues that Sean addresses in his book – from “qualia” to “downwards causation” – but I neither have the patience nor the interest to talk people out of their cherished delusions. I’m an atheist primarily because I think religion would be wasting my time, time that I’d rather spend on something more insightful. Trying to convince people that their beliefs are inconsistent would also be wasting my time, hence I don’t. But if I did, I almost certainly wouldn’t be able to remain as infallibly polite as Sean.

So, I am super happy about this book. Because now, whenever someone brings up Mary The Confused Color Scientist who can’t tell sensory perception from knowledge about that perception, I’ll just – politely – tell them to read Sean’s book. The best thing I learned from The Big Picture is that apparently Frank Jackson, the philosopher who came up with The Color Scientist, eventually himself conceded that the argument was wrong. The world of philosophy indeed sometimes moves! Time then, to stop talking about qualia.

I really wish I had found something to disagree with in Sean’s book, but the only quibble I have (you won’t be surprised to hear) is that I think what Sean-The-Compatibilist calls “free will” doesn’t deserve being called “free will.” Using the adjective “free” strongly suggests an independence from the underlying microscopic laws, and hence a case of “strong emergence” – which is an idea that should go into the same bin as qualia. I also agree with Sean however that fighting about the use of words is moot.

(The other thing I’m happy about is that, leaving aside the standard model and general relativity, Sean’s book has almost zero overlap with the book I’m writing. *wipes_sweat_off_forehead*. Could you all please stop writing books until I’m done, it makes me nervous.)

In any case, it shouldn’t come as a surprise that I agree so wholeheartedly with Sean because I think everybody who open-mindedly looks at the evidence – ie all we currently know about the laws of nature – must come to the same conclusions. The main obstacle in conveying this message is that most people without training in particle physics don’t understand effective field theory, and consequently don’t see what this implies for the emergence of higher level laws. Sean does a great job overcoming this obstacle.

I wish I could make myself believe that after the publication of Sean’s book I’ll never again have to endure someone insisting there must be something about their experience that can’t be described by a handful of elementary particles. But I’m not very good at making myself believe in exceedingly unlikely scenarios, whether that’s the existence of an omniscient god or the ability of humans to agree on how unlikely this existence is. At the very least however, The Big Picture should make clear that physicists aren’t just arrogant when they say their work reveals insights that reach far beyond the boundaries of their discipline. Physics indeed has an exceptional status among the sciences.

[Disclaimer: Free review copy.]

Tuesday, May 03, 2016

Experimental Search for Quantum Gravity 2016

I am happy to announce that this year we will run the 5th international conference on Experimental Search for Quantum Gravity here in Frankfurt, Germany. The meeting will take place Sep 19-23, 2016.

We have a (quite preliminary) website up here. Application is now open and will run through June 1st. If you're a student or young postdoc with an interest in the phenomenology of quantum gravity, this conference might be a good starting point and I encourage you to apply. We cannot afford to hand out travel grants, but we will waive the conference fee for young participants (young in terms of PhD age, not biological age).

The location of the meeting will be at my new workplace, the Frankfurt Institute for Advanced Studies, FIAS for short. When it comes to technical support, they seem considerably better organized (not to mention staffed) than my previous institution. At this stage I am thus tentatively hopeful that this year we'll both record and livestream the talks. So stay tuned, there's more to come.

Wednesday, April 27, 2016

If you fall into a black hole

If you fall into a black hole, you’ll die. That much is pretty sure. But what happens before that?

The gravitational pull of a black hole depends on its mass. At a fixed distance from the center, it isn’t any stronger or weaker than that of a star with the same mass. The difference is that, since a black hole doesn’t have a surface, the gravitational pull can continue to increase as you approach the center.

The gravitational pull itself isn’t the problem; the problem is the change in the pull, the tidal force. It will stretch any extended object, in a process with the technical name “spaghettification.” That’s what will eventually kill you. Whether this happens before or after you cross the horizon depends, again, on the mass of the black hole. The larger the mass, the smaller the space-time curvature at the horizon, and the smaller the tidal force.

Leaving aside lots of hot gas and swirling particles, you have a good chance of surviving the crossing of the horizon of a supermassive black hole, like the one in the center of our galaxy. You would, however, probably be torn apart before crossing the horizon of a solar-mass black hole.
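
If you want to put numbers to this, here is a back-of-the-envelope estimate in Python. It uses the Newtonian tidal gradient evaluated at the Schwarzschild radius, which is only an order-of-magnitude stand-in for the proper general-relativistic calculation, but it gets the scaling with the mass right:

```python
# Back-of-the-envelope: tidal acceleration across a ~2 m tall observer at the
# horizon, using the Newtonian gradient 2*G*M*L/r^3 evaluated at the
# Schwarzschild radius r_s = 2*G*M/c^2.
G = 6.674e-11      # Newton's constant, m^3 / (kg s^2)
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
L = 2.0            # height of the observer, m

def tidal_at_horizon(M):
    r_s = 2 * G * M / c**2           # Schwarzschild radius in meters
    return 2 * G * M * L / r_s**3    # head-to-toe acceleration difference

for name, M in [("solar-mass black hole", M_sun),
                ("Sagittarius A*, ~4 million solar masses", 4e6 * M_sun)]:
    print(f"{name}: about {tidal_at_horizon(M):.0e} m/s^2 across {L} m")
```

For a solar-mass black hole this gives some 10^10 m/s² between head and toe, which nothing survives; for Sagittarius A* it is about a millimeter per second squared, which you wouldn’t even notice.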

It takes you a finite time to reach the horizon of a black hole. For an outside observer however, you seem to be moving slower and slower and will never quite reach the black hole, due to the (technically infinitely large) gravitational redshift. If you take into account that black holes evaporate, it doesn’t quite take forever, and your friends will eventually see you vanishing. It might just take a few hundred billion years.

In an article that recently appeared on “Quick And Dirty Tips” (featured by SciAm), Everyday Einstein Sabrina Stierwalt explains:
“As you approach a black hole, you do not notice a change in time as you experience it, but from an outsider’s perspective, time appears to slow down and eventually crawl to a stop for you [...] So who is right? This discrepancy, and whose reality is ultimately correct, is a highly contested area of current physics research.”
No, it isn’t. The two observers have different descriptions of the process of falling into a black hole because they both use different time coordinates. There is no contradiction between the conclusions they draw. The outside observer’s story is an infinitely stretched version of the infalling observer’s story, covering only the part before horizon crossing. Nobody contests this.

I suspect this confusion was caused by the idea of black hole complementarity. Which is indeed a highly contested area of current physics research. According to black hole complementarity the information that falls into a black hole both goes in and comes out. This is in contradiction with quantum mechanics, which forbids making exact copies of a state. The idea of black hole complementarity is that nobody can ever make a measurement to document the forbidden copying and hence, it isn’t a real inconsistency. Making such measurements is typically impossible because the infalling observer only has a limited amount of time before hitting the singularity.

Black hole complementarity is actually a pretty philosophical idea.

Now, the black hole firewall issue points out that black hole complementarity is inconsistent. Even if you can’t measure that a copy has been made, pushing the infalling information into the outgoing radiation changes the vacuum state in the horizon vicinity to a state which is no longer empty: that’s the firewall.

Be that as it may, even in black hole complementarity the infalling observer still falls in, and crosses the horizon at a finite time.

The real question that drives much current research is how the information comes out of the black hole before it has completely evaporated. It’s a topic which has been discussed for more than 40 years now, and there is little sign that theorists will agree on a solution. And why would they? Leaving aside fluid analogies, there is no experimental evidence for what happens with black hole information, and there is hence no reason for theorists to converge on any one option.

The theory assessment in this research area is purely non-empirical, to use an expression by philosopher Richard Dawid. It’s why I think if we ever want to see progress on the foundations of physics we have to think very carefully about the non-empirical criteria that we use.

Anyway, the lesson here is: Everyday Einstein’s Quick and Dirty Tips is not a recommended travel guide for black holes.

Wednesday, April 20, 2016

Dear Dr B: Why is Lorentz-invariance in conflict with discreteness?

Can we build up space-time from
discrete entities?
“Could you elaborate (even) more on […] the exact tension between Lorentz invariance and attempts for discretisation?

Best,

Noa”

Dear Noa:

Discretization is a common procedure to deal with infinities. Since quantum mechanics relates large energies to short (wave) lengths, introducing a shortest possible distance corresponds to cutting off momentum integrals. This can remove infinities that come in at large momenta (or, as the physicists say, “in the UV”).

Such hard cut-off procedures were quite common in the early days of quantum field theory. They have since been replaced with more sophisticated regularization procedures, but these don’t work for quantum gravity. Hence it is tempting to use discretization to get rid of the infinities that plague quantum gravity.

Lorentz-invariance is the symmetry of Special Relativity; it tells us how observables transform from one reference frame to another. Certain types of observables, called “scalars,” don’t change at all. In general, observables do change, but they do so under a well-defined procedure, namely by the application of Lorentz-transformations. We call these “covariant.” Or at least we should; most often invariance is conflated with covariance in the literature.

(To be precise, Lorentz-covariance isn’t the full symmetry of Special Relativity because there are also translations in space and time that should maintain the laws of nature. If you add these, you get Poincaré-invariance. But the translations aren’t so relevant for our purposes.)

Lorentz-transformations acting on distances and times lead to the phenomena of Lorentz-contraction and time-dilatation. That means observers at relative velocities to each other measure different lengths and time-intervals. As long as there aren’t any interactions, this has no consequences. But once you have objects that can interact, relativistic contraction has measurable consequences.

Heavy ions for example, which are collided in facilities like RHIC or the LHC, are accelerated to almost the speed of light, which results in a significant length contraction in beam direction, and a corresponding increase in the density. This relativistic squeeze has to be taken into account to correctly compute observables. It isn’t merely an apparent distortion, it’s a real effect.

Now consider you have a regular cubic lattice which is at rest relative to you. Alice comes by in a space-ship at high velocity; what does she see? She doesn’t see a cubic lattice – she sees a lattice that is squeezed in one direction due to Lorentz-contraction. Which of you is right? You’re both right. It’s just that the lattice isn’t invariant under the Lorentz-transformation, and neither are any interactions with it.

The lattice can therefore be used to define a preferred frame, that is a particular reference frame which isn’t like any other frame, violating observer independence. The easiest way to do this would be to use the frame in which the spacing is regular, ie your restframe. If you compute any observables that take into account interactions with the lattice, the result will now explicitly depend on the motion relative to the lattice. Condensed matter systems are thus generally not Lorentz-invariant.

A Lorentz-contraction can convert any distance, no matter how large, into another distance, no matter how short. Similarly, it can blue-shift long wavelengths to short wavelengths, and hence can make small momenta arbitrarily large. This however runs into conflict with the idea of cutting off momentum integrals. For this reason approaches to quantum gravity that rely on discretization or analogies to condensed matter systems are difficult to reconcile with Lorentz-invariance.
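
Concretely, for a boost with velocity $v = \beta c$,

$$L' = L\,\sqrt{1-\beta^2}, \qquad \lambda' = \lambda\,\sqrt{\frac{1-\beta}{1+\beta}},$$

for the length of a rod along the boost direction and the wavelength of light approached head-on, and both can be made as small as you like by taking $\beta$ close enough to 1. No finite lattice spacing or momentum cutoff keeps its value in all reference frames.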

So what, you may say, let’s just throw out Lorentz-invariance then. Let us just take a tiny lattice spacing so that we won’t see the effects. Unfortunately, it isn’t that easy. Violations of Lorentz-invariance, even if tiny, spill over into all kinds of observables even at low energies.

A good example is vacuum Cherenkov radiation, that is the spontaneous emission of a photon by an electron. This effect is normally – ie when Lorentz-invariance is respected – forbidden due to energy-momentum conservation. It can only take place in a medium which has components that can recoil. But Lorentz-invariance violation would allow electrons to radiate off photons even in empty space. No such effect has been seen, and this leads to very strong bounds on Lorentz-invariance violation.
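
Schematically, and only as a toy parameterization (not the specific form used to derive the published bounds), suppose photons obeyed

$$\omega^2 = c^2 k^2\,(1-\xi), \qquad 0 < \xi \ll 1.$$

Photons would then travel slightly slower than the maximal speed of electrons, so a sufficiently fast electron outruns its own field and radiates Cherenkov photons in vacuum, much as a charged particle does in water. Since we observe electrons and protons at very high energies that evidently do not lose their energy this way, $\xi$ is bounded to be extremely small.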

And this isn’t the only bound. There are literally dozens of particle interactions that have been checked for Lorentz-invariance violating contributions with absolutely no evidence showing up. Hence, we know that Lorentz-invariance, if not exact, is respected by nature to extremely high precision. And this is very hard to achieve in a model that relies on a discretization.

Having said that, I must point out that not every quantity with the dimension of a length actually transforms as a distance. Thus, the existence of a fundamental length scale is not a priori in conflict with Lorentz-invariance. The best example is maybe the Planck length itself: it has the units of a length, but it’s defined from constants of nature that are themselves frame-independent, and it doesn’t transform as a distance. For the same reason string theory is perfectly compatible with Lorentz-invariance even though it contains a fundamental length scale.

The tension between discreteness and Lorentz-invariance appears whenever you have objects that transform like distances or like areas or like spatial volumes. The Causal Set approach is therefore an exception to the problems with discreteness (to my knowledge the only exception). The reason is that Causal Sets are a randomly distributed collection of (unconnected!) points with a four-density that is constant on the average. The random distribution prevents the problems with regular lattices. And since points and four-volumes are both Lorentz-invariant, no preferred frame is introduced.
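
The statement about the random distribution can be made more precise. To my understanding, the points are sprinkled by a Poisson process, so the probability of finding $n$ points in a region of space-time volume $V$ is

$$P(n) = \frac{(\rho V)^n}{n!}\,e^{-\rho V},$$

which depends only on the four-volume $V$, itself a Lorentz-invariant quantity, and hence singles out neither a direction nor a frame.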

It is remarkable just how difficult Lorentz-invariance makes it to reconcile general relativity with quantum field theory. The fact that no violations of Lorentz-invariance have been found and the insight that discreteness therefore seems an ill-fated approach has significantly contributed to the conviction of string theorists that they are working on the only right approach. Needless to say there are some people who would disagree, such as probably Carlo Rovelli and Garrett Lisi.

Either way, the absence of Lorentz-invariance violations is one of the prime examples that I draw upon to demonstrate that it is possible to constrain theory development in quantum gravity with existing data. Everyone who still works on discrete approaches must now make really sure to demonstrate there is no conflict with observation.

Thanks for an interesting question!

Wednesday, April 13, 2016

Dark matter might connect galaxies through wormholes

Tl;dr: A new paper shows that one of the most popular dark matter candidates – the axion – could make wormholes possible if strong electromagnetic fields, like those found around supermassive black holes, are present. It remains unclear how such wormholes would form and whether they would be stable.
Wormhole dress.
Source: Shenova.

Wouldn’t you sometimes like to vanish into a hole and crawl out in another galaxy? It might not be as impossible as it seems. General relativity has long been known to allow for “wormholes” that are short connections between seemingly very distant places. Unfortunately, these wormholes are unstable and cannot be traversed unless filled by “exotic matter,” which must have negative energy density to keep the hole from closing. And no matter that we have ever seen has this property.

The universe, however, contains a lot of matter that we have never seen, which might give you hope. We observe this “dark matter” only through its gravitational pull, but this is enough to tell that it behaves pretty much like regular matter. Dark matter too is thus not exotic enough to help with stabilizing wormholes. Or so we thought.

In a recent paper, Konstantinos Dimopoulos from the “Consortium for Fundamental Physics” at Lancaster University points out that dark matter might be able to mimic the behavior of exotic matter when caught in strong electromagnetic fields:
    Active galaxies may harbour wormholes if dark matter is axionic
    By Konstantinos Dimopoulos
    arXiv:1603.04671 [astro-ph.HE]
Axions are one of the most popular candidates for dark matter. The particles themselves are very light, but they form a condensate in the early universe that should still be around today, giving rise to the observed dark matter distribution. Like all other dark matter candidates, axions have been searched for but so far not been detected.

In his paper, Dimopoulos points out that, due to their peculiar coupling to electromagnetic fields, axions can acquire an apparent mass which makes a negative contribution to their energy. This effect isn’t so unusual – it is similar to the way that fermions obtain masses by coupling to the Higgs or that scalar fields can obtain effective masses by coupling to electromagnetic fields. In other words, it’s not totally unheard of.
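
For orientation, the coupling he refers to is the usual axion-photon interaction, which (up to sign conventions) reads

$$\mathcal{L}_{a\gamma} = -\frac{g_{a\gamma}}{4}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu} = g_{a\gamma}\, a\, \vec{E}\cdot\vec{B},$$

so in a strong background electromagnetic field the axion’s effective potential gets shifted. How exactly this shift mimics a negative contribution to the energy is the content of the paper, and I won’t reproduce the details here.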

Dimopoulos then estimates how strong an electromagnetic field is necessary to turn axions into exotic matter and finds that around supermassive black holes the conditions would just be right. Hence, he concludes, axionic dark matter might keep wormholes open and traversable.

In his present work, Dimopoulos has however not done a fully relativistic computation. He considers the axions in the background of the black hole, but not the coupled solution of axions plus black hole. The analysis so far also does not check whether the wormhole would indeed be stable, or if it would instead blow off the matter that is supposed to stabilize it. And finally, it leaves open the question of how the wormhole would form. It is one thing to discuss configurations that are mathematically possible, but it’s another thing entirely to demonstrate that they can actually come into being in our universe.

So it’s an interesting idea, but it will take a little more to convince me that this is possible.

And in case you warmed up to the idea of getting out of this galaxy, let me remind you that the closest supermassive black hole is still 26,000 light years away.

Note added: As mentioned by a commenter (see below) the argument in the paper might be incorrect. I asked the author for comment, but no reply so far.
Another note: The author says he has revised and replaced the paper, and that the conclusions are not affected.