
Friday, June 24, 2016

Where can new physics hide?

[Also an acronym for “Not Even Wrong.”]

The year is 2016, and physicists are restless. Four years ago, the LHC confirmed the Higgs boson, the last outstanding prediction of the standard model. The chances were good, so they thought, that the LHC would also discover other new particles – naturalness seemed to demand it. But their hopes were disappointed.

The standard model and general relativity do a great job, but physicists know this can’t be it. Or at least they think they know: The theories are incomplete, not only in disagreement with each other, staring each other in the face without talking, but inadmissibly wrong, giving rise to paradoxes with no known cure. There has to be more to find, somewhere. But where?

The hiding places for novel phenomena are getting smaller. But physicists haven’t yet exhausted their options. Here are the most promising areas where they currently search:

1. Weak Coupling

Particle collisions at high energies, like those reached at the LHC, can produce any existing particle up to the energy of the colliding particles. How often a new particle is produced, however, depends on the strength with which it couples to the particles that were brought to collision (for the LHC that’s protons, or rather their constituents, quarks and gluons). A particle that couples very weakly might be produced so rarely that it could have gone unnoticed so far.
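
As a rough illustration of why weak coupling means rare production (a generic back-of-the-envelope scaling, not a statement about any particular model mentioned here): the number of events a collider records is the production cross section times the integrated luminosity, and for a new particle that couples with strength g the cross section is suppressed by powers of g,

    \[ N_{\rm events} \;=\; \sigma \times L_{\rm int}\,, \qquad \sigma_{\rm new} \;\propto\; g^{2} \]

so cutting the coupling in half reduces the expected event count by a factor of four or more (depending on how often the new coupling enters the process), and a sufficiently small g pushes the signal below the background.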

Physicists have proposed many new particles which fall into this category because weakly interacting stuff generally looks a lot like dark matter. Most notably there are the weakly interacting massive particles (WIMPs), sterile neutrinos (neutrinos which don’t couple to the known leptons), and axions (proposed to solve the strong CP problem, and also a dark matter candidate).

These particles are being looked for both by direct detection measurements – monitoring large tanks in underground mines for rare interactions – and by looking out for unexplained astrophysical processes that could make for an indirect signal.

2. High Energies

If the particles are not of the weakly interacting type, we would have noticed them already, unless their mass is beyond the energies we have reached so far with particle colliders. In this category we find all the supersymmetric partner particles, which are much heavier than the standard model particles because supersymmetry is broken. At high energies could also hide excitations of particles that exist in models with compactified extra dimensions. These excitations are similar to the higher harmonics of a string and show up at discrete energy levels which depend on the size of the extra dimension.
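
For a concrete picture of those discrete levels (the standard Kaluza-Klein formula for a single flat extra dimension of radius R, in units with ħ = c = 1; not specific to any particular model): a particle of mass m₀ propagating in the extra dimension appears to us as a tower of copies with masses

    \[ m_n \;=\; \sqrt{\,m_0^{2} + \frac{n^{2}}{R^{2}}\,}\,, \qquad n = 0, 1, 2, \dots \]

so the smaller the compactification radius, the higher the energy needed to produce even the first excitation – which is why these states could so far have escaped detection.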

Strictly speaking, it isn’t the mass that is relevant to the question of whether a particle can be discovered, but the energy necessary to produce the particle, which includes binding energy. An interaction like the strong nuclear force, for example, displays “confinement,” which means that it takes a lot of energy to tear quarks apart even though their masses are not all that large. Hence, quarks could have constituents – often called “preons” – bound by an interaction – dubbed “technicolor” – similar to the strong nuclear force. The most obvious models of technicolor, however, ran into conflict with data decades ago. The idea isn’t entirely dead, though, and while the surviving models aren’t presently particularly popular, some variants are still viable.

These phenomena are being looked for at the LHC and also in highly energetic cosmic ray showers.

3. High Precision

High precision tests of standard model processes are complementary to high energy measurements. They can be sensitive to the tiniest effects stemming from virtual particles with energies too high to be produced at colliders, but which still make a contribution at lower energies due to quantum effects. Examples are proton decay, neutron-antineutron oscillations, the muon g-2, the neutron electric dipole moment, and kaon oscillations. There are existing experiments for all of these, searching for deviations from the standard model, and the precision of these measurements is constantly increasing.
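
A sketch of why precision buys reach (generic effective-field-theory reasoning, not tied to any one of the experiments listed above): a virtual particle of mass M that is too heavy to be produced directly still shifts low-energy observables through higher-dimensional operators, and those shifts are suppressed by powers of the experiment’s energy over M,

    \[ \frac{\delta\mathcal{O}}{\mathcal{O}} \;\sim\; \left(\frac{E}{M}\right)^{n}\,, \qquad n \geq 1\,, \]

so improving the measurement precision by a couple of orders of magnitude extends the sensitivity to larger M even though the collision energy stays the same.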

A somewhat different high precision test is the search for neutrinoless double-beta decay, which would demonstrate that neutrinos are Majorana particles, an entirely new type of particle. (When it comes to fundamental particles, that is – Majorana particles have recently been produced as emergent excitations in condensed matter systems.)

4. Long ago

In the early universe, matter was much denser and hotter than we can ever hope to achieve in our particle colliders. Hence, signatures left over from this time can deliver a bounty of new insights. The fluctuations in the cosmic microwave background (the polarization B-modes and non-Gaussianities) may be able to test scenarios of inflation or its alternatives (like phase transitions from a non-geometric phase), whether our universe had a big bounce instead of a big bang, and – with some optimism – even whether gravity was quantized back then.

5. Far away

Some signatures of new physics appear at long distances rather than short ones. An outstanding question, for example, is the shape of the universe: Is it really infinitely large, or does it close back onto itself? And if it does, how? One can study these questions by looking for repeating patterns in the temperature fluctuations of the cosmic microwave background (CMB). If we live in a multiverse, it might occasionally happen that two universes collide, and this too would leave a signal in the CMB.

New insights might also hide in some of the well-known problems with the cosmological concordance model, such as the galaxy cusps that come out too pronounced and the dwarf galaxies that come out too numerous to fit well with observations. It is widely believed that these problems are numerical issues or due to a lack of understanding of astrophysical processes, and not pointers to something fundamentally new. But who knows?

Another novel phenomenon that would become noticeable at long distances is a fifth force, which would lead to subtle deviations from general relativity. This might have all kinds of effects, from violations of the equivalence principle to a time-dependence of dark energy. Hence, there are experiments testing the equivalence principle and the constancy of dark energy to ever higher precision.
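
For orientation (this is the standard figure of merit for such tests, not a result from any specific experiment): equivalence-principle violations are usually quantified by the Eötvös parameter, which compares the accelerations a₁ and a₂ of two test bodies of different composition in the same gravitational field,

    \[ \eta \;\equiv\; 2\,\frac{|a_1 - a_2|}{a_1 + a_2}\,. \]

Torsion-balance experiments constrain η to below roughly one part in 10^13; a fifth force that couples differently to different materials would show up as a nonzero η.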

6. Right here

Not all experiments are huge and expensive. While tabletop discoveries have become increasingly unlikely simply because we’ve pretty much tried all that could be done, there are still areas where small-scale lab experiments reach into unknown territory. This is notably the case in the foundations of quantum mechanics, where nanoscale devices, single-photon sources and detectors, and increasingly sophisticated noise-control techniques have enabled previously impossible experiments. Maybe one day we’ll be able to solve the dispute over the “correct” interpretation of quantum mechanics simply by measuring which one is right.

So, physics isn’t over yet. It has become more difficult to test new fundamental theories, but we are pushing the limits in many currently running experiments.

[This post previously appeared on Starts With a Bang.]

Wissenschaft auf Abwegen

On Monday I was in Regensburg, where I gave a public lecture on the topic “Wissenschaft auf Abwegen” (“Science Gone Astray”) for a series titled “Was ist Wirklich?” (“What is Real?”). The whole thing is now on YouTube. The video consists of about 30 minutes of talk followed by another hour of discussion. All in German. Only for true fans ;)

Saturday, June 18, 2016

New study finds no sign of entanglement with other universes

[Somewhere in the multiverse you’re having a good day.]
The German Autobahn is famous for its lack of speed limits, and yet the greatest speed limit of all comes from a German: Nothing, Albert Einstein taught us, is allowed to travel faster than light. This doesn’t prevent our ideas from racing, but sometimes it prevents us from ticketing them.

If we live in an eternally inflating multiverse that contains a vast number of universes, then the other universes recede from us faster than light. We are hence “causally disconnected” from the rest of the multiverse, separated from the other universes by the ongoing exponential expansion of space, unable to ever make a measurement that could confirm their existence. It is this causal disconnect that has led multiverse critics to complain the idea isn’t within the realm of science.

There are however some situations in which a multiverse can give rise to observable consequences. One is that our universe might in the past have collided with another universe, which would have left a tell-tale signature in the cosmic microwave background. Unfortunately, no evidence for this has been found.

Another proposal for how to test the multiverse is to exploit the subtle non-locality that quantum mechanics gives rise to. If we live in an ensemble of universes, and these universes started out in an entangled quantum state, then we might be able to today detect relics of their past entanglement.

This idea was made concrete by Richard Holman, Laura Mersini-Houghton, and Tomo Takahashi ten years ago. In their model (hep-th/0611223, hep-th/0612142), the original entanglement present among universes in the landscape decays and effectively leaves a correction to the potential that gives rise to inflation in our universe. This corrected potential in turn affects observables that we can measure today.

The particular way in which Mersini-Houghton and Holman include entanglement in the landscape isn’t by any means derived from first principles. It is a phenomenological construction that implicitly makes many assumptions about the way quantum effects are realized on the landscape. But, hey, it’s a model that makes predictions, and in theoretical high energy physics today that’s something to be grateful for.

They predicted back then that such an entanglement-corrected cosmology would in particular affect the physics on very large scales, giving rise to a modulation of the power spectrum that makes the cold spot a more likely appearance, a suppression of the power at large angular scales, and an alignment in the directions in which large structures move – the so-called “dark flow.” The tentative evidence for a dark flow that had appeared in 2008 was gone by 2013. But this disagreement with the data didn’t do much to the popularity of the model in the press.

In a recent paper, William Kinney from the University at Buffalo put to test the multiverse-entanglement with the most recent cosmological data:
    Limits on Entanglement Effects in the String Landscape from Planck and BICEP/Keck Data
    William H. Kinney
    arXiv:1606.00672 [astro-ph.CO]
The brief summary is that not only has he not found any evidence for the entanglement modification, he has also ruled out the formerly proposed model for two general types of inflationary potentials. The first, a generic exponential inflation, is by itself incompatible with the data, and adding the entanglement correction doesn’t help to make it fit. The second, Starobinsky inflation, is by itself a good fit to the data, but the entanglement correction spoils the fit.
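
For orientation (these are the standard textbook forms of the two potential classes; the entanglement corrections analyzed in Kinney’s paper are not reproduced here): generic exponential inflation and Starobinsky inflation correspond to potentials of roughly the form

    \[ V(\phi) \;=\; V_0\, e^{-\lambda\,\phi/M_{\rm Pl}} \qquad \text{and} \qquad V(\phi) \;=\; \Lambda^{4}\left(1 - e^{-\sqrt{2/3}\,\phi/M_{\rm Pl}}\right)^{2}. \]

The exponential potential predicts a tensor-to-scalar ratio that is already too large for the Planck and BICEP/Keck data, while the Starobinsky plateau fits those data well – consistent with the summary above that the entanglement correction can’t rescue the first and only spoils the second.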

Much to my puzzlement, his analysis also shows that some of the predictions of the original model (such as the modulation of the power spectrum) weren’t predictions to begin with, because Kinney in his calculation found that there are choices of parameters in which these effects don’t appear at all.

Leaving aside that this sheds a rather odd light on the original predictions, it’s not even clear exactly what has been ruled out here. What Kinney’s analysis does is to exclude a particular form of the effective potential for inflation (the one with the entanglement modification). This potential is, in the model by Holman and Mersini-Houghton, a function of the original potential (the one without the entanglement correction). Rather than ruling out the entanglement-modification, I can hence interpret this result to mean that the original potential just wasn’t the right one.

Or, in other words, how am I to know that one can’t find some other potential that will fit the data after adding the entanglement correction? The only difficulty I see in this would be to ensure that the uncorrected potential still leads to eternal inflation.

To top off this story of an unfalsifiable idea that made predictions which weren’t, one of the authors who proposed the entanglement model, Laura Mersini-Houghton, is apparently quite unhappy with Kinney’s paper and is trying to use an intellectual property claim to get it removed from the arXiv (see comments for details). I will resist the temptation to comment on the matter and simply direct you to the Wikipedia entry on the Streisand Effect. Dear Internet, please do your job.

For better or worse, I have in the last years been dragged into a discussion about what is and isn’t science, which has forced me to think more about the multiverse than I and my infinitely many copies believe is good for our sanity. After this latest episode, the status is that I side with Joe Silk, who captured it well: “[O]ne can always find inflationary models to explain whatever phenomenon is represented by the flavour of the month.”

Monday, June 13, 2016

String phenomenology of the somewhat different kind

[Cat’s cradle. Image Source.]
Ten years ago, I didn’t take the “string wars” seriously. To begin with, referring to such an esoteric conflict as “war” seems disrespectful to millions caught in actual wars. In comparison to their suffering it’s hard to take anything seriously.

Leaving aside my discomfort with the nomenclature, the focus on string theory struck me as odd. String theory as a research area stands out in hep-th and gr-qc merely because of the large number of followers, not because of its supposedly controversial research practices. For anybody working in the field it is apparent that string theorists don’t differ in their single-minded focus from physicists in other disciplines. Overspecialization is a common disease of academia, but one that necessarily goes along with division of labor, and often it is an efficient route to fast progress.

No, I thought back then, string theory wasn’t the disease, it was merely a symptom. The underlying disease was one that would surely soon be recognized and addressed: Theoreticians – as scientists whose most-used equipment is their own brain – must be careful to avoid systematic bias introduced by their apparatuses. In other words, scientific communities, and especially those which lack timely feedback by data, need guidelines to avoid social and cognitive biases.

This is so obvious that it came as a surprise to me that, in 2006, everybody was piling on Lee Smolin for pointing out what everybody knew anyway: that string theorists, lacking experimental feedback for decades, had drifted off in a math bubble with questionable relevance for the description of nature. It’s somewhat ironic that, from my personal experience, the situation is actually worse in Loop Quantum Gravity, an approach pioneered, among others, by Lee Smolin. At least the math used by string theorists seems to be good for something. The same cannot be said about LQG.

Ten years later, it is clear that I was wrong in thinking that just drawing attention to the problem would seed a solution. Not only has the situation not improved, it has worsened. We now have some theoretical physicists who argue that we should alter the scientific method so that the success of a theory can be assessed by means other than empirical evidence. This idea, which has sprung up in the philosophy community, isn’t all that bad in principle. In practice, however, it will merely serve to exacerbate social streamlining: If theorists can draw on criteria other than the ability of a theory to explain observations, the first criterion they’ll take into account is aesthetic value, and the second is popularity with their colleagues. Nothing good can come out of this.

And nothing good has come out of it, nothing has changed. The string wars clearly were more interesting for sociologists than they were for physicists. In the last couple of months several articles have appeared which comment on various aspects of this episode, which I’ve read and want to briefly summarize for you.

First, there is
    Collective Belief, Kuhn, and the String Theory Community
    Weatherall, James Owen and Gilbert, Margaret
    philsci-archive:11413
This paper is a very Smolin-centric discussion of whether string theorists are exceptional in their group beliefs. The authors argue that, no, actually string theorists just behave like normal humans and “these features seem unusual to Smolin not because they are actually unusual, but because he occupies an unusual position from which to observe them.” He is unusual, the authors explain, for having worked on string theory, but then deciding to not continue in the field.

It makes sense, the authors write, that people whose well-being to some extent depends on the acceptance by the group will adapt to the group:
“Expressing a contrary view – bucking the consensus – is an offense against the other members of the community… So, irrespective of their personal beliefs, there are pressures on individual scientists to speak in certain ways. Moreover, insofar as individuals are psychologically disposed to avoid cognitive dissonance, the obligation to speak in certain ways can affect one’s personal beliefs so as to bring them into line with the consensus, further suppressing dissent from within the group.”
Furthermore:
“As parties to a joint commitment, members of the string theory community are obligated to act as mouthpieces of their collective belief.”
I actually thought we knew this since 1895, when Le Bon published his “Study of the Popular Mind.”

The authors of the paper then point out that it’s normal for members of a scientific community to not jump ship at the slightest indication of conflicting evidence because often such evidence turns out to be misleading. It didn’t become clear to me what evidence they might be referring to; supposedly it’s non-empirical.

They further argue that a certain disregard for what is happening outside one’s own research area is also normal: “Science is successful in part because of a distinctive kind of focused, collaborative research,” and due to their commitment to the agenda “participants can be expected to resist change with respect to the framework of collective beliefs.”

This is all reasonable enough. Unfortunately, the authors entirely miss the main point, the very reason for the whole debate. The question isn’t whether string theorists’ behavior is that of normal humans – I don’t think that was ever in doubt – but whether that “normal human behavior” is beneficial for science. Scientific research requires, in a very specific sense, non-human behavior. It’s not normal for individuals to disregard subjective assessments and to not pay attention to social pressure. And yet, that is exactly what good science would require.

The second paper is
This paper is basically a summary of the string wars that focuses on the question of whether or not string theory can be considered science. This “demarcation problem” is a topic that philosophers and sociologists love to discuss, but to me it really isn’t particularly interesting how you classify some research area; the question is whether it’s good for something. This is a question which should be decided by the community, but as long as decision making is influenced by social pressures and cognitive biases I can’t trust the community judgement.

The article has a lot of fun quotations from very convinced string theorists, for example by David Gross: “String theory is full of qualitative predictions, such as the production of black holes at the LHC.” I’m not sure what the difference is between a qualitative prediction and no prediction, but either way it’s certainly not a prediction that was very successful. Also nice is John Schwarz claiming that “supersymmetry is the major prediction of string theory that could appear at accessible energies” and that “some of these superpartners should be observable at the LHC.” Lots of coulds and shoulds that didn’t quite pan out.

While the article gives a good overview of the opinions about string theory that were voiced during the 2006 controversy, the authors themselves clearly don’t know the topic they are writing about very well. A particularly odd statement that highlights their skewed perspective is: “String theory currently enjoys a privileged status by virtue of being the dominant paradigm within theoretical physics.”

I find it quite annoying how frequently I encounter this extrapolation from a particular research area – be that string theory, supersymmetry, or multiverse cosmology – to all of physics. The vast majority of physicists work in fields like quantum optics, photonics, hadronic and nuclear physics, statistical mechanics, atomic physics, solid state physics, low-temperature physics, plasma physics, astrophysics, condensed matter physics, and so on. They have nothing whatsoever to do with string theory, and certainly would be very surprised to hear that it’s “the dominant paradigm.”

In any case, you might find this paper useful if you didn’t follow the discussion 10 years ago.

Finally, there is this paper

The title of the paper doesn’t explicitly refer to string theory, but most of it is also a discussion of the demarcation problem on the example of arXiv trackbacks. (I suspect this paper is a spin-off of the previous paper.)

ArXiv trackbacks, in case you didn’t know, are links to blogposts that show up on some papers’ arXiv pages when the blogpost has referred to the paper. Exactly which blogs’ trackbacks show up and who makes the decision whether they do is one of the arXiv’s best-kept secrets. Peter Woit’s blog, infamously, doesn’t show up in the arXiv trackbacks, for the rather spurious reason that he supposedly doesn’t count as an “active researcher.” The paper tells the full 2006 story with lots of quotes from bloggers you are probably familiar with.

The arXiv recently conducted a user survey, among other things about the trackback feature, which makes me think they might have some updates planned.

On the question of who counts as a crackpot, the paper (unsurprisingly) doesn’t come to a conclusion other than noting that scientists deal with the issue by stating “we know one when we see one.” I don’t think there can be any other definition than that. To me the notion of “crackpot” is an excellent example of an emergent feature – it’s a demarcation that the community creates during its operation. Any attempt to come up with a definition from first principles is hence doomed to fail.

The rest of the paper is a general discussion of the role of blogs in science communication, but I didn’t find it particularly insightful. The author comes to the (correct) conclusion that blog content turned out not to have such a short life-time as many feared, but otherwise basically just notes that there are as many ways to use blogs as there are bloggers. But then if you are reading this, you already knew that.

One of the main benefits that I see in blogs isn’t mentioned in the paper at all, which is that blogs support communication between scientific communities that are only loosely connected. In my own research area, I read the papers, hear the seminars, and go to conferences, and I therefore know pretty well what is going on – with or without blogs. But I use blogs to keep up to date in adjacent fields, like cosmology, astrophysics and, to a lesser extent, condensed matter physics and quantum optics. For this purpose I find blogs considerably more useful than popular science news, because the latter often doesn’t provide a useful amount of detail and commentary, not to mention that they all tend to latch onto the same three papers that made big unsubstantiated claims.

Don’t worry, I haven’t suddenly become obsessed with string theory. I’ve read through these sociology papers mainly because I cannot not write a few paragraphs about the topic in my book. But I promise that’s it from me about string theory for some while.

Update: Peter Woit has some comments on the trackback issue.

Monday, June 06, 2016

Dear Dr B: Why not string theory?

[I got this question in reply to last week’s book review of Why String Theory? by Joseph Conlon.]

Dear Marco:

Because we might be wasting time and money and, ultimately, risk that progress stalls entirely.

In contrast to many of my colleagues I do not think that trying to find a quantum theory of gravity is an endeavor purely for the sake of knowledge. Instead, it seems likely to me that finding out what the quantum properties of space and time are will further our understanding of quantum theory in general. And since that theory underlies all modern technology, this is research which bears relevance for applications. Not in ten years and not in 50 years, but maybe in 100 or 500 years.

So far, string theory has scored in two areas. First, it has proved interesting for mathematicians. But I’m not one to easily get floored by pretty theorems – I care about math only to the extent that it’s useful to explain the world. Second, string theory has been shown to be useful for pushing ahead with the lesser understood aspects of quantum field theories. This seems a fruitful avenue and is certainly something to continue. However, it has nothing to do with string theory as a theory of quantum gravity and a unification of the fundamental interactions.

As far as quantum gravity is concerned, string theorists’ main argument seems to be “Well, can you come up with something better?” Then of course if someone answers this question with “Yes,” they would never agree that something else might possibly be better. And why would they – there’s no evidence forcing them one way or the other.

I don’t see what one learns from discussing which theory is “better” based on philosophical or aesthetic criteria. That’s why I decided to stay out of this and instead work on quantum gravity phenomenology. As far as testability is concerned all existing approaches to quantum gravity do equally badly, and so I’m equally unconvinced by all of them. It is somewhat of a mystery to me why string theory has become so dominant.

String theorists are very proud of having a microcanonical explanation for the black hole entropy. But we don’t know whether that’s actually a correct description of nature, since nobody has ever seen a black hole evaporate. In fact one could read the firewall problem as a demonstration that indeed this cannot be a correct description of nature. Therefore, this calculation leaves me utterly unimpressed.
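
For context (the textbook area law, not the string-theoretic computation itself): the result any microscopic counting has to reproduce is the Bekenstein-Hawking entropy, which ties a black hole’s entropy to its horizon area A,

    \[ S_{\rm BH} \;=\; \frac{k_B\, c^{3} A}{4\, G\, \hbar}\,. \]

The string calculation recovers exactly this value for certain extremal black holes by counting their microstates; whether that counting describes the black holes we actually expect to find in nature is the open question raised above.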

But let me be clear here. Nobody (at least nobody whose opinion matters) says that string theory is a research program that should just be discontinued. The question is instead one of balance – does the promise justify the amount of funding spent on it? And the answer to this question is almost certainly no.

The reason is that academia is currently organized so that it invites communal reinforcement, prevents researchers from leaving fields whose promise is dwindling, and supports a rich-get-richer trend. That institutional assessments use the quantity of papers and citation counts as a proxy for quality creates a bonus for fields in which papers can be cranked out quickly. Hence it isn’t surprising that an area whose mathematics its own practitioners frequently describe as “rich” would flourish. What does mathematical “richness” tell us about the use of a theory in the description of nature? I am not aware of any known relation.

In his book Why String Theory?, Conlon tells the history of the discipline from a string theorist’s perspective. As a counterpoint, let me tell you how a cynical outsider might tell this story:

String theory was originally conceived as a theory of the strong nuclear force, but it was soon discovered that quantum chromodynamics was more up to the task. After noting that string theory contains a particle that could be identified as the graviton, it was reconsidered as a theory of quantum gravity.

It turned out, however, that string theory only makes sense in a 25-dimensional space. To make that compatible with observations, 22 of the dimensions were moved out of sight by rolling them up (compactifying them) to a radius so small they couldn’t be observationally probed.

Next it was noted that the theory also needs supersymmetry. This brings down the number of space dimensions to 9, but also brings a new problem: The world, unfortunately, doesn’t seem to be supersymmetric. Hence, it was postulated that supersymmetry is broken at an energy scale so high we wouldn’t see the symmetry. Even with that problem fixed, however, it was quickly noticed that moving the superpartners out of direct reach would still leave problematic interactions – flavor changing neutral currents and, among other things, couplings that lead to proton decay – in conflict with observation. Thus, theorists invented R-parity to fix that problem.

The next problem that appeared was that the cosmological constant turned out to be positive instead of zero or negative. While a negative cosmological constant would have been easy to accommodate, string theorists didn’t know what to do with a positive one. But it only took some years to come up with an idea to make that happen too.

It was hoped that string theory would be a unique completion of the standard model including general relativity. Instead it slowly became clear that there is a huge number of different ways to get rid of the additional dimensions, each of which leads to a different theory at low energies. String theorists are now trying to deal with that problem by inventing some probability measure according to which the standard model is at least a probable occurrence in string theory.

So, you asked, why not string theory? Because it’s an approach that has been fixed over and over again to make it compatible with conflicting observations. Every time that’s been done, string theorists became more convinced of their ideas. And every time they did this, I became more convinced they are merely building a mathematical toy universe.

String theorists of course deny that they are influenced by anything but objective assessment. One noteworthy exception is Joe Polchinski, who has considered whether social effects play a role, but came to the conclusion that they aren’t relevant. I think it speaks for his intellectual sincerity that he at least considered it.

At the Munich workshop last December, David Gross (in an exchange with Carlo Rovelli) explained that funding decisions have no influence on whether theoretical physicists choose to work in one field or the other. Well, that’s easy to say if you’re a Nobel Prize winner.

Conlon in his book provides “evidence” that social bias plays no role by explaining that there was only one string theorist on a panel that (positively) evaluated one of his grants. To begin with, anecdotes can’t replace data, and there is ample evidence that social biases are common human traits, so by default scientists should be assumed susceptible. But even taking his anecdote at face value, I’m not sure why Conlon thinks leaving decisions to non-experts limits bias. My expectation would be that it amplifies bias, because it requires drawing on simplified criteria, like the number of papers published and how often they’ve been cited. And what does that depend on? On how many people there are in the field and how many peers favorably reviewed papers on the topic of your work.

I am listing these examples to demonstrate that it is quite common for theoretical physicists (not string theorists in particular) to dismiss the mere possibility that social dynamics influences research decisions.

How large a role do social dynamics and cognitive biases play, and how much do they slow down progress on the foundations of physics? I can’t tell you. But even though I can’t tell you how much faster progress could be, I am sure it’s being slowed down. I can tell that in the same way that I can tell you diesel in Germany is sold under market value even though I don’t know the market value: I know it because it’s subsidized. And in the same way I can tell that string theory is overpopulated and its promise is overestimated, because it’s an idea that benefits from biases which humans demonstrably possess. But I can’t tell you what its real value would be.

The reproduction crisis in the life sciences and psychology has spurred a debate about better measures of statistical significance. Experimentalists go to great lengths to put in place all kinds of standardized procedures so as not to draw the wrong conclusions from what their apparatuses measure. In theory development, we have our own crisis, but nobody talks about it. The apparatuses that we use are our own brains, and the biases we should guard against are cognitive and social biases, communal reinforcement, the sunk cost fallacy, wishful thinking, and status-quo bias, to mention just the most common ones. These, however, are presently entirely unaccounted for. Is this the reason why string theory has gathered so many followers?

Some days I side with Polchinski and Gross and don’t think it makes that much of a difference. It really is an interesting topic and it’s promising. On other days I think we’ve wasted 30 years studying bizarre aspects of a theory that doesn’t bring us any closer to understanding quantum gravity, and it’s nothing but an empty bubble of disappointed expectations. Most days I have to admit I just don’t know.

Why not string theory? Because enough is enough.

Thanks for an interesting question.