Monday, July 16, 2018

A new tool for arXiv users

Time is money. It’s also short. And so we save time wherever we can, even when we describe our own research. All too often, one word must do: You are a cosmologist, or a particle physicist, or a string theorist. You work on condensed matter, or quantum optics, or plasma physics.

Most departments of physics use such simple classifications. But our scientific interests cannot be so easily classified. All too often, one word is not enough.

Each scientist has their own unique research interests. Maybe you work on astrophysics and cosmology and particle physics and quantum gravity. Maybe you work on condensed matter physics and quantum computing and quantitative finance.

Whatever your research interests, you can now show off their full breadth, not in one word, but in one image. On our new website SciMeter, you can create a keyword cloud from your arXiv papers. For example, here is the cloud for Stephen Hawking’s papers:

You can also search for similar authors and for people who have worked on a certain topic, or a set of topics.

As I promised previously, on this website you can also find your broadness value (it is listed below the cloud). Please note that the value we quote on the website is in standard deviations from the average, so that negative values of broadness are below average and positive values above. Also keep in mind that we measure broadness relative to the total average, ie over all arXiv categories.
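For the curious, expressing a score in standard deviations from the average is a one-line standardization. A minimal sketch in Python (the function and numbers are made up for illustration; the actual SciMeter computation is more involved):

```python
# Hypothetical sketch: express a raw "broadness" score as the number of
# standard deviations it lies from the population mean, so that negative
# values are below average and positive values above.

def standardized(value, population):
    """Return how many standard deviations `value` lies from the mean."""
    n = len(population)
    mean = sum(population) / n
    variance = sum((x - mean) ** 2 for x in population) / n
    return (value - mean) / variance ** 0.5

# Made-up raw broadness scores for a small population of authors:
scores = [0.2, 0.5, 0.8, 1.1, 1.4]
print(standardized(1.1, scores))  # positive: broader than average
```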

While this website is mostly aimed at authors in the field of physics, we hope it will also be of use to journalists looking for an expert or for editors looking for reviewers.

The software for this website was developed by Tom Price and Tobias Mistele, who were funded on an FQXi minigrant. It is entirely non-profit and we do not plan on making money with it. This means maintaining and expanding this service (eg to include other data) will only be possible if we can find sponsors.

If you encounter any problems with the website, please do not submit the issue here, but use the form that you find on the help page.

Wednesday, July 11, 2018

What's the purpose of working in the foundations of physics?

That’s me. Photo by George Musser.
Yes, I need a haircut.
[Several people asked me for a transcript of my intro speech that I gave yesterday in Utrecht at the 19th UK and European conference on foundations of physics. So here it is.]

Thank you very much for the invitation to this 19th UK and European conference on Foundations of physics.

The topic of this conference combines everything that I am interested in, and I have seen the organizers have done an awesome job lining up the program. From locality and non-locality to causality, the past hypothesis, determinism, indeterminism, and irreversibility, the arrow of time and presentism, symmetries, naturalness and finetuning, and, of course, everyone’s favorites: black holes and the multiverse.

This is sure to be a fun event. But working in the foundations of physics is not always easy.

When I write a grant proposal, inevitably I will get to the part in which I have to explain the purpose of my work. My first reaction to this is always: What’s the purpose of anything anyway?

My second thought is: Why do only scientists get this question? Why doesn’t anyone ask Gucci what’s the purpose of the Spring collection? Or Ed Sheeran what’s the purpose of singing about your ex-lover? Or Ronaldo what’s the purpose of running after a leather ball and trying to kick it into a net?

Well, you might say, the purpose is that people like to buy it, hear it, watch it. But what’s the purpose of that? Well, it makes their lives better. And what’s the purpose of that?

If you go down the rabbit hole, you find that whenever you ask for purpose you end up asking what’s the purpose of life. And to that, not even scientists have an answer.

Sometimes I therefore think maybe that’s why they ask us to explain the purpose of our work. Just to remind us that science doesn’t have answers to everything.

But then we all know that the purpose of the purpose section in a grant proposal is not to actually explain the purpose of what you do. It is to explain how your work contributes to what other people think its purpose should be. And that often means applications and new technology. It means something you can build, or sell, or put under the Christmas tree.

I am sure I am not the only one here who has struggled to explain the purpose of work in the foundations of physics. I therefore want to share with you an observation that I have made during more than a decade of public outreach: No one from the public ever asks this question. It comes from funding bodies and politicians exclusively.

Everyone else understands just fine what’s the purpose of trying to describe space and time and matter, and the laws they are governed by. The purpose is to understand. These laws describe our universe; they describe us. We want to know how they work.

Seeking this knowledge is the purpose of our work. And, if you collect it in a book, you can even put it under a Christmas tree.

So I think we should not be too apologetic about what we are doing. We are not the only ones who care about the questions we are trying to answer. A lot of people want to understand how the universe works. Because understanding makes their lives better. Whatever is the purpose of that.

But I must add that through my children I have rediscovered the joys of materialism. Kids these days have the most amazing toys. They have tablets that take videos – by voice control. They have toy helicopters – that actually fly. They have glittery slime that glows in the dark.

So, stuff is definitely fun. Let me say some words on applications of the foundations of physics.

In contrast to most people who work in the field – and probably most of you – I do not think that whatever new we will discover in the foundations will remain pure knowledge, detached from technology. The reason is that I believe we are missing something big about the way that quantum theory cooperates with space and time.

And if we solve this problem, it will lead to new insights about quantum mechanics, the theory behind all our fancy new electronic gadgets. I believe the impact will be substantial.

You don’t have to believe me on this.

I hope you will believe me, though, when I say that this conference gathers some of the brightest minds on the planet and tackles some of the biggest questions we know.

I wish all of you an interesting and successful meeting.

Sunday, July 08, 2018

Away Note

I’ll be in Utrecht next week for the 19th UK and European Conference on Foundations of Physics. August 28th I’ll be in Santa Fe, September 6th in Oslo, September 22nd I’ll be in London for another installment of the HowTheLightGetsIn Festival.

I have been educated that this festival derives its name from Leonard Cohen’s song “Anthem” which features the lines
“Ring the bells that still can ring
Forget your perfect offering
There is a crack in everything
That’s how the light gets in.”
If you have read my book, the crack metaphor may ring a bell. If you haven’t, you should.

October 3rd I’m in NYC, October 4th I’m in Richmond, Kentucky, and the second week of October I am at the International Book Fair in Frankfurt.

In case our paths cross, please say “Hi” – I’m always happy to meet readers irl.

Thursday, July 05, 2018

Limits of Reductionism

Almost forgot to mention: I won 3rd prize in the 2018 FQXi essay contest “What is fundamental?”

The new essay continues my thoughts about whether free will is or isn’t compatible with what we know about the laws of nature. For many years I was convinced that the only way to make free will compatible with physics is to adopt a meaningless definition of free will. The current status is that I cannot exclude that it’s compatible.

The conflict between physics and free will is that, to our best current knowledge, everything in the universe is made of a few dozen particles (give or take some more for dark matter), and we know the laws that determine those particles’ behavior. They all work the same way: If you know the state of the universe at one time, you can use the laws to calculate the state of the universe at all other times. This implies that what you do tomorrow is already encoded in the state of the universe today. There is, hence, nothing free about your behavior.

Of course nobody knows the state of the universe at any one time. Also, quantum mechanics makes the situation somewhat more difficult in that it adds randomness. This randomness would prevent you from actually making a prediction for exactly what happens tomorrow even if you knew the state of the universe at one moment in time. With quantum mechanics, you can merely make probabilistic statements. But just because your actions have a random factor doesn’t mean you have free will. Atoms randomly decay and no one would call that free will. (Well, no one in their right mind anyway, but I’ll postpone my rant about panpsychic pseudoscience to some other time.)

People also often invoke chaos to insist that free will is a thing, but please note that chaos is predictable in principle; it’s just not predictable in practice, because it makes a system’s behavior highly dependent on the exact values of the initial conditions. The initial conditions, however, still determine the behavior. So neither quantum mechanics nor chaos brings free will back into the laws of nature.
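To see concretely how determinism and practical unpredictability coexist, here is a sketch using the logistic map, a textbook chaotic system (my illustration, unrelated to any specific physical model):

```python
# The logistic map x -> r*x*(1-x) with r = 4 is fully deterministic:
# the same initial condition always yields the same orbit. Yet two
# orbits that start a millionth apart soon disagree completely.

def logistic_orbit(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.400000, 50)
b = logistic_orbit(0.400001, 50)  # initial conditions differ by 1e-6

print(abs(a[1] - b[1]))    # still tiny after one step
print(abs(a[50] - b[50]))  # typically order one after fifty steps
```

Re-running with the identical initial condition reproduces the orbit exactly, which is the sense in which chaos remains predictable in principle.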

Now, there are a lot of people who want you to accept watered-down versions of free will, eg that you have free will because no one can in practice predict your behavior, or because no one can tell what’s going on in your brain, and so on. But I think this is just verbal gymnastics. If you accept that the current theories of particle physics are correct, free will doesn’t exist in a meaningful way.

That is as long as you believe – as almost all physicists do – that the laws that dictate the behavior of large objects follow from the laws that dictate the behavior of the object’s constituents. That’s what reductionism tells us, and let me emphasize that reductionism is not a philosophy, it’s an empirically well-established fact. It describes what we observe. There are no known exceptions to it.

And we have methods to derive the laws of large objects from the laws for small objects. In this case, then, we know that predictive laws for human behavior exist; it’s just that in practice we can’t compute them. The formalism of effective field theories tells us exactly how the behavior of large objects and their interactions relates to the behavior of smaller objects and their interactions.

There are a few examples in the literature where people have tried to find systems for which the behavior on large scales cannot be computed from the behavior at small scales. But these examples use unrealistic systems with an infinite number of constituents and I don’t find them convincing cases against reductionism.

It occurred to me some years ago, however, that there is a much simpler example for how reductionism can fail. It can fail simply because the extrapolation from the theory at short distances to the one at long distances is not possible without inputting further information. This can happen if the scale-dependence of a constant has a singularity, and that’s something which we cannot presently exclude.

By a singularity I do not mean a divergence, ie that something becomes infinitely large. Such situations are unphysical and not cases I would consider plausible for realistic systems. But functions can have singularities without anything becoming infinite: A singularity is merely a point beyond which a function cannot be continued.
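A toy example of such a finite-but-singular function (my illustration, not one from the essay):

```latex
f(x) = \sqrt{x_{*} - x}
```

This f is perfectly finite at x = x*, where it equals zero, but it has a branch point there: the real-valued function cannot be uniquely continued to x > x*. If the scale-dependence of a constant ran into a point of this kind, extrapolating from short distances to long distances would require new input beyond that scale.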

I do not currently know of any example for which this actually happens. But I also don’t know a way to exclude it.

Now suppose you want to derive the theory for large objects (think humans) from the theory for small objects (think elementary particles), but in your derivation you find that one of the functions has a singularity at some scale in between. This means you need new initial values past the singularity. It’s a clean example of a failure of reductionism, and it implies that the laws for large objects indeed might not follow from the laws for small objects.

It will take more than this to convince me that free will isn’t an illusion, but this example for the failure of reductionism gives you an excuse to continue believing in free will.

Full essay with references here.

Thursday, June 28, 2018

How nature became unnatural

Naturalness is an old idea; it dates back at least to the 16th century and captures the intuition that a useful explanation shouldn’t rely on improbable coincidences. Typical examples for such coincidences, often referred to as “conspiracies,” are two seemingly independent parameters that almost cancel each other, or an extremely small yet nonzero number. Physicists believe that theories which do not have such coincidences, and are natural in this particular sense, are more promising than theories that are unnatural.

Naturalness has its roots in human experience. If you go for a walk and encounter a delicately balanced stack of stones, you conclude someone constructed it. This conclusion is based on your knowledge that stones distributed throughout landscapes by erosion, weathering, deposition, and other geological processes aren’t likely to end up in neat piles. You know this quite reliably because you have seen a lot of stones, meaning you have statistics from which you can extract a likelihood.

As the example hopefully illustrates, naturalness is a good criterion in certain circumstances, namely when you have statistics, or at least means to derive statistics. A solar system with ten planets in almost the same orbit is unlikely. A solar system with ten planets in almost the same plane isn’t. We know this both because we’ve observed a lot of solar systems, and also because we can derive their likely distribution using the laws of nature discovered so far, and initial conditions that we can extract from yet other observations. So that’s a case where you can use arguments from naturalness.

But this isn’t how arguments from naturalness are used in theory-development today. In high energy physics and some parts of cosmology, physicists use naturalness to select a theory for which they do not have – indeed cannot ever have – statistical distributions. The trouble is that they ask which values of parameters in a theory are natural. But since we can observe only one set of parameters – the one that describes our universe – we have no way of collecting data for the likelihood of getting a specific set of parameters.

Physicists use criteria from naturalness anyway. In such arguments, the probability distribution is unspecified, but often implicitly assumed to be almost uniform over an interval of size one. There is, however, no way to justify this distribution; it is hence an unscientific assumption. This problem was made clear already in a 1994 paper by Anderson and Castano.
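How strongly the verdict depends on the unspecified distribution is easy to demonstrate with a toy simulation (entirely my own illustration, with made-up numbers):

```python
import random

# Toy illustration: how "improbable" a near-cancellation of two
# parameters looks depends entirely on the assumed probability
# distribution -- which, for the constants of nature, nobody knows.

def near_cancellation_rate(sampler, trials=100_000, tuning=1e-3):
    """Fraction of sampled pairs (a, b) with |a - b| below `tuning`."""
    hits = 0
    for _ in range(trials):
        a, b = sampler(), sampler()
        if abs(a - b) < tuning:
            hits += 1
    return hits / trials

random.seed(0)
flat = lambda: random.uniform(0.0, 1.0)    # the usual implicit choice
peaked = lambda: random.gauss(0.5, 0.001)  # sharply peaked near 0.5

print(near_cancellation_rate(flat))    # rare: looks "unnatural"
print(near_cancellation_rate(peaked))  # common: looks perfectly natural
```

The same observed cancellation is improbable under one measure and probable under the other; without a justified distribution, the naturalness verdict is arbitrary.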

The Standard Model of particle physics, or the mass of the Higgs boson more specifically, is unnatural in the way described above, and this is currently considered ugly. This is why theorists invented new theories to extend the Standard Model so that naturalness would be reestablished. The most popular way to do this is by making the Standard Model supersymmetric, thereby adding a bunch of new particles.

The Large Hadron Collider (LHC), like several experiments before it, has not found any evidence for supersymmetric particles. This means that, according to the currently used criterion of naturalness, the theories of particle physics are, in fact, unnatural. That’s also why we presently have no reason to think that a larger particle collider would produce so-far unknown particles.

In my book “Lost in Math: How Beauty Leads Physics Astray,” I use naturalness as an example of the unfounded beliefs that scientists adhere to. I chose naturalness because it’s timely, now that the LHC has ruled it out, but I could have used other examples.

A lot of physicists, for example, believe that experiments have ruled out hidden-variables explanations of quantum mechanics, which is just wrong (experiments have ruled out only certain types of local hidden-variables models). Or they believe that observations of the Bullet Cluster have ruled out modified gravity, which is similarly wrong (the Bullet Cluster is a statistical outlier that is hard to explain both with dark matter and with modified gravity). Yes, the devil’s in the details.

What’s remarkable about these cases isn’t that scientists make mistakes – everyone does – but that they insist on repeating wrong claims, in many cases publicly, even after you explain to them why they’re wrong. Examples like these leave me deeply frustrated because they demonstrate that even in science it’s seemingly impossible to correct mistakes once they have been adopted by sufficiently many practitioners. It’s this widespread usage that makes it “safe” for individuals to repeat statements they know are wrong, or at least do not know to be correct.

I think this highlights a serious problem with the current organization of academic research. That this can happen worries me considerably because I have no reason to think it’s confined to my own discipline.

Naturalness is an interesting case to keep an eye on. That’s because the LHC now has delivered data that shows the idea was wrong – none of the predictions for supersymmetric particles, or extra dimensions, or tiny black holes, and so on, came true. One possible way for particle physicists to deal with the situation is to amend criteria of naturalness so that they are no longer in conflict with data. I sincerely hope this is not the way it’ll go. The more enlightened way would be to find out just what went wrong.

That you can’t speak about probabilities without a probability distribution isn’t a particularly deep insight, but I’ve had a hard time getting particle physicists to acknowledge this. I summed up my arguments in my January paper, but I’ve been writing and talking about this for 10+ years without much resonance.

I was therefore excited to see that James Wells has a new paper on the arXiv. In his paper, Wells lays out the problems with the lacking probability distribution in several simple examples. And in contrast to me, Wells isn’t a nobody; he’s a well-known American particle physicist and professor at the University of Michigan.

So, now that a man has said it, I hope physicists will listen.

Aside: I continue to have technical trouble with the comments on this blog. Notification has not been working properly for several weeks, which is why I am approving comments with much delay and replying erratically. In the current arrangement, I can neither read the full comment before approving it, nor can I keep comments unread to remind myself to reply, as I did previously. Google says they’ll be fixing it, but I’m not sure what, if anything, they’re doing to make that happen.

Also, my institute wants me to move my publicly available files elsewhere because they are discontinuing the links that I have used so far. For this reason most images in the older blogposts have disappeared. I have to manually replace all these links which will take a while. I am very sorry for the resulting ugliness.

Saturday, June 23, 2018

Particle Physics now Belly Up

Particle physics. Artist’s impression.
Professor Ben Allanach is a particle physicist at Cambridge University. He just wrote an advertisement for my book that appeared on Aeon some days ago under the title “Going Nowhere Fast”.

I’m kidding of course; Allanach’s essay has no relation to my book. At least not that I know of. But it’s not a coincidence that he writes about the very problems that I also discuss in my book. After all, the whole reason I wrote the book was that this situation was foreseeable: The Large Hadron Collider hasn’t found evidence for any new particles besides the Higgs boson (at least not so far), so now particle physicists are at a loss for how to proceed. Even if they find something in the data that’s yet to come, it is already clear that their predictions were wrong.

Theory-development in particle physics for the last 40 years has worked mostly by what is known as “top-down” approaches. In these approaches you invent a new theory based on principles you cherish and then derive what you expect to see at particle colliders. This approach has worked badly, to say the least. The main problem, as I lay out in my book, is that the principles which physicists used to construct their theories are merely aesthetic requirements. Top-down approaches, for example, postulate that the fundamental forces are unified or that the universe has additional symmetries or that the parameters in the theory are “natural.” But none of these assumptions are necessary, they’re just pretty guesses.

The opposite to a top-down approach, as Allanach lays out, is a “bottom-up” approach. For that you begin with the theories you have confirmed already and add possible modifications. You do this so that the modifications only become relevant in situations that you have not yet tested. Then you look at the data to find out which modifications are promising because they improve the fit to the data. It’s an exceedingly unpopular approach because the data have just told us over and over and over again that the current theories are working fine and require no modification. Also, bottom-up approaches aren’t pretty which doesn’t help their popularity.

Allanach, like several other people I know, has stopped working on supersymmetry, an idea that has long been the most popular top-down approach. In principle it’s a good development that researchers in the field draw consequences from the data. But if they don’t try to understand just what went wrong – why so many theoretical physicists believed in ideas that do not describe reality – they risk repeating the same mistake. It’s of no use if they just exchange one criterion of beauty for another.

Bottom-up approaches are safely on the scientific side. But they also increase the risk that we get stuck with the development of new theories because without top-down approaches we do not know where to look for new data. That’s why I argue in my book that some mathematical principles for theory-development are okay to use, namely those which prevent internal contradictions. I know this sounds lame and rather obvious, but in fact it is an extremely strong requirement that, I believe, hasn’t been pushed as far as we could push it.

This top-down versus bottom-up discussion isn’t new. It has come up each time the supposed predictions for new particles turned out to be wrong. And each time the theorists in the field, rather than recognizing the error in their ways, merely adjusted their models to evade experimental bounds and continued as before. Will you let them get away with this once again?

Tuesday, June 19, 2018

Astrophysicists try to falsify multiverse, find they can’t.

Ben Carson, trying to
make sense of the multiverse.
The idea that we live in a multiverse – an infinite collection of universes of which ours is merely one – is interesting but unscientific. It postulates the existence of entities that are unnecessary to describe what we observe. All those other universes are inaccessible to experiment. Science, therefore, cannot say anything about their existence, neither whether they do exist nor whether they don’t.

The EAGLE collaboration now knows this too. They recently published results of a computer simulation that details how the formation of galaxies is affected when one changes the value of the cosmological constant, the constant which quantifies how fast the expansion of the universe accelerates. The idea is that, if you believe in the multiverse, then each simulation shows a different universe. And once you know which universes give rise to galaxies, you can calculate how likely we are to be in a universe that contains galaxies and also has the cosmological constant that we observe.

We already knew before the new EAGLE paper that not all values of the cosmological constant are compatible with our existence. If the magnitude of the cosmological constant is too large, the universe either collapses quickly after formation (if the constant is negative), so that galaxies never form, or it expands so quickly that structures are torn apart before galaxies can form (if the constant is positive).
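Both failure modes can be read off the standard acceleration equation of cosmology, in which the cosmological constant Λ enters with its sign:

```latex
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) + \frac{\Lambda}{3}
```

A sufficiently negative Λ makes the expansion decelerate and recollapse before galaxies can form; a sufficiently positive Λ accelerates the expansion so strongly that overdensities are pulled apart before they can collapse.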

New is that by using computer simulations, the EAGLE collaboration is able to quantify and also illustrate just how the structure formation differs with the cosmological constant.

The quick summary of their results is that if you turn up the cosmological constant and keep all other physics the same, then making galaxies becomes difficult once the cosmological constant exceeds about 100 times the measured value. The authors haven’t looked at negative values of the cosmological constant because (so they write) that would be difficult to include in their code.

The image below, from their simulation, shows an example of the gas density. On the left you see a galaxy prototype in a universe with zero cosmological constant. On the right, the cosmological constant is 30 times the measured value. In the right image, structures are smaller because the gas halos have difficulty growing in a rapidly expanding universe.

From Figure 7 of Barnes et al., MNRAS 477, 3, 3727–3743 (2018).

This, however, is just turning knobs on computer code, so what does this have to do with the multiverse? Nothing really. But it’s fun to see how the authors are trying really hard to make sense of the multiverse business.

A particular headache for multiverse arguments, for example, is that if you want to speak about the probability of an observer finding themselves in a particular part of the multiverse, you have to specify what counts as observer. The EAGLE collaboration explains:
“We might wonder whether any complex life form counts as an observer (an ant?), or whether we need to see evidence of communication (a dolphin?), or active observation of the universe at large (an astronomer?). Our model does not contain anything as detailed as ants, dolphins or astronomers, so we are unable to make such a fine distinction anyway.”
But even after settling the question whether dolphins merit observer-status, a multiverse per se doesn’t allow you to calculate the probability for finding this or that universe. For this you need additional information: a probability distribution or “measure” on the multiverse. And this is where the real problem begins. If the probability of finding yourself in a universe like ours is small you may think that disfavors the multiverse hypothesis. But it doesn’t: It merely disfavors the probability distribution, not the multiverse itself.

The EAGLE collaboration elaborates on the conundrum:
“What would it mean to apply two different measures to this model, to derive two different predictions? How could all the physical facts be the same, and yet the predictions of the model be different in the two cases? What is the measure about, if not the universe? Is it just our own subjective opinion? In that case, you can save yourself all the bother of calculating probabilities by having an opinion about your multiverse model directly.”
Indeed. You can even save yourself the bother of having a multiverse to begin with because it doesn’t explain any observation that a single universe wouldn’t also explain.

The authors eventually find that some probability distributions make our universe more, others less probable. Not that you need a computer cluster for that insight. Still, I guess we should applaud the EAGLE people for trying. In their paper, they conclude: “A specific multiverse model must justify its measure on its own terms, since the freedom to choose a measure is simply the freedom to choose predictions ad hoc.”

But of course a model can never justify itself. The only way to justify a physical model is that it fits observation. And if you make ad hoc choices to fit observations, you may as well just choose the cosmological constant to be what we observe and be done with it.

In summary, the paper finds that the multiverse hypothesis isn’t falsifiable. If you paid any attention to the multiverse debate, that’s hardly surprising, but it is interesting to see astrophysicists attempting to squeeze some science out of it.

I think the EAGLE study makes a useful contribution to the literature. Multiverse proponents have so far argued that what they do is science because some versions of the multiverse are testable in our universe, for example by searching for entanglement between universes, or for evidence that our universe has collided with another one in the past.

It is correct that some multiverse types are testable, but to the extent that they have been tested, they have been ruled out. This, of course, has not ruled out the multiverse per se, because there are still infinitely many types of multiverses left. For those, the only thing you can do is make probabilistic arguments. The EAGLE paper now highlights that these can’t be falsified either.

I hope that showcasing the practical problem, as the EAGLE paper does, will help clarify the unscientific basis of the multiverse hypothesis.

Let me be clear that the multiverse is a fringe idea in a small part of the physics community. Compared to the troubled scientific methodologies in some parts of particle physics and cosmology, multiverse madness is a minor pest. No, the major problem with the multiverse is its popularity outside of physics. Physicists from Brian Greene to Leonard Susskind to Andrei Linde have publicly spoken about the multiverse as if it were best scientific practice. And that well-known physicists pass the multiverse off as science isn’t merely annoying, it actively damages the reputation of science. A prominent example of the damage that can result comes from the 2015 Republican presidential candidate Ben Carson.

Carson is a retired neurosurgeon who doesn’t know much physics, but what he knows he seems to have learned from multiverse enthusiasts. On September 22, 2015, Carson gave a speech at a Baptist school in Ohio, informing his audience that “science is not always correct,” and then went on to justify his science skepticism by making fun of the multiverse:
“And then they go to the probability theory, and they say “but if there’s enough big bangs over a long enough period of time, one of them will be the perfect big bang and everything will be perfectly organized.””
In an earlier speech he cheerfully added: “I mean, you want to talk about fairy tales? This is amazing.”

Now, Carson has misunderstood much of elementary thermodynamics and cosmology, and I have no idea why he thinks he’s even qualified to give speeches about physics. But really this isn’t the point. I don’t expect neurosurgeons to be experts in the foundations of physics and I hope Carson’s audience doesn’t expect that either. Point is, he shows what happens when scientists mix fact with fiction: Non-experts throw out both together.

In his speech, Carson goes on: “I then say to them, look, I’m not going to criticize you. You have a lot more faith than I have… I give you credit for that. But I’m not going to denigrate you because of your faith and you shouldn’t denigrate me for mine.”

And I’m with him on that. No one should be denigrated for what they believe in. If you want to believe in the existence of infinitely many universes with infinitely many copies of yourself, that’s all fine with me. But please don’t pass it off as science.

If you want to know more about the conflation between faith and knowledge in theoretical physics, read my book “Lost in Math: How Beauty Leads Physics Astray.”

Friday, June 15, 2018

Physicists are very predictable

I have a new paper on the arXiv, which came out of a collaboration with Tobias Mistele and Tom Price. We fed a neural network with data about the publication activity of physicists and tried to make a “fake” prediction, for which we used data from the years 1996 up to 2008 to predict the next ten years. Data come from the arXiv via the Open Archives Initiative.

To train the network, we took a random sample of authors and asked the network to predict these authors’ publication data. In each cycle the network learned how good or bad its prediction was and then tried to further improve it.

Concretely, we trained the network to predict the h-index, a common measure of citation impact: a researcher has index h if h of their papers have each been cited at least h times. We didn’t use this number because we think it’s particularly important, but simply because other groups have previously studied it with neural networks in disciplines other than physics. Looking at the h-index therefore allowed us to compare our results with those of the other groups.
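Since the h-index is defined algorithmically, here is a minimal sketch of how it can be computed from a list of per-paper citation counts (illustrative code only, not the pipeline we actually used):

```python
def h_index(citations):
    """Largest h such that the author has h papers with at
    least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break
    return h

# Five papers with 10, 8, 5, 4, and 3 citations: four of them
# have at least 4 citations each, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```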

After completing the training, we asked how well the network could predict the citations accumulated by authors who were not in the training group. The common way to quantify the goodness of such a prediction is the coefficient of determination, R2. The higher the coefficient of determination, the stronger the correlation between the prediction and the actual number, hence the better the prediction. The figure below shows the result of our neural network, compared with some other predictors. As you can see, we did pretty well!

The blue (solid) curve labelled “Net” shows how good the prediction
of our network is for extrapolating the h-index over the number of years.
The other two curves use simpler predictors on the same data.

We found a coefficient of determination of 0.85 for a prediction over ten years. Earlier studies based on machine learning found 0.48 in the life sciences and 0.72 in the computer sciences.
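For readers who want to see it concretely, the coefficient of determination compares the squared prediction errors with the variance of the actual values. A minimal sketch with made-up numbers:

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 minus the ratio of residual
    variance (squared prediction errors) to total variance of the data."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# Made-up h-indices: a prediction close to the truth gives R^2 near 1.
actual = [10, 12, 15, 20, 30]
predicted = [11, 12, 14, 21, 28]
print(round(r_squared(actual, predicted), 2))  # → 0.97
```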

But admittedly the coefficient of determination doesn’t tell you all that much unless you’re a statistician. So for illustration, here are some example trajectories that show the network’s prediction compared with the actual trend (more examples in the paper).

However, that our prediction is better than the earlier ones is only partly due to our network’s performance. Turns out, our data are also intrinsically easier to predict, even with simple measures. You can for example just try to linearly extrapolate the h-index, and while that prediction isn’t as good as that of the network, it is still better than the prediction from the other disciplines. You see this in the figure I showed you above for the coefficient of determination. Used on the arXiv data even the simple predictors achieve something like 0.75.
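The linear baseline is about as simple as a predictor gets. Here is a sketch of what such an extrapolation looks like (the author history is invented for illustration; this is not our actual baseline code):

```python
import numpy as np

def linear_forecast(h_history, years_ahead):
    """Fit a straight line to an author's yearly h-index values and
    extrapolate it, clipped below at the last observed value since
    the h-index never decreases."""
    t = np.arange(len(h_history))
    slope, intercept = np.polyfit(t, h_history, 1)
    future_t = len(h_history) - 1 + years_ahead
    return max(h_history[-1], slope * future_t + intercept)

# Hypothetical author whose h-index grew steadily from 1996 to 2008:
history = [1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17, 19]
print(round(linear_forecast(history, 10), 1))  # → 33.8
```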

Why that is so, we don’t know. One possible reason could be that the sub-disciplines of physics are more compartmentalized and researchers often stay in the fields they started out in. Or, as Nima Arkani-Hamed put it when I interviewed him, “everybody does the analytic continuation of what they’ve been doing for their PhD.” (Srsly, the book is fun, you don’t want to miss it.) In this case you establish a reputation early on and your colleagues know what to expect from you. It seems plausible to me that in such highly specialized communities it would be easier to extrapolate citations than in more mixed-up communities. But really this is just speculation; the data don’t tell us that.

Having said this, by and large the network predictions are scarily good. And that’s even though our data is woefully incomplete. We cannot presently, for example, include any papers that are not on the arXiv. Now, in some categories, like hep-th, pretty much all papers are on the arXiv. But in other categories that isn’t the case. So we are simply missing information about what researchers are doing. We also have the usual problem of identifying authors by their names, and haven’t always been able to find the journal in which a paper was published.

Now, if you allow me to extrapolate the present situation, data will become better and more complete. Also the author-identification problem will, hopefully, be resolved at some point. And this means that the predictivity of neural networks chewing on this data is likely to increase some more.

Of course we did not actually make future predictions in the present paper, because in this case we wouldn’t have been able to quantify how good the prediction was. But we could now go and train the network with data up to 2018 and extrapolate up to 2028. And I predict it won’t be long until such extrapolations of scientists’ research careers will be used in hiring and funding decisions. Sounds scary?

Oh, I know, many of you are now dying to see the extrapolation of their own publishing history. I haven’t seen mine. (Really I haven’t. We treat the authors as anonymous numbers.) But (if I can get funding for it) we will make these predictions publicly available in the coming year. If we don’t, rest assured someone else will. And in this case it might end up being proprietary software.

My personal conclusion from this study is that it’s about time we think about how to deal with personalized predictors for research activity.

Tuesday, June 12, 2018

Lost in Math: Out Now.

Today is the official publication date of my book “Lost in Math: How Beauty Leads Physics Astray.” There’s an interview with me in the current issue of “Der Spiegel” (in German) with a fancy photo, and an excerpt at Scientific American.

In the book I don’t say much about myself or my own research. I felt that was both superfluous and not particularly interesting. However, a lot of people have asked me about a comment I made in passing in an earlier blogpost: “By writing [this book], I waived my hopes of ever getting tenure.” Even the Spiegel guy who interviewed me asked about this! So I feel I should add some explanation here to prevent misunderstandings. I hope you’ll excuse that this will be somewhat more personal than my usual blogposts.

I am not tenured and I do not have a tenure-track position, so it’s not as if someone threatened me. I presently have a temporary contract which will run out next year. What I should be doing right now is applying for faculty positions. Now imagine you work at some institution which has a group in my research area. Everyone is happily producing papers in record numbers, but I go around and say this is a waste of money. Would you give me a job? You probably wouldn’t. I probably wouldn’t give me a job either.

That’s what prompted my remark, and I think it is a realistic assessment. But please note that the context in which I wrote this wasn’t a sudden bout of self-pity, it was to report a decision I made years ago.

I know you only get to see the results now, but I sold the book proposal in 2015. In the years prior to this, I was shortlisted for some faculty positions. In the end that didn’t pan out, but the interviews prompted me to imagine the scenario in which I actually got the job. And if I was being honest to myself, I didn’t like the scenario.

I have never been an easy fit to academia. I guess I was hoping I’d grow into it, but with time my fit has only become more uneasy. At some point I simply concluded I have had enough of this nonsense. I don’t want to be associated with a community which wastes tax-money because its practitioners think they are morally and intellectually so superior that they cannot possibly be affected by cognitive biases. You only have to read the comments on this blog to witness the origin of the problem, as with commenters who work in the field laughing off the idea that their objectivity can possibly be affected by working in echo-chambers. I can’t even.

As to what I’ll do when my contract runs out, I don’t know. As everyone who has written popular science books will confirm, you don’t get rich from it. The contract with Basic Books would never have paid for the actual working time, and that was before my agent got his share and the tax took its bite. (And while I am already publicly answering questions about my income, I also frequently get asked how much “money” I make with the ads on this blog. It’s about one dollar a day. Really the ads are only there so I don’t feel like I am writing here entirely for nothing.)

What typically happens when I write about my job situation is that everyone offers me advice. This is very kind, but I assure you I am not writing this because I am asking for help. I will be fine, do not worry about me. Yes, I don’t know what I’ll do next year, but something will come to my mind.

What needs help isn’t me, but academia: The current organization amplifies rather than limits the pressure to work on popular and productive topics. If you want to be part of the solution, the best starting point is to read my book.

Thanks for listening. And if you still haven’t had enough of me, Tam Hunt has an interview with me up at Medium. You can leave comments on this interview here.

More info on the official book website:

Monday, June 11, 2018

Are the laws of nature beautiful? (2nd book trailer)

Here is the other video trailer for my book "Lost in Math: How Beauty Leads Physics Astray". 

Since I have been asked repeatedly, let me emphasize again that the book is aimed at non-experts, or "the interested lay-reader" as they say. You do not need to bring any background knowledge in math or physics and, no, there are no equations in the book, just a few numbers. It's really about the question of what we mean by "beautiful theories" and how that quest for beauty affects scientists' research interests. You will understand that without equations.

The book has meanwhile been read by several non-physicists and none of them reported great levels of frustration, so I have reason to think I roughly aimed at the right level of technical detail.

Having said that, the book should also be of interest for you if you are a physicist, not because I explain what the standard model is, but because you will get to hear what some top people in the field think about the current situation. (And I swear I was nice to them. My reputation is far worse than I.)

You can find a table of contents, a list of the people I interviewed, and transcripts of the video trailers on the book website.

Saturday, June 09, 2018

Video Trailer for "Lost in Math"

I’ve been told that one now does video trailers for books and so here’s me explaining what led me to write the book.

Friday, June 08, 2018

Science Magazine had my book reviewed, and it’s glorious, glorious.

Science Magazine had my book “Lost in Math” reviewed by Dr. Djuna Lize Croon, a postdoctoral associate at the Department of Physics at Dartmouth College, in New Hampshire, USA. Dr. Croon has worked on the very research topics that my book exposes as mathematical fantasies, such as “extra natural inflation” or “holographic composite Higgs models,” so choosing her to review my book is an interesting move for sure.

Dr. Croon does not disappoint. After just having read a whole book that explains how scientists fail to acknowledge that their opinions are influenced by the communities they are part of, Dr. Croon begins her review by quoting an anonymous Facebook friend who denigrates me as a blogger and tells Dr. Croon to dislike my book because I am not “offering solutions.” In her review, then, Dr. Croon reports being shocked to find that I disagree with her scientific heroes, dislikes that I put forward my own opinions, and then promptly arrives at the same conclusion that her Facebook friend kindly offered beforehand.

The complaint that I merely criticize my community without making recommendations for improvement is particularly interesting because, to begin with, it’s wrong. I spell out very clearly in the book that I think theoretical physicists in the foundations should focus on mathematical consistency and on making contact with experiment rather than blathering about beauty. I also say concretely which topics I think are most promising, though I warn the reader that of course I too am biased and they should come to their own conclusions.

Even leaving aside that I do offer recommendations for improvement, I don’t know why it’s my task to come up with something else for those people to do. If they can’t come up with something else themselves, maybe we should just stop throwing money at them?

On a more technical note, I find it depressing that Dr. Croon in her review writes that naturalness has “a statistical meaning” even though I explain like a dozen times throughout the book that you can’t speak of probabilities without having a probability distribution. We have only this set of laws of nature. We have no statistics from which we could infer a likelihood of getting exactly these laws.

In summary, this review does an awesome job highlighting the very problems that my book is about.

Update June 17th: Remarkably enough, the editors at Science decided to remove the Facebook quote from the review.

Physicist concludes there are no laws of physics.

It was June 4th, 2018, when Robbert Dijkgraaf, director of the world-renowned Institute for Advanced Study in Princeton, announced his breakthrough insight. After decades of investigating string theory, Dijkgraaf has concluded that there are no laws of physics.

Guess that’s it, then, folks. Don’t forget to turn off the light when you close the lab door for the last time.

Dijkgraaf knows what he is talking about. “Once you have understood the language of which [the cosmos] is written, it is extremely elegant, beautiful, powerful and natural,” he explained already in 2015, “The universe wants to be understood and that’s why we are all driven by this urge to find an all-encompassing theory.”

This urge has driven Dijkgraaf and many of his colleagues to pursue string theory, which they originally hoped would give rise to the unique theory of everything. That didn’t quite pan out though, and not surprisingly so: The idea of a unique theory is a non-starter. Whether a theory is unique or not depends on what you require of it. The more requirements, the better specified the theory. And whether a theory is the unique one that fulfils these or those requirements tells you nothing about whether it actually correctly describes nature.

But in the last two decades it has become popular in the foundations of physics to no longer require a theory to describe our universe. Without that requirement, then, theories contain an infinite number of universes that are collectively referred to as “the multiverse.” Theorists like this idea because having fewer requirements makes their theory simpler, and thus more beautiful. The resulting theory then uniquely explains nothing.

Of course if you have a theory with a multiverse and want to describe our universe, you have to add back the requirements you discarded earlier. That’s why no one who actually works with data ever starts with a multiverse – it’s utterly useless; Occam’s razor safely shaves it off. The multiverse doesn’t gain you anything, except possibly the ability to make headlines and speak of “paradigm changes.”

In string theory in particular, to describe our universe we’d need to specify just what happens with the additional dimensions of space that the theory needs. String theorists don’t like to make this specification because they don’t know how to make it. So instead they say that since string theory offers so many options for how to make a universe, one of them will probably fit the bill. And maybe one day they will find a meta-law that selects our universe.

Maybe they will. But until then the rest of us will work under the assumption that there are laws of physics. So far, it’s worked quite well, thank you very much.

If you want to know more about what bizarre ideas theoretical physicists have lately come up with, read my book “Lost in Math.”

Sunday, June 03, 2018

Book Update: Books are printed!

I had just returned from my trip to Dublin when the door rang and the UPS man dumped two big boxes on our doorstep. My husband has a habit of ordering books by the dozens, so my first thought was that this time he’d really outdone himself. Alas, the UPS guy pointed out, the boxes were addressed to me.

I signed, feeling guilty for having forgotten I ordered something from Lebanon, that being the origin of the parcels. But when I cut the tape and opened the boxes I found – drumroll please – 25 copies of “Lost in Math.” Turns out my publisher has their books printed in Lebanon.

I had gotten neither galleys nor review copies, so that was the first time I actually saw The-Damned-Book, as it’s been referred to in our household for the past three years. And The-Damned-Book is finally, FINALLY, a real book!

The cover looks much better in print than it does in the digital version because it has some glossy and some matte parts and, well, at least two seven-year-old girls agree that it’s a pretty book and also mommy’s name is on the cover and a mommy photo in the back, and that’s about as far as their interest went.

I’m so glad this is done. When I signed the contract in 2015, I had no idea how nerve-wracking it would be to wait for the publication. In hindsight, it was a totally nutty idea to base the whole premise of The-Damned-Book on the absence of progress in the foundations of physics when such progress could happen literally any day. For three years now I’ve been holding my breath every time there was a statistical fluctuation in the data.

But now – with little more than a week to go until publication – it seems exceedingly unlikely anything will change about the story I am telling: Fact is, theorists in the foundations of physics have been spectacularly unsuccessful with their predictions for more than 30 years now. (The neutrino-anomaly I recently wrote about wasn’t a prediction, so even if it holds up it’s not something you could credit theorists with.)

The story here isn’t that theorists have been unsuccessful per se, but that they’ve been unsuccessful and yet don’t change their methods. That’s especially perplexing if you know that these methods rely on arguments from beauty even though everyone agrees that beauty isn’t a scientific criterion. Parallels to the continued use of flawed statistical methods in psychology and the life sciences are obvious. There too, everyone kept using bad methods that were known to be bad, just because it was the state of the art. And that’s the real story here: Scientists get stuck on unsuccessful methods.

Some people have voiced their disapproval that I went and argued with some prominent people in the field without them knowing they’d end up in my book. First, I recommend you read the book before you disapprove of what you believe it contains. I think I have treated everyone politely and respectfully.

Second, it should go without saying but apparently doesn’t, that everyone who I interviewed signed an interview waiver, transferring all rights for everything they told me to my publisher in all translations and all formats, globally and for all eternity, Amen. They knew what they were being interviewed for. I’m not an undercover agent, and my opinions about arguments from beauty are no secret.

Furthermore, everyone I interviewed got to see and approved a transcript with the exact wording that appears in the book. Though I later removed some parts entirely because it was just too much material. (And no, I cannot reuse it elsewhere because that was indeed not what they agreed on.) I had to replace a few technical terms here or there that most readers wouldn’t have understood, but these instances are marked in the text.

So, I think I did my best to accurately represent their opinions, and if anyone comes off looking like an idiot it should be me.

Most importantly though, the very purpose of these interviews is to offer the reader a variety of viewpoints rather than merely my own. So of course I disagree with the people I spoke with here and there – because who’d read a dialogue in which two people constantly agree with each other?

In any case, everything’s been said and done and now I can only wait and hope. This isn’t a problem that physicists can solve themselves. The whole organization of academic research today acts against major changes in methodology because that would result in a sudden and drastic decrease of publishing activity. The only way I can see change come about is public pressure. We have had enough talk about elegant universes and beautiful theories.

If you still haven’t made up your mind whether to buy the book, we now have a website which contains a table of contents and links to reviews and such, and Amazon offers you can “Look Inside” the book. Two video trailers will be coming next week. Silicon Republic writes about the book here and Dan Falk has a piece at NBC titled “Why some scientists say physics has gone off the rails.”

Thursday, May 31, 2018

New results confirm old anomaly in neutrino data

The collaboration of a neutrino experiment called MiniBooNe just published their new results:
    Observation of a Significant Excess of Electron-Like Events in the MiniBooNE Short-Baseline Neutrino Experiment
    MiniBooNE Collaboration
    arXiv:1805.12028 [hep-ex]
It’s a rather unassuming paper, but it deserves a signal boost because for once we have an anomaly that did not vanish with further examination. Indeed, it actually increased in significance, now standing at a whopping 6.1σ.

MiniBooNE was designed to check the results of an earlier experiment called LSND, the Liquid Scintillator Neutrino Detector experiment that ran in the 1990s. The LSND results were famously incompatible with the results of the bulk of other neutrino experiments. So incompatible, indeed, that the LSND data are usually excluded from global fits – they just don’t fit.

All the other experimental data could be neatly fitted by assuming that the three known types of neutrinos “oscillate,” which means they change back and forth into each other as they travel. Problem is, a three-neutrino oscillation model has at most nine parameters: the masses, the mixing angles, and a few phases (how many phases depends on the type of neutrino). These parameters are not sufficient to also fit the LSND data.
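To see where those parameters enter, here is the textbook two-flavor approximation of the oscillation probability; the numerical values below are purely illustrative and not a fit to any experiment:

```python
import math

def oscillation_probability(theta, delta_m2, length_km, energy_gev):
    """Two-flavor appearance probability: sin^2(2*theta) sets the
    amplitude; the mass-squared difference (in eV^2) together with
    the baseline (km) and neutrino energy (GeV) sets the phase."""
    phase = 1.27 * delta_m2 * length_km / energy_gev
    return math.sin(2 * theta) ** 2 * math.sin(phase) ** 2

# Purely illustrative parameter values:
p = oscillation_probability(theta=0.01, delta_m2=1.0, length_km=0.5, energy_gev=0.8)
print(f"{p:.6f}")
```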

See figure below for the LSND trouble: The turquoise/yellow areas in the figure do not overlap with the red/blue ones.

[Figure: Hitoshi Murayama]

The new data from MiniBooNE now confirm that this tension in the data is real. It’s not a quirk of the LSND experiment. This data can (to my best knowledge) not be fitted with the standard framework of three types of neutrinos (one for the electron, one for the muon, and one for the tau). Fitting this data requires either new particles (sterile neutrinos) or some kind of symmetry violation, typically CPT violation or Lorentz-invariance violation (or both).

This is the key figure from the new paper. See how the new results agree with the earlier LSND results. And note the pale grey line indicating that this area is “ruled out” by other experiments (assuming the standard explanation).

[Figure 5 from arXiv:1805.12028]

So this is super-exciting news: An old anomaly was re-examined and confirmed! Now it’s time for theoretical physicists to come up with an explanation.

Monday, May 28, 2018

What do physicists mean when they say the laws of nature are beautiful?

[Image: “Monday Blues Chat” by Erin Photography. Simplicity in photographic art.]
In my upcoming book “Lost in Math: How Beauty Leads Physics Astray,” I explain what makes a theory beautiful in the eyes of a physicist and how beauty matters for their research. For this, I interviewed about a dozen theoretical physicists (full list here) and spoke to many more. I also read every book I could find on the topic, from Chandrasekhar’s “Truth and Beauty” to McAllister’s “Beauty and Revolution in Science” and Orrell’s “Truth or Beauty.”

Turns out theoretical physicists largely agree on what they mean by beauty, and it has the following three aspects:


A beautiful theory is simple, and it is simple if it can be derived from few assumptions. Currently common ways to increase simplicity in the foundations of physics are unifying different concepts and adding symmetries. To make a theory simpler, you can also remove axioms; this will eventually result in one or the other version of a multiverse.

Please note that the simplicity I am referring to here is absolute simplicity and has nothing to do with Occam’s razor, which merely tells you that from two theories that achieve the same you should pick the simpler one.


A beautiful theory is also natural, meaning it does not contain numbers without units that are either much larger or much smaller than 1. In physics-speak you’d say “dimensionless parameters are of order 1.” In high energy particle physics in particular, theorists use a relaxed version of naturalness called “technical naturalness” which says that small numbers are permitted if there is an explanation for their smallness. Symmetries, for example, can serve as such an explanation.

Note that in contrast to simplicity, naturalness is an assumption about the type of assumptions, not about the number of assumptions.


Elegance is the fuzziest aspect of beauty. It is often described as an element of surprise, the “aha-effect,” or the discovery of unexpected connections. One specific aspect of elegance is a theory’s resistance to change, often referred to as “rigidity” or (misleadingly, I think) as the ability of a theory to “explain itself.”

In no way do I mean to propose this as a definition of beauty; it is merely a summary of what physicists mean when they say a theory is beautiful. General relativity, string theory, grand unification, and supersymmetry score high on all three aspects of beauty. The standard model, modified gravity, or asymptotically safe gravity, not so much.

But while physicists largely agree on what they mean by beauty, in some cases they disagree on whether a theory fulfills the requirements. This is the case most prominently for quantum mechanics and the multiverse.

For quantum mechanics, the disagreement originates in the measurement axiom. On the one hand it’s a simple axiom. On the other hand, it covers up a mess, that being the problem of defining just what a measurement and a measurement apparatus are.

For the multiverse, the disagreement is over whether throwing out an assumption counts as a simplification if you have to add it again later because otherwise you cannot describe our observations.

If you want to know more about how arguments from beauty are used and abused in the foundations of physics, my book will be published on June 12th and then it’s all yours to peruse!

Wednesday, May 23, 2018

This is how I pray

I know, you have all missed my awesome chord progressions, but do not despair, I have relief for your bored ears.

I am finally reasonably happy with the vocal recordings, if not with the processing, or the singing, or the lyrics. But I'm working on that. The video was painful, both figuratively and literally; I am clearly too old to crouch on the floor for extended periods and had to hobble up stairs for days after the filming.

The number one comment I get on my music videos is, sadly enough, not "wow, you are so amazingly gifted" but "where do you find the time?". To which the answer is "I don't". I did the filming for this video on Christmas, the audio on Easter, and finished on Pentecost. If it wasn't for those Christian holidays, I'd never get anything done. So, thank God, now you know how I pray.

Friday, May 18, 2018

The Overproduction Crisis in Physics and Why You Should Care About It

[Image: Everett Collection]
In the years of World War II, the National Socialists murdered about six million Jews. The events did not have a single cause, but most historians agree that a major reason was social reinforcement, more widely known as “group think.”

The Germans who went along with the Nazis’ organized mass murder were not psychopaths. By all accounts they were “normal people.” They actively or passively supported genocide because at the time it was a socially acceptable position; everyone else did it too. And they did not personally feel responsible for the evils of the system. It eased their mind that some scientists claimed it was only rational to prevent supposedly genetically inferior humans from reproducing. And they hesitated to voice disagreement because those who opposed the Nazis risked retaliation.

It’s comforting to think that was Then and There, not Now and Here. But group-think isn’t a story of the past; it still happens and it still has devastating consequences. Take for example the 2008 mortgage crisis.

Again, many factors played together in the crisis’ buildup. But oiling the machinery were bankers who approved loans to applicants who likely couldn’t pay the money back. It wasn’t that the bankers didn’t know the risk; they thought it was acceptable because everyone else was doing it too. And anyone who didn’t play along would have put themselves at a disadvantage, by missing out on profits or by getting fired.

A vivid account comes from an anonymous Wall Street banker quoted in a 2008 NPR broadcast:
“We are telling you to lie to us. We’re hoping you don't lie. Tell us what you make, tell us what you have in the bank, but we won’t verify. We’re setting you up to lie. Something about that feels very wrong. It felt wrong way back then and I wish we had never done it. Unfortunately, what happened ... we did it because everyone else was doing it.”
When the mortgage bubble burst, banks defaulted by the hundreds. In the following years, millions of people would lose their jobs in what many economists consider the worst financial crisis since the Great Depression of the 1930s.

It’s not just “them” it’s “us” too. Science isn’t immune to group-think. On the contrary: Scientific communities are ideal breeding ground for social reinforcement.

Research is currently organized in a way that amplifies, rather than alleviates, peer pressure: Measuring scientific success by the number of citations encourages scientists to work on what their colleagues approve of. Since the same colleagues are the ones who judge what is and isn’t sound science, there is safety in numbers. And everyone who does not play along risks losing funding.

As a result, scientific communities have become echo-chambers of likeminded people who, maybe not deliberately but effectively, punish dissidents. And scientists don’t feel responsible for the evils of the system. Why would they? They just do what everyone else is also doing.

The reproducibility crisis in psychology and in biomedicine is one of the consequences. In both fields, an overreliance on data of low statistical significance and improper methods of data analysis (“p-value hacking”) have become common. That these statistical methods are unreliable has been known for a long time. As Jessica Utts, President of the American Statistical Association, pointed out in 2016, “statisticians and other scientists have been writing on the topic for decades.”

So why then did researchers in psychology continue using flawed methods? Because everyone else did it. It was what they were taught; it was generally accepted procedure. And psychologists who insisted on stricter methods of analysis would have put themselves at a disadvantage: They’d have gotten fewer results with more effort. Of course they didn’t go the extra mile.

The same problem underlies an entirely different popular-but-flawed scientific procedure: “mouse models,” ie using mice to test the efficacy of drugs and medical procedures.

How often have you read that Alzheimer’s or cancer has been cured in mice? Right – it’s been done hundreds of times. But humans aren’t mice, and it’s no secret that mouse results – while not uninteresting – often don’t transfer to humans. Scientists only partly understand why, but that mouse models are of limited use for developing treatments for humans isn’t controversial.

So why do researchers continue to use them anyway? Because it’s easy and cheap and everyone else does it too. As Richard Harris put it in his book Rigor Mortis: “One reason everybody uses mice: everybody else uses mice.”

It happens here in the foundations of physics too.

In my community, it has become common to justify the publication of new theories by claiming the theories are falsifiable. But falsifiability is a weak criterion for a scientific hypothesis. It’s necessary, but certainly not sufficient, for many hypotheses are falsifiable yet almost certainly wrong. Example: It will rain peas tomorrow. Totally falsifiable. Also totally nonsense.

Of course this isn’t news. Philosophers have gone on about this for at least half a century. So why do physicists do it? Because it’s easy and because all their colleagues do it. And since they all do it, theories produced by such methods will usually get published, which officially marks them as “good science”.

In the foundations of physics, the appeal to falsifiability isn’t the only flawed method that everyone uses because everyone else does. There are also those theories which are plainly unfalsifiable. And another example are arguments from beauty.

In hindsight it seems perplexing, to say the least, but physicists published tens of thousands of papers with predictions for new particles at the Large Hadron Collider because they believed that the underlying theory must be “natural”. None of those particles were found.

Similar arguments underlie the belief that the fundamental forces should be unified because that’s prettier (no evidence for unification has been found) or that we should be able to measure the particles that make up dark matter (we haven’t). Maybe most tellingly, physicists in these communities refuse to consider the possibility that their opinions are affected by the opinions of their peers.

One way to address the current crises in scientific communities is to impose tighter controls on scientific standards. That’s what is happening in psychology right now, and I hope it’ll also happen in the foundations of physics soon. But this is curing the symptoms, not the disease. The disease is a lack of awareness of how we are affected by the opinions of those around us.

The problem will reappear until everyone understands the circumstances that benefit group-think and learns to recognize the warning signs: People excusing what they do by saying everyone else does it too. People refusing to take responsibility for what they think are “evils of the system.” People unwilling to even consider that they are influenced by the opinions of others. We have all the warning signs in science – have had them for decades.

Accusing scientists of group-think is standard practice of science deniers. The tragedy is, there’s truth in what they say. And it’s no secret: The problem is easy to see for everyone who has the guts to look. Sweeping the problem under the rug will only further erode trust in science.

Read all about the overproduction crisis in the foundations of physics and what you – yes you! – can do to help in my book “Lost in Math,” out June 12, 2018.

Tuesday, May 15, 2018

Measuring Scientific Broadness

I have a new paper out today and it wouldn’t have happened without this blog.

A year ago, I wrote a blogpost declaring that “academia is fucked up,” to quote myself because my words are the best words. In that blogpost, I made some suggestions for how to improve the situation, for example by offering ways to quantify scientific activity other than counting papers and citations.

But ranting on a blog is like screaming in the desert: when the dust settles you’re still in the desert. At least that’s been my experience.

Not so this time! In the week following the blogpost, three guys wrote to me expressing their interest in working on what I suggested. One of them I never heard of again. The other two didn’t get along and one of them eventually dropped out. My hopes sank.

But then I got a small grant from the Foundational Questions Institute and was able to replace the drop-out with someone else. So now we were three again. And I could actually pay the other two, which probably helped keep them interested.

One of the guys is Tom Price. I’ve never met him, but – believe it or not – we now have a paper on the arXiv.
    Measuring Scientific Broadness
    Tom Price, Sabine Hossenfelder

    Who has not read letters of recommendation that comment on a student's `broadness' and wondered what to make of it? We here propose a way to quantify scientific broadness by a semantic analysis of researchers' publications. We apply our methods to papers on the open-access server arXiv and report our findings.

    arXiv:1805.04647 [physics.soc-ph]

In the paper we propose a way to measure how broad or specialized a scientist’s research interests are. We quantify this by analyzing the words they use in the titles and abstracts of their arXiv papers.
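The paper compares several such measures. As a toy illustration only – this is not the definition used in the paper – one could score broadness by the Shannon entropy of an author’s keyword distribution: a specialist who recycles the same keywords gets a low score, a generalist a high one. The function name and data format below are my assumptions.

```python
from collections import Counter
from math import log

def broadness(keyword_lists):
    """Toy broadness score: Shannon entropy of an author's keyword distribution.

    keyword_lists: one list of keywords per paper (from titles/abstracts).
    A narrow author reuses the same keywords, giving low entropy; a broad
    author spreads probability over many keywords, giving high entropy."""
    counts = Counter(kw for paper in keyword_lists for kw in paper)
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())

specialist = [["dark matter", "halo"], ["dark matter", "halo"], ["dark matter", "halo"]]
generalist = [["dark matter", "halo"], ["quantum gravity", "strings"], ["networks", "chaos"]]
assert broadness(generalist) > broadness(specialist)
```

Any entropy-like measure of this kind is insensitive to how many papers an author has written and only depends on how the keywords are distributed over them.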

Tom tried several ways to quantify the distribution of keywords, and so our paper contains four different measures for broadness. We eventually picked one for the main text, but checked that the other ones give largely comparable results. In the paper, we report the results of various analyses of the arXiv data. For example, here is the distribution of broadness over authors:

It’s a near-perfect normal distribution!

I should add that you get this distribution only after removing collaborations from the sample, which we have done by excluding all authors with the word “collaboration” in the name and all papers with more than 30 authors. If you don’t do this, the distribution has a peak at small broadness.
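For readers who want to reproduce this filtering step, here is a minimal sketch of the rule just described. The dictionary format for papers is my assumption, not the format of the actual analysis code.

```python
def filter_collaborations(papers, max_authors=30):
    """Drop collaboration papers following the rule described above:
    exclude papers with more than `max_authors` authors, and exclude any
    author whose name contains the word 'collaboration'.
    Each paper is assumed to be a dict with an 'authors' list (hypothetical format)."""
    kept = []
    for paper in papers:
        authors = [a for a in paper["authors"] if "collaboration" not in a.lower()]
        if 0 < len(authors) <= max_authors:
            kept.append({**paper, "authors": authors})
    return kept

papers = [
    {"title": "A", "authors": ["ATLAS Collaboration"]},
    {"title": "B", "authors": ["A. Author", "B. Author"]},
    {"title": "C", "authors": [f"Author {i}" for i in range(40)]},
]
print([p["title"] for p in filter_collaborations(papers)])  # ['B']
```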

We also looked at the average broadness of authors in different arXiv categories, where we associate an author with a category if it’s the primary category for at least 60% of their papers. By that criterion, we find physics.plasm-ph has the highest broadness and astro-ph.GA the lowest one. However, we have only ranked categories with more than 100 associated authors to get sensible statistics. In this ranking, therefore, some categories don’t even appear.
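In code, the association rule reads roughly as follows – a sketch under the stated 60% threshold; the function name and input format are hypothetical:

```python
from collections import Counter

def author_category(primary_categories, threshold=0.6):
    """Associate an author with an arXiv category if it is the primary
    category of at least `threshold` (here 60%) of their papers;
    return None if no category clears the threshold."""
    counts = Counter(primary_categories)
    category, n = counts.most_common(1)[0]
    return category if n / len(primary_categories) >= threshold else None

print(author_category(["astro-ph.GA"] * 7 + ["hep-th"] * 3))  # astro-ph.GA
print(author_category(["astro-ph.GA"] * 5 + ["hep-th"] * 5))  # None
```

Note that by this rule many authors end up associated with no category at all, which is one reason the per-author and per-paper rankings can differ.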

That’s why we then also looked at the average broadness associated with papers (rather than authors) that have a certain category as the primary one. This brings physics.pop-ph to the top, while astro-ph.GA stays on the bottom.

That astrophysics is highly specialized also shows up in our list of keywords, where phrases like “star-forming” or “stellar mass” are among those with the lowest broadness. On the other hand, the keywords “agents,” “chaos,” “network,” or “fractal” have high values of broadness. You can find the broadest and the most specialized keywords in the table below; see the paper for the full list.

We also compared the average broadness associated with authors who have listed affiliations in certain countries. The top scores of broadness go to Israel, Austria, and China. The correlation between the h-index and broadness is weak. Neither did we find a correlation between broadness and gender (where we inferred gender from common first names). Broadness also isn’t correlated with the Nature Index, which is a measure of a country’s research productivity.

A correlation we did find though is that researchers whose careers suddenly end, in the sense that their publishing activity discontinues, are more likely to have a low value of broadness. Note that this doesn’t necessarily say much about the benefits of some research styles over others. It might be, for example, that research areas with high competition and few positions are more likely to also be specialized.

Let me be clear that it isn’t our intention to declare that the higher the broadness the better. Indeed, there might well be cases when broadness is distinctly not what you want. Depending on which position you want to fill or which research program you have announced, you may want candidates who are specialized in a particular topic. Offering a measure for broadness, so we hope, is a small step toward honoring the large variety of ways to excel in science.

I want to add that Tom did the bulk of the work on this paper, while my contributions were rather minor. We have another paper coming up in the next weeks (or so I hope), and we are also working on a website where everyone will be able to determine their own broadness value. So stay tuned!

Friday, May 11, 2018

Dear Dr B: Should I study string theory?

[Image: Strings.]
“Greetings Dr. Hossenfelder!

I am a Princeton physics major who regularly reads your wonderful blog.

I recently came across a curious passage in Brian Greene’s introduction to a reprint edition of Einstein's Meaning of Relativity which claims that:
“Superstring theory successfully merges general relativity and quantum mechanics [...] Moreover, not only does superstring theory merge general relativity with quantum mechanics, but it also has the capacity to embrace — on an equal footing — the electromagnetic force, the weak force, and the strong force. Within superstring theory, each of these forces is simply associated with a different vibrational pattern of a string. And so, like a guitar chord composed of four different notes, the four forces of nature are united within the music of superstring theory. What’s more, the same goes for all of matter as well. The electron, the quarks, the neutrinos, and all other particles are also described in superstring theory as strings undergoing different vibrational patterns. Thus, all matter and all forces are brought together under the same rubric of vibrating strings — and that’s about as unified as a unified theory could be.”
Is all this true? Part of the reason I am asking is that I am thinking about pursuing String Theory, but it has been somewhat difficult wrapping my head around its current status. Does string theory accomplish all of the above?

Thank you!

An Anonymous Princeton Physics Major”

Dear Anonymous,

Yes, it is true that superstring theory merges general relativity and quantum mechanics. Is it successful? Depends on what you mean by success.

Greene states very carefully that superstring theory “has the capacity to embrace” gravity as well as the other known fundamental forces (electromagnetic, weak, and strong). What he means is that most string theorists currently believe there exists a specific model for superstring theory which gives rise to these four forces. The vague phrase “has the capacity” is an expression of this shared belief; it glosses over the fact that no one has been able to find a model that actually does what Greene says.

Superstring theory also comes with many side-effects which all too often go unnoticed. To begin with, the “super” isn’t there to emphasize the theory is awesome, but to indicate it’s supersymmetric. Supersymmetry, to remind you, is a symmetry that postulates all particles of the standard model have a partner particle. These partner particles were not found. This doesn’t rule out supersymmetry because the particles might only be produced at energies higher than what we have tested. But it does mean we have no evidence that supersymmetry is realized in nature.

Worse, if you make the standard model supersymmetric, the resulting theory conflicts with experiment. The reason is that doing so enables flavor changing neutral currents which have not been seen. This became clear in the mid-1990s, sufficiently long ago that it’s now one of the “well known problems” that nobody ever mentions. To save both supersymmetry and superstrings, theorists postulated an additional symmetry, called “R-parity,” that simply forbids the worrisome processes.

Another side-effect of superstrings is that they require additional dimensions of space, nine in total. Since we haven’t seen more than the usual three, the other six have to be rolled up or “compactified” as the terminology has it. There are many ways to do this compactification and that’s what eventually gives rise to the “landscape” of string theory: The vast number of different theories that supposedly all exist somewhere in the multiverse.

The problems don’t stop there. Superstring theory does contain gravity, yes, but not the normal type of gravity. It is gravity plus a large number of additional fields, the so-called moduli fields. These fields are potentially observable, but we haven’t seen them. Hence, if you want to continue believing in superstrings you have to prevent these fields from making trouble. There are ways to do that, and that adds a further layer of complexity.

Then there’s the issue with the cosmological constant. Superstring theory works best in a space-time with a cosmological constant that is negative, the so-called “Anti de Sitter spaces.” Unfortunately, we don’t live in such a space. For all we presently know the cosmological constant in our universe is positive. When astrophysicists measured the cosmological constant and found it to be positive, string theorists cooked up another fix for their theory to get the right sign. Even among string-theorists this fix isn’t popular, and in any case it’s yet another ad-hoc construction that must be added to make the theory work.

Finally, there is the question of how much the requirement of mathematical consistency can possibly tell you about the real world to begin with. Even if superstring theory is a way to unify general relativity and quantum mechanics, it’s not the only way, and without experimental tests we won’t know which one is the right way. Currently the best developed competing approach is asymptotically safe gravity, which requires neither supersymmetry nor extra dimensions.

Leaving aside the question whether superstring theory is the right way to combine the known fundamental forces, the approach may have other uses. The theory of strings has many mathematical ties with the quantum field theories of the standard model, and some think that the gauge-gravity correspondence may have applications in condensed matter physics. However, the dosage of string theory in these applications is homeopathic at best.

This is a quick overview. If you want more details, a good starting point is Joseph Conlon’s book “Why String Theory?” On a more general level, I hope you’ll excuse me if I mention that the question of what makes a theory promising is the running theme of my upcoming book “Lost in Math.” In the book I go through the pros and cons of string theory and supersymmetry and the multiverse, and also discuss the relevance of arguments from mathematical consistency.

Thanks for an interesting question!

With best wishes for your future research,