Thursday, June 28, 2018

How nature became unnatural

Naturalness is an old idea; it dates back at least to the 16th century and captures the intuition that a useful explanation shouldn’t rely on improbable coincidences. Typical examples of such coincidences, often referred to as “conspiracies,” are two seemingly independent parameters that almost cancel each other, or an extremely small yet nonzero number. Physicists believe that theories which do not have such coincidences, and are natural in this particular sense, are more promising than theories that are unnatural.

Naturalness has its roots in human experience. If you go for a walk and encounter a delicately balanced stack of stones, you conclude someone constructed it. This conclusion is based on your knowledge that stones distributed throughout landscapes by erosion, weathering, deposition, and other geological processes aren’t likely to end up in neat piles. You know this quite reliably because you have seen a lot of stones, meaning you have statistics from which you can extract a likelihood.

As the example hopefully illustrates, naturalness is a good criterion in certain circumstances, namely when you have statistics, or at least means to derive statistics. A solar system with ten planets in almost the same orbit is unlikely. A solar system with ten planets in almost the same plane isn’t. We know this both because we’ve observed a lot of solar systems, and also because we can derive their likely distribution using the laws of nature discovered so far, and initial conditions that we can extract from yet other observations. So that’s a case where you can use arguments from naturalness.

But this isn’t how arguments from naturalness are used in theory-development today. In high energy physics and some parts of cosmology, physicists use naturalness to select a theory for which they do not have – indeed cannot ever have – statistical distributions. The trouble is that they ask which values of parameters in a theory are natural. But since we can observe only one set of parameters – the one that describes our universe – we have no way of collecting data for the likelihood of getting a specific set of parameters.

Physicists use criteria from naturalness anyway. In such arguments, the probability distribution is unspecified, but often implicitly assumed to be almost uniform over an interval of size one. There is, however, no way to justify this distribution; it is hence an unscientific assumption. This problem was already made clear in a 1994 paper by Anderson and Castano.
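
For illustration, here is a little toy calculation in Python (my own example, not from the Anderson-Castano paper) that shows how the verdict “this parameter is improbably small” depends entirely on the distribution you assume for it – the very thing that no observation can pin down:

import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
threshold = 1e-5  # call a parameter "unnaturally small" below this value

# Assumption 1: the parameter is drawn uniformly from the interval [0, 1]
flat = rng.uniform(0.0, 1.0, N)

# Assumption 2: the parameter is drawn log-uniformly between 1e-10 and 1
logflat = 10.0 ** rng.uniform(-10.0, 0.0, N)

print("P(x < 1e-5), uniform prior:    ", np.mean(flat < threshold))    # about 1e-5
print("P(x < 1e-5), log-uniform prior:", np.mean(logflat < threshold)) # about 0.5

Same parameter, same threshold, but whether it counts as a one-in-a-hundred-thousand coincidence or a coin flip is decided by an assumption, not by data.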

The Standard Model of particle physics, or, more specifically, the mass of the Higgs boson, is unnatural in the way described above, and this is currently considered ugly. This is why theorists invented new theories to extend the Standard Model so that naturalness would be reestablished. The most popular way to do this is by making the Standard Model supersymmetric, thereby adding a bunch of new particles.

The Large Hadron Collider (LHC), like several previous experiments, has not found any evidence for supersymmetric particles. This means that according to the currently used criterion of naturalness, the theories of particle physics are, in fact, unnatural. That’s also why we presently do not have reason to think that a larger particle collider would produce so-far unknown particles.

In my book “Lost in Math: How Beauty Leads Physics Astray,” I use naturalness as an example of the unfounded beliefs that scientists adhere to. I chose naturalness because it’s timely, what with the LHC data now ruling it out, but I could have used other examples.

A lot of physicists, for example, believe that experiments have ruled out hidden-variables explanations of quantum mechanics, which is just wrong (experiments have ruled out only certain types of local hidden-variables models). Or they believe that observations of the Bullet Cluster have ruled out modified gravity, which is similarly wrong (the Bullet Cluster is a statistical outlier that is hard to explain both with dark matter and with modified gravity). Yes, the devil’s in the details.

What is remarkable about these cases isn’t that scientists make mistakes – everyone does – but that they insist on repeating wrong claims, in many cases publicly, even after you have explained to them why they’re wrong. These and other examples like them leave me deeply frustrated because they demonstrate that even in science it’s seemingly impossible to correct mistakes once they have been adopted by sufficiently many practitioners. It’s this widespread usage that makes it “safe” for individuals to repeat statements they know are wrong, or at least do not know to be correct.

I think this highlights a serious problem with the current organization of academic research. That this can happen worries me considerably because I have no reason to think it’s confined to my own discipline.

Naturalness is an interesting case to keep an eye on. That’s because the LHC now has delivered data that shows the idea was wrong – none of the predictions for supersymmetric particles, or extra dimensions, or tiny black holes, and so on, came true. One possible way for particle physicists to deal with the situation is to amend criteria of naturalness so that they are no longer in conflict with data. I sincerely hope this is not the way it’ll go. The more enlightened way would be to find out just what went wrong.

That you can’t speak about probabilities without a probability distribution isn’t a particularly deep insight, but I’ve had a hard time getting particle physicists to acknowledge this. I summed up my arguments in my January paper, but I’ve been writing and talking about this for 10+ years without much response.

I was therefore excited to see that James Wells has a new paper on the arXiv. In his paper, Wells lays out the problems with the missing probability distribution in several simple examples. And in contrast to me, Wells isn’t a nobody; he’s a well-known American particle physicist and a professor at the University of Michigan.

So, now that a man has said it, I hope physicists will listen.



Aside: I continue to have technical troubles with the comments on this blog. Notification has not been working properly for several weeks, which is why I am approving comments with much delay and replying erratically. In the current arrangement, I can neither read the full comment before approving it, nor can I keep comments unread so as to remind myself to reply, which is what I did previously. Google says they’ll be fixing it, but I’m not sure what, if anything, they’re doing to make that happen.

Also, my institute wants me to move my publicly available files elsewhere because they are discontinuing the links that I have used so far. For this reason most images in the older blogposts have disappeared. I have to replace all these links manually, which will take a while. I am very sorry for the resulting ugliness.

Saturday, June 23, 2018

Particle Physics now Belly Up

Particle physics. Artist’s impression.
Professor Ben Allanach is a particle physicist at Cambridge University. He just wrote an advertisement for my book that appeared on Aeon some days ago under the title “Going Nowhere Fast”.

I’m kidding, of course; Allanach’s essay has no relation to my book. At least not that I know of. But it’s not a coincidence that he writes about the very problems that I also discuss in my book. After all, the whole reason I wrote the book was that this situation was foreseeable: The Large Hadron Collider hasn’t found evidence for any new particles besides the Higgs boson (at least not so far), so now particle physicists are at a loss for how to proceed. Even if they find something in the data that’s yet to come, it is already clear that their predictions were wrong.

Theory-development in particle physics for the last 40 years has worked mostly by what is known as “top-down” approaches. In these approaches you invent a new theory based on principles you cherish and then derive what you expect to see at particle colliders. This approach has worked badly, to say the least. The main problem, as I lay out in my book, is that the principles which physicists used to construct their theories are merely aesthetic requirements. Top-down approaches, for example, postulate that the fundamental forces are unified or that the universe has additional symmetries or that the parameters in the theory are “natural.” But none of these assumptions are necessary, they’re just pretty guesses.

The opposite to a top-down approach, as Allanach lays out, is a “bottom-up” approach. For that you begin with the theories you have confirmed already and add possible modifications. You do this so that the modifications only become relevant in situations that you have not yet tested. Then you look at the data to find out which modifications are promising because they improve the fit to the data. It’s an exceedingly unpopular approach because the data have just told us over and over and over again that the current theories are working fine and require no modification. Also, bottom-up approaches aren’t pretty which doesn’t help their popularity.
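
To see the bottom-up logic in its most stripped-down form, here is a toy sketch in Python (my own cartoon, not a real analysis): take a prediction you trust, add a correction term that only becomes relevant at energies you haven’t probed well, and let the data constrain its coefficient:

import numpy as np
from scipy.optimize import curve_fit

def observable(E, c):
    # known prediction (here simply 1/E^2) plus a hypothetical correction whose
    # relative size grows with energy; c = 0 means "no new physics"
    return (1.0 / E**2) * (1.0 + c * E**2)

rng = np.random.default_rng(7)
E = np.linspace(0.5, 2.0, 40)                       # toy energy range, arbitrary units
sigma = 0.02
data = 1.0 / E**2 + rng.normal(0.0, sigma, E.size)  # fake measurements without new physics

c_best, c_cov = curve_fit(observable, E, data, p0=[0.0], sigma=np.full(E.size, sigma))
print(f"best-fit c = {c_best[0]:.3f} +/- {np.sqrt(c_cov[0, 0]):.3f}")

More often than not the coefficient comes out compatible with zero, which is the data’s way of telling you, once again, that the theory you already have works fine.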

Allanach, like several other people I know, has stopped working on supersymmetry, an idea that has long been the most popular top-down approach. In principle it’s a good development that researchers in the field draw consequences from the data. But if they don’t try to understand just what went wrong – why so many theoretical physicists believed in ideas that do not describe reality – they risk repeating the same mistake. It’s of no use if they just exchange one criterion of beauty for another.

Bottom-up approaches are safely on the scientific side. But they also increase the risk that we get stuck with the development of new theories because without top-down approaches we do not know where to look for new data. That’s why I argue in my book that some mathematical principles for theory-development are okay to use, namely those which prevent internal contradictions. I know this sounds lame and rather obvious, but in fact it is an extremely strong requirement that, I believe, hasn’t been pushed as far as we could push it.

This top-down versus bottom-up discussion isn’t new. It has come up each time the supposed predictions for new particles turned out to be wrong. And each time the theorists in the field, rather than recognizing the error in their ways, merely adjusted their models to evade experimental bounds and continued as before. Will you let them get away with this once again?

Tuesday, June 19, 2018

Astrophysicists try to falsify multiverse, find they can’t.

Ben Carson, trying to make sense of the multiverse.
The idea that we live in a multiverse – an infinite collection of universes from which ours is merely one – is interesting but unscientific. It postulates the existence of entities that are unnecessary to describe what we observe. All those other universes are inaccessible to experiment. Science, therefore, cannot say anything about their existence, neither whether they do exist nor whether they don’t exist.

The EAGLE collaboration now knows this too. They recently published results of a computer simulation that details how the formation of galaxies is affected when one changes the value of the cosmological constant, the constant which quantifies how fast the expansion of the universe accelerates. The idea is that, if you believe in the multiverse, then each simulation shows a different universe. And once you know which universes give rise to galaxies, you can calculate how likely we are to be in a universe that contains galaxies and also has the cosmological constant that we observe.

We already knew before the new EAGLE paper that not all values of the cosmological constant are compatible with our existence. If the cosmological constant is too large in magnitude, galaxies never form: a negative constant of large magnitude makes the universe recollapse quickly after its formation, while a large positive constant makes it expand so quickly that structures are torn apart before galaxies can form.

What’s new is that, by using computer simulations, the EAGLE collaboration is able to quantify and also illustrate just how structure formation differs with the cosmological constant.

The quick summary of their results is that if you turn up the cosmological constant and keep all other physics the same, then making galaxies becomes difficult once the cosmological constant exceeds about 100 times the measured value. The authors haven’t looked at negative values of the cosmological constant because (so they write) that would be difficult to include in their code.
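
You don’t need a full hydrodynamic simulation to see the trend. The textbook equation for the linear growth of matter perturbations already shows how a larger cosmological constant chokes off structure formation. Here is a minimal numerical sketch (my own illustration, not the EAGLE code), with densities measured in units of today’s matter density:

import numpy as np
from scipy.integrate import solve_ivp

def growth_factor_today(lam, a_init=1e-3):
    # Linear growth factor D at scale factor a = 1, for an expansion rate
    # E(a)^2 = a^-3 + lam, where lam = rho_Lambda / rho_matter(today).
    def rhs(a, y):
        D, dD = y
        E2 = a**-3 + lam
        dlnE_da = -1.5 * a**-4 / E2
        d2D = -(3.0 / a + dlnE_da) * dD + 1.5 * D / (a**5 * E2)
        return [dD, d2D]
    # deep in matter domination D grows like a, so start with D = a, dD/da = 1
    sol = solve_ivp(rhs, (a_init, 1.0), [a_init, 1.0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

baseline = growth_factor_today(lam=0.0)
for factor in (1, 10, 30, 100):
    lam = 2.3 * factor  # 2.3 is roughly rho_Lambda / rho_matter in our universe
    print(f"Lambda = {factor:>3} x observed: growth reaches "
          f"{growth_factor_today(lam) / baseline:.2f} of the Lambda = 0 value")

Once the constant gets to a hundred times the observed value, linear growth freezes out early – qualitatively the same behavior you see in the image below, though the simulations of course contain vastly more physics.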

The image below, from their simulation, shows an example of the gas density. On the left you see a galaxy prototype in a universe with zero cosmological constant. On the right the cosmological constant is 30 times the measured value. In the right image structures are smaller because the gas halos have difficulty growing in a rapidly expanding universe.

From Figure 7 of Barnes et al., MNRAS 477(3), 3727–3743 (2018).

This, however, is just turning knobs on computer code, so what does this have to do with the multiverse? Nothing really. But it’s fun to see how the authors are trying really hard to make sense of the multiverse business.

A particular headache for multiverse arguments, for example, is that if you want to speak about the probability of an observer finding themselves in a particular part of the multiverse, you have to specify what counts as observer. The EAGLE collaboration explains:
“We might wonder whether any complex life form counts as an observer (an ant?), or whether we need to see evidence of communication (a dolphin?), or active observation of the universe at large (an astronomer?). Our model does not contain anything as detailed as ants, dolphins or astronomers, so we are unable to make such a fine distinction anyway.”
But even after settling the question whether dolphins merit observer-status, a multiverse per se doesn’t allow you to calculate the probability for finding this or that universe. For this you need additional information: a probability distribution or “measure” on the multiverse. And this is where the real problem begins. If the probability of finding yourself in a universe like ours is small you may think that disfavors the multiverse hypothesis. But it doesn’t: It merely disfavors the probability distribution, not the multiverse itself.
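
To see how much hangs on that choice, here is a toy estimate in Python (again my own, not from the EAGLE paper): weight universes by a crude stand-in for the fraction of matter that ends up in galaxies, and ask how probable a cosmological constant at most as large as ours is under two different measures:

import numpy as np

rng = np.random.default_rng(3)
N = 1_000_000
lam_obs = 1.0  # the observed cosmological constant, in arbitrary units

def galaxy_weight(lam):
    # crude stand-in for "fraction of matter that ends up in galaxies":
    # close to 1 for small lam, dropping steeply above ~100 x the observed value
    return 1.0 / (1.0 + (lam / (100.0 * lam_obs))**4)

def prob_lambda_at_most_ours(lam_samples):
    w = galaxy_weight(lam_samples)
    return w[lam_samples <= lam_obs].sum() / w.sum()

flat_measure = rng.uniform(0.0, 1000.0 * lam_obs, N)       # measure 1: flat in Lambda
log_measure = 10.0 ** rng.uniform(-3.0, 3.0, N) * lam_obs  # measure 2: flat in log(Lambda)

print("flat measure:    ", prob_lambda_at_most_ours(flat_measure))  # below 1 percent
print("log-flat measure:", prob_lambda_at_most_ours(log_measure))   # about 60 percent

Same ensemble of universes, same weighting by galaxies, but whether our cosmological constant comes out as typical or as a rare oddball is decided by the measure – and the measure is exactly the ingredient nobody can derive.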

The EAGLE collaboration elaborates on the conundrum:
“What would it mean to apply two different measures to this model, to derive two different predictions? How could all the physical facts be the same, and yet the predictions of the model be different in the two cases? What is the measure about, if not the universe? Is it just our own subjective opinion? In that case, you can save yourself all the bother of calculating probabilities by having an opinion about your multiverse model directly.”
Indeed. You can even save yourself the bother of having a multiverse to begin with because it doesn’t explain any observation that a single universe wouldn’t also explain.

The authors eventually find that some probability distributions make our universe more, others less probable. Not that you need a computer cluster for that insight. Still, I guess we should applaud the EAGLE people for trying. In their paper, they conclude: “A specific multiverse model must justify its measure on its own terms, since the freedom to choose a measure is simply the freedom to choose predictions ad hoc.”

But of course a model can never justify itself. The only way to justify a physical model is that it fits observation. And if you make ad hoc choices to fit observations, you may as well just choose the cosmological constant to be what we observe and be done with it.

In summary, the paper finds that the multiverse hypothesis isn’t falsifiable. If you paid any attention to the multiverse debate, that’s hardly surprising, but it is interesting to see astrophysicists attempting to squeeze some science out of it.

I think the EAGLE study makes a useful contribution to the literature. Multiverse proponents have so far argued that what they do is science because some versions of the multiverse are testable in our universe, for example by searching for entanglement between universes, or for evidence that our universe has collided with another one in the past.

It is correct that some multiverse types are testable, but to the extent that they have been tested, they have been ruled out. This, of course, has not ruled out the multiverse per se, because there are still infinitely many types of multiverses left. For those, the only thing you can do is make probabilistic arguments. The EAGLE paper now highlights that these can’t be falsified either.

I hope that showcasing the practical problem, as the EAGLE paper does, will help clarify the unscientific basis of the multiverse hypothesis.

Let me be clear that the multiverse is a fringe idea in a small part of the physics community. Compared to the troubled scientific methodologies in some parts of particle physics and cosmology, multiverse madness is a minor pest. No, the major problem with the multiverse is its popularity outside of physics. Physicists from Brian Greene to Leonard Susskind to Andrei Linde have publicly spoken about the multiverse as if it were best scientific practice. And that well-known physicists pass the multiverse off as science isn’t merely annoying, it actively damages the reputation of science. A prominent example of the damage that can result comes from Ben Carson, a 2015 Republican presidential candidate.

Carson is a retired neurosurgeon who doesn’t know much physics, but what he knows he seems to have learned from multiverse enthusiasts. On September 22, 2015, Carson gave a speech at a Baptist school in Ohio, informing his audience that “science is not always correct,” and then went on to justify his science skepticism by making fun of the multiverse:
“And then they go to the probability theory, and they say “but if there’s enough big bangs over a long enough period of time, one of them will be the perfect big bang and everything will be perfectly organized.””
In an earlier speech he cheerfully added: “I mean, you want to talk about fairy tales? This is amazing.”

Now, Carson has misunderstood much of elementary thermodynamics and cosmology, and I have no idea why he thinks he’s even qualified to give speeches about physics. But really this isn’t the point. I don’t expect neurosurgeons to be experts in the foundations of physics and I hope Carson’s audience doesn’t expect that either. Point is, he shows what happens when scientists mix fact with fiction: Non-experts throw out both together.

In his speech, Carson goes on: “I then say to them, look, I’m not going to criticize you. You have a lot more faith than I have… I give you credit for that. But I’m not going to denigrate you because of your faith and you shouldn’t denigrate me for mine.”

And I’m with him on that. No one should be denigrated for what they believe in. If you want to believe in the existence of infinitely many universes with infinitely many copies of yourself, that’s all fine with me. But please don’t pass it off as science.


If you want to know more about the conflation between faith and knowledge in theoretical physics, read my book “Lost in Math: How Beauty Leads Physics Astray.”

Friday, June 15, 2018

Physicists are very predictable

I have a new paper on the arXiv, which came out of a collaboration with Tobias Mistele and Tom Price. We fed a neural network with data about the publication activity of physicists and tried to make a “fake” prediction, for which we used data from the years 1996 up to 2008 to predict the next ten years. Data come from the arXiv via the Open Archives Initiative.

To train the network, we took a random sample of authors and asked the network to predict these authors’ publication data. In each cycle the network learned how good or bad its prediction was and then tried to further improve it.

Concretely, we trained the network to predict the h-index, a measure of the citations a researcher has accumulated. We didn’t use this number because we think it’s particularly important; we used it simply because other groups have previously studied it with neural networks in disciplines other than physics. Looking at the h-index, therefore, allowed us to compare our results with those of the other groups.
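
In case you’re not familiar with it: the h-index is the largest number h such that an author has h papers with at least h citations each. Computing it from a list of citation counts takes only a few lines (this is just the standard definition, not code from our paper):

def h_index(citations):
    # largest h such that the author has h papers with at least h citations each
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # prints 4
print(h_index([50, 3, 2, 2]))     # prints 2 -- many citations on one paper do not help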

After completing the training, we asked how well the network can predict the citations accumulated by authors that were not in the training group. The common way to quantify the goodness of such a prediction is with the coefficient of determination, R2. The higher the coefficient of determination, the stronger the correlation of the prediction with the actual number, hence the better the prediction. The below figure shows the result of our neural network, compared with some other predictors. As you can see we did pretty well!

The blue (solid) curve labelled “Net” shows how good the prediction of our network is for extrapolating the h-index over the number of years. The other two curves use simpler predictors on the same data.

We found a coefficient of determination of 0.85 for a prediction over ten years. Earlier studies based on machine learning found 0.48 in the life sciences and 0.72 in the computer sciences.

But admittedly the coefficient of determination doesn’t tell you all that much unless you’re a statistician. So for illustration, here are some example trajectories that show the network’s prediction compared with the actual trend (more examples in the paper).

However, the fact that our prediction is better than the earlier ones is only partly due to our network’s performance. It turns out our data are also intrinsically easier to predict, even with simple measures. You can, for example, just linearly extrapolate the h-index, and while that prediction isn’t as good as that of the network, it is still better than the prediction from the other disciplines. You see this in the figure I showed you above for the coefficient of determination. Used on the arXiv data, even the simple predictors achieve something like 0.75.
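
For illustration, here is what such a simple baseline looks like on made-up career trajectories (synthetic data, not our arXiv sample): fit a straight line to each author’s past h-index, extrapolate ten years ahead, and score all predictions with the coefficient of determination:

import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_authors, past, future = 500, 12, 10  # years of history, years to extrapolate

# fake h-index histories: roughly linear growth with author-specific slope plus noise
slopes = rng.uniform(0.2, 2.0, n_authors)
years = np.arange(past + future)
h = np.maximum(0.0, slopes[:, None] * years + rng.normal(0.0, 1.0, (n_authors, past + future)))

true_future = h[:, -1]
predicted = []
for history in h[:, :past]:
    slope, intercept = np.polyfit(np.arange(past), history, 1)  # straight line through the past
    predicted.append(intercept + slope * (past + future - 1))   # extrapolate to the final year

print("R^2 of the linear extrapolation:", round(r2_score(true_future, predicted), 2))

On these idealized trajectories the straight line does almost perfectly; the interesting point about the real arXiv data is that this kind of baseline still reaches about 0.75.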

Why that is so, we don’t know. One possible reason could be that the sub-disciplines of physics are more compartmentalized and researchers often stay in the fields that they started out with. Or, as Nima Arkani-Hamed put it when I interviewed him, “everybody does the analytic continuation of what they’ve been doing for their PhD”. (Srsly, the book is fun, you don’t want to miss it.) In this case you establish a reputation early on and your colleagues know what to expect from you. It seems plausible to me that in such highly specialized communities it would be easier to extrapolate citations than in more mixed-up communities. But really this is just speculation; the data don’t tell us that.

Having said this, by and large the network predictions are scarily good. And that’s even though our data is woefully incomplete. We cannot presently, for example, include any papers that are not on the arXiv. Now, in some categories, like hep-th, pretty much all papers are on the arXiv. But in other categories that isn’t the case. So we are simply missing information about what researchers are doing. We also have the usual problem of identifying authors by their names, and haven’t always been able to find the journal in which a paper was published.

Now, if you allow me to extrapolate the present situation, data will become better and more complete. Also the author-identification problem will, hopefully, be resolved at some point. And this means that the predictivity of neural networks chewing on this data is likely to increase some more.

Of course we did not actually make future predictions in the present paper, because then we wouldn’t have been able to quantify how good the prediction was. But we could now go and train the network with data up to 2018 and extrapolate up to 2028. And I predict it won’t be long until such extrapolations of scientists’ research careers are used in hiring and funding decisions. Sounds scary?

Oh, I know, many of you are now dying to see the extrapolation of their own publishing history. I haven’t seen mine. (Really I haven’t. We treat the authors as anonymous numbers.) But (if I can get funding for it) we will make these predictions publicly available in the coming year. If we don’t, rest assured someone else will. And in this case it might end up being proprietary software.

My personal conclusion from this study is that it’s about time we think about how to deal with personalized predictors for research activity.

Tuesday, June 12, 2018

Lost in Math: Out Now.

Today is the official publication date of my book “Lost in Math: How Beauty Leads Physics Astray.” There’s an interview with me in the current issue of “Der Spiegel” (in German) with a fancy photo, and an excerpt at Scientific American.

In the book I don’t say much about myself or my own research. I felt that was both superfluous and not particularly interesting. However, a lot of people have asked me about a comment I made in passing in an earlier blogpost: “By writing [this book], I waived my hopes of ever getting tenure.” Even the Spiegel guy who interviewed me asked about this! So I feel like I should add some explanation here to prevent misunderstandings. I hope you excuse that this will be somewhat more personal than my usual blogposts.

I am not tenured and I do not have a tenure-track position, so it’s not like someone threatened me. I presently have a temporary contract which will run out next year. What I should be doing right now is applying for faculty positions. Now imagine you work at some institution which has a group in my research area. Everyone is happily producing papers in record numbers, but I go around and say this is a waste of money. Would you give me a job? You probably wouldn’t. I probably wouldn’t give me a job either.

That’s what prompted my remark, and I think it is a realistic assessment. But please note that the context in which I wrote this wasn’t a sudden bout of self-pity, it was to report a decision I made years ago.

I know you only get to see the results now, but I sold the book proposal in 2015. In the years prior to this, I was shortlisted for some faculty positions. In the end that didn’t pan out, but the interviews prompted me to imagine the scenario in which I actually got the job. And if I was being honest with myself, I didn’t like the scenario.

I have never been an easy fit to academia. I guess I was hoping I’d grow into it, but with time my fit has only become more uneasy. At some point I simply concluded I have had enough of this nonsense. I don’t want to be associated with a community which wastes tax-money because its practitioners think they are morally and intellectually so superior that they cannot possibly be affected by cognitive biases. You only have to read the comments on this blog to witness the origin of the problem, as with commenters who work in the field laughing off the idea that their objectivity can possibly be affected by working in echo-chambers. I can’t even.

As to what I’ll do when my contract runs out, I don’t know. As everyone who has written popular science books will confirm, you don’t get rich from it. The contract with Basic Books would never have paid for the actual working time, and that was before my agent got his share and the tax took its bite. (And while I am already publicly answering questions about my income, I also frequently get asked how much “money” I make with the ads on this blog. It’s about one dollar a day. Really the ads are only there so I don’t feel like I am writing here entirely for nothing.)

What typically happens when I write about my job situation is that everyone offers me advice. This is very kind, but I assure you I am not writing this because I am asking for help. I will be fine, do not worry about me. Yes, I don’t know what I’ll do next year, but something will come to my mind.

What needs help isn’t me, but academia: The current organization amplifies rather than limits the pressure to work on popular and productive topics. If you want to be part of the solution, the best starting point is to read my book.

Thanks for listening. And if you still haven’t had enough of me, Tam Hunt has an interview with me up at Medium. You can leave comments on this interview here.

More info on the official book website: lostinmathbook.com

Monday, June 11, 2018

Are the laws of nature beautiful? (2nd book trailer)

Here is the other video trailer for my book "Lost in Math: How Beauty Leads Physics Astray". 



Since I have been asked repeatedly, let me emphasize again that the book is aimed at non-experts, or "the interested lay-reader" as they say. You do not need to bring any background knowledge in math or physics and, no, there are no equations in the book, just a few numbers. It's really about the question of what we mean by "beautiful theories" and how that quest for beauty affects scientists' research interests. You will understand that without equations.

The book has meanwhile been read by several non-physicists and none of them reported great levels of frustration, so I have reason to think I roughly aimed at the right level of technical detail.

Having said that, the book should also be of interest to you if you are a physicist, not because I explain what the standard model is, but because you will get to hear what some top people in the field think about the current situation. (And I swear I was nice to them. My reputation is far worse than I am.)

You’ll find a table of contents, a list of the people I interviewed, and transcripts of the video trailers on the book website.

Saturday, June 09, 2018

Video Trailer for "Lost in Math"

I’ve been told that one now does video trailers for books and so here’s me explaining what led me to write the book.

Friday, June 08, 2018

Science Magazine had my book reviewed, and it’s glorious, glorious.

Science Magazine had my book “Lost in Math” reviewed by Dr. Djuna Lize Croon, a postdoctoral associate at the Department of Physics at Dartmouth College, in New Hampshire, USA. Dr. Croon has worked on the very research topics that my book exposes as mathematical fantasies, such as “extra natural inflation” or “holographic composite Higgs models,” so choosing her to review my book is an interesting move for sure.

Dr. Croon does not disappoint. After just having read a whole book that explains how scientists fail to acknowledge that their opinions are influenced by the communities they are part of, Dr. Croon begins her review by quoting an anonymous Facebook friend who denigrates me as a blogger and tells Dr. Croon to dislike my book because I am not “offering solutions.” In her review, then, Dr. Croon reports being shocked to find that I disagree with her scientific heroes, dislikes that I put forward my own opinions, and then promptly arrives at the same conclusion that her Facebook friend kindly offered beforehand.

The complaint that I merely criticize my community without making recommendations for improvement is particularly interesting because, to begin with, it’s wrong. I do spell out very clearly in the book that I think theoretical physicists in the foundations should focus on mathematical consistency and on making contact with experiment rather than blathering about beauty. I also say concretely which topics I think are most promising, though I warn the reader that of course I too am biased and they should come to their own conclusions.

Even leaving aside that I do offer recommendations for improvement, I don’t know why it’s my task to come up with something else for those people to do. If they can’t come up with something else themselves, maybe we should just stop throwing money at them?

On a more technical note, I find it depressing that Dr. Croon in her review writes that naturalness has “a statistical meaning” even though I explain like a dozen times throughout the book that you can’t speak of probabilities without having a probability distribution. We have only this set of laws of nature. We have no statistics from which we could infer a likelihood of getting exactly these laws.

In summary, this review does an awesome job highlighting the very problems that my book is about.

Update June 17th: Remarkably enough, the editors at Science decided to remove the Facebook quote from the review.

Physicist concludes there are no laws of physics.

It was June 4th, 2018, when Robbert Dijkgraaf, director of the world-renowned Princeton Institute for Advanced Study, announced his breakthrough insight. After decades of investigating string theory, Dijkgraaf has concluded that there are no laws of physics.

Guess that’s it, then, folks. Don’t forget to turn off the light when you close the lab door for the last time.

Dijkgraaf knows what he is talking about. “Once you have understood the language of which [the cosmos] is written, it is extremely elegant, beautiful, powerful and natural,” he explained already in 2015, “The universe wants to be understood and that’s why we are all driven by this urge to find an all-encompassing theory.”

This urge has driven Dijkgraaf and many of his colleagues to pursue string theory, which they originally hoped would give rise to the unique theory of everything. That didn’t quite pan out though, and not surprisingly so: The idea of a unique theory is a non-starter. Whether a theory is unique or not depends on what you require of it. The more requirements, the better specified the theory. And whether a theory is the unique one that fulfils these or those requirements tells you nothing about whether it actually correctly describes nature.

But in the last two decades it has become popular in the foundations of physics to no longer require a theory to describe our universe. Without that requirement, then, theories contain an infinite number of universes that are collectively referred to as “the multiverse.” Theorists like this idea because having fewer requirements makes their theory simpler, and thus more beautiful. The resulting theory then uniquely explains nothing.

Of course if you have a theory with a multiverse and want to describe our universe, you have to add back the requirements you discarded earlier. That’s why no one who actually works with data ever starts with a multiverse – it’s utterly useless; Occam’s razor safely shaves it off. The multiverse doesn’t gain you anything, except possibly the ability to make headlines and speak of “paradigm changes.”

In string theory in particular, to describe our universe we’d need to specify just what happens with the additional dimensions of space that the theory needs. String theorists don’t like to make this specification because they don’t know how to make it. So instead they say that since string theory offers so many options for how to make a universe, one of them will probably fit the bill. And maybe one day they will find a meta-law that selects our universe.

Maybe they will. But until then the rest of us will work under the assumption that there are laws of physics. So far, it’s worked quite well, thank you very much.

If you want to know more about what bizarre ideas theoretical physicists have lately come up with, read my book “Lost in Math.”

Sunday, June 03, 2018

Book Update: Books are printed!

Lara.
I had just returned from my trip to Dublin when the door rang and the UPS man dumped two big boxes on our doorstep. My husband has a habit of ordering books by the dozens, so my first thought was that this time he’d really outdone himself. Alas, the UPS guy pointed out, the boxes were addressed to me.

I signed, feeling guilty for having forgotten I ordered something from Lebanon, that being the origin of the parcels. But when I cut the tape and opened the boxes I found – drumrolls please – 25 copies of “Lost in Math”. Turns out my publisher has their books printed in Lebanon.

I had gotten neither galleys nor review copies, so that was the first time I actually saw The-Damned-Book, as it’s been referred to in our household for the past three years. And The-Damned-Book is finally, FINALLY, a real book!

The cover looks much better in print than it does in the digital version because it has some glossy and some matte parts and, well, at least two seven-year-old girls agree that it’s a pretty book and also mommy’s name is on the cover and a mommy photo in the back, and that’s about as far as their interest went.

Gloria.
I’m so glad this is done. When I signed the contract in 2015, I had no idea how nerve-wracking it would be to wait for the publication. In hindsight, it was a totally nutty idea to base the whole premise of The-Damned-Book on the absence of progress in the foundations of physics when such progress could happen literally any day. For three years now I’ve been holding my breath every time there was a statistical fluctuation in the data.

But now – with little more than a week to go until publication – it seems exceedingly unlikely anything will change about the story I am telling: Fact is, theorists in the foundations of physics have been spectacularly unsuccessful with their predictions for more than 30 years now. (The neutrino-anomaly I recently wrote about wasn’t a prediction, so even if it holds up it’s not something you could credit theorists with.)

The story here isn’t that theorists have been unsuccessful per se, but that they’ve been unsuccessful and yet don’t change their methods. That’s especially perplexing if you know that these methods rely on arguments from beauty even though everyone agrees that beauty isn’t a scientific criterion. Parallels to the continued use of flawed statistical methods in psychology and the life sciences are obvious. There too, everyone kept using bad methods that were known to be bad, just because it was the state of the art. And that’s the real story here: Scientists get stuck on unsuccessful methods.

Some people have voiced their disapproval that I went and argued with some prominent people in the field without them knowing they’d end up in my book. First, I recommend you read the book before you disapprove of what you believe it contains. I think I have treated everyone politely and respectfully.

Second, it should go without saying but apparently doesn’t, that everyone who I interviewed signed an interview waiver, transferring all rights for everything they told me to my publisher in all translations and all formats, globally and for all eternity, Amen. They knew what they were being interviewed for. I’m not an undercover agent, and my opinions about arguments from beauty are no secret.

Furthermore, everyone I interviewed got to see and approved a transcript with the exact wording that appears in the book. Though I later removed some parts entirely because it was just too much material. (And no, I cannot reuse it elsewhere because that was indeed not what they agreed on.) I had to replace a few technical terms here or there that most readers wouldn’t have understood, but these instances are marked in the text.

So, I think I did my best to accurately represent their opinions, and if anyone comes off looking like an idiot it should be me.

Most importantly though, the very purpose of these interviews is to offer the reader a variety of viewpoints rather than merely my own. So of course I disagree with the people I spoke with here and there – because who’d read a dialogue in which two people constantly agree with each other?

In any case, everything’s been said and done and now I can only wait and hope. This isn’t a problem that physicists can solve themselves. The whole organization of academic research today acts against major changes in methodology because that would result in a sudden and drastic decrease of publishing activity. The only way I can see change come about is public pressure. We have had enough talk about elegant universes and beautiful theories.

If you still haven’t made up your mind whether to buy the book, we now have a website which contains a table of contents and links to reviews and such, and Amazon lets you “Look Inside” the book. Two video trailers will be coming next week. Silicon Republic writes about the book here and Dan Falk has a piece at NBC titled “Why some scientists say physics has gone off the rails.”